A Comprehensive Study on Pain Assessment from Multimodal Sensor Data

Please use this identifier to cite or link to this item: http://hdl.handle.net/10045/139249
Item information
Title: A Comprehensive Study on Pain Assessment from Multimodal Sensor Data
Author(s): Benavent-Lledó, Manuel | Mulero Pérez, David | Ortiz Pérez, David | Rodríguez Juan, Javier | Berenguer-Agullo, Adrian | Psarrou, Alexandra | Garcia-Rodriguez, Jose
Research group(s) or GITE: Arquitecturas Inteligentes Aplicadas (AIA)
Center, department or service: Universidad de Alicante. Departamento de Tecnología Informática y Computación
Keywords: Pain assessment | Computer vision | Deep learning | Sensor data | Signal processing | Pattern recognition
Publication date: 7-Dec-2023
Publisher: MDPI
Bibliographic citation: Benavent-Lledo M, Mulero-Pérez D, Ortiz-Perez D, Rodriguez-Juan J, Berenguer-Agullo A, Psarrou A, Garcia-Rodriguez J. A Comprehensive Study on Pain Assessment from Multimodal Sensor Data. Sensors. 2023; 23(24):9675. https://doi.org/10.3390/s23249675
Abstract: Pain assessment is a critical aspect of healthcare, influencing timely interventions and patient well-being. Traditional pain evaluation methods often rely on subjective patient reports, leading to inaccuracies and disparities in treatment, especially for patients who have difficulty communicating due to cognitive impairments. Our contributions are three-fold. First, we analyze the correlations in the data extracted from biomedical sensors. Then, we use state-of-the-art computer vision techniques to analyze videos of the patients' facial expressions, both per-frame and using the temporal context. We compare these approaches and provide a baseline for pain assessment methods on two popular benchmarks: the UNBC-McMaster Shoulder Pain Expression Archive Database and the BioVid Heat Pain Database. Using single frames from the UNBC-McMaster dataset and state-of-the-art computer vision techniques such as Transformer-based architectures for vision tasks, we achieved over 96% accuracy and over 94% for the F1 score, recall, and precision metrics in pain estimation. Finally, drawing on the conclusions of the study, we discuss future lines of work in this area.
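For context, the per-frame approach described in the abstract can be illustrated with a minimal sketch: a pretrained Vision Transformer fine-tuned as a frame-level pain/no-pain classifier. This is not the authors' published implementation; the frame directory layout, the binary label scheme, and the hyperparameters below are illustrative assumptions only.

# Minimal sketch (not the authors' implementation): fine-tune a pretrained
# Vision Transformer for per-frame binary pain classification.
# Assumes frames have already been extracted and organized as
#   frames/train/{pain,no_pain}/*.png   (hypothetical layout)
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.models import vit_b_16, ViT_B_16_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"

weights = ViT_B_16_Weights.IMAGENET1K_V1
model = vit_b_16(weights=weights)
# Replace the classification head with a 2-class output (pain / no pain).
model.heads.head = nn.Linear(model.heads.head.in_features, 2)
model = model.to(device)

# Use the preprocessing pipeline that matches the pretrained weights.
preprocess = weights.transforms()
train_set = datasets.ImageFolder("frames/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # illustrative number of epochs
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

The temporal variant discussed in the paper would additionally aggregate such per-frame representations across a video clip; that step is omitted here.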
Sponsor(s): We would like to thank the "A way of making Europe" European Regional Development Fund (ERDF) and MCIN/AEI/10.13039/501100011033 for supporting this work under the "CHAN-TWIN" project (grant TED2021-130890B-C21). HORIZON-MSCA-2021-SE-0 action number: 101086387, REMARKABLE, Rural Environmental Monitoring via ultra wide-ARea networKs and distriButed federated Learning. CIAICO/2022/132 Consolidated group project "AI4Health" funded by the Valencian government, and the International Center for Aging Research (ICAR) funded project "IASISTEM". This work has also been supported by one Spanish national and two regional grants for PhD studies: FPU21/00414, CIACIF/2021/430 and CIACIF/2022/175.
URI: http://hdl.handle.net/10045/139249
ISSN: 1424-8220
DOI: 10.3390/s23249675
Language: eng
Type: info:eu-repo/semantics/article
Rights: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Peer reviewed: yes
Publisher version: https://doi.org/10.3390/s23249675
Appears in collections: Investigaciones financiadas por la UE
INV - AIA - Artículos de Revistas

Files in this item:
File | Description | Size | Format
Benavent-Lledo_etal_2023_Sensors.pdf | | 1.05 MB | Adobe PDF


All documents in RUA are protected by copyright. Some rights reserved.