Evaluation of different chrominance models in the detection and reconstruction of faces and hands using the growing neural gas network

Please use this identifier to cite or link to this item: http://hdl.handle.net/10045/96998
Item information
Title: Evaluation of different chrominance models in the detection and reconstruction of faces and hands using the growing neural gas network
Author(s): Angelopoulou, Anastassia | Garcia-Rodriguez, Jose | Orts-Escolano, Sergio | Kapetanios, Epaminondas | Liang, Xing | Woll, Bencie | Psarrou, Alexandra
Research group(s): Informática Industrial y Redes de Computadores | Robótica y Visión Tridimensional (RoViT)
Center, Department or Service: Universidad de Alicante. Departamento de Tecnología Informática y Computación | Universidad de Alicante. Departamento de Ciencia de la Computación e Inteligencia Artificial
Keywords: Expectation maximisation (EM) algorithm | Colour models | Self-organising networks | Shape modelling
Knowledge area(s): Computer Architecture and Technology
Publication date: November 2019
Publisher: Springer London
Bibliographic citation: Pattern Analysis and Applications. 2019, 22(4): 1667-1685. doi:10.1007/s10044-019-00819-x
Abstract: Physical traits such as the shape of the hand and face can be used for human recognition and identification in video surveillance systems and in biometric authentication smart card systems, as well as in personal health care. However, the accuracy of such systems suffers from illumination changes, unpredictability, and variability in appearance (e.g. occluded faces or hands, cluttered backgrounds, etc.). This work evaluates different statistical and chrominance models in environments with increasingly cluttered backgrounds, where changes in lighting are common and no occlusions are applied, in order to obtain a reliable neural network reconstruction of faces and hands without taking into account the structural and temporal kinematics of the hands. First, a statistical model is used for skin colour segmentation to roughly locate hands and faces. Then, a neural network is used to reconstruct the hands and faces in 3D. For the filtering and the reconstruction we have used the growing neural gas algorithm, which can preserve the topology of an object without restarting the learning process. Experiments were conducted on our own database as well as on four benchmark databases (Stirling's, Alicante, Essex, and Stegmann's) and on normal 2D videos of deaf individuals, which are freely available on the BSL SignBank dataset. Results demonstrate the validity of our system in solving problems of face and hand segmentation and reconstruction under different environmental conditions.
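The skin colour segmentation step described in the abstract can be illustrated with a single-Gaussian chrominance model in the CbCr plane. This is a minimal sketch, not the paper's exact formulation (the paper evaluates several chrominance models and fits mixtures with the EM algorithm); the function names and the Mahalanobis distance threshold are illustrative assumptions:

```python
import numpy as np

def fit_skin_model(skin_pixels_cbcr):
    """Fit a single Gaussian to labelled skin samples in the CbCr plane.

    skin_pixels_cbcr: (N, 2) array of [Cb, Cr] values taken from
    hand-labelled skin regions. Returns the mean vector and the
    inverse covariance matrix of the fitted model.
    """
    mean = skin_pixels_cbcr.mean(axis=0)
    cov = np.cov(skin_pixels_cbcr, rowvar=False)
    return mean, np.linalg.inv(cov)

def skin_mask(cbcr_image, mean, inv_cov, threshold=4.0):
    """Classify each pixel as skin when its squared Mahalanobis distance
    to the skin model falls below the threshold (a tunable assumption).

    cbcr_image: (H, W, 2) array of per-pixel [Cb, Cr] values.
    Returns a boolean (H, W) mask, True where the pixel looks like skin.
    """
    diff = cbcr_image.reshape(-1, 2) - mean
    # Squared Mahalanobis distance d^2 = diff^T * inv_cov * diff per pixel
    d2 = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)
    return (d2 < threshold).reshape(cbcr_image.shape[:2])
```

In a full pipeline, the resulting mask would provide the rough face/hand regions that the growing neural gas network then adapts to; chrominance-only classification is what makes the segmentation relatively robust to the illumination changes discussed in the abstract.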
Sponsor(s): This work has been supported by the Spanish Government Grant TIN2016-76515R, supported with FEDER funds, the University of Alicante Project GRE16-19, the Valencian Government Project GV-2018-022, and the UK Dunhill Medical Trust Grant RPGF1802\37.
URI: http://hdl.handle.net/10045/96998
ISSN: 1433-7541 (Print) | 1433-755X (Online)
DOI: 10.1007/s10044-019-00819-x
Language: eng
Type: info:eu-repo/semantics/article
Rights: © The Author(s) 2019. Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Peer reviewed: yes
Publisher version: https://doi.org/10.1007/s10044-019-00819-x
Appears in collections: INV - AIA - Artículos de Revistas
INV - RoViT - Artículos de Revistas
INV - I2RC - Artículos de Revistas

Files in this item:
File | Size | Format
2019_Angelopoulou_etal_PatternAnalApplic.pdf | 6,53 MB | Adobe PDF


This item is licensed under a Creative Commons License