Adaptive Human Action Recognition With an Evolving Bag of Key Poses

Please use this identifier to cite or link to this item: http://hdl.handle.net/10045/37068
Item information
Title: Adaptive Human Action Recognition With an Evolving Bag of Key Poses
Author(s): Chaaraoui, Alexandros Andre | Flórez-Revuelta, Francisco
Research group(s): Informática Industrial y Redes de Computadores
Center, Department or Service: Universidad de Alicante. Departamento de Tecnología Informática y Computación
Keywords: Evolutionary computing and genetic algorithms | Feature evaluation and selection | Human computer interaction | Vision and Scene Understanding
Knowledge area(s): Arquitectura y Tecnología de Computadores
Issue date: 7-Apr-2014
Publisher: IEEE
Bibliographic citation: IEEE Transactions on Autonomous Mental Development. 2014. doi:10.1109/TAMD.2014.2315676
Abstract: Vision-based human action recognition makes it possible to detect and understand meaningful human motion, enabling advanced human-computer interaction, among other applications. In dynamic environments, adaptive methods are required to cope with changing scenario characteristics. In human-robot interaction in particular, smooth interaction between humans and robots is only possible if the latter are able to evolve and adapt to the changing nature of the scenarios. In this paper, an adaptive vision-based human action recognition method is proposed. By means of an evolutionary optimisation method, adaptive and incremental learning of human actions is supported. Through an evolving bag of key poses, which models the learnt actions over time, the current learning memory is developed to recognise increasingly more actions or actors. The evolutionary method selects the optimal subset of training instances, features and parameter values for each learning phase, and handles the evolution of the model. The experimentation shows that our proposal successfully adapts to new actions or actors by rearranging the learnt model. Stable and accurate results have been obtained on four publicly available RGB and RGB-D datasets, demonstrating the method's robustness and applicability.
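The evolutionary selection described in the abstract — choosing a subset of training instances and features that maximises recognition performance at each learning phase — can be illustrated with a minimal sketch. This is not the authors' implementation: the toy data, the binary instance/feature masks, the 1-NN fitness function, and the simple mutation-based search loop are all assumptions standing in for the paper's pose descriptors and bag-of-key-poses model.

```python
import random

random.seed(0)

# Toy dataset: hypothetical 4-D feature vectors standing in for pose
# descriptors, with two well-separated action classes (labels 0 and 1).
def make_sample(label):
    center = [label * 5.0] * 4
    return ([c + random.uniform(-1, 1) for c in center], label)

train = [make_sample(l) for l in (0, 1) for _ in range(10)]
val = [make_sample(l) for l in (0, 1) for _ in range(5)]

def accuracy(inst_mask, feat_mask):
    """Fitness: 1-NN accuracy on the validation set, using only the
    selected training instances and feature dimensions."""
    chosen = [train[i] for i, on in enumerate(inst_mask) if on]
    feats = [j for j, on in enumerate(feat_mask) if on]
    if not chosen or not feats:
        return 0.0
    correct = 0
    for x, y in val:
        nearest = min(chosen,
                      key=lambda s: sum((s[0][j] - x[j]) ** 2 for j in feats))
        correct += nearest[1] == y
    return correct / len(val)

def mutate(mask):
    """Flip one random bit of a binary selection mask."""
    m = list(mask)
    i = random.randrange(len(m))
    m[i] = 1 - m[i]
    return m

# Evolutionary loop: start with everything selected, keep mutations
# that do not degrade fitness (a simple (1+1)-style strategy).
inst, feat = [1] * len(train), [1] * 4
best_fit = accuracy(inst, feat)
for _ in range(200):
    cand_i, cand_f = mutate(inst), mutate(feat)
    fit = accuracy(cand_i, cand_f)
    if fit >= best_fit:
        inst, feat, best_fit = cand_i, cand_f, fit
```

In the paper the fitness would instead be computed against the evolving bag of key poses, and parameter values are optimised alongside the instance and feature subsets; the loop structure above only conveys the selection idea.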
Sponsor(s): This work has been partially supported by the European Commission under project “caring4U - A study on people activity in private spaces: towards a multisensor network that meets privacy requirements” (PIEF-GA-2010-274649) and by the Spanish Ministry of Science and Innovation under project “Sistema de visión para la monitorización de la actividad de la vida diaria en el hogar” (TIN2010-20510-C04-02). Alexandros Andre Chaaraoui acknowledges financial support by the Conselleria d’Educació, Formació i Ocupació of the Generalitat Valenciana (fellowship ACIF/2011/160).
URI: http://hdl.handle.net/10045/37068
ISSN: 1943-0604 (Print) | 1943-0612 (Online)
DOI: 10.1109/TAMD.2014.2315676
Language: eng
Type: info:eu-repo/semantics/article
Rights: © Copyright 2014 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.
Peer reviewed: yes
Publisher's version: http://dx.doi.org/10.1109/TAMD.2014.2315676
Appears in collections: INV - I2RC - Artículos de Revistas
INV - AmI4AHA - Artículos de Revistas

Files in this item:
File: 2014_Chaaraoui_Florez_IEEE-TAMD.pdf | Description: Reviewed version (open access) | Size: 7.41 MB | Format: Adobe PDF


All documents in RUA are protected by copyright. Some rights reserved.