Hand gesture recognition using sEMG and deep learning

Please use this identifier to cite or link to this item: http://hdl.handle.net/10045/121510
Item information
Title: Hand gesture recognition using sEMG and deep learning
Authors: Nasri, Nadia
Research Director: Cazorla, Miguel | Orts-Escolano, Sergio
Center, Department or Service: Universidad de Alicante. Instituto Universitario de Investigación Informática
Keywords: 3D Object Recognition | 2D Hand Pose Estimation | Deep Learning | Machine Learning
Knowledge Area: Computer Science and Artificial Intelligence
Date Created: 2021
Issue Date: 2021
Date of defense: 17-Jun-2021
Publisher: Universidad de Alicante
Abstract: In this thesis, a study of two blooming fields of artificial intelligence is carried out. The first part of the document is about 3D object recognition methods. Object recognition, in general, is about giving an intelligent system the ability to understand which objects appear in its input data. Any robot, from industrial robots to social robots, could benefit from such a capability to improve its performance and carry out high-level tasks. In fact, this topic has been widely studied, and some object recognition methods in the state of the art outperform humans in terms of accuracy. Nonetheless, these methods are image-based, that is, they focus on recognizing visual features. This can be a problem in some contexts, since there exist objects that look like other, different objects: for instance, a social robot that recognizes a face in a picture, or an intelligent car that recognizes a pedestrian on a billboard. A potential solution to this issue is to involve three-dimensional data, so that the system focuses not on visual features but on topological features. Thus, in this thesis, a study of 3D object recognition methods is carried out. The approaches proposed in this document, which take advantage of deep learning methods, take point clouds as input and are able to provide the correct category. The proposals were evaluated on a range of public challenges, datasets, and real-life data with high success. The second part of the thesis is about hand pose estimation. This is also an interesting topic, which focuses on providing the hand's kinematics. A range of systems, from human-computer interaction and virtual reality to social robots, could benefit from such a capability: for instance, to interface with a computer and control it with seamless hand gestures, or to interact with a social robot that understands human non-verbal communication. Thus, hand pose estimation approaches are proposed in this document. It is worth noting that the proposals take color images as input and are able to provide 2D and 3D hand poses in the image-plane and Euclidean coordinate frames, respectively. Specifically, each hand pose is encoded as a collection of points representing the joints of a hand, so that the full hand pose can be easily reconstructed from them. The methods are evaluated on custom and public datasets, and integrated into a robotic hand teleoperation application with great success.
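The abstract describes two concrete techniques: classifying objects directly from point clouds, and encoding a hand pose as a set of joint keypoints with 2D image-plane and 3D Euclidean coordinates. The two sketches below are illustrative assumptions only, not the implementations from the thesis; all names (PointCloudClassifier, HandPose, HAND_PARENTS, draw_skeleton) and the 21-joint layout are hypothetical, and PyTorch, NumPy, and OpenCV are assumed merely as convenient tools.

A minimal PointNet-style classifier sketch: a shared per-point MLP, an order-invariant max pooling, and a linear head that outputs one score per object category.

```python
import torch
import torch.nn as nn

class PointCloudClassifier(nn.Module):
    """Toy point-cloud classifier: shared per-point MLP + symmetric max pooling."""

    def __init__(self, num_classes: int, feat_dim: int = 256):
        super().__init__()
        # The same small MLP is applied independently to every (x, y, z) point.
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (batch, num_points, 3)
        per_point = self.point_mlp(points)           # (batch, num_points, feat_dim)
        global_feat = per_point.max(dim=1).values    # pooling is invariant to point order
        return self.head(global_feat)                # (batch, num_classes) category logits

# Usage: classify two clouds of 1024 points each into 10 hypothetical categories.
model = PointCloudClassifier(num_classes=10)
logits = model(torch.randn(2, 1024, 3))
predicted_category = logits.argmax(dim=1)
```

A sketch of the keypoint encoding of a hand pose: each hand is a fixed set of joints, and the full skeleton is reconstructed by connecting every joint to its parent. The 21-joint layout and parent indices follow a common convention (wrist plus four joints per finger) and are an assumption, not taken from the thesis.

```python
from dataclasses import dataclass

import cv2
import numpy as np

# Parent index of each of the 21 joints; -1 marks the wrist (root).
HAND_PARENTS = [-1,                 # 0: wrist
                0, 1, 2, 3,         # 1-4: thumb
                0, 5, 6, 7,         # 5-8: index
                0, 9, 10, 11,       # 9-12: middle
                0, 13, 14, 15,      # 13-16: ring
                0, 17, 18, 19]      # 17-20: little

@dataclass
class HandPose:
    joints_2d: np.ndarray  # (21, 2) pixel coordinates in the image plane
    joints_3d: np.ndarray  # (21, 3) Euclidean coordinates, e.g. in metres

    def bones(self):
        """Yield (parent, child) joint pairs that reconstruct the skeleton."""
        for child, parent in enumerate(HAND_PARENTS):
            if parent >= 0:
                yield parent, child

def draw_skeleton(image: np.ndarray, pose: HandPose) -> np.ndarray:
    """Draw the reconstructed 2D skeleton on an image."""
    for parent, child in pose.bones():
        p = tuple(map(int, pose.joints_2d[parent]))
        c = tuple(map(int, pose.joints_2d[child]))
        cv2.line(image, p, c, (0, 255, 0), 2)
    return image
```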
URI: http://hdl.handle.net/10045/121510
Language: eng
Type: info:eu-repo/semantics/doctoralThesis
Rights: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 License
Appears in Collections: Doctoral theses

Files in This Item:
File: tesis_nadia_nasri.pdf | Size: 14,89 MB | Format: Adobe PDF


Items in RUA are protected by copyright, with all rights reserved, unless otherwise indicated.