3D Visual Data-Driven Spatiotemporal Deformations for Non-Rigid Object Grasping Using Robot Hands

Please use this identifier to cite or link to this item: http://hdl.handle.net/10045/54931
Item information
Title: 3D Visual Data-Driven Spatiotemporal Deformations for Non-Rigid Object Grasping Using Robot Hands
Author(s): Mateo Agulló, Carlos | Gil, Pablo | Torres, Fernando
Research group(s): Automática, Robótica y Visión Artificial
Center, Department or Service: Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal | Universidad de Alicante. Instituto Universitario de Investigación Informática
Keywords: Visual perception | Vision algorithms for grasping | 3D-object recognition | Sensing for robot manipulation
Knowledge area(s): Ingeniería de Sistemas y Automática
Publication date: 5 May 2016
Publisher: MDPI
Bibliographic citation: Mateo CM, Gil P, Torres F. 3D Visual Data-Driven Spatiotemporal Deformations for Non-Rigid Object Grasping Using Robot Hands. Sensors. 2016; 16(5):640. doi:10.3390/s16050640
Abstract: Sensing techniques are important for solving problems of uncertainty inherent to intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system for complex grasping tasks, supporting a robot controller when other sensor systems, such as tactile and force, are unable to obtain data relevant to the grasping manipulation task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and the object when neither force nor pressure data are available. The approach is also used to measure changes in the shape of an object’s surfaces, allowing us to detect deformations caused by inappropriate pressure applied by the hand’s fingers. Tests were carried out on grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when a surface deformation is detected. In comparison with other methods, the results show that our visual pipeline does not require deformation models of objects and materials, and that it works in real time with both planar and 3D household objects. In addition, our method does not depend on the pose of the robot hand, because the location of the reference system is computed by recognizing a pattern placed on the robot forearm. The presented experiments demonstrate that the proposed method achieves good monitoring of grasping tasks with several objects and different grasping configurations in indoor environments.
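The abstract describes measuring frame-to-frame changes in the grasped object's surface from RGBD data and sending an event message to the robot controller when a deformation is detected. The snippet below is a minimal, hypothetical sketch of that idea, assuming the object surface has already been segmented into a 3D point cloud per frame; the function names, the brute-force nearest-neighbor matching, and the 4 mm threshold are illustrative assumptions, not the authors' actual pipeline.

# Minimal sketch: frame-to-frame surface-deformation check on segmented RGBD point clouds.
# All names and thresholds are illustrative assumptions, not the paper's implementation.
import numpy as np

def nearest_neighbor_distances(prev_pts: np.ndarray, curr_pts: np.ndarray) -> np.ndarray:
    """For each point in curr_pts (N x 3), distance to its closest point in prev_pts (M x 3)."""
    # Brute force for clarity; a KD-tree (e.g., scipy.spatial.cKDTree) would be used in practice.
    diffs = curr_pts[:, None, :] - prev_pts[None, :, :]   # N x M x 3 pairwise differences
    return np.sqrt((diffs ** 2).sum(axis=2)).min(axis=1)  # N distances

def deformation_event(prev_pts: np.ndarray, curr_pts: np.ndarray,
                      threshold_m: float = 0.004) -> bool:
    """Return True (an 'event pulse') when mean surface displacement exceeds the threshold."""
    d = nearest_neighbor_distances(prev_pts, curr_pts)
    return bool(d.mean() > threshold_m)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame_prev = rng.uniform(0.0, 0.1, size=(500, 3))      # segmented object surface, frame t-1
    frame_curr = frame_prev + np.array([0.0, 0.0, 0.006])  # frame t, surface pushed in by 6 mm
    if deformation_event(frame_prev, frame_curr):
        print("deformation detected: notify robot controller")  # stands in for the event message

In a real pipeline the per-frame clouds would come from the depth sensor after segmenting out the hand and background, and the threshold would be tuned to the sensor's depth noise.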
Sponsor(s): The research leading to these results has received funding from the Spanish Government and European FEDER funds (DPI2015-68087R), the Valencia Regional Government (PROMETEO/2013/085), as well as the pre-doctoral grant BES-2013-062864.
URI: http://hdl.handle.net/10045/54931
ISSN: 1424-8220
DOI: 10.3390/s16050640
Language: eng
Type: info:eu-repo/semantics/article
Rights: © 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).
Peer reviewed: yes
Publisher's version: http://dx.doi.org/10.3390/s16050640
Appears in collections: INV - AUROVA - Artículos de Revistas

Files in this item:
File: 2016_Mateo_etal_Sensors.pdf | Size: 8.41 MB | Format: Adobe PDF


This item is licensed under a Creative Commons License.