Vision for Robust Robot Manipulation

Please use this identifier to cite or link to this item: http://hdl.handle.net/10045/90949
Full metadata record
DC Field: Value [Language]
dc.contributor: Robótica y Visión Tridimensional (RoViT) [es_ES]
dc.contributor.author: Martinez-Martin, Ester
dc.contributor.author: Pobil, Angel P. del
dc.contributor.other: Universidad de Alicante. Departamento de Ciencia de la Computación e Inteligencia Artificial [es_ES]
dc.date.accessioned: 2019-04-08T11:25:30Z
dc.date.available: 2019-04-08T11:25:30Z
dc.date.issued: 2019-04-06
dc.identifier.citation: Martinez-Martin E, del Pobil AP. Vision for Robust Robot Manipulation. Sensors. 2019; 19(7):1648. doi:10.3390/s19071648 [es_ES]
dc.identifier.issn: 1424-8220
dc.identifier.uri: http://hdl.handle.net/10045/90949
dc.description.abstract: Advances in Robotics are leading to a new generation of assistant robots working in ordinary, domestic settings. This evolution raises new challenges in the tasks to be accomplished by the robots. This is the case for object manipulation, where the detect-approach-grasp loop requires a robust recovery stage, especially when a held object slides. Several proprioceptive sensors, such as tactile sensors or contact switches, have been developed in recent decades and can be used for this purpose; nevertheless, their implementation may considerably restrict the gripper's flexibility and functionality while increasing its cost and complexity. Alternatively, vision, and in particular depth vision, is an undoubtedly rich source of information that can be used instead. We present an approach based on depth cameras that robustly evaluates manipulation success, continuously reporting any object loss and, consequently, allowing the robot to recover from this situation. First, a Lab-colour segmentation allows the robot to identify potential robot manipulators in the image. Then, depth information is used to detect any edge resulting from contact between two objects. The combination of these techniques allows the robot to accurately detect the presence or absence of contact points between the robot manipulator and a held object. An experimental evaluation in realistic indoor environments supports our approach. [es_ES]
dc.description.sponsorship: This research was partially funded by Ministerio de Economía y Competitividad, grant number DPI2015-69041-R. [es_ES]
dc.language: eng [es_ES]
dc.publisher: MDPI [es_ES]
dc.rights: © 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). [es_ES]
dc.subject: Robotics [es_ES]
dc.subject: Robot manipulation [es_ES]
dc.subject: Depth vision [es_ES]
dc.subject.other: Ciencia de la Computación e Inteligencia Artificial [es_ES]
dc.title: Vision for Robust Robot Manipulation [es_ES]
dc.type: info:eu-repo/semantics/article [es_ES]
dc.peerreviewed: si [es_ES]
dc.identifier.doi: 10.3390/s19071648
dc.relation.publisherversion: https://doi.org/10.3390/s19071648 [es_ES]
dc.rights.accessRights: info:eu-repo/semantics/openAccess [es_ES]
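
The abstract above describes a two-stage check: a Lab-colour segmentation locates the manipulator in the image, and depth information then decides whether the held object is still in contact (a depth edge between the two regions indicates separation). The segmentation step is not reproduced here, but the depth-based contact test can be sketched roughly as follows; the function name, the `max_gap` threshold, and the 4-neighbourhood rule are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

def detect_contact(depth, gripper_mask, object_mask, max_gap=0.01):
    """Rough contact test between a gripper and a held object.

    depth        : HxW array of depth values (e.g. metres) from the sensor
    gripper_mask : HxW boolean mask of pixels segmented as the manipulator
    object_mask  : HxW boolean mask of pixels segmented as the object
    max_gap      : depth difference (same units as `depth`) below which two
                   adjacent pixels are considered in contact (assumed value)

    Two 4-adjacent pixels, one from each mask, count as a contact point when
    their depths differ by less than `max_gap`; a larger difference is
    treated as a depth edge, i.e. the object has separated from the gripper.
    """
    contacts = 0
    for axis in (0, 1):
        for shift in (1, -1):
            neighbour_obj = np.roll(object_mask, shift, axis=axis)
            neighbour_depth = np.roll(depth, shift, axis=axis)
            # np.roll wraps around; clear the wrapped border so pixels on
            # opposite image edges are never treated as neighbours
            edge = 0 if shift == 1 else -1
            if axis == 0:
                neighbour_obj[edge, :] = False
            else:
                neighbour_obj[:, edge] = False
            pairs = gripper_mask & neighbour_obj
            close = np.abs(depth - neighbour_depth) < max_gap
            contacts += int(np.count_nonzero(pairs & close))
    return contacts > 0
```

In a recovery loop, a transition from `True` to `False` would be the signal that the object has been lost and the detect-approach-grasp cycle should restart.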
Appears in Collections:INV - RoViT - Artículos de Revistas

Files in This Item:
File | Description | Size | Format
2019_Martinez_del-Pobil_Sensors.pdf | | 24,17 MB | Adobe PDF


This item is licensed under a Creative Commons License.