Deep learning-based visual control assistant for assembly in Industry 4.0

Please use this identifier to cite or link to this item: http://hdl.handle.net/10045/115218
Item information
Title: Deep learning-based visual control assistant for assembly in Industry 4.0
Author(s): Zamora Hernández, Mauricio Andrés | Castro-Vargas, John Alejandro | Azorin-Lopez, Jorge | Garcia-Rodriguez, Jose
Research group(s): Arquitecturas Inteligentes Aplicadas (AIA)
Center, department, or service: Universidad de Alicante. Departamento de Tecnología Informática y Computación
Keywords: Deep learning | Visual control assistant | Industry | Manufacturing processes | Assembly processes
Knowledge area(s): Arquitectura y Tecnología de Computadores
Publication date: Oct 2021
Publisher: Elsevier
Bibliographic citation: Computers in Industry. 2021, 131: 103485. https://doi.org/10.1016/j.compind.2021.103485
Abstract: Product assembly is a crucial process in manufacturing plants. In Industry 4.0, the range of mass-customized products has expanded, increasing the complexity of the assembly phase. Operators must therefore pay close attention to small details, and this high level of complexity can lead to errors during the manufacturing process. To mitigate this, we propose a novel architecture that evaluates the activities of an operator during manual assembly in a production cell so that errors in the manufacturing process can be identified, thus avoiding low quality in the final product and reducing rework and the waste of raw materials and time. This assessment requires state-of-the-art computer vision techniques, such as deep learning, so that tools, components, and actions can be identified by visual control systems. We develop a deep-learning-based visual control assembly assistant that enables real-time evaluation of the activities in the assembly process so that errors can be identified. We also develop a general-purpose language to describe the actions in assembly processes, which can be used independently of the proposed architecture. Finally, we generate two datasets with annotated data to feed the deep learning methods: the first for the recognition of tools and accessories, and the second for the identification of basic actions in manufacturing processes. To validate the proposed method, a set of experiments is conducted, and high accuracy is obtained.
Funding: This work was funded by the Spanish Government grant PID2019-104818RB-I00 for the MoDeaAS project and grant TIN2017-89069-R for the Tech4Diet project, supported by FEDER funds. It was also supported by a University of Alicante grant for PhD studies, UAFPU2019-13. We would like to thank Nvidia for their generous hardware donations that made these experiments possible.
URI: http://hdl.handle.net/10045/115218
ISSN: 0166-3615 (Print) | 1872-6194 (Online)
DOI: 10.1016/j.compind.2021.103485
Language: eng
Type: info:eu-repo/semantics/article
Rights: © 2021 Elsevier B.V.
Peer reviewed: yes
Publisher's version: https://doi.org/10.1016/j.compind.2021.103485
Appears in collections: INV - AIA - Artículos de Revistas

Files in this item:
File | Description | Size | Format
Zamora-Hernandez_etal_2021_ComputersIndustry_final.pdf | Final version (restricted access) | 4.24 MB | Adobe PDF
Zamora-Hernandez_etal_2021_ComputersIndustry_preprint.pdf | Preprint (open access) | 12.27 MB | Adobe PDF


All documents in RUA are protected by copyright. Some rights reserved.