Incremental Unsupervised Domain-Adversarial Training of Neural Networks

Please use this identifier to cite or link to this item: http://hdl.handle.net/10045/114414
Item information
Title: Incremental Unsupervised Domain-Adversarial Training of Neural Networks
Author(s): Gallego, Antonio-Javier | Calvo-Zaragoza, Jorge | Fisher, Robert B.
Research group(s) or GITE: Reconocimiento de Formas e Inteligencia Artificial
Center, Department or Service: Universidad de Alicante. Departamento de Lenguajes y Sistemas Informáticos
Keywords: Convolutional neural networks (CNNs) | Domain adaptation (DA) | Incremental labeling | Neural networks | Self-labeling | Unsupervised learning
Knowledge area(s): Lenguajes y Sistemas Informáticos
Publication date: 7-Oct-2020
Publisher: IEEE
Bibliographic citation: IEEE Transactions on Neural Networks and Learning Systems. 2021, 32(11): 4864-4878. https://doi.org/10.1109/TNNLS.2020.3025954
Abstract: In the context of supervised statistical learning, it is typically assumed that the training set comes from the same distribution from which the test samples are drawn. When this is not the case, the behavior of the learned model is unpredictable and becomes dependent on the degree of similarity between the training and test distributions. One of the research topics that investigates this scenario is referred to as domain adaptation (DA). Deep neural networks have brought dramatic advances in pattern recognition, which is why there have been many attempts to provide good DA algorithms for these models. Here we take a different avenue and approach the problem from an incremental point of view, where the model is adapted to the new domain iteratively. We use an existing unsupervised domain-adaptation algorithm to identify the target samples for which there is greater confidence about their true label. The output of the model is analyzed in different ways to determine the candidate samples. The selected samples are then added to the source training set by self-labeling, and the process is repeated until all target samples are labeled. This approach implements a form of adversarial training in which, by moving the self-labeled samples from the target to the source set, the DA algorithm is forced to look for new features after each iteration. Our results show a clear improvement over the non-incremental case on several data sets, also outperforming other state-of-the-art DA algorithms.
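The incremental loop the abstract describes can be summarized in pseudocode. The following is a minimal sketch only, assuming a NumPy-style array interface; train_dann and predict_proba are hypothetical placeholders for an unsupervised domain-adversarial trainer (e.g., a DANN-style model) and its class-probability output, and the fixed confidence threshold is just one possible selection policy among the several "different ways" of analyzing the model output that the abstract mentions.

import numpy as np

def incremental_da(source_x, source_y, target_x,
                   train_dann, predict_proba, confidence=0.95):
    """Sketch of incremental unsupervised DA via self-labeling.

    train_dann(src_x, src_y, tgt_x) -> model   (hypothetical trainer)
    predict_proba(model, x) -> (n, classes)    (hypothetical scorer)
    """
    unlabeled = target_x
    model = None
    while len(unlabeled) > 0:
        # 1) Re-train the domain-adversarial model against the
        #    remaining (shrinking) unlabeled target set.
        model = train_dann(source_x, source_y, unlabeled)

        # 2) Score every remaining target sample.
        probs = predict_proba(model, unlabeled)
        conf = probs.max(axis=1)      # confidence of the top class
        pseudo = probs.argmax(axis=1) # pseudo-label (top class)

        # 3) Keep the samples the model is most confident about.
        #    If nothing passes the threshold, take the single best
        #    sample so the loop is guaranteed to terminate.
        keep = conf >= confidence
        if not keep.any():
            keep = conf == conf.max()

        # 4) Self-label: move the selected samples from the target
        #    set into the source training set and repeat.
        source_x = np.concatenate([source_x, unlabeled[keep]])
        source_y = np.concatenate([source_y, pseudo[keep]])
        unlabeled = unlabeled[~keep]
    return model

Injecting the trainer and scorer as parameters keeps the sketch independent of any particular DA implementation; moving self-labeled samples out of the target set is what forces the adversarial component to find new domain-invariant features on each iteration.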
Sponsor(s): The work of Antonio-Javier Gallego and Jorge Calvo-Zaragoza was supported in part by the Spanish Ministry under Project TIN2017-86576-R and in part by the University of Alicante under Project GRE19-04.
URI: http://hdl.handle.net/10045/114414
ISSN: 2162-237X (Print) | 2162-2388 (Online)
DOI: 10.1109/TNNLS.2020.3025954
Language: eng
Type: info:eu-repo/semantics/article
Rights: © 2020 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
Peer reviewed: yes
Publisher's version: https://doi.org/10.1109/TNNLS.2020.3025954
Appears in collections: INV - GRFIA - Artículos de Revistas

Files in this item:

File | Description | Size | Format
Gallego_etal_2020_IEEE-TNNLS_accepted.pdf | Accepted Manuscript (open access) | 2.67 MB | Adobe PDF
Gallego_etal_2020_IEEE-TNNLS_final.pdf | Final version (restricted access; copy available on request) | 2.5 MB | Adobe PDF


All documents in RUA are protected by copyright. Some rights reserved.