Applying Human-in-the-Loop to construct a dataset for determining content reliability to combat fake news

Please use this identifier to cite or link to this item: http://hdl.handle.net/10045/137336
Item information
Title: Applying Human-in-the-Loop to construct a dataset for determining content reliability to combat fake news
Author(s): Bonet-Jover, Alba | Sepúlveda-Torres, Robiert | Saquete Boró, Estela | Martínez-Barco, Patricio | Piad-Morffis, Alejandro | Estévez-Velarde, Suilan
Research group(s): Procesamiento del Lenguaje y Sistemas de Información (GPLSI)
Center, Department or Service: Universidad de Alicante. Departamento de Lenguajes y Sistemas Informáticos | Universidad de Alicante. Instituto Universitario de Investigación Informática
Keywords: Natural language processing | Fake news detection | Assisted annotation | Dataset construction | Human-in-the-Loop Artificial Intelligence | Active learning
Publication date: 20-Sep-2023
Publisher: Elsevier
Bibliographic citation: Engineering Applications of Artificial Intelligence. 2023, 126(Part D): 107152. https://doi.org/10.1016/j.engappai.2023.107152
Abstract: Annotated corpora are indispensable tools for training computational models in Natural Language Processing. However, for more complex semantic annotation processes, annotation is a costly, arduous, and time-consuming task, resulting in a shortage of resources to train Machine Learning and Deep Learning algorithms. In view of this, this work proposes a methodology, based on the human-in-the-loop paradigm, for the semi-automatic annotation of complex tasks. The methodology is applied to the construction of a reliability dataset of Spanish news in order to combat disinformation and fake news. By implementing the proposed semi-automatic annotation methodology, we obtain a high-quality resource while increasing annotator efficacy and speed with fewer examples. The methodology consists of three incremental phases and results in the construction of the RUN dataset. The resource was evaluated in terms of time reduction (almost 64% less annotation time than fully manual annotation), annotation quality (consistency of annotation and inter-annotator agreement), and performance, by training a model on the semi-automatic RUN dataset (accuracy 95%, F1 95%), validating the suitability of the proposal.
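The abstract describes the human-in-the-loop methodology only at a high level. As a purely illustrative aid, the sketch below shows a generic human-in-the-loop annotation loop with uncertainty sampling, one common way of realizing the "Active learning" keyword listed above. It is not the authors' RUN pipeline: the toy news snippets, the human_annotate() prompt, and the scikit-learn classifier are assumptions introduced here for illustration only.

    # Minimal, generic sketch of a human-in-the-loop annotation loop with
    # uncertainty sampling. NOT the authors' RUN pipeline: the snippets,
    # labels, and the interactive human_annotate() step are illustrative.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Hypothetical pool of unlabelled news-style snippets (toy data).
    pool = [
        "El ministerio confirma el nuevo calendario de vacunacion.",
        "Cura milagrosa que los medicos no quieren que conozcas.",
        "El ayuntamiento publica los presupuestos anuales.",
        "Estudio secreto demuestra que el agua causa enfermedades.",
        "La universidad abre el plazo de matricula en septiembre.",
        "Famoso revela el truco definitivo para ganar la loteria.",
    ]

    def human_annotate(text):
        # Stand-in for the manual step: an annotator reviews the item
        # (and the model's suggestion) and returns a gold label.
        return input(f"Label ('reliable'/'unreliable') for {text!r} -> ").strip()

    labelled_texts, labels = [], []

    # Seed phase: annotate a small batch fully by hand.
    # (The seed batch must contain examples of both classes so that a
    # classifier can be trained on it.)
    for text in pool[:2]:
        labelled_texts.append(text)
        labels.append(human_annotate(text))
    pool = pool[2:]

    # Assisted phases: train on what is labelled so far, pre-annotate the
    # remaining pool, route the least certain item back to the human, and
    # retrain with the corrected label.
    while pool:
        vectorizer = TfidfVectorizer()
        X = vectorizer.fit_transform(labelled_texts)
        clf = LogisticRegression(max_iter=1000).fit(X, labels)

        probs = clf.predict_proba(vectorizer.transform(pool))
        confidence = probs.max(axis=1)        # model certainty per item
        idx = int(np.argmin(confidence))      # least certain item first

        suggestion = clf.classes_[int(np.argmax(probs[idx]))]
        text = pool.pop(idx)
        print(f"Model suggests '{suggestion}' (confidence {confidence[idx]:.2f})")
        labelled_texts.append(text)
        labels.append(human_annotate(text))   # human confirms or corrects

In the paper's actual setting, pre-annotations produced by the model are reviewed and corrected by human annotators over three incremental phases, which is the setting in which the reported reduction of almost 64% in annotation time with respect to fully manual annotation is measured.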
Sponsor(s): This research work is funded by MCIN/AEI/10.13039/501100011033 and, as appropriate, by “ERDF A way of making Europe”, by the “European Union” or by the “European Union NextGenerationEU/PRTR” through the project TRIVIAL: Technological Resources for Intelligent VIral AnaLysis through NLP (PID2021-122263OB-C22) and the project SOCIALTRUST: Assessing trustworthiness in digital media (PDC2022-133146-C22). It is also funded by the Generalitat Valenciana, Spain, through the project NL4DISMIS: Natural Language Technologies for dealing with dis- and misinformation (CIPROM/2021/21), and grant ACIF/2020/177.
URI: http://hdl.handle.net/10045/137336
ISSN: 0952-1976 (Print) | 1873-6769 (Online)
DOI: 10.1016/j.engappai.2023.107152
Language: eng
Type: info:eu-repo/semantics/article
Rights: © 2023 The Author(s). Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
Peer reviewed: yes
Publisher's version: https://doi.org/10.1016/j.engappai.2023.107152
Appears in collections: INV - GPLSI - Artículos de Revistas

Files in this item:
File: Bonet-Jover_etal_2023_EngApplArtifIntellig.pdf | Size: 2,08 MB | Format: Adobe PDF


This item is licensed under a Creative Commons License.