RUN-AS: a novel approach to annotate news reliability for disinformation detection

Please use this identifier to cite or link to this item: http://hdl.handle.net/10045/136804
Item information
Title: RUN-AS: a novel approach to annotate news reliability for disinformation detection
Author(s): Bonet-Jover, Alba | Sepúlveda-Torres, Robiert | Saquete Boró, Estela | Martínez-Barco, Patricio | Nieto Pérez, Mario
Research group(s) or GITE: Procesamiento del Lenguaje y Sistemas de Información (GPLSI)
Centre, Department or Service: Universidad de Alicante. Departamento de Lenguajes y Sistemas Informáticos | Universidad de Alicante. Instituto Universitario de Investigación Informática
Keywords: Natural language processing | Annotation guideline | Dataset annotation | Reliability detection | Disinformation detection
Publication date: 6-Aug-2023
Publisher: Springer Nature
Bibliographic citation: Language Resources and Evaluation. 2023. https://doi.org/10.1007/s10579-023-09678-9
Abstract: The development of the internet and digital technologies has inadvertently facilitated the huge disinformation problem that society faces today. This phenomenon impacts ideologies, politics and public health. The 2016 US presidential elections, the Brexit referendum, the COVID-19 pandemic and the Russia-Ukraine war have been ideal scenarios for the spreading of fake news and hoaxes, due to the massive dissemination of information. Assuming that fake news mixes reliable and unreliable information, we propose RUN-AS (Reliable and Unreliable Annotation Scheme), a fine-grained annotation scheme that enables the labelling of the structural parts and essential content elements of a news item and their classification into Reliable and Unreliable. This annotation proposal aims to detect disinformation patterns in text and to classify the global reliability of news. To this end, a dataset in Spanish was built and manually annotated with RUN-AS, and several experiments using this dataset were conducted to validate the annotation scheme with Machine Learning (ML) and Deep Learning (DL) algorithms. The experiments evidence the validity of the proposed annotation scheme, obtaining the best macro F1 score (F1m) of 0.948 with the Decision Tree algorithm.
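As an illustration of the reliability classification step described in the abstract, the sketch below shows how a Decision Tree could be trained and scored with macro-averaged F1 (F1m). This is not the authors' implementation: it assumes scikit-learn and uses placeholder feature vectors and labels standing in for features derived from RUN-AS annotations.

# Minimal sketch of the reliability classification step, NOT the authors' code.
# Assumes feature vectors derived from RUN-AS annotations (e.g. counts of
# Reliable/Unreliable structural parts per news item) and binary labels.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import f1_score

# Hypothetical data: each row is one news item, columns are annotation-derived features
X = np.random.rand(200, 10)           # placeholder feature matrix
y = np.random.randint(0, 2, 200)      # placeholder labels: 0 = unreliable, 1 = reliable

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

clf = DecisionTreeClassifier(random_state=42)
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
print("Macro F1 (F1m):", f1_score(y_test, pred, average="macro"))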
Sponsor(s): Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. This research work is funded by MCIN/AEI/10.13039/501100011033 and, as appropriate, by "ERDF A way of making Europe", by the "European Union" or by the "European Union NextGenerationEU/PRTR" through the project TRIVIAL: Technological Resources for Intelligent VIral AnaLysis through NLP (PID2021-122263OB-C22) and the project SOCIALTRUST: Assessing trustworthiness in digital media (PDC2022-133146-C22). Also funded by Generalitat Valenciana through the project NL4DISMIS: Natural Language Technologies for dealing with dis- and misinformation (CIPROM/2021/21), and the grant ACIF/2020/177.
URI: http://hdl.handle.net/10045/136804
ISSN: 1574-020X (Print) | 1574-0218 (Online)
DOI: 10.1007/s10579-023-09678-9
Language: eng
Type: info:eu-repo/semantics/article
Rights: © The Author(s) 2023. Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Peer reviewed: yes
Publisher's version: https://doi.org/10.1007/s10579-023-09678-9
Appears in collections: INV - GPLSI - Artículos de Revistas

Files in this item:
File | Size | Format
Bonet-Jover_etal_2023_LangResourcesEvaluation.pdf | 1,29 MB | Adobe PDF


This item is licensed under a Creative Commons License.