Bias mitigation for fair automation of classification tasks

Please use this identifier to cite or link to this item: http://hdl.handle.net/10045/148340
Full metadata record
DC Field | Value | Language
dc.contributor | Procesamiento del Lenguaje y Sistemas de Información (GPLSI) | es_ES
dc.contributor.author | Consuegra-Ayala, Juan Pablo | -
dc.contributor.author | Gutiérrez, Yoan | -
dc.contributor.author | Almeida-Cruz, Yudivian | -
dc.contributor.author | Palomar, Manuel | -
dc.contributor.other | Universidad de Alicante. Departamento de Lenguajes y Sistemas Informáticos | es_ES
dc.contributor.other | Universidad de Alicante. Instituto Universitario de Investigación Informática | es_ES
dc.date.accessioned | 2024-10-25T07:55:14Z | -
dc.date.available | 2024-10-25T07:55:14Z | -
dc.date.issued | 2024-10-21 | -
dc.identifier.citation | Expert Systems. 2024. https://doi.org/10.1111/exsy.13734 | es_ES
dc.identifier.issn | 0266-4720 (Print) | -
dc.identifier.issn | 1468-0394 (Online) | -
dc.identifier.uri | http://hdl.handle.net/10045/148340 | -
dc.description.abstract | The incorporation of machine learning algorithms into high-risk decision-making tasks has raised alarms in the scientific community. Research shows that machine learning-based technologies can contain biases that cause unfair decisions for certain population groups. The fundamental danger of ignoring this problem is that machine learning methods can not only reflect the biases present in our society but also amplify them. This article presents the design and validation of a technology to assist the fair automation of classification problems. In essence, the proposal takes advantage of the intermediate solutions generated while solving classification problems with Auto-ML tools, in particular AutoGOAL, to create unbiased/fair classifiers. The technology employs a multi-objective optimization search to find the collection of models with the best trade-offs between performance and fairness. To solve the optimization problem, we introduce a combination of Probabilistic Grammatical Evolution Search and NSGA-II. The technology was evaluated using the Adult dataset from the UCI repository, a common benchmark in related research. Results were compared with other published results in scenarios with single and multiple fairness definitions. Our experiments demonstrate the technology's ability to automate classification tasks while incorporating fairness constraints. Additionally, our method achieves competitive results against other bias mitigation techniques. A notable advantage of our approach is its minimal requirement for machine learning expertise, thanks to its Auto-ML foundation. This makes the technology accessible and valuable for advancing fairness in machine learning applications. The source code is available online for the research community. (An illustrative sketch of this performance/fairness trade-off search follows this metadata record.) | es_ES
dc.description.sponsorship | This research has been partially funded by the University of Alicante and the University of Havana, the Spanish Ministry of Science and Innovation, the Generalitat Valenciana, and the European Regional Development Fund (ERDF) through the following funding. At the national level, the following projects were granted: TRIVIAL (PID2021-122263OB-C22); CORTEX (PID2021-123956OB-I00); CLEARTEXT (TED2021-130707B-I00); and SOCIALTRUST (PDC2022-133146-C22), funded by MCIN/AEI/10.13039/501100011033 and, as appropriate, by ERDF ‘A way of making Europe’, by the European Union or by the European Union NextGenerationEU/PRTR. Funding was also received through the VIVES: ‘Pla de Tecnologies de la Llengua per al valencià’ project (2022/TL22/00215334) from the Projecte Estratègic per a la Recuperació i Transformació Econòmica (PERTE). At the regional level, the Generalitat Valenciana (Conselleria d'Educació, Investigació, Cultura i Esport) granted funding for NL4DISMIS (CIPROM/2021/21). Moreover, this work was backed by two COST Actions: CA19134 (‘Distributed Knowledge Graphs’) and CA19142 (‘Leading Platform for European Citizens, Industries, Academia, and Policymakers in Media Accessibility’). | es_ES
dc.language | eng | es_ES
dc.publisher | John Wiley & Sons | es_ES
dc.rights | © 2024 John Wiley & Sons Ltd. | es_ES
dc.subject | Auto-ML | es_ES
dc.subject | Bias mitigation | es_ES
dc.subject | Ensemble methods | es_ES
dc.subject | Fairness | es_ES
dc.subject | Multi-objective optimization | es_ES
dc.title | Bias mitigation for fair automation of classification tasks | es_ES
dc.type | info:eu-repo/semantics/article | es_ES
dc.peerreviewed | si | es_ES
dc.identifier.doi | 10.1111/exsy.13734 | -
dc.relation.publisherversion | https://doi.org/10.1111/exsy.13734 | es_ES
dc.rights.accessRights | info:eu-repo/semantics/embargoedAccess | es_ES
dc.relation.projectID | info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2021-2023/PID2021-122263OB-C22 | es_ES
dc.relation.projectID | info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2021-2023/PID2021-123956OB-I00 | es_ES
dc.relation.projectID | info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2021-2023/TED2021-130707B-I00 | es_ES
dc.relation.projectID | info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2021-2023/PDC2022-133146-C22 | es_ES
dc.date.embargoEnd | info:eu-repo/date/embargoEnd/2025-10-22 | -
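
The abstract above outlines a multi-objective search that balances predictive performance against fairness and keeps the models with the best trade-offs. The snippet below is a minimal, hypothetical sketch of that idea, not the published AutoGOAL/NSGA-II system: it trains a few scikit-learn classifiers on synthetic data (a stand-in for the Adult dataset), scores each on accuracy and on a demographic-parity gap for an assumed binary sensitive attribute, and keeps the Pareto-optimal candidates. The model choices, fairness metric, and data here are illustrative assumptions.

```python
# Hypothetical sketch only: not the authors' AutoGOAL/NSGA-II pipeline.
# Illustrates selecting the Pareto front of classifiers under an
# accuracy-vs-fairness trade-off on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier


def demographic_parity_difference(y_pred, sensitive):
    """Absolute gap in positive-prediction rates between the two groups."""
    rate_a = y_pred[sensitive == 0].mean()
    rate_b = y_pred[sensitive == 1].mean()
    return abs(rate_a - rate_b)


def pareto_front(points):
    """Indices of non-dominated points (maximise accuracy, minimise unfairness)."""
    front = []
    for i, (acc_i, unf_i) in enumerate(points):
        dominated = any(
            acc_j >= acc_i and unf_j <= unf_i and (acc_j > acc_i or unf_j < unf_i)
            for j, (acc_j, unf_j) in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front


# Synthetic stand-in for a task like Adult, with one binary sensitive attribute.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
sensitive = (X[:, 0] > 0).astype(int)
X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(
    X, y, sensitive, test_size=0.3, random_state=0
)

candidates = [
    LogisticRegression(max_iter=1000),
    DecisionTreeClassifier(max_depth=5, random_state=0),
    RandomForestClassifier(n_estimators=100, random_state=0),
]

scores = []
for model in candidates:
    model.fit(X_tr, y_tr)
    y_pred = model.predict(X_te)
    accuracy = (y_pred == y_te).mean()
    unfairness = demographic_parity_difference(y_pred, s_te)
    scores.append((accuracy, unfairness))

# Report only the non-dominated (Pareto-optimal) candidates.
for i in pareto_front(scores):
    print(type(candidates[i]).__name__, scores[i])
```

In the published approach the candidate set is not a fixed list but the space of pipelines explored by AutoGOAL, and the non-dominated selection is driven by NSGA-II combined with Probabilistic Grammatical Evolution Search; the sketch only mirrors the trade-off idea.
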
Appears in Collections: INV - GPLSI - Artículos de Revistas

Files in This Item:
File | Description | Size | Format
Consuegra‐Ayala_etal_2024_ExpertSystems_final.pdf | Final version (restricted access) | 1.91 MB | Adobe PDF
Consuegra‐Ayala_etal_2024_ExpertSystems_revised.pdf | 12-month embargo (open access from 22 Oct 2025) | 655.89 kB | Adobe PDF


Items in RUA are protected by copyright, with all rights reserved, unless otherwise indicated.