A Methodology based on Rebalancing Techniques to measure and improve Fairness in Artificial Intelligence algorithms

Please use this identifier to cite or link to this item: http://hdl.handle.net/10045/123225
Full metadata record
DC Field | Value | Language
dc.contributor | Lucentia | es_ES
dc.contributor.author | Lavalle, Ana | -
dc.contributor.author | Maté, Alejandro | -
dc.contributor.author | Trujillo, Juan | -
dc.contributor.author | García Carrasco, Jorge | -
dc.contributor.other | Universidad de Alicante. Departamento de Lenguajes y Sistemas Informáticos | es_ES
dc.date.accessioned | 2022-05-02T08:15:21Z | -
dc.date.available | 2022-05-02T08:15:21Z | -
dc.date.issued | 2022-04-25 | -
dc.identifier.citation | Proceedings of the 24th International Workshop on Design, Optimization, Languages and Analytical Processing of Big Data (DOLAP), co-located with the 25th International Conference on Extending Database Technology and the 25th International Conference on Database Theory (EDBT/ICDT 2022). Edinburgh, UK, March 29, 2022. CEUR Workshop Proceedings, Vol-3130, 81-85 | es_ES
dc.identifier.issn | 1613-0073 | -
dc.identifier.uri | http://hdl.handle.net/10045/123225 | -
dc.description.abstract | Artificial Intelligence (AI) has become one of the key drivers for the next decade. As important decisions are increasingly supported or directly made by AI systems, concerns regarding the rationale and fairness of their outputs are becoming more and more prominent. Following the recent interest in fairer predictions, several metrics for measuring fairness have been proposed, leading to different objectives that may need to be addressed in different ways. In this paper, we (i) propose a methodology for analyzing and improving fairness in AI predictions by selecting the sensitive attributes that should be protected; (ii) analyze how the most common rebalancing approaches affect the fairness of AI predictions, and how they compare to the alternatives of removing the protected attribute or creating separate classifiers for each of its groups; and (iii) generate a set of tables that can be easily computed for choosing the best alternative in each particular case. The main advantage of our methodology is that it allows AI practitioners to measure and improve fairness in AI algorithms in a systematic way. To validate our proposal, we have applied it to the COMPAS dataset, which several previous studies have shown to be biased. | es_ES
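Since this record carries no implementation, the following Python sketch is purely illustrative of the kind of analysis the abstract describes: it measures one standard fairness metric (demographic parity difference) on a synthetic dataset, applies one common rebalancing technique (random oversampling of the under-represented group of a protected attribute), and compares the metric before and after. The data, variable names, and helper functions are assumptions for illustration; this is not the authors' code or their exact set of metrics.

```python
# Illustrative sketch only; not the paper's actual implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between the two groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return abs(rates[0] - rates[1])

def oversample_minority_group(X, y, group):
    """Randomly duplicate rows of the smaller group until group sizes match."""
    values, counts = np.unique(group, return_counts=True)
    minority_idx = np.flatnonzero(group == values[np.argmin(counts)])
    extra = rng.choice(minority_idx, size=counts.max() - counts.min(), replace=True)
    keep = np.concatenate([np.arange(len(y)), extra])
    return X[keep], y[keep], group[keep]

# Synthetic stand-in for a dataset with a sensitive attribute (e.g. race in
# COMPAS). One feature is deliberately correlated with the group, so a naive
# classifier ends up treating the two groups differently.
X = rng.normal(size=(1000, 5))
group = rng.choice([0, 1], size=1000, p=[0.8, 0.2])
X[:, 1] += 1.5 * group  # proxy feature leaking the protected attribute
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

clf = LogisticRegression().fit(X, y)
print("DPD before rebalancing:", demographic_parity_difference(clf.predict(X), group))

X_bal, y_bal, _ = oversample_minority_group(X, y, group)
clf_bal = LogisticRegression().fit(X_bal, y_bal)
print("DPD after rebalancing: ", demographic_parity_difference(clf_bal.predict(X), group))
```

Repeating this before/after comparison for each candidate treatment (rebalancing, removing the protected attribute, or training per-group classifiers) yields the kind of comparison tables the abstract says the methodology generates for choosing the best alternative in each case.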
dc.description.sponsorship | This work has been co-funded by the AETHER-UA project (PID2020-112540RB-C43), funded by the Spanish Ministry of Science and Innovation, and by the BALLADEER project (PROMETEO/2021/088), funded by the Conselleria de Innovación, Universidades, Ciencia y Sociedad Digital (Generalitat Valenciana). | es_ES
dc.language | eng | es_ES
dc.publisher | CEUR | es_ES
dc.rights | © Copyright 2022 for this paper by its author(s). Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). | es_ES
dc.subject | Artificial Intelligence | es_ES
dc.subject | Fairness | es_ES
dc.subject | Rebalancing techniques | es_ES
dc.subject.other | Lenguajes y Sistemas Informáticos | es_ES
dc.title | A Methodology based on Rebalancing Techniques to measure and improve Fairness in Artificial Intelligence algorithms | es_ES
dc.type | info:eu-repo/semantics/conferenceObject | es_ES
dc.peerreviewed | si | es_ES
dc.relation.publisherversion | http://ceur-ws.org/Vol-3130/ | es_ES
dc.rights.accessRights | info:eu-repo/semantics/openAccess | es_ES
dc.relation.projectID | info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2017-2020/PID2020-112540RB-C43 | es_ES
Appears in collections: INV - LUCENTIA - Comunicaciones a Congresos, Conferencias, etc.

Files in this item:
File | Description | Size | Format
Lavalle_etal_2022_CEUR.pdf | - | 875.57 kB | Adobe PDF


This item is licensed under a Creative Commons License.