A Methodology based on Rebalancing Techniques to measure and improve Fairness in Artificial Intelligence algorithms

Always use this identifier to cite or link to this item: http://hdl.handle.net/10045/123225
Item information
Title: A Methodology based on Rebalancing Techniques to measure and improve Fairness in Artificial Intelligence algorithms
Authors: Lavalle, Ana | Maté, Alejandro | Trujillo, Juan | García Carrasco, Jorge
Research group or GITE: Lucentia
Center, Department or Service: Universidad de Alicante. Departamento de Lenguajes y Sistemas Informáticos
Keywords: Artificial Intelligence | Fairness | Rebalancing techniques
Knowledge areas: Lenguajes y Sistemas Informáticos
Publication date: 25 April 2022
Publisher: CEUR
Bibliographic citation: Proceedings of the 24th International Workshop on Design, Optimization, Languages and Analytical Processing of Big Data (DOLAP), co-located with the 25th International Conference on Extending Database Technology and the 25th International Conference on Database Theory (EDBT/ICDT 2022). Edinburgh, UK, March 29, 2022. CEUR Workshop Proceedings, Vol-3130, 81-85
Abstract: Artificial Intelligence (AI) has become one of the key drivers for the next decade. As important decisions are increasingly supported or directly made by AI systems, concerns regarding the rationale and fairness of their outputs are becoming more and more prominent. Following the recent interest in fairer predictions, several metrics for measuring fairness have been proposed, leading to different objectives that may need to be addressed in different ways. In this paper, we propose (i) a methodology for analyzing and improving fairness in AI predictions by selecting sensitive attributes that should be protected; (ii) an analysis of how the most common rebalancing approaches affect the fairness of AI predictions and how they compare to the alternatives of removing the protected attribute or creating separate classifiers for each group within it. Finally, (iii) our methodology generates a set of tables that can be easily computed for choosing the best alternative in each particular case. The main advantage of our methodology is that it allows AI practitioners to measure and improve fairness in AI algorithms in a systematic way. To validate our proposal, we have applied it to the COMPAS dataset, which several previous studies have shown to be biased.
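The abstract describes measuring fairness across groups of a protected attribute and comparing rebalancing approaches against it. The following is a minimal, illustrative sketch of two such building blocks: a demographic-parity gap metric and random oversampling. The function names and the metric choice are assumptions for illustration; they are not taken from the paper, which evaluates several metrics and rebalancing techniques on the COMPAS dataset.

```python
import random
from collections import Counter

def demographic_parity_gap(y_pred, groups):
    """Largest difference in positive-prediction rate between any
    two groups of the protected attribute (0 = perfectly balanced)."""
    rates = {}
    for g in set(groups):
        members = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates[g] = sum(members) / len(members)
    return max(rates.values()) - min(rates.values())

def random_oversample(X, y, seed=0):
    """Duplicate randomly chosen minority-class rows until every
    class has as many rows as the majority class."""
    rng = random.Random(seed)
    counts = Counter(y)
    target = max(counts.values())
    X_out, y_out = list(X), list(y)
    for cls, n in counts.items():
        pool = [x for x, yy in zip(X, y) if yy == cls]
        for _ in range(target - n):
            X_out.append(rng.choice(pool))
            y_out.append(cls)
    return X_out, y_out
```

In the spirit of the methodology, such a metric would be computed per candidate strategy (rebalancing, attribute removal, separate classifiers) and the results tabulated to pick the best alternative for each case.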
Sponsors: This work has been co-funded by the AETHER-UA project (PID2020-112540RB-C43), funded by the Spanish Ministry of Science and Innovation, and the BALLADEER project (PROMETEO/2021/088), funded by the Conselleria de Innovación, Universidades, Ciencia y Sociedad Digital (Generalitat Valenciana).
URI: http://hdl.handle.net/10045/123225
ISSN: 1613-0073
Language: eng
Type: info:eu-repo/semantics/conferenceObject
Rights: © Copyright 2022 for this paper by its author(s). Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
Peer reviewed: yes
Publisher's version: http://ceur-ws.org/Vol-3130/
Appears in collection: INV - LUCENTIA - Comunicaciones a Congresos, Conferencias, etc.

Files in this item:
File: Lavalle_etal_2022_CEUR.pdf (875,57 kB, Adobe PDF)

This item is licensed under a Creative Commons License.