A Methodology based on Rebalancing Techniques to measure and improve Fairness in Artificial Intelligence algorithms
Please use this identifier to cite or link to this item:
http://hdl.handle.net/10045/123225
Title: A Methodology based on Rebalancing Techniques to measure and improve Fairness in Artificial Intelligence algorithms
Authors: Lavalle, Ana; Maté, Alejandro; Trujillo, Juan; García Carrasco, Jorge
Research Group/s: Lucentia
Center, Department or Service: Universidad de Alicante. Departamento de Lenguajes y Sistemas Informáticos
Keywords: Artificial Intelligence; Fairness; Rebalancing techniques
Knowledge Area: Lenguajes y Sistemas Informáticos
Issue Date: 25-Apr-2022
Publisher: CEUR
Citation: Proceedings of the 24th International Workshop on Design, Optimization, Languages and Analytical Processing of Big Data (DOLAP), co-located with the 25th International Conference on Extending Database Technology and the 25th International Conference on Database Theory (EDBT/ICDT 2022). Edinburgh, UK, March 29, 2022. CEUR Workshop Proceedings, Vol-3130, 81-85
Abstract: Artificial Intelligence (AI) has become one of the key drivers for the next decade. As important decisions are increasingly supported or made directly by AI systems, concerns regarding the rationale and fairness of their outputs are becoming more and more prominent. Following the recent interest in fairer predictions, several metrics for measuring fairness have been proposed, leading to different objectives that may need to be addressed in different ways. In this paper, we (i) propose a methodology for analyzing and improving fairness in AI predictions by selecting the sensitive attributes that should be protected; (ii) analyze how the most common rebalancing approaches affect the fairness of AI predictions, and how they compare to the alternatives of removing a protected attribute or creating a separate classifier for each group within it; and (iii) generate a set of easily computed tables for choosing the best alternative in each particular case. The main advantage of our methodology is that it allows AI practitioners to measure and improve fairness in AI algorithms in a systematic way. To validate our proposal, we have applied it to the COMPAS dataset, which several previous studies have shown to be biased.
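As an illustration of the kind of analysis the abstract describes, the sketch below computes one common group-fairness metric (statistical parity difference) on a toy dataset and then rebalances the under-represented protected group by random oversampling. The data, function names, and the choice of metric are illustrative assumptions for this sketch, not the paper's exact procedure.

```python
# Hypothetical sketch: measure group fairness and rebalance by oversampling.
# Not the authors' implementation; metric and data are illustrative only.
from collections import Counter
import random

def statistical_parity_difference(records, group_key, label_key):
    """P(label=1 | group=a) - P(label=1 | group=b), assuming two groups."""
    groups = {}
    for r in records:
        groups.setdefault(r[group_key], []).append(r[label_key])
    (a, ya), (b, yb) = sorted(groups.items())
    return sum(ya) / len(ya) - sum(yb) / len(yb)

def oversample(records, group_key, seed=0):
    """Duplicate random records of smaller groups until group sizes match."""
    rng = random.Random(seed)
    counts = Counter(r[group_key] for r in records)
    target = max(counts.values())
    balanced = list(records)
    for g, n in counts.items():
        pool = [r for r in records if r[group_key] == g]
        balanced.extend(rng.choice(pool) for _ in range(target - n))
    return balanced

# Toy data: protected group "a" is under-represented (4 vs 12 records).
data = (
    [{"group": "a", "label": 1}] * 3 + [{"group": "a", "label": 0}] * 1
    + [{"group": "b", "label": 1}] * 4 + [{"group": "b", "label": 0}] * 8
)
print(round(statistical_parity_difference(data, "group", "label"), 3))
print(Counter(r["group"] for r in oversample(data, "group")))
```

A value of 0 for the metric would mean both groups receive positive predictions at the same rate; rebalancing the training data is one of several ways the paper's methodology could shift a model toward that goal.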
Sponsor: This work has been co-funded by the AETHER-UA project (PID2020-112540RB-C43), funded by the Spanish Ministry of Science and Innovation, and the BALLADEER project (PROMETEO/2021/088), funded by the Conselleria de Innovación, Universidades, Ciencia y Sociedad Digital (Generalitat Valenciana).
URI: http://hdl.handle.net/10045/123225
ISSN: 1613-0073
Language: eng
Type: info:eu-repo/semantics/conferenceObject
Rights: © Copyright 2022 for this paper by its author(s). Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
Peer Review: yes
Publisher version: http://ceur-ws.org/Vol-3130/
Appears in Collections: INV - LUCENTIA - Comunicaciones a Congresos, Conferencias, etc.
Files in This Item:
PDF, 875.57 kB