One of the most critical and controversial topics in artificial intelligence is bias. As more applications that rely on AI come to market, software developers and data scientists can unwittingly inject their personal biases into these solutions. In addition, datasets curated over time from historical data can be inherently biased with respect to gender, race, and other attributes.
Given that these AI systems are used to make decisions in criminal justice, college admissions, loan approvals, and more, it has become critical to have tools that detect and remediate biased AI systems. We have launched AI Fairness 360, an open-source library for detecting and removing bias in models and datasets, with more than 70 fairness metrics and 10 bias mitigation algorithms.
We will share lessons learned while using AI Fairness 360 and show how to leverage it to detect bias and to debias models during pre-processing, in-processing, and post-processing.
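To make the pre-processing stage concrete, the sketch below computes one common fairness metric (disparate impact) by hand on a toy dataset and then applies reweighing (Kamiran & Calders), one of the pre-processing techniques that AI Fairness 360 implements. The toy data, the 0/1 encoding of the protected attribute, and the function names here are illustrative assumptions for this sketch, not the library's own API.

```python
import numpy as np

# Toy data: s = protected attribute (1 = privileged group, 0 = unprivileged),
# y = label (1 = favorable outcome). Values chosen only for illustration.
s = np.array([1, 1, 1, 1, 0, 0, 0, 0])
y = np.array([1, 1, 1, 0, 1, 0, 0, 0])

def disparate_impact(s, y, w=None):
    """P(y=1 | unprivileged) / P(y=1 | privileged), optionally sample-weighted.
    A value of 1.0 means parity; values far below 1.0 indicate bias
    against the unprivileged group."""
    w = np.ones_like(y, dtype=float) if w is None else w
    p_unpriv = np.sum(w * y * (s == 0)) / np.sum(w * (s == 0))
    p_priv = np.sum(w * y * (s == 1)) / np.sum(w * (s == 1))
    return p_unpriv / p_priv

def reweighing_weights(s, y):
    """Kamiran-Calders reweighing: assign each (group, label) cell the weight
    P(s) * P(y) / P(s, y), so that s and y become statistically independent
    under the weighted distribution."""
    n = len(y)
    w = np.empty(n, dtype=float)
    for sv in (0, 1):
        for yv in (0, 1):
            cell = (s == sv) & (y == yv)
            w[cell] = (np.sum(s == sv) * np.sum(y == yv)) / (n * np.sum(cell))
    return w

print(disparate_impact(s, y))       # 1/3: strong bias before mitigation
w = reweighing_weights(s, y)
print(disparate_impact(s, y, w=w))  # ~1.0 after reweighing (up to float rounding)
```

Reweighing only changes sample weights, not features or labels, which is why it sits in the pre-processing stage: any downstream classifier that accepts sample weights can train on the adjusted data unchanged.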