2023-04-19, B07-B08
Is your model biased? Is it deviating from the predictions it ought to make? Has it misunderstood the concept it was trained to capture? In the world of artificial intelligence and machine learning, the word "fairness" comes up constantly: it describes the quality of treating people impartially, without favoring or penalizing particular groups. Fairness in ML is essential for contemporary businesses. It helps build consumer confidence, demonstrates to customers that their concerns matter, and aids compliance with guidelines established by regulators, thereby upholding the idea of responsible AI. In this talk, let's explore how certain sensitive features influence a model and introduce bias into it, and look at how we can make the model fairer.
We cannot escape thinking about fairness in terms of numbers and math, but contrary to popular belief, models are not fair simply because they are mathematical. AI systems are subject to bias. The bias may be inherent, stemming from historical bias in the training dataset. There may be label bias, which occurs when the labeled data does not fully represent the universe of potential labels. Another form is sampling bias, which occurs when certain people in the intended population have a higher or lower probability of being sampled than others. Models learn from such biased datasets, which can lead to unfair decisions, and as cascading models are built on top of one another, the bias continues to spread.
Model fairness is a pressing concern. Unfair AI systems can create recurring losses for businesses and damage a company's commercial standing, eroding its customer base, inviting reputational harm, and reducing transparency. As a result, model fairness is becoming increasingly necessary.
In the proposed talk, I will gently introduce you to the above concepts and to some open source libraries that help us assess ML models' fairness. Lastly, I will walk you through assessing the fairness of a model trained on a law school dataset using Fairlearn, an open source library from Microsoft, and the measures that can be taken to mitigate the unfairness we find.
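To give a taste of what the demo will cover, here is a minimal sketch of group-fairness assessment with Fairlearn's `MetricFrame`. The file name, the feature and target columns, and the sensitive feature are illustrative assumptions, not the exact setup used in the talk.

```python
# A minimal sketch of fairness assessment with Fairlearn.
# The file name, column names, and sensitive feature below are
# illustrative assumptions, not the exact setup used in the talk.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from fairlearn.metrics import (
    MetricFrame,
    selection_rate,
    demographic_parity_difference,
)

df = pd.read_csv("law_school.csv")  # hypothetical path
X = df[["lsat", "gpa"]]             # hypothetical feature columns
y = df["pass_bar"]                  # hypothetical binary target
A = df["race"]                      # hypothetical sensitive feature

X_train, X_test, y_train, y_test, A_train, A_test = train_test_split(
    X, y, A, test_size=0.3, random_state=42
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = model.predict(X_test)

# Disaggregate metrics by sensitive group.
mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_test,
    y_pred=y_pred,
    sensitive_features=A_test,
)
print(mf.overall)       # metrics on the whole test set
print(mf.by_group)      # the same metrics, split by group
print(mf.difference())  # largest between-group gap per metric

# A single scalar summary: the gap in selection rates between groups.
print(demographic_parity_difference(y_test, y_pred, sensitive_features=A_test))
```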
My talk will focus on:
- What metrics should be considered when assessing the fairness of an ML model?
- What mitigation measures can be applied once unfairness is detected?
- Python code to gauge the fairness of a model trained on a law school dataset using Fairlearn, and steps to mitigate the model's bias (a sketch of the mitigation step follows this list).
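As a flavor of the mitigation step, below is a hedged sketch using Fairlearn's `ExponentiatedGradient` reduction with a `DemographicParity` constraint, one of several mitigation techniques Fairlearn offers. It reuses the hypothetical splits (`X_train`, `y_train`, `A_train`, and the test sets) from the sketch above.

```python
# A sketch of one possible mitigation: the exponentiated-gradient
# reduction with a demographic-parity constraint. Reuses the
# hypothetical splits (X_train, y_train, A_train, ...) from above.
from sklearn.linear_model import LogisticRegression
from fairlearn.reductions import ExponentiatedGradient, DemographicParity
from fairlearn.metrics import demographic_parity_difference

mitigator = ExponentiatedGradient(
    estimator=LogisticRegression(max_iter=1000),
    constraints=DemographicParity(),
)
# Unlike a plain estimator, the mitigator needs the sensitive
# feature at fit time.
mitigator.fit(X_train, y_train, sensitive_features=A_train)
y_pred_mitigated = mitigator.predict(X_test)

# Compare the fairness gap before and after mitigation.
print(demographic_parity_difference(
    y_test, y_pred_mitigated, sensitive_features=A_test
))
```

Note that `ExponentiatedGradient` returns a randomized classifier, so the mitigated predictions can vary between calls to `predict`.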
Novice
Abstract as a tweet – Biased models can impact each of us. While it may feel abstract, AI fairness can be achieved through many methods and metrics. More so, mitigation reports can be your entry point to responsible AI. Check out my talk & demo at PyData Berlin.
Expected audience expertise: Python – Intermediate
Nandana is a data scientist at Censius AI. She completed her bachelor's degree at the College of Engineering, Trivandrum. She previously worked in the e-commerce industry, where she dealt with real-world problem statements including product ranking and recommendation systems. She has published a research paper in the healthcare domain in an international journal. Currently, her research interests are aligned with the Explainable AI domain.