Bias Audit and Mitigation in Julia
2021-07-29, Green

This talk introduces Fairness.jl, a toolkit to audit and mitigate bias in ML decision support tools. We will introduce the problem of fairness in ML systems, including its sources, significance, and challenges, and then demonstrate the structure and workflow of Fairness.jl.


Machine learning is involved in many crucial decision support tools, used for tasks ranging from granting parole and shortlisting job applications to accepting credit applications. Numerous political and policy developments over the past year have pointed out the transparency issues and bias in these ML-based decision support tools, so it has become crucial for the ML community to think about fairness and bias. Eliminating bias is not easy because of various trade-offs: performance versus fairness, and fairness versus fairness, since different definitions of fairness may be incompatible with one another.
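
For example, a model can satisfy one fairness definition while violating another. The illustrative Julia snippet below (toy data, plain Julia rather than the Fairness.jl API) computes per-group selection rates and true-positive rates for the same predictions: the selection rates match (demographic parity holds) while the true-positive rates do not (equal opportunity is violated).

```julia
# Illustrative only: toy labels and decisions for two groups, A and B.
y_true = [1, 1, 0, 0, 1, 0, 0, 0]          # ground-truth outcomes
y_pred = [1, 0, 1, 0, 1, 1, 0, 0]          # model decisions
group  = [:A, :A, :A, :A, :B, :B, :B, :B]  # protected attribute

# Selection rate per group (demographic parity compares these).
sel(g) = sum(y_pred[group .== g]) / count(==(g), group)

# True-positive rate per group (equal opportunity compares these).
function tpr(g)
    idx = (group .== g) .& (y_true .== 1)
    return sum(y_pred[idx]) / count(idx)
end

println("Selection rates:     A = ", sel(:A), ", B = ", sel(:B))   # 0.5 vs 0.5
println("True-positive rates: A = ", tpr(:A), ", B = ", tpr(:B))   # 0.5 vs 1.0
# Equal selection rates do not imply equal true-positive rates, which is one
# reason different fairness definitions can be impossible to satisfy simultaneously.
```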

In this talk we will discuss:
- Challenges in mitigating bias
- The metrics and fairness algorithms offered by Fairness.jl, and the workflow with the package (a minimal sketch follows this list)
- How Julia's ecosystem of packages (MLJ, Distributed) helped us perform a large, systematic benchmark of debiasing algorithms, which helped us understand their generalization properties.
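
To make the workflow concrete, here is a minimal audit sketch. It is based on my reading of the Fairness.jl documentation, so the names fair_tensor, disparity, ReweighingSamplingWrapper and their keyword arguments should be treated as assumptions and checked against the current docs.

```julia
# A minimal Fairness.jl audit sketch; API names are assumptions taken from the docs.
using Fairness
using CategoricalArrays

# Toy predictions, ground truth, and a protected attribute with three groups.
ŷ   = categorical([1, 0, 1, 1, 0, 1])
y   = categorical([0, 0, 1, 1, 1, 1])
grp = categorical(["Asian", "African", "Asian", "American", "African", "American"])

# The fairness tensor gathers group-wise confusion-matrix counts in one object,
# so group metrics and disparities can all be computed from it.
ft = fair_tensor(ŷ, y, grp)

# Group-wise rates and disparities relative to a reference group
# (measure names follow MLJ's, e.g. true_positive_rate).
true_positive_rate(ft)                               # aggregate TPR
disparity([true_positive_rate], ft; refGrp="Asian")  # per-group comparison vs. "Asian"

# Mitigation follows the same MLJ-style pattern (assumed API): wrap any MLJ
# classifier in a debiasing wrapper and fit/evaluate it like any other model,
# e.g. ReweighingSamplingWrapper(classifier=ConstantClassifier(), grp=:sex).
```

Because the wrappers behave like ordinary MLJ models, auditing and mitigation plug into the usual MLJ evaluation machinery, which is part of what made the large benchmarking study mentioned above practical.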

Repository: Fairness.jl

Documentation is available here, and an introductory blog post is available here.

Senior-year CS undergraduate at BITS Pilani, Pilani campus, India. As a JSoC '20 student I worked on Fairness.jl. I am interested in fairness in machine learning, causality, counterfactual fairness, and reinforcement learning. Lately, I have been exploring quantum AI and causal RL. Happy to chat at ashryaagr@gmail.com or on the Julia Slack :-)
To learn more about me, visit www.ashrya.in