Mozilla Festival 2021 (March 8th – 19th, 2021)

AI Bias beyond the western lens: Perspectives from India

Several studies have documented the risks of AI bias, most notably in the context of gender and racial bias. Most of these conversations, particularly those involving empirical analysis, stem from a western perspective, generally relying on western datasets and institutional structures.

This session will discuss implementations of such systems from an Indian perspective, highlighting the role that institutional biases play at every stage, from data generation to decision making.

By looking at Indian implementations, we hope to highlight the importance of local context and evaluate biases that cut across the complexities of caste, religion, class, and regional divides. This offers a starting point for discussions on AI bias rooted in the realities of how bias is experienced across different countries and communities.


What is the goal and/or outcome of your session?:

The session aims to foster discussion of issues with the real-world implementation of AI systems from a non-western perspective. The goal of the session is to inform AI developers, researchers, policy practitioners, and other participants about the multidimensional nature of the biases that exist in communities and that could be picked up by AI systems, leading to adverse effects.
We also hope to encourage more empirical research to understand AI bias in various contexts and communities. The outcomes of such research would lead the way to understanding how AI implementations might behave in real life and how to mitigate any biases or unwanted effects.

We're hoping that many efforts and discussions will continue after Mozfest. Share any ideas you already have for how to continue the work from your session.:

While the focus of this session is to highlight the Indian perspective, we hope to use this opportunity to foster alliances with participants from other regions, including other parts of Asia, the Middle East, and Africa, to understand what sorts of biases AI implementers need to take into account in their specific contexts. We are particularly interested in building and encouraging collaborations on evidence-based studies of AI bias.
We will also share with participants a repository of relevant research and tools as a guide to understanding potential sources of bias, both algorithmic and institutional.

How will you deal with varying numbers of participants in your session?:

The session format, including the interactive activities, is agnostic to the number of participants. We hope to foster a discussion around this topic: fewer participants would mean more engagement, and more participants would mean more ideas. In either case, a varying number of participants shouldn't affect the session. The compilation and repository developed as outcomes of the session are designed to reach a broader audience than the immediate participants.

See also: Speaker Profile (1.8 MB)

Gaurav is a YLT Fellow in India working on Responsible Technology. He has a Master of Public Policy from the University of Oxford and an engineering degree from IIT Kharagpur.