When looking at the possibilities, limitations, and harms of AI, there is a tendency to focus on the technology itself. This approach might allow us to capture some social issues related to AI systems, but it misses many others, as it overlooks how these technologies are implemented within existing economic, social, and political power relations. For a do-no-harm approach we need to decenter technology and understand the context in which these systems are implemented. As such, this session aims to unpack, discuss, and share experiences of contextualizing AI.
With this session I hope we can find a different approach to data harms related to AI by looking at context, and contribute to an understanding of what "do no harm" means.
We're hoping that many efforts and discussions will continue after MozFest. Share any ideas you already have for how to continue the work from your session.: The idea of contextualizing AI is part of my PhD and practical work, so the session's outcomes will feed into that ongoing research.
How will you deal with varying numbers of participants in your session?: Depending on the number of participants we can decide how to structure the session, but as these are intimate conversations, I might suggest either a size limit or co-facilitated breakout sessions.
Fieke is a researcher and practitioner on issues related to technology, autonomy, power, and human rights. She is a Mozilla Fellow and a PhD candidate at the Data Justice Lab.