The goal of this workshop is to develop a public-good template that communities can use to produce their own socio-technical AI Implementation Case Studies.
Life is infinitely more complex than can be made machine-readable: more dynamic, creative, and unpredictable than machinic operations can account for. This shortfall leads to the devaluation, discrimination, and erasure of outlying social groups, with dangerous effects.
Rather than reify the promise of technological solutions, we believe in people power. Our focus on AI implementation examines how we might make interventions for the benefit of all.
Blending ethnographic, social-science, and user-research methods, this workshop will give participants an actionable framework for supporting their communities in producing qualitative case studies on how AI is interpreted and negotiated by real people in real situations. The resulting outputs can potentially be shared with developers of algorithmic and AI systems, as well as key constituents in policy and governance.
In support of the Mozilla Foundation's goal of developing Trustworthy AI, with its key considerations of privacy, transparency, and well-being, this workshop recognizes that local cultures, contexts, and practices of AI implementation are far too often overlooked, especially in non-Western spaces and the Global South. This workshop pushes for recognition of the full complexity and integrity of social life in its many forms, as well as the preservation of local cultures and practices, believing AI can and should be a tool to empower and uplift communities rather than a prescriptive logic that adheres to Western, hegemonic norms. The workshop can be recorded for wide distribution.
How will you deal with varying numbers of participants in your session?: Our workshop is intended to convene participants whose work experiences span industries and who bring differing forms of expertise, skills, and orientations towards AI. We hope to co-develop and share a framework for building AI implementation case studies, a collaborative exercise that can engage and be appreciated by a group of 3 or 30. Fewer participants will allow for more in-depth conversation, while a larger group will broaden the goals and stakes. In all cases, we'll be exploring a reliable framework for systematically documenting AI implementation within participants' own sites of study or work.
We're hoping that many efforts and discussions will continue after MozFest. Share any ideas you already have for how to continue the work from your session.: We hope to extend the life of this session by connecting with as many different people as possible, across different communities and contexts, and sharing the template with them. Through these interactions, we aim to build reflexivity and flexibility into the case-study framework.
Watkins studies the integration of AI into sociotechnical systems. She is an affiliate at Data & Society, and in 2021 she will be a Postdoctoral Fellow at Princeton.
Kiran is a PhD student in sociology at Columbia University and received her MA in Media, Culture, and Communication from NYU. Before academia, she enjoyed a career in brand strategy.