MozFest 2022

Insights from the Accountability Case Labs project
Language: English

Accountability Case Labs is a MozFest Civil Society Actors for Trustworthy AI Working Group project. We are prototyping an open, cross-disciplinary community dedicated to understanding what makes building an ecosystem of accountability for AI so disorienting. During our prototyping phase (November 2021 to March 2022), we built our MVP, a case-study-based workshop on bias bounties (January 2022), and conducted qualitative research on the challenges faced by different actors and professionals in the AI accountability space.

The session will start with a short presentation about our open community and our research. From there, we will engage in collaborative activities (collaborative writing, breakout rooms, and discussions) to build shared insights on questions like: What makes AI accountability so disorienting? What is accountability for? What tools are available for greater accountability? Who does accountability affect? How can we better support the many different communities of social and technical actors, researchers, and builders involved in tackling algorithmic accountability?

Our open community is just getting started. This session will also be an opportunity for participants to shape where we go next, and to get involved!


What is the goal and/or outcome of your session?:

Accountability Case Labs, as a project, targets one of the root causes of why building an ecosystem of accountability around AI is so difficult: the dizzying range of expertise involved. The goal of this session is to engage with the lessons we have uncovered and to interrogate the solutions we propose. Participants interested in the topic of AI accountability will find a space for thinking through what makes it so hard. Participants interested in designing collaborative experiences that help tackle wicked problems will find a space for interrogating what solutions we need.

Why did you choose that space? How does your session align with the space description?:

We see AI accountability as one of the ecosystem-level problems at the intersection of power and ethics in AI.

How will you deal with varying numbers of participants in your session? What if 30 participants attend? What if there are 3?:

We have experience running insight-driven collaborative workshops with anywhere from 3 to 55 participants. In our experience, the limiting factor for the kinds of collaborative activities we design is usually how large a group they can accommodate, not how small. We design each activity with a maximum group size in mind and split the workshop audience into as many sub-groups as needed to stay within that size.

What happens after MozFest? We're hoping that many efforts and discussions will continue after MozFest. Share any ideas you already have for how to continue the work from your session.:

This session is part of a collaborative working group project that also aims to generate new insights about the problem of AI accountability. One of our current project milestones beyond MozFest is to share our insights with the world through documentation we are planning to build.

What language would you like to host your session in?:

English