MozFest 2022

Insights from the Accountability Case Labs project
Language: English (mozilla)

Accountability Case Labs is a MozFest Civil Society Actors for Trustworthy AI Working Group project. We are prototyping an open, cross-disciplinary community dedicated to understanding what makes building an ecosystem of accountability for AI so disorienting. During our prototyping phase (Nov 2021 to March 2022), we built an MVP of a case-study-based workshop on bias bounties (January 2022) and conducted qualitative research on the challenges faced by different actors and professionals in the AI accountability space.

The session will start with a short presentation about our open community and our research. From there, we will engage in collaborative activities (collaborative writing, breakout rooms, and discussions) to build shared insights on questions like: What makes AI accountability so disorienting? What is accountability for? What tools are available for greater accountability? Who does accountability affect? How can we better support the many different communities of social and technical actors, researchers, and builders involved in tackling algorithmic accountability?

Our open community is just getting started. This session will also be an opportunity for participants to shape where we go next, and to get involved!


What is the goal and/or outcome of your session?:

Accountability Case Labs, as a project, targets one of the root causes of why building an ecosystem of accountability around AI is so difficult: the dizzying range of expertise involved. The goal of this session is to engage with the lessons we have uncovered and to interrogate the solutions we propose. Participants interested in the topic of AI accountability will find a space for thinking through what makes it so hard. Participants interested in designing collaborative experiences that help tackle wicked problems will find a space for interrogating what solutions we need.

Why did you choose that space? How does your session fit the space's description?:

We think of AI accountability as one of the ecosystem-level problems at the intersection of power and ethics in AI.

How will you adapt if the number of participants in your session varies? What if 30 participants attend? What if only 3 do?:

We have experience running insight-driven collaborative workshops with anywhere from 3 to 55 participants. In our experience, the breaking point for the kind of collaborative activities we design is usually how large a group they can accommodate, not how small. We design activities with a maximum group size in mind and split the workshop audience into as many subgroups as needed to stay within that size for each activity.

What will happen after MozFest? We hope that many efforts and discussions will continue after MozFest. Share any ideas you have about how to continue the work of your session.:

This session is part of a collaborative working group project that also aims to generate new insights about the problem of AI accountability. One of our project milestones beyond MozFest is to share those insights with the world through documentation we are planning to build.

In which language would you like to run your session?:

English