Today, internet platforms use algorithmic decision-making to shape and curate the online ecosystem through content moderation, ranking and recommendation systems, and more. These pervasive but invisible systems are powerful: they influence how we see and engage with the world. Yet platforms provide little transparency or accountability around how they deploy these tools and what impact they have on user experiences and user speech. In addition, when platforms do engage with stakeholders on issues of algorithmic accountability, their outreach is too narrow and skews the conversation toward the needs of stakeholders in the Global North. This discussion will bring together audience members from around the world to discuss what algorithmic transparency and accountability mean to stakeholders in different regions. It will also explore how we can collaboratively press companies to demonstrate accountability that is valuable for all users, not just those based in the Global North.
I hope this session will serve as the starting point for a series of conversations exploring what meaningful algorithmic transparency and accountability look like for a diverse set of users. I especially hope it can catalyze collaborative work such as sign-on letters and joint research.
How will you deal with varying numbers of participants in your session?: OTI has a long history of hosting convenings of many sizes and is therefore comfortable working with both small and large audiences. Since we intend for this session to be the first in a series, the size of the audience matters less; we will spend some time identifying who can participate in future conversations, so the event will be valuable regardless of its size.
Spandana Singh is a policy analyst at New America's Open Technology Institute, where she leads projects related to algorithmic accountability, content moderation, transparency reporting, intermediary liability, and disinformation.