Mozilla Festival 2021 (March 8th – 19th, 2021)

How to make unethical AI

There are a lot of high-level and often abstract AI guidelines being created, which talk about "human flourishing" and how "ethics should be considered from the start". But what does this mean in the real world? In this session we take a 'bottom-up' approach and look at the real-world ethical conundrums that creators run into.

To start us off, we'll dive into the development of the BMI prediction algorithm built for "How Normal Am I?", an interactive documentary by artist Tijmen Schep about the limitations of face recognition technology.

He is joined by Tom Simonite, a senior writer with WIRED in San Francisco, who has covered the tech industry's soaring AI ambitions and the depths of its ethical dilemmas.

Session participants are encouraged to share conundrums they have wrestled with. Try out HowNormalAmI.eu before you join this session.


We're hoping that many efforts and discussions will continue after Mozfest. Share any ideas you already have for how to continue the work from your session:

This session will be used to directly inform the development of SHERPA's explorations in this area. I will be inviting my academic compadres to join in too, e.g. Kevin Macnish, Bernd Stahl and others who would like to take part. As the artist in the consortium, this is one way for me to play a playful role in how this research and its outcomes may develop.

I am also developing a new art piece for SHERPA: a humorous online AI impact assessment tool. Of course it will be fun, but it will also carry a deeper message about being both more holistic and more pragmatic with these types of tools.

How will you deal with varying numbers of participants in your session?:

We will start off by talking about some of the practical issues I myself ran into, which should hopefully be interesting to a group of any size. Then, as we move to the conversation, the more people there are, the more anecdotes may surface. Tom will help structure this discourse, as well as add his own insights and framing where useful, for example to discern patterns or point out potential blind spots in current approaches.

We do want to keep it one communal session, so there is a practical limit to how many people can join while still feeling they can contribute. But I hope just listening in will also be a lot of fun.

What is the goal and/or outcome of your session?:

The goal is to try to bridge the gap between practice and idealism. For example, there are currently countless documents being created by AI think tanks and EU research groups (including mine) that attempt to address the many human rights issues surrounding AI. My hard drive is currently full of them. But I find that because these academic researchers often aren't developers themselves, these lists of principles might not always be optimally informed by the 'truth on the ground'.

For example, if a policy document says that "AI should promote human flourishing", then a party like Google can easily say "well, we feel we do that". Because you're not being precise, you leave a lot of space for interpretation.

I would like to 'lay my ear to the ground' with creators who might recognise these issues and could contribute to the development of more pragmatic and 'strict' AI impact assessments.

Tijmen Schep is a Dutch artist, technology critic and privacy designer. He created hownormalami.eu and various other works that explore issues around ethics, data and AI.