We will discuss how to balance competing aims in the policy space -- namely: (1) how can you make AI modeling interpretable for completely non-technical people, (2) how can you build trust in an AI system that produces policy interventions among stakeholders with competing goals, and (3) what should you consider during deployment to ensure buy-in for the system you've created? In this session, we will work through simplified examples from the criminal justice space to arrive at best practices. I'll draw on my work at the UChicago Crime Lab, where I've spent the past few years helping to build a data-driven Early Intervention System (EIS) for police officers. The discussion will be in general terms and will reflect my own personal views.
By the end, I'd like us to arrive at some guiding principles for ethical engagement.
What is the goal and/or outcome of your session?: I'm interested in working through questions of how to balance competing goals in an AI system for the police so that it is helpful, rather than harmful, to human beings -- and so that it engages both of the groups it serves (the community and the police).
How will you deal with varying numbers of participants in your session?: Varying numbers could be exciting! If there are many participants, I'd offer tiered breakout rooms as an option, but I'm also excited to have a smaller, more intimate discussion.
Emma Nechamkin is a Data Scientist at the University of Chicago Crime Lab.