PRESC is a tool that helps data scientists, developers, academics, and activists evaluate the performance of machine learning classification models, specifically in areas that tend to be underexplored, such as generalizability and bias. Our current focus on misclassifications, robustness, and stability will facilitate the inclusion of bias and fairness analyses in performance reports, so that these can be taken into account when crafting or choosing between models.
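As an illustration of the kind of misclassification analysis this focus implies, here is a minimal sketch using scikit-learn on a standard dataset; this is an assumption for demonstration purposes, not PRESC's own API:

```python
# Hedged sketch: per-class misclassification breakdown with scikit-learn.
# This does not use the PRESC API; it only illustrates the kind of
# analysis (looking past overall accuracy) that PRESC focuses on.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
y_pred = model.predict(X_test)

# The confusion matrix separates the two kinds of error (false positives
# and false negatives), which a single accuracy number can hide.
cm = confusion_matrix(y_test, y_pred)
misclassified = cm[0, 1] + cm[1, 0]
print(f"misclassified {misclassified} of {len(y_test)} test points")
```

Reporting the confusion matrix rather than a single score is the simplest step toward the fairness-aware reports described above, since error rates can differ sharply between classes or subgroups.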
This is a project sprint from the "AI IRL Hackathon - Building Trustworthy AI". Registration and more information here: http://mzl.la/taihackathon
PRESC is an open source project, so contributions are always encouraged both before and after Mozfest.
How will you deal with varying numbers of participants in your session?: A larger group will allow for richer discussion and can be split into smaller work groups, each tackling a particular problem, while a smaller group will allow for a more personalized experience and discussion.
Researcher. Developer. Data scientist. Human rights & freedoms activist, including those concerned with the expression of such rights in a digital and technological context.