Mozilla Festival 2021 (March 8th – 19th, 2021)

The Zen of Machine Learning (ML)

The Zen of ML is a set of design principles that helps ML educators and (self-)learners prioritise responsible machine learning practices. Inspired by the Zen of Python, the Zen of ML can be viewed as a culture code that promotes the responsible development of ML products and projects. It is not binding or enforceable, but is intended to shape industry norms and offer a practical guide to building trustworthy AI. The principles consider the end-to-end machine learning development cycle, from data collection to model evaluation and continuous deployment.

In this workshop, we will unpack the pedagogical challenges that ML (self-)learners experience and explore how design principles can be used to encourage responsible ML practice. Participants should have some experience in learning or teaching ML, crafting design principles, following programming best practices, or developing and deploying ML systems.


What is the goal and/or outcome of your session?:

The goal of the workshop is to review, refine, and expand The Zen of ML, a set of design principles for the responsible development of machine learning (ML) code and the responsible use of ML tools. It is targeted at entry-level ML practitioners who have an abundance of enthusiasm for developing and deploying ML projects but lack the technical foundations to do so responsibly. The Zen of ML can be viewed as a culture code that promotes the responsible development and deployment of ML. It is not binding or enforceable, but is intended to shape industry norms and offer a practical guide to the properties that responsible ML design for trustworthy AI should have.

We're hoping that many efforts and discussions will continue after Mozfest. Share any ideas you already have for how to continue the work from your session.:

The end goal is to publish The Zen of ML as a set of community-accepted design principles. After the session, we will continue to collect suggestions from the public in the form of aphorisms, description text, and references. We plan to collect this input via our website and periodically incorporate the feedback into the published result. We will also invite practitioners to review and give feedback on the design principles.

Participants who want to get involved more actively can join us to shape the final Zen of ML principles through the Trustworthy AI working group.

How will you deal with varying numbers of participants in your session?:

The session is designed to break into small-group discussions of 15-20 draft statements provided by the facilitators. If there is a small number of participants, we will combine everyone into a single discussion group covering 1-2 statements. With a large number of participants, we can increase the number of small groups, with the same statements being discussed in multiple groups.

As an ethicist, I build frameworks that empower responsible and trustworthy approaches to ethical uncertainty. I hold a Ph.D. in philosophy.

Gaurav is a YLT Fellow in India working on Responsible Technology. He holds a Master of Public Policy from the University of Oxford and an engineering degree from IIT Kharagpur.

I am an engineer and designer who serendipitously evolved into a computer scientist. I am now doing my PhD on deeply personal, completely private voice assistants at TU Delft.

Bernease is a data scientist at the University of Washington and WhyLabs. At WhyLabs, she builds data monitoring tools using approximate statistics. At UW, she works on human-centered evaluation metrics with synthetic data.