Complex machine learning models make better predictions, but at the cost of turning into unexplainable black boxes. In this talk, we'll look into methods that allow us to explain why a model makes a specific prediction.
We all love linear regression for its interpretability: increase the living
area by one square meter and the predicted rent goes up by 8 euros. A human
can easily understand why this model made a certain prediction.
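As a concrete illustration of that interpretability, here is a minimal sketch using scikit-learn; the rent numbers are invented for this example (chosen so the slope comes out at about 8 euros per square meter) and are not data from the talk:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data: living area in square meters vs. monthly rent in euros.
# The values are made up so that the fitted slope is 8 euros per square meter.
sqm = np.array([[30], [45], [60], [80], [100]])
rent = np.array([540, 660, 780, 940, 1100])

model = LinearRegression().fit(sqm, rent)

# The coefficient is directly interpretable: each additional square meter
# adds about this many euros to the predicted rent.
print(f"rent ~ {model.intercept_:.0f} + {model.coef_[0]:.0f} * sqm")
```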
Complex machine learning models like tree ensembles or neural networks
usually make better predictions, but this comes at a price: it's hard to
understand how these models arrive at their predictions.
In this talk, we'll look at a few common problems of black-box models, e.g. unwanted discrimination or unexplainable false predictions ("bugs"). Then we'll go over three methods to pry open these models and gain insight into how and why they make their predictions.
I'll conclude with a few predictions about the future of (interpretable) machine learning.
Specifically, the topics covered are:
- What makes a model interpretable?
  - Linear models, trees
- How to understand your model
- Model-agnostic methods for interpretability (see the code sketch after this list)
  - Permutation Feature Importance
  - Partial dependence plots (PDPs)
  - Shapley Values / SHAP
- The future of (interpretable) machine learning
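To give a flavour of the hands-on part, here is a minimal sketch of the three model-agnostic methods using scikit-learn and the shap package; the California housing data, the random forest, and the MedInc feature are placeholder choices for illustration, not necessarily the examples shown in the talk:

```python
# Rough preview of the three model-agnostic methods applied to a black-box model.
# The dataset and model below are arbitrary placeholders.
import matplotlib.pyplot as plt
import shap
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import PartialDependenceDisplay, permutation_importance
from sklearn.model_selection import train_test_split

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# 1) Permutation feature importance: how much does the test score drop when a
#    single feature's values are shuffled?
result = permutation_importance(model, X_test, y_test, n_repeats=5, random_state=0)
for name, imp in sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name:>12}: {imp:.3f}")

# 2) Partial dependence plot: the average prediction as one feature varies.
PartialDependenceDisplay.from_estimator(model, X_test, features=["MedInc"])
plt.show()

# 3) SHAP values: additive per-feature contributions to individual predictions.
explainer = shap.Explainer(model)           # dispatches to a tree explainer here
shap_values = explainer(X_test.iloc[:100])  # explain a subset to keep it fast
shap.plots.beeswarm(shap_values)
```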
Domain Expertise: some
Python Skill Level: basic
Abstract as a tweet: In this talk, we'll find out how to interpret the predictions of otherwise black-box models.
Domains: Data Science, Machine Learning
Public link to supporting material: