Opening the black box: SHAP values.
18/10/2025, Track 03 - E04, A02
Language: English

State-of-the-art machine learning models can give unparalleled accuracy and still be useless. Why?

Explainability. If your model is a black box, you won’t be able to explain its predictions to business stakeholders, regardless of how accurate they are. And business stakeholders won’t use a model they don’t understand.

But what if you could show them what happens inside the black box?

With SHAP values, you can. SHAP values are model-agnostic, so you won't even need to change your machine learning pipelines to make them interpretable. All you need are a few lines of code and an understanding of what those lines do.
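To give a taste of how few those lines are, here is a minimal sketch using the shap library. The model, dataset, and parameter choices are illustrative assumptions, not prescriptions from the talk:

    # Minimal sketch: explaining a model's predictions with SHAP.
    # The model and dataset below are illustrative stand-ins.
    import shap
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor

    # Train any model; SHAP is model-agnostic, so the pipeline is unchanged.
    X, y = load_diabetes(return_X_y=True, as_frame=True)
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

    # shap.Explainer auto-selects a suitable algorithm for the model.
    explainer = shap.Explainer(model, X)
    shap_values = explainer(X)

    # Each prediction gets an additive explanation: per-feature contributions
    # that, together with a base value, sum to the model's output.
    shap.plots.waterfall(shap_values[0])

With the explanations in hand, plots such as shap.plots.beeswarm(shap_values) show global feature importance across the whole dataset.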

Because even the best explainability tool is useless if you don't understand it.

That’s why in this talk we’ll walk through both the math and the intuition behind SHAP values.

Where do they come from?
Why do they work?
How are they computed? (a sketch of the underlying formula follows this list)
How do you interpret them?
What are their limitations?
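As a preview of the math the talk covers, SHAP builds on the classical Shapley value from cooperative game theory (standard notation, not taken from the proposal itself): for a set of features N and a value function v(S), where v(S) is the model's expected output when only the features in S are known, the contribution of feature i is

    \phi_i \;=\; \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,\bigl(|N|-|S|-1\bigr)!}{|N|!} \,\Bigl[\, v\bigl(S \cup \{i\}\bigr) - v(S) \,\Bigr]

In words: feature i's SHAP value is its marginal contribution to the prediction, averaged over every possible subset of the other features.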

Imagine you could explain the predictions of even the most complex machine learning models to your business stakeholders.

That’s what understanding SHAP values will allow you to do.


Topic:

Machine Learning and Artificial Intelligence (ML, deep learning, AI ethics, generative models...)

Additional topics:

Data Science and Data Engineering (analytics, visualization, pipelines, data engineering, notebooks...)

Proposal level:

Intermediate (you need to understand the underlying basics to follow the details)

I studied Mathematics and Statistics at the Universidad Complutense de Madrid, and I work as a Data Scientist at Decide4AI.