PyCon DE & PyData 2025

Driving Trust and Addressing Ethical Challenges in Transportation through Explainable AI
2025-04-23 14:10-14:40 (Africa/Abidjan), Hassium

Machine Learning can transform transportation—improving safety, optimizing routes, and reducing delays—yet it also presents ethical concerns. In this talk, I will show how Explainable AI (XAI) can offer practical solutions to ethical dilemmas such as the lack of trust in AI solutions. Instead of focusing on the technical underpinnings, we will discuss how transparency can be enhanced in AI-supported transportation systems. Using a real-world example, I will demonstrate how XAI provides the groundwork for building ethical, trustworthy, and socially responsible AI solutions in public transportation systems.


AI systems in transportation make decisions that directly impact people's lives, such as route optimization, safety measures, and resource allocation. These decisions often rely on complex algorithms, which can be opaque to stakeholders, including operators, regulators, and passengers.

One possible solution: Explainable AI (XAI)

Explainable AI (XAI) refers to methods and tools that make AI systems more transparent by providing interpretable insights into their decision-making processes. By integrating XAI, stakeholders can understand, validate, and trust the outputs of AI systems.

KARL: A Case Study in XAI for Public Transportation

The KARL (KI in Arbeit und Lernen in der Region Karlsruhe) project is an exemplary initiative showcasing how XAI can address ethical challenges in AI-supported public transportation.

Technical Implementation

While the presentation will not delve deeply into technical specifics, it will touch upon key elements such as:
* The use of open-source libraries like SHAP (SHapley Additive exPlanations) to provide interpretability.
* Integration of XAI tools into the operational dashboard used by tram operators.
* Collaboration with domain experts to ensure the explanations are meaningful and actionable.
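To give a flavor of the interpretability approach behind tools like SHAP, here is a minimal, self-contained sketch that computes exact Shapley values by brute force for a toy tram-delay model. The model, feature names, and baseline are hypothetical illustrations, not part of the KARL project; in practice the SHAP library handles this efficiently for real models.

```python
from itertools import combinations
from math import factorial

# Hypothetical linear "tram delay" model over three illustrative features.
WEIGHTS = {"passenger_load": 2.0, "rain_mm": 1.5, "rush_hour": 4.0}
BASELINE = {"passenger_load": 0.0, "rain_mm": 0.0, "rush_hour": 0.0}

def predict(x):
    """Predicted delay in minutes for a feature dict x."""
    return sum(WEIGHTS[f] * x[f] for f in WEIGHTS)

def shapley_values(x, baseline=BASELINE):
    """Exact Shapley values: each feature's fair share of the prediction.

    For every feature f, average its marginal contribution over all
    subsets of the remaining features, with the classic Shapley weights.
    """
    features = list(x)
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                # Features outside the subset are set to their baseline value.
                with_f = {g: x[g] if (g in subset or g == f) else baseline[g]
                          for g in features}
                without_f = {g: x[g] if g in subset else baseline[g]
                             for g in features}
                total += weight * (predict(with_f) - predict(without_f))
        phi[f] = total
    return phi

instance = {"passenger_load": 1.2, "rain_mm": 3.0, "rush_hour": 1.0}
phi = shapley_values(instance)
```

A key property that makes this useful on an operator's dashboard: the per-feature attributions sum exactly to the model's prediction, so an operator can see precisely how much each factor (e.g. rush hour) contributed to a predicted delay.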

Takeaways for the Audience

At the end of this talk, attendees will:
1. Understand the ethical challenges posed by AI in transportation and how they can undermine trust.
2. Learn how XAI tools can address these challenges by enhancing transparency.
3. Gain insights into the practical implementation of XAI in a real-world setting through the KARL project.
4. Be inspired to incorporate XAI principles into their own AI projects to build ethical and socially responsible solutions.


Expected audience expertise (Domain): Intermediate

Expected audience expertise (Python): None

See also: Slides for my talk (5.6 MB)

Natalie co-founded Lavrio.solutions, a company specializing in AI implementation. Since then, she has helped numerous organizations integrate AI into their processes and optimize their workflows. She has also conducted AI training sessions for businesses and professionals, bridging the gap between technical innovation and real-world usability.