07-11, 15:50–16:00 (Europe/Amsterdam), For Loop (3.2)
ExplainableAI.jl, a comprehensive set of XAI tools for Julia, has undergone significant development since our initial presentation at JuliaCon 2022 and has been expanded into the Julia-XAI ecosystem. This lightning talk will highlight the latest developments, including new methods, the new XAIBase.jl core interface, and new utilities for visualizing explanations of vision and language models.
In machine learning, understanding the inner workings of black-box models is critical to ensuring their safety and trustworthiness. The field of Explainable AI (XAI) aims to provide practitioners with methods to gain insight into the decision-making processes of their models.
The Julia-XAI ecosystem provides such methods, with a focus on post-hoc, local input-space explanations. Simply put, these are methods that try to answer the question: "What part of the input is responsible for the model's output?"
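As a minimal sketch of what such a local explanation looks like in practice, a gradient-based analyzer from ExplainableAI.jl can be applied to a Flux.jl model. The toy model and random input below are hypothetical and only serve to illustrate the workflow:

```julia
using ExplainableAI
using Flux

# Hypothetical toy classifier and input, for illustration only
model = Chain(Dense(100 => 50, relu), Dense(50 => 10))
input = rand(Float32, 100, 1)       # feature × batch layout expected by Flux

analyzer = Gradient(model)          # simple gradient (sensitivity) attribution
expl = analyze(input, analyzer)     # returns an Explanation
expl.val                            # attribution scores with the same shape as the input
```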
Since our first presentation of ExplainableAI.jl at JuliaCon 2022, the package has been expanded into the Julia-XAI ecosystem. This lightning talk will cover the latest additions and present new features:
- XAIBase.jl: Core package that defines the Julia-XAI interface, allowing developers to quickly implement or prototype new methods without writing boilerplate code.
- VisionHeatmaps.jl and TextHeatmaps.jl: Lightweight dependencies for visualizing explanations of vision and language models.
- RelevancePropagation.jl: A new package for Layer-wise Relevance Propagation (LRP) and Concept Relevance Propagation (CRP) for use with Flux.jl models, supporting ResNet and Transformer architectures (see the sketch after this list).
- New XAI methods in ExplainableAI.jl.
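To illustrate how these packages compose, the following sketch applies LRP from RelevancePropagation.jl to a small Flux.jl convolutional model and visualizes the result with VisionHeatmaps.jl. The layer sizes and random input are made up for the example and are not taken from the talk:

```julia
using RelevancePropagation
using VisionHeatmaps
using Flux

# Hypothetical toy CNN; in practice this would be a trained vision model
model = Chain(
    Conv((3, 3), 3 => 8, relu),
    MaxPool((2, 2)),
    Flux.flatten,
    Dense(1800 => 10),
)
input = rand(Float32, 32, 32, 3, 1)   # WHCN image batch

analyzer = LRP(model)                 # Layer-wise Relevance Propagation
expl = analyze(input, analyzer)       # Explanation with relevance values in expl.val
heatmap(expl)                         # VisionHeatmaps.jl renders the relevance as an image
```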
PhD student in the Machine Learning Group at TU Berlin.
Interested in automatic differentiation, explainability, and dynamical systems.
- Personal website: adrianhill.de
- GitHub profile: @adrhill