Sebastian Raschka
Sebastian is an LLM Research Engineer with over a decade of experience in artificial intelligence. His work bridges academia and industry, including roles as a senior engineer at Lightning AI and as a statistics professor at the University of Wisconsin–Madison.
He is also the author of Build a Large Language Model (From Scratch).
His expertise lies in LLM research and the development of high-performance AI systems, with a strong focus on practical, code-driven implementations.
Sessions
Python has been at the center of my work in machine learning and AI for more than a decade. It is the language in which I build from scratch, experiment with ideas, and create systems that help me understand how large language models really work.
In this keynote, we will explore how Python enables this entire journey, from defining model architectures and training loops to scaling data and computation across devices. I will also reflect on how Python continues to support both the large models of today and the evolving systems of tomorrow, even as new backends take over the heavy lifting.
Chinese and American open-source LLMs are competing head-to-head, from DeepSeek and Qwen to Llama and Mistral. The model landscape is broader than ever, yet in Germany the debate still revolves around waiting for the next breakthrough. Alexander Hendorf and Sebastian Raschka discuss what these models can and cannot do today, which biases to watch for, and which deployment strategies actually work in practice. The session reserves substantial time for questions and discussion with the audience.