JuliaCon 2026

Spying into Julia’s Runtime
2026-08-12, Room 1

Julia aggressively transforms your code during compilation and execution, which can make it difficult to see what actually runs. This can introduce subtle performance and memory costs that are not directly visible in existing tools. In this talk, we present a compiler and runtime instrumentation approach that provides a runtime-level view of program execution, links runtime behavior back to source code, and shows how hidden costs can be uncovered and performance assumptions validated.


Julia lets you write high-level, generic code while still achieving excellent performance. To make this possible, the compiler and runtime apply aggressive transformations during compilation and execution. As a result, the code that runs differs from the source code. When this behavior is not fully understood, it can lead to subtle performance issues and unexpected memory use.

Julia already provides useful performance tools. However, these tools often show only part of the picture. Understanding how compilation decisions, dispatch, and allocation behavior interact can require switching between tools and manually connecting the dots. It is not always clear where a runtime event came from in the source code or what caused it. This makes it difficult to answer questions like: where was a conversion inserted, why did a call dispatch dynamically, or why does a function that looks allocation-free allocate at runtime?
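As an illustration of this gap (a hypothetical `Holder` type, not code from the talk), built-in tools such as `@allocated` can report that an allocation happened, but not which conversion or dispatch caused it:

```julia
# Hypothetical example: a wrapper with an abstractly typed field.
struct Holder
    x::Number          # abstract field type: stored values are boxed
end

# The multiplication dispatches dynamically on the runtime type of h.x,
# and the boxed result may allocate.
double(h::Holder) = 2 * h.x

h = Holder(10_000)
double(h)                      # warm up so compilation cost is excluded
println(@allocated double(h))  # reports bytes, but not where they came from
```

`@allocated` answers "how much", while the question that matters for fixing the code is "which call site, and why".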

When examining real-world Julia projects, we repeatedly observed hidden costs, especially allocations caused by implicit conversions, even in code that looks type stable. The compiler may insert conversions to preserve correctness, but their runtime impact is not always visible through existing tools.
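A minimal sketch of such a hidden conversion (the `Config` type here is invented for illustration): constructing a struct from a value of a different element type makes the compiler insert a `convert`, which silently allocates a fresh array.

```julia
# Hypothetical example: the constructor implicitly converts
# Vector{Int} to Vector{Float64}, allocating a new array.
struct Config
    weights::Vector{Float64}
end

v = [1, 2, 3]          # Vector{Int}
c = Config(v)          # compiler inserts convert(Vector{Float64}, v)

println(c.weights === v)   # false: a new array was allocated behind the scenes
```

The source code looks like it "just stores" `v`, yet every such construction pays for a copy.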

To address this, we introduce a compiler and runtime instrumentation layer that provides a view into what happens during compilation and execution. It records function calls, type conversions, dynamic dispatch, allocations, deallocations, and garbage collection activity across the application. It also links runtime behavior back to call sites and method specializations, including context information such as concrete types involved. We briefly explain the key design decisions behind this approach and how we instrument compilation and execution without changing program behavior.

This session focuses mainly on practical examples from real Julia code. We look at:

  • Implicit conversions that allocate memory

  • The runtime impact of macros such as @assert, @inline, and @noinline

  • Dynamic dispatch in performance-critical or seemingly type-stable code

  • Allocation and garbage collection patterns that are hard to trace back to their source
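As a concrete instance of the dispatch bullet (a generic illustration, not code from the talk), a small accumulator can look type stable yet force inference to track a `Union`:

```julia
# The accumulator starts as Int but becomes Float64 on the first
# iteration, so `s` is inferred as Union{Int, Float64}.
function total(xs)
    s = 0
    for x in xs
        s += x
    end
    return s
end

total([1.0, 2.0, 3.0])                    # 6.0
# @code_warntype total([1.0, 2.0, 3.0])   # highlights s::Union{Float64, Int64}

# One fix: seed the accumulator with the input's element type.
stable_total(xs) = sum(xs; init = zero(eltype(xs)))
```

`@code_warntype` flags the instability for a single call, but connecting it to the allocations and dispatches it causes across a whole run is exactly where per-tool views fall short.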

For each case, we show how connecting runtime events with context helps you confirm your performance assumptions and better understand how your Julia code actually runs.

Finally, we discuss how this instrumentation work is being prepared for upstreaming into Julia itself. The goal is to make runtime observability part of the language infrastructure, rather than something tied to a single tool.

I've been writing code since I was 11. Nearly two decades later, I'm still baffled by the fact that most developers spend only 32% of their time actually coding.
My professors used to say this was just the way things were. But instead of accepting it, I decided to push back. One step at a time.
Why? Because we can.
As developers, we build the tools that move entire industries forward. So why not turn that same energy inward and improve our own?

What I Love:
• Diving deep into complex codebases
• Sharing developer knowledge
• Building powerful tools (like CodeGlass)
• Exploring superconductors and the Meissner effect (hoverboards when?)
• I Like Trains
• Lizard Doggo

Software Architect at ASML working on Julia algorithms in a near-real-time system.

Born in Mexico City. Earned a bachelor's degree in Chemical Engineering at UNAM, an M.Sc. in Materials Science and Engineering at MIT, and a PhD in Applied Physics at TU Eindhoven.
Worked at Philips Research for one year.
Has been working at ASML on algorithms for 13 years.
