PyCon APAC 2023


Improving debuggability of complex asyncio applications
2023/10/27, track 2

The key to debugging is observability and reproducibility. It is still challenging to see what's happening in complex real-world asyncio applications. To tackle this problem, I wrote a new, improved version of aiomonitor based on my fork, in collaboration with the aio-libs maintainers I met in person at PyCon US 2023. This talk will present the challenges of asyncio debugging, how I improved the ex…


When we compose multiple asyncio libraries and our own code together, it is hard to track down silently swallowed cancellations and resource-hogging floods of tasks triggered by the internals of third-party callbacks. Moreover, such misbehaviors are often observed only in production environments, where the app faces real workloads and I/O patterns, making them even harder to reproduce.
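A minimal sketch (not taken from the talk) of how a cancellation can be silently swallowed: an over-broad `except BaseException` inside a task also catches `asyncio.CancelledError`, so the task finishes "normally" and the caller never learns the cancellation was lost.

```python
import asyncio

async def worker():
    try:
        await asyncio.sleep(3600)
    except BaseException:
        # BUG: this also catches asyncio.CancelledError and swallows it,
        # so the task completes "successfully" instead of being cancelled.
        pass

async def main():
    task = asyncio.create_task(worker())
    await asyncio.sleep(0)   # let worker start and block
    task.cancel()            # request cancellation
    try:
        await task           # returns normally; no CancelledError propagates
    except asyncio.CancelledError:
        pass
    return task.cancelled()  # False: the cancellation was swallowed

was_cancelled = asyncio.run(main())
print("task.cancelled() =", was_cancelled)
```

The fix is to re-raise `CancelledError` (or catch only `Exception`), but in a deep stack of third-party callbacks, spotting where this happens is exactly the hard part.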

The aio-libs/aiomonitor project provides live access to a running asyncio process via a telnet-based REPL interface to inspect the list of tasks and their current stacks. I have added more features to help track the above issues in asyncio apps running in production: a task creation tracker and a task termination tracker. These trackers keep the stack traces whenever a new task is created or terminated, and provide a holistic view of chained stack traces when tasks are nested to arbitrary depths. aiomonitor also demonstrates a rich async TUI (terminal UI) based on prompt_toolkit and Click, with auto-completion of commands and arguments, greatly enhancing the original version's simple REPL. With aiomonitor, I could successfully debug several production bugs.

At PyCon US 2023, I presented my fork, aiomonitor-ng, and collaborated with Sviatoslav Sydorenko (@webknjaz) to backport it into the original project while revamping the CI/CD pipelines. This talk will also cover the story of the backporting process.

I hope this talk helps our fellow asyncio developers build more complex yet stable applications at scale.

Joongi is the creator of Backend.AI and the CTO of Lablup, where he oversees the development of MLOps pipelines and GPU-accelerated AI services. He earned his Ph.D. in Computer Science from KAIST by creating a GPU-accelerated packet processing framework with a world-leading speed of 80 Gbps. His major areas of interest include scalable and automated backend systems, as well as their analysis and design. He is also a big fan of open source, having contributed to projects such as Python, iPuTTY, Textcube, aiodocker, aiohttp, pyzmq, DPDK, and others.