2025-12-10 – Deborah Sampson
You're already using AI coding assistants for more than autocomplete, but are you using them effectively? This talk presents battle-tested patterns for productive collaboration with AI coding agents on real Python projects.
You'll learn a structured four-step approach: plan your changes, write tests first, let the agent build, then document what you created—iterating through this cycle multiple times. We'll explore why fast test harnesses are critical for agent productivity, how to pipe shell tools and logging output back to your agent for better context, and how custom slash-commands can automate repetitive tasks like code cleanup and style enforcement.
This session is for intermediate Python programmers who are already working with AI coding agents and want proven patterns for getting more value from the collaboration.
Takeaway: A practical framework and concrete techniques for collaborating effectively with AI coding assistants on real projects.
Who should attend
This talk targets intermediate Python developers, data scientists, and ML engineers who are already using (or curious about) AI coding assistants like GitHub Copilot, Cursor, Aider, or Claude. You should be comfortable with Python, testing frameworks, and command-line tools.
What you'll learn
The core framework follows a four-step cycle that you'll iterate through multiple times; a sketch of the Test step follows the list:
Plan: Articulate what you want to build clearly before involving the agent
Test: Write tests that define success before writing implementation
Build: Let the agent generate code, guided by your plan and tests
Document: Write documentation that explains what you built—often revealing gaps that trigger another iteration
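To make the Test step concrete, here is a minimal sketch of what "tests before implementation" looks like in practice. The module `myproject.text` and the function `slugify` are hypothetical names invented for this example; the point is that the tests exist, and fail, before the agent writes a line of implementation.

```python
# test_slugify.py -- written before any implementation exists.
# `myproject.text` and `slugify` are hypothetical, for illustration only.
import pytest

from myproject.text import slugify  # ImportError until the agent builds it


def test_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"


def test_strips_punctuation():
    assert slugify("Plan, Test & Build!") == "plan-test-build"


def test_rejects_empty_input():
    with pytest.raises(ValueError):
        slugify("")
```

Handing the agent these tests along with your plan gives it an executable definition of done: the Build step ends when `pytest` passes.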
Why iteration matters
The first pass rarely produces production-ready code. This talk emphasizes that iteration is not failure—it's the process. You'll learn to recognize when to iterate and how to provide better context in each cycle.
Technical patterns covered
Fast test harnesses: Why agent-assisted development demands rapid feedback loops, and how to structure your tests for quick iteration cycles.
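As one sketch of what a fast harness can look like (assuming pytest; the `slow` marker and `--runslow` flag below are project conventions modeled on the example in the pytest documentation, not built-ins), you can keep long-running tests out of the agent's inner loop by default:

```python
# conftest.py -- a minimal sketch: keep the agent's feedback loop fast by
# skipping tests marked "slow" unless explicitly requested with --runslow.
import pytest


def pytest_addoption(parser):
    parser.addoption("--runslow", action="store_true", default=False,
                     help="also run tests marked slow")


def pytest_configure(config):
    config.addinivalue_line("markers", "slow: long-running test")


def pytest_collection_modifyitems(config, items):
    if config.getoption("--runslow"):
        return  # run everything when explicitly asked
    skip_slow = pytest.mark.skip(reason="needs --runslow")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip_slow)
```

Combined with flags like `pytest -x --ff` (stop at the first failure, run previous failures first), each agent iteration gets feedback in seconds rather than minutes.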
Shell tools and logging: Techniques for capturing compiler errors, linter output, test failures, and application logs, then feeding them back to the agent for context-aware fixes.
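A minimal sketch of this pattern in Python: wrap the tool invocation in a helper that merges stdout and stderr and keeps only the tail, which is where tracebacks and failure summaries usually live. The helper name and the 4,000-character cap are arbitrary choices for this illustration.

```python
# capture.py -- run a command, merge stdout/stderr, and trim the output
# to a paste-friendly size before feeding it back to the agent.
import subprocess


def capture(cmd: list[str], max_chars: int = 4000) -> str:
    """Run cmd and return its combined output, tail-truncated."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    output = result.stdout + result.stderr
    # Keep the tail: tracebacks and failure summaries end the output.
    return output[-max_chars:]


if __name__ == "__main__":
    # Example: feed a failing test run back to the agent as context.
    print(capture(["pytest", "-x", "-q"]))
```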
Slash-commands for automation: How to create custom commands that handle repetitive tasks—deduplicating code, enforcing project conventions (like using pixi instead of pip), running formatters and linters, and maintaining consistency across files.
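For example, in a Claude Code-style setup, a project slash-command is just a Markdown prompt file under `.claude/commands/`; other agents have analogous mechanisms, so treat the sketch below as illustrative rather than portable. The file name becomes the command name, and `$ARGUMENTS` is replaced with whatever you type after the command:

```markdown
<!-- .claude/commands/cleanup.md -- invoked as /cleanup <files> -->
Clean up the following files: $ARGUMENTS

1. Run the project's formatter and linter, and fix everything they report.
2. Remove obviously duplicated code.
3. Replace any `pip install` usage or instructions with the `pixi`
   equivalent, per project convention.

Finish with a short bullet list of what you changed.
```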
Talk outline (approximate timing)
0-5 min: Introduction and motivation—why most people struggle with AI coding assistants
5-12 min: The plan-test-build-document framework explained with a concrete example
12-20 min: Deep dive into iteration—recognizing when to iterate, improving context between cycles
20-27 min: Fast test harnesses and feedback loops—practical setup and examples
27-35 min: Shell integration and slash-commands—demonstrations of real workflows
35-40 min: Q&A and discussion
As Senior Principal Data Scientist at Moderna, Eric leads the Data Science and Artificial Intelligence (Research) team to accelerate science to the speed of thought. Prior to Moderna, he was at the Novartis Institutes for Biomedical Research, conducting biomedical data science research with a focus on using Bayesian statistical methods in the service of discovering medicines for patients. Before Novartis, he was an Insight Health Data Fellow in the summer of 2017 and defended his doctoral thesis in the Department of Biological Engineering at MIT in the spring of 2017.
Eric is also an open-source software developer and has led the development of pyjanitor, a clean API for cleaning data in Python, and nxviz, a visualization package for NetworkX. He is also on the core developer team of NetworkX and PyMC. In addition, he gives back to the community through code contributions, blogging, teaching, and writing.
His personal life motto is found in the Gospel of Luke 12:48.