PyData Boston 2025

Three agents, three frameworks, one talk
2025-12-10, Deborah Sampson

The popularity of agent-based workflows has led to a proliferation of frameworks, each representing a different design philosophy. At the core of each framework is a similar set of components: memory, tools and “planning”. Understanding these components makes it easier to experiment with different frameworks. In this talk, we will examine these components and then see how they are implemented in three frameworks: LangGraph, Pydantic.AI and LlamaBot. Our use case will be agent-based search, where our agent responds to a user query based on a knowledge base. We’ll see how each framework handles this simple workflow and discuss the advantages and disadvantages of each approach.


Anatomy of an agent
In this section of the talk, we will break down the key components of an agent, which can loosely be described as memory, tools and planning. Memory can be as simple as a running record of the conversation so far or as complex as its own retrieval workflow. Tools define the actions the agent can take. Planning enables the model to decide which steps to take and to process the results of its actions. These components will inform our exploration of the three frameworks.
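
To make these pieces concrete, here is a framework-agnostic sketch in plain Python. Every name in it is a hypothetical stand-in: in a real agent, plan_next_step would be an LLM call that chooses among the available tools, and search_kb would query an actual knowledge base.

    from dataclasses import dataclass, field

    @dataclass
    class ToyAgent:
        # Memory: here, just a running transcript of the exchange.
        memory: list = field(default_factory=list)

        # Tool: a named action the agent is allowed to take (stubbed here).
        def search_kb(self, query):
            return f"results for {query!r}"

        # Planning: a real agent asks an LLM to pick the next step;
        # this stub always picks the search tool.
        def plan_next_step(self, query):
            return "search_kb"

        def run(self, query):
            self.memory.append(("user", query))
            tool_name = self.plan_next_step(query)
            result = getattr(self, tool_name)(query)
            self.memory.append(("tool", result))
            return result

    print(ToyAgent().run("What is an agent?"))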

LangGraph, Pydantic.AI and LlamaBot
For each framework, we will show how the components described above are implemented, and build an agent that can retrieve results from a knowledge base in response to a user query. Each section will be a code walkthrough of this minimal example and a brief discussion of the framework's design. A rough sketch of one version appears below.
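
As a preview of the walkthroughs, here is a rough sketch of what the Pydantic.AI version might look like; the model name, the in-memory knowledge base and the tool body are placeholder assumptions, not the talk's actual code.

    from pydantic_ai import Agent

    # Placeholder knowledge base; a real agent would query an actual store.
    KNOWLEDGE_BASE = {"agents": "An agent pairs an LLM with tools and memory."}

    agent = Agent(
        "openai:gpt-4o",  # assumed model; any supported provider works
        system_prompt="Answer the user with the search_knowledge_base tool.",
    )

    @agent.tool_plain
    def search_knowledge_base(query: str) -> str:
        """Return knowledge-base entries whose key appears in the query."""
        hits = [text for key, text in KNOWLEDGE_BASE.items() if key in query.lower()]
        return "\n".join(hits) or "No matching entries."

    result = agent.run_sync("What are agents?")
    print(result.output)

Since the frameworks share the same core components, the LangGraph and LlamaBot versions follow the same shape: register a retrieval tool, hand it to a model, and let the framework run the plan-act-observe loop.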

Conclusion
To finish the session, we will compare the three frameworks and discuss which use cases each is best suited for.


Prior Knowledge Expected: Previous knowledge expected

Ben is the Lead Data Scientist at GoGuardian, working on text-based ML pipelines for student safety. Previously, he led data science teams in academia (Northeastern University, MIT) and industry (ThriveHive). He obtained his Master of Public Health (MPH) from Johns Hopkins and his PhD in Policy Analysis from the Pardee RAND Graduate School. Since 2014, he has worked in data science across government, academia and industry, with a major focus on Natural Language Processing (NLP) technology and applications. Throughout his career, he has pursued opportunities to contribute to the larger data science community: he has presented his work at conferences, published articles, taught courses in data science and NLP, and is a co-organizer of the Boston chapter of PyData. He also contributes to volunteer projects applying data science tools for public good.