2025-12-08 – Thomas Paul
This tutorial tackles a fundamental challenge in modern AI development: creating a standardized, reusable way for AI agents to interact with the outside world. We will explore the Model Context Protocol (MCP), which is designed to connect AI agents with external systems that provide tools, data, and workflows.
This session provides a first-principles understanding of the protocol. By building an MCP server from scratch, attendees will learn the core mechanics of the protocol's data layer: lifecycle management, capability negotiation, and the implementation of server-side "primitives." The goal is to empower attendees to build their own MCP-compliant services, enabling their data and tools to be used by a growing ecosystem of AI applications.
Target Audience
- Python Developers: Engineers who want to make their applications, data, or APIs accessible to AI agents.
- AI Application Builders: Developers creating agents or AI-powered applications who need to connect to external systems. 🤖
Prerequisites
- An intermediate understanding of Python.
- No prior experience with LLMOps is required.
Tutorial Outline (90 Minutes)
Part 1: Concepts & Setup (15 mins)
- What is the Model Context Protocol? We'll explain the MCP Host, Client, and Server architecture.
- A breakdown of the core concepts: JSON-RPC, the `initialize` handshake for lifecycle management, and the key server primitives (tools, resources, prompts).
- Environment Setup: A quick tour of the provided GitHub repository, explaining the project structure, dependencies, and a simple MCP client we'll use for testing.
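To make the handshake concrete, the exchange can be sketched as a pair of JSON-RPC messages. This is a minimal illustration, not the tutorial's repository code; the protocol version string and capability fields below are assumed placeholders, so consult the MCP specification for the current values.

```python
import json

# A sketch of the client's initialize request. The version string and
# client name are illustrative placeholders, not normative values.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "<protocol-version>",  # placeholder
        "capabilities": {},                        # client capabilities
        "clientInfo": {"name": "demo-client", "version": "0.1.0"},
    },
}

# The server's response advertises which primitives it supports
# (here: tools), completing capability negotiation.
initialize_response = {
    "jsonrpc": "2.0",
    "id": 1,  # must echo the request id
    "result": {
        "protocolVersion": "<protocol-version>",  # placeholder
        "capabilities": {"tools": {}},             # this server exposes tools
        "serverInfo": {"name": "demo-server", "version": "0.1.0"},
    },
}

print(json.dumps(initialize_request, indent=2))
```

The key points are that the response carries the same `id` as the request, and that the `capabilities` objects are what the two sides negotiate over.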
Part 2: Hands-On: Lifecycle and Tool Discovery (30 mins)
- Implementing the `initialize` handshake and `tools/list`.
- We'll define a simple Python function and then build the MCP logic to expose it as a "tool," complete with a name, description, and a JSON Schema for its inputs.
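The tool-exposure step above can be sketched as follows. The function name `add_numbers` and the handler are hypothetical examples, not the tutorial repository's code, but they show the shape of a tool descriptor and a `tools/list` response.

```python
def add_numbers(a: float, b: float) -> float:
    """Add two numbers (an illustrative example tool)."""
    return a + b

# The descriptor pairs the function with a name, description, and a
# JSON Schema describing its inputs, as required for tool discovery.
ADD_TOOL = {
    "name": "add_numbers",
    "description": "Add two numbers and return the sum.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "a": {"type": "number"},
            "b": {"type": "number"},
        },
        "required": ["a", "b"],
    },
}

def handle_tools_list(request_id):
    """Build the JSON-RPC response for a tools/list request."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "result": {"tools": [ADD_TOOL]},
    }

response = handle_tools_list(2)
print(response["result"]["tools"][0]["name"])  # → add_numbers
```

An AI client can read the `inputSchema` to learn how to call the tool without any prior knowledge of the underlying Python function.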
Part 3: Hands-On: Tool Execution and Integration (35 mins)
- Implementing `tools/call`: we will build the endpoint that receives a tool name and arguments from the AI client, securely executes the corresponding Python function, and returns the result in the MCP-specified format.
- We'll run our completed server and use a provided command-line MCP client to simulate an AI Host. Attendees will see their client discover the custom tool via `tools/list` and then execute it via `tools/call`, seeing the live JSON-RPC exchange.
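A `tools/call` handler along the lines described above might look like the following sketch. The registry, tool name, and helper names are assumptions for illustration; a real server would also validate arguments against the tool's JSON Schema before executing anything.

```python
def add_numbers(a: float, b: float) -> float:
    """Illustrative example tool."""
    return a + b

# Map tool names to the Python functions that implement them.
TOOL_REGISTRY = {"add_numbers": add_numbers}

def handle_tools_call(request_id, params):
    """Dispatch a tools/call request to the registered Python function."""
    name = params["name"]
    arguments = params.get("arguments", {})
    func = TOOL_REGISTRY.get(name)
    if func is None:
        # Unknown tool: return a JSON-RPC "invalid params" error.
        return {
            "jsonrpc": "2.0",
            "id": request_id,
            "error": {"code": -32602, "message": f"Unknown tool: {name}"},
        }
    result = func(**arguments)
    # Wrap the result as a list of content blocks, per the MCP format.
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "result": {"content": [{"type": "text", "text": str(result)}]},
    }

resp = handle_tools_call(3, {"name": "add_numbers", "arguments": {"a": 2, "b": 3}})
print(resp["result"]["content"][0]["text"])  # → 5
```

Looking up the function in an explicit registry, rather than resolving arbitrary names, is one simple way to keep execution constrained to the tools the server intends to expose.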
Part 4: Recap & The Road Ahead (10 mins)
- Reviewing the key protocol messages (`initialize`, `tools/list`, `tools/call`) and emphasizing how they create a standard, interoperable bridge for AI.
- A brief discussion of next steps, including other primitives like resources, real-time notifications, different transport layers (e.g., `stdio`), and a final Q&A.
Chuxin Liu, PhD, is a Senior Quantitative Modeling Associate at JPMorgan Chase, focusing on model risk and on LLM applications in that space. She is also a WiDS NYC Ambassador and an organizer of multiple communities. She is passionate about how AI and automation are reshaping the workforce. Blending her research background with hands-on experience in modeling and community leadership, she speaks on building human-centered AI practices and empowering professionals to adapt in an evolving AI era.