BSides Toronto 2025

Building a Zero-Trust MCP Server Gateway: Policy, Isolation, and Observability for AI Tooling
2025-10-04, ENG 103

The Model Context Protocol (MCP) unlocks powerful tool use for LLMs—but it also widens the blast radius: arbitrary tool calls, untrusted context, and exfil-prone plugins. This talk introduces a Zero-Trust MCP Server Gateway that sits between LLM agents and MCP tools to enforce policy, isolate risk, and add observability. We’ll cover identity for agents and tools, per-tool allow/deny lists, schema validation, and least-privilege scopes. We’ll map MCP server security controls to AI risks (prompt injection, sensitive information disclosure, insecure tool use). Attendees leave with a reference architecture for secure MCP server deployment.


MCP simplifies how LLMs call tools, databases, and retrieval sources, but without controls it can become an exfiltration superhighway. This session presents a practical gateway pattern that brokers every MCP interaction, applying Zero-Trust principles: authenticate every caller, authorize every action, and monitor every effect.
We’ll decompose the MCP flow (client/agent ↔ gateway ↔ MCP servers/resources) and detail enforcement points:
Identity: workload identities for agents and MCP servers, mutual TLS, short-lived tokens.
Policy: per-tool scopes, allow/deny lists, schema and parameter validation, approval workflows for high-risk actions.
Data security: retrieval-time authorization per resource, output PII/secret filtering and redaction, privacy-by-default logging.
Network/process isolation: egress deny-by-default with DNS/path allowlists, sandboxed execution, read-only filesystems, least-privilege GPU/CPU.
Resilience and integrity: signed MCP server images and attestations, SBOM/MBOM, admission policies to block unsigned or unvetted tools.
Telemetry and IR: structured events for tool calls, redactions, denials; anomaly detection; incident playbooks for prompt injection and data disclosure.
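The policy and telemetry enforcement points above can be sketched in a few lines. This is a minimal, hypothetical illustration — the tool names, policy table, and secret pattern are assumptions for the example, not part of any real MCP gateway — showing deny-by-default authorization, parameter schema validation, output redaction, and a structured event for every decision:

```python
import json
import re

# Hypothetical per-tool policy: deny by default, with a declared
# parameter schema for each allowed tool (assumed names for illustration).
POLICY = {
    "search_docs": {"allowed": True, "params": {"query": str, "limit": int}},
    "run_shell":   {"allowed": False, "params": {}},  # high-risk tool, denied
}

# Toy credential pattern (AWS-style access key or "sk-" API token).
SECRET_PATTERN = re.compile(r"(AKIA[0-9A-Z]{16}|sk-[A-Za-z0-9]{20,})")

def authorize_call(tool: str, params: dict):
    """Zero-Trust check: unknown or denied tools are rejected, and every
    parameter must match the tool's declared schema."""
    rule = POLICY.get(tool)
    if rule is None or not rule["allowed"]:
        return False, f"tool '{tool}' not permitted"
    for name, value in params.items():
        expected = rule["params"].get(name)
        if expected is None or not isinstance(value, expected):
            return False, f"parameter '{name}' fails schema validation"
    return True, "ok"

def redact(output: str) -> str:
    """Output filtering: mask anything that looks like a credential."""
    return SECRET_PATTERN.sub("[REDACTED]", output)

def audit_event(tool: str, allowed: bool, reason: str) -> str:
    """Structured telemetry: one JSON event per tool-call decision."""
    return json.dumps({"event": "tool_call", "tool": tool,
                       "allowed": allowed, "reason": reason})
```

A real gateway would back the policy table with a policy engine, tie decisions to workload identity, and ship the audit events to a SIEM; the shape of the checks — authenticate, authorize, redact, record — stays the same.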

This talk has two speakers; their biographies follow:

Aakansha Puri: Aakansha Puri is a Cloud Security Associate Architect at Thomson Reuters with 6 years of information security experience specializing in AI and cloud security. She leads enterprise AI/ML security reviews, develops AI security standards, and assesses AI applications from third-party SaaS to internal development.
A Thomson Reuters CISO Award and Hall of Fame recipient, she previously worked in Deloitte's Cyber Detect and Respond practice. AWS Solutions Architect certified, Aakansha actively shares AI security research through blogs and community engagement, focusing on the critical intersection of AI, cloud, and enterprise security.

Navjot Singh is a Cloud Security Associate Architect at Thomson Reuters with 7 years of information security experience in cloud security and AI/ML. He specializes in AI/ML security reviews, cloud security architecture and governance, and cyber due diligence for mergers and acquisitions. Previously at Deloitte Risk Advisory, he worked with major retail and government clients to design and secure cloud-native environments and critical workloads, and to build vulnerability management programs.
Navjot holds a Master of Applied Science in Electrical and Computer Engineering from the University of Ottawa and a Bachelor of Technology in Computer Science. He is multi-cloud certified (AWS, Azure, GCP).