2025-10-30 – Auditorium
Everyone is building "Chat with your data" applications, but moving them from a simple prototype to a secure, reliable enterprise-grade service is a monumental challenge. How does an agent navigate a labyrinth of hundreds of datasets to find the right one for a user's query? How does it integrate a growing ecosystem of specialized data tools alongside those datasets? And critically, how do you ensure it respects user permissions and interacts with other systems safely?
This talk presents a practical blueprint for building robust AI agents in a large enterprise environment. We will demonstrate a solution that allows users to chat with structured data to get answers, insights, and visualizations. Crucially, we will dive deep into the architecture that enables the agent to dynamically discover the most relevant datasets and tools for a given task, all while inheriting and restricting permissions based on the end-user's identity, preventing unauthorized data access. You will learn how to build a controllable, interoperable, and observable ecosystem of agents using standard protocols for dynamic tool discovery and governed communication.
The Problem: Why Simple Agents Fail in the Enterprise
The initial excitement of building a generative AI agent quickly meets the messy reality of the enterprise environment. There is no single, clean "data layer." We face a fragmented landscape of technical catalogs, business glossaries, semantic ontologies, and multiple data sources. This presents two core challenges:
- Navigating the Data & Tool Maze: With potentially hundreds of datasets and specialized tools (e.g., for visualization, data quality checks, forecasting), a primary challenge is selection and relevance. How does the agent understand a user's ambiguous query and identify the appropriate datasets and tools?
- The Governance & Security Mandate: Equally critical is governance. How can an agent act on a user’s behalf without “god mode” access to all data?
Our Solution: A Multi-Level Governance Architecture
We'll share a real-world architecture for a structured-data AI agent that works inside a heavily governed environment, built on three pillars:
- Secure by Design - Remove unnecessary autonomy and place explicit guardrails and failsafe mechanisms around critical steps like data retrieval and tool execution (see the guardrail sketch after this list).
- Delegated Permissions & Authentication - Users authenticate via SSO and provide the agent with a token. The agent uses this delegated authority to interact with other services. It operates under its own identity (service principal) but forwards the user's identity so downstream systems can enforce user-specific access rules and masking (see the token-forwarding sketch after this list).
- Governed Interoperability & Discovery - Using the Model Context Protocol (MCP) as a universal bridge between AI systems and data sources, the agent dynamically discovers datasets and tools relevant to a query while respecting permissions. Governance is enforced at discovery, and tools are treated as first-class citizens alongside data. This dynamic approach decouples the agent from specific tool implementations, enabling a flexible, maintainable, and scalable ecosystem of reusable capabilities (see the discovery sketch after this list).
Irene Donato is Lead Data Scientist at Agile Lab, working on the development and application of machine learning models. With an academic background that includes postdoctoral research at the University of Alberta and Aix-Marseille Université, she transitioned to industry, where she has led data science projects and teams.
Irene enjoys bridging the gap between deep research and practical application. She is passionate about making complex topics accessible to the developer community.