John Robert
John Robert leads data and cloud projects at Sunnic Lighthouse (Enerparc AG), where he works on building and operating data-intensive workflows in production. He has over eight years of experience with Python, machine learning, and AI, and began his career working on autonomous driving systems at Daimler (Mercedes-Benz).
John has spoken at conferences across Europe, the United States, and other regions, sharing practical insights on building, deploying, and operating AI systems in real-world environments. His current focus is on AI safety and AI security, particularly how agentic and autonomous systems can be designed with clear boundaries and controls.
He is the founder of Don’t Fear AI, an initiative aimed at helping people understand how to use AI responsibly and how to build reliable AI systems without hype or unnecessary complexity. John believes in a future where humans and AI systems work together safely and effectively.
Outside of technology, John enjoys traveling and has visited nearly 50 countries.
Session
AI agents are increasingly deployed with autonomy: calling tools, accessing data, modifying systems, and making decisions without human supervision. While prompts and guardrails are often presented as safety solutions, they break down quickly in real-world agentic systems.
In this talk, we explore how to enforce safety constraints in AI agents beyond prompting, using engineering techniques familiar to Python developers and data engineers. We will examine common failure modes in agentic systems, such as tool misuse, goal drift, and over-permissioning, and show how to mitigate them using policy layers, capability boundaries, and execution-time validation.
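To give a flavor of the approach, here is a minimal sketch of an execution-time policy layer in Python. It is illustrative only, not material from the talk: the `Capability` and `PolicyLayer` names are hypothetical, and the idea is simply that an agent's tool calls pass through an explicit validation step with per-tool grants and call budgets, rather than relying on prompt instructions alone.

```python
# Hypothetical sketch: an execution-time policy layer that validates an
# agent's tool calls against explicit capability grants before anything runs.
from dataclasses import dataclass, field


@dataclass
class Capability:
    tool: str
    max_calls: int = 5  # call budget, limiting runaway loops (goal drift)


@dataclass
class PolicyLayer:
    grants: dict = field(default_factory=dict)  # tool name -> Capability
    calls: dict = field(default_factory=dict)   # tool name -> calls used

    def allow(self, cap: Capability) -> None:
        """Explicitly grant a capability; everything else is denied."""
        self.grants[cap.tool] = cap

    def invoke(self, tool_name: str, fn, *args, **kwargs):
        """Run a tool only if it is granted and within its call budget."""
        cap = self.grants.get(tool_name)
        if cap is None:
            raise PermissionError(f"tool '{tool_name}' not granted")
        used = self.calls.get(tool_name, 0)
        if used >= cap.max_calls:
            raise PermissionError(f"call budget exhausted for '{tool_name}'")
        self.calls[tool_name] = used + 1
        return fn(*args, **kwargs)


# Usage: only 'search' is granted, and only for two calls.
policy = PolicyLayer()
policy.allow(Capability("search", max_calls=2))
result = policy.invoke("search", lambda q: f"results for {q}", "solar yield")
```

Because denial is the default, an over-permissioned or misused tool call fails at execution time instead of depending on the model honoring its prompt.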