Emanuele Fabbiani
Emanuele is an engineer, researcher, and entrepreneur with a passion for artificial intelligence.
He earned his PhD by exploring time series forecasting in the energy sector and spent time as a guest researcher at EPFL in Lausanne. Today, he is co-founder and Head of AI at xtream, a boutique company that applies cutting-edge technology to solve complex business challenges.
Emanuele is also an adjunct professor of AI at the Catholic University of Milan. He has published eight papers in international journals and presented at more than 30 conferences worldwide. His engagements include AMLD Lausanne, ODSC London, WeAreDevelopers Berlin, PyData Berlin, PyData Paris, PyCon Florence, the Swiss Python Summit in Zurich, and Codemotion Milan.
Emanuele has been a guest lecturer at Italian, Swiss, and Polish universities.
@donlelef
Session
Large Language Models (LLMs) are transforming digital products, but their non-deterministic behaviour makes them hard to predict and test, so observability becomes essential for quality and scalability.
This talk presents observability for LLM-based applications, spotlighting three tools: Langfuse, OpenLIT, and Phoenix. We'll share best practices on what to monitor in LLM features and how, and explore each tool's strengths and limitations.
Langfuse excels at tracing and quality monitoring but lacks OpenTelemetry support and offers limited customization. OpenLIT, while less mature, integrates smoothly with existing observability stacks through OpenTelemetry. Phoenix stands out for debugging and experimentation but struggles with real-time tracing.
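As a flavour of the instrumentation patterns covered in the talk, below is a minimal sketch of tracing a single LLM call with the OpenTelemetry Python SDK, the integration model that OpenLIT builds on. The model name, prompt, llm.* attribute keys, and the stubbed call_llm function are illustrative assumptions for this sketch, not any tool's official schema or client.

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter

# Export spans to the console; a real stack would point this at a collector.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("llm-observability-demo")

def call_llm(prompt: str) -> str:
    # Stand-in for a real model client (OpenAI, a local model, etc.).
    return f"Echo: {prompt}"

prompt = "Summarise LLM observability in one sentence."
with tracer.start_as_current_span("llm.completion") as span:
    # Hypothetical attribute names; production schemas (e.g. OpenTelemetry's
    # GenAI semantic conventions) define their own keys.
    span.set_attribute("llm.model", "gpt-4o-mini")
    span.set_attribute("llm.prompt", prompt)
    answer = call_llm(prompt)
    span.set_attribute("llm.completion", answer)
print(answer)

Swapping the console exporter for an OTLP one is what lets OpenTelemetry-based tools slot into an existing observability stack without changing the application code.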
The comparison will be enhanced by live coding examples.
Attendees will walk away with a deeper understanding of observability for GenAI applications and a clear sense of which tool fits their use case.