PyConDE & PyData Berlin 2024

Improve LLM-based Applications with Fallback Mechanisms
04-24, 11:40–12:10 (Europe/Berlin), A1

While RAG addresses common LLM pitfalls, challenges like handling out-of-domain queries still persist. Learn how fallback mechanisms tackle these issues gracefully, incorporating strategies such as web searches and alternative data sources to improve your system's user experience. In this session, we'll explore various fallback techniques and their practical implementation using Haystack, empowering you to build resilient LLM-based systems for diverse scenarios without human intervention.


Large Language Model (LLM)-based systems have demonstrated remarkable advancements in various natural language processing (NLP) tasks, particularly through the Retrieval Augmented Generation (RAG) approach. This approach addresses some of the pitfalls associated with LLMs, such as hallucination or issues related to the recency of their training data. However, RAG systems may encounter other challenges in real-world scenarios, including handling out-of-domain queries (e.g., requesting medical advice from a finance app), struggling to generate meaningful answers from retrieved data, or failing to provide any answer at all. To handle these situations effectively, it is necessary to implement a fallback mechanism capable of gracefully addressing such scenarios. 🧗

This fallback mechanism can incorporate alternative strategies, such as conducting a web search with the same query to retrieve more up-to-date information or utilizing alternative information sources (such as Slack, Notion, Google Drive, etc.) to gather more relevant data and generate a satisfactory or comprehensive response. However, the question arises: how can we determine if the response is inadequate? 🤔
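The idea above can be sketched in a few lines of plain Python. This is an illustrative sketch only, not Haystack's actual API: `rag_answer`, `web_search_answer`, and `is_adequate` are hypothetical stand-ins for a RAG pipeline, a web-search fallback, and an adequacy check.

```python
# Illustrative fallback sketch with hypothetical helpers (not Haystack's API).

NO_ANSWER = "no_answer"  # sentinel the primary pipeline returns when it fails

def rag_answer(query: str) -> str:
    # Stand-in: a real implementation would run retrieval + generation.
    knowledge_base = {"refund policy": "Refunds are issued within 14 days."}
    return knowledge_base.get(query.lower(), NO_ANSWER)

def web_search_answer(query: str) -> str:
    # Stand-in: a real implementation would query a web-search tool
    # or an alternative source such as Slack, Notion, or Google Drive.
    return f"Top web result for: {query}"

def is_adequate(answer: str) -> bool:
    # Simplest possible adequacy check; in practice this could be a
    # confidence threshold or an LLM-based self-evaluation step.
    return answer != NO_ANSWER and len(answer.strip()) > 0

def answer_with_fallback(query: str) -> str:
    answer = rag_answer(query)
    if is_adequate(answer):
        return answer
    return web_search_answer(query)  # fall back with the same query
```

The key design choice is that the adequacy check sits between the primary pipeline and the fallback, so the system routes queries automatically instead of surfacing "I don't know" to the user.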

During this session, we will explore various fallback mechanism techniques and ensure that our system can assess the adequacy of a response and improve it if necessary without human intervention. On the practical side, we will use the open source LLM framework Haystack to implement end-to-end RAG systems. By the end of this talk, you will have learned to select the appropriate fallback method for your use case, enabling you to develop more dependable and versatile LLM-based systems and implement them effectively using Haystack. 💪


Expected audience expertise: Domain

Novice

Expected audience expertise: Python

Novice

Abstract as a tweet (X) or toot (Mastodon)

RAG handles common issues in LLM applications, but a dependable system requires one more step: a fallback mechanism. Explore the implementation of LLM applications with diverse fallback techniques using Haystack in Bilge's insightful talk.

Bilge is a Developer Advocate at deepset, working with Haystack, an open source LLM framework. With over two years of experience as a Software Engineer, she developed a strong interest in NLP and pursued a master's degree in Artificial Intelligence at KU Leuven with a focus on NLP. Now, she enjoys working with Haystack and helping the community build LLM applications. ✨ 🥑
Let's connect if you'd like to talk about how fast the AI field is moving: Twitter, LinkedIn