2024-10-25 – Workshop Class #1
Language: English
Large Language Models (LLMs) are everywhere, driving the advancement of AI today. For enterprises and businesses, integrating LLMs with custom data sources is crucial for providing more contextual understanding and reducing hallucinations. In my talk, I'll focus on building an effective, production-ready RAG pipeline using Open Source LLMs. In simple terms, Retrieval Augmented Generation (RAG) involves retrieving relevant documents as context for a user query and leveraging an LLM to generate a more accurate, grounded response.
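To make that retrieve-then-generate flow concrete, here is a minimal sketch of a RAG loop. The retriever is a toy keyword-overlap scorer, and `call_llm` is a hypothetical placeholder for whichever Open Source LLM you serve; both names are assumptions for illustration, not part of any specific library.

```python
# Minimal retrieve-then-generate sketch (toy retriever, hypothetical LLM call).

def retrieve(query: str, documents: list[str], top_k: int = 3) -> list[str]:
    """Score documents by naive keyword overlap and return the top_k matches."""
    query_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query: str, context_docs: list[str]) -> str:
    """Pack the retrieved documents into the prompt as grounding context."""
    context = "\n\n".join(context_docs)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in: replace with a real call to Mistral/Llama via your serving stack.
    raise NotImplementedError

def rag_answer(query: str, documents: list[str]) -> str:
    context_docs = retrieve(query, documents)
    prompt = build_prompt(query, context_docs)
    return call_llm(prompt)
```

In production the keyword scorer would typically be replaced by an embedding model plus a vector store, but the retrieve → prompt → generate shape stays the same.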
Problem Statement
- Closed-source models such as GPT, Claude, and Gemini are highly capable LLMs, but enterprises and startups with sensitive data hesitate to rely on them due to data privacy and security concerns.
- While numerous solutions and resources on the internet use closed-source models like GPT and Gemini to build RAG pipelines, there is far less guidance on building effective RAG pipelines with Open Source LLMs.
- When using an Open Source LLM, it is important to understand the model's prompt template so that responses come back in the expected format (see the sketch after this list). While those with a basic grasp of Transformers can tune prompts and generation parameters to improve results, that approach is not accessible to everyone.
- Basic RAG pipelines often underperform and tend to produce hallucinations.
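To illustrate the prompt-template point above, the sketch below uses the Hugging Face `transformers` tokenizer to apply the chat template that ships with an instruct model, so the prompt matches the format the model was trained on. The Mistral model id is only an example; some models on the Hub require accepting a license before download.

```python
# Sketch: let the tokenizer apply the model's own chat/prompt template,
# instead of hand-writing [INST] ... [/INST] markers.
from transformers import AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # example instruct model
tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "user", "content": "Summarise the retrieved context in three bullet points."},
]

# Produces the exact prompt string the model expects
# (e.g. [INST] ... [/INST] for Mistral-style instruct models).
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)
```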
In my talk, I will demonstrate how to build an effective RAG pipeline using Open Source LLMs such as Mistral or Llama.
Tarun Jain is a Data Scientist at AI Planet, a Belgium-based AI startup. He is also a renowned speaker and a recognised Google Developer Expert in AI/ML. Furthermore, he has contributed to various Open Source projects and is currently part of Google Summer of Code 2024 at RedHenLab. He also creates content on the AI with Tarun YouTube channel.