DevConf.US

Hema Veeradhi

Hema Veeradhi is a Principal Data Scientist on the Emerging Technologies team, part of the Office of the CTO at Red Hat. Her work primarily focuses on implementing innovative open source AI and machine learning solutions to help solve business and engineering problems. Hema is a staunch supporter of open source, firmly believing in its ability to propel AI advancements to new heights. She has previously spoken at Open Source Summit NA, KubeCon NA, DevConf.CZ, and FOSSY.


Sessions

08-14
16:40
35min
Building Trust with LLMs
Hema Veeradhi, Surya Pathak

Have you ever questioned the reliability of Large Language Models (LLMs)? In today’s open source world, LLMs are revolutionizing how we innovate and build applications. However, before fully embracing them in our projects, it is essential to evaluate their performance. This talk is designed to be your guide through the process of LLM evaluation, equipping you with practical insights for implementing LLMs in real-world applications.

We will go over the fundamentals of LLM evaluation, beginning with traditional metrics such as ROUGE and BLEU scores and their significance in assessing model efficacy. We will then delve into more specialized techniques such as model-based evaluation using LangChain criteria metrics. In addition, we will cover human evaluation and common evaluation benchmarks. Using a text generation demo application, we will compare the different evaluation techniques, highlighting their pros and cons. Throughout the session, we will address common challenges you may face when assessing the quality of your LLMs and how to overcome them.
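
As a taste of the two evaluation styles the session compares, here is a minimal sketch assuming the Hugging Face `evaluate` library and LangChain's criteria evaluator; the example strings, criterion, and prompt are illustrative choices, not material from the talk:

```python
# Minimal sketch of traditional and model-based LLM evaluation.
# Assumes: pip install evaluate rouge_score langchain langchain-openai
# All example strings below are illustrative placeholders.
import evaluate

predictions = ["Open source licenses let users study and modify the code."]
references = [["Open source licenses grant users the right to study and modify code."]]

# Traditional n-gram overlap metrics: recall-oriented (ROUGE) and
# precision-oriented (BLEU).
rouge = evaluate.load("rouge")
bleu = evaluate.load("bleu")
print(rouge.compute(predictions=predictions, references=references))
print(bleu.compute(predictions=predictions, references=references))

# Model-based evaluation with a LangChain criteria evaluator. This assumes an
# OPENAI_API_KEY is configured, since a judge LLM scores the output.
from langchain.evaluation import load_evaluator

evaluator = load_evaluator("criteria", criteria="conciseness")
result = evaluator.evaluate_strings(
    prediction=predictions[0],
    input="What do open source licenses allow?",
)
print(result["score"], result["reasoning"])
```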

By the end of the talk, attendees will gain a comprehensive understanding of LLM evaluation techniques.

Artificial Intelligence and Data Science
Conference Auditorium (capacity 260)
08-15
16:00
80min
LLMs 101: Introductory Workshop
Hema Veeradhi, Surya Pathak, Aakanksha Duggal

Are you curious to learn about Large Language Models (LLMs) but unsure where to begin? This workshop is designed specifically with you in mind. LLMs have emerged as powerful tools in natural language processing, yet their implementation poses challenges, particularly in managing computational resources effectively.

During this workshop, we will delve into the fundamentals of LLMs and guide you in selecting the appropriate open source models for your requirements. We will discuss the concept of self-hosted LLMs and introduce containerization technologies such as Kubernetes, Docker, and Podman. Through illustrative use cases such as a retrieval-augmented generation (RAG) application, text generation, and speech recognition, you will learn how to set up LLMs locally on your laptop and build container images for the models using Podman. We will also explore model serving and inference methods, including interacting with the model via a simple UI application. Moreover, the workshop will cover model evaluation techniques and introduce various metrics that can be used to effectively measure the performance and quality of model outputs.
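
To make the serving-and-inference step concrete, here is a minimal client sketch, assuming a model server (for example, llama.cpp or vLLM run in a Podman container) exposing an OpenAI-compatible completions endpoint on localhost; the port, model name, and prompt are illustrative placeholders, not values from the workshop:

```python
# Minimal inference client for a self-hosted model server, e.g. one started
# with something like: podman run -p 8000:8000 <your-model-server-image>
# (the image name and port are hypothetical).
import requests

resp = requests.post(
    "http://localhost:8000/v1/completions",
    json={
        "model": "mistral-7b-instruct",  # illustrative model name
        "prompt": "Explain what a container image is in one sentence.",
        "max_tokens": 128,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])
```

A simple UI application, as mentioned above, would wrap exactly this kind of request behind a text box and a submit button.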

Attendees will gain practical knowledge and skills to effectively harness the capabilities of LLMs in real-world applications. They will understand the challenges associated with managing computational resources and learn how to overcome them. By the end of the workshop, participants will be equipped with the tools to set up and deploy LLMs, evaluate model performance, and implement them in various natural language processing tasks.

Artificial Intelligence and Data Science
Terrace Lounge (capacity 48)
08-16
14:05
35min
Self-Hosted LLMs: A Practical Guide
Hema Veeradhi, Aakanksha Duggal

Have you ever considered deploying your own large language model (LLM), but the seemingly complex process held you back? Deploying and managing LLMs often poses significant challenges. This talk aims to provide a comprehensive introductory guide, enabling you to begin your LLM journey by hosting your own models on your laptop using open source tools and frameworks.

We will discuss the process of selecting appropriate open source LLMs from HuggingFace, containerizing the models with Podman, and creating model serving and inference pipelines. For newcomers and developers delving into LLMs, self-hosted setups offer advantages such as increased flexibility in model training, enhanced data privacy, and reduced operational costs. These benefits make self-hosting an appealing option for those seeking a user-friendly way to explore AI infrastructure.
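
As a flavor of the first step, selecting a model and running it locally, here is a minimal sketch using the Hugging Face `transformers` pipeline API; the model ID and prompt are illustrative choices, not the ones used in the talk:

```python
# Minimal local-inference sketch, assuming `transformers` and `torch` are
# installed (pip install transformers torch). distilgpt2 is a deliberately
# small illustrative model that runs comfortably on a laptop CPU.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")
result = generator("Self-hosting an LLM lets you", max_new_tokens=40)
print(result[0]["generated_text"])
```

From here, the same model can be packaged into a container image with Podman and placed behind a serving endpoint, the workflow the talk walks through.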

By the end of this talk, attendees will possess the necessary skills and knowledge to navigate the exciting path of self-hosting LLMs.

Artificial Intelligence and Data Science
Conference Auditorium (capacity 260)