Devconf.US

Self-Hosted LLMs: A Practical Guide
08-16, 14:05–14:40 (US/Eastern), Conference Auditorium (capacity 260)

Have you ever considered deploying your own large language model (LLM), only to be held back by a seemingly complex process? Deploying and managing LLMs can pose significant challenges. This talk provides a comprehensive introductory guide so you can begin your LLM journey by hosting your own models on your laptop using open source tools and frameworks.

We will walk through selecting an appropriate open source model from HuggingFace, containerizing it with Podman, and building model serving and inference pipelines. For newcomers and developers exploring LLMs, self-hosted setups offer advantages such as greater flexibility in model training, enhanced data privacy, and reduced operational costs. These benefits make self-hosting an appealing, approachable entry point into AI infrastructure.
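To make the serving-and-inference step concrete, here is a minimal sketch of a client talking to a model hosted locally (for example, inside a Podman container) behind an OpenAI-compatible chat-completions endpoint. The URL, port, and model name below are illustrative assumptions, not details from the talk.

```python
import json
from urllib import request

# Illustrative endpoint for a locally hosted model server
# (e.g. one running in a Podman container on your laptop).
# Both the URL and the model name are assumptions.
BASE_URL = "http://localhost:8000/v1/chat/completions"
MODEL_NAME = "mistralai/Mistral-7B-Instruct-v0.2"

def build_payload(prompt, model=MODEL_NAME, max_tokens=128):
    """Assemble an OpenAI-style chat-completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def query(prompt):
    """POST the prompt to the local server and return the reply text."""
    req = request.Request(
        BASE_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(query("What is a self-hosted LLM?"))
```

Because the endpoint follows the widely used OpenAI-compatible request shape, the same client works unchanged against several open source serving stacks; only the base URL and model name need to change.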

By the end of this talk, attendees will possess the necessary skills and knowledge to navigate the exciting path of self-hosting LLMs.

Hema Veeradhi is a Principal Data Scientist on the Emerging Technologies team, part of the Office of the CTO at Red Hat. Her work primarily focuses on implementing innovative open source AI and machine learning solutions to help solve business and engineering problems. Hema is a staunch supporter of open source, firmly believing in its ability to propel AI advancements to new heights. She has previously spoken at Open Source Summit NA, KubeCon NA, DevConf CZ, and FOSSY.

Aakanksha Duggal is a Senior Data Scientist in the Emerging Technologies Group at Red Hat. She is part of the Data Science team and develops open source software that applies AI and machine learning to solve engineering problems.