DevConf.US

James Busche

James Busche is a senior software engineer in the IBM Open Technologies Group, currently focused on the open source CodeFlare project. Previously, James was a DevOps cloud engineer for IBM Watson and the worldwide Watson Kubernetes deployments.


Sessions

08-14
13:00
35min
Creating Your Own LLM Tuning Platform with Open Source Technologies
James Busche, Kelly Abuelsaad

FMS HF Tuning is an open source package from IBM that leverages the Supervised Fine-tuning Trainer (SFT) from HuggingFace to support multiple tuning techniques for LLMs. We will give an overview of the available tuning techniques and demonstrate how the library can be used from a Jupyter notebook on the Open Data Hub platform.

The session will include:
- An introduction to fms-hf-tuning: when, why, and where you can use it.
- An architectural overview of how it fits into the Open Data Hub and Red Hat OpenShift AI platforms.
- An exploration of tuning techniques such as low-rank adaptation (LoRA), prompt tuning, and full fine-tuning, as well as inference.
- Deploying and running production-ready LLM tuning and inference on ODH.

Attendees will leave with a greater understanding of the complexity and benefits of LLM tuning, and of the open source tools and platforms they can leverage to improve their AI solutions.
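
For a flavor of what the notebook demo covers, below is a minimal, illustrative sketch of a LoRA tuning run using the HuggingFace trl SFTTrainer and peft libraries that fms-hf-tuning builds on. The model name, data file, and hyperparameters are placeholders rather than the session's actual configuration, and argument names can vary slightly between trl releases.

```python
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Placeholder model and data; the training file is expected to hold
# JSON lines with a "text" field containing the formatted examples.
model_name = "ibm-granite/granite-3b-code-base"
dataset = load_dataset("json", data_files="train.jsonl", split="train")

# LoRA trains small low-rank adapter matrices instead of all model weights,
# which keeps the memory footprint and the saved artifact small.
lora_config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")

trainer = SFTTrainer(
    model=model_name,          # loaded as a causal LM under the hood
    train_dataset=dataset,
    peft_config=lora_config,
    args=SFTConfig(output_dir="lora-out", num_train_epochs=1),
)
trainer.train()                # writes the LoRA adapter to lora-out/
```

fms-hf-tuning wraps this kind of run behind its own configuration and container packaging, which is what the Open Data Hub integration discussed in the session builds on.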

Artificial Intelligence and Data Science
Conference Auditorium (capacity 260)
08-15
10:00
80min
How To Win Friends & Influence LLMs (with Prompt Engineering)
James Busche

Part art, part science, prompt engineering is the process of crafting input text that steers a given large language model toward its best output.

Foundation models have billions of parameters and are trained on terabytes of data to perform a variety of tasks, including text, code, and image generation, classification, conversation, and more. A subset known as large language models is used for text- and code-related tasks. When it comes to prompting these models, there isn't just one right answer; there are many ways to prompt them toward a successful result.

In this workshop, you will learn the basics of prompt engineering, from monitoring your token usage to balancing intelligence and security. You will work through a range of exercises that apply the techniques, dials, and levers presented in order to get the output you want from a model. Participants will leave with a comprehensive understanding of prompt engineering and the practical skills needed to get the best results from large language models.
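
As a small taste of the token-budget portion of the workshop, the sketch below counts the tokens a prompt consumes using a HuggingFace tokenizer. The gpt2 tokenizer and the example prompt are placeholders; every model counts tokens with its own tokenizer, so swap in the one that matches the model you are prompting.

```python
from transformers import AutoTokenizer

# Placeholder tokenizer; use the tokenizer of the model you are prompting.
tokenizer = AutoTokenizer.from_pretrained("gpt2")

prompt = (
    "Classify the sentiment of the following review as positive or negative.\n"
    "Review: The battery lasted two days on a single charge.\n"
    "Sentiment:"
)

token_ids = tokenizer.encode(prompt)
# Remaining budget = model context window minus this count (and the expected output).
print(f"Prompt uses {len(token_ids)} tokens")
```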

Artificial Intelligence and Data Science
Terrace Lounge (capacity 48)