DevConf.US

09:00
10min
Welcome
Urvashi Mohnani, Sally Ann O'Malley

The welcome address followed by Day 1 Keynote.

General
Metcalf Hall (capacity 300)
09:10
40min
Fireside Chat with Kelsey Hightower
Kelsey Hightower

Please join us for a fireside chat with Kelsey as he talks about his journey with Open Source.

General
Metcalf Hall (capacity 300)
10:00
35min
5 Must-Know Open Source Identity Management Tools For Cloud Native Stacks
Ran Ne'man

In the words of Werner Vogels, identity management is the core of our systems, and touches every single part of our applications and stacks. Knowing this, plenty of excellent open source tooling has been built over the years to combat the diversity of challenges that arise with managing identity and access for different cloud native environments.

In this talk we’ll take a deep dive into the known challenges in the identity space and how they impact our apps and systems. But don’t panic - we will share great tips and practices for mitigating these risks, and demo how to leverage excellent open source tooling to do so. We have selected five excellent tools that cover the most common risks and stacks in use today, and that provide a good baseline for understanding and reducing your identity attack surface.

Whether you’re running on AWS, Azure, or GCP, want better visualization and graphing of who has permissions to which resources, or even want to manage internal access to resources, the OSS community has you covered. Join us to learn how to level up your identity management with an open source stack.

DevOps and Automation, Security and Compliance
Metcalf Small Ballroom (capacity 100)
10:00
35min
Going from containers, to pods, to Kubernetes – help for your developer environments!
Cedric Clyburn

Today, Kubernetes is the undisputed go-to platform for scaling containers. But for developers, Kubernetes can be daunting, particularly when working with the discrepancies between local and production environments. Podman and Podman Desktop bridge this gap. In this talk, you’ll be introduced to Podman and witness the unveiling of Podman Desktop, an open-source GUI tool that streamlines container workflows and is compatible with Podman, Lima, Docker, and more. Podman Desktop serves as a beginner-friendly launch pad to Kubernetes, enabling developers to spin up local clusters (with Kind and Minikube) or work with remote environments. A demo will guide you through the transition from app to containers, to pods, and ultimately to Kubernetes, highlighting how Podman and Podman Desktop's features and security advantages reduce discrepancies and make your deployments predictable. You'll also learn how Podman Desktop can streamline your container development processes!

Application and Services Development
East Balcony (capacity 80)
10:00
35min
Truth-seeker: Using LLM agents to build and verify knowledge bases
Jeremy Peterson

Most researchers agree that quality data is the foundation of building quality LLMs. Truth-seeker utilizes open source LLMs to run agents that build up a knowledge base from a corpus of source documents. The agents break down the source documents into statements which can be evaluated for their veracity. They then build the knowledge base by using the results of search engine queries to score statements according to how well sourced they are, how consistent they are with other parts of the knowledge base, and their classification: "fact", "opinion", "bias", etc. This tool is designed to improve the quality of training data by making it possible to filter out undesirable data and enhance desirable data (e.g. by adding sources).
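As a rough illustration of the kind of scoring the abstract describes, here is a hypothetical sketch; the field names, weights, and saturation point are invented, and Truth-seeker's actual agents may score statements quite differently:

```python
# Hypothetical sketch of scoring a statement by how well sourced and how
# consistent with the rest of the knowledge base it is. All names and
# weights here are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Statement:
    text: str
    sources: list = field(default_factory=list)  # URLs from search queries
    agree: int = 0      # knowledge-base statements that agree
    conflict: int = 0   # knowledge-base statements that conflict

def score(stmt: Statement) -> float:
    """Combine sourcing and consistency into a 0..1 veracity score."""
    sourcing = min(len(stmt.sources), 5) / 5          # saturates at 5 sources
    total = stmt.agree + stmt.conflict
    consistency = stmt.agree / total if total else 0.5  # unknown -> neutral
    return 0.6 * sourcing + 0.4 * consistency

s = Statement("Water boils at 100 C at sea level.",
              sources=["a", "b", "c"], agree=4, conflict=0)
print(round(score(s), 2))   # → 0.76
```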

Artificial Intelligence and Data Science
Conference Auditorium (capacity 260)
10:35
5min
Break
Metcalf Small Ballroom (capacity 100)
10:35
5min
Break
East Balcony (capacity 80)
10:35
5min
Break
Conference Auditorium (capacity 260)
10:40
35min
A Guide to Responsible Data Collection In Open Source
Arjun Devarajan

Collecting usage data in open source can be a controversial topic, but recent shifts in attitudes have become evident. With extensive experience working with open source projects and companies over the past four years, our team has empirically established successful best practices and considerations. All open source projects should be aware of these to effectively track software usage. This talk will cover a range of considerations, encompassing community messaging and expectation management, security and privacy, compliance, ethics, and governance.

Artificial Intelligence and Data Science
Conference Auditorium (capacity 260)
10:40
80min
Autoscaling Everything in Kubernetes Meetup
Michael McCune, Subin Modeel

Whether it's pods, nodes, or something entirely new, let's gather to talk about the state of the art with autoscaling in Kubernetes. This meetup is focused on discussing autoscaling technology and projects within the Kubernetes community. We will gather topics on the day of the meetup and then have discussions based on the desires of the group. Topics for discussion might include:

Is Karpenter better than the Cluster Autoscaler?
What is the status of the multi-dimensional pod autoscaler enhancement?
Will we see a predictive AI-based autoscaler in the near future?

Cloud, Hybrid Cloud, and Hyperscale Infrastructure
Metcalf Hall (capacity 300)
10:40
35min
Handling 100,000+ visitors at the world's largest physics lab (CERN)
Cristian Schuszter

This talk gives an insight into the technologies used by CERN (the European Organization for Nuclear Research) to make the average visitor's experience as smooth as possible. The Science Gateway is a new exhibition center freely accessible to visitors, which means a significantly higher volume of people now come to CERN as a tourist destination. The challenge is gracefully handling the capacity that we have for visits, using Drools as the main engine for making the booking experience ideal.

We'll do a deep-dive into the challenges and the technical details faced by this project, as well as some performance metrics showcasing the strength of this solution.

Application and Services Development
East Balcony (capacity 80)
10:40
35min
SpiceDB: open source, hyperscale authorization
Jimmy Zelinskie

As more folks deploy cloud-native architectures and technologies, store ever larger amounts of data, and build ever more complex software suites, correctly and securely authorizing requests only becomes exponentially more difficult.

Broken authorization now tops OWASP's Top 10 Security Risks for Web Apps. Their recommendation? Adopt an ABAC or ReBAC authorization model. This talk establishes the problems with the status quo, explains the core concepts behind ReBAC, and introduces SpiceDB, a widely adopted open source system inspired by the system internally powering Google: Zanzibar.
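To make the ReBAC idea concrete before the talk, here is a toy, stdlib-only sketch of answering a permission check by walking relationship tuples. SpiceDB's schema language and API look nothing like this; only the underlying question - is there a path of relations connecting this user to this object? - is the same:

```python
# Toy ReBAC (relationship-based access control) illustration: permissions
# are answered by walking a graph of (object, relation, subject) tuples.
# The data, relations, and inheritance rules below are all invented.
tuples = {
    ("doc:readme", "owner"): {"user:alice"},
    ("doc:readme", "parent"): {"folder:docs"},
    ("folder:docs", "viewer"): {"user:bob"},
}

def check(obj: str, relation: str, user: str) -> bool:
    """Can `user` act as `relation` on `obj`? Owners can view, and viewer
    permission is inherited from parent containers."""
    if user in tuples.get((obj, relation), set()):
        return True
    if relation == "viewer":
        # owners are implicitly viewers
        if user in tuples.get((obj, "owner"), set()):
            return True
        # walk up to parent containers
        for parent in tuples.get((obj, "parent"), set()):
            if check(parent, "viewer", user):
                return True
    return False

print(check("doc:readme", "viewer", "user:bob"))   # → True (via folder:docs)
print(check("doc:readme", "viewer", "user:eve"))   # → False
```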

DevOps and Automation, Security and Compliance
Metcalf Small Ballroom (capacity 100)
11:15
5min
Break
Metcalf Small Ballroom (capacity 100)
11:15
5min
Break
East Balcony (capacity 80)
11:15
5min
Break
Conference Auditorium (capacity 260)
11:20
35min
Auto-Instrumenting Go Libraries for Tracing with eBPF and OpenTelemetry
Mike Dame, Tyler Yahn

Context propagation is critical to tracing requests through applications, and instrumenting user space code to implement propagation often requires manual code changes. While the runtimes of certain languages such as Python and Java allow agents to automatically instrument common libraries without any code changes, compiled languages like Go do not natively have this ability.

However, with eBPF we are able to achieve this in Go without any recompilation or even restarting the user process! This talk will show our approach to auto-instrumentation for Go with OpenTelemetry tracing. We will discuss the technical details of our approach, as well as roadblocks and issues we have encountered alongside alternatives and future plans for this open source project.

Application and Services Development
East Balcony (capacity 80)
11:20
35min
Jupyter extension for executing Kubeflow pipelines seamlessly
Harshad Reddy Nalla

As an enthusiastic contributor to the open-source project Elyra, I am excited to introduce you to the seamless integration of Elyra with Kubeflow 2.0. Elyra, a powerful toolkit designed to enhance the usability of Kubeflow Pipelines, brings a host of features to streamline the development and deployment process of machine learning workflows on Kubernetes. With Elyra, users gain access to a user-friendly visual editor, support for multiple programming languages, and advanced collaboration tools, all tailored to enhance the efficiency and effectiveness of building and deploying machine learning models within the Kubeflow ecosystem. Let's explore how Elyra can elevate your experience with Kubeflow 2.0 and empower you to unleash the full potential of your machine learning projects.

Artificial Intelligence and Data Science
Conference Auditorium (capacity 260)
11:20
35min
The OSS IaC Tooling Face-off for Modern Cloud Native Ops
Asaf Blubshtein

With Infrastructure as Code becoming the de facto way we manage our infrastructure today, a lot of excellent tools have become widely adopted that each have a different set of strengths. In this talk we'd like to take a look at the evolution of the IaC landscape over the past decade, and where we're heading.

We'll examine some of the biggest Ops (DevOps / SRE / Platform) engineering challenges through the lens of IaC, including disaster recovery, security, cost, performance, and even where complexity factors in when choosing your tooling. While many of us have already chosen our tooling, we may also want to consider migrating between, and integrating, multiple tools for different use cases and stacks.

In this interactive talk, we'll let you decide which tools we explore - from CDK and Pulumi to Terraform, OpenTofu, Helm, and Argo CD - and learn how they stack up against modern cloud native challenges.

DevOps and Automation, Security and Compliance
Metcalf Small Ballroom (capacity 100)
11:20
35min
To cooperate or to betray? Decoding the tester's dilemma
Deepak Koul

The Prisoner's Dilemma is a game theory concept where cooperation seems rational but often leads to betrayal. It is a thought experiment involving two rational agents, each of whom can cooperate for mutual benefit or betray their partner for individual reward.
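The dilemma's payoff structure can be written down in a few lines; the numbers below are the textbook convention, not anything specific to testing:

```python
# The classic Prisoner's Dilemma payoff matrix (higher is better for you).
# These are the standard textbook payoffs, used only to make the
# "cooperate vs. betray" trade-off concrete.
C, D = "cooperate", "defect"
payoff = {            # (my move, their move) -> (my payoff, their payoff)
    (C, C): (3, 3),   # mutual cooperation: both do well
    (C, D): (0, 5),   # I cooperate, they betray: I get the sucker's payoff
    (D, C): (5, 0),   # I betray a cooperator: best individual outcome
    (D, D): (1, 1),   # mutual betrayal: both worse off than (C, C)
}

# Whatever the other player does, defecting pays more for me...
assert payoff[(D, C)][0] > payoff[(C, C)][0]
assert payoff[(D, D)][0] > payoff[(C, D)][0]
# ...yet mutual defection leaves both worse off than mutual cooperation.
assert payoff[(D, D)] < payoff[(C, C)]
print("defection dominates, but mutual cooperation beats mutual defection")
```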

Based on my decade-long experience as a test engineering manager, I have come to realize that this concept models many situations of strategic decision making in a tester’s life too. For example:

Individual vs collective benefit: Consistently picking test infrastructure tasks over actual testing because that might make your CV more well-rounded in terms of skills and tooling. Focusing more on 'how' to test as opposed to 'what' to test.

Short-term gain vs long-term consequences: Adding a hardcoded wait to get a test to pass but making the code unmaintainable in the long run.

Collaboration and trust within an organization: Based on how teams are organized, testing, UI, backend, and ops might have their own group goals and will most likely prioritize them over the common goal that benefits the product or company. For example, in many organizations that have a full-fledged testing department up to director/VP level, a tester might focus more on their department's goals than on product success goals, because the incentives and rewards come from the department.

Competition and market pressures: As a tester, you might strategically pick a tool or technology just because it is sought after in the job market, as opposed to picking something that would be a better fit for the team.

In this thought-provoking critique, I am going to talk about the factors that often tip the scales towards one choice or the other. Understanding these dynamics could be valuable for testers navigating these dilemmas, and could potentially even foster a more collaborative and trusting environment within teams. Lastly, I will also share my tips for overcoming this behavior.

The key takeaways of this presentation are:

  1. Reflection - As a tester, am I practising any of these behaviours?
  2. Realisation - Thinking about the product and the people who build and use it should be the true purpose of one's job.

The overall goal of the session is to unveil the psychology behind the phenomenon that stops testers from being team players and thinking for the greater good of the product and the team.

Agility, Leadership, and DEI
Terrace Lounge (capacity 48)
11:55
65min
Lunch
Metcalf Small Ballroom (capacity 100)
11:55
65min
Lunch
East Balcony (capacity 80)
11:55
65min
Lunch
Conference Auditorium (capacity 260)
11:55
65min
Lunch
Terrace Lounge (capacity 48)
13:00
35min
Building a Better Software Supply Chain
Ann Marie Fred

At Red Hat, we had a standard build pipeline for software, but it had a problem. It consisted of more than 250 services across more than 1000 host systems, which made it difficult to understand, and it required dozens of people to maintain.

We started the project now known as Konflux in order to simplify release cycles; improve the security of our software supply chain; improve the data collected for attestation, provenance, and software bill-of-materials; reduce the number of duplicate services; simplify maintenance; reduce maintenance costs; collaborate on open source projects; and improve the onboarding experience for our development teams.

We chose Kubernetes as the foundation of our architecture, because of its proven model for deploying scalable, secure services. We chose Tekton, along with Tekton Chains and Tekton Results, for our build and test pipelines, because of their open and flexible design. We chose Argo CD because of its GitOps model, full featured support for Kubernetes, and community adoption. We chose a suite of open source command-line tools for the security checks and other automation. And we’re using Backstage to teach developers how to onboard, by example.

Along the way, we learned a great deal about what we should and shouldn’t standardize in our pipelines. This talk will explain how we implemented the system, and more importantly, the course corrections we made to our plans as we built it out. You will come away from this session with a reference architecture as well as a list of key lessons learned in CI/CD and software supply chain security.

DevOps and Automation, Security and Compliance
Metcalf Small Ballroom (capacity 100)
13:00
35min
Creating Your Own LLM Tuning Platform with Open Source Technologies
James Busche, Kelly Abuelsaad

FMS HF Tuning is an open source package by IBM that leverages Supervised Fine-tuning Trainer from HuggingFace to support multiple tuning techniques for LLMs. We will give an overview of the tuning techniques available and demonstrate how the library can be utilized from a Jupyter notebook from the Open Data Hub platform.

The session will include:
- Introduction to fms-hf-tuning: when, why, and where you can use it.
- Architectural overview of how it fits into the Open Data Hub and Red Hat OpenShift AI platforms.
- Exploring different tuning techniques like low-rank adaptation (LoRA), prompt tuning, fine tuning, and inference.
- Deploying and running production-ready LLM model tuning and inference on ODH.

Attendees will leave with a greater understanding of the complexity and benefits of LLM tuning, and the open source tools and platforms available that they can leverage to improve their AI solutions.
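One way to see why techniques like LoRA matter is a quick, illustrative parameter count; the dimensions below are typical of mid-size LLM layers, not figures from fms-hf-tuning:

```python
# Back-of-the-envelope sketch of why LoRA (low-rank adaptation) is cheap:
# instead of updating a full d_out x d_in weight matrix, it trains two
# small matrices B (d_out x r) and A (r x d_in) whose product is the
# weight update. The dimensions below are illustrative only.
d_out, d_in, r = 4096, 4096, 8

full_update_params = d_out * d_in          # full fine-tune, per layer
lora_params = d_out * r + r * d_in         # LoRA update, per layer

print(f"full fine-tune params per layer: {full_update_params:,}")
print(f"LoRA params per layer (r={r}):   {lora_params:,}")
print(f"reduction: {full_update_params / lora_params:.0f}x")
```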

Artificial Intelligence and Data Science
Conference Auditorium (capacity 260)
13:00
35min
Exploring the English Divide: A look at Open Source Inclusion
Karen Noel, Eric Duen

This talk dives deep into the fascinating topic of written and spoken English as a factor in open source communities. We'll explore the experiences of both native and non-native English speakers, sparking a conversation about:
* The impact of language fluency on participation, contribution, and barriers to entry.
* Potential biases within the open source community based on English proficiency.
* Strategies for fostering inclusivity and creating welcoming spaces.
Join Karen and Eric for an open discussion about our experiences and ideas for a more diverse and thriving open source future.

Agility, Leadership, and DEI
Terrace Lounge (capacity 48)
13:00
80min
Fedora/CentOS Cloud Infrastructure Users Meetup
Neal Gompa, David Duncan

Are you a user of public cloud services or interested in leveraging the power of the cloud for your projects? Join us for an engaging and informative Public Cloud Users Meetup, where cloud enthusiasts and developers gather to share their experiences, best practices, and insights around architecture.

Cloud, Hybrid Cloud, and Hyperscale Infrastructure
Metcalf Hall (capacity 300)
13:00
35min
Simplifying Backend Agility: Transitioning from Debezium ElasticSearch to Postgres for Streamlined Resilience
Chris Hambridge

Join us for an insightful journey into the realm of backend development and DevOps as we share our experience transitioning from Debezium and Elasticsearch to a Postgres option. Our aim? To streamline tooling, reduce complexity, and optimize infrastructure for our application.

We’ll delve deep into the challenges we faced, from coding intricacies to the constant battle for maintainability. We’ll reveal the insights that drove us towards embracing Postgres alternatives, aiming to simplify our tech stack and lighten our infrastructure load.

Specifically, we’ll uncover the appeal of options like foreign data wrapper, logical replication, and database consolidation within the Postgres ecosystem. These choices promise to streamline our toolkit and trim down infrastructure overheads, paving the way for a more resilient setup.

We’ll unveil tangible outcomes, showcasing the transformative impact of our Postgres evolution. From heightened agility to streamlined operations, our journey offers insights for those navigating similar paths.

Join us as we explore the convergence of simplicity and resilience in backend development, uncovering strategies for innovation amidst evolving landscapes.

Application and Services Development
East Balcony (capacity 80)
13:35
5min
Break
Metcalf Small Ballroom (capacity 100)
13:35
5min
Break
East Balcony (capacity 80)
13:35
5min
Break
Conference Auditorium (capacity 260)
13:35
5min
Break
Terrace Lounge (capacity 48)
13:40
35min
AI Lab Recipes: Cooking up AI Applications on your Laptop
Michael Clifford

What if we told you there’s a GitHub repository that makes building, running, and developing AI powered applications from your laptop as easy as making toast? No cloud-based AI platform required, no specialized hardware accelerators needed, and with the most up-to-date open source models? In this session, we'll dig in to explore ai-lab-recipes. This repository is a collaboration of data scientists and application developers that brings together best practices from both worlds. The result is a set of containerized AI powered applications and tools that are fun to build, easy to run, and convenient to fork and make your own! We'll show you how easy it is to spin up a local code generation assistant with only two simple commands.

Artificial Intelligence and Data Science
Conference Auditorium (capacity 260)
13:40
35min
Crafting Seamless Development: Improving the Kubernetes Operator Developer Experience
Zack Zlotnik, Yu Qi (Jerry) Zhang

Are you tired of waiting for minutes or even hours to see the results of your code changes? The "edit-compile-run-debug" loop is a critical part of software development because the faster one can execute it, the faster one can make and verify changes. But modern development environments such as Kubernetes operators can slow this loop down immensely. From leveraging advanced tooling and automation to adopting best practices in code organization and testing, we explore actionable steps to significantly reduce iteration times, enhance developer experience, and accelerate the pace of delivering robust and reliable Kubernetes operators.

This session will cover:
- What challenges we faced
- How we looked more holistically at our developer experience (DevEx)
- How we reduced our loop time by over 50%
- How we get better feedback during each CI run

Join us to learn how you can optimize your "edit-compile-run-debug" loop and improve your developer experience.

DevOps and Automation, Security and Compliance
Metcalf Small Ballroom (capacity 100)
13:40
35min
Observability and instrumentation via OpenTelemetry
Karl Johan Grahn

This talk will cover the basics of observability: what metrics, logs, and traces are, and how they differ. We will then explain instrumentation and show how it is being standardized via the open-source telemetry solution OpenTelemetry. Finally, we will see how instrumentation can be done via OpenTelemetry in Python, with a demo of how tracing works, all based on open-source examples.
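What "a trace" is can be previewed with a stdlib-only toy: spans carry a parent id propagated through context, so nested work links into one tree. This is not the OpenTelemetry API; in the real Python SDK, `tracer.start_as_current_span` plays the role of the `span()` helper below:

```python
# Stdlib-only toy tracer: spans are linked into a tree by propagating the
# current span id through a ContextVar, which is the core idea behind
# tracing and context propagation.
import contextvars
import itertools
from contextlib import contextmanager

_ids = itertools.count(1)
_current = contextvars.ContextVar("current_span", default=None)
spans = []   # finished spans: (span_id, parent_id, name)

@contextmanager
def span(name):
    parent = _current.get()          # whoever is active becomes our parent
    sid = next(_ids)
    token = _current.set(sid)        # make this span the active one
    try:
        yield sid
    finally:
        _current.reset(token)        # restore the parent as active
        spans.append((sid, parent, name))

with span("handle-request"):
    with span("query-db"):
        pass
    with span("render"):
        pass

for sid, parent, name in spans:
    print(sid, parent, name)
```

Child spans finish first, so they appear before their parent; both children record span 1 ("handle-request") as their parent, forming a single trace tree.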

Application and Services Development
East Balcony (capacity 80)
13:40
35min
Things I Wish I Knew When I Became a Manager
Greg Blomquist

When I became a manager, I had very little guidance apart from how to use the tools for salary adjustments and approving time off. What I was missing was what to talk about in a 1:1; how to have the difficult conversations around salary, promotions, or lackluster performance; what it takes to build strong teams; and, how to make sure you're taking care of yourself along the way. The goal of my talk is to help new managers understand some of the nuance of being a successful manager. I distill my 9+ years of engineering management experience into several lessons divided into three categories: Managing Teams, Managing Individuals, and Managing Yourself. While the primary audience is new managers, I believe that individual contributors, experienced managers, and people thinking about a future in management can all learn something.

Agility, Leadership, and DEI
Terrace Lounge (capacity 48)
14:15
5min
Break
Metcalf Small Ballroom (capacity 100)
14:15
5min
Break
East Balcony (capacity 80)
14:15
5min
Break
Conference Auditorium (capacity 260)
14:15
5min
Break
Terrace Lounge (capacity 48)
14:20
80min
Code review automation with GenAI
Andrey Shakirov

During the workshop, we will add GenAI-powered code review features into CI/CD pipelines.

What you will learn and take away from the session:
- GitHub Actions workflow with code review enabled
- GitLab CI/CD pipeline with code review enabled
- Bitbucket pipeline with code review enabled
- CircleCI workflow with code review automation connected to a GitLab repository
- Using LangChain ReAct agents to automate commenting on GitLab issues with code review findings
- Using LangChain ReAct agents to automate opening new Jira issues with code review findings
- Enabling LangChain ReAct agent LLM tracing to understand the Thought, Action, Observation loop

Attendees will follow step-by-step instructions that will guide them through setting up all the integrations.

GitHub Repo: https://goo.gle/genai-for-dev

Register to access classroom for hands-on session:
https://rsvp.withgoogle.com/events/genai-for-developers-boston-aug14

DevOps and Automation, Security and Compliance
Terrace Lounge (capacity 48)
14:20
35min
DOWN with Shift Left: Why You Should be Shifting DOWN into the Platform
Scott Rosenberg

We’ve all had just about enough with shift left at this point - which basically means, let’s shift all of our problems onto someone else. If therapy and cognitive load are the outcomes we’re aiming for, then shift left is great.

Otherwise, shift left needs a rethink.

In this talk, I’ll go on a rant about why shift left needs to ship out, and will introduce shifting down (the new and better version of shift left).

We’ll talk about how we can empower our platforms (that everyone is already building anyway!) to make decisions developers shouldn’t have to be making.

By shifting the complexity to our intelligent platform with its nifty gadgets and gizmos, we can offload & automate all the things - such as security scanning policies (defined by security engineers!), generation of manifests, building and rebasing images (built by DevOps!) - and leverage policy languages and security prioritization frameworks like VEX to make our decisions, and much more. So for those of us left standing after it’s all been shifted onto our plates, come learn how to get down with the shifting - into the platform.

DevOps and Automation, Security and Compliance
Metcalf Small Ballroom (capacity 100)
14:20
35min
Efficiently Deploying and Benchmarking LLMs in Kubernetes
Nikhil Palaskar

As LLMs gain mainstream adoption by businesses, operating them efficiently on Kubernetes is becoming an important area of concern. One aspect of ensuring the optimal performance of running LLM services is to first reliably measure the key runtime performance metrics for LLMs. In this talk, we will demonstrate how to benchmark LLM performance on Kubernetes with the KServe stack under various inference runtimes. We will demonstrate LLM deployment strategies, load testing across various configs, and techniques for capturing the key performance indicators such as tokens per second, time per output token, time to first token, and so forth. We will also show how to capture relevant resource consumption metrics such as GPU utilization and GPU memory consumption to aid in performance bottleneck analysis. The runtime performance metrics coupled with the evaluation metrics for LLMs can be an extremely useful tool in optimizing the performance of running LLM services in a production environment.
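For a sense of how the named indicators relate, here is a small, self-contained sketch that derives them from fabricated token arrival timestamps; a real harness would record these while streaming responses from the model server:

```python
# Deriving the key LLM serving metrics - time to first token (TTFT),
# time per output token (TPOT), and tokens per second - from token
# arrival timestamps. The timestamps below are fabricated for illustration.
request_sent = 0.00
token_times = [0.35, 0.40, 0.45, 0.50, 0.55]   # seconds; 5 streamed tokens

ttft = token_times[0] - request_sent                       # time to first token
decode_time = token_times[-1] - token_times[0]             # steady-state decoding
tpot = decode_time / (len(token_times) - 1)                # time per output token
tps = len(token_times) / (token_times[-1] - request_sent)  # end-to-end throughput

print(f"TTFT: {ttft:.2f}s  TPOT: {tpot*1000:.0f}ms  throughput: {tps:.1f} tok/s")
```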

Artificial Intelligence and Data Science
Conference Auditorium (capacity 260)
14:20
35min
Expanding on a Modern Java Developer's Toolkit
Ming Wang

Do you know how your Java workloads are performing in your cloud deployment? Are your services consuming too many resources or lagging in performance as load increases? Deep dive into your application’s performance by adding Cryostat to your arsenal of developer tools; a container-native application used to retrieve and analyze profiling data from your workloads running in Kubernetes.

Application and Services Development
East Balcony (capacity 80)
14:55
25min
Coffee Break
Metcalf Small Ballroom (capacity 100)
14:55
25min
Coffee Break
East Balcony (capacity 80)
14:55
25min
Coffee Break
Conference Auditorium (capacity 260)
15:20
35min
AI & Automation: How Generative AI and Automation Can Revolutionize Certification Study
Angela Andrews, Randy Romero, Jordan Jacobs, Kush Gupta

In the fast-paced world of technology, staying ahead of the curve is crucial. Acquiring technical certifications has become the cornerstone of professional development, but the journey from aspirant to certified expert is a challenging one. This panel discussion, "AI & Automation: How Generative AI and Automation Can Revolutionize Certification Study" delves deep into the multifaceted process of preparing for and passing technical certification exams using AI and automation.

The panel will take you through their usage of AI and automation, their time management processes, study tips, and what it takes to pass. With experienced test takers who have AWS, Azure, Red Hat, and other exams under their belts, they’ll share their rules for success and open your aperture to the benefits of using an emerging technology like generative AI, as well as using automation to fast-track your task-based study in a repeatable way.

Whether you are a newcomer to the certification world or a seasoned professional looking to upskill, our panel of experts will provide valuable insights and practical advice. Join us for a thought-provoking discussion that will empower you to tackle technical certification exams with confidence and success. "AI & Automation: How Generative AI and Automation Can Revolutionize Certification Study" is your roadmap to conquering the world of technology certifications.

Artificial Intelligence and Data Science
Conference Auditorium (capacity 260)
15:20
35min
APIs Without Borders: Exploring the world of Locationless API Management
Vamsi Ravula

Locationless API management is a paradigm shift in API governance and security. This innovative approach allows organizations to efficiently manage their APIs distributed across multiple clouds and clusters without exposing them publicly over the internet. Instead, it leverages a sophisticated Layer 7 service network, offering a secure and seamless solution for API management. By operating locationless, businesses can maintain a high degree of control and privacy over their APIs, mitigating the risks associated with public exposure while facilitating efficient communication between services across diverse deployment environments.

A Layer 7 service network provides the necessary infrastructure to establish secure connections between services, regardless of location. Through this network, APIs can be efficiently managed and accessed within a controlled environment, ensuring that sensitive data and functionalities remain protected. This locationless API management approach not only enhances security but also streamlines operations, as it minimizes the complexities of public-facing API management. As organizations continue to prioritize data privacy and security, locationless API management, powered by technologies like a Layer 7 service network, emerges as a game-changing solution that empowers businesses to securely manage and utilize their APIs without the need for public internet exposure.

Application and Services Development
East Balcony (capacity 80)
15:20
35min
Policy-Driven Supply Chain Security with Enterprise Contract
Mark Bestavros

Modern organizations are subject to ever-increasing expectations for security and regulatory compliance in their software supply chains. How can appropriate checks be performed simply and easily?

In this talk, Mark will discuss how Enterprise Contract (or EC) works as a simple decision engine that can help enforce the necessary provenance, regulatory compliance, and security requirements imposed on container images. Users can express a policy configuration and requirements that EC will enforce. This user-friendly system can verify image signatures, ensure attestations match the expected public key, check for CVE alerts, and more in an easily encoded manner. EC leverages the Open Policy Agent’s widely-used Rego rule system to provide an extensible interface for evaluating container attributes, allowing enterprises to more easily standardize on supply chain security expectations.

Additionally, Mark will discuss and show the process for building an image, verifying it using EC, and customizing the enforced policies with a live demo.
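As a shape-of-the-idea sketch only: Enterprise Contract expresses its rules in Rego via Open Policy Agent, but a policy decision of the kind described reduces to something like the following, where every field and rule name is invented for illustration:

```python
# Toy "decision engine" mirroring the shape of a supply-chain policy
# check: evaluate an image's metadata against a policy and collect
# violations. EC's real rules are Rego, and these fields are invented.
image = {
    "signed": True,
    "attestation_key": "sha256:abc123",
    "cves": ["CVE-2024-0001"],
}

policy = {
    "require_signature": True,
    "trusted_key": "sha256:abc123",
    "max_cves": 0,
}

def evaluate(image: dict, policy: dict) -> list:
    """Return the list of policy violations (empty means the image passes)."""
    violations = []
    if policy["require_signature"] and not image["signed"]:
        violations.append("image is not signed")
    if image["attestation_key"] != policy["trusted_key"]:
        violations.append("attestation key does not match expected key")
    if len(image["cves"]) > policy["max_cves"]:
        violations.append("image has open CVE alerts")
    return violations

print(evaluate(image, policy))   # prints ['image has open CVE alerts']
```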

DevOps and Automation, Security and Compliance
Metcalf Small Ballroom (capacity 100)
15:40
15min
Coffee Break
Terrace Lounge (capacity 48)
15:55
5min
Break
Metcalf Small Ballroom (capacity 100)
15:55
5min
Break
East Balcony (capacity 80)
15:55
5min
Break
Conference Auditorium (capacity 260)
15:55
80min
Stop Kubernetes' Revolving Door: A Hands-On Workshop to Secure a Kubernetes Cluster
Rey Lejano, Savitha Raghunathan

Out of the box, upstream Kubernetes is not secure by default. Attendees of this hands-on workshop will walk through the official/upstream Kubernetes Security Checklist to set up a cluster securely.

The workshop starts with an introduction to the critical security considerations for Kubernetes environments. Participants will then embark on a guided journey through practical exercises designed to implement security best practices within Kubernetes clusters.

Throughout the workshop, attendees will gain firsthand experience in securing Kubernetes environments, covering aspects such as authentication, authorization, network policies, pod security, and more. These exercises will give participants a comprehensive understanding of Kubernetes security principles and practical implementation techniques.

Attendees will walk away equipped with the knowledge and skills necessary to effectively secure Kubernetes clusters in real-world scenarios. Whether you're new to Kubernetes security or seeking to enhance your existing expertise, this workshop offers valuable insights and hands-on experience to strengthen your Kubernetes deployments against potential threats.

Cloud, Hybrid Cloud, and Hyperscale Infrastructure
Terrace Lounge (capacity 48)
16:00
16:00
35min
Enter the Brave New World of GenAI with Vector Search
Mary Grygleski

With ChatGPT taking center stage since the beginning of 2023, developers who have not yet worked with any form of Artificial Intelligence or Machine Learning may find themselves intrigued by the “maze” of new terminology, eager to learn more, or reluctant to venture into unfamiliar territory.

The truth is that, whether we like it or not, we have all been “thrust” into this new era of computing. Instead of procrastinating, let’s start by learning about Generative AI with this presentation. We will go over the history and evolution of AI and ML, then look at how the field has arrived where it is today. We will touch upon as many of the new concepts from the last 6-9 months as we can, including Generative AI (GenAI), ChatGPT, Large Language Models (LLMs), Natural Language Processing (NLP), vector databases, and the growing importance of Vector Search. We will then look at a demo of how Vector Search works behind the scenes, and discuss the benefits of this new wave of technology as well as the challenges it brings to the industry and the marketplace.

Artificial Intelligence and Data Science
Conference Auditorium (capacity 260)
16:00
35min
Managing thousands of DNS records in a GitOps fashion using Ansible and NS1
Michael Kearey

Managing massive numbers of DNS records and zones can be daunting and error-prone, even more so when many of those records and zones are managed by multiple teams at the same time. Writing simple YAML files, storing all data in Git repositories, and running changes through validation pipelines, with a generous sprinkle of automation, allows a very small team to manage this volume of data while preventing mistakes. The team can then focus on customer needs and partner with customers on their solutions, rather than spending time on repetitive tasks. It also provides great visibility and easier auditing for compliance and governance requirements.

In this session you will learn how Red Hat IT manages and publishes hundreds of zones and thousands of DNS records for all customer-facing services (and community projects) in a GitOps fashion using GitLab and Ansible Automation Platform, all hosted on NS1 (an IBM Company).
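As a sketch of the GitOps approach described above, a zone's records could be declared in a YAML file like the following. The schema here is a hypothetical illustration, not Red Hat IT's actual format:

```yaml
# Hypothetical zone definition stored in Git; a validation pipeline
# checks it, and Ansible publishes it to NS1 on merge.
zone: example.com
records:
  - name: www
    type: A
    ttl: 300
    answers:
      - 203.0.113.10
  - name: docs
    type: CNAME
    ttl: 3600
    answers:
      - pages.example.com.
```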

DevOps and Automation, Security and Compliance
Metcalf Small Ballroom (capacity 100)
16:00
35min
REST & GraphQL: Adventures in schema-first API development
Jan Koscielniak, Erik Mravec

In this talk we will introduce the architecture and development history of Pyxis - a high-traffic API that provides both REST and GraphQL interfaces, featuring a large schema, advanced validation, and modular architecture. Pyxis was initially a small REST application, but customer feature requests led us to introduce a GraphQL layer on top of the REST application. Over time, this resulted in several manually maintained schemas (API, validation, database) which presented an increasing maintenance burden within the ever-evolving application. Growing customer interest in GraphQL also forced us to rethink the REST-first approach.

We’ll talk about how we solved this problem and arrived at a single opinionated schema specification, embracing schema-first principles and automation everywhere. From this schema we generate not only API and validation schemas, but also documentation, testing data and database index specification. We’ll also share our experience with adopting and maintaining GraphQL, how it went from an experimental layer on top of REST to a modular schema-first application which is now our authoritative API layer.

This talk is for anyone interested in API design, the schema-first approach, or how to modernize an application without service interruption and API breakages. You only need basic REST and GraphQL knowledge to benefit from this talk.

Application and Services Development
East Balcony (capacity 80)
16:35
16:35
5min
Break
Metcalf Small Ballroom (capacity 100)
16:35
5min
Break
East Balcony (capacity 80)
16:35
5min
Break
Conference Auditorium (capacity 260)
16:40
16:40
35min
Building Trust with LLMs
Hema Veeradhi, Surya Pathak

Have you ever questioned the reliability of Large Language Models (LLMs)? In today’s open source world, LLMs are revolutionizing how we innovate and build applications. However, before fully embracing them in our projects and applications, it's essential to evaluate their performance. This talk is designed to be your guide through the intricate process of LLM evaluation, equipping you with practical insights to navigate the complexities of implementing LLMs in real-world applications.

We will go over the fundamentals of LLM evaluation, beginning with an examination of traditional metrics such as ROUGE and BLEU scores and highlighting their significance in assessing model efficacy. We will then delve into more specialized techniques such as model-based evaluation using LangChain criteria metrics. In addition, we will cover human-based evaluation and different evaluation benchmarks. Using a text generation demo application, we’ll compare the different evaluation techniques, highlighting their pros and cons. Throughout the session, we will address common challenges you may face when assessing the quality of your LLMs and how to overcome them.

By the end of the talk, attendees will gain a comprehensive understanding of LLM evaluation techniques.

Artificial Intelligence and Data Science
Conference Auditorium (capacity 260)
16:40
35min
RHEL 10 Roadmap
Scott McCarty

Join to learn more about the RHEL 10 roadmap and all the amazing plans for it!

Linux Distributions and Operating Systems
East Balcony (capacity 80)
16:40
35min
Working with the filesystem in a Time Series database
Roman Khavronenko

Time Series databases face the significant challenge of processing vast amounts of data. At VictoriaMetrics, we are actively developing an open-source Time Series database entirely from scratch using Go. Our average installation handles between 2 to 4 million samples per second during ingestion, with larger setups managing over 100 million samples per second on a single cluster.
In his presentation, Roman will explore various techniques essential for constructing write-heavy applications, such as:
- Understanding and mitigating write amplification.
- Implementing instant database snapshots.
- Safeguarding against data corruption after power outages.
- Evaluating the advantages and disadvantages of using a Write-Ahead Log (WAL).
- Enhancing reliability in Network File System (NFS) environments.
Throughout the talk, Roman will illustrate these concepts with real code examples sourced from open-source projects.

DevOps and Automation, Security and Compliance
Metcalf Small Ballroom (capacity 100)
17:30
17:30
120min
Containerization Guild Gathering
Jeff Ligon

A gathering is an informal space for individual contributors working in or adjacent to a specialized and timely topic to share (not critique or evaluate) ideas that are in progress, or perhaps should be.
We want to see new and experienced speakers alike. If you have been working on something in the realm of containers, submit a talk! https://forms.gle/pqdbTwvxqLJkRGyy5

The format
* Each speaker has 10 minutes to share an idea.
* Ideas can be big or small. It can be show-and-tell of a personal project or a loosely sketched out paradigm shift.
* The speaker will indicate their desired next steps at the end, which can range from organizing a dedicated breakout session to a no-op.

Gathering Etiquette
* All participants should help speakers feel heard by giving speakers their full attention.
* We are all working towards building a shared vision and a shared understanding. This is not the time to work through implementation details or identify risks. Positive vibes only.

Info for potential speakers (that's you!)
* Each speaker represents themselves, unless otherwise specified.
* Sharing an idea is not a commitment to implementing the idea.
* Due to the informal nature of the gathering, there will not be an opportunity to use slides. But feel free to get creative!
* Limit your talk to 10 minutes.

We will allocate time on a first-come, first-served basis. Although we do not curate the talks, we will ensure that topics pertain to the containerization space.

General
Metcalf Hall (capacity 300)
09:00
09:00
50min
Operations and AI
Jen Krieger

Our industry has been transformed by new technologies, radically changing how we work. Thirty years ago, deployment cycles took 3-5 years; today, they happen in seconds. This rapid pace is largely due to our focus on investing in tools that boost developer productivity.

We're still driven to create tools that simplify developers' work and make it easier for others to join the field. Many companies are heavily invested in enhancing productivity. However, the abundance of productivity tools doesn't automatically lead to increased productivity. Companies need to examine and adapt their internal systems and processes, especially in decision-making and information sharing.

This is where many companies stumble. They heavily invest in areas where developers are coding but neglect other crucial areas. The impact of these changes is significant for enterprise businesses, often with failure indicators appearing too late for recovery.

Join us at DevConf as we delve into these topics and prepare for the future of software development in an era of unprecedented speed.

General
Metcalf Hall (capacity 300)
10:00
10:00
80min
How To Win Friends & Influence LLMs (with Prompt Engineering)
James Busche

Part art, part science, prompt engineering is the process of crafting input text to fine-tune a given large language model for best effect.

Foundation models have billions of parameters and are trained on terabytes of data to perform a variety of tasks, including text, code, or image generation, classification, conversation, and more. A subset, known as large language models, is used for text- and code-related tasks. When it comes to prompting these models, there isn't just one right answer; there are multiple ways to prompt them for a successful result.

In this workshop, you will learn the basics of prompt engineering, from monitoring your token usage to balancing intelligence and security. You will be guided through a range of exercises where you will be able to utilize the different techniques, dials, and levers illustrated in order to get the output you desire from the model. Participants of this workshop will be equipped with a comprehensive understanding of prompt engineering along with the practical skills required to achieve the best results with large language models.

Artificial Intelligence and Data Science
Terrace Lounge (capacity 48)
10:00
35min
Kubernetes as a Hypervisor: Automating the lifecycle of virtual machines in Kubernetes using Ansible and KubeVirt
Andrew Block, Harsha Cherukuri

Automation is crucial for being able to run reproducible workloads regardless of environment. These days, systems are being run in a variety of locations (on premise, cloud, and at the edge) and managing them effectively is paramount. Ansible, as an automation tool, plays a key role when managing these systems. More and more workloads are being run within Kubernetes thanks to the many benefits provided by the platform. Unfortunately, not all workloads and systems are ready for containers. KubeVirt is an open source project that provides capabilities for running Virtual Machines within Kubernetes and unlocks a new set of opportunities not seen previously. However, KubeVirt only provides the primitives. Additional setup, configuration and management must be applied in order to be truly successful at scale.

Ansible simplifies many aspects of running KubeVirt within Kubernetes: it can not only help prepare the operating environment supporting KubeVirt, but also assist in all aspects of virtual machine management -- from migration and provisioning to configuration and day-two operations.

In this session, attendees will discover how Ansible, and its support for the KubeVirt ecosystem, treats Kubernetes as yet another target hypervisor.

Specifically:

  • The Ansible collections, plugins and modules available for use with KubeVirt
  • Methods for automating the migration of Virtual Machines to Kubernetes and KubeVirt
  • Common automation use cases and approaches when using KubeVirt
  • A reusable set of assets to kickstart an automation journey towards KubeVirt with Ansible

Expand your horizons by running Virtual Machines in new ways while building upon the robust automation of Ansible!
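For a flavor of what this automation looks like, a playbook using the kubevirt.core collection might resemble the sketch below. The VM name, namespace, and abbreviated spec are assumptions for illustration; see the collection documentation for the full parameter list:

```yaml
# Sketch: declare a KubeVirt VirtualMachine from Ansible.
- name: Provision a VM on Kubernetes with KubeVirt
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Ensure the VirtualMachine exists
      kubevirt.core.kubevirt_vm:
        name: demo-vm        # illustrative name
        namespace: vms       # illustrative namespace
        state: present
        running: true
        spec:                # abbreviated instance spec
          domain:
            memory:
              guest: 1Gi
```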

DevOps and Automation, Security and Compliance
Metcalf Small Ballroom (capacity 100)
10:00
35min
Modernization 101: A Beginner's Guide to Application Modernization and Methodology
Savitha Raghunathan, Shawn Hurley

Join our session as we explore Application Modernization's fundamentals, its importance, principles, and practical strategies. We'll demystify App Modernization, answering why it's crucial, what it involves, and how it benefits organizations. Delving into the 7 R's framework, folks will learn migration strategies like rehosting and replatforming as well as modernization strategies like refactoring using Konveyor, a CNCF Sandbox project. We'll showcase the importance of tailoring modernization strategies to specific personas. Whether you're a developer, IT manager, or business executive, attendees will be provided insights into how modernization can address your unique challenges and objectives. Lastly, we'll look into the future, contemplating how emerging technologies like AI could revolutionize the modernization landscape. Attendees will walk away with actionable insights, practical tools, and a clear understanding of how to embark on the App Modernization journey confidently.

Application and Services Development
East Balcony (capacity 80)
10:00
35min
Optimizing your Hybrid Cloud Operating System Experience
Yu Qi (Jerry) Zhang, Ines Qian

Hybrid Cloud is here to stay, and having a consistent Operating System management experience is crucial for the success of Hybrid Cloud platforms. As a Kubernetes or OpenShift admin, you know how important it is to ensure that the underlying OS is up-to-date, and to keep track of software versions running across multiple platforms. But how do you create, test, and deploy changes in a safe, unified, and streamlined manner? Join us as we explore the past, present, and future of the Cloud Operating System management story, and learn how to tackle these challenges head-on.

In this session, we will cover:
- How OpenShift evolved to operator-based workflows to manage the OS
- Going from single clusters to multi-cluster and multi-arch management
- Why we believe image-based Operating Systems are the next evolution for Hybrid Cloud, and how OpenShift will be using On Cluster Layering to achieve this

Join us and share your own stories as we discuss the Operating System story for your Hybrid Cloud environments.

Cloud, Hybrid Cloud, and Hyperscale Infrastructure
Conference Auditorium (capacity 260)
10:35
10:35
5min
Break
Metcalf Small Ballroom (capacity 100)
10:35
5min
Break
East Balcony (capacity 80)
10:35
5min
Break
Conference Auditorium (capacity 260)
10:40
10:40
35min
10 Cool features in Podman and Podman Desktop
Dan Walsh

This talk will cover the new features in podman 5.0 and Podman Desktop along with other technologies:
  • podmansh
  • bootc containers
  • crun-wasm
  • crun-vm
  • podman-machine
  • Building SBOMs with podman build
  • ...

Cloud, Hybrid Cloud, and Hyperscale Infrastructure
Conference Auditorium (capacity 260)
10:40
35min
In pursuit of maximum change velocity: Thinking like a site reliability engineer to improve CI
Brenton Leanhardt

Site reliability engineers have long known the benefits of balancing risk with measurable objectives. Come hear how OpenShift engineers at Red Hat have applied the core principles of SRE to continuous integration at massive scale. Learn how to adopt the pieces of it that make sense for your team.

You'll find this talk most interesting if some of the following are true:

  • You've been pressured to merge a feature and deliver the tests later
  • You have more data coming out of CI than you know what to do with
  • You live in the real world where integration and systems tests fail seemingly at random

Application and Services Development
East Balcony (capacity 80)
10:40
80min
Open Source for Open Hardware: the RISC-V Software Ecosystem (Meetup)
Jeffrey Osier-Mixon

Let's get together and discuss RISC-V and the universe forming around it. This talk covers advances in the RISC-V software ecosystem. RISC-V is a flexible and fully open hardware architecture, curated by the non-profit RISC-V International foundation (riscv.org) using the best practices of open source. While the open source community has responded with a great deal of distributed effort, there are notable gaps in support, particularly for commercial-grade applications. The industry has responded by forming the RISE Project (riseproject.dev), a collaboration among 20+ organizations to support and advance the software ecosystem by providing engineering and financial resources.

Future Tech and Open Research
Metcalf Hall (capacity 300)
10:40
35min
SBOMs: zero to hero in 25 mins!
Brian Cook

Want a crash course in SBOMs (software bills of materials)? What is an SBOM? Why do we need them? What is inside them? How can you create one? What kinds of open source tools can you use for SBOMs? Learn how we are integrating SBOM creation into build processes, some of the challenges that exist, and possible ways to deal with them. This session will pack in as much about SBOMs as possible and include some live demos.

DevOps and Automation, Security and Compliance
Metcalf Small Ballroom (capacity 100)
11:15
11:15
5min
Break
Metcalf Small Ballroom (capacity 100)
11:15
5min
Break
East Balcony (capacity 80)
11:15
5min
Break
Conference Auditorium (capacity 260)
11:20
11:20
35min
Automate Openshift Cluster deployment with RHACM and AAP
Michael DiDato, Michael Zamot, Michael Navarro

Discover the future of streamlined deployment with this captivating talk on "Deploying OpenShift with ACM and Ansible Automation Platform with Zero Touch Provisioning." Join us as we delve into the seamless integration of OpenShift, ACM, and Ansible Automation Platform, revolutionizing the landscape of IT operations. Learn how zero-touch provisioning optimizes efficiency, accelerates deployment, and ensures unparalleled reliability. Embrace the power of automation and orchestration to propel your organization to the forefront of modern infrastructure management.

DevOps and Automation, Security and Compliance
Metcalf Small Ballroom (capacity 100)
11:20
35min
Linux Kernel Dynamic CPU Isolation
Waiman Long

The traditional way to run a latency-sensitive and CPU-intensive user space workload is to use the "nohz_full", "isolcpus" and "rcu_nocbs" kernel boot parameters to statically isolate a set of CPUs from kernel disturbance at boot time. Such a workload then runs continuously on the isolated CPUs until the system is shut down, while the remaining housekeeping CPUs carry a greater burden, running the necessary kernel background activities on behalf of the isolated CPUs.
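For example, statically isolating CPUs 2-7 on an eight-CPU machine would combine the three parameters on the kernel command line like this (the CPU range is an illustrative choice):

```
nohz_full=2-7 isolcpus=2-7 rcu_nocbs=2-7
```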

In the new world of containerized computing environments, workloads come and go dynamically. Static isolation does not serve these latency-sensitive and CPU-intensive workloads well: we cannot know at boot time which CPUs to reserve, and pre-reserved isolated CPUs are a waste of resources when they are not usable for other types of workloads. Dynamic CPU isolation is a new way to create a set of isolated CPUs on demand when they are needed and release them back to the housekeeping CPU pool once they are no longer needed.

A number of kernel background activities can be offloaded from statically isolated CPUs. The latest Linux kernel is able to dynamically offload a subset of these kernel background activities, but more work still needs to be done to enable offloading of the remaining ones and make dynamic CPU isolation as close to static CPU isolation as possible.

This session will cover CPU isolation in general and the current progress in closing the gap between dynamic and static CPU isolation.

Linux Distributions and Operating Systems
Terrace Lounge (capacity 48)
11:20
35min
Open Education in the New England Research Cloud (NERC)
Isaiah Stapleton

The Open Education Project (OPE) leverages modern open source technologies to create an open environment and platform in which educators can create, publish, and operationalize high-quality open source materials, while students need no more than a web browser to access them. To achieve this, we have built an open ownership model that starts with high-performance, open data centers providing the hardware resources. This model allows us to exploit Linux and build a rich environment of tools and services to support a novel approach to educational material. It also provides a natural way of leveraging Red Hat cloud hosted platforms for running courses at scale.
In this presentation, we’ll first explore the architecture of OPE, focusing on utilizing the OPE SDK to construct courses with “books” and containers. We then look at how we have hosted large courses on NERC using Red Hat OpenShift AI. We will highlight the successful deployment of OPE on NERC OpenShift, demonstrating how NERC infrastructure supports the flexibility and accessibility of open education. We will conclude with a nod to broader NERC educational efforts beyond OPE that are under exploration.

Cloud, Hybrid Cloud, and Hyperscale Infrastructure
Conference Auditorium (capacity 260)
11:20
35min
UKELELE
Eric Munson

Modern server applications are built around an event loop where they wait for some form of input from a user and then perform some computation in response to the content or type of input received. At the heart of this event loop is a method for the application to communicate interest in certain events to the operating system and for the operating system to return a list of requested events which have occurred to the process when requested.
The abstractions built for this event handling each come with a cost to performance. In this talk we will quantify this cost and propose a system, UKELELE, which provides an abstraction for event handling that is easy to use, as quantified by the amount of change required to port to it, and more performant than what is available in Linux today. We will also explore several optimizations available to us by building on UKL to create shorter code paths for event handling and measure each of their impacts on application performance. UKELELE shows an improvement in overall throughput but more significantly a reduction in tail latency when making use of all our available optimizations.

PhD Highlight
East Balcony (capacity 80)
11:55
11:55
65min
Lunch
Metcalf Small Ballroom (capacity 100)
11:55
65min
Lunch
East Balcony (capacity 80)
11:55
65min
Lunch
Conference Auditorium (capacity 260)
11:55
65min
Lunch
Terrace Lounge (capacity 48)
13:00
13:00
35min
Building the Community Enterprise Operating System through CentOS Stream
Neal Gompa, Davide Cavalca

In 2019, the CentOS Project unveiled CentOS Stream, a distribution where the community could collaborate and contribute to the future of Enterprise Linux. Today, there's a burgeoning and vibrant community of makers and shakers that leverage CentOS Stream to support each other and the wider ecosystem.

In this talk, we'll introduce CentOS Stream, talk about what it has enabled, and dive into one of the major groups, the CentOS Hyperscale SIG, building on it and how they support the CentOS and Enterprise Linux community at large.

Linux Distributions and Operating Systems
Terrace Lounge (capacity 48)
13:00
80min
Containers BOF (Meetup)
Dan Walsh

General discussions on containers. Podman, Docker, Buildah, CoreOS, Bootc ....

Cloud, Hybrid Cloud, and Hyperscale Infrastructure
Metcalf Hall (capacity 300)
13:00
35min
Data Security and Storage Hardening in Rook and Ceph
Federico Lucifredi, Sage McTaggart

We explore the security model exposed by Rook with Ceph, the leading software-defined storage platform of the Open Source world. Digging increasingly deeper in the stack, we examine hardening options for Ceph storage appropriate for a variety of threat profiles. Options include defining a threat model, limiting the blast radius of an attack by implementing separate security zones, the use of encryption at rest and in-flight and FIPS 140-2 validated ciphers, hardened builds and default configuration, as well as user access controls and key management. Data retention and secure deletion are also addressed. The very process of containerization creates additional security benefits with lightweight separation of domains. Rook makes the process of applying hardening options easier, as this becomes a matter of simply modifying a .yaml file with the appropriate security context upon creation, making it a snap to apply the standard hardening options of Ceph to a container-based storage system.
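As an illustration of how small such a .yaml change can be, enabling encryption at rest for OSDs in a CephCluster resource built from storageClassDeviceSets is roughly a one-line toggle. The set name and count here are illustrative; check the Rook CRD reference for the exact schema:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  storage:
    storageClassDeviceSets:
      - name: set1        # illustrative device set
        count: 3
        encrypted: true   # encrypt OSD data at rest
```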

Cloud, Hybrid Cloud, and Hyperscale Infrastructure
Conference Auditorium (capacity 260)
13:00
35min
Improve Upstream Code Quality by Bringing Testing to Patch Level
Lei Yang, Yumei Huang

As downstream testers, we used to hit regression bugs when testing downstream. Normally they would get fixed upstream first, then downstream, which causes pain for both testers and developers. Testers need to run git bisect and repeat tests to find the culprit, which takes a lot of time and effort, especially when there are many patches and the testing is complex. Developers need to revisit the patches to find the root cause, and it can be hard to recall all the details when the patches were worked on quite a long time ago. Besides, we have a strict schedule for each release, and if a bug is discovered in a late phase, it is risky to get it fixed and tested in the limited time remaining, compromising our product to some extent.
For these reasons, we brought some testing from downstream to upstream. We not only test upstream code regularly, but also run tests against patches under review, engage actively with developers, and provide test results before the patches are merged into master. This has yielded many benefits for upstream, for downstream products, and for individuals. In this talk, we will share our experience of upstream testing, the effort we put in, and the benefits and achievements we have gained, along with tips for undertaking upstream testing and insights on how to cooperate better with upstream developers. We intend to call for more participation from both developers and testers in upstream testing to help improve upstream code quality together.

DevOps and Automation, Security and Compliance
Metcalf Small Ballroom (capacity 100)
13:00
35min
Porting and Generalizing Dynamic Privilege in Linux
Arlo Albelli

Dynamic Privilege is the ability for an authorized process to acquire and relinquish hardware privilege (supervisor privilege) on the fly. Recent work in our research group introduced the notion of Dynamic Privilege, and the attendant kernel mechanisms to bring it to the Linux kernel. This permits the exploration of several interesting optimizations and novel approaches to system specialization - for example, shortcutting long code paths by calling internal kernel routines.

The initial implementation was developed on x86_64. In this talk, we will present our work porting the core primitives for Dynamic Privilege to ARM64 and discuss the details of this approach. Through a comparison of the ARM64 and x86 implementations, we will seek to differentiate the functional goal of Dynamic Privilege from the underlying architectural mechanisms. In doing so, we will summarize what we have learned through the process of generalizing the implementation beyond a single architecture. Finally, we will discuss how our experiences introducing the mechanism to ARM64 inform a natural path towards a RISC-V implementation, which we will briefly introduce.

PhD Highlight
East Balcony (capacity 80)
13:35
13:35
5min
Break
Metcalf Small Ballroom (capacity 100)
13:35
5min
Break
East Balcony (capacity 80)
13:35
5min
Break
Conference Auditorium (capacity 260)
13:35
5min
Break
Terrace Lounge (capacity 48)
13:40
13:40
35min
(Less Than) 50 Ways to Build Multi-Arch Containers
Adam Kaplan, Urvashi Mohnani

Are there 50 ways to build multi-arch container images? Perhaps - which is why building containers for multiple computer architectures can be so hard. Given the various tools and methods to do this, as a developer it can be confusing to understand and figure out what is best for your use case. In this talk, we will do a deep dive on what multi-arch container images are and explore all the different ways of creating them. Get ready to learn how to make containers that can run anywhere on anything!

Cloud, Hybrid Cloud, and Hyperscale Infrastructure
Conference Auditorium (capacity 260)
13:40
35min
Minimizing Infrastructure Exposure with Open Source
Ran Ne'man

With virtually everyone in the cloud, exposing infrastructure on public networks (well established as a bad security practice) remains more popular than you would think. In this talk we'll take a look at common types of infrastructure exposure as they apply to modern cloud native operations.

There are a few popular ways to tackle exposed infrastructure. Leveraging a bastion host, while effective, still requires a lot of effort rotating credentials to avoid the risks of shared static credentials. Other methods include VPN/ZTNA solutions, but these come with a price tag, and while commercial clouds have built-in capabilities for making public infrastructure private, these bring a lot of overhead and complexity. However, this problem is solvable entirely with open source tooling built for public clouds like AWS or GCP.

In this talk we'll use a simple stack to demo how to minimize exposure on public networks, along with best practices to ensure your environments remain secure and accessible.

DevOps and Automation, Security and Compliance
Metcalf Small Ballroom (capacity 100)
13:40
35min
The hard problems: towards stronger checks on dependencies and compose inputs in Fedora
Adam Williamson

We have made substantial strides towards improving Fedora's quality and reliability through automated testing in recent years. Critical path updates are gated on extensive integration tests in openQA, and many packages have opted into gating on sanity and functionality tests via Fedora CI. dist-git commits can also be tested for buildability, installability and functionality via Fedora CI. However, there are still some substantial opportunities for improvement. Gating on installability could be enforced distribution-wide for packages that are in the critical compose path. The same could be done for reverse dependency testing, with some improvements to the testing itself. There are also many opportunities to improve testing and gating of compose inputs like comps, kickstarts, and other configuration elements, and of changes like package retirements that can also cause unexpected consequences. This talk will present ideas, plans and work towards these goals.

Linux Distributions and Operating Systems
Terrace Lounge (capacity 48)
13:40
35min
Zero-instrumentation observability based on eBPF
Peter Zaitsev

Observability is a critical aspect of any infrastructure as it enables teams to promptly identify and address issues. Nevertheless, achieving system observability comes with its own set of challenges. It is a time- and resource-intensive process as it necessitates the incorporation of instrumentation into every application.
In this talk, we will delve into the gathering of telemetry data, including metrics, logs, and traces, using eBPF. We will explore tracking various container activities, such as network calls and filesystem operations. Additionally, we will discuss the effective utilization of this telemetry data for troubleshooting.

Future Tech and Open Research
East Balcony (capacity 80)
14:15
14:15
5min
Break
Metcalf Small Ballroom (capacity 100)
14:15
5min
Break
East Balcony (capacity 80)
14:15
5min
Break
Conference Auditorium (capacity 260)
14:15
5min
Break
Terrace Lounge (capacity 48)
14:20
14:20
35min
Container life cycle management in Automotive/Edge deployments.
Yariv Rachmani, Rakesh Musalay

Explore Podman Quadlet's integration with systemd for efficient container management in automotive projects, enhancing resource utilization and management.
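
As a taste of the Quadlet approach, here is a minimal illustrative `.container` unit (file name and image are placeholders, not the talk's materials) that Quadlet translates into a regular systemd service at boot:

```ini
# /etc/containers/systemd/webapp.container
[Unit]
Description=Example containerized web app

[Container]
Image=quay.io/example/webapp:latest
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=multi-user.target
```

After a `systemctl daemon-reload`, the container can be managed like any other unit, e.g. `systemctl start webapp.service`.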

Edge, Mobile, and Automotive
Metcalf Small Ballroom (capacity 100)
14:20
35min
Learn about Linux, containers, and networking through self-hosting
Justin Sun

Self-hosting is running a cloud service on your own while keeping more control over your private data. It's also a great way to learn about Linux, open source software, and how to operate a server.

I will describe my journey to self-hosting and my use of photo, music, and video sharing applications running in containers. Before self-hosting, I relied on big tech for these services.

You'll learn how to acquire and set up a Linux server, install applications running in containers, and secure access to your server. This talk is for developers, designers, and anyone interested in learning how to operate a server for personal use that's available 24/7 and can be used as a personal cloud.

Linux Distributions and Operating Systems
Terrace Lounge (capacity 48)
14:20
35min
ORAS: Powering the next generation of Cloud Native
Andrew Block

Containers have become fundamental to cloud native and much of its success was due to Docker providing a runtime along with an extensive set of utilities that enabled anyone to easily consume and run containers. OCI artifacts are a way to publish and store content within a container registry in addition to container images and the community is just beginning to see the potential opportunities.

ORAS is a Cloud Native Computing Foundation (CNCF) sandbox project that provides a set of utilities and libraries for interacting with OCI artifacts and, like Docker in the past, equips those interested with the tools to get started and be productive.

In this session, attendees will become immersed in the world of OCI artifacts, including how ORAS plays a key role in their adoption and use. By learning the fundamentals of OCI artifacts, including their use cases, it will become clear why the ORAS project has become an essential tool when working with this technology and has been adopted by many open source projects, including those in the security and AI/ML domains.

Future Tech and Open Research
East Balcony (capacity 80)
14:20
35min
When Boring is Good - Ensuring a Consistent Installation Time for ROSA with Hosted Control Planes
Russ Zaleski

There are few cluster interactions more noticeable than installation success and time. It is our first exposure to a product, and if it does not go well it may be our last encounter with it. The goal is to make it an interaction that is boring and quickly forgotten. Come on our journey of triumphs and tribulations as we discuss how, through extensive testing, the Red Hat OpenShift Performance and Scale team was able to achieve a consistent and reliable installation time and, more importantly, a good first user experience on ROSA with Hosted Control Planes.

Cloud, Hybrid Cloud, and Hyperscale Infrastructure
Conference Auditorium (capacity 260)
14:55
14:55
25min
Coffee Break
Metcalf Small Ballroom (capacity 100)
14:55
25min
Coffee Break
East Balcony (capacity 80)
14:55
25min
Coffee Break
Conference Auditorium (capacity 260)
14:55
25min
Coffee Break
Terrace Lounge (capacity 48)
15:20
15:20
35min
Accelerating Linux Boot Time: Techniques and Strategies for Optimal Performance
Eric Curtin, Ed Chong, Brian Masney

In this session, we will explore a variety of strategies and techniques
to optimize boot time, from measuring boot performance to specific
optimizations within systemd, kernel, and filesystem configurations.
We'll cover everything from the basics to advanced methods, ensuring
that by the end, you have a comprehensive understanding of how to
achieve faster boot times on your Linux systems.

1. Measuring Boot Performance

Before diving into optimizations, it's crucial to measure and
understand your current boot performance. This helps in identifying
bottlenecks and evaluating the impact of changes.

  • Tools:
  • systemd-analyze - Provides a detailed breakdown of the boot process.

  • Steps:

  • Use systemd-analyze time to get a high-level overview of the boot time.
  • systemd-analyze blame shows the time taken by each service.
  • systemd-analyze plot > boot.svg generates a graphical representation.

2. Optimizations in systemd

Systemd, being the init system and service manager, plays a
significant role in boot time. Optimizing systemd can lead to
substantial improvements.

  • Parallelization:
  • Enable parallel execution of units where possible using
    DefaultDependencies=no in unit files.

  • Service Optimization:

  • Disable unnecessary services with systemctl disable.
  • Use systemd-analyze critical-chain to identify and minimize the
    impact of critical services.
  • Implement on-demand services using socket activation.
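
For example, an on-demand service can be expressed with a socket unit pair like the following sketch (unit names and the daemon path are illustrative; the daemon must support socket activation). systemd listens on the port and starts the service only on the first connection, keeping it out of the boot critical path:

```ini
# example.socket -- systemd owns the listening socket at boot
[Socket]
ListenStream=8080

[Install]
WantedBy=sockets.target

# example.service -- started lazily when the first connection arrives
[Service]
ExecStart=/usr/bin/example-daemon
```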

3. Kernel and Initramfs Optimizations

Optimizing the components that are loaded during the early boot phase
can significantly reduce boot time.

  • Building Components:
  • Directly in the Kernel: Compiling essential components
    directly into the kernel (using make menuconfig) avoids the overhead
    of loading modules during boot.
  • As Modules in the initramfs: Use dracut to include only the
    necessary modules in the initramfs, reducing its size and load time.
  • Modules in the Rootfs: Delay the loading of non-critical
    modules until after the root filesystem is mounted to expedite early
    boot stages.

4. Expedited Read-Copy Update (RCU) Mechanisms

  • RCU Boosting:
  • Enable RCU boosting to prioritize RCU callback threads, reducing
    the time spent in the quiescent state.

5. Efficient Read-Only File System (erofs)

  • Advantages of erofs:
  • EROFS (Enhanced Read-Only File System) offers faster access times
    due to its optimized compression and reduced metadata overhead.

  • Implementation:

  • Convert static parts of your root filesystem to use EROFS,
    improving read performance during boot.

6. Initramfs Minimization

  • Size Reduction:
  • Minimize the initramfs size by stripping out unnecessary modules
    and files, using tools like dracut with the --omit and --add
    options.

Conclusion

Optimizing boot time requires a multi-faceted approach, involving
careful measurement, fine-tuning of systemd, strategic kernel and
initramfs configurations, and leveraging advanced filesystem
technologies. By systematically applying these techniques, you can
achieve a significant reduction in boot times, enhancing the overall
performance and responsiveness of your Linux systems.

Edge, Mobile, and Automotive
Metcalf Small Ballroom (capacity 100)
15:20
35min
Enhancing Infrastructure as Code (IaC) with AI: A Novel Approach to Generating Terraform Configurations for Google Cloud Platform
George Trammell, Max Karambelas

In the rapidly evolving domain of cloud computing, the ability to efficiently deploy and manage infrastructure is paramount. This project introduces a novel approach to producing Infrastructure as Code (IaC) by leveraging a custom retriever architecture in conjunction with a Large Language Model (LLM) to automatically generate single-file Terraform configurations for Google Cloud Platform (GCP). Our approach combines the capabilities of multimodal databases and self-querying retrievers through Retrieval Augmented Generation (RAG) in order to integrate both text-based documentation and Terraform code samples, thus enhancing an LLM's understanding and generation of complex cloud infrastructures.

At the core of our methodology is the development of a specialized database architecture, comprising a vector database for semantically rich documentation and a retriever-friendly filesystem for Terraform code samples and their GCP product interrelations. This dual-database setup facilitates precise semantic searches and complex relational queries, enabling an LLM to access a rich corpus of GCP knowledge and examples. We further expand our generative capabilities by incorporating Terraform modules into our database, addressing the challenge of understanding and generating configurations that leverage both basic resources and complex preexisting solutions for comprehensive GCP project deployments.

Our database architecture is intricately linked to an LLM through a RAG application, in which the data first passes through an embeddings and vector storage model. This setup serves as a dynamic context window for a specialized retriever which assigns detailed metadata to each document within the database. The generated metadata summarizes the expected use case for that document and explains how its contents can be integrated into a larger Terraform project. Utilizing this expanded context, our structured conversation chain can adapt user queries to optimally convey the desired project specifications to an LLM, enhancing the model's ability to generate fully complete and actionable Terraform configurations.
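
The retrieval step of a RAG pipeline like the one described above can be sketched in a few lines. This is a toy illustration of embedding-based lookup only; the embedding function and corpus below are stand-ins, not the project's actual models or database:

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: a bag-of-words frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# A miniature stand-in for the vector database of documentation snippets.
corpus = {
    "gcs_bucket": "google storage bucket holds objects in gcp",
    "compute_vm": "google compute engine runs virtual machine instances",
}

def retrieve(query, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(corpus[d])),
                    reverse=True)
    return ranked[:k]

# The retrieved context is prepended to the user request before it
# reaches the LLM -- the essence of Retrieval Augmented Generation.
top = retrieve("create a storage bucket in gcp")[0]
prompt = f"Context: {corpus[top]}\nUser request: create a bucket"
```

A production retriever would replace the bag-of-words vectors with learned embeddings and attach the metadata summaries the abstract describes, but the ranking-then-prompting flow is the same.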

Further significance lies in our project’s potential to radically simplify cloud infrastructure management, making it more accessible to developers and organizations by reducing manual coding requirements and barriers to entry. By automating the generation of Terraform configurations, we aim to accelerate deployment times, decrease potential human error, and democratize advanced cloud infrastructure setups. Our project’s retriever architecture is also of notable interest, as this design could be easily adapted to generate Terraform for AWS, Azure, or other IaC-compatible platforms with many distributed services.

This project stands at the intersection of AI and cloud computing, offering a user-friendly solution to the complexities of modern cloud infrastructure management. We aim to show that AI can pave the way for more intelligent, efficient, and accessible cloud computing paradigms.

Future Tech and Open Research
East Balcony (capacity 80)
15:20
35min
Podman 5 & Long-Term Software Maintenance
Matthew Heon

The Podman project is now 7 years old, and just released a new major version, Podman 5, with a healthy number of new features - and many breaking changes. This talk will dive into the balance between maintaining compatibility and making changes that break users. Attendees will learn how the Podman team decides to do major releases and how major and breaking changes are evaluated through the lens of the recent Podman 5 release.

Cloud, Hybrid Cloud, and Hyperscale Infrastructure
Conference Auditorium (capacity 260)
15:20
35min
bpftrace: what is it, what's new, and where is it going
Jordan Rome

This talk will re-introduce bpftrace, a high-level tracing language for Linux BPF used for debugging, observability, and a variety of other use cases. It's designed to be a quick way to utilize the powerful BPF Linux subsystem. This talk will also go over new features and fixes and dive into the future plans for bpftrace, including ahead-of-time compilation, improved C++/Python language support, user functions, and trace format output.

Linux Distributions and Operating Systems
Terrace Lounge (capacity 48)
15:55
15:55
5min
Break
Metcalf Small Ballroom (capacity 100)
15:55
5min
Break
East Balcony (capacity 80)
15:55
5min
Break
Conference Auditorium (capacity 260)
15:55
5min
Break
Terrace Lounge (capacity 48)
16:00
16:00
35min
Jamming with CRI-O and crun: Facilitating the Convergence of AI, WASM, and Kubernetes
Peter Hunt

Discover how integrating WebAssembly (WASM) in Kubernetes, facilitated by CRI-O and crun, enables the effortless deployment of Generative AI models. Learn about the ongoing advancements in CRI-O + WASM integration. Explore potential optimization techniques for efficient deployment of WASM workloads in Kubernetes. This talk is ideal for those interested in the intersection of WASM, Kubernetes, and AI.

Future Tech and Open Research
East Balcony (capacity 80)
16:00
80min
LLMs 101: Introductory Workshop
Hema Veeradhi, Surya Pathak, Aakanksha Duggal

Are you curious to learn about Large Language Models (LLMs), but unsure how and where to begin? This workshop is designed specifically with you in mind. LLMs have emerged as powerful tools in natural language processing, yet their implementation poses challenges, particularly in managing computational resources effectively.

During this workshop, we will delve into the fundamentals of LLMs and guide you in selecting the appropriate open source models for your requirements. We will discuss the concept of self-hosted LLMs and introduce containerization technologies such as Kubernetes, Docker, and Podman. Through illustrative use cases like RAG applications, text generation, or speech recognition, you will learn how to set up LLMs locally on your laptop and build container images for the models using Podman. We will also be exploring model serving and inference methods, including interaction with the model via a simple UI application. Moreover, the workshop will cover model evaluation techniques and introduce various metrics that can be utilized to effectively measure the performance and quality of model outputs.
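
A containerized model server of the kind the workshop describes might start from a Containerfile as minimal as this sketch (the base image, file names, and serve command are all placeholders, not the workshop's materials):

```dockerfile
# Containerfile -- build with: podman build -t my-llm .
FROM registry.access.redhat.com/ubi9/python-311
WORKDIR /app
COPY requirements.txt model/ ./
RUN pip install -r requirements.txt
EXPOSE 8000
CMD ["python", "serve.py", "--model", "./model", "--port", "8000"]
```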

Attendees will gain practical knowledge and skills to effectively harness the capabilities of LLMs in real-world applications. They will understand the challenges associated with managing computational resources and learn how to overcome them. By the end of the workshop, participants will be equipped with the tools to set up and deploy LLMs, evaluate model performance, and implement them in various natural language processing tasks.

Artificial Intelligence and Data Science
Terrace Lounge (capacity 48)
16:00
35min
Linux in Cars! CentOS AutoSD, SDVs, and more
Jeffrey Osier-Mixon

This presentation covers the current state of in-vehicle automotive efforts for Linux. Topics will include the automotive industry transition to software-defined vehicles (SDVs), the CentOS Automotive SIG and its Automotive Stream Distribution (AutoSD), and the many open source communities in automotive such as Eclipse SDV and SOAFEE.

Edge, Mobile, and Automotive
Metcalf Small Ballroom (capacity 100)
16:00
35min
Your own personal supercomputer within 15 minutes or less
Jason C. Nucciarone

Your own personal supercomputer? Within 15 minutes? Are you crazy?

I promise you that the answer is no. What if you could have a supercomputing system that ships with all the necessary components – storage, networking, identity, orchestration, and more – wherever you want it? And the system could easily be deployed on your laptop, home lab, or public/private/hybrid cloud in a short amount of time? The barrier between general purpose cloud computing and traditional high-performance computing (HPC) is coming down with the increasing need for HPC systems to be capable of supporting complex, heterogeneous workloads, and the need for cloud systems to be more efficient for resource-intensive tasks such as training large language AI models on massive, distributed datasets. Rather than painstakingly evaluating each cloud-based or on-premise HPC platform and the complementary tools offered by each vendor, what if you could just use a single, open source project that works both in the cloud and on-premises? And that project holds true to the straightforward 15-minute deployment time promise? Well… there is such an open source project...

In this talk, you will learn about Charmed HPC, an open source HPC infrastructure stack being developed by the Ubuntu HPC community team. We will explore how all the services that compose Charmed HPC are integrated together using the Juju orchestration engine, and how common life-cycle events such as compute node registration and filesystem provisioning are automatically handled by Charmed HPC. You will also learn how Charmed HPC enables you to take control of where your supercomputing system is deployed and how you can leverage multiple cloud platforms such as OpenStack, LXD, GCP, AWS, or Azure... within 15 minutes!

At the end of the talk, we will demo a development preview of Charmed HPC showcasing the features we are actively developing. For example, we will demo the new CephFS storage provider and show Open OnDemand in action as the web-based user interface for our test cluster. Lastly, we will outline our current roadmap for Charmed HPC’s development - such as adding both an observability and identity + access management platform - and opportunities for how the open source community can get involved with the Ubuntu HPC community team.

Cloud, Hybrid Cloud, and Hyperscale Infrastructure
Conference Auditorium (capacity 260)
16:35
16:35
5min
Break
Metcalf Small Ballroom (capacity 100)
16:35
5min
Break
East Balcony (capacity 80)
16:35
5min
Break
Conference Auditorium (capacity 260)
16:40
16:40
35min
Everybody is talking about Confidential Computing - this is the minimum Developers should know
Klaus Heinrich Kiwi, Yash Mankad

Confidential Computing is an emerging set of technologies that are coming to the Linux Platform and Cloud providers with the availability of Intel TDX, AMD SEV-SNP and other similar technologies. In this primer, you'll learn more about WHAT Confidential Computing is, WHY is it important, and HOW the upstream development is going for what is a lot more than just hardware-enablement, but involves collaboration of an entire stack from the Hardware Root-of-Trust up towards Remote Attestation. If you would like to learn about concepts such as FHE (Fully Homomorphic Encryption), SMPC (Secure Multi Party Computation), and TEEs (Trusted Execution Environments) as well as SVSMs (Secure VM Service Modules), vTPMs (virtual Trusted Platform Modules), UKI (Unified Kernel Image) and how this alphabet soup makes any sense together for Confidential Computing, this session is for you! We will also talk briefly about communities, how to engage, and industry bodies such as the Confidential Computing Consortium (A Linux Foundation project) and their roles in the ecosystem.

Future Tech and Open Research
East Balcony (capacity 80)
16:40
35min
Freedom From Interference (FFI) on Containers - Paving the Way for Uninterrupted Car Operations
Douglas Schilling Landgraf, Yariv Rachmani

In this presentation, we delve into the concept of Freedom From Interference (FFI) on containers with Podman and its profound implications for the next generation of vehicles and embedded devices. FFI on containers is set to revolutionize the way cars, airplanes, and embedded devices work.

Join us as we demonstrate how the next generation of automobiles, aircraft, and embedded devices will operate without disruption, with efficiency, safety, and remarkable performance. Get ready to witness the future of mobility and embedded systems, where interference is a distant memory and the possibilities are limitless.

Edge, Mobile, and Automotive
Metcalf Small Ballroom (capacity 100)
16:40
35min
Jumpstarter: Cloud-Native Hardware in the Loop
Nick Cao

Jumpstarter helps you test your software stack on your hardware stack in CI/CD pipelines and streamlines your development workflow. While traditional cloud software has been tested this way for a long time, testing software for edge devices has been a challenge: in many cases, emulators for the hardware – the GPU, the specific sensors, and so on – are not available. In this talk we'll look into the architecture and implementation of Jumpstarter, and how it fits into the Hardware in the Loop ecosystem.

Cloud, Hybrid Cloud, and Hyperscale Infrastructure
Conference Auditorium (capacity 260)
18:00
18:00
180min
Conference Party at Bleacher Bar!

Join us for a night of fun at Bleacher Bar located right under the Fenway stadium! It is about a 20 mins walk from the conference venue.
The party is open to all on a first come, first served basis, so get there early! Conference badges and ID are required to enter.

Address: 82A Lansdowne St, Boston, MA 02215.

Details TBA

General
Offsite Location
09:30
09:30
50min
Student and Intern Showcase!
Urvashi Mohnani, Meera Malhotra, Austin Jamias

Start the day off with a view into real student experiences with open source! Undergraduates from the UMass Lowell SoarCS summer program will present summaries of the fun and amazing projects they have worked on over the summer. A few Red Hat interns will also speak about their ongoing projects and experiences as interns!

Join us to learn more about what the future generation is up to in the open source world!

SoarCS Projects

  • QuickSwitch - Phi Nguyen & Cristian Cannella
  • Tileset Maker - Eevie Booth
  • Haunted Mansion Escape - Himani G., Kelby C., & Priscilla V.
  • Foxfire Alchemy - Armando Oritz
  • ALPR Prototype - Lourenco DaSilva, Vettri Velmurugan, & Jack Scholander
  • PYphone - Om Patel, Jayam Patel, & Shubh Patel
  • Scramble - Molly Cao

RH Intern Projects

The Open Education Project: Using Technology to Improve Classroom Environments - Meera Malhotra
This presentation covers the Open Education Project (OPE), an open source project started by Red Hat Research that provides tooling for both students and professors. OPE is a command line tool that professors can create customizable containerized lab environments to teach computer science with, as well as tooling to author their own hosted textbooks with. The talk will go over how the tool is used, how it can assist professors in improving classroom environments, and how professors (or anyone interested) can create content with the tool. It will cover a student’s perspective who has both used the tool to learn programming and later contributed to the development of the tool, and share textbook content created from the tool.

ESI UI on OpenStack Dashboard - Austin Jamias
Elastic Secure Infrastructure (ESI) is a resource allocation and initial provisioning tool used in the Mass Open Cloud. Austin is a Boston University undergraduate and a Red Hat intern in the Research group.

General
Metcalf Hall (capacity 300)
10:30
10:30
35min
Detoxification of LLMs using TrustyAI Detoxify and HuggingFace SFTTrainer
Christina Xu

Detoxification of large language models is challenging because it requires the curation of high quality, annotated data that needs to align with human values. The standard protocol for LLM detoxification is to perform prompt tuning and then supervised finetuning on a pretrained model. While HuggingFace’s Supervised Finetuning Trainer (SFT) streamlines this protocol, it still requires high quality, human aligned training data which is expensive to curate. TrustyAI Detoxify is an open source library for scoring and rephrasing toxic content generated by LLMs.

During this talk, Christina will show how TrustyAI Detoxify can be leveraged to rephrase toxic content for supervised fine-tuning. Attendees will learn the capabilities of TrustyAI Detoxify and how it can be used with HuggingFace’s SFT to optimize detoxification.

Artificial Intelligence and Data Science
Conference Auditorium (capacity 260)
10:30
35min
KubeArchive: Don’t use Kubernetes as a Database
Sam Koved, Greg Allen

KubeArchive aims to archive Kubernetes objects to a database and provide an API to search and query data from the archived objects. You cannot create infinite objects in a Kubernetes cluster. Objects like Jobs, PipelineRuns, and TaskRuns stay on your cluster after their workload completes. The data they contain can be difficult to search and query, and they take up valuable space that could be used by active workloads on your cluster. Deleting them, however, means that you lose logs and other related data. The Tekton community solved this problem by archiving "*Run" objects using Tekton Results, allowing you to store these objects in a database instead of on the cluster. This lets you prune them from the cluster and makes it easier to perform complex queries and analysis on them. What if we could expand this pattern to any Kubernetes object?

KubeArchive is a system for archiving Kubernetes objects to a database in real time. It also provides an API that makes it easy to search and query archived objects from the database. This talk will cover everything you need to know to get started using KubeArchive. We will demo how to set up KubeArchive to archive different resources on your cluster, as well as how to search and query data stored in KubeArchive. At the end, we will discuss the roadmap and future plans for KubeArchive.
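
The core pattern, moving completed objects out of the cluster into a queryable store, can be illustrated with a toy sketch. This is not KubeArchive's actual schema or API, just the idea of archiving manifests as rows you can query without the API server:

```python
import json
import sqlite3

# Toy archive: store Kubernetes object manifests as JSON rows.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE archive (
    kind TEXT, namespace TEXT, name TEXT, manifest TEXT)""")

def archive(obj):
    """Persist a completed object so it can be pruned from the cluster."""
    db.execute("INSERT INTO archive VALUES (?, ?, ?, ?)",
               (obj["kind"], obj["metadata"]["namespace"],
                obj["metadata"]["name"], json.dumps(obj)))

def query(kind, namespace):
    """Query archived objects without touching the API server."""
    rows = db.execute(
        "SELECT manifest FROM archive WHERE kind=? AND namespace=?",
        (kind, namespace))
    return [json.loads(r[0]) for r in rows]

# A finished Job is archived, then could be safely deleted in-cluster.
archive({"kind": "Job",
         "metadata": {"namespace": "ci", "name": "build-42"},
         "status": {"succeeded": 1}})
```

A real system would watch the API server for completion events and write to a durable database, but the archive-then-prune flow is the same.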

Cloud, Hybrid Cloud, and Hyperscale Infrastructure
Metcalf Small Ballroom (capacity 100)
10:30
35min
Team cohesion and team efficiency in software development
Karl Johan Grahn

How does remote work in software development affect team efficiency? This question intrigued us, especially during the COVID-19 pandemic when it became necessary for people who could work remotely to work remotely. Despite the hardships and loss of lives, the pandemic created a unique setting for investigating how a majority of the workforce behaves during remote work.

Using regression analysis, we examined the correlation between team cohesion and team efficiency before and during the pandemic. The Language Style Matching (LSM) algorithm evaluated team cohesion based on verbal mimicry in chat content. Team performance was evaluated based on git contributions and tickets done. Team efficiency was analyzed via Data Envelopment Analysis (DEA).

The result is that when efficiency is correlated with the LSM score (cohesion) for teams working remotely, there is a significantly strong positive correlation, suggesting cohesion plays an important role in team efficiency when working remotely.

Agility, Leadership, and DEI
East Balcony (capacity 80)
10:30
80min
Workshop: Supercharging the Developer Workflow with AI
Beau Morley, Andy Braren

We’re rapidly entering a new era of AI-enhanced software development and AI-infused applications. As we move into this new age of AI, how might we use this new superpower to make our lives as developers easier?

In this design thinking workshop we’ll leverage recent user research conducted by Red Hat’s User Experience Design Team to brainstorm how AI could help address real developer pain points. You'll have the option to share your own experiences working with and using AI, and contribute solution ideas in a fun, interactive format.

Artificial Intelligence and Data Science
Terrace Lounge (capacity 48)
11:05
11:05
5min
Break
Metcalf Small Ballroom (capacity 100)
11:05
5min
Break
East Balcony (capacity 80)
11:05
5min
Break
Conference Auditorium (capacity 260)
11:05
80min
Konflux Community BOF
Brian Cook

If you are a current Konflux user, a future one, a contributor or just curious, come hang out and talk about Konflux, build + supply chain security, our mission, roadmap etc.

DevOps and Automation, Security and Compliance
Metcalf Hall (capacity 300)
11:10
11:10
35min
Delegation is a Love Language
Jeff Ligon

The act of delegating work to someone else can be a gift or a curse. Come learn how to make it more useful for you and the delegate. We'll walk through examples of doing it well and doing it badly, and examine the consequences of both. You might even leave the talk with an assignment designed to help you! Homework is optional, of course.

Agility, Leadership, and DEI
East Balcony (capacity 80)
11:10
35min
Nmstate Polyglot Model - Translating Natural Language into Nmstate States
Wen Liang

Nmstate is a network management tool that is particularly focused on reporting and configuring network settings on hosts in a declarative manner. It uses a state-driven approach where the desired state of the network settings is described in the YAML file, and applying the desired state using the tool ensures the system's actual state matches the desired state presented in the YAML file.

While it is rather easy for users to describe in natural language what they would like to configure, it can be hard to find the right options or specify the right syntax in the desired states. AI provides a way to translate natural language into Nmstate state in real time with low latency, while strictly conforming to the Nmstate setting schema. Implementing the Nmstate Polyglot model can enhance the user experience, making it more intuitive and less daunting than traditional network management tools. What's more, the Polyglot model can significantly speed up the process of network configuration by reducing the complexity and time required to write and debug YAML configuration files manually. Last but not least, this model can be easily integrated into larger, automated systems, facilitating broader network management tasks within IT environments, possibly through voice commands.
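
For reference, a desired state of the kind the model would generate looks like the following sketch (the interface name and address are illustrative):

```yaml
# Apply with: nmstatectl apply eth0-static.yml
interfaces:
  - name: eth0
    type: ethernet
    state: up
    ipv4:
      enabled: true
      dhcp: false
      address:
        - ip: 192.0.2.10
          prefix-length: 24
```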

Artificial Intelligence and Data Science
Conference Auditorium (capacity 260)
11:10
35min
Why Open Source and Web3 need to find (back) together
Daniel Riek

Free Software started out as a movement to empower the downstream user and give end-users a way to be in control of their own technology. But with the rebranding to Open Source and its commercial success, this aspect has taken a backseat to corporate interests. In addition, cloudification has driven a general trend toward centralization that has not spared Open Source. At the same time, a rift has appeared between very much Open Source-based decentralization movements – namely the Fediverse and Web3 – and mainstream Free and Open Source Software development. Some of this rift is a reflection of different technology philosophies, some of a broader cultural rift. This has practical consequences for Free and Open Source Software development and its growing dependency on proprietary centralized services, as well as its ability to support the original goal of technological self-sovereignty.

Cloud, Hybrid Cloud, and Hyperscale Infrastructure
Metcalf Small Ballroom (capacity 100)
11:45
11:45
5min
Break
Metcalf Small Ballroom (capacity 100)
11:45
5min
Break
East Balcony (capacity 80)
11:45
5min
Break
Conference Auditorium (capacity 260)
11:50
11:50
35min
Building the Code, Nurturing the People – Overcoming Challenges in Open Source Community Management
Carles Arnal

Maintaining a vibrant and successful open-source project involves more than just lines of code. In this talk, I'll share my experiences as a Principal Software Engineer in the Apicurio community, focusing on the often overlooked aspects of community management. From fostering a welcoming and inclusive environment to managing expectations, we'll dive into the following challenges:

Engagement: Attracting new contributors and maintaining sustained participation.
Communication: Navigating various time zones and ensuring effective information flow.
Conflict Resolution: Managing disagreements and fostering a collaborative environment.
Decision-making: Balancing transparency with the need for clear project direction.
Sustainability: Finding models that encourage long-term health and project growth.

I'll provide insights and real-world examples from Apicurio, offering strategies for overcoming these hurdles and fostering a thriving open-source community. Whether you're a code contributor, project maintainer, or simply passionate about collaborative software development, this talk will highlight the vital work of community management.

Open Source Success Stories
Terrace Lounge (capacity 48)
11:50
35min
Integrating DEI values into business strategies and driving values
Apurva Bhide, Rajan Shah

Diversity fosters innovation, adaptability, and emphasizes the importance of inclusive leadership and equitable participation. Integrating diversity, equity, and inclusion (DEI) into business strategies not only aligns with ethical principles, but also drives financial success by unlocking the full potential of employees, attracting diverse talent & customers, and fostering innovation and market expansion.

Aligning DEI helps companies also understand their (diverse) markets, appeals to a broader customer base, and in some jurisdictions is increasingly becoming a legal imperative – all of these factors lead to increased sales and revenue and better decision making. Diversity also mirrors how software is built in the open source software community.

This presentation covers how DEI positively impacts organizational performance, customer satisfaction, and long-term success and competitiveness in a rapidly evolving global landscape. It also focuses on how to prevent tokenism, how not to consider these points only for optics but recognize individual capabilities, and address unconscious gender bias (benevolent sexism).
A standout example of community empowerment is seen in initiatives such as Red Hat’s Women’s Leadership Community (WLC), especially in WLC-India, where male members actively support the committee’s mission as vice chair and core members.

Agility, Leadership, and DEI
East Balcony (capacity 80)
11:50
35min
Serverless Java in Action: Cloud Agnostic Design Patterns and Tips
Daniel Oh

You've probably seen how to create a Function-as-a-Service with one of the cloud providers, but if this is all you know about Serverless, prepare to have your mind blown! In this session we'll show you how to create a production-grade, cloud-agnostic, event-driven serverless solution with Quarkus, a Java stack optimized for fast startup and small footprint; and Knative, an open source community project for deploying, running and managing serverless applications on Kubernetes. Say goodbye to vendor lock-in and hello to Supersonic Subatomic Java-based Serverless bliss!

Cloud, Hybrid Cloud, and Hyperscale Infrastructure
Metcalf Small Ballroom (capacity 100)
11:50
35min
Who Watches the Watchmen? Understanding LLM Benchmark Quality
Erik Erlandson

The ecosystem of Large Language Models (LLMs) is extremely active, with new models being released every week. LLM leaderboards have emerged as a popular resource on model hubs, such as Hugging Face, where purveyors of new models can measure up against their competition, and model users can evaluate new alternatives for their business needs.

Leaderboards rank models using one or more popular LLM benchmarks: data sets with queries and expected answers that LLMs can be tested against. But how well do these benchmarks really measure model effectiveness? There are many ways for a user to ask a question, and many ways to express a correct (or incorrect) answer! There are also multiple requirements for LLM outputs besides factual correctness, including providing responses that do not harm human users or providing answers that are socially sensitive. Measuring model quality in any of these ways must contend with the practically infinite variations of human language. How robust is a model with respect to changes in a query? How well does a benchmark cover the full range of conceivable human inputs? Does a good score on a benchmark translate into good performance for your specific application?
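The brittleness of answer matching is easy to see in miniature. The snippet below contrasts exact-match grading with the kind of lowercasing, punctuation-stripping, and article-dropping normalization many QA benchmarks apply before comparing answers. It is a toy illustration, not the scoring code of any particular leaderboard:

```python
import re

def exact_match(pred: str, gold: str) -> bool:
    """Strictest grading: the prediction must equal the reference verbatim."""
    return pred.strip() == gold.strip()

def normalize(s: str) -> str:
    s = re.sub(r"[^\w\s]", " ", s.lower())   # drop punctuation
    s = re.sub(r"\b(a|an|the)\b", " ", s)    # drop English articles
    return " ".join(s.split())               # collapse whitespace

def normalized_match(pred: str, gold: str) -> bool:
    """Looser grading: compare after normalization, as many harnesses do."""
    return normalize(pred) == normalize(gold)

gold = "Paris"
answers = ["Paris", "paris.", "The capital is Paris"]
print([exact_match(a, gold) for a in answers])       # → [True, False, False]
print([normalized_match(a, gold) for a in answers])  # → [True, True, False]
```

Note that even the normalized grader still rejects the third answer, which is correct but phrased as a sentence - one concrete way a benchmark score can undercount a model's real capability.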

In this talk, Erik Erlandson will take the audience on a tour of the multiple dimensions of model performance and quality, and the popular benchmarks for measuring them. He will explain how benchmarks work, what they are measuring, and what they might not be measuring. Attendees will leave armed with the knowledge to go beyond the LLM leaderboards and ask smart questions about the models they are choosing.

Artificial Intelligence and Data Science
Conference Auditorium (capacity 260)
12:25
12:25
60min
Lunch
Metcalf Hall (capacity 300)
12:25
60min
Lunch
Metcalf Small Ballroom (capacity 100)
12:25
60min
Lunch
East Balcony (capacity 80)
12:25
60min
Lunch
Conference Auditorium (capacity 260)
12:25
60min
Lunch
Terrace Lounge (capacity 48)
13:25
13:25
80min
Adopting Kubernetes
Karl Johan Grahn

This meetup is for adopters of microservices and Kubernetes, in particular OpenShift. Topics to be discussed include:

  • Have you built your own Kubernetes platform?
  • Have you used a managed service?
  • Based on your experience, what would you have done differently if you could do it again?
DevOps and Automation, Security and Compliance
Metcalf Hall (capacity 300)
13:25
35min
Building the Responsible Workforce of Tomorrow through PIT
Colette Basiliere

Public Interest Technology (PIT) is an emerging field that asks us to create technical solutions that center the lives of people, equity, inclusion, and accountability to address pressing problems. While many technical solutions have been suggested for the problems our communities face, successful technical solutions will need to embody PIT values in order to benefit society. Public Interest Technology - New England (PIT-NE) is a new consortium that brings together leaders from academia, industry, government, and non-profits to co-design solutions and create programming to teach fundamental PIT skills to the workforce of tomorrow to ensure technology is used for good.

This interactive talk will introduce the practice of PIT and how everyone can critically assess technology's ethical and social implementation across all sectors. We will also look at a case study of a summer program where undergraduate students from across the region come together to design solutions for project partners while learning PIT skills. While PIT continues to grow, all technologists can learn how to approach problems like a public interest technologist to ensure technology works to serve and protect those who use it by delivering better outcomes.

Open Source Success Stories
Terrace Lounge (capacity 48)
13:25
35min
How to Build Collaboration and Influence Open Source Projects
Michael McCune

Perhaps you’ve joined a project, maybe several. You’ve made a few commits and started to recognize the voices of project maintainers in the office hours. You might be contributing as part of a work assignment or as a passion project. Now you might have questions such as:

  • How do I take my skills to the next level?
  • How do I propose and implement large changes to the project?
  • How do I become one of the maintainers?

In this talk, Michael will address these questions, providing examples gleaned from more than a decade of open source contribution. Michael will focus on skills you can develop to improve your effectiveness and presence in open source communities. Expect to walk away from this talk with new tools for driving change and a boost to your F/OSS enthusiasm.

Agility, Leadership, and DEI
East Balcony (capacity 80)
13:25
35min
Scale your Batch / Big Data / AI Workloads Beyond the Kubernetes Scheduler
Anish Asthana, Kevin Postlethwait

Whether you want to run distributed AI model training or big data processing on Kubernetes, chances are you’ll face some challenges when scaling your workloads, like resource fragmentation, lack of all-or-nothing semantics for quota management and auto-scaling, low throughput, and limited priority and preemption management. The Kubernetes scheduler has historically been designed to orchestrate containers of (micro)services, rather than workloads of highly coupled, heterogeneous, and resource-intensive batch processes.
There has recently been a Cambrian explosion of projects in the Kubernetes ecosystem that have innovated to solve these challenges, such as Karmada, Koordinator, Kueue, MCAD, Volcano, and YuniKorn. In this session, we’ll compare these projects, review their design choices, and discuss their pros and cons, so you’ll have a better understanding of the landscape and be able to decide which one best suits your needs when it comes to achieving better utilization of your Kubernetes clusters for your batch workloads.

Cloud, Hybrid Cloud, and Hyperscale Infrastructure
Metcalf Small Ballroom (capacity 100)
13:25
35min
Store AI/ML models efficiently with OCI Artifacts
Tom Coufal

Not only are AI/ML models resource hungry at runtime, they also tend to take up a lot of storage space. This may not be obvious, as their raw storage footprint is an order of magnitude smaller than data sets or other associated data. However, it becomes an obstacle when the same models are frequently updated, versioned and progressively distributed over the network to edge devices for inference. Of course, data versioning, deduplication and easy transfer is not a new problem to solve. In particular, the OCI standard does a very good job of solving these problems and has a lot of lessons learned from more than a decade of development.

In this talk we'll explore how OCI Artifacts can be used to efficiently store, version and distribute AI/ML models. We'll look at how we can break an AI model into atomic units, store them in the OCI registry, and later reassemble them locally on the target device. And when an update comes, only the difference is distributed.
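The payoff of content-addressed storage is easy to demonstrate in miniature. The sketch below is a deliberately tiny stand-in for what an OCI registry does with blobs: pieces are stored by digest, so when a new model version arrives, only the pieces whose digests are unseen need to cross the network. The chunk size and model bytes are made up for illustration:

```python
import hashlib

CHUNK = 8  # bytes; absurdly small, just to make the dedup visible

def chunks(blob: bytes) -> list[bytes]:
    """Split a blob into fixed-size pieces."""
    return [blob[i:i + CHUNK] for i in range(0, len(blob), CHUNK)]

def digest(piece: bytes) -> str:
    return hashlib.sha256(piece).hexdigest()

# The "registry" stores version 1 of a model keyed by content digest.
v1 = b"weights-A|weights-B|weights-C!!"
registry = {digest(p): p for p in chunks(v1)}

# Version 2 differs only at the end; most digests are already present,
# so only the changed piece needs to be pushed (or pulled by an edge device).
v2 = b"weights-A|weights-B|weights-Z!!"
to_push = [p for p in chunks(v2) if digest(p) not in registry]

print(len(chunks(v2)), len(to_push))  # → 4 1
```

Real OCI layers are much coarser than this, but the principle is the same: versioned artifacts share storage and transfer only their differences.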

Artificial Intelligence and Data Science
Conference Auditorium (capacity 260)
14:00
14:00
5min
Break
Metcalf Small Ballroom (capacity 100)
14:00
5min
Break
East Balcony (capacity 80)
14:00
5min
Break
Conference Auditorium (capacity 260)
14:00
5min
Break
Terrace Lounge (capacity 48)
14:05
14:05
35min
Connecting the Dots: Skupper.io as an application modernization Enabler
Vamsi Ravula

Skupper.io streamlines application and service connectivity across diverse environments. By creating seamless interconnections in minutes, it eliminates the need for extensive networking planning and reduces overhead. Join us in this talk to discover the transformative role of the open source project Skupper.io in the modernization journey of a fictitious healthcare company, illustrating its impact on agile, efficient, and cost-effective modernization strategies. We'll cover various capabilities of Skupper.io, including load balancing, failover, cost-based routing, and more.

Cloud, Hybrid Cloud, and Hyperscale Infrastructure
Metcalf Small Ballroom (capacity 100)
14:05
35min
Gotta go fast: how we started the Ubuntu High-Performance Computing team
Jason C. Nucciarone

At the 2022 Ubuntu Summit, a realization was made. There were many groups of individuals working independently on making Ubuntu, traditionally unrepresented in the supercomputing ecosystem, a better distribution for high-performance computing (HPC) workloads. Some of us were working on packaging common HPC applications for Ubuntu, some were developing Juju operators for the Slurm workload manager, and others were working on writing comprehensive documentation. Our realization was that rather than working independently on our overlapping challenges, we should instead work together as a community team within the Ubuntu project!

In this talk, you will learn how our chance meeting at the 2022 Ubuntu Summit led to the creation of the Ubuntu HPC community team. We will discuss the work that we needed to do to become an official Ubuntu community team, the steps we needed to take to set up a successful community team, and how we work together as a globally distributed team to develop Charmed HPC, an open source HPC infrastructure stack for Ubuntu. We will also discuss how the Ubuntu HPC community can interact with other open source communities, and how we can create opportunities for new individuals to become involved with the open source HPC ecosystem. Lastly, we will discuss current challenges we are facing, such as onboarding new community contributors and supporting our community of users, and come up with potential strategies to help address them.

Open Source Success Stories
Terrace Lounge (capacity 48)
14:05
35min
Self-Hosted LLMs: A Practical Guide
Hema Veeradhi, Aakanksha Duggal

Have you ever considered deploying your own large language model (LLM), but the seemingly complex process held you back from exploring this possibility? The complexities of deploying and managing LLMs often pose significant challenges. This talk aims to provide a comprehensive introductory guide, enabling you to embark on your LLM journey by effectively hosting your own models on your laptops using open source tools and frameworks.

We will discuss the process of selecting appropriate open source LLM models from HuggingFace, containerizing the models with Podman, and creating model serving and inference pipelines. For newcomers and developers delving into LLMs, self-hosted setups offer various advantages such as increased flexibility in model training, enhanced data privacy and reduced operational costs. These benefits make self-hosting an appealing option for those seeking a user-friendly approach to exploring AI infrastructure.

By the end of this talk, attendees will possess the necessary skills and knowledge to navigate the exciting path of self-hosting LLMs.

Artificial Intelligence and Data Science
Conference Auditorium (capacity 260)
14:05
35min
UX Research and Design: Crucial to development
Amber Asaro, Vince Conzola

As a developer, you already know how to build functional software and how to build it right. Research and design that focuses on the end user throughout the software development lifecycle can help you know how to build the right software. Data-informed design, based on well-accepted research practices, ensures that your software products and services not only provide necessary features and functions, but also address the pain points and expectations of your target users. A small investment in research and design conducted throughout the development process leads to more satisfied users and saves you time and money from refactoring code after it’s been released.

We’ll talk about:
- What user experience, user interface design, and information architecture are, and why you should care
- How UX research and design support software development
- How you can use PatternFly, an open source design system, in your work
- How you can use this new knowledge to improve the software you develop

Agility, Leadership, and DEI
East Balcony (capacity 80)
14:40
14:40
5min
Break
Metcalf Small Ballroom (capacity 100)
14:40
5min
Break
East Balcony (capacity 80)
14:40
5min
Break
Conference Auditorium (capacity 260)
14:40
5min
Break
Terrace Lounge (capacity 48)
14:45
14:45
35min
Challenging Our Subconscious Biases: Designing Inclusive and Accessible Experiences in the Era of AI
Lisa Lyman

Join us for a thought-provoking panel discussion with industry experts exploring the ethical considerations in user experience design and the quickly evolving field of AI. Hear how we, as designers, engineers, and researchers, must actively confront our subconscious biases to create inclusive and accessible experiences. These considerations span from the user research recruitment phase to the training of AI models, demonstrating the crucial role ethics play in shaping user experiences that resonate with diverse audiences. The panel will be composed of user researchers, user experience designers, engineering leads and AI technology leads.

Agility, Leadership, and DEI
East Balcony (capacity 80)
14:45
35min
GPU Accelerated Containers on Apple Silicon with libkrun and podman machine
Tyler Fanelli, Jake Correnti

Advances in AI have allowed users to run machine learning models locally on their desktop. However, due to the heterogeneous software stack for AI accelerators, they can be difficult to run efficiently locally. For example, when running macOS on Apple Silicon, a user can build llama.cpp (with Metal backend) and offload the inference work to the M-based GPU. However, running this model will result in your desktop getting a thorough exercise in the process. In an effort to control resources and scope of the model, is it possible to run GPU-accelerated applications from containers, even on macOS?

As containers are mainly a Linux paradigm, using them on macOS implies virtualization. Software tools such as podman machine accomplish this by running containers inside a Linux virtual machine. Recently, libkrun was accepted as a hypervisor backend (i.e. the hypervisor that runs the Linux containers in virtual machines) for podman machine. The latest enhancements in libkrun on macOS allow users to run workloads with Apple GPU acceleration.

In this talk, we will discuss how podman machine and libkrun work together (coupled by a new project, krunkit) to make this possible. The talk will conclude with a demonstration running an AI workload offloaded to the host GPU on macOS. In conclusion, users will have a better understanding of how you can leverage podman machine and libkrun to make the most of your hardware when running AI workloads in containers on macOS.

Artificial Intelligence and Data Science
Conference Auditorium (capacity 260)
14:45
60min
Open Sourcing Out Opportunity, Experience, and Passion: Using the Open Source Mindset in the K-12 Education Space
Mary Shakshober, Kevin Hatchoua, PAULINE KIMSOUNG, Matthew Crossman, Malica Armand, Russell Lamberti, Julia Denham

In this panel-style talk, we will learn how the tech industry is partnering with local K-12 schools to expose students to engaging career paths at a young age. We will explore this topic from various perspectives: the benefit to the industry of igniting tech curiosity early, how these efforts impact DEI, why the education community cares, why parents and families care, recent public school engagements, and more.

Open Source Success Stories
Terrace Lounge (capacity 48)
14:45
35min
Power Efficiency Aware Kubernetes Scheduler
Han Dong

While there has been a variety of Kubernetes Enhancement Proposals (KEPs) to improve its scheduler plugins for a range of resource and node allocation requirements, unfortunately, none has focused on sustainability. To address this, we propose a new scheduler - PEAKS (Power Efficiency Aware Kubernetes Scheduler) - that can factor in sustainability goals such as power utilization and carbon footprint. PEAKS leverages the Kepler (Kubernetes-based Efficient Power Level Exporter) project, which uses eBPF to probe system-wide utilization (i.e. power, memory, CPU, etc.) and dynamically expose these metrics via Prometheus. Given these metrics, PEAKS makes dynamic decisions to recommend nodes for pod scheduling, addressing power inefficiencies on underutilized nodes. In this talk, I will present ongoing work from collaborators at Boston University, Red Hat, and IBM on deploying PEAKS in a variety of experimental scenarios, ranging from synthetic workloads to more realistic microservices-based deployments.
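To make the idea concrete, here is a toy marginal-power score of the kind a power-aware scheduler might compute from Kepler-style metrics. The node names and wattage figures are invented, and this is not PEAKS's actual algorithm - just an illustration of why packing a pod onto an already-busy node can cost fewer watts per core than waking an idle one, since a powered-on node pays its idle wattage regardless of load:

```python
# Hypothetical per-node metrics of the sort Kepler exposes via Prometheus:
# a fixed idle draw plus roughly linear dynamic power per busy core.
nodes = {
    "node-a": {"idle_w": 100.0, "dynamic_w_per_core": 15.0, "used_cores": 14, "total_cores": 16},
    "node-b": {"idle_w": 100.0, "dynamic_w_per_core": 15.0, "used_cores": 0,  "total_cores": 16},
}

def power(n: dict, cores: int) -> float:
    """Estimated draw of a powered-on node running `cores` busy cores."""
    return n["idle_w"] + n["dynamic_w_per_core"] * cores

def marginal_watts_per_core(n: dict, pod_cores: int = 2) -> float:
    """Extra cluster-wide watts per core if the pod lands on this node."""
    if n["used_cores"] + pod_cores > n["total_cores"]:
        return float("inf")  # pod does not fit
    # An idle node contributes nothing now, but must be "woken" to host the
    # pod, so its marginal cost includes the full idle wattage.
    before = power(n, n["used_cores"]) if n["used_cores"] > 0 else 0.0
    after = power(n, n["used_cores"] + pod_cores)
    return (after - before) / pod_cores

best = min(nodes, key=lambda name: marginal_watts_per_core(nodes[name]))
print(best)  # → node-a
```

A spread-oriented default scheduler would likely prefer the empty node-b; the power-aware score picks node-a (15 W/core vs 65 W/core), consolidating load and leaving node-b eligible for power-down.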

Cloud, Hybrid Cloud, and Hyperscale Infrastructure
Metcalf Small Ballroom (capacity 100)
15:20
15:20
5min
Break
Metcalf Small Ballroom (capacity 100)
15:20
5min
Break
East Balcony (capacity 80)
15:20
5min
Break
Conference Auditorium (capacity 260)
15:25
15:25
35min
Cloud-Native Databases in Kubernetes with OpenShift and Postgres
Torsten Steinbach, Gabriele Bartolini, Michael St-Jean

PostgreSQL is this incredible open-source gem that’s been shaking up the database management scene for a cool couple of decades now. Shaped by the genius mind of database science legend Michael R. Stonebraker, it has blossomed into one of the world’s go-to database management systems, loved in both virtualized and bare-metal setups.
The Kubernetes/PostgreSQL/CloudNativePG open-source stack on OpenShift provides unparalleled freedom of choice, mitigating the risk of vendor lock-in across various dimensions:
- Choose between on-premise, public cloud, hybrid cloud, or multi-cloud OpenShift deployment.
- Opt for OpenShift clusters managed internally or by third-party OpenShift aaS providers such as AWS, Azure, Google, or IBM Cloud Paks
- Decide on self-managed PostgreSQL or enlist support from external organizations.
- Embrace “vanilla” Kubernetes, OpenShift or opt for other third-party Kubernetes distributions.
- Build and deploy enterprise applications with the highest quality of service for HA, DR, scale and security, RPO and RTO.
- Combine with OpenShift AI to build and run modern AI solutions with Postgres, e.g. using pgvector for RAG patterns

The power to make these choices is now in your hands.

Cloud, Hybrid Cloud, and Hyperscale Infrastructure
Metcalf Small Ballroom (capacity 100)
15:25
35min
Super Accessible No Math Intro to Neural Networks For Beginners
Lance Galletti

Through simple and intuitive examples, this talk will not only teach you what neural networks are designed to do and how they work; you’ll also be presented with a perspective and way of thinking to better grasp their limitations and pitfalls. My hope is that this 30-minute talk will help provide a foundation for better understanding the current AI/ML landscape.

Artificial Intelligence and Data Science
Conference Auditorium (capacity 260)
15:25
35min
Unlock Your Team's Superpower - It's Not in Your Toolbox, It's in Your Head!
Nancy Jain, Anuj Singla

It is essential for people to feel comfortable speaking up in the workplace. When they are too afraid to do so, it can create a hostile atmosphere that stops innovation and collaboration, which ultimately hurts a company's productivity and profit.

“Who is on a team matters less than how the team members interact, structure their work, and view their contributions.”
— Julia Rozovsky, https://rework.withgoogle.com/jp/guides/understanding-team-effectiveness#identify-dynamics-of-effective-teams

By creating a psychologically safe work environment, team members will feel encouraged to share their ideas, take risks, and learn from their mistakes.
You can compare it to a sports team, where players perform better when they trust each other and feel safe to take risks. The team becomes stronger if the coach (who is the leader in this case) encourages open communication and learning from mistakes.
In this talk, we will explore practical strategies for building psychological safety in the workplace. You will learn how to create an environment where everyone feels valued and heard, using real-world examples and case studies.
By the end of the talk, you will understand the importance of psychological safety for leaders and how it can help create a successful team and organization.

Agility, Leadership, and DEI
East Balcony (capacity 80)
16:15
16:15
45min
Closing + Trivia
Urvashi Mohnani, Sally Ann O'Malley

Join us for the closing of the conference and for a chance to win some prizes by participating in a round of trivia!

General
Metcalf Hall (capacity 300)