If you want to warm up and enjoy a little bit of exercise before sitting down for the conference talks, feel free to join us for a fun run close to the conference venue.
We want to meet at the #1 tram stop "Tylova" at 7:00 am on Thursday the 13th and are planning to run for about 30-40 minutes.
The details for the start/finish point and the route can be seen in this mapped route we've put together: https://en.mapy.cz/s/lutebebozo
All types of runners, whether fast or slow, are welcome to join. We'll adjust our pace to accommodate everyone so that we can all enjoy it, and of course we can also adjust our route if we need to.
The conference opening with the organizers.
This presentation offers a comprehensive exploration of artificial intelligence (AI) and its trajectory from broad, foundational principles to specialized applications at the technological forefront. Starting with an introduction to AI and its evolution, the presentation transitions to the applications and impact of AI: we spotlight the transformative effects on sectors such as healthcare, finance, automotive, and entertainment, while also addressing the ethical and societal implications that accompany widespread AI adoption.
Diving deeper, the presentation shifts focus towards the frontier of AI technology—edge AI. Here, we uncover the significance of bringing AI processing closer to the data source, highlighting the benefits of reduced latency and enhanced privacy, alongside the challenges faced in implementation. Through real-world examples, attendees will gain insights into how edge AI is being integrated into smart devices, autonomous vehicles, and industrial predictive maintenance.
Concluding with a look at future directions, we speculate on emerging trends and potential breakthroughs in AI, including the role of quantum computing and the importance of AI governance. Designed to be both informative and thought-provoking, this keynote aims to provide a holistic view of AI's current state and its boundless future possibilities, encouraging a dialogue on how we, as a society, can navigate the ethical, technological, and practical challenges ahead. This presentation is a call to action for professionals, researchers, and enthusiasts to reflect on the implications of AI advancements and to participate in shaping a future where technology amplifies human potential and addresses global challenges.
As community architects, we try to make design decisions that will lead to the type of community we're trying to create. We tend to think about this in the same way that we would construct a building (often we call ourselves architects!), but humans are not bricks, and don't always react in rational ways.
In this talk, we'll look at some of the problems we have to confront when designing communities, including:
- How can we get people to contribute to specific tasks/efforts?
- How do we motivate & recognise contribution?
- How do we encourage the right behaviour and norms, and deal with those who break the rules?
- How do we integrate newcomers with our existing members?
Drawing on evidence from the literature of psychology, sociology, and economics, we can learn more about how to design around these questions. Many of the results are known from experience to community managers and architects, but other results are not so obvious, and some are completely counter-intuitive. We'll go into detail on a handful, and provide some further reading material for those who want to explore further.
This talk is aimed at new and existing community managers/architects, and hopes to leave them better informed about some of the choices and pitfalls in front of them when working in their own communities.
Driving automation with events can be crucial to reduce time to action and react to issues and anomalies in an efficient way. Vendors from all industries provide their own connectors and plugins to integrate events from their products and platforms with Event-Driven Ansible.
During this session you will learn more about event-driven automation and see it in action in three different use cases, showing how the event-driven capabilities of Red Hat Ansible Automation Platform can be integrated with monitoring systems, ITSM services and hypervisors to:
- Provision a VM on OpenShift Virtualization
- Proactively patch it with Red Hat Insights integration
- Configure monitoring with Dynatrace
- React to anomalies with automatic ITSM incident creation and resolution leveraging Alertmanager integration
In this workshop, we will guide participants through the process of building Fedora Cloud Images using the powerful and versatile Kiwi image builder. Kiwi is an open-source tool that simplifies the creation of customized Linux images for various platforms, including the cloud.
During the session, we will cover the following topics:
Introduction to Kiwi Image Builder: We will provide an overview of the Kiwi tool, its features, and its role in building Fedora Cloud Images.
Setting Up the Environment: We will guide participants on setting up the necessary dependencies and configuring the environment for image building.
Image Customization: We will explore the different customization options available in Kiwi, such as package selection, configuration tweaks, and adding custom scripts.
Advanced Techniques: We will demonstrate advanced techniques for optimizing image size, enhancing security, and integrating with cloud platforms.
Testing and Deployment: We will discuss strategies for testing the built images and deploying them to popular cloud providers.
When you publish your first HTTP API, you’re more focused on short-term issues than planning for the future. However, chances are you’ll be successful, and you’ll “hit the wall”. How do you evolve your API without breaking the contract with your existing users?
In this talk, I’ll first show you some tips and tricks to achieve that: moving your endpoints, deprecating them, monitoring who’s using them, and letting users know about the new endpoints. The talk is demo-based, and I’ll use the Apache APISIX project for it.
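To make the deprecation tip concrete, here is a minimal sketch (in plain Python/Flask rather than the Apache APISIX setup demoed in the talk) of an old endpoint that keeps honoring the contract while signaling its successor; the paths, date, and handler names are illustrative:

```python
from flask import Flask, jsonify, redirect

app = Flask(__name__)

@app.get("/v2/users")
def users_v2():
    return jsonify([{"id": 1, "name": "Ada"}])

@app.get("/v1/users")
def users_v1():
    # Keep the old contract alive, but signal the migration to clients.
    resp = redirect("/v2/users", code=308)  # permanent, method-preserving
    resp.headers["Deprecation"] = "true"
    resp.headers["Sunset"] = "Sun, 01 Jun 2025 00:00:00 GMT"
    resp.headers["Link"] = '</v2/users>; rel="successor-version"'
    return resp

if __name__ == "__main__":
    app.run(port=8080)
```

Requests still hitting /v1/users can also be counted from the access logs, which covers the "monitoring who's using them" part before the old endpoint is finally removed.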
Come learn the Go programming language: a powerful compiled, strongly typed language conceived at Google with influences from Plan 9, favoring concurrency and ease of use. It is currently at the core of most container and cloud-native ecosystem components, such as Kubernetes, OpenShift, Podman, Docker, Prometheus, and more. No prior experience is needed, although we will not cover the general basic concepts of programming. Please bring your computer running Linux, Windows or macOS.
If language models seem like a black box to you, join the talk to understand their history and importance. Language models have evolved over the years, with key milestones and achievements that played a pivotal role in the field of Natural Language Processing. Starting from the early statistical, regular-expression, and rule-based systems, the talk will progress to the emergence of neural-network-based language models. We will go over landmark studies and breakthroughs that propelled the field toward the recent transformer-based architectures, along with the large-scale pretraining and transfer learning that have revolutionized language learning.
The research in the area fostered innovation in the open source world with adoption of projects such as PyTorch, Hugging Face, and LangChain. The frameworks democratized access to the technology and are further accelerating research and development. We will discuss key projects and ecosystems for developing and deploying applications with these technologies. Attendees will gain insights derived from real-life examples to enhance development with large language models.
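To ground the "early statistical systems" mentioned above, here is a toy bigram language model in a few lines of Python; the corpus is deliberately tiny and illustrative, but the idea of predicting the next word from counts is the same one that later, larger models learned to do far better:

```python
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat saw the dog".split()

# Count transitions: P(next | current) is proportional to bigram counts.
transitions = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current][nxt] += 1

def sample_next(word: str) -> str:
    counts = transitions[word]
    if not counts:  # dead end: fall back to a fresh start
        return random.choice(corpus)
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

word = "the"
generated = [word]
for _ in range(6):
    word = sample_next(word)
    generated.append(word)
print(" ".join(generated))
```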
eBPF, fully available since Linux 4.4, is a kernel technology enabling
programs to run without modifying the kernel source code or adding extra
modules. Acting as a lightweight, sandboxed virtual machine within the Linux
kernel, eBPF executes Berkeley Packet Filter (BPF) bytecode, utilizing kernel
resources efficiently. By eliminating the need for kernel source code
modifications, eBPF lets software hook into existing kernel layers,
potentially revolutionizing service delivery in the observability, security,
and networking domains.
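The talk itself centers on bpfman and Aya, but to show how little ceremony an eBPF program needs, here is the classic hello-world using the separate BCC Python frontend (root privileges and the bcc package are required; this is a sketch, not bpfman's own workflow):

```python
from bcc import BPF  # BCC is a Python eBPF frontend, distinct from bpfman

# The C snippet below is compiled to BPF bytecode, checked by the kernel
# verifier, and attached to the clone() syscall; no kernel rebuild needed.
program = r"""
int hello(void *ctx) {
    bpf_trace_printk("clone() called\n");
    return 0;
}
"""

b = BPF(text=program)
b.attach_kprobe(event=b.get_syscall_fnname("clone"), fn_name="hello")
b.trace_print()  # stream messages from the kernel trace pipe
```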
Bpfman, a system daemon for managing eBPF programs, serves as a pivotal
tool in this domain. It simplifies eBPF
application deployment and management, notably within Kubernetes clusters,
offering a Custom Resource (CR) operator for streamlined operations.
Our presentation will delve into Bpfman's evolution, stemming from the Rust
library Aya for eBPF development. We'll explore practical aspects like
leveraging the Kubernetes operator, deploying applications, and how Fedora
enhances user experience. Security concerns surrounding eBPF application
execution within Kubernetes pods will be addressed, along with insights into
integration challenges and ongoing collaborative efforts within the eBPF
and Rust SIGs in Fedora.
Notably, eBPF's adoption by industry giants like
Google, Netflix, Shopify, and Cloudflare underscores its relevance,
prompting an insightful discussion on its orchestration in Kubernetes and
Fedora.
In this session, we'll touch on securing cloud-native resources with Open Policy Agent (OPA), a powerful policy-based control tool.
As organizations increasingly adopt cloud-native architectures, ensuring the security and compliance of these dynamic environments becomes paramount. Open Policy Agent provides a flexible and extensible framework for implementing fine-grained, declarative policies across the entire cloud-native stack, from infrastructure to applications.
The presentation will delve into the fundamentals of OPA, its integration with popular cloud-native technologies like Kubernetes, and practical strategies for enforcing security policies, enabling attendees to enhance the resilience and integrity of their cloud-native deployments.
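As a flavor of how an application can offload a decision to OPA, here is a hedged sketch that queries a locally running OPA server (started with "opa run --server") through its REST data API; the policy package path and the input fields are illustrative and assume a matching policy has already been loaded:

```python
import requests

# Input document describing the action we want OPA to judge (illustrative).
payload = {
    "input": {
        "kind": "Pod",
        "metadata": {"namespace": "prod"},
        "spec": {"containers": [{"image": "nginx:latest"}]},
    }
}

# Evaluate the rule at data.kubernetes.admission.deny on the local server.
resp = requests.post(
    "http://localhost:8181/v1/data/kubernetes/admission/deny",
    json=payload,
    timeout=5,
)
resp.raise_for_status()
print(resp.json().get("result"))  # e.g. a list of violation messages
```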
We invite you to join a meetup for upstream maintainers to share with each other the best practices and lessons learned in nurturing open source projects. We plan to discuss:
- Collaboration with your colleagues and other contributors (corporate or private) from all over the world
- Increasing awareness about your OSS projects by presenting, teaching and doing workshops, etc.
- Mentoring new contributors, growing your skillset, helping others become a maintainer
- Best practices for managing, leading projects and building communities
- Community health and what metrics are important for community growth
- Best practices with communication channels, tools and platforms that you use in your community
If you are a project maintainer, be ready to share your project's best practices, but also failures so we can learn from them. And if you are not a maintainer, don’t worry! You are welcome to join as well. Come and learn what it takes to be an open source maintainer and see how you, as a contributor or user, can help them or become one.
The Shared Memory Communications (SMC) protocol is an addition to TCP/IP and can be used transparently for shared memory communications. In my presentation I am going to share my experience of implementing Shared Memory Communications Direct (SMC-D) in Software-defined storage (SDS) on the IBM Linux on Z platform. I will talk about the advantages and disadvantages of using SMC-D in comparison with TCP/IP, explain how to enable SMC in your application, and present a performance evaluation of SMC-D with Software-defined storage.
OpenShift comes in many shapes and sizes, from a tiny Kubernetes distro requiring just 2 CPUs and 2 GB of RAM to a 2 000 node beast that can run and manage other OpenShift clusters. Join us for this talk and learn how to pick among MicroShift, Single Node OpenShift, compact OpenShift, standalone OpenShift and HyperShift. Additionally, learn about what tools are available to install and manage them at scale.
The relationship between platform engineering and developer portals often mirrors a complex dance of love and hate, where harmony and discord intertwine. As we dive into this intricate relationship, we must navigate the fine line between the structured order of platform engineering and the creative freedom of developer portals. Picture a therapy session where both entities, embodying the essence of control and autonomy, sit down to untangle their intertwined destinies, seeking to understand and appreciate the unique value each brings to the development ecosystem.
In this therapy session, we meet Kubernetes and its new partner Backstage, and observe the nuanced dynamics of their relationship, highlighting the tensions that arise from their conflicting goals: platform engineering's drive for standardization, security, and scalability versus developer portals' quest for accessibility, autonomy, and innovation. Like any couple in therapy, Kubernetes and Backstage will learn to communicate effectively, recognizing their mutual dependencies and the strength that lies in their unity.
By the end of this session, attendees will have witnessed a newly formed couple that can coexist and thrive together, paving the way for a future where efficiency and innovation go hand in hand. And they live happily ever after.
Join us for a discussion aimed at supporting Linux distribution developers in enhancing accessibility. As blind software engineers, we believe we can bring a unique perspective to this essential endeavor. We'll explore the challenges faced by blind users when using the Linux desktop environment, with a focus on constructive solutions. We will discuss two main topics: lack of preinstalled assistive technologies and accessibility problems of basic components of desktop environments. Our goal is to offer guidance and encouragement to developers, empowering them to create more inclusive experiences for all users.
In our vision for the ideal accessibility scenario, Linux distributions seamlessly accommodate the needs of blind users. To achieve this, we'll stress the importance of collaboration among developers and upstream projects, ensuring that accessibility becomes a fundamental consideration in the development process. Additionally, we'll introduce the "Linux Accessibility Guide", a valuable resource that can assist developers in making graphical applications more accessible. This guide will offer practical insights and best practices to support their efforts.
Join us as we embark on a journey to improve the accessibility of Linux distributions, fostering an environment where accessibility is a shared goal, benefiting blind users and the entire Linux community. Together, we can create a more accessible and inclusive Linux desktop experience.
Perhaps you’ve joined a project, maybe several. You’ve made a few commits and started to recognize the voices of project maintainers in the office hours. You might be contributing as part of a work assignment or as a passion project. Now you might have questions such as:
- How do I take my skills to the next level?
- How do I propose and implement large changes to the project?
- How do I become one of the maintainers?
In this talk, Michael will discuss the questions, providing examples gleaned from more than a decade of open source contribution. Michael will focus on skills you can develop to improve your effectiveness and presence in open source communities. Expect to walk away from this talk with new tools for driving change and a boost to your F/OSS enthusiasm.
This talk presents changes that needed to be done to the OpenShift WebConsole
in order to make it work in an environment where an OpenShift cluster is configured
to use an OIDC provider directly instead of having an internal OAuth2 server as the
authentication middleman, which is the traditional OpenShift setup.
The audience will learn about issues that are common when implementing sessions
that need to work across multiple server instances that are using OIDC as the backing
protocol for authentication.
Segmentation tasks can be very easy for humans but very tough for machines. For us it is quite simple to identify which objects are inside an image and where their edges are; a machine, however, doesn't know the meaning of the pixels of an image, only their position and intensity. Recent advances in AI are bringing new capabilities to these models, letting them learn from data; however, achieving good results requires following some steps as well as choosing the correct tool for each type of problem. The aim of this talk is to introduce to the audience the categories AI is organized into and which one fits the segmentation task, which deep learning architectures are commonly used, and the steps to create an image segmentation model.
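As a small preview of the "correct tool" point, here is a minimal sketch that runs a pretrained semantic segmentation model from torchvision to get a per-pixel class mask; the input image path is illustrative and the model is just one common choice:

```python
import torch
from PIL import Image
from torchvision.models.segmentation import (
    DeepLabV3_ResNet50_Weights,
    deeplabv3_resnet50,
)

weights = DeepLabV3_ResNet50_Weights.DEFAULT
model = deeplabv3_resnet50(weights=weights).eval()

preprocess = weights.transforms()              # matching resize/normalize
img = Image.open("street.jpg").convert("RGB")  # illustrative input image
batch = preprocess(img).unsqueeze(0)           # shape (1, 3, H, W)

with torch.no_grad():
    scores = model(batch)["out"]               # (1, num_classes, H, W)

mask = scores.argmax(dim=1).squeeze(0)         # one class index per pixel
print(mask.shape, mask.unique())               # which classes were found
```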
This project introduces an extension to the successful OCI container
model, expanding the concept of 'layers' to model bootable host systems
and leveraging standard OCI containers as a delivery format for
base operating system updates.
We present podman-bootc, a scriptable tool designed to streamline the
"edit-compile-debug" cycle for bootable containers.
Essentially, it facilitates an ergonomic and efficient workflow by
enabling the direct booting of a container image as a virtual machine.
In the rapidly evolving landscape of AI, leveraging open source models is a game-changer. There are thousands of models to choose from, and new ones are published every day on Hugging Face. Many of them can be run without requiring expensive hardware.
In this session, we'll explore how to discover the best AI models to validate your ideas, automate processes, and innovate.
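As a hedged sketch of how quickly a published model can be tried out, here is the Hugging Face transformers pipeline API with one small, CPU-friendly model picked from the thousands mentioned above:

```python
from transformers import pipeline

# Downloads the model on first use; runs fine without a GPU.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("Open source models are a game-changer."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```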
We invite all event participants from underrepresented groups and their allies to join us for a networking lunch focused on connecting this community. We'll do our best to accommodate everyone interested, but please note that space at the venue is limited, and participation will be on a first-come, first-served basis.
An Ansible code bot overview along with a quick demo. In this session we will see how you can define your own code bot in your GitHub repository. The Ansible code bot scans existing content collections, roles, and playbooks hosted in GitHub repositories, and proactively creates pull requests whenever best practices or quality improvement recommendations are available. The bot automatically submits pull requests to the repository, which proactively alerts the repository owner to a recommended change to their content. You can configure the Ansible code bot to scan your existing Git repositories (both public and private).
There are few Open Source projects today with as much rich history, releases, and lived experience as the Linux kernel and the various Linux distributions that provide a user-ready operating system. Linux is the rock-solid foundation underlying most of the world’s Internet infrastructure online today. But what is the lifecycle of these complex operating system distributions and how does someone participate as an Open Source contributor? This interactive workshop will get attendees to explore the dynamics of open source Linux communities, what software engineering and packaging look like in the RHEL, Fedora, and CentOS distributions, and create a self-guided map for attendees to land a contribution in an open source ecosystem with over 30 years of history.
The rapid evolution of autonomous systems is leading to the formation of Autonomous Ecosystems, complex networks of machines with their own governance challenges. A good example of such an ecosystem is a city with autonomous vehicles. A key issue in these ecosystems is ensuring that machines behave fairly towards each other. This talk proposes a novel interdisciplinary approach: utilizing blockchain technology to monetize fairness. By creating fairness tokens, the fair behavior of machines becomes a quantifiable, incentivized metric. Fair behavior can be rewarded with these tokens, which can also be used to purchase fairness within the ecosystem. This approach not only encourages equitable machine interactions but also offers a self-regulating mechanism for the maintenance and governance of these ecosystems.
Event-driven applications provide a future-proof foundation for a number of use cases that need real-time responsiveness, including improved user experiences, IoT, edge computing, predictive analytics, and microservice architectures.
But building event-driven apps can be hard if you are used to synchronous communication patterns. Knative Eventing abstracts away the complexity of event-driven architectures by integrating easily with various sources, including Apache Kafka. With concepts such as brokers, triggers, sources, and sinks, Knative makes it easy for developers to integrate various event sources without the complexity of handling them manually. Developers can just focus on building applications.
In this talk we will discuss the concepts behind Knative Eventing, and show a practical example of an event-driven data pipeline built using those concepts. The live demo based on Customer Review Moderation and Sentiment Analysis includes some AI/ML magic along with audience participation for added fun!
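For a taste of the developer experience, here is a minimal sketch of an event sink in Python: a Knative Trigger can point at a service like this one, and the Broker delivers CloudEvents to it as plain HTTP POSTs (the printed attributes are standard CloudEvents fields; everything else is illustrative):

```python
from flask import Flask, request
from cloudevents.http import from_http

app = Flask(__name__)

@app.post("/")
def receive():
    # Parse the delivered HTTP request as a CloudEvent.
    event = from_http(request.headers, request.get_data())
    print(f"type={event['type']} source={event['source']} data={event.data}")
    return "", 204  # acknowledge the delivery

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```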
Let's talk about Kubernetes and OKD, the OpenShift community project, in Fedora and CentOS.
Nix is a powerful build system and package manager that enables declarative builds and deployments. This talk is about a couple of Nix's key strengths and how traditional package managers like DNF can (or can't) evolve to achieve some of the things Nix does.
I'll discuss three advantages of Nix:
- Determinism: Nix can set up identical development/build/runtime environments anywhere, so software runs the same on developer workstations, on the CI, and in production.
- Customizability: With Nix, there's no separation between the package manager and the build system, so it's easy to patch dependencies to suit your needs, for example to add a compiler flag or change the source repository out for a fork.
- Isolation: There's no issue installing multiple versions of a program or library on the same system, since Nix stores each package in a separate filesystem tree.
Traditional "imperative" package managers (APT, DNF, pacman) and even container tools (Docker, Podman) fall short of these goals (reproducible Docker builds are not straightforward!). But with a few changes, we can get closer.
This talk is intended for people who have experience packaging software and/or building containers. Familiarity with Nix is great but is not required; there will be plenty of Nix demonstrations!
Bonus: building Docker images with Nix!
We will have an interactive group discussion about how to create an ideal manager for open source teams.
It will give a chance for managers to hear from non-managers what is expected of them.
It will give a chance for non-managers to consider how complicated it is to be a manager.
As the fast-paced AI-driven landscape of computing continues to diversify, the importance of multi-architecture container images cannot be overstated. Applications are no longer confined to data centers but extend across multiple platforms, devices, and appliances. Moreover, development environments are equally varied. Multi-architecture images bridge the gap between development environments, like those on MacBooks, and deployment targets, often on x86 architecture. This talk will clarify the process of building multi-architecture images and demonstrate how Podman is an ideal tool for doing so.
Wouldn’t it be great if we could build images for every architecture from just one machine? It would be even more amazing if we could do that without the slowness of emulation! This is where Podman farm comes in. Podman farm is a new feature that allows you to 'farm' out builds to groups of machines you have access to, enabling you to easily build multi-architecture images with a single command. In this talk, we will highlight the challenges of multi-architecture builds and demonstrate how Podman farm addresses them, keeping performance and usability in mind.
Container images that run seamlessly across different architectures ensure consistency, reduce complexity, and accelerate the development cycle. This session will empower attendees to develop on one architecture and deploy confidently on another.
Workshop attendee requirements:
Please follow the instructions at https://podman.io/docs/installation to download and install Podman or Podman Desktop on your machine prior to the workshop. We will get hands-on and run a bunch of containers during the workshop!
Observability serves as a cornerstone for modern application development and operations, enabling developers and Site Reliability Engineers (SREs) to gain deep insights into system behaviors and efficiently tackle issues in production environments. This workshop delves into the fundamental principles and practical applications of observability, offering participants an immersive learning experience over 80 minutes.
Workshop Agenda:
Introduction to Observability: Understanding the concept and its significance in modern software development and operations.
The Three Pillars of Observability: Exploring metrics, logging, and tracing as the essential components for achieving comprehensive observability.
Instrumentation and Data Collection: Techniques for effectively instrumenting applications and collecting relevant data to gain actionable insights.
Tools and Projects: Overview of popular tools and projects utilized in the observability landscape, including their features and use cases.
Hands-on Practical Example: Guided walkthrough of implementing observability practices in a real-world scenario.
Participants in this workshop will not only grasp the foundational concepts of observability but also acquire practical skills and insights necessary to integrate observability into their own projects and environments. Through interactive discussions and hands-on exercises, attendees will leave equipped with the knowledge and tools to enhance the quality, performance, and debuggability of their applications. Join us to embark on a journey towards empowered application development through observability.
The repo to use will be this one: https://github.com/iblancasa/kiali-tracing-tutorial
Check the requirements before the session!
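If you would like a head start on the instrumentation part of the agenda, this is roughly what manual tracing looks like with the OpenTelemetry Python SDK; the service and attribute names are illustrative, and the workshop's own examples live in the repository above:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import (
    BatchSpanProcessor,
    ConsoleSpanExporter,
)

# Print finished spans to the console; a real setup would use an OTLP
# exporter pointed at a collector instead.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")

with tracer.start_as_current_span("process-order") as span:
    span.set_attribute("order.id", 1234)
    with tracer.start_as_current_span("charge-card"):
        pass  # the actual business logic would run here
```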
Infrastructure as Code (IaC) plays a critical role in effectively managing systems, whether it's the configuration of a single laptop or the orchestration of cloud assets. Ansible Playbooks provide an intuitive approach: simply declare the desired state of your systems in a YAML file, and Ansible takes care of the rest. However, users often face the challenge of recalling the appropriate Ansible modules to accomplish their tasks.
Now, with Ansible Lightspeed, this process is streamlined even further. By articulating your intentions in natural language, Lightspeed, a generative AI-based service, autonomously generates the necessary code. Rooted in the collaborative ethos of the Ansible community, Ansible Lightspeed has been released as an open-source tool, the Ansible AI Connect Service, to empower users with unprecedented simplicity and efficiency.
Join our community project to harness its power and contribute to its growth. In this session, we'll show you how to maximize its benefits and how to get involved.
In this lightning talk I will present 9 magic rules that will help you improve your debugging capabilities. I will briefly introduce each rule and tell you why it is important. I will also give you one or two war stories related to each rule to help you remember its importance.
AC3 is an innovative and open-source EU research project focused on the cloud-edge infrastructure with deep AI integration and efficient energy consumption. It's developed by a consortium of researchers and industry professionals committed to conforming to standards in security, scalability, and interoperability.
During this session, we'll explore how the Roque de los Muchachos observatory, in partnership with the University of Madrid, exploits the AC3 infrastructure to store and process massive amounts of data using advanced techniques. These functions are provided by application microservices which are containerised and strategically placed on a range of environments, from cloud to far edge.
AC3’s AI-driven decision making is vital for allowing developers to focus on creating applications, rather than on operations and maintenance. This is achieved by providing a zero touch and proactive approach to lifecycle management, and is enabled through detailed resource profiles and real time monitoring.
Join us in Brno for an overview of AC3 and view the future of AI driven cloud edge infrastructures.
Changelogs – the essential but often underestimated narrative of software updates. Ever found yourself in the dilemma of "keep it short" versus "give me all the details" when documenting changes? You're not alone.
Join us for an engaging discussion where we'll collectively tackle the changelog chaos. Throughout the talk, we'll navigate the complexities, explore best practices, brainstorm areas for improvement, and share experiences.
Whether you're an upstream maintainer, a coding enthusiast, a downstream package maintainer, or simply intrigued by the challenges, your perspectives are more than welcome. Come, share your thoughts, and let's make sense of the changelogs together!
In an era where artificial intelligence (AI) is not just an auxiliary tool but a core component of digital ecosystems, ensuring the trustworthiness of AI-powered systems has become paramount. As an Open Source enthusiast, I propose to explore the multifaceted approach required to safeguard the integrity and reliability of large language models (LLMs). This talk will delve into the current state of Trustworthy AI, highlighting the latest developments, challenges, and the critical need for transparent, ethical, and secure AI practices.
We will begin by defining what makes AI "trustworthy," focusing on the principles of fairness, accountability, transparency, and ethical use. The talk will then pivot to the specific challenges posed by LLMs, including bias, interpretability, and the potential for misuse. We will outline practical strategies for implementing guardrails around LLMs. This includes the development of robust frameworks for model governance, the role of open-source tools and communities in fostering responsible AI, and the importance of cross-industry collaboration.
Furthermore, the talk will address how companies and communities can ensure that their AI-powered software systems are not only efficient and innovative but also worthy of trust. This involves a comprehensive approach that encompasses regulatory compliance, continuous monitoring, and the cultivation of an ethical AI culture.
Containers must be small but still easy to use. This creates contradictory requirements: ease of use calls for features from software management tools like DNF or Microdnf, but then the image contains additional dependencies. The talk will present an alternative way to manage micro containers without DNF or Microdnf, but with similar comfort. The concept uses a servicing container for the micro container. The talk will cover installation and upgrade of packages from repositories for Fedora and RHEL micro containers regardless of the host environment.
In the era of digital transformation and the adoption of cloud native technologies, organizations running large scale deployments encounter a significant challenge in modernizing their legacy applications to fully leverage the benefits of cloud native technologies. Konveyor project provides methodology and tools to assess, prioritize, and refactor applications to Kubernetes in a predictable manner and at scale.
Attendees will get hints on how Konveyor enables a methodology that can scale modernization and adoption efforts across large application portfolios, and how the Konveyor Analyzer tool can help identify issues that need to be addressed before running legacy applications on Kubernetes.
Infrastructure is a crucial yet under-discussed part of any important project. The choice of software, hosting and system can hinder or stimulate a project's growth, but not without trade-offs, and community leaders and systems administrators alike often struggle to decide on what the right choice could be.
This panel will bring together various people working on free software communities and their infrastructure to explore questions around infrastructure for free software projects, like the trade-offs between self-hosting, using off-the-shelf infra or paying for a service, how to get contributions from your community, what to do if you do not have a solution for some specific needs, etc.
In this practical session, I will demonstrate how Quarkus, the cloud-native Java development framework, and its Langchain4j extension can be used for working with AI models like GPT and others, including image-generating models, opening the door to endless possibilities.
I will show how to use Quarkus to:
- Build a chatbot application that allows you to supply data from your own custom data store and feed it to the AI model, enabling it to answer questions using this data (this technique is called Retrieval Augmented Generation).
- Build a highly autonomous agent by supplying the model with various tools, that is, locally implemented functions that the model can decide to execute when necessary (for example, send something via email, or write to a database), and then letting the agent determine the appropriate sequence of steps to accomplish the given high-level goal using these tools.
- Demonstrate the Quarkus goodies that make the development process easier, like the interface for manually chatting with the model directly through the application and for exploring image-generating capabilities of image models.
The development joy of Quarkus with all its goodies and incredible productivity now also extends to creating AI applications. Join me and see for yourself!
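The talk itself uses Quarkus and LangChain4j; purely to illustrate the Retrieval Augmented Generation flow in a language-agnostic way, here is a toy Python sketch in which a trivial letter-frequency function stands in for a real embedding model:

```python
import numpy as np

documents = [
    "Quarkus applications start in milliseconds.",
    "Retrieval Augmented Generation feeds retrieved context to the model.",
    "Virtual threads are cheap to block.",
]

def embed(text: str) -> np.ndarray:
    # Stand-in for a real embedding model: normalized letter frequencies.
    v = np.zeros(26)
    for ch in text.lower():
        if "a" <= ch <= "z":
            v[ord(ch) - ord("a")] += 1
    return v / (np.linalg.norm(v) or 1.0)

def retrieve(question: str) -> str:
    # Pick the document most similar to the question.
    scores = [float(embed(question) @ embed(doc)) for doc in documents]
    return documents[int(np.argmax(scores))]

question = "How does retrieval augmented generation work?"
context = retrieve(question)
prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # this prompt would then be sent to the chat model
```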
INCODE, a project funded by the Horizon Europe Research programme, is dedicated to advancing the IoT to edge to cloud computing continuum. Application deployment on IoT devices is simplified through the INCODE developer platform. Additionally, the INCODE platform includes the necessary infrastructure for deployments across the continuum. The platform has access to various domains including 5G and RAN networks, a network fabric, and a cloud platform. Furthermore, through the device registration framework, developers can seamlessly integrate IoT devices into the centrally managed pool of resources.
Join us for a discussion on the ongoing research efforts towards the ease of deployment that INCODE promises. The potential of the INCODE platform is highlighted through two use cases: 1) The collaboration of drone and ground vehicles in search and rescue efforts, 2) Monitoring the operators in a manufacturing facility through an exoskeleton and sensors. This talk is of particular interest to DevOps engineers.
Ever wondered what happens
- when a deferred readCloser.Close() errors?
- when nil is not nil?
- when short variable declarations are about to shadow (or not?) a variable?
I'll explore this in a talk about the weirder parts of Go to sow some confusion and hopefully gain some laughs. :)
The rapid evolution of cloud-native environments demands efficient management and automation tools. Kubernetes has emerged as the de facto standard for container orchestration, but operational complexities persist, necessitating advanced automation solutions. Ansible, a powerful automation framework, offers a seamless approach to manage Kubernetes resources through its Operator pattern.
This session explores the development of an Ansible-based Operator for Kubernetes, facilitating streamlined management and automation of complex application deployments and day-2 operations. Leveraging Ansible's simplicity and versatility, developers can create custom operators tailored to specific workload requirements.
We delve into the architectural components of Kubernetes Operators, highlighting how Ansible seamlessly integrates with Kubernetes' Custom Resource Definitions (CRDs) and controllers. By defining desired states and workflows as code, operators enable self-healing, scaling, and management of Kubernetes-native applications.
Key topics covered include:
- Introduction to Kubernetes Operators and their role in automating operations.
- Overview of Ansible's capabilities and its suitability for Kubernetes automation.
- Step-by-step guide to developing an Ansible-based Operator, including CRD definition, controller implementation, and reconciliation loops.
- Best practices for structuring Ansible playbooks and roles to ensure scalability and maintainability.
- Real-world use cases demonstrating the efficacy of Ansible-based Operators in managing diverse Kubernetes workloads.
By adopting Ansible-based Operators, organizations can achieve significant efficiency gains in Kubernetes management, reducing manual intervention and minimizing operational overhead. This session serves as a comprehensive guide for developers and operators looking to harness the power of Ansible for Kubernetes automation, empowering them to unlock the full potential of cloud-native environments.
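The session builds operators with Ansible; for comparison, the same watch-and-reconcile idea can be sketched with the Python kopf framework (a different operator stack, shown here only to make the pattern concrete; the CRD group, version, and plural are hypothetical):

```python
import kopf

# React to the creation of a hypothetical custom resource
# (group example.com, version v1, plural "webapps").
@kopf.on.create("example.com", "v1", "webapps")
def create_fn(spec, name, namespace, **kwargs):
    replicas = spec.get("replicas", 1)
    # A real handler would create Deployments/Services here to reach the
    # desired state; the framework records the result and retries failures.
    return {"message": f"reconciled {namespace}/{name} with {replicas} replicas"}

# Run with: kopf run operator.py --verbose
```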
In the rapidly evolving landscape of Backstage, an open platform for building developer portals, the shift towards dynamic plugins with Backstage represents a significant leap forward in enhancing modularity, scalability, and the overall developer experience.
This 80-minute workshop offers an insightful journey into converting static Backstage plugins to dynamic, a key advancement for developers in the Backstage ecosystem. Dynamic plugins boost modularity and scalability by enabling more flexible and efficient functionality loading, significantly enhancing the developer experience and customisation of a Backstage instance. Participants will gain a foundational understanding of dynamic plugins, covering architectural differences, lifecycle management, and on-demand loading benefits. The session includes a concise overview and a hands-on mini-exercise where participants will convert a static plugin component to dynamic, providing a practical understanding of the conversion process.
Aimed at Backstage plugin developers and software engineers, this workshop is designed to impart the necessary knowledge and confidence to transition to dynamic plugins, thereby contributing to a more efficient and future-proof Backstage ecosystem. Attendees will leave with strategic insights and actionable steps for initiating their plugin conversion projects.
- Introduction to Dynamic Plugins (10 minutes)
  * Quick overview of dynamic plugins and their benefits in Backstage.
  * Architectural differences between static and dynamic plugins.
- Converting a Plugin (20 minutes)
  * Step-by-step outline for converting a static plugin to dynamic.
  * Highlight best practices, limitations and common challenges.
- Hands-On Exercise: Converting a Plugin Component (40 minutes)
  * Participants will engage in a guided exercise to convert a small, predefined part of a static plugin to dynamic.
  * Focus on key conversion steps and immediate troubleshooting.
- Q&A and Wrap-Up (10 minutes)
  * Address participant questions.
  * Resources for further learning and exploration.
Project Loom, or virtual threads, promised fast, lightweight user-space threads that are very cheap to block. While this is true, everything in life comes at a price. Virtual threads allow users not to care about that price; it becomes the job of the underlying libraries that all our applications use. Issues that can still occur with virtual threads, such as pinning, monopolization, or large thread-local objects, present real-world problems many libraries still need to account for. Especially in enterprises, these issues might only be noticed once the system reaches peak loads, which is usually too late. In this session, we explain the virtual thread execution model and compare it to the event loop/reactive model utilized in Quarkus. We will also dive into the individual problems that virtual threads might encounter and demonstrate how you can verify that your code doesn't run into them. By the end of the talk, you'll understand these potential issues with Project Loom, invisible as they are from the user's point of view.
Cloud Cost Optimization, Savings & FinOps is a hot topic in the current environment. But when it comes to applying these practices to your production K8s environments without sacrificing performance and reliability, things get complicated very quickly.
Why does this happen? In this talk, we will highlight the main culprits causing waste, overcommitment and a lack of resources across CPU, memory and network bandwidth.
We will discuss the production issues we had in our own environments, how they woke us up and how we tackled them. We will help you understand what kind of optimizations need to be considered when applying these practices to your clusters & applications, instead of blindly trusting various autoscalers and K8s schedulers out of the box.
Fedora CoreOS is a perfect fit to host and run your containerized services!
Learn how to provision a CoreOS instance and take advantage of the auto-updates for a maintenance-free system, so you can focus on what matters to you: the running workloads.
We will briefly go over the differences between Fedora CoreOS and traditional Linux operating system distributions.
Throughout the workshop attendees will gain practical insights and hands-on experience in deploying and running FCOS and applications on it.
The hands-on session of the workshop will cover:
- Provisioning with Ignition/Butane
- Booting Fedora CoreOS for the first time
- Running provisioning scripts and containers on boot
- Understanding how updates work
- Performing rollback if needed
By the end you will be ready to deploy Fedora CoreOS to run your workloads and contribute back to the growing Fedora CoreOS community.
Linux operating systems may not yet be the dominant choice for audio and music production workstations, but over time the emergence of sophisticated free and open source sound server software and music and audio production applications (sound and notation editors, digital audio workstations, plugins, music trackers and software synthesizers) has made the Linux desktop a viable and dynamic environment for making music. Even some popular proprietary software now runs on Linux desktops and benefits from this underlying free software ecosystem. We will explore the music production tools and workflows available to the Linux desktop user and conclude with a demonstration of signal processing and software synthesis in a Linux environment.
OpenScanHub is a service for static and dynamic code analysis. It was used internally at Red Hat for more than a decade and was open sourced in 2023. This talk is going to be about:
- History
- Open Sourcing
- Key features
- Importance of statically analyzing a Linux distribution
- Running mass scans on Fedora
- Integration with Fedora related services like Packit
It will be a brief introduction to taking the idea of an open source static analysis service towards upstream communities.
With more and more macros, build scripts, and other tools being split out from RPM itself, a larger and more diverse group of people is working on RPM and all the pieces involved in building packages. While this is very much intentional, it also fragments the work into many small groups.
So let's meet up at DevConf and talk about what is going on, see if there are opportunities for more cooperation, and exchange ideas! We as RPM upstream are very interested in how (new) features are used and received, and whether they are actually moving in the right direction. On the other hand, we are happy to share in what direction we think things should or might head and how we thought things might be used.
Please bring your own project/component/script/set of macros/packaging policy/... that you are working on, dealing with, or planning to.
Running virtual machines natively in Kubernetes has long been a balancing act, as container and VM philosophies butt up against each other. And sometimes the simple act of creating a new VM that includes a variety of different features can feel unnecessarily cumbersome.
Enter: The InstanceTypes and Preferences API, to effectively simplify this process. These provide abstractions for resource sizing, performance and OS support, allowing users to focus on parameters relevant to their applications.
We initially introduced v1beta1 of this API in KubeVirt v1.0.0. During the last months we have learned and improved a lot, making it all available now in KubeVirt v1.2.0.
In this talk we will introduce you to the basics of InstanceTypes and Preferences and how you can use them to make life easier. For the more tech-savvy, we will look at the latest developments of the API and what has happened since the v1.0.0 release of KubeVirt. Finally, we will present our future plans and the roadmap to a stable v1 of the API.
The path of the JDK to its fully open source shape was painful: over GNU Classpath, through VM-less JDK, up to OpenJDK; from Sun, over distribution builds, to Oracle; from OpenJDK sources to the final JDK. Because how you build the JDK really matters. This talk will cover the build evolution of the JDK which led to the current mainstream Eclipse Adoptium Temurin JDK as the new reference build.
Join me as I share my journey from developer to team leader over the last two years. I'll share my personal lessons learned, things I wish I had known before becoming a team leader, and the ups and downs of the new role that you might expect in your own future role as a team leader.
This talk aims to be a deep dive into how some of the OpenShift networking
is implemented under the hood. Since OpenShift 4.12, the default certified
Container Network Interface (CNI) has been ovn-kubernetes (OVN-K8s). OVN-K8s
provides a Kubernetes networking solution by using the open source
Open Virtual Networking (OVN) and Open vSwitch (OVS) projects at its core.
While we plan to briefly describe how OVN-K8s configures the OVN logical
network topology, this talk will not focus on that. Instead, the goal of
this talk is to describe how individual packets are processed in the kernel
OVS datapath and to provide a bottom-up way of mapping the processing
steps to the upper layers, in this order:
- OVS kernel datapath flows
- OVS OpenFlow rules
- OVN logical flows
- OVN logical network constructs (e.g., switches, routers)
- Kubernetes objects
We all use computer chips such as processors, memory and sensors in our daily lives. But how are they created? How did the chip creation process evolve and what future changes can we expect?
This session explains how computer chips are physically created by some of the most advanced machines on the planet. Did you know that these chips, nowadays, can contain more than one hundred million transistors per square millimeter?
Java software is used everywhere, including in the process of chip manufacturing. In my project at ASML we’re working on a relatively new analytics platform which is used to process the data from the machines. The application then visualizes the results in order to find issues or improvement areas. This information is used to change the configuration parameters of the physical machine in order to create more and better chips. I will explain, on a high level, what our applications look like and which Java technologies we use.
Welcome to a history lesson on rising technologies and how people perceive them: how software projects grow until they become obsolete. Maybe you'll learn a few lessons, and maybe you'll have fun with a guy with a bullwhip and hat.
Software supply chain security (SSSC) is a hot topic these days, but confidently implementing SSSC can take a small army and be a heavy tax on development teams. Konflux CI is an opinionated, Kubernetes native, security-first software factory based on Tekton. The Konflux community aims to make secure software delivery achievable by anyone. This is accomplished by providing a well integrated CI / CD distribution based on open source technologies and backed by a SIG composed of security experts who interpret regulatory frameworks, drive requirements and provide acceptance of technical solutions.
In this talk, we will cover:
* the architecture of Konflux, the sub-projects it is composed of, and how security is achieved without sacrificing flexibility and agility.
* demonstrations of various features
* new Konflux deployer supporting EKS and Kind
* Roadmap
* Next steps for would-be contributors
With today's cutting-edge Confidential Computing technology, users of public cloud infrastructure can protect their sensitive data with a higher level of security.
This innovative technology can be used to build a confidential cluster.
This secure cluster allows users to run their workloads in a safe environment without worrying about threats from the cloud provider, which brings them closer to a zero trust model.
We will discuss the fundamentals of Confidential Clusters, their challenges, and future landscape plans. Additionally, we will demonstrate the deployment of confidential clusters in Azure/GCP.
Why does an installer even need to do this?
We will briefly introduce how the Fedora/RHEL installation process handles graphics, keyboard input & remote access, and how these are impacted by the Xorg to Wayland migration.
Next we will go over the various possible migration scenarios and which option we chose in the end. Lastly, there will be an overview of the current progress of the Xorg -> Wayland migration for the various installer use cases on RHEL & Fedora.
While the Wayland migration is presented from the Anaconda installer point of view, there should certainly be useful lessons and bits of information for any other projects that are also finally leaving Xorg behind & heading for the future of Linux graphics technology.
Test cases are the foundation of testing for Quality Engineers. There are usually several aspects that need to be considered when designing cases, such as test depth and breadth, testing according to product source code versus customer scenarios, testing on bare metal versus on clouds, etc. And since resources for testing are usually limited, it is important to balance the test matrices.
How do you balance test matrices when designing test cases?
The presentation will show you a good practice for balancing test matrices. It will take cloud-init as an example to show how to balance the test matrices when designing test cases, and will also show you a tool/framework that can maximize the completion of the test matrices under limited resources. More specifically, it focuses on:
1. Test matrices that need to be balanced
a) Test depth, test breadth
b) Test according to product source code, test according to customer scenarios
c) Test on bare metal, test on clouds (and on which clouds)
2. The good practice of cloud-init
3. Introduce a good tool/framework to maximize the completion of the test matrices under limited resources
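To make the balancing idea tangible, the sketch below (with illustrative factor values) contrasts a full cartesian test matrix with a greedy all-pairs subset that still covers every pair of factor values at least once:

```python
from itertools import combinations, product

factors = {
    "arch": ["x86_64", "aarch64"],
    "cloud": ["aws", "azure", "gcp", "bare-metal"],
    "scenario": ["product-code", "customer"],
}

names = list(factors)
all_rows = list(product(*factors.values()))
print(f"full matrix: {len(all_rows)} cases")  # 2 * 4 * 2 = 16

def pairs(row):
    # All (factor, value) pairs a single test case covers.
    return {((names[i], row[i]), (names[j], row[j]))
            for i, j in combinations(range(len(row)), 2)}

uncovered = set().union(*(pairs(r) for r in all_rows))
chosen = []
while uncovered:
    # Greedily pick the case covering the most still-uncovered pairs.
    best = max(all_rows, key=lambda r: len(pairs(r) & uncovered))
    chosen.append(best)
    uncovered -= pairs(best)

print(f"all-pairs subset: {len(chosen)} cases")
```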
Are you ready to simplify graph database management and unlock its full potential? Look no further than CypherGUI! This open-source, single-page application empowers you with an intuitive interface and powerful features, regardless of your technical expertise.
Join us for a demo and explore how CypherGUI can transform your graph database administration experience!
Get ready for a laid-back session where we demystify backups and restores in Kubernetes using the awesome tool, Velero! We'll chat about why keeping your data cool and resilient is crucial in the Kubernetes world. Discover how Velero brings the party with its automated backup vibes, making your data recovery moves smooth and stress-free.
Are you a user of public cloud services or interested in leveraging the power of the cloud for your projects? Join us for an engaging and informative Public Cloud Users Meetup, where cloud enthusiasts and developers gather to share their experiences, best practices, and insights around architecture.
This Meetup is a community-driven event designed to bring together individuals and organizations who are using or considering public cloud services. This interactive meetup aims to foster knowledge exchange, networking, and collaboration among open source community users and developers of many different backgrounds.
Topics will include using Podman on individual cloud instances, container management across different infrastructure environments, cloud migration and issues related to repatriation, and the use of Kubernetes from the desktop to combinations of hyperscalers.
Automating OpenShift VMs Compliance with Knative and Tekton the cloud-native way
Introduction
Background
In the rapidly evolving landscape of cloud computing, virtual machine (VM) provisioning in OpenShift environments has become increasingly streamlined. However, compliance tasks often remain a bottleneck, characterized by manual interventions, time-consuming configurations, and a high potential for human error. These challenges undermine the efficiency gains achieved through modern provisioning processes.
Objective
This project aims to revolutionize the VM compliance phase by automating these tasks using Knative and Tekton. Our goal is to enhance operational efficiency and reliability in managing OpenShift Virtualization environments.
Problem Statement
Current compliance processes often involve cumbersome manual steps, leading to significant delays and high error rates. These include configuring network settings, installing software, and applying security patches.
Impact
These inefficiencies adversely affect resource utilization and operational costs, while increasing the likelihood of human error, thereby compromising system integrity and performance.
Proposed Solution
Overview
We propose a solution that leverages Knative to trigger Tekton pipelines, automating the compliance tasks in OpenShift environments.
How It Works
Upon VM creation, a Knative trigger will send the VM payload to a Tekton EventListener. This event triggers a Tekton pipeline, which is pre-configured to execute a series of compliance tasks automatically via Ansible.
Technologies Used
- OpenShift: A Kubernetes distribution that simplifies the management of Kubernetes clusters, providing a robust foundation for this solution.
- Knative: An event-driven framework that facilitates serverless workloads in Kubernetes, crucial for triggering automated workflows.
- Tekton: A powerful Kubernetes-native CI/CD framework, used here to create and manage the pipelines executing post-provisioning tasks.
- Ansible: a suite of software tools that enables configuration as code. It is open source, and the suite includes software provisioning, configuration management, and application deployment functionality.
Implementation
Architecture Diagram
A diagram will be provided to visually represent the workflow from VM creation to task completion.
Step-by-Step Process
The concept involves the creation of a Tekton pipeline whenever a VM is created/deleted. This pipeline accesses a configmap and subsequently executes automation tasks on the VM.
It is essential for the VM to have an annotation indicating the configmap's name.
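To illustrate the handoff described above, here is a hedged sketch of the kind of CloudEvent a Knative trigger would deliver to the Tekton EventListener; the event type, the EventListener URL, and the annotation key are assumptions made for this proposal:

```python
import requests
from cloudevents.http import CloudEvent, to_structured

# Payload mimicking a VM-created event (all field names are illustrative).
attributes = {
    "type": "kubevirt.vm.created",
    "source": "/apis/kubevirt.io/v1/virtualmachines",
}
data = {
    "name": "demo-vm",
    "namespace": "default",
    "annotations": {"compliance/configmap": "demo-vm-tasks"},
}

event = CloudEvent(attributes, data)
headers, body = to_structured(event)  # serialize as a structured CloudEvent

# The EventListener service URL is cluster-specific (an assumption here);
# on receipt it starts the pre-configured compliance pipeline.
resp = requests.post("http://el-vm-compliance.default.svc:8080",
                     headers=headers, data=body, timeout=5)
print(resp.status_code)
```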
Benefits
- Efficiency: Significantly reduces the time required for post-provisioning tasks.
- Reliability: Minimizes human error through automation.
- Scalability: Easily adapts to increasing infrastructure demands.
- Cost-Effectiveness: Reduces manpower requirements and operational costs.
Conclusion
This proposal outlines a transformative approach to managing compliance tasks in OpenShift VM environments. By leveraging Knative and Tekton, we can significantly enhance efficiency, reliability, scalability, and cost-effectiveness.
Q&A / Discussion Points
- How does this solution integrate with existing CI/CD pipelines?
- Can this framework support complex, multi-step provisioning tasks?
- How does this approach ensure security and compliance during the automation process?
- What are the limitations of this solution in its current form?
- How can this solution be adapted for hybrid or multi-cloud environments?
RPM packaging is a foundation of the software delivery mechanism on RPM-based Linux distributions, with RPM spec files acting as blueprints for package builds. Despite their enduring relevance, support for editing RPM spec files has not evolved significantly over time, leaving users with rudimentary tooling.
In response to this challenge, we have developed a prototype of a language server tailored specifically for RPM spec files. Based on the Language Server Protocol, the solution offers a unified editing experience across various editors, giving users features like auto-completion, linting, and jump-to-definition. All of this is achieved by centralizing the "code smarts" in one place.
Join us to learn about the thinking behind the language server and the challenges we faced when implementing this new tool.
Many companies operate in the Information Technology space but have very different business models.
Based on a company's business model, it will behave differently toward its customers and the open-source projects it backs.
Since many open-source projects are backed by just one company, it is crucial to understand the backer's business model and possible future behaviors to understand which direction the project will probably take.
This talk will start by reviewing the most important software licenses. We'll then see the various business models that IT companies can adopt and look at how business models and licenses interact in real-world examples.
The book Ada & Zangemann (licensed under Creative Commons BY-SA) tells the story of the famous inventor Zangemann and the girl Ada, a curious tinkerer. Ada begins to experiment with hardware and software, and in the process realizes how crucial it is for her and others to control technology.
Over the last months, Matthias has read the book to more than 1,000 children (from 6 years onwards) and adults at schools, libraries, and other events. After the readings there are long discussions about how technology works, aspects of sharing software, helping others use technology to shape their future, and other ethical questions. There were programming workshops afterwards, and in some cases we had ice cream machines at the reading. It was an amazing experience: standing ovations from 160 third graders in a cinema, 30 children wanting a high five on the school yard after a reading, over 100 children translating the book into French, children doing project weeks about the book with their teacher, and young girls telling you they now want to start programming.
Together we will read the illustrated story (~33 minutes), Matthias will briefly share experiences from the readings, and we will discuss ways to engage with younger audiences about Free Software. The goal is to enable each participant to better connect with young audiences about technology and to further develop, together, a reusable toolkit for such readings: https://git.fsfe.org/FSFE/ada-zangemann/src/branch/main/Readings-Organisation.md
"This book illustrates the power of free and open source software in a way that's both fun and accessible." - Chris Wright, CTO and open source technologist
The book is published in English, German, French, and Italian, has been translated into Arabic, Dutch, Ukrainian, and Valencian, and more translations are currently on the way.
All materials for the book are available at https://git.fsfe.org/FSFE/ada-zangemann/
Ever wondered if our tools are worth the hype? Well, instead of us blowing our own trumpet, let's hear it straight from the horse's mouth – our satisfied users! Join us for a session where a handful of our users take the stage to showcase their setups, leaving you inspired and itching to join the world of Packit, Testing Farm and tmt.
Prepare to be impressed as they unveil the secrets behind seamlessly integrating your own tests into projects you rely on, automating your RPM pipeline, and using one test definition across various systems. Whether you're a seasoned quality engineer or just dipping your toes into the testing pool, you'll walk away with fresh ideas and a newfound enthusiasm for the endless possibilities our tools offer. So, come and get ready for a delightful journey through the world of testing made easy!
Here are the tools involved:
Packit is an open-source project aiming to ease the integration of your project with Fedora Linux, CentOS Stream and other distributions.
Testing Farm is a reliable and scalable Testing System as a Service. It is commonly used as a test execution back-end of other services or CI systems.
The tmt tool provides a user-friendly way to work with tests. You can comfortably create new tests, safely and easily run tests across different environments, review test results, debug test code and enable tests in the CI using a consistent and concise config.
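To give a taste of what that looks like in practice, here is a minimal sketch of getting started with tmt (templates and commands as documented upstream):
# install tmt and scaffold a minimal test metadata tree
dnf install -y tmt
tmt init --template mini
# run the whole workflow: discover, provision, prepare, execute, report, finish
tmt run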
This workshop offers a comprehensive introduction to containers, covering their conceptual foundations and practical applications. Participants will explore how containers encapsulate applications and dependencies, and will gain hands-on experience with Podman and Podman Desktop. The session will illuminate the core principles and inner workings of containers, highlighting their role in modern software development. By the end of the workshop, participants will have deployed a basic application using containers, thereby gaining the knowledge and skills to get running with containers in their projects!
Designed with beginners in mind, this workshop is perfect for students and industry newcomers wanting to explore containerization and virtualization. Our objective is to empower attendees with a foundational understanding that enhances their grasp of subsequent discussions at DevConf.CZ.
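To give a flavour of the hands-on part, a first container deployment with Podman can be as small as this sketch (image and port are illustrative):
# run a web server in a container and check it responds
podman run -d --name web -p 8080:80 docker.io/library/nginx
curl http://localhost:8080
# clean up afterwards
podman stop web && podman rm web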
From cypherpunks and crypto anarchists to many ordinary open-source developers, we all care about free speech and privacy. At Logos, we build decentralized censorship-resistant infrastructure - namely open source projects Waku, Codex and Nomos - which we deem necessary to keep the internet free (as in speech) in the future. Let's explore the history, present and future of these and other projects which attempt to allow our digital identities to avoid surveillance, stay private and uncensored.
Robotics is one of the biggest use cases for the edge.
From firefighting to manufacturing, robots have proven their worth in many settings.
ROS (Robot Operating System) is the modern robotics development platform, with uses ranging from research to enterprise-grade applications.
Unfortunately, adoption is held back because deploying and maintaining ROS is painful: the way it is packaged makes it available on only a few distributions, each tied to a particular release.
Fedora IoT users are still left unable to use the power of ROS.
The main aim is to streamline ROS deployment on Fedora IoT using bootable containers.
Key points and takeaways:
1. Resources: a base image of Fedora IoT with ROS, plus a deployment script to build a deployable image
2. A containerized approach, so layering additional packages is as easy as writing a Containerfile (see the sketch after this list)
3. Building a deployable image is as easy as "podman run.."
4. A reliable upgrade and rollback mechanism
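A sketch of the containerized approach; the base image name below is hypothetical, and the published resources will provide the real one:
# layer extra packages on top of a Fedora IoT base image that ships ROS
cat > Containerfile <<'EOF'
FROM quay.io/example/fedora-iot-ros:latest
RUN dnf install -y vim-enhanced && dnf clean all
EOF
podman build -t quay.io/example/my-ros-edge:latest .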
In this session, you will discover that being creative isn't just for a few special folks - it's something we're all capable of. Yep, that means you too!
Letting your creative side shine can help you come up with loads of new ideas at work, boost your well-being, and strengthen team spirit when creative activities are part of team-building. I will show you that being creative is not that difficult by sharing some easy, fun activities (e.g. blind portraits). You can use them to kickstart your next meeting or brainstorming session. They get the creative juices flowing for everyone involved.
Join me, if you're ready to break free from the "I'm out of ideas" blues, boost your well-being, and strengthen your team spirit.
As a young engineer, I discovered the satisfaction of resolving a critical bug for a valued client, earning recognition and a bonus. Ever since, my passion has been writing code and designing software that preemptively addresses potential issues. However, despite my efforts to prevent incidents, I never received the same recognition. This raises a crucial question: how can organizations foster a culture that motivates engineers to proactively prevent incidents rather than merely reacting to them? Join me as we explore strategies for building an organizational ethos that incentivizes proactive problem-solving, ultimately leading to more resilient and efficient software development practices.
In the dynamic landscape of software engineering, navigating one’s career path can be akin to traversing a minefield, fraught with potential pitfalls and challenges. However, by adopting a well-grounded approach, software engineers can steer clear of common traps and setbacks, ultimately achieving success and fulfillment in their profession.
This talk aims to highlight the essential strategies and principles that empower software engineers to build resilient and rewarding careers. Through practical advice and industry insights, attendees will gain valuable perspectives on how to:
- Cultivate a strong engineering foundation
- Set achievable goals to mark progress
- Embrace communication and soft skills
- Recognize and mitigate burnout and overwork
- Seek mentorship for continued growth
By proactively addressing these key areas, software engineers can fortify themselves against common career pitfalls, ensuring fulfillment and resilience in an ever-evolving industry. Join us as we delve into the strategies and insights that pave the way for a well-grounded and flourishing career in software engineering.
CODECO is a Horizon Europe open source research collaboration project with sixteen partners (universities and companies). CODECO, short for Cognitive Decentralized Edge to Cloud Orchestration, is an innovative open-source framework, currently in development, aiming to boost Edge-Cloud infrastructure efficiency and infrastructure resilience. It focuses on optimizing application deployment and runtime within Kubernetes environments through cognitive and cross-layer orchestration, spanning data flow, computation, and network layers. CODECO provides a comprehensive view of data in the IoT-edge-cloud continuum.
CODECO's dedicated Innovation and Research Community Engagement Programme encourages collaboration among developers, SMEs, and research communities. This talk targets stakeholders keen on advancing Edge-Cloud orchestration. Attendees will grasp CODECO's principles, objectives, and key research contributions, recognizing its potential impact across various industry verticals. They will come away with knowledge of open-source toolkits, training resources, and use-case deployments (across Smart Cities, Energy, Manufacturing, and Smart Buildings). The talk will highlight an example deployment of the CODECO framework in the smart city of Göttingen, showcasing impressive energy efficiency and the use of AI on edge-cloud devices to enable smart solutions (including smart buildings, infrastructure monitoring, and vehicular safety enhancement). It also details the architecture and components of the framework.
Eclipse BlueChi (https://github.com/eclipse-bluechi/bluechi) - formerly known as Hirte - is a multi-node systemd service controller facilitating deterministic state transitions for systems with limited resources. Its significant focus is on the ability to run in the highly regulated, safety-critical automotive industry.
This talk shows how Eclipse BlueChi extends systemd for managing and monitoring services across multiple nodes by explaining its overall architecture and exposed API. In addition, it illustrates Eclipse BlueChi's approach to resolving inter-node dependencies between services, as well as basic performance metrics like CPU and memory utilization. This presentation also dives into the seamless integration with Podman, forming a resource-friendly and deterministic container orchestration tool. This is considered a key enabler for the so-called Software Defined Vehicle (SDV).
At Gen, it's essential for us, as an AV company, to share our know-how and participate in innovations that move the industry forward and help protect users worldwide.
In recent years, we have open-sourced and contributed to several projects, and I would like to present a selection of them to you, with practical examples of their usage.
My team focuses on development around YARA - a great pattern-matching tool used by malware analysts all over the world. We are actively contributing to this tool and developing additional tools that make analysts' lives easier.
In this talk, I will introduce GenRex, our latest published project for generating regular expressions for YARA rules; YARI and YLS, development tools for YARA; and YARA-X, the ongoing development of a new generation of YARA tools.
Among others, I will also present the results of the work of students from FIT BUT who participated in these projects, for example, as part of their final bachelor's and diploma theses.
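For readers new to YARA, a minimal rule and scan look roughly like this sketch (rule contents and file names are illustrative):
# write a trivial rule matching a hard-coded string
cat > example.yar <<'EOF'
rule SuspiciousURL
{
    strings:
        $u = "http://malicious.example" ascii
    condition:
        $u
}
EOF
# scan a sample with it
yara example.yar sample.bin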
In the realm of software development, CI/CD stands as a cornerstone, guiding us towards efficiency and agility. However, as the pace of technology accelerates, the traditional tools of Bash and YAML, which have been reliable allies, are starting to show their age. This session is a call to arms for developers, DevOps practitioners, and sysadmins who are on the lookout for the next evolution in CI/CD practices.
We'll dive into the concept of CI/CD as code, spotlighting innovative tools like Dagger.io that offer fresh perspectives and solutions. Our focus will be on how these new approaches can address the challenges we face with current practices, enhancing scalability, maintainability, and the overall efficiency of our pipelines.
Our journey will be both exploratory and practical, aiming to equip you with the knowledge and tools to navigate the changing landscape of CI/CD. By sharing real-world experiences and strategies, we'll foster a collaborative environment where we can all learn from each other's successes and hurdles.
Join us as we venture into the future of CI/CD, embracing the possibilities that CI/CD as code brings to the table. This session is not just about listening; it's an invitation to engage, experiment, and elevate our CI/CD practices together. Let's embark on this path of continuous improvement, ready to transform our pipelines and set new benchmarks in software development.
In today's constantly evolving digital era, ensuring the reliability and resilience of cloud native applications is crucial. As these applications become more complex, chaos engineering has emerged as a powerful methodology for proactively testing and validating their robustness. Chaos Mesh, an open-source chaos engineering tool for Kubernetes, provides a comprehensive framework for orchestrating chaos experiments in cloud native environments.
In the session, I'll take a deep dive into Chaos Mesh, exploring its essential features and functionalities, and demonstrating how it can simplify the chaos engineering process for cloud native applications.
I'll also discuss a few companies that have solved their problems using chaos engineering. For instance, Netflix's Chaos Monkey was one of the pioneers in Chaos Engineering. They intentionally disrupted their systems to identify and address weaknesses, leading to improved fault tolerance and a more resilient streaming platform.
In addition to exploring Chaos Mesh's essential features and functionalities, we'll delve into best practices for integrating chaos engineering into your application development lifecycle and how it can reduce downtime and enhance system reliability. Let's discover how you can apply these insights to enhance the performance and stability of your own applications.
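As a hedged sketch of what a Chaos Mesh experiment looks like (namespace and labels are illustrative), killing a random pod of an application can be declared like this:
kubectl apply -f - <<'EOF'
apiVersion: chaos-mesh.org/v1alpha1
kind: PodChaos
metadata:
  name: pod-kill-demo
  namespace: demo
spec:
  action: pod-kill   # kill the selected pod(s)
  mode: one          # pick one matching pod at random
  selector:
    namespaces:
      - demo
    labelSelectors:
      app: web
EOF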
This talk explores how autoscaling paradigms can enhance network ingress in Kubernetes.
With GatewayAPI becoming a popular choice to manage network traffic, we will see how KEDA's capability to interpret various metrics as scaling decisions can be used to automate deployment patterns. By bringing these two technologies together, I would like to advocate for a revised look into standard traffic management strategies, such as canary deployments, blue/green deployments, and A/B testing.
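As a rough sketch of the idea (the Prometheus address, metric query, and names are illustrative assumptions, not a recommended setup), a KEDA ScaledObject reacting to ingress traffic could look like:
kubectl apply -f - <<'EOF'
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: gateway-scaler
spec:
  scaleTargetRef:
    name: my-app            # the Deployment to scale
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring:9090
        query: sum(rate(envoy_http_downstream_rq_total[2m]))
        threshold: "100"    # target requests per second per replica
EOF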
Presenters and hosts of community workshops are ambassadors of the software projects they contribute to. During demos and interactive sessions, they can show and tell key features of the user interface and deliver content with the most effective settings of the Free and Open Source Software used for the workshop. You can help build the community through learning sessions and user groups.
This talk will share:
- Learnings from the monthly writing workshop for users and contributors of Fedora Linux and software packages. // Ongoing since Sep 2023.
- An initiative to create a new section in Fedora "Quick Docs": the "why" and "how". // Due to start July 2024.
The Linux kernel is one of the largest and most complex open source projects in the world. It's constantly changing and evolving. Getting into Linux kernel development is not an easy task: the code base is complex, the documentation is lacking and lags behind the code, and the knowledge needed to contribute is mostly held within the active developer community itself.
Starting in fall 2024, Red Hat will bring to Masaryk University a for-credit course aimed at students who want to break through the initial knowledge barrier and cross the gap between academia and the Linux industry. In this class we will cover an introduction to the kernel development process, understanding kernel subsystems, and writing your own device drivers.
We welcome you to be part of a course that is running its 4th iteration in the US and to tap into the tribal knowledge of the kernel community.
In the world of cloud-based software, developers enjoy the advantage of fully automated testing. Continuous Integration (CI) systems check every software update, testing every part of the software. However, this level of automated testing hasn't been easy for embedded software and devices. The main reason is their dependence on specific physical hardware, which traditionally requires manual testing by a person. This manual approach is slow and becomes more challenging as the number of device variants grows.
Jumpstarter was created to help with that. It introduces a way to test embedded software automatically, just like cloud software. With Jumpstarter, we can test our software on the actual hardware it will run on, but without needing someone to manually handle the device for each test. This is a big step forward because it makes testing faster, more consistent, and less reliant on manual effort. It fits right into existing CI/CD systems like GitHub CI, GitLab CI, Jenkins, Tekton, etc., making it easier for developers to include it in their workflow.
Testing on hardware requires a testing harness to connect your hardware to Jumpstarter. Since we couldn't find anything readily available and open, we also created the dutlink board, which allows power control and metering, storage flashing, console access, and other functions. We are working on release 2.0.0 with additional functionality and expandability. Additionally, the Jumpstarter software is designed with a driver architecture that enables the creation of drivers for additional testing harnesses.
This talk will explain how Jumpstarter makes embedded software testing easier and more efficient. We'll explain how it can save time and ensure better testing results, which is crucial for developing reliable embedded systems.
We will bring the Jumpstarter board with us and share details to let you build your own or get it built for you.
In today’s landscape there are an increasing number of scenarios (smart cities, traffic management, self-driving cars, manufacturing, healthcare, etc.) where applications on edge devices need to be delivered and updated more frequently than ever. This requires a great level of automation and agility without leaving security behind.
In this context, the lead time from the inception of a business idea to having it live in production on edge devices can be very long, leading to increased customer dissatisfaction. There is a lot of unnecessary friction and rework caused by the need for different platforms, technologies, and skills.
What can be gained if we adopt, in this context, the abstraction capabilities introduced by Kubernetes deployed on a heterogeneous environment (on premises, in the cloud, at the edge)?
In this session we will see the values and benefits of adopting a unified platform based on Open Source technologies that allows us to build and deliver applications faster to edge devices.
In particular we will see:
* How to build an application with a secure software supply chain in the cloud
* How to deploy the built containerized applications securely on an edge device running a single-node Kubernetes cluster
* How the process is simplified and accelerated from inception to production deployment on edge devices
How do you measure success as a developer? Is it how good your code looks? Is it how well covered it is by unit tests? We think that while those are required, that's not it.
Could it be the absence of open bugs, then? Someone actually bothered to open a bug / issue on it... Sorry to burst your bubble, but that's still not it; usually, projects without bugs just mean... they're not being used.
Thus, in our humble opinion, the code must matter to someone, and it must be used in production by as many people as possible - to the point they contribute, request features, and report broken "things".
Now the thing is: how do you get end users to use your stuff? It's a non-trivial problem: unless you come up with something useful - and entirely new - odds are you're hoping people will replace one existing component of their stack (sure... it had its problems...) with your new shiny toy... which only you care about!
And we all know migrations have their downsides... It usually takes a big improvement for people to be willing to pay the price.
Join us in this talk as we discuss strategies to get your pet project out there in the wild and used as it should be / deserves.
In this research, the main focus has been on using Open Source Intelligence (OSINT) for Cyber Threat Intelligence (CTI) to improve the protection of Enterprise IoT environments. OSINT data can be collected from public databases (e.g., the CVE database) and used for CTI and automated detection with AI. Federated Learning is used to enable distributed learning across multiple networks: local CTI servers can communicate with central CTI servers and exchange data to inform other networks about possible attacks, an approach that can be expanded to a global level. You will receive an overview of the latest research and of the idea of exchanging OSINT data for CTI across all organizations worldwide, so that we can achieve a more secure world.
VMware was recently acquired by Broadcom and, as widely predicted, the new owners have raised VMware's prices, in some cases massively, with 10x price increases being reported.
There is a way out. You can say goodbye to VMware and take your virtual machines with you to Linux using the virt-v2v tool. Virt-v2v has been in development since 2008 and has liberated millions of virtual machines from proprietary software.
In this lightning talk, Richard Jones, one of the original developers of virt-v2v, will show how it works and what it does.
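A typical invocation looks like this sketch (the hostnames, guest name, and output mode are illustrative):
# convert a guest from a vCenter server to local libvirt storage
virt-v2v -ic 'vpx://admin@vcenter.example.com/Datacenter/esxi?no_verify=1' guest-vm \
  -o local -os /var/tmp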
This DevConf.cz side event is an opportunity for our community and anyone interested to join Logos core contributor Vaclav Pavlin to discuss topics technical and philosophical – from the ethos of the cypherpunks to the latest in cryptographic research – in relaxed surroundings.
Logos is a grassroots movement dedicated to building and sustaining a fully decentralised, privacy-preserving, and politically neutral technology stack in the spirit of the cypherpunks.
Sign up to join us on June 13th 2024 for food, drinks, and stimulating discussions.
Programme:
18:00 - 19:00: Drinks and discussion
19:00 - 20:00: Cypherpunks Write Code (documentary screening by Reason)
21:00: Soft close
Limited capacity | Registration required: https://lu.ma/la3
People talk about “Linux containers”, forgetting that the part actually called “Linux”, the kernel, isn’t in the container.
But what if you could include a kernel in your container image, and what if you could boot that image? What if you could commit the definition of your whole Linux system to version control? What if you could push around images for the entire system, just like you can with containers? And finally: what if this was a documented and tested first-class workflow supported by your Linux OS/distribution?
Let’s take the practices, tooling and standards that have grown around OCI containers for applications and apply them to the operating system. Let’s deploy and update the host via those same patterns, rather than individual fine-grained packages. As we emphasize derived, consumer-owned builds, let’s make it ergonomic to create and maintain a complete trust chain all the way from the boot loader through the OS right through to existing containerized apps. Let’s bring immutability, auto-updates, and resets along as well.
We’d like to show how this can work practically, with real world applications, and built out of the packages we have today. We’ll look at the projects that are working on various parts of this puzzle.
There’ll be demos, there’ll be prizes, there’ll be cheers, there’ll be tears. This work has gotten us excited about the operating system again, and we’d love to share it with you.
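A minimal sketch of the workflow, assuming a registry you control (image names are illustrative):
# build an OS image the same way you build an application image
cat > Containerfile <<'EOF'
FROM quay.io/fedora/fedora-bootc:40
RUN dnf install -y htop && dnf clean all
EOF
podman build -t quay.io/example/my-os:latest .
podman push quay.io/example/my-os:latest
# on the target host: point the system at the new image and update
sudo bootc switch quay.io/example/my-os:latest
sudo bootc upgrade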
Want to know how to submit a change to the Fedora Project? Join me, the Fedora Operations Architect, for this short talk on the Fedora changes policy: what it is, why it's there, and how you can use it to get your work into the next release of Fedora!
As we stand at the cusp of the quantum computing era, the fragility of traditional cryptographic systems has become increasingly evident. To this end, the Post Quantum (PQ) transition is already a reality. The National Institute of Standards and Technology (NIST) has finalized a few post-quantum algorithms that can be widely implemented in software. While companies like IBM, Google, and others are invested in building successful quantum computers, software vendors like Cloudflare, Chrome, and Mozilla have added support for PQ algorithms. For software adoption specifically, open source is at the core. In this talk we will introduce QUBIP (https://qubip.eu/), an EU-funded project that is engineering the PQ transition. In this two-part talk we will cover the following:
Part 1 of the talk:
1. What is the PQ transition and why is it relevant to the open-source community? What are its challenges?
2. What are the open-source components and standards involved?
3. Current status in Red Hat and Fedora.
4. Future plans.
Part 2 of the talk:
1. Introduce QUBIP and its goals.
2. Talk about the various partners involved and their specific roles.
3. Current status.
4. Future plans.
5. Demo of the working PQ algorithms in Fedora.
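For the curious, here is a quick way to check whether your local OpenSSL exposes post-quantum KEMs; this assumes a build with ML-KEM support (e.g. a recent OpenSSL or one with a PQ provider loaded), which is an assumption, not a given on current distributions:
# list available KEM algorithms and look for ML-KEM
openssl list -kem-algorithms | grep -i ml-kem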
Join our 'Foreman provisioning open forum' to explore the extensive provisioning capabilities of Foreman. We'll dive into how Foreman streamlines the setup of physical and virtual servers, integrating with a variety of platforms for a seamless workflow.
Following the presentation, we invite you to engage in a lively discussion about your environments, tools, and practices.
We'd like to use this fantastic opportunity to exchange insights, learn from peers, and perhaps discover new strategies to enhance your provisioning processes. Whether you're new to Foreman or a seasoned user, your input is valuable!
Do you have a ton of small, low-priority issues that no one works on? Do you have problems finding interns for your project? Do you enjoy teaching others?
Once upon a time we answered yes to those questions and tried something different. We'll talk about how a Software Factory course at the University of Helsinki gave us the idea to onboard a class of students onto our project, working on small issues as community contributors and gaining hireable GitHub profiles as part of a Software Engineering class at Mendel University. We'll talk about all the highs and lows of such a journey. There were plenty of both :)
We'll start with how surprisingly easy it was to get managers and teachers on board. Then we'll talk about finding the right tasks to split amongst the group, showing them the basics of contributing on GitHub, getting the project working and handling pull requests spikes. And we'll end with how this helped us find amazing teammates.
Please join us for a session where we will speak about how we plan to leverage RHEL’s strengths for supporting ISO 26262 Functional Safety Certification. By repeating maintainer prescribed tests, we are working towards enabling tests within the safety scope to be executed against the target hardware. Additionally, we are contributing to a framework responsible for performing additional validation checks on top of the test artifacts to provide Key Performance Indicators (KPI) to be used towards continuous certification.
We're in the middle of bringing up CentOS Stream 10, so why should we be talking about RHEL 11 already? Because it's active now in Fedora! In this talk we'll share some lessons we've learned from the current major/minor release cycle so far and discuss how we expect to share feedback over the next few years.
Do you want to work with AI/ML but aren't sure how to get started in a way that doesn't involve a chat bot? This workshop will go over how AI/ML experiments are set up and run with an example project that you can take with you to iterate on and learn with.
Podman 5 is the first Podman major release in two years, and includes a number of new features - and some big changes. In this talk, we will explore what makes a major version (and why the Podman team chose to release Podman 5 now), highlight important features and changes, and explain the logic behind many of the team's decisions. Attendees will learn about Podman, the team's push towards multi-platform support, and the difficulties of long-term software maintenance.
Free Software started out as a movement to empower the downstream user and give end-users a way to be in control of their own technology. But with the rebranding to Open Source and the commercial success, this aspect has taken a backseat to corporate interests, and cloudification has driven a general trend toward centralization that has not spared Open Source. At the same time, a rift has appeared between the very much Open Source-based decentralization movements - namely the Fediverse and Web3 - and mainstream Open Source development. Some of this rift is a reflection of different technology philosophies, some of a broader cultural rift.
This talk explores the issues and attempts to map a path for overcoming the rift.
Roll up your sleeves and prepare to write and submit your first glibc patch! Last year, I gave a talk about this. This year we go hands on.
The goal: produce small, incremental improvements and get new names on the commit log.
We will pick the low hanging fruit first: typos, doc fixes, improving existing tests and writing new ones. Experienced C hackers are also welcome. Perhaps we fix a bug or three?
I'll provide:
- A solid introduction to glibc and its codebase
- A list of well defined, small, individual problems for everyone to solve (some very easy)
- Page sized cheat-sheets on how to work on each of the problems
You'll bring: A Linux laptop with the necessary tools installed (I'll provide details in advance), some understanding of git, and maybe a bit of comfort with C and Makefiles.
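If you want a head start, the basic build steps look like this (glibc must be built outside its source tree; we build and test, but do not install):
git clone https://sourceware.org/git/glibc.git
mkdir glibc-build && cd glibc-build
../glibc/configure --prefix=/usr
make -j"$(nproc)"
make check   # run the test suite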
Have you ever caught yourself struggling with feedback you needed to give? Perhaps not when it is only positive feedback, but what about criticism? Each of us has gaps in some skills, but we do not always see them, or maybe we do not want to admit them. Why is it so uncomfortable to provide criticism? Can we lose good relationships and our jobs, or become unpopular? What factors must we consider when we want to give proper feedback? Is it a safe environment? Trust? The right communication? And what about emotions or our ego? It is a tricky topic, but I want to create more awareness and encourage people to deal with it.
We will navigate through lesser-known command-line options, exploring how they can be harnessed to streamline storage management tasks and unlock new capabilities.
We will discover how to use lvm2 with thin provisioning, compression, and deduplication, and how to easily convert an existing volume into a thin volume with a single lvm2 command, as sketched below.
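As a taste, a hedged sketch of the commands involved (the volume group, volume names, and sizes are illustrative):
# create a thin pool and an overprovisioned thin volume
lvcreate --type thin-pool -L 100G -n tpool vg0
lvcreate --type thin -V 500G -n thinvol --thinpool vg0/tpool
# create a VDO volume with compression and deduplication
lvcreate --type vdo -L 100G -n vdovol vg0
# convert an existing volume into a thin volume (one possible form)
lvconvert --type thin --thinpool vg0/tpool vg0/lvol0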
In automotive, Android guests are currently used for deploying infotainment systems. To support Android as a guest, the Virtual Machine Monitor (VMM) requires a set of virtual hardware for sound, video, block devices, and networking. This talk presents the current status of efforts to implement VirtIO sound for infotainment systems in automotive. We will shed some light on our decision to implement VirtIO sound in Rust as a vhost-user device under the Rust-VMM project umbrella.
Having VirtIO hardware allows Android to be deployed in different VMMs that support VirtIO, such as crosvm or QEMU. This deployment has benefits like reducing the attack surface of QEMU and enabling more granularity in setting up rights for the device process. Our VirtIO sound implementation is able to handle different audio backends by relying on a generic interface; currently, we support PipeWire and ALSA. During this presentation, we will share our journey in building the virtio-sound device, including improving its specification, fixing bugs in the virtio-sound driver, and building it as a rust-vmm project. We also plan to outline a roadmap for the future, such as adding support for other audio backends like GStreamer.
We will demo and describe the setup used in order to play audio from a guest application to the host using our virtio-sound device.
We will also go through some general tips on how to configure the guest to enable the virtio-snd driver module, as well as effective ways to use QEMU, which may be useful to the audience.
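One possible invocation using QEMU's built-in virtio-sound device with a PipeWire backend is sketched here (this assumes QEMU 8.2 or newer; the machine options and disk image are placeholders):
qemu-system-x86_64 -M q35 -m 2G \
  -drive file=guest.img,format=qcow2 \
  -audiodev pipewire,id=snd0 \
  -device virtio-sound-pci,audiodev=snd0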
CERN, the European Organization for Nuclear Research, is one of the world's largest centres for scientific research. Not only is it home to the world's largest particle accelerator (the Large Hadron Collider, LHC), it is also the birthplace of the Web in 1989.
Since 2016, CERN has been using the OpenShift Kubernetes Distribution to host a private platform-as-a-service (PaaS). This service is optimized for hosting web applications and has grown to tens of thousands of individual websites.
By now, we have established a reliable framework that deals with various use cases: thousands of websites per ingress controller (8K+ routes), long-lived connections (30K+ concurrent sessions), and high-traffic applications (25TB+ per day).
This session will discuss:
* CERN's web hosting infrastructure based on OpenShift Kubernetes clusters;
* usage of open source and in-house developed software to provide a seamless user experience;
* integrations for registering hostnames (local DNS, LanDB, external);
* provisioning of certificates (automatic with external-dns / ACME HTTP-01, manual provisioning);
* access control policies and "connecting" different components with OpenPolicyAgent;
* enforcing unique hostnames across multiple Kubernetes clusters;
* strategies for setting up Kubernetes ingress controllers for multi-tenant clusters;
* methods for scaling and sharding ingress controllers according to the application's requirements (specifically HAProxy ingress controllers).
Come and learn how to secure your application workload on Kubernetes.
A Supply Chain Security toolset aims to safeguard the software development lifecycle (SDLC), managing the risks and vulnerabilities using tools that integrate continuous safety in a DevOps ecosystem.
Tekton, mostly known for its CI/CD features, is a suite of tools that recently gained a new Supply Chain Security project under the name of Tekton Chains. During this talk, we will show how to check the provenance and the signature of an image before deploying it on a Kubernetes cluster.
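As a hedged sketch of the verification step (the key file and image name are illustrative), checking an image signed and attested by Tekton Chains could look like this with cosign:
# verify the image signature
cosign verify --key cosign.pub quay.io/example/app:latest
# verify the SLSA provenance attestation attached to the image
cosign verify-attestation --key cosign.pub --type slsaprovenance quay.io/example/app:latest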
In 2022, TutorStack, an open source project supported by local staff in Red Hat Waterford and academics and students from SETU, was presented remotely at DevConf. The talk demonstrated a Learning Experience Platform designed to engage with students proactively, facilitating early interventions for online students struggling silently, out of sight of the lecturer. However, there was no way to demonstrate the Tutors-Live “social presence” features of the open source Tutors platform, live, without releasing sensitive data (GDPR).
Since then, the tutors.dev open source community has quadrupled, utilizing Holopin digital badging. It supports over 150 active programmes across multiple universities on four continents. On the road to enhancing and growing the “social presence” features, we have built Tutors-Simulator to address the privacy issues highlighted above. It utilizes generative AI tools and frameworks on top of a tech jamstack including Node.js, TypeScript, Svelte, and PartyKit. In Tutors-Simulator, courses are real and open; people are not.
Join us in Brno, where we will demonstrate Tutors Simulator, give a technical walkthrough, and a visualization of our future “social presence” plans, as we invite you (especially educators, developers, & documentation writers) to join and contribute to tutors.dev success.
https://tutors.dev/simulate
When it comes to testing the accessibility of our web apps, most of us have used Chrome Lighthouse to generate a report and used the results to improve our apps' accessibility. However, that is not the only way to test for accessibility, and those tests can only detect a subset of issues. Did you know there are many other ways we can test for accessibility using those same dev tools?
In this talk, we will delve beyond the conventional use of Chrome Lighthouse and uncover several alternative methods within the developer tools to test web accessibility. We will explore techniques such as inspecting the accessibility tree, gaining insights into ARIA attributes, emulating various vision disabilities, and more.
So, join me in this session, where we will unlock the potential of dev tools to unveil a diverse range of accessibility issues. Let's all learn together and improve how we test for web accessibility, and make the web inclusive for all our users.
Dive into the world of containerization tailored for application developers using the innovative features of the open-source tool Podman Desktop. We will explore how Podman Desktop simplifies the containerization journey, particularly highlighting its key features like:
- Support for Compose, enabling smooth management of multi-container applications.
- Local testing and development on Kubernetes using tools like Kind or Minikube, ensuring a seamless transition from development to production.
- Enhanced support for viewing Kubernetes resources such as Deployments, Services, Ingress, & Routes
Join us to see how to move from a local application to a container, pod, and ultimately, a multi-tier application deployed on Kubernetes! During the talk, we’ll also share practical tips and tricks on how to work more efficiently with containers and Kubernetes.
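One of those tricks, sketched here with illustrative names, is letting Podman generate the Kubernetes YAML for you:
# build a pod locally, export it as Kubernetes YAML, then replay it
podman pod create --name demo -p 8080:80
podman run -d --pod demo docker.io/library/nginx
podman kube generate demo > demo.yaml
podman pod rm -f demo          # remove the original so the name is free
podman kube play demo.yaml     # recreate the pod from the YAML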
The sweetest meetup at DevConf. Bring a candy, eat a candy.
Good job! You’ve successfully built a great piece of software. Now the hard part begins. Be it a developer tool, SaaS product, library or app, you will want people to use your new shiny thing. One of the key areas we’ve found exceptionally impactful is enabling people to “play” early on.
This talk focuses on an aspect of software development that is often overlooked: The Developer Experience. While user experience is targeting the end-user of the software, the developer experience targets the SREs, DevOps and Software Engineers tasked with evaluating, setting up and operating your software. Improving your developer experience will not only improve the odds of someone sticking with your solution, but will also make life much easier for people contributing - external and internal alike!
In this session, you'll gain useful insights into the world of developer experience and discover actionable ideas to enhance your own projects. Learn from real-world examples and understand how prioritizing developer experience can lead to more successful software adoption.
In this session we will walk through Event-Driven Ansible (EDA) use cases and the benefits of using EDA. We will also go through rulebooks and how you can create custom rules for Event-Driven Ansible. Rulebooks are written in YAML and are used like traditional Ansible playbooks, which makes it easier to understand and build the rulebooks we need. One key difference between playbooks and rulebooks is the if-this-then-that logic a rulebook needs to make an event-driven automation approach work, as sketched below.
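As a hedged sketch of that if-this-then-that style (the event source, condition, and playbook name are illustrative), a rulebook can be as small as:
cat > webhook-rulebook.yml <<'EOF'
- name: Restart a service when an alert arrives
  hosts: all
  sources:
    - ansible.eda.webhook:   # listen for JSON events over HTTP
        host: 0.0.0.0
        port: 5000
  rules:
    - name: Restart on "down" status
      condition: event.payload.status == "down"
      action:
        run_playbook:
          name: restart-service.yml   # hypothetical playbook
EOF
ansible-rulebook --rulebook webhook-rulebook.yml -i inventory.yml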
Let's have a small meetup open to everyone, not limited to Fedora contributors. We'll have a chat about Fedora, contributing to Fedora or just about anything and get the opportunity to catch up with people. This is your chance to meet Fedora contributors face to face and find out who is behind that FAS nick 😉
Eclipse BlueChi (formerly known as Hirte) is a systemd service controller intended for multi-node environments with a static number of nodes and with a focus on highly regulated ecosystems such as Automotive and Edge.
In this workshop we will explore how BlueChi can be used to manage systemd services in a deterministic way with multiple computing units.
We will set up a virtualized multi-node environment. Then we will use BlueChi's command line tool to retrieve information from the different systems, start and stop systemd services, and monitor their status. After getting acquainted with the provided tooling and involved components, we will capitalize on BlueChi's seamless integration with Podman containers (using quadlet). We will deploy a containerized application, run it under systemd, and manage it via BlueChi. But what if this application needs another service to be running elsewhere? To resolve such inter-node dependencies we will rely on BlueChi's proxy services feature.
If you are planning to attend the workshop, it would be great to have some preparations made in advance:
- We made sure that the setup works on Fedora 40 and Ubuntu 22.04.4 LTS. If you are running one of these - great! Otherwise, especially if you are on Mac or Windows, having a VM with Fedora 40 pre-installed will make sure everything works as expected.
- Install podman: https://podman.io/docs/installation
- Pull the bluechi-workshop container image: sudo podman pull quay.io/bluechi/bluechi-workshop
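To give a flavour of the tooling we will use, a few bluechictl commands, sketched here with an illustrative node name:
bluechictl list-units                     # list units across all managed nodes
bluechictl start worker1 httpd.service    # start a unit on a specific node
bluechictl monitor worker1 httpd.service  # watch its state changes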
I will be delivering a presentation on how to improve the developer experience by using the Backstage Developer's Portal. Backstage is an open-source platform that allows you to create your own developer portal. Many well-known companies, including Unity, Netflix, and Spotify, have already implemented this highly adaptable platform.
My discussion will center around the key features of Backstage, such as software templating, cataloging, searching, and a straightforward portal for all documentation. By utilizing Backstage, you can overcome various developer challenges, such as managing documentation, clarifying relationships between different parts of your software, identifying the responsible person for a particular module or source code piece, or launching a new project with best practices.
Furthermore, I will demonstrate how you can manage multiple applications from a single portal by creating backend plugins, and how you can enhance the user experience by offering custom frontend plugins.
Now, 1-2 years since LLMs barged into not just our work lives but also our personal ones, you might be feeling the heat to keep pace with the onslaught of new models hitting the scene every week and the pressure to AI-ify everything in sight. But fear not! In this session, we're all about distilling the timeless concepts behind AI models, pointing out where folks tend to trip up, and hopefully, giving you the mojo to dive in and experiment without drowning in the AI ocean. And hey, we're throwing in some 90s grunge music vibes because let's face it, AI is the ultimate remix of humanity's past data.
This session offers a practical introduction to Suricata, a renowned open-source Network Intrusion Detection and Intrusion Prevention System, focusing on its role in detecting and mitigating network threats. Through a series of practical exercises, participants will gain insights into the fundamentals of network security and how Suricata operates within this domain.
This workshop lets the attendees first soak up the knowledge required to properly deploy Suricata at the right place in the network. Attendees will then complete a series of exercises that enable them to evaluate network traffic, identify threats and anomalies, employ and understand world-class security rules, and explore what else Suricata can provide.
This is a unique opportunity to explore Suricata's features and how they can be leveraged to enhance network security, presented by members of the Suricata team. We invite you to join this workshop to refine your network defense skills and advance your understanding of effective security practices with Suricata.
For this workshop, you'll need:
A laptop on which you can install Suricata. Ubuntu is the most common OS, but you can also use another OS or a virtual machine.
While not required, basic knowledge of networking can help.
To leave more time for the exercises, please try to come with Wireshark, Suricata, and EveBox installed.
How to install Suricata (on Ubuntu/Debian/CentOS, etc.):
https://docs.suricata.io/en/latest/install.html#ubuntu-from-personal-package-archives-ppa
How to install Evebox:
Installation through APT/RPM repository is recommended
https://evebox.org/docs/install/
You can verify the installation by:
- downloading some pcap e.g. from here: https://wiki.wireshark.org/samplecaptures
- running the pcap through Suricata and Evebox with this command:
suricata -r |PATH_TO_PCAP| -l /tmp/ -S /dev/null -k none && sudo evebox oneshot /tmp/eve.json
In the local EveBox web UI, in the events section, you should now see Suricata events.
For years, LUKS2 (Linux Unified Key Setup), implemented through the cryptsetup library, has provided a convenient way to set up FDE (full disk encryption) in many Linux distributions. Until recently, the cryptsetup project rejected any suggestion to add support for closed, hardware-based FDE implementations. This changed with a recent version of cryptsetup (2.7.0), where we introduced support for OPAL2 standardised self-encrypting drives directly in the LUKS2 format.
In this presentation, we will outline a series of improvements in the Linux kernel that opened the way to integrating OPAL2 drives with the LUKS2 format.
We will focus on the integration of OPAL2-enabled drives in systems, how it may help harden data-at-rest encryption security, and what other benefits the feature brings to both personal laptop users and enterprise customers where requirements for compliance with FDE criteria may apply.
In the end, we will demonstrate, on the current Fedora distribution, the seamless integration of a LUKS2 OPAL2 device ready to use out of the box.
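A sketch of the feature being demonstrated (the device name is illustrative; cryptsetup 2.7.0 or newer is required):
# combine LUKS2 software encryption with the drive's OPAL2 hardware encryption
cryptsetup luksFormat --hw-opal /dev/nvme0n1
# or rely solely on the drive's hardware encryption
cryptsetup luksFormat --hw-opal-only /dev/nvme0n1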
Gear up for a mind-blowing experience in this 25-minute technical presentation where we unravel the intricacies of digitally storing human consciousness. Through a thrilling journey across the cutting-edge landscape of mind storage technologies, we will answer the question "How much space does it take to capture the essence of a human mind?" Sprinkled among the technical discussions will be a touch of excitement and mind-bending ethical considerations regarding mind preservation. Ask yourself, is preserving human consciousness a scientific puzzle or an ethical enigma? We invite you to join us for a lively discussion on privacy, ethics, and the security measures safeguarding the digital frontiers of consciousness. You don't want to miss this!
Kubernetes has firmly established itself as the leading platform for container management, a position so universally recognized that the industry has moved beyond questioning its value. We've embraced Kubernetes to such an extent that we're now developing additional platforms on top of it. However, when it comes to creating Software as a Service (SaaS) platforms or multi-tenant environments, Kubernetes presents certain challenges, particularly regarding multi-tenancy and the complexities of having multiple stakeholders operate the same platform.
In this presentation, we'll explore an innovative approach to leveraging the Kubernetes API for building SaaS platforms. We'll introduce KCP, a project under the Cloud Native Computing Foundation (CNCF) that is currently in the sandbox stage. KCP aims to provide a generalized framework for Kubernetes-like control planes, offering a promising solution to some of the inherent drawbacks of Kubernetes in SaaS/platform development. Join us as we delve into how KCP can redefine the way we think about and utilize Kubernetes for building robust, scalable SaaS platforms.
The Rust programming language has spread worldwide as a new member of the C-family of languages. But what are the benefits of using it?
Over the last few years, leading high-tech companies and projects have started using Rust, including rewriting apps in it. Rust is widely known as a memory-safe language that provides low-level control over system resources, allowing developers to write highly performant code. Why should we start thinking about switching from common languages such as C or C++ to the new way of thinking with Rust?
This session is suitable for people with C-family language experience. Of course, those interested in learning about Rust are also welcome. The talk includes examples of how Rust solves our day-to-day problems.
In this thought-provoking lightning talk, we'll explore the intersection of software development and sustainability, unraveling the potential impact of eco-friendly coding practices on the future of technology. From optimizing energy consumption in data centers to reducing the carbon footprint of applications, join us as we delve into sustainable coding techniques, best practices, and real-world examples that empower developers to contribute to a greener, more environmentally conscious digital future.
Key points:
1. Understanding the Environmental Impact of Software
2. Optimizing Data Center Energy Efficiency
3. Green Coding Practices
4. Renewable Energy Integration in Development
5. Success Stories in Sustainable Development
It's important for open source to embrace AI and lead the way for responsible AI. We need to fight against a future where AI is controlled by a few richly resourced corporations with massive infrastructures and where we have no control over the AI black boxes that we anticipate will increasingly impact humanity. Part of this open source AI effort will require enabling the training, fine-tuning, and running of AI on local user systems, which is essential for data privacy and the ability to modify the technology.
In this talk, you will learn about the new open source AI project "InstructLab" and a novel approach to aligning a model, which have incredible potential to enable truly open source AI technology. We will talk about related projects and tooling, and demonstrate an end-to-end developer workflow.
AI is at a critical inflection point in technology and is moving incredibly fast. Get up-to-date all in one talk and learn the open source tools you should be keeping an eye on now!
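As a hedged sketch of that developer workflow (the subcommand names reflect the project's early 2024 CLI and may well have changed since):
pip install instructlab   # install the ilab CLI
ilab init                 # set up the config and taxonomy checkout
ilab download             # fetch a base model
ilab generate             # create synthetic training data from new taxonomy entries
ilab train                # fine-tune the local model
ilab chat                 # talk to the result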
Fedora is not the only distribution that will get a shiny new web-based installer. So let's look at Agama, the future installer for openSUSE and SUSE distributions. We will discuss why we need a new installer and which features will be included, and there will even be room for some technical details (so expect Ruby, Rust, and React content). The non-exhaustive list of Agama features we'll cover includes automatic installation with scripting support, advanced partitioning, and support for transactional systems. We will also discuss why we ended up moving from Cockpit to a custom HTTP-based solution.
In today’s cloud-native landscape, managing and observing multiple clusters is a common requirement for ensuring robust, scalable, and reliable service delivery. However, achieving a comprehensive view across these clusters, while maintaining performance and limiting costs presents numerous challenges. Drawing from our customer experience in multi-cluster environments, we have a few critical lessons to guide practitioners toward effective multi-cluster observability. In this session, we delve into practical strategies and best practices covering areas such as centralized monitoring, logging, tracing, visualization, and analytics.
One of the drawbacks of Python is that applications written in this language run relatively slowly. However, nowadays the situation is not bad at all, because there are both JIT (Just in Time) and AOT (Ahead of Time) Python compilers. What's more, classic CPython is constantly being improved, with a relatively big performance leap arriving in Python 3.11. There is also a variant of CPython without the Global Interpreter Lock (GIL). In this lecture, we will introduce Numba (JIT), mypyc (AOT), and GIL-free Python technologies.
In order to prove that automations do what they are intended to do, they should be run in an ephemeral testing environment which mirrors production: the same dependencies, infrastructure, apps, etc. Ansible Molecule is the preferred framework for testing Ansible Collections in ephemeral environments. In this session we will demonstrate how Ansible code developers can implement and use Ansible Molecule to test their work before it hits production environments.
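A minimal sketch of the loop we will demonstrate (run from within a role, with the scenario layout following current Molecule defaults):
pip install molecule
molecule init scenario    # scaffold a default test scenario
molecule test             # create the ephemeral environment, converge, verify, destroy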
In academia, analyzing various performance aspects of software is nowadays quite well-researched. Theoretically, one can automatically analyze the complexity of functions, generate time-consuming inputs, or efficiently profile the software with minimal overhead. However, while the results are always exciting, applying these techniques in practice is not so straightforward.
For over seven years, we have developed Perun: a performance management system and tool suite. We usually evaluated our techniques on smaller or medium-sized projects (at most half a million lines of code). So, this past year, we decided to move towards a bigger challenge: the Linux kernel.
In this talk, we will summarize our experience and the challenges we have faced when we moved from the academic field into the real world.
Using git on the command line
Using Git on the command line is about the most efficient way to use such a tool. It allows you to view the software's output as a simple text stream and type in commands line by line.
Signing git commits
- Bonus: the GnuPG workshop is recommended (a short point; no need to attend it before this talk).
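A hedged sketch of the signing workflow (the key ID is illustrative):
gpg --list-secret-keys --keyid-format=long    # find your key ID
git config --global user.signingkey 3AA5C34371567BD2
git config --global commit.gpgsign true       # sign every commit by default
git commit -S -m "Add feature"                # or sign a single commit explicitly
git log --show-signature -1                   # verify the signature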
We’re announcing the re-release of CommuniShift, the Fedora Community OpenShift cluster. CommuniShift is a place where community members can spin up things that might be interesting or useful to the community. They can safely experiment and figure out their applications on CommuniShift before later moving them onto Fedora Infra.
In our talk we want to show people how to request resources on the cluster. It's designed to be beginner friendly, with no previous knowledge required.
Traditionally, most test failure analysis tools have relied on text output, but OpenQA tests are primarily visual in nature, which makes it difficult to use those more common techniques. In Fedora, we have been exploring the use of AI/ML to classify test failures using the primarily visual information we have available. This talk will cover the techniques we are currently evaluating and using for classifying OpenQA test failures in Fedora.
Do you love coffee? Do you enjoy trying new beans or experimenting with different brewing methods? If yes, we would be happy to meet new friends during DevConf.cz that have the same hobby as we have!
Let’s meet, brew some coffee together and exchange our tips and tricks about coffee.
We'll bring some of the beans we love and make a lot of coffee using various methods (V60, Aeropress, Moka, French Press etc.). If you have your favorite coffee beans or method you want to show your coffee friends during DevConf.cz, we encourage you to bring it to this meetup as well!
Fedora offers a set of variants that are image based systems: Fedora CoreOS, IoT, Atomic Desktops (https://fedoraproject.org/atomic-desktops/). They provide atomic and reliable upgrades and let you run applications in containers or Flatpaks.
But sometimes, you still need to change what is in the image to add support for your hardware using out of tree kernel modules, install a security agent for compliance or fix a bug. So how do you customize the system when it is offered as a read only image?
To solve this use-case, we are turning the distribution into a container, to build and distribute OS images. Anyone will be able to modify the Fedora images to include their own packages, configuration files and changes and then distribute that to their systems, using any container tools or registries.
This is how the Universal Blue (https://universal-blue.org), Project Bluefin (https://projectbluefin.io/), Bazzite (https://bazzite.gg/) and related projects are successfully building on top of Fedora Atomic Desktops without having to create a completely new Linux distribution in the process.
In this talk we will give examples of how this works with Fedora CoreOS and Fedora Atomic Desktops.
Running Virtual Machines in Kubernetes (k8s) provides a few challenges for features that are already well-established in simpler virtualization environments. For a proper Cloud Native experience, KubeVirt implements the device plugins mechanism from k8s.
This talk will explain how KubeVirt's Device Plugins work by using the recent addition of the USB device plugin as an example.
Explore the next frontier of resilience in OpenShift with Ansible Event Driven automation. Learn how to optimize OpenShift Data Foundation storage resources through real-time monitoring and dynamic capacity adjustment. Experience the seamless integration of backup restoration techniques, empowering rapid recovery, reporting and minimizing downtime, all achieved without manual intervention.
Automation plays a pivotal role in today’s technology landscape. It streamlines repetitive tasks and ensures consistent, reliable results so creators can focus on innovating and collaborating. Content, such as playbooks, roles, and collections, is the lifeblood of automation. The more high-quality content you can create and deploy, the more valuable automation becomes. Enter Ansible development tools, a new curated package of tools for Ansible automation development. This workshop will guide participants through how to use these tools to create and deploy automation content with best practices.
There is no denying the fact that many development efforts have to be spent on existing applications - legacy that is - which typically exhibit a monolithic design based on traditional tech stacks. Thus, affected companies strive to move towards distributed architectures and modern technologies. This talk introduces you to the strangler fig pattern, which aids a smooth and step-wise migration of monolithic applications into separate services. The practical part shows how to apply this pattern to extract parts of a fictional monolith shedding light on its pros and cons in a tangible scenario. We will also have a look at the human and procedural aspects of employing this pattern and will tackle the technical challenges involved.
After this talk, you'll have a better understanding of and a concrete blueprint for extracting functionality from your monoliths, thereby gradually evolving into a (micro)service architecture and an en vogue tech stack.
If you work in software development and love Lego, then this session is for you!
In the not-so-distant future, mankind has finally planned our first manned mission to colonize Mars. The infrastructure is set, the rocket is ready, and now we just need a plan for what we need to build to help our first colonists be successful in sustaining life on a new planet.
Come be a part of the fun as you experience what it's like to apply the Möbius Loop rhythm of working, focusing on outcome delivery instead of a single, rigid framework. We'll leverage methodologies such as agile, human-centered design, and DevOps and meld them into a single rhythm of working that will propel us toward creating the first habitable city on Mars - with Lego!
Come ready to collaborate, move stickies around a wall, build with Lego, and help our first brave colonists take the next step in the human endeavor!
- Red Hat OpenStack Platform is a cloud operating system that helps control large pools of compute, storage and networking resources via a CLI or a web-based interface named the Horizon Dashboard.
- For a long time, OpenStack Director ran on standalone bare metal or in virtualized form.
- As the world embraces containerization to make applications manageable, scalable and portable, Director is joining the journey by harnessing the power of OpenShift.
- OSPdO empowers administrators to deploy and manage Red Hat OpenStack more efficiently and reliably.
- This combination opens many doors for industries to leverage both OpenShift and OpenStack, helping them innovate and grow their business.
- This session will give a quick overview of OSPdO along with its deployment and day-2 operations.
Developers are under constant pressure to focus efforts on those features that would bring the greatest benefits to their users. It would help to collect metrics that measure how different parts of the software are used, but unfortunately many users are reluctant to supply such data.
Differential Privacy encompasses a variety of techniques where a controlled amount of noise is added to data to allow the reporting of aggregated information without the ability to identify individuals. It remains an active area of research.
This talk provides a brief overview of the development of the subject over the last 20 years. It explains the key terminology and describes the main techniques available while providing a taster of some of the mathematics involved.
Finally it proposes ways these techniques might be used to address telemetry in Fedora.
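To make the core idea concrete, here is a minimal illustrative sketch (not taken from the talk; the epsilon value and the opt-in data are invented) of the classic Laplace mechanism applied to a counting query:

```python
import numpy as np

def private_count(records, epsilon=0.5):
    # A count query has sensitivity 1: adding or removing one user changes
    # the result by at most 1, so the noise scale is sensitivity / epsilon.
    return len(records) + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical telemetry question: how many users enabled feature X?
opted_in = ["user-a", "user-b", "user-c"]
print(private_count(opted_in))  # e.g. 3.7 - accurate in aggregate, while
                                # no individual's answer can be inferred
```

Smaller epsilon values add more noise, trading accuracy for stronger privacy guarantees.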
Nix is a package manager that introduced a novel approach to resolving dependencies, creating reproducible builds, and isolating conflicting package versions. Nix packages are defined in a declarative, functional, domain-specific programming language. They are stored in separate directory structures indexed by a cryptographic build hash, and injected into the runtime environment as needed using symbolic links. Nix forms the core of the NixOS Linux distribution, and inspired a similar GNU project called Guix using the Guile Scheme language. The innovations of Nix cause some challenges for software packagers used to producing more traditional RPMs or Debian packages, but Nix packaging also shares some commonalities with the software packaging process for desktop container platforms like Flatpak or Snap. This talk will present simple techniques for packaging software for Nix.
Join Jared Sprague, co-creator of the Red Hat Open Source Arcade, as he explores the creation of arcade.redhat.com, showcasing games crafted with all open source tools. We'll delve into the making of each game, uncovering the latest tools and technologies used in their development. Additionally, we'll examine the state of open source game development tools for all aspects of game development such as engines, graphics, and audio. After the presentation, play the games for yourself during the event at the Open Source Arcade station!
How to create an autonomous racecar?
Have you ever wondered what steps are necessary to create a fully autonomous vehicle? Are you interested not only in computer vision but also in mapping the environment or the hardware needed for autonomous movement?
Come and see how we developed an autonomous racing car in the TU Brno Racing team, what it took to debug, simulate, develop, and test it, and what challenges we faced in such development. Even if terms like YOLO, SLAM, or ROS mean nothing to you, we'll go through them together, and after this lecture, you'll have enough knowledge to create an autonomous vehicle.
At least in theory.
Main topics
- Perception - 3D cameras and LiDAR
- Localization and mapping - SLAM
- Path planning
- Vehicle control - software/hardware collaboration
RISC-V is a new instruction set architecture which, unlike Intel's x86-64 or ARM, is open source. It is rapidly being adopted by vendors from embedded to edge to AI. You might even have a RISC-V core or two in your PC right now.
The speakers started porting Fedora to RISC-V in 2016, and this year we expect to complete Fedora 40, 41 and CentOS Stream 10. Join this talk for a tour of the history of Fedora on RISC-V, where we are currently, and future developments in virtualization and platform standards.
Today, Kubernetes is the undisputed go-to platform for scaling containers. But for developers, Kubernetes can be daunting, particularly when working with the discrepancies between local and production environments. Podman and Podman Desktop bridge this gap. In this talk, you’ll be introduced to Podman and witness the unveiling of Podman Desktop, an open-source GUI tool that streamlines container workflows and is compatible with Podman, Lima, Docker, and more. Podman Desktop serves as a beginner-friendly launch pad to Kubernetes, enabling developers to spin up local clusters (with Kind and Minikube) or work with remote environments. A demo will help you navigate the paths necessary to transition from app to containers, to pods, and ultimately to Kubernetes, highlighting how Podman and Podman Desktop's perks and security advantages reduce discrepancies and enable predictability in your deployments. You'll also learn how you can benefit from Podman Desktop to streamline your container development processes!
Let’s delve into the practical application of Kubernetes, DevOps, and GitOps, demonstrating how these technologies can be integrated to simulate customer environments effectively. Our focus is on creating an autonomous system that leverages Kubernetes tooling and product deployment, thereby enhancing efficiency and reliability. The purpose of this system is to test multiple systems together and observe their behavior at scale.
We will discuss our approach to collecting metrics and monitoring information, crucial for maintaining system health and performance. We will also showcase how we deploy applications using ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes.
Our system operates autonomously, reducing manual intervention and increasing productivity. This presentation will provide insights into how such a system can be implemented, the challenges encountered, and the solutions devised.
Join us as we explore the future of DevOps and GitOps with Kubernetes, and discover how these technologies can revolutionize your approach to software development and deployment. We look forward to sharing our experiences and learning from yours. See you there!
Ever wondered what Podman is doing to enable network connectivity for a container? Have you heard of netavark, aardvark-dns, pasta or slirp4netns but have no idea what they are? Not sure which network option is right for your use case? Then this talk has you covered.
I will cover the basics of Podman's Networking architecture, you will learn about:
- container networking basics
- netavark and aardvark-dns
- pasta and slirp4netns
- differences between rootful and rootless networking
- common networking issues that users may face when using Podman
I will show with practical examples how users can make use of the different networking functionality offered by Podman.
You've probably heard of security certificates (Common Criteria, FIPS) – they are supposed to certify our software/hardware is secure. But how many products are certified? How long does the certification take? Which provider is the best? What does our competition do? You'd be surprised, but even the engineers in compliance don't know! The single comprehensive database with metadata... well... did not exist :-/.
The talk will introduce sec-certs, a tool for semi-automated analysis of the certificate dataset. It is created by automatically downloading and processing all available metadata and PDFs and cross-referencing them together. This makes it possible to gain data-backed business insights on the certificates, labs, processes and the whole certification ecosystem that were not previously available. And it's all open source as we know it: the whole dataset, tool sources and research outputs are public at sec-certs.org.
This project is a research cooperation between Red Hat, Masaryk University and Brno University of Technology, co-funded by the European Union under the CHESS project (ID 101087529).
In the ever-evolving world of Continuous Kernel Integration (CKI), maintaining the highest quality standards is not just an objective; it's a necessity. At CKI, we've embarked on a strategic journey to redefine how we manage and monitor our testing processes.
Through this development work, we are bringing together critical metrics and insights to better understand our CI system's utilization.
Join us as we explore how these changes are setting a new benchmark for project quality and efficiency, demonstrating our unwavering commitment to not just meeting but exceeding the expectations of our users.
We're diving into a super cool way to get things done called backward planning. It's like starting at the end of a story and working your way back to the beginning.
Sounds interesting, right? This isn't your usual step-by-step plan. Instead, it's about figuring out your final goal first and then mapping out how to get there. It's a game-changer for anything you're planning, be it a big project or even just sorting out your career path.
Stick around, and I'll show you how this trick can make your life a lot easier and keep you on track to reach those goals.
Whether it's pods, nodes, or something entirely new, let's gather to talk about the state of the art with autoscaling in Kubernetes. This meetup is focused on discussing autoscaling technology and projects within the Kubernetes community. We will gather topics on the day of the meetup and then have discussions based on the desires of the group. Topics for discussion might include:
- Is Karpenter better than the Cluster Autoscaler?
- What is the status of the multi-dimensional pod autoscaler enhancement?
- Will we see a predictive AI-based autoscaler in the near future?
This is a BOF where we discuss all that is new in the container world. Containerized OS, Bootc, Podman, Podman Desktop, Buildah, CRI-O ...
Join us as we delve into the fascinating world of deploying Artificial Intelligence (AI) and Machine Learning (ML) models in diverse edge deployment scenarios. This talk is designed for technology enthusiasts, developers, and industry professionals seeking insights into running AI/ML models efficiently at the edge. We will compare the components used in traditional cloud-based Kubernetes distributions with lightweight Kubernetes distributions optimized for edge devices such as MicroShift. We'll explore crucial factors like power consumption, model size, and performance, shedding light on the considerations necessary for successful edge deployments. Additionally, we'll present a practical example of serving multiple models and discuss strategies to minimize inference process switching time in time-sensitive situations. Learn how open source components can empower you to navigate the challenges of running AI and ML models at the edge efficiently.
One of the main benefits and selling points of serverless solutions like Knative is that it saves CPU cycles and RAM consumption (and by extension, money) by only using resources when they are actually needed. Because of that, we would semi-automatically infer that using a serverless approach when architecting our applications is the way to go if we want to be efficient and do our planet a favor by saving energy.
But wouldn’t it be nice to, actually, have some data to support this statement? Until very recently, we could only guess. But technology is advancing fast and now we have tools to observe and test this hypothesis.
In this talk we will show if, and if so how much, serverless can save not only in computing power but real energy as well. We will do this by actually measuring energy consumption of nodes and workloads with Kepler and shed some light on this topic to figure out if our assumptions are true, or just a myth which needs to be busted.
In this session, we will demonstrate how easily data scientists and developers can productise their AI/ML models in a cost-effective and agile way using Open Source projects such as KServe, Codeflare or OpenDataHub, accelerating AI/ML adoption using multiple open source libraries and frameworks among other AI/ML suites, without worrying about the infrastructure or lock-in from public-cloud specific tools.
We will explore how OpenDataHub can offer organizations a way to rapidly adopt MLOps and deploy an integrated set of common open source and third-party tools to perform AI/ML modeling, all of that in a managed cloud service providing AI as a service.
Finally we will demonstrate how to train, deploy and operate an AI/ML model using some of the most popular libraries and frameworks.
Matrix almost needs no introduction for open source enthusiasts: aside from DevConf, many prominent communities (GNOME, KDE, Mozilla, to name a few) use this decentralised communication platform. The meetup welcomes both volunteers already involved in Matrix projects and people who simply want to learn more about Matrix.
Preliminary agenda (subject to change according to the audience desires):
* a quick state of the project overview
* ecosystem overview, prominent projects and personalities
* free mic for anyone who wants to give a quick pitch about their work, or call out a certain issue
* ask-me-anything and open-ended discussions of anything Matrix.
In recent times we have seen a number of improvements to various image building tools. We have osbuild, kiwi-ng, mkosi and lorax, each with a different configuration philosophy and language, build mechanism, features and possible outputs. It's fairly easy to do a superficial comparison that looks at the configuration format and the list of features, but it's much harder to get a good feeling for the implementation choices and details.
In this panel the developers from the different projects will discuss the strengths and weaknesses of the different projects, make comparisons, and answer questions from the audience.
Some important differences between the projects:
- an API for developers (or the lack thereof): Kiwi has one and considers it important; mkosi does not.
- a human readable image description. Mkosi uses ini-files, Kiwi uses xml/json/yaml, OSBuild defines the distributions in code, Lorax uses kickstart…
- different output formats, support for signing, file systems.
- unprivileged operation with no device access (via systemd-repart)
- support in build "orchestrators" like koji or OBS. Koji recently gained support for Kiwi and OSBuild, but doesn't support mkosi.
- support for reproducible builds
This game - workshop - will be a set of misbehaving Java programs, where attendees will be given runtime Java reverse-engineering tools and a quick introduction on how to use them.
The goal will be to hot-patch and fix those programs as quickly as possible, even in the most complicated cases (obfuscated, without debuginfo)... well... to fix them at all, if possible.
A prerequisite is to have at least JDK 11 and JDK 17 on your own laptop, where the game will be played.
How do you create products users love? How do you ensure new features are needed or desired? In this session, we'll explore the Continuous Discovery process and its pivotal role in shaping products within the Trusted Software Supply Chain portfolio. Attendees will gain insights into the Continuous Discovery process, learn how to integrate it into their workflow, and understand its synergy with lean and agile methodologies and CI/CD practices.
While touching upon the dynamic landscape of security in modern app development, we’ll address the primary concerns of our prospective users and emphasize the significance of User Experience Design and customer-centric product development. Exploring real examples of how user engagement can steer the course of development, we'll showcase practical strategies for incorporating user feedback into the design process to “de-risk” what we build and ensure we deliver value to our business and our customers.
The presentation will cater to all roles involved in the product development lifecycle, from Engineering to Product Management, Marketing, and naturally UX Design.
Dictionaries are a powerful data structure, but did you know that you can define them in Python in a very unique way? In this talk I'm going to explore Python Dictionaries and Dictionary Comprehensions in a very hands-on approach.
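For a taste of the topic, here is a small illustrative example (my own, not necessarily from the talk) contrasting a plain loop with a dictionary comprehension:

```python
words = ["fedora", "podman", "ansible"]

# Building a dict with a classic loop...
lengths = {}
for w in words:
    lengths[w] = len(w)

# ...versus a dictionary comprehension: one expression, same result.
lengths = {w: len(w) for w in words}

# Comprehensions can also filter and transform keys and values on the fly.
long_upper = {w.upper(): len(w) for w in words if len(w) > 6}

print(lengths)     # {'fedora': 6, 'podman': 6, 'ansible': 7}
print(long_upper)  # {'ANSIBLE': 7}
```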
Threat modeling is one of the most critical activities if you release any software to the web. There are numerous tools, books (one of each is mine), and tutorials on making it suitable. My talk has a different intent - it walks you through bad practices. How the modeling is wrong, and how bad actors can exploit that.
Here is an example:
Only one person in the company does threat modeling. On the surface, the "hero" approach might be a good use of someone's time, but in the end, the diversity of threat modeling attendees matters. I'll give you some statistics from an exercise where the group put their heads together to protect a beer tap and a dog.
I'll also focus on actual use cases like this:
We do it once a year as a "team building exercise."
We need to know a threat model before we use all the automated/helping tools.
We know everything, and our model is the best.
I've survived two breaches, and we could have prevented them using proper threat modeling.
The talk is interactive, full of fun stories and a bit of metal music. This talk aims to engage with anyone in the Secure Software development chain and encourage you to adapt your processes to secure your software by knowing and refusing those evil practices.
Toolbx is a tool for Linux, which allows the use of interactive command line environments for development and troubleshooting the host operating system, without having to install software on the host. It is built on top of Podman and other standard container technologies from OCI.
Toolbx is installed by default on Fedora CoreOS, Workstation and Silverblue, and is also used on various other distributions.
This talk will present some of the latest developments in the Toolbx project, and our plans for the future.
In the wild west of web application security, cookie-based authentication reigns supreme. But when it comes to taming this beast, developers face a showdown: Cypress, the UI automation gunslinger, or Rest Assured, the API sharpshooter or Postman, the user friendly explorer. All three boast impressive arsenals, but who truly rules the cookie kingdom?
Join me as I'll unravel the mysteries of cookie capture, session management, and automated login flows. Witness code-slinging demos as we crack open common authentication challenges encountered in real-world scenarios.
Uncover:
1. Speed vs. Stability: Can Cypress's blazing tests compete with Rest Assured's granular API control?
2. Integration & Versatility: Which tool plays nice with your existing framework and handles diverse authentication setups?
By the end of this session, you'll be equipped to declare your own champion in the battle for cookie-based authentication mastery. So saddle up, partners, and let's see who truly deserves the crown!
As AI becomes increasingly integrated into our daily lives, implementing robust testing practices is crucial to ensure these systems function as expected.
In this talk, we will examine the challenges of testing AI-powered applications and discuss ways to understand AI systems better, instead of viewing them as black boxes.
Using practical, real-world examples, we will explore the unique challenges posed by AI, such as clarity, fairness and robustness, and how these factors impact testing strategies. Additionally, we will examine the role of data in AI systems and highlight best practices for ensuring data quality and accuracy.
This talk is ideal for developers, testers, data scientists, and anyone involved in the development of AI-driven applications. Attendees will gain valuable insights into effectively testing AI solutions, enabling them to build trust and mitigate risks in these systems.
In the realm of enterprise knowledge management, content is always king, and customers’ access to it is paramount. In response to increasing demand from our customers for access to our knowledge content offline, we developed the Offline Customer Portal. Our single-container solution encapsulates the Red Hat knowledge base, product documentation, and critical security data, and is capable of running offline - even under a mountain. Join us on a journey through the intricacies of extracting and transforming sprawling enterprise content into a portable, self-contained solution using cutting-edge technologies like GraphQL, Rust, Solr, and Podman.
Until recently, Kubernetes did not support swap in a usable way. This was due to discontinued development that kept the NodeSwap feature in an alpha state for an extended period. However, with collaborative efforts from multiple developers, I successfully continued the development and elevated swap support to full Beta in Kubernetes version 1.30. Currently, swap on Kubernetes is fully supported, bringing cgroup v2 support, newly introduced "swap behaviors" and a strong emphasis on system stability.
In this talk I will share the journey of bringing swap support to Kubernetes. This will include a technical overview, insights into the design choices we made and the challenges we encountered along the way, alongside use cases for swap. In addition, we'll discuss our future plans and open questions that we still face.
By the end of this talk, I hope you’ll have a deeper understanding of Kubernetes’ swap feature and feel equipped to contribute to its ongoing development!
Agile development methodologies are widely used in software development for their flexibility and lightweight processes. Hardware projects often cannot use agile methodologies to their fullest capacity and may lean toward more traditional project management approaches. It gets very challenging when software developed in an agile way needs to be tightly integrated with hardware or depends on hardware features.
We will look at some of the issues caused by different dynamics of software and hardware development cycles, discuss possible approaches to solving them, and I will also share some lessons I learned from my experience of driving development of software and hardware for electron microscopes.
Nowadays it is very common to use tools like podman or docker to handle container images.
These tools bring a lot of convenience to developers since they hide the complexity behind the underlying structure of a container image.
One of these complexities is how to guarantee that a container image is going to run on different operating systems and processor architectures. The common solution for this scenario is what is called an OCI Index / Manifest List, which is implemented in almost all popular container image tools available today.
During this presentation, we will give you an overview of how multi-platform container images work and show an example developed in golang. After this session you will have a better understanding of what is going on behind the scenes when an image is pulled and run on different platforms.
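To give a rough idea of the mechanics (a trimmed-down, hypothetical index; the digests are invented), an OCI image index is a small JSON document listing one manifest per platform, and the container engine simply picks the entry matching the host:

```python
import platform

# A trimmed-down OCI image index (a.k.a. manifest list); digests invented.
image_index = {
    "mediaType": "application/vnd.oci.image.index.v1+json",
    "manifests": [
        {"digest": "sha256:aaa...", "platform": {"os": "linux", "architecture": "amd64"}},
        {"digest": "sha256:bbb...", "platform": {"os": "linux", "architecture": "arm64"}},
    ],
}

# Map Python's machine names onto OCI architecture names.
OCI_ARCH = {"x86_64": "amd64", "aarch64": "arm64"}

def pick_manifest(index, os_name="linux"):
    # Roughly what a container engine does when pulling a multi-platform
    # image: match the host platform against each manifest's platform.
    arch = OCI_ARCH.get(platform.machine(), platform.machine())
    for m in index["manifests"]:
        p = m["platform"]
        if p["os"] == os_name and p["architecture"] == arch:
            return m["digest"]
    raise LookupError(f"no manifest for {os_name}/{arch}")

print(pick_manifest(image_index))
```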
How do you spread basic IT knowledge around the world? Find a community to volunteer in! I'm a lecturer at the Czechitas organization and in the PyLadies community, and I would like to share what it's like to teach Python programming and the Linux command line.
Address:
Koupaliště Dobrák
Dobrovského 96/29
612 00 Brno
We're excited to announce that there will be a social event held during the conference. We invite all speakers, volunteers and early bird in-person ticket holders; extra tickets will be available on a first-come, first-served basis during the conference on Friday.
The social event is held outdoors near the swimming pool. Please note, we do not plan to use the swimming pool for a variety of reasons. We chose the place because it is located conveniently and there is a beautiful outdoor deck that can create the right atmosphere for us. We will have access to beach volleyball courts and we have prepared other outdoor games for your enjoyment during the social event.
Keeping your OpenShift clusters up to date is the cornerstone of sustainable and secure service management. But once your fleet of clusters and portfolio of services grows, coordinating upgrades becomes a challenge on its own.
Learn how Red Hat Service Delivery SREs upgrade their fleet of clusters completely hands off and in alignment with the needs of Red Hat managed services.
Spring AI is a project aiming to simplify the development of applications that integrate artificial intelligence (AI) functionalities. It achieves this by providing abstractions and interfaces that manage the complexity of AI models, allowing developers to focus on the application logic without coping with unnecessary complexity. It focuses on the core belief that the next wave of Generative AI applications will be ubiquitous across many programming languages, including Java.
The presentation provides an excellent introduction to Spring AI, its use, and the typical use cases it can address. Several code samples will be presented and explained, along with several live demos.
Agile transformation is hard. Any change or transformation is hard! How do we spark engagement in a 5000+ people organization?
Culture is not built, it emerges. How can we support an environment where people have the capability to let a culture of continuous improvement emerge? How do we shape our vision into something that everyone adheres to?
Being an open company, we set out to co-create our vision statement using the Open Decision Framework. I will share our story on how we collaborated to create a vision statement and the first three pillars to prepare us for the coming change journey and set us up for success.
Our OpenShift CI environment has experienced explosive growth in recent years. It not only serves OpenShift itself but also incubates open-source projects and handles around 30% of workloads unrelated to OpenShift directly. While our initial policy was permissive, the financial burden posed by newcomers prompted us to implement solutions for accurate cost attribution. This talk delves into our journey, showcasing how we leverage concepts like cluster profiles and dedicated cluster pools, both utilizing user-provided cloud accounts. We will underline the significance of in-depth spend analysis via metrics to achieve cost clarity. Additionally, we will explore the user perspective, highlighting how facing their allocated costs empowered users to make informed decisions for more efficient resource management.
One of the most important use cases in platform engineering is the provisioning of an environment. In this context, environment represents everything that developers need to run their applications.
During the talk we will discuss the different parts and patterns of the provisioning process using a GitOps approach in a multicluster/multitenant environment.
Gickup offers a solution for users seeking to backup their repositories across various Git hosting platforms effortlessly. By configuring Gickup just once, users can automate the backup process, ensuring the security and preservation of their valuable code assets. This tool caters to the needs of individuals and organizations alike, providing a seamless and reliable backup solution for Git repositories.
Bring your children to the conference! We will show them how to code.
Anyone from 6 years to 99 years. No previous experience needed.
We are working on a new scheme to replace the GRUB bootloader with a fast, secure, Linux-based, user-space solution: nmbl (for no more boot loader).
Most people are familiar with GRUB, a powerful, flexible, fully-featured bootloader that is used on multiple architectures (x86_64, aarch64, ppc64le OpenFirmware). Although GRUB is quite versatile and capable, its features create complexity that is difficult to maintain, and that both duplicate and lag behind the Linux kernel while also creating numerous security holes. On the other hand, the Linux kernel, which has a large developer base, benefits from fast feature development, quick responses to vulnerabilities and greater overall scrutiny.
We (Red Hat boot loader engineering) will present our solution to this problem, which is to use the Linux kernel as its own bootloader. Loaded by the EFI stub on UEFI and packed into a unified kernel image (UKI), the kernel, initramfs, and kernel command line contain everything they need to reach the final boot target. All necessary drivers, filesystem support, and networking are already built in, and code duplication is avoided.
We will showcase the work done so far, and ask you for your feedback and use cases.
RLBot is a framework for making Rocket League bots; it has also fostered a massive community of 20k+ users and developers with a passion for toying with bots in Rocket League. In this talk I'll present a brief showcase of what this is, some interesting facts, and the state of the art of current bots in the community. Rocket League as an environment is very simple for humans to understand, but what might seem simple for us is exponentially difficult for AI. Making bots for Rocket League can really elevate your understanding of math, 3D space, hard-coded AI, and Machine Learning, while having lots of fun and maybe competing in the community tournaments to become the next RLBot Champion!
KubeArchive is a system that archives Kubernetes objects to permanent storage, which can then be retrieved through an API. The main goal is to reduce the number of "stale" objects in the Kubernetes cluster, so performance is not impacted. This project was inspired by Tekton Results.
KubeArchive consists of the following components:
- An operator that manages the system as a whole.
- A resource watcher that sends resources when they change.
- An archive service that receives resources to archive, and stores them in a database.
- A REST API, Kubernetes-like, that reads the database and returns archived resources.
- Integration with Kubernetes authentication and authorization (TokenReview and SubjectAccessReview)
- (future) Integration with arbitrary systems to retrieve data related to archived resources.
KubeArchive is envisioned as a Kubernetes data plane archiver that lets users store, access and retrieve cluster resource definitions at any given moment.
For more information visit: https://github.com/kubearchive
Abstract
In this workshop, we'll build a distributed application that interacts with Apache Zookeeper to elect one leader among all its nodes. Our application will guarantee the presence of at most one leader at all times, even if some of the nodes are down. We'll use Go to write the application, and we'll run it as a cluster using Docker.
Objectives
- Understanding the what(s) and why(s) of leader-election.
- Overview of Zookeeper and its Sequential Ephemeral Z-Nodes.
- Deploying a Zookeeper cluster locally.
- Hands-on implementation of leader-election using Zookeeper in Go.
- Testing the application.
Requirements
- Participants should already have Docker and Go installed.
- It is recommended to have the zookeeper:3.9 container image downloaded in advance to save time and avoid internet problems.
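The workshop code will be in Go, but the recipe itself is language-agnostic. As a rough sketch of the core idea (shown here in Python with the kazoo Zookeeper client; the path and host are hypothetical): every candidate creates a sequential ephemeral z-node, and whoever holds the lowest sequence number is the leader.

```python
from kazoo.client import KazooClient

ELECTION_PATH = "/myapp/election"  # hypothetical z-node path

zk = KazooClient(hosts="127.0.0.1:2181")
zk.start()
zk.ensure_path(ELECTION_PATH)

# A sequential *ephemeral* z-node vanishes automatically if this process
# dies, which is what releases leadership without any explicit cleanup.
me = zk.create(ELECTION_PATH + "/candidate-", ephemeral=True, sequence=True)

def am_i_leader():
    # The candidate with the lowest sequence number is the leader.
    children = sorted(zk.get_children(ELECTION_PATH))
    return me.endswith(children[0])

if am_i_leader():
    print("I am the leader")
else:
    # A full implementation watches the z-node immediately ahead of its
    # own and re-runs the check when that node disappears.
    print("Following; watching the candidate ahead of me")
```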
Creating and deploying Helm charts for Kubernetes workloads is straightforward, yet these charts often fall short of delivering customized controllers, native metrics, and the necessary scalability for handling complex business logic. Operator SDK provides guidelines to build advanced operators through five capability levels. In this workshop, we will be building a demo level 5 Operator, working our way up the implementation of each capability level one by one, guiding the attendees toward achieving a higher level of maturity for their applications. Participants will leave with a functional demo that shows a straightforward, lightweight, and effective operator development process, covering basic installation, metrics enablement, and finishing with an auto-pilot implementation.
Discover how the fundamental principles of SOLID—Single Responsibility, Open-Closed, Liskov Substitution, Interface Segregation, and Dependency Inversion—can significantly elevate your Ansible role and playbook development. Join us for a session that delves into the practical application of SOLID principles within the realm of Ansible content creation.
Through insightful examples drawn from real-world experiences, this presentation will illuminate the precise moments and methods for applying SOLID principles to enhance Ansible content. Attendees will glean actionable strategies for efficiently refining their Ansible content by applying these principles.
Geared towards both seasoned Ansible practitioners and newcomers, this session will feature accessible, yet impactful examples that transcend expertise levels. Whether you're seeking quick ways to improve your Ansible content or aiming to optimize your development workflow, this presentation promises practical insights and actionable takeaways.
Target Audience: Individuals interested in optimizing their Ansible content development process, whether they possess familiarity or are new to Ansible. The presentation will offer generic examples that cater to a diverse audience, requiring no expert-level knowledge of Ansible.
In today's fast-paced business environment, and especially with the advent of machine learning (ML), organizations are seeking ways to derive better insights from their data as quickly as possible. However, implementing a complete ML pipeline can be quite challenging. It’s even harder if you want to process newly arrived data immediately or you have a legacy system which is not easy to connect with your modern infrastructure. Change Data Capture (CDC) has emerged as a technology for delivering real-time data changes from various sources, especially databases. In this talk we will introduce Debezium, a leading open source framework for CDC. We will discuss how it can be leveraged for ingesting data from various databases into ML frameworks like TensorFlow and what the pitfalls are if you go this route. We will also briefly discuss possible future improvements in this area, especially possible integration with emerging ML feature store technology.
The talk will be accompanied by a demo in which the well-known example of recognizing handwritten digits, using a TensorFlow model and images stored in a Postgres database, will be shown. All in real time.
Attendees will gain an understanding of how Debezium CDC works, how it can help them ingest data from a source database into an ML framework in real time, and what the possible challenges of this approach are.
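For a flavour of what the consuming side can look like (an illustrative sketch, not the demo code; the topic name and fields are hypothetical), a Debezium change event wraps the new row state in its payload, which a consumer can hand straight to an ML pipeline:

```python
import json
from kafka import KafkaConsumer  # pip install kafka-python

# Debezium publishes one topic per captured table: <server>.<schema>.<table>.
consumer = KafkaConsumer(
    "dbserver1.public.digits",           # hypothetical topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    payload = message.value.get("payload", {})
    # 'op' is c/u/d/r (create, update, delete, snapshot read); 'after'
    # holds the new row state for inserts and snapshot reads.
    if payload.get("op") in ("c", "r"):
        row = payload["after"]
        # Here the freshly arrived row (e.g. image bytes plus label) would
        # be decoded and fed to the TensorFlow model for training/inference.
        print("new sample arrived:", row)
```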
Apache Camel is the leading open-source integration framework that simplifies the integration of various systems and applications. There exists a comprehensive set of Tooling specifically designed to empower Camel developers in their work with Apache Camel within VS Code. These tools facilitate a seamless and efficient development experience, offering robust support and functionalities tailored to the needs of Camel developers.
In my session I would like to rely on the Extension Pack for Apache Camel which contains a set of specific extensions for Camel but also leverages the VS Code ecosystem.
The mission is to paint a picture of how effortless managing the entire Camel development lifecycle within VS Code can be: from initializing a Camel route file, to experiencing the ease of running integrations locally, effortlessly making edits, and witnessing automatic reloads for real-time updates. As part of the journey, we will utilise the graphical editor for Apache Camel within VS Code to develop, edit and improve Camel routes. We will present debugging with ease, inspecting variables, and witnessing instant changes. Finally, we will explore a range of deployment options directly from the VS Code environment, ensuring a smooth transition from local development to deployment stages.
We have various motivations: Some seek control over their digital privacy, some enjoy data hoarding, others want to host applications for their communities and some just like exploring the latest and greatest DevOps techniques. What unites us is our appreciation for silent humming of home-server fans in a closet, on a shelf, or in a random corner of our flat, buried under meters of cables and dust.
Let's meet in one room and share our self-hosted/homelab stories, experience and tips and tricks. It doesn't matter if you have one lonely Raspberry Pi, or a dedicated room with multiple racks, just come and join in the fun!
This year, the meetup will be organized as a series of several lightning talks. Feel free to join us at any time! :)
CNI prides itself on doing just a few things right. It pretty much consists of a specification, and libraries for writing plugins to configure network interfaces in Linux containers. A lot of things are implementation specific - meaning, each plugin has a different understanding of what a configured interface should look like.
It does have some real shortcomings; how do you know if the plugin is ready to actually configure an interface? How do you tear down the interface’s allocated resources when the container is deleted? IP Address Management (IPAM) resources are a good example: if you don’t release them properly you can leave an address stranded like Robinson Crusoe.
Up to now, that garbage collection has been up to the plugin. So your teeny tiny CNI plugin - which was thought of as a single binary on the host file-system - is now bloated into a daemon process providing a reconcile cycle to tear down resources depending on its use cases. Likewise when you need to know if the plugin is ready to do its thing.
Fear not, young grasshopper! The CNI maintainers have got your back and added two new verbs to the CNI spec (and libraries): STATUS - which signals if the plugin is ready - and GC - which helps to garbage collect the resources allocated by the plugin.
Join us in this talk where we showcase these two new verbs, providing a demo, and examples of plugin implementations of these new features.
There have been several attempts to provide multiple application streams for Fedora, such as Software Collections, which never became widely used in Fedora, and modules, which were broadly used until Fedora 39. However, modules brought multiple issues for package maintainers and faced several challenges on the user side. This situation left many confused about multiple stream provisioning.
With Fedora 40, modules are being discouraged. But… What will come after them?
Come and join us for a talk where we describe the innovative approach we've chosen for database components in the post-modular world. Be ready for a live demo at the end of the talk!
In a world obsessed with features and deadlines, software development often loses sight of the true goal: impact. Outcome-Driven Delivery (ODD) flips the script by prioritizing impactful change and measurable value.
This talk delves into the transformative power of ODD, exploring its core principles, benefits, and challenges. We'll unveil practical strategies to empower your team to:
- Define clear, measurable outcomes: Move beyond vague goals and prioritize what truly matters to users and the business.
- Embrace continuous learning: Foster a culture of experimentation, data-driven decision making, and rapid adaptation.
- Build cross-functional collaboration: Break down silos and cultivate ownership across the entire development chain.
Join us to explore the exciting world of Outcome-Driven Delivery and discover how it can revolutionize your team. By prioritizing outcomes, you can unlock agility, deliver true value, and empower your team to thrive in the ever-evolving world of software development.
Btrfs has been the default filesystem since Fedora 33; Cockpit, however, did not support it until January 31st of this year. What sets Btrfs apart from other filesystems, and what were the challenges in implementing support for it?
This talk will give a short overview of the work that has been done, the design choices and (hopefully) a quick demo.
RISC-V is an open standard instruction set architecture that has the potential to be widely used as an alternative to existing ARM and x86 solutions. For software developers it's beneficial to keep up with this technology and test early. At the moment it might be challenging to get real RISC-V hardware; however, emulators such as QEMU offer an alternative. This short talk presents the journey and challenges we faced when testing our application on RISC-V Fedora in a VM and in a podman container.
Are you an SRE planning a Kubernetes upgrade but unsure about your workload's compatibility? Join us for a presentation featuring an open-source Ansible role designed to automate the detection of to-be-removed APIs in your workload. Learn to use this role on a cluster with your deployed workload, to generate JUnit output that indicates compatibility with different Kubernetes versions: 'workload is compatible with x.y' or 'workload is incompatible with x.z because the following APIs used will be removed in x.z.'
This development was done for a practical use case: upgrading from OpenShift 4.11 to OpenShift 4.12 while ensuring compatibility with Keysight Open RAN SIM CE, a complex telco workload with 20+ operators. We’ll review existing approaches to detect soon-to-be-removed APIs and their limitations, and highlight the benefits of the APIRequestCount API we chose. Short demos will showcase our approach with real-life examples.
"For 25 years RHEL development happened in Bugzilla. That all changed on September 4th, 2023 when all 1500 engineers and their support teams switched Jira. Here's how we did it and how it paves the way for Agile Innovation."
In 2022, a team was formed to start thinking about doing the unthinkable–completely changing the development tracking process for the over 1500 engineers and their support teams building Red Hat’s flagship product, RHEL. The team was tasked with outlining existing processes and mapping them into new tooling.
It wasn’t just about doing what the other tooling did, but taking advantage of new functionality. But, even with the bright and shiny new features, this was no easy feat. It was a task that required coordination, collaboration, and buy-in from all levels within the organization. This talk will take the audience through the process of “the great migration” of RHEL into Jira as the speakers share insights and best practices for managing communication, process improvement, and change for a product team with 20+ years of process ingrained in their everyday work. This talk uses the migration of RHEL into Jira as a proxy for change management best practices as we take the insights forward to find novel ways to implement complex changes across the Red Hat Portfolio.
Audience
This talk is targeted at System Administrators and Site Reliability Engineers interested in learning about how to best make sense of the Prometheus metrics their system exposes. If you know PromQL, but the queries behind your dashboards are still a mystery to you, you are not alone. This talk will show how to get information out of your metrics to maximize the insights and make data-based decisions.
Outline
Creating new metrics and collecting them with Prometheus is easier today than it was ever before. Site Reliability Engineers and System Administrators have all the data at hand they need to make the right, data-based decisions. But how?
Making sense of all that information is still a challenge. Crafting the right PromQL query to answer your question and manifesting it in a Grafana dashboard is a complex and time-consuming task. Not to mention understanding that query when you need to change it a few weeks later.
In this session, you will see different approaches to making sense of the Prometheus metrics exposed by a software deployment: starting from the default Prometheus UI, via PromLens, an improved, open source query-building UI, all the way to an experiment on transforming Prometheus metrics into a data warehouse for improved data exploration and visualization. Data analysts have used business intelligence software for decades. What can we learn from these systems to discover knowledge in the ocean of metrics and make better decisions for our infrastructure?
Key Takeaways
During this talk, attendees will have learned (1) how to best explore and query the available metrics in their environment, (2) which tools are available today, and (3) how infrastructure intelligence can leverage data warehouse concepts for improved knowledge discovery and decision making.
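As one concrete flavour of that warehouse-style exploration (an illustrative sketch, not part of the session; the URL, metric name and label are assumptions), metrics can be pulled over the Prometheus HTTP API and flattened into a DataFrame with the prometheus-api-client Python package:

```python
from datetime import datetime, timedelta
from prometheus_api_client import PrometheusConnect, MetricRangeDataFrame

prom = PrometheusConnect(url="http://localhost:9090", disable_ssl=True)

# Pull the last hour of a counter as raw range data...
end = datetime.now()
metric_data = prom.get_metric_range_data(
    "container_cpu_usage_seconds_total",   # assumed metric name
    start_time=end - timedelta(hours=1),
    end_time=end,
)

# ...and flatten it into a pandas DataFrame for warehouse-style slicing,
# grouping and joining, just like any other tabular dataset.
df = MetricRangeDataFrame(metric_data)
print(df.groupby("container")["value"].mean())  # assumes a 'container' label
```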
Most people have the impression that the Linux bridge is just a simple bridged network, which can only be used for simple interface forwarding. In fact, after years of development, the Linux bridge has gained many new features, such as MSTP, VLAN filtering, IGMPv3/MLDv2, switchdev and so on. These new features enable the Linux bridge to match the abilities of Layer 3 switches. In this lecture we will introduce how these features work, how to use them, and the benefits of using them.
Attendees need to know basic networking concepts.
System performance analysis is the process of gaining a deeper understanding of those aspects of a computing system that affect its ability to perform its many functions as efficiently as possible. This is a complex undertaking requiring a great deal of experience, expertise and specialized knowledge of the system - its hardware, software and environment - in order to understand and then take action to improve performance.
This talk explores recent research in automated system performance analysis using a dynamic ensemble approach, statistical methods, causal inference, anomaly detection and explainable AI techniques. This new approach allows a collaborating human analyst to rapidly distill vast amounts of data, gaining understanding and insight into the major contributors to performance in real time, leading to improved system understanding for both optimization and root cause analysis tasks.
For years, Apache Kafka relied on Apache ZooKeeper for maintaining its metadata and coordination. But that is coming to an end. After a lot of work in the Apache Kafka community, ZooKeeper is going away from Apache Kafka and is being replaced with Kafka's own Raft-inspired implementation called KRaft. This is a major architecture change for all Kafka users, including those running Kafka on Kubernetes. It also affects projects such as Strimzi that provide tooling for running Apache Kafka on Kubernetes. So, how does it work? What are the advantages? What does this change mean for existing ZooKeeper-based Kafka clusters? What are the main challenges and limitations when using KRaft on Kubernetes? What are the changes we had to make in the Strimzi project to make it ready for KRaft? All of this will be answered in this talk, including a short demo of what Strimzi support for KRaft looks like.
Throughout any project or business you will find people who 'keep the ball moving', but those people are usually never 'managers'. They are sometimes called Leads, Co-Ordinators, Owners or Architects, and what these people all have in common is the ability to lead through influence. This short talk will touch on some examples of this behaviour in many settings - from office coordination to event management to releasing a distribution - and hopefully serve as a message to people in the audience who may already be doing this work that it's valuable and they are important, and share ideas on how to be successful in managing when you're not a manager!
We've come across many interesting and creative ways people use Ansible in their personal and professional lives. Come and share your stories and hear from the community! This is a session where everyone can share with, learn from, and get inspired by one another.
Node.js applications are fast becoming more frequent and more complex in Kubernetes deployments. As a result, developers are increasingly concerned with generating useful logs, gathering metrics, and thus maintaining the application with precise data about its performance baseline.
Therefore, to help developers enhance this instrumenting approach, the OpenTelemetry project leverages analytical capabilities by expanding the possibilities to collect and export telemetry data from Node.js applications.
In this talk, I will share an easy way to auto-instrument a Node.js application using the OpenTelemetry Operator.
Since the beginning of time, declarative APIs have been driving everything that can happen inside an OpenShift cluster. Predefined CRDs, operators defining custom CRDs, everything is about declarative APIs. Write your YAML once, deploy it, forget it. That’s how you create a cluster, that’s how you deploy your workload.
But is it, for real, as simple as it sounds? How do you bring declarativeness to the imperative world? In the current state of things, host networking is one huge imperative nightmare. So how do you happily marry old-school NetworkManager and the brand new Kubernetes API? In this session we will demonstrate how NMState provides you with a declarative network API, finally allowing you to manage host networking in a declarative manner.
To make it more entertaining, we will show you how the OpenShift cluster with NMState Operator manages networking on the nodes it deploys. It may sound like a chicken and egg situation, but trust us, it is not. Last but not least, we show how it protects itself from applying destructive network changes potentially taking your cluster down.
Join us and create the most complex network topologies on the fly.
With the advent of large language models, looking up human-made software documentation might seem like an antiquated and inefficient way of getting information. But is technical writing bound to be relegated to AI in the near future?
In this talk, we’ll be covering the advantages and disadvantages of using large language models (LLMs) for creating software documentation. In the second part of the session, we’ll recommend some good practices for getting the most from your AI tech writing companion. Finally, we will also invite you to share your experience with AI-made docs, and to discuss your success stories, as well as pitfalls to be avoided.
What are some misconceptions about leadership, and what is it actually about? How does it translate into different professional domains such as tech leads or people managers? And what leadership skills should everyone possess, whether you aspire to lead or simply aim to enhance your professional toolkit?
Join us in a panel discussion among a manager, developer, and QE leads, where we'll shed some light on how to become a successful leader, and why "You don't need people skills in IT" is a dirty lie.
In this workshop you will learn how to set up a Python data analysis environment using containers managed by Podman. Once we have built our safe and repeatable Python environment, we will learn how to do data analysis using Jupyter Notebooks and pandas, an easy-to-use open source data analysis and manipulation tool. In this workshop you will learn how to do the following analysis:
- Visualize your data
- Describe your data
- Find trends in data using linear regression
You will need a laptop of some type that can install Podman and run commands from your terminal. If you can, please install Podman before the session - instructions here: https://podman.io/docs/installation. After this workshop you will understand how you can take advantage of containers and the power of pandas; a minimal sketch of this kind of analysis follows below.
Slide links - https://tinyurl.com/42f5u9m4 - https://docs.google.com/presentation/d/19O_4Hr8TxGUzKIP_dydXJEhtxP6f4uHSUb40m6AZ2zw/edit#slide=id.g2e59fe12af2_0_0
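For a taste of the kind of analysis listed above, here is a minimal sketch with made-up data (plotting assumes matplotlib is installed alongside pandas):

```python
import numpy as np
import pandas as pd

# Made-up data: ten days of temperature readings.
df = pd.DataFrame({
    "day": range(10),
    "temp": [14.1, 14.9, 15.2, 16.0, 16.3, 17.1, 17.8, 18.2, 19.0, 19.4],
})

# Describe your data: count, mean, std and quartiles in one call.
print(df["temp"].describe())

# Find trends with a simple least-squares linear regression.
slope, intercept = np.polyfit(df["day"], df["temp"], deg=1)
print(f"temperature rises by about {slope:.2f} degrees per day")

# Visualize your data.
df.plot(x="day", y="temp", kind="scatter")
```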
Security-Enhanced Linux can be used as an additional layer of security for container workloads. But what does it actually protect and how? Join us for an in-depth exploration of SELinux's role in container and Kubernetes security. We'll begin with a brief SELinux overview, however, we will dive deep into containers and Kubernetes quickly. In particular, we will cover some challenges with SELinux, especially around volume relabeling and how to avoid them, and the future of SELinux and Kubernetes integration.
CI/CD is the industry standard for software delivery pipelines and both CI and CD are employed together and sometimes even used interchangeably. However, both are very different concepts which should not necessarily be tied together. In this talk, I'm going to define what Continuous Integration (CI) is, what Continuous Delivery (CD) and Continuous Deployment (CD) are and how they are different from each other. Then, I'm going to dig deep into how effective CI/CD pipelines have a clear divide between CI and CD and why this is a good pattern to follow.
Have you ever needed to run a workshop to show what you've worked on or what the company you work for produces, and found that it's very complicated? Have you ever had to deal with the complexity of learners' heterogeneous environments? In other words: you wanted to show how your project works to 20 people, but some have Windows, some Linux, others macOS; some have constrained laptops, and some don't have enough memory. You probably know what I'm talking about. For this problem there are cloud-based workshop platforms that help you deliver a hands-on workshop experience via a web browser. But many of these cost a lot of money.
In this workshop, I'm going to show Educates, an open source hands-on workshop platform that can help you deliver workshops and demos in a very easy, reproducible, and cost-effective way. And I'm going to demonstrate it using an Educates workshop, which means Inception.
I encourage you to spend this short time with me. It'll change your life.
The email workflow brought the Linux kernel to life and saw it through immense growth and into widespread popularity. However, it seems it's reaching its limits, prompting Linus Torvalds to say we need to "find ways to get away from the email patch model".
While the community is researching and arguing about alternatives, we'll take a look at just one of them – GitLab – and how it already helps some maintainers and developers in or near the kernel community to smoothly integrate testing into their workflow.
We'll talk about our use of GitLab at Red Hat's CKI, explore DRM CI, Mesa3D CI, Media-CI, and the proposal we worked on at KernelCI to introduce the standard GitLab CI pipeline into the kernel. We'll look at what they're already doing, what they can enable in the future, and which part they can play in replacing the email workflow.
Although FreeDOS has been around for quite a while (since 1994!), I didn't notice it until recently. What is the purpose of creating an Open Source DOS-compatible operating system? What could be the use case in 2024? Is there still active development of the project? How can it be run using QEMU on current computers? And what are the challenges here?
If your software project offers any interface towards other consumers (library, CLI, or others), then regressions are especially awkward: when you get the bug reports, the damage happened weeks or months ago, the root cause is often not obvious, the context is gone from your head, the new version has already been released upstream and into distributions, and your consumers have to waste a lot of time and add bad workarounds.
A better approach is to run your consumer’s tests right in your upstream project’s pull request (“reverse dependency testing”). This dramatically tightens the feedback cycle, makes regressions much easier to debug, avoids broken releases, and lets you do changes with much more confidence.
This can be done with tmt, packit and COPR services, and thus without having to maintain any custom infrastructure. Cockpit has successfully practiced this approach with SELinux, podman, and a few other projects for some months now. Let me convince you!
In this talk, we will dive into the agile techniques employed in our project. We had the pleasure of starting from scratch and growing into a project used by hundreds every day. We will cover challenges we encountered along the way, such as remote work, planning, tracking tasks, automating tedious actions, team roles, making the biggest impact, and more. We will touch on some methodologies from Scrum and Kanban. The biggest highlight will be our system of rotating roles within the team: having designated individuals for deployment, communication with the community, or leading the Kanban process, and rotating these responsibilities each week. We have recently written a few blog posts about this process, which works really well for us, and will guide you through our journey of getting to this point. Let's share insights, lessons learned, and practices for achieving effective collaboration within development teams! If you are collaborating with others on a project, you may learn something here!
Production Engineers (Meta's equivalent, more or less, to Google's Site Reliability Engineers) tend to be a minority in engineering organizations, and when, like the presenter, they focus on distribution packaging, they tend to have to maintain a lot of packages that are directly or indirectly requested by others, whether fellow engineers or community members.
How does this scale? This talk will try to demonstrate how the PE mindset can be applied to RPM packaging, and present some useful tools and frameworks that can be leveraged to make packaging more reliable and scalable.
Teaching is hard, teaching kids is harder, teaching your own kids and surviving is a miracle. But it is satisfying!
This is a family story about a parent and his two children sitting down for an hour a day to code. We will explore an opinionated path to teaching kids logic and coding. How to keep the kids interested, engaged and happy.
If you ever thought to pass your knowledge on to the next generation and want to hear some helpful tips, this talk is for you.
In today's data-driven world, fast and accurate access to data is essential. Debezium, a distributed platform for change data capture, is an open-source tool for extracting, transforming, and streaming data changes in real time. In this talk, we will discover how to harness the power of Change Data Capture in a Kubernetes environment.
Key insights:
- Understanding Change Data Capture: Explore the means of tracking and streaming dynamic database changes.
- Essentials of Kubernetes operators: How to simplify application deployments on Kubernetes.
- Step-by-Step Deployment: Technical walkthrough of deploying Debezium on Kubernetes.
Whether you're a data engineer, architect, or Kubernetes enthusiast, this session will equip you with the technical prowess to effortlessly employ Debezium's change data capture capabilities to serve the needs of your applications running in a Kubernetes environment.
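To make the walkthrough more concrete, here is a minimal sketch (an illustration under assumptions, not the session's actual steps) of registering a Debezium PostgreSQL connector through the Kafka Connect REST API, for example after port-forwarding the Connect service out of the cluster; the host names, credentials, and topic prefix are hypothetical:

```python
import json
import requests

# Hypothetical connector definition; all connection details are placeholders.
connector = {
    "name": "inventory-connector",
    "config": {
        "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
        "database.hostname": "postgres.default.svc",  # in-cluster service name
        "database.port": "5432",
        "database.user": "debezium",
        "database.password": "secret",
        "database.dbname": "inventory",
        "topic.prefix": "inventory",  # prefix for the change-event topics
    },
}

# POST the definition to the Kafka Connect REST API (port-forwarded locally).
resp = requests.post(
    "http://localhost:8083/connectors",
    headers={"Content-Type": "application/json"},
    data=json.dumps(connector),
)
resp.raise_for_status()
print("created connector:", resp.json()["name"])
```

From there, Debezium streams every change in the watched tables to Kafka topics for downstream consumers.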
Microservices architecture has become a cornerstone in modern application development, offering scalability, agility, and flexibility. However, managing the complexity of microservices can be challenging, and that's where Kiali comes into play. In this talk, we'll explore the powerful capabilities of Kiali as an observability and management platform for Kubernetes applications.
Microservices introduce a new set of challenges in terms of monitoring, tracing, and understanding the interactions between services. Kiali, an open-source project, simplifies these complexities by providing a visual representation of the microservices topology, along with advanced monitoring and troubleshooting features.
The advent of SaaS, the adoption of CI/CD tools and the DevOps movement have enabled new features to be delivered to customers and bugs to be fixed at an unprecedented rate.
However, while these advances have reshaped the way software is developed and operates, bottlenecks remain. Developers are spending more and more of their time on non-code-related tasks. So a new challenge arises: how to free up time for developers and improve their DX?
Part of the answer could lie in merge queues, a concept as recent as it is little-known.
But before explaining what it is, we need to understand the practices that led us to the merge queue. We'll briefly retrace the history of development processes, arriving at the state of the art we know today: continuous integration and deployment. The aim is to understand where we are now, in order to better understand where we're going.
What will tomorrow's development processes be like? What does a development team need to put in place today to ensure that a project is as successful as possible?
When deploying virtual machines in public cloud environments, it is crucial to automate the network configuration. While DHCP is enough for single-interface single-IP VMs, more complex scenarios require multiple addresses, additional routes and policy routing.
nm-cloud-setup is a tool - part of the NetworkManager project - that can automatically fetch the network configuration from a metadata server and apply it to an instance. It supports the most common cloud providers and it is the ideal solution when the VM is running NetworkManager, since it integrates natively with it.
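As a purely illustrative sketch (not nm-cloud-setup's actual code), this is roughly what "fetching from a metadata server" means on EC2: list each interface's MAC address, then query the addresses assigned to it (IMDSv2 deployments additionally require a session token):

```python
import requests

# EC2 instance metadata service, reachable only from inside the VM.
BASE = "http://169.254.169.254/latest/meta-data/network/interfaces/macs/"

for mac in requests.get(BASE, timeout=2).text.split():
    mac = mac.rstrip("/")
    # Private IPv4 addresses assigned to this interface.
    ips = requests.get(f"{BASE}{mac}/local-ipv4s", timeout=2).text.split()
    print(mac, ips)
```

nm-cloud-setup consumes equivalent data and turns it into NetworkManager configuration, including the additional routes and policy routing rules mentioned above.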
Join this talk to learn more about it!
Are your pipeline tests mature enough that you feel confident deploying to a large and dense Kubernetes environment on a Friday? What happens when disaster strikes? It is easy to assume the environment will snap back quickly, but no one wants to find out whether that's true at 2am. Enter Kraken! An open source chaos testing framework for injecting deliberate failures and analysing both recovery and performance, to help harden Kubernetes and the applications running on it.
In this session we will explore what chaos testing is, why it matters, and how you can use Kraken to test your OpenShift/Kubernetes system: who would benefit from using Kraken, how it helps identify vulnerabilities, optimize cluster performance, and improve overall system reliability, how it injects chaos into the system, and what types of scenarios Kraken has to offer.
You will be guided through the installation of Kraken and given tips on how you can start using it yourself today!
This talk will show you, through a live demo, how to escape "if else if" solutions in a way that goes beyond a hand-maintained lookup table. It aims to have one foot in meta-programming and the other in your everyday programming.
Just think of the following question: how would you write code for an application that needs to decrypt or load a certain encryption scheme or file type in order to do some work on that data? How would you organise your application to handle all the different encryption schemes and file types that exist?
This is exactly the question this talk is meant to spark in its audience, and it hopes to push for automated solutions that require the least amount of manual intervention (ideally only what is necessary, such as the implementations themselves).
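As a minimal sketch of the direction the talk points in (not the presenter's actual demo), handlers can register themselves with a decorator, so the lookup table builds itself and adding a new file type never touches the dispatch code; all names here are hypothetical:

```python
from typing import Callable, Dict

# The table is populated by the handlers themselves, never edited by hand.
_LOADERS: Dict[str, Callable[[bytes], object]] = {}

def loader(extension: str):
    """Registration decorator: record a handler for a file extension."""
    def register(func: Callable[[bytes], object]):
        _LOADERS[extension] = func
        return func
    return register

@loader(".json")
def load_json(data: bytes) -> object:
    import json
    return json.loads(data)

@loader(".csv")
def load_csv(data: bytes) -> object:
    import csv, io
    return list(csv.reader(io.StringIO(data.decode())))

def load(path: str) -> object:
    """Dispatch on the file extension; no if/else chain anywhere."""
    import os
    _, ext = os.path.splitext(path)
    try:
        handler = _LOADERS[ext]
    except KeyError:
        raise ValueError(f"no loader registered for {ext!r}")
    with open(path, "rb") as f:
        return handler(f.read())
```

Adding support for a new format now means writing one decorated function; the talk promises to go a step beyond even this kind of table, so treat the sketch only as the starting point.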
In this workshop, teams will be formed to automate the deployment of IdM with smart card support using ansible-freeipa. Configuring and using smart cards with an AD trust can be done too.
Demos and exercises will be prepared. Bringing a smart card to the workshop is highly recommended for proper testing.
Some people build a team using a purely practical perspective and everyone else is just wrong.
Some people build a team using empathy and relationships and everyone else is just wrong.
We realize that both of those statements are a bit over the top.
When people don't understand the makeup of their teams as complex organisms, misunderstandings and conflict can ruin everything. It is way too easy to use broad brush strokes in defining how an Agile team should come to be.
- If everyone else is wrong, how do you move forward?
- If everyone else is wrong, what can you do just to survive?
- There are always two sides to a story. Let's find the truth!
After this presentation you will be able to identify where you land and chart your course for a more cohesive path forward with your team(s). (but everyone else is still wrong....)
strace is a diagnostic, debugging and instructional utility for Linux that is based on the ptrace API.
In this talk the maintainer of strace will describe how the ptrace API is used by strace, how it has been evolving, what kind of features needed by utilities like strace are still missing in the ptrace API, and what might be done in the Linux kernel to address this.
Home automation is a rapidly growing area where Open Source impacts the ecosystem in a significant way. Open Source projects and solutions are often focused on privacy and security and allow users to build fully local home automation.
Let's have a look at the most popular home automation software, Home Assistant: how it is architected, what the ecosystem looks like, how to leverage that ecosystem, and what you can achieve by using it.
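As one small, hypothetical example of leveraging the ecosystem (not part of the talk), Home Assistant exposes a REST API that any script can use; the host name and the long-lived access token below are placeholders:

```python
import requests

HOST = "http://homeassistant.local:8123"  # hypothetical local instance
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"    # created in the HA user profile

# List the current state of every entity known to the instance.
resp = requests.get(
    f"{HOST}/api/states",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=5,
)
resp.raise_for_status()
for state in resp.json()[:10]:
    print(state["entity_id"], "=", state["state"])
```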
In this session, we will demonstrate how to implement DevSecOps pipelines in production using StackRox, Tekton, and other open source security tools such as Sigstore.
We will demonstrate how to eliminate security risks in our CI/CD pipelines by implementing DevSecOps and securing the software supply chain with continuous scanning and runtime protection. We will also demonstrate how to shift security left, detecting and remediating vulnerabilities and misconfigurations that could affect the security of our workloads in production.
Finally, we will show how to provide developers with automated guardrails, integrating StackRox with DevOps and security tools such as Sigstore and Quay, to build robust DevSecOps pipelines for production.
Join our hands-on workshop to unlock the secrets of Multicluster Application Deployment using Red Hat's Advanced Cluster Management (ACM) and the powerful GitOps methodology. We'll guide you through the seamless orchestration of applications across diverse clusters, demonstrating ACM's capabilities in enhancing scalability and resilience. Dive into the GitOps philosophy as we explore how version-controlled, declarative configurations can transform your multicluster deployment workflows. Through practical exercises and real-world scenarios, participants will gain a comprehensive understanding of ACM and GitOps integration, empowering them to efficiently manage and deploy applications in a multicluster environment.
Are you a NetworkManager user, developer, or simply curious to learn more about it? Join us for this community meetup.
This event is open to everyone, providing an opportunity to connect with other people, share experiences, and learn more about the project.
We'll be hosting an open discussion without a fixed agenda, so feel free to bring any topics or questions you'd like to explore.
For further details, visit our community page at https://networkmanager.dev/community/.
We look forward to seeing you there!
There are numerous engineers, not only throughout Red Hat, who deal with authentication on a daily basis. Think Keycloak, OIDC, OpenLDAP, Kerberos, X509 certificates... the whole shebang. Yet we rarely ever get to meet and talk about how we deal with authentication in our daily software engineering lives.
This session is meant for just that. Let's see if we can gather a bunch of people who deal with authentication daily, who would like to share some exciting news from the industry, or who would like to explain how they implemented an authentication feature and see how it stands up among other authentication-involved peers.
Join me for an enlightening discussion as I introduce Fedora CoreOS and Red Hat CoreOS, operating systems designed for lightweight and container-centric environments. Together, we'll explore the CoreOS Assembler, a revolutionary build environment, and how it addresses challenges in traditional image creation methods. Discover the power of OSBuild and its role in simplifying these processes. Plus, I'll walk you through the user-friendly experience of how the new OSBuild coreos-assembler image builder works. Don't miss this opportunity to dive into the future of efficient OS development with me!
- Introduction to Fedora CoreOS and Red Hat CoreOS: overview of the lightweight, container-focused operating systems
- The CoreOS Assembler, our build environment: what coreos-assembler is and how we build CoreOS systems
- Challenges in traditional image creation methods
- Introduction to OSBuild and its role in addressing these challenges
- The new OSBuild coreos-assembler image builder and its user experience
The role of a technical writer extends far beyond the act of writing alone. In this session, we'll debunk the misconceptions related to technical writing and explore the multifaceted skill set required of modern technical writers: from understanding user needs to mastering tools and technologies, the role encompasses a diverse range of competencies.
FarmAI is an innovative open-source project, developed in collaboration with students from Mendel University, that aims to revolutionize autonomous farming by utilizing advanced image segmentation models.
These days, the Linux kernel contains almost 80,000 functions that various observability tools, such as ftrace, BPF, or perf, can attach to. There's usually just a minimal overhead, but what if you wanted to attach to all the functions at once? Is that even possible, or will it crash the kernel? In this talk, we'll explore the current state of things, show how different tools approach the task, and finally present some very recent kernel contributions which move us towards this goal.
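To illustrate the scale of the problem (a hedged sketch, not material from the talk), the naive approach with the BCC Python bindings attaches one kprobe per matching function; a narrow pattern like "^vfs_.*" attaches quickly, but widening the regex towards all ~80,000 functions is exactly where the trouble starts:

```python
import ctypes as ct
import time

from bcc import BPF

# A single shared counter bumped on entry to any probed function.
prog = r"""
BPF_ARRAY(calls, u64, 1);
int count(struct pt_regs *ctx) {
    int zero = 0;
    u64 *val = calls.lookup(&zero);
    if (val) __sync_fetch_and_add(val, 1);
    return 0;
}
"""

b = BPF(text=prog)
b.attach_kprobe(event_re="^vfs_.*", fn_name="count")  # one kprobe per match
print("attached to", b.num_open_kprobes(), "kernel functions")

time.sleep(5)
print("total calls in 5s:", b["calls"][ct.c_int(0)].value)
```

Each probe here is attached one at a time, which is why naive mass attachment struggles at full kernel scale.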
Open-source projects often face challenges in maintaining efficient development, testing, and delivery workflows. This talk will delve into the Podman team's journey of switching to an end-to-end CI workflow, from upstream pull request to Fedora delivery, using Packit, COPR, and tmt, and also how these tools have enabled testing across a comprehensive array of distributions in the Red Hat family, all the way from Fedora Rawhide to RHEL. We will also discuss some challenges the team is currently facing and the CI roadmap in the Podman team.
An overview of Cilium's Tetragon component [1], which enables powerful real-time, eBPF-based security observability and runtime enforcement. The Tetragon project is highly configurable and can be used to observe various parts of the Linux operating system and enforce security policies. I'll present its overall design and usage with some examples.
[1] https://github.com/cilium/tetragon
The final session of the conference! We'll announce all competition winners here, and it's your last chance to win cool prizes with our conference quiz!