Christoph Görn
Christoph Görn has 25+ years of experience in Open Source business and development. He co-founded a Linux services company in 1998, worked as an IT Architect at IBM, and consulted for a 25-person company. After researching the topic of AI Stacks with the Office of the CTO at Red Hat, he is now a Product Manager for Red Hat OpenShift AI. But first and foremost, he is a hacker for the #B4mad Network!
Sessions
This presentation offers a comprehensive exploration of artificial intelligence (AI) and its trajectory from broad, foundational principles to specialized applications at the technological forefront. Starting with an introduction to AI and its evolution, the presentation moves on to the applications and impact of AI, spotlighting the transformative effects on sectors such as healthcare, finance, automotive, and entertainment, while also addressing the ethical and societal implications that accompany widespread AI adoption.
Diving deeper, the presentation shifts focus towards the frontier of AI technology—edge AI. Here, we uncover the significance of bringing AI processing closer to the data source, highlighting the benefits of reduced latency and enhanced privacy, alongside the challenges faced in implementation. Through real-world examples, attendees will gain insights into how edge AI is being integrated into smart devices, autonomous vehicles, and industrial predictive maintenance.
The presentation concludes with a look at future directions, speculating on emerging trends and potential breakthroughs in AI, including the role of quantum computing and the importance of AI governance. Designed to be both informative and thought-provoking, this keynote aims to provide a holistic view of AI's current state and its future possibilities, encouraging a dialogue on how we, as a society, can navigate the ethical, technological, and practical challenges ahead. This presentation is a call to action for professionals, researchers, and enthusiasts to reflect on the implications of AI advancements and to participate in shaping a future where technology amplifies human potential and addresses global challenges.
In an era where artificial intelligence (AI) is not just an auxiliary tool but a core component of digital ecosystems, ensuring the trustworthiness of AI-powered systems has become paramount. As an Open Source enthusiast, I propose to explore the multifaceted approach required to safeguard the integrity and reliability of large language models (LLMs). This talk will delve into the current state of Trustworthy AI, highlighting the latest developments, challenges, and the critical need for transparent, ethical, and secure AI practices.
We will begin by defining what makes AI "trustworthy," focusing on the principles of fairness, accountability, transparency, and ethical use. The talk will then pivot to the specific challenges posed by LLMs, including bias, interpretability, and the potential for misuse. We will outline practical strategies for implementing guardrails around LLMs. This includes the development of robust frameworks for model governance, the role of open-source tools and communities in fostering responsible AI, and the importance of cross-industry collaboration.
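To make the guardrail idea concrete, the following is a minimal sketch in Python of what pre- and post-generation checks around an LLM call might look like. It is purely illustrative: the names used here (BLOCKED_TOPICS, guarded_completion, the stand-in fake_model) are assumptions for this example and do not refer to any specific tool or framework covered in the session.

# A minimal, hypothetical sketch of input/output guardrails around an LLM call.
# All names here are illustrative assumptions, not part of a specific framework.
import re
from typing import Callable

# Simple policy: topics the deployment refuses to handle, and an output pattern
# (email addresses) that is redacted before the response is returned.
BLOCKED_TOPICS = ["credit card number", "social security number"]
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def guarded_completion(prompt: str, model_call: Callable[[str], str]) -> str:
    """Apply pre- and post-generation checks around an arbitrary model call."""
    # Pre-generation guardrail: refuse prompts that touch a restricted topic.
    lowered = prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "Request declined by policy: the prompt touches a restricted topic."

    # Call the underlying model (any callable that maps a prompt to text).
    response = model_call(prompt)

    # Post-generation guardrail: redact email addresses from the output.
    return EMAIL_PATTERN.sub("[redacted email]", response)

if __name__ == "__main__":
    # Stand-in for a real LLM client, so the example runs without dependencies.
    fake_model = lambda p: f"Echo: {p} (contact: alice@example.com)"
    print(guarded_completion("Summarize our governance policy", fake_model))

In practice such checks would be backed by the governance frameworks, monitoring, and open-source tooling discussed in the talk; the point of the sketch is only that guardrails wrap the model call on both the input and the output side.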
Furthermore, the talk will address how companies and communities can ensure that their AI-powered software systems are not only efficient and innovative but also worthy of trust. This involves a comprehensive approach that encompasses regulatory compliance, continuous monitoring, and the cultivation of an ethical AI culture.