Guardrails for Trustworthy AI: Balancing Innovation and Responsibility
06-13, 13:15–13:50 (Europe/Prague), E105 (capacity 70)

In an era where artificial intelligence (AI) is not just an auxiliary tool but a core component of digital ecosystems, ensuring the trustworthiness of AI-powered systems has become paramount. As an Open Source enthusiast, I propose to explore the multifaceted approach required to safeguard the integrity and reliability of large language models (LLMs). This talk will delve into the current state of Trustworthy AI, highlighting the latest developments, challenges, and the critical need for transparent, ethical, and secure AI practices.

We will begin by defining what makes AI "trustworthy," focusing on the principles of fairness, accountability, transparency, and ethical use. The talk will then pivot to the specific challenges posed by LLMs, including bias, limited interpretability, and the potential for misuse. We will outline practical strategies for implementing guardrails around LLMs: these include robust frameworks for model governance, the role of open-source tools and communities in fostering responsible AI, and the importance of cross-industry collaboration.
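To make the idea of a guardrail concrete, here is a minimal, purely illustrative sketch of pre- and post-generation policy checks wrapped around an arbitrary text-generation function. The blocklist, the `violates_policy` helper, and the `generate` callable are hypothetical placeholders for this abstract, not part of any specific framework covered in the talk.

```python
import re
from typing import Callable

# Hypothetical blocklist standing in for a real policy engine.
BLOCKED_PATTERNS = [
    re.compile(r"\b(?:password|ssn|credit card)\b", re.IGNORECASE),
]


def violates_policy(text: str) -> bool:
    """Return True if the text matches any blocked pattern."""
    return any(pattern.search(text) for pattern in BLOCKED_PATTERNS)


def guarded_generate(prompt: str, generate: Callable[[str], str]) -> str:
    """Apply simple checks before and after a model call."""
    if violates_policy(prompt):
        return "Request declined: the prompt appears to ask for sensitive data."
    response = generate(prompt)
    if violates_policy(response):
        return "Response withheld: the model output tripped a policy check."
    return response


if __name__ == "__main__":
    # Stand-in for a real LLM call.
    echo_model = lambda p: f"Echo: {p}"
    print(guarded_generate("Tell me a joke", echo_model))
    print(guarded_generate("What is my credit card number?", echo_model))
```

Real deployments layer many such checks (toxicity filters, PII detection, grounding checks) and combine them with governance and monitoring, which is the broader point of the session.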

Furthermore, the talk will address how companies and communities can ensure that their AI-powered software systems are not only efficient and innovative but also worthy of trust. This involves a comprehensive approach that encompasses regulatory compliance, continuous monitoring, and the cultivation of an ethical AI culture.

Christoph Görn has 25+ years of experience in Open Source business and development. He co-founded a Linux service company in 1998, worked as an IT Architect at IBM, and consulted for a 25-person company. After researching the topic of AI Stacks with the Office of the CTO at Red Hat, he is now a Product Manager for Red Hat OpenShift AI. But first and foremost, he is a hacker for the #B4mad Network!
