2024-07-11
This talk will explain the multifaceted risks of building custom AI solutions, from both Responsible AI and Safety perspectives, and explore ways to mitigate them, so that as AI professionals we create AI systems that do no harm.
The rapid advancement of AI technologies, especially in the LLM space, is opening countless opportunities for organizations across all industries to apply them in their daily business. However, it also introduces significant risks to their users and to broader society, from both ethical and safety perspectives.
In this talk we will delve into the various aspects to consider when building custom AI solutions to ensure they cause no harm. We'll explain the types of risks to assess from both Responsible AI and Safety perspectives, and the mitigations that can be implemented to address them. We'll discuss broad concerns that apply to all AI systems, such as fairness and inclusiveness, as well as risks specific to Large Language Models, such as prompt injection attacks and hallucinations.
By the end of this talk, participants will have a clear understanding of AI risks and a practical framework for evaluating and implementing responsible AI practices when building AI-based applications.
The talk is designed for anyone involved in the design and development of AI systems, from AI developers and data scientists to project managers. No deep technical knowledge is required, but a prior understanding of the fundamentals of AI and LLMs is assumed.