BSidesLuxembourg 2026

Omar Rachid

An Application Security Engineer with over 10 years of experience, I help organizations embed security at the core of their software development lifecycle. With a background in software engineering, I bring a pragmatic, hands-on approach to bridging the gap between development, security, and DevOps teams.

Today, my focus lies at the intersection of application security and artificial intelligence, where I explore how to securely adopt AI-driven technologies while managing emerging risks and ensuring resilient, secure systems.


Session

05-07
13:30
40min
Trust and Traceability: Developer Observability in the AI-Powered SDLC
Omar Rachid

Trust and Traceability: Developer Observability in the AI-Powered SDLC

Safeguarding the enterprise with superior AI risk governance

It has been over three years since AI coding tools first landed, and in 2026, more than three-quarters of developers use them in their workflows... with or without the knowledge and blessing of the AppSec team. Rumors of developers being replaced entirely have been exaggerated, but crucially, the use of AI in enterprise environments has further exposed a significant security skills gap among developers, who struggle to identify and mitigate vulnerable, AI-generated code.

Security programs must evolve rapidly to address this emerging threat vector, but many CISOs lack the data and insights needed to effectively empower their development cohorts. With AI coding tools touted as both a blessing and a curse for development and software security, there has never been a better time to ensure the enterprise security program is not just updated to accommodate the increased attack surface, but actively optimized for SDLC efficiency and cyber defense.

World-class security leaders must rise to the occasion and lead proactive security programs that use the right tech stack and strategy to manage developer risk, with deep observability into developers' security skills and the security efficacy of their AI technology stack. Developers have immense potential to be central to a defensive security strategy, and with the right knowledge they can transform their approach to coding and adopt a security-first mindset. This shift is vital as the use of AI coding tools grows: critical thinking from the developer is essential to deploying them safely in their workflow.

Based on AI experiments and key research with CISOs, the presentation reveals the critical pathways security leaders can take to deliver effective developer-focused training programs that reduce risk, shift negative security sentiment in the development cohort, and safely adopt AI technology with precise governance, including:

- Understanding comparisons between AI and human coding: what works, and what can affect enterprise security maturity.
- Navigating AI data quality issues and establishing safe pair programming with unprecedented developer observability.
- Developer upskilling, including benchmarking and growing key security skills with knowledge and governance that lead to better risk mitigation.
- How to establish a skills baseline among developers and grow relevant competency quickly.
- The pitfalls of AI vulnerability detection, and the skill set your developers must master to overcome hallucination, insecure code generation, and misconfiguration.
Secure Development track
Workshops and Stage - Gernsback (C1.05.02)