Security BSides Las Vegas 2025

Desktop Applications: Yes, We Still Exist in the Era of AI!!!
2025-08-05 , Firenze

Everyone’s talking about securing cloud-native AI—but what about desktop applications, the unsung workhorses powering critical workflows in design, engineering, finance, and content creation? Often seen as “legacy,” today’s desktop apps are evolving—embedding local LLMs, enabling predictive UIs, intelligent automation, and offline inference.

This talk reframes the AI security conversation by spotlighting threats that emerge when AI meets the desktop. We’ll explore how these integrations open up new attack surfaces—prompt injection in embedded models, adversarial inputs, abuse of local inference, and vulnerable plugin ecosystems. These risks don’t replace traditional issues—they amplify them. Longstanding flaws like memory corruption, unsafe file parsing, and protocol-level bugs remain highly relevant.

We’ll demo two real-world attacks: prompt injection on a local model, and file-format fuzzing exposing a legacy crash. Then we’ll look at AI-aware threat modeling for desktop apps, including edge cases like tampered models and insecure automation. Finally, we’ll share practical strategies to integrate validation, fuzzing, and modeling into your secure SDLC.

If you thought desktop security was yesterday’s problem—think again. With AI in the mix, it’s more relevant, more complex, and more important than ever.


In today’s rush toward AI-native development, desktop applications are often dismissed as legacy systems. However, they remain foundational to industries like design, finance, healthcare, and engineering. These applications are evolving too—embedding local LLMs, enabling predictive UIs, and offering offline AI inference. But in doing so, they create a new category of hybrid software: traditional desktop logic combined with AI decision-making. This evolution introduces a unique and largely under-explored threat landscape.

This talk reframes the AI security conversation around the desktop domain. It starts by cataloging AI use cases already embedded in modern desktop applications—intelligent assistants, context-aware automation, AI-enhanced plugins, and model-influenced file parsing. With this foundation, we’ll explore the novel risks they bring, including:
* Prompt injection in offline or locally-embedded LLMs.
* Inference-based abuse, where untrusted inputs manipulate model behavior.
* Unsafe output handling, where AI-generated content drives downstream actions.
* AI plugin ecosystems prone to over-permissioning or unvalidated extensions.
* Model tampering, especially in scenarios without strong integrity checks.

But these new threats don’t replace the old—they amplify them. Traditional issues such as memory corruption, unsafe file parsing, and protocol vulnerabilities remain present, and in some cases, are re-exposed by AI-powered workflows (e.g., previewing or auto-parsing files without validation).

To demonstrate this hybrid risk model, the session includes two practical demos:
1. A prompt injection attack targeting an embedded local LLM in a desktop app, leading to unintended file disclosure or unauthorized automation.
2. A file-format fuzzing demo against a legacy parser now wrapped in AI functionality, resulting in a crash or memory corruption—highlighting the dangers of blindly coupling AI with legacy input handling.

We’ll then transition into modern threat modeling for these AI-desktop hybrids. We'll break down:
* How to model trust boundaries when inference engines are embedded locally.
* Risks introduced by model updates or user-controlled configuration.
* Edge cases like AI-driven plugin behavior and adversarial content generation.

From a defense perspective, we’ll provide fuzzing strategies that remain effective—file format fuzzing, protocol fuzzing, and model I/O fuzzing—along with examples of tools like AFL++, libFuzzer, and custom harnesses for AI pipelines.
Finally, we’ll outline how to bring this into the Secure Development Lifecycle (SDLC):
* Introduce abuse-case testing for AI features.
* Incorporate threat modeling sessions into early feature design.
* Automate fuzzing pipelines into CI for both legacy and AI logic.
* Develop organizational awareness around the risks of hybrid systems.

This session is ideal for security engineers, red teamers, and AppSec practitioners who want a deeper understanding of how the AI transformation impacts a class of software that hasn’t gone anywhere—but is becoming more complex and critical than ever.
Expect actionable insights, demo-driven examples, and a modernized approach to defending desktop applications in the AI era.

Uday Bhaskar Seelamantula is a security professional at Autodesk with a focus on innovative approaches to application security. With extensive experience in both offensive security and secure development practices, Uday is passionate about bridging the gap between traditional security concerns and the emerging risks presented by AI technologies. Currently working on novel fuzzing techniques and static analysis, Uday has a deep interest in how security can evolve to address the unique challenges posed by AI integrations in desktop applications.

Having collaborated with teams on projects that span across security incident response, threat modeling, and secure software development lifecycle practices, Uday brings a well-rounded perspective to the conversation on how organizations can better secure the applications we rely on. When not researching the latest vulnerabilities or AI threats, Uday enjoys mentoring colleagues and sharing knowledge to help shape the next generation of security professionals.

Outside of work, Uday keeps sharp by playing CTF challenges and running fuzz farms, while unwinding with snowboarding as a favorite way to relax.