2026-05-06 – Workshops, May 6th (C1.02.06)
Visual Studio Code has become the de facto IDE for millions of developers, and its extension marketplace is now a first-class target for supply-chain compromise. In this talk we move beyond yesterday’s JavaScript-only “theme” backdoors and show how to fuse high-level TypeScript with low-level Rust to create extensions that are indistinguishable from legitimate Microsoft-signed add-ons—yet silently execute native x86_64 shellcode inside the IDE process.
We begin with a data-driven tour of recent in-the-wild incidents: an array of malicious Solidity extensions that targeted blockchain developers, with special emphasis on the “Solidity” extension that stole $500k in crypto from a Russian blockchain developer. We follow that with an analysis of the Malicious Corgi malware and the new self-propagating GlassWorm extension, including the later samples seen in the wild that used more advanced techniques. The rise of AI-centric forks (Cursor, Windsurf, etc.) has also given rise to new extension marketplaces, where malicious extensions can use inflated download counts as perfect camouflage. Next we deep-dive into the malicious extension toolchain: a Rust FFI bridge that compiles to a native library, exposes a single innocent-looking TypeScript API, and preserves the marketplace’s blue “verified” tick. We demonstrate live how to backdoor legitimate extensions, including cases where the source code is available and cases where it is not.
We close with defensive takeaways: IoCs and TTPs to look for, defensive rules that can prevent such attacks, and possible detection vectors. Attendees leave with a fully annotated GitHub repo that walks them through the process of developing such malware, starting with a “hello-world” C++ addon and building up to a Rust-based shellcode loader backdoored into a popular extension.
Visual Studio Code is no longer just an editor; the IDE, along with its many AI-powered forks, has become the primary interface for developers of all kinds. Its extension host, a Microsoft-signed Electron process, enjoys the same blind trust from EDRs that we traditionally grant to Outlook or Teams. Meanwhile, the extension ecosystem still treats security as an afterthought: there is no deep source scanning, verification mechanisms are sparse, and the blue “verified” badge is cached locally – so a repackaged .vsix keeps the badge even after the payload has been swapped. The talk presents a brief case study of malicious extensions used in the wild by threat actors and of previously affected supply chains.
The talk presents one of the first public implementations that weaponises this trust gap: a Rust-compiled, position-independent shellcode runner delivered as a Node native addon. We take a Microsoft-published extension, live-server, and backdoor it, and do the same to another extension with over 74M downloads. The talk also demos the following aspects of such an attack:
- Extension-host OPSEC: delaying `require("./index.node")` until the user triggers the legitimate command (“Open with Live Server”), so the implant is absent from the initial process snapshot that EDRs collect.
- Repackaging a blue-tick extension: cloning Microsoft’s own “Live Preview” repository at a signed commit, grafting the Rust addon into its webpack pipeline, and repackaging with `vsce package`. The resulting `.vsix` is byte-for-byte identical except for the extra native node – and the GUI still shows the verified badge, because VS Code only re-validates signatures when the enterprise policy `extensions.verifySignature` is set to `error`.
- Going in blind: backdooring another popular extension with our shellcode, without any prior knowledge of the source code.
For each of these topics we also dissect the internal workings, file structure, thread stack and other relevant details of how the loader operates.
Finally, the talk concludes by listing the relevant IoCs and TTPs left behind by this attack vector and discusses various detections which organisations and individuals can adopt to protect themselves.
Session Outline
- Pre-roll (loop, 2 min before start)
- Screen cycles side-by-side screenshots: legitimate vs back-doored Live Preview extension.
- Blue tick is identical; only the “Installation” tab shows an extra 46 kB native node
- Caption: “Spot the implant.” (Sets the visual theme of the talk.)
- Introductions (1 min)
- whoami
- Previous work
- Opening – VS Code and its many forks (5 min)
- Rise of VS Code and its various forks
- New forks mean the rise of new marketplaces
- Why target VSCode?
- Electron renderer = Microsoft-signed, whitelisted by every EDR.
- Marketplaces scan JS source only → native code is often a blind spot.
- Very difficult to tell malicious extensions apart from legitimate ones
- Attacks in the Wild (8 mins)
- Previous attacks in the wild: Kaspersky, Malicious Corgi, Material Themes, Glassworm
- Dissecting the $500K malware reported by Kaspersky
- PowerShell scripts are nice - but we can do better
- Taking a look into Malicious Corgi
- Taking a look into GlassWorm’s source code
- Unicode is nice - compiled is nicer
- Pivot: “What if we go native?”
- Node addons and demo extensions (5 mins)
- Introduction to node addons
- Compiling a C++ shellcode runner with node-gyp and running it
- Creating a “Hello world” extension and using ffi to pop a message box
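The “Hello world” demo extension above needs a manifest that declares the command the user triggers. A minimal sketch of the relevant `package.json` fields follows; the publisher and command identifiers (`example`, `helloWorld.sayHello`) are illustrative, not from the talk’s actual extension.

```json
{
  "name": "hello-world",
  "publisher": "example",
  "engines": { "vscode": "^1.85.0" },
  "main": "./out/extension.js",
  "activationEvents": ["onCommand:helloWorld.sayHello"],
  "contributes": {
    "commands": [
      { "command": "helloWorld.sayHello", "title": "Hello World" }
    ]
  }
}
```

The `onCommand` activation event matters for the OPSEC discussion earlier: the extension host only runs the extension’s code when that command is invoked.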
- Bringing in the crab (8 mins)
- Introducing neon-rs and interfacing with JavaScript/TypeScript
- Writing a shellcode runner in Rust
- Discuss relevant changes to be made in the configs
- Compiling and running
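The config changes mentioned above boil down to building the Rust crate as a dynamic library that Node can load. A minimal `Cargo.toml` sketch for a neon-rs project is below; the crate name is illustrative.

```toml
[package]
name = "native-addon"
version = "0.1.0"
edition = "2021"

[lib]
# Node loads the addon as a dynamic library, typically renamed to index.node
crate-type = ["cdylib"]

[dependencies]
neon = "1"
```

The `cdylib` crate type is the key line: without it, the build produces a Rust library that Node’s `require` cannot load.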
- Backdooring a legit VS Code extension (10 mins)
- Choosing the target: LiveServer
- Updating the source to include the add-on
- Making webpack happy
- Compiling and loading the extension
- Visual similarities with legitimate extensions
- Backdooring a popular VS Code extension without any prior knowledge of its source code (5 mins):
- Extract the VSIX bundle
- Add our implant
- Repackage the extension
- Load it into VSCode
- Trigger shellcode execution
- Improvements and Detections (3 mins)
- References to other similar works
- Improvements and other closing thoughts
- IoCs and TTPs associated with the techniques
- Possible detections and prevention mechanisms
Key Takeaways
- The audience becomes more aware of the dangers of blindly trusting extensions from stores
- Malware developers and red teamers get introduced to a new and powerful initial-access vector
- Blue teamers can use the knowledge to prepare new rulesets and detections to prevent such attacks
I am Debjeet, a Malware Developer for Black Hills Information Security. I curate malware and tools for testers, publish research, discover new bypasses, and create automation pipelines. Previously, I worked as a Consultant with Certus and a Researcher with Payatu. When I am not in front of the computer, I am either reading philosophy books, playing Dark Souls, or riding bikes!