Alexander Adamov, NioGuard Security Lab

Dr Alexander (Oleksandr) Adamov is the Founder and CEO of NioGuard Security Lab (nioguard.com), a cybersecurity research laboratory. With over 20 years of experience in cyberattack analysis, gained through his work in the antivirus industry, he has taught cybersecurity at universities in Ukraine (nure.ua) and Sweden (bth.se) for the last 15 years. His laboratory focuses on applying AI and machine learning to solve cybersecurity problems. NioGuard Security Lab is a member of the Anti-Malware Testing Standards Organization (AMTSO). Dr Adamov regularly speaks at major cybersecurity events, including the Virus Bulletin Conference, OpenStack Summit, UISGCON, OWASP and BSides.

Affiliation:

CEO/Founder of NioGuard Security Lab (nioguard.com); Senior Lecturer, Blekinge Institute of Technology (BTH, Sweden); Associate Professor, Kharkiv National University of Radio Electronics (NURE, Ukraine); Member of AMTSO (Anti-Malware Testing Standards Organization); Trainer at ECTEG (European Cybercrime Training and Education Group)


Session

10-13
09:45
45min
Rethinking AV Testing in the Age of AI-Enhanced Cyberattacks
Alexander Adamov, NioGuard Security Lab

Traditional AV testing methodologies are rapidly becoming obsolete in the face of emerging AI-powered cyber threats. In 2020, we demonstrated how AI, specifically Reinforcement Learning (RL), could enable ransomware to evade anti-ransomware defenses by autonomously identifying stealthy file encryption strategies. After approximately 600 training iterations, our RL-based agent learned how to encrypt files in a target folder without triggering detection mechanisms [Adamov & Carlsson, EWDTS 2020; AMTSO 2021].

Such AI-enhanced malware, once purely theoretical, is no longer speculative. The release of large language models (LLMs), beginning with ChatGPT in late 2022, has dramatically accelerated adversarial innovation. By early 2024, joint reporting from Microsoft and OpenAI confirmed that nation-state threat actors were actively leveraging LLMs for reconnaissance, scripting, and social engineering in the preparatory stages of cyberattacks.

Most notably, in July 2025, CERT-UA reported a groundbreaking cyber operation by APT28 (a.k.a. Fancy Bear / Forest Blizzard), in which the attackers operationalized an LLM (Qwen 2.5-Coder-32B-Instruct) to generate system commands on the fly. The attack utilized a Python-based tool, LAMEHUG, which issued reconnaissance commands and harvested sensitive documents autonomously, bypassing traditional AV signatures and behavior-based detection [CERT-UA, 2025].

These developments underscore the need for a new approach to testing against AI-powered cyberattacks. We will examine the shortcomings of current anti-malware test protocols, present a taxonomy of AI-driven attack techniques, and propose a new testing methodology designed to evaluate AV solutions against AI-powered malware. Drawing on real-world examples such as APT28’s use of LAMEHUG, we aim to highlight the urgent need for industry-wide adaptation of AV testing to meet the next generation of cyber threats.

Main Track