BSides Tallinn 2025
Security teams love metrics. Beautiful dashboards filled with vulnerability counts, alert volumes, SLA compliance for fix times, training hours logged, and more. But do any of these metrics actually make organizations more secure? The uncomfortable truth is that most security metrics are questionable, at least from a scientific perspective.
In this talk, I will focus on the science behind meaningful security metrics. I will introduce a framework that helps define metrics based on organization-specific goals, as opposed to inventing a purpose for whatever metrics we have lying around. From there, I will break down the key qualities of a good metric. Finally, I will briefly present the main data analysis methods and the common validity threats involved in going from metric values back to the goals they are supposed to support.
"If you can't measure it, you can't improve it". However, if your security strategy is built on questionable metrics, you might not be improving the right things. This talk will challenge industry assumptions and provide scientific backing to the fact that many widely used security metrics in the industry might be vanity numbers.
Traditional forensic acquisitions create bottlenecks in incident response, requiring specialized expertise and significant time that delays investigations. This presentation introduces an automated forensic triage workflow using open-source tools to accelerate response operations.
The workflow uses a Velociraptor offline collector to acquire forensic triage images, which are automatically uploaded to cloud storage. The upload triggers an OpenRelik workflow that processes the triage data using tools like Hayabusa and Plaso/log2timeline, with AI-powered analysis and summarization. The processed output is uploaded to Timesketch for collaborative analysis.
Several DFIR datasets will be used to show the automation pipeline from initial collection to timeline analysis. The workflow reduces time-to-analysis from hours to minutes while maintaining forensic integrity.
Attendees will learn to implement automated triage workflows and integrate multiple open-source tools into investigation pipelines. The session targets incident responders, digital forensics practitioners, and anyone in the security community looking to streamline forensic operations.
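To make the shape of such a pipeline concrete, here is a minimal Python sketch that runs Hayabusa and Plaso over an already-unpacked Velociraptor triage collection and prepares output for Timesketch. It is an illustration only: the paths are hypothetical, the tool flags are assumptions that may differ between versions, and the actual workflow presented in the talk is orchestrated by OpenRelik rather than a standalone script.

```python
# Minimal sketch of an automated triage step: Hayabusa + Plaso over a
# Velociraptor triage collection. Flags are assumptions and may differ
# between tool versions; the real pipeline is driven by OpenRelik.
import subprocess
from pathlib import Path

TRIAGE_DIR = Path("/cases/host01/triage")     # unpacked Velociraptor collection (hypothetical path)
OUT_DIR = Path("/cases/host01/processed")

def run_hayabusa(triage_dir: Path, out_dir: Path) -> Path:
    """Generate a CSV timeline from Windows event logs with Hayabusa."""
    out = out_dir / "hayabusa_timeline.csv"
    subprocess.run(
        ["hayabusa", "csv-timeline", "-d", str(triage_dir), "-o", str(out)],
        check=True,
    )
    return out

def run_plaso(triage_dir: Path, out_dir: Path) -> Path:
    """Build a Plaso storage file and export it for timeline analysis."""
    storage = out_dir / "timeline.plaso"
    subprocess.run(
        ["log2timeline.py", "--storage-file", str(storage), str(triage_dir)],
        check=True,
    )
    # Export to a line-based JSON format for ingestion (assumed output module).
    export = out_dir / "timeline.jsonl"
    subprocess.run(
        ["psort.py", "-o", "json_line", "-w", str(export), str(storage)],
        check=True,
    )
    return export

if __name__ == "__main__":
    OUT_DIR.mkdir(parents=True, exist_ok=True)
    run_hayabusa(TRIAGE_DIR, OUT_DIR)
    run_plaso(TRIAGE_DIR, OUT_DIR)
    # Upload to Timesketch (e.g. via timesketch-import-client) is left out here.
```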
Forget your PINs and proper keys - at this hands-on workshop, you will learn how to:
- open a car door lock via a CAN bus attack
- pick a key safe at a holiday home without any special tools
- control building automation over a solar Wi-Fi
You will need a phone that can connect to a local Wi-Fi for the last lab.
Introduction
Penetration testing (PT) reports play a significant role in helping organizations identify and mitigate security vulnerabilities, as they are the only tangible product of the conducted tests. A report's effectiveness depends on the extent to which customers can translate its findings into actionable decisions.
Our study investigates usability gaps in penetration testing reports from a customer-centric perspective, focusing on the challenges organizations face in understanding, prioritizing, and acting on the provided insights.
Study
We conducted the study with IT professionals from various companies that consume PT reports. The sessions took place during workshop events held in Czechia and Estonia, attended by more than 50 participants in Czechia and 32 in Estonia.
The study included the following steps:
• Demonstration of a PT scenario – The goal was to show participants how a specific PT scenario is conducted, enabling them to assess the content of the vulnerability finding in the report and identify what they would like to see included.
• Survey – Participants reviewed a report corresponding to the demo. The survey captured their general perceptions and feedback on its content and usability.
• Focus group discussion – Structured, in-depth discussions designed to uncover and explore penetration testing consumers’ expectations, pain points, and preferences regarding reports.
Key findings
Our analysis indicated some differences between the views of technical and managerial participants. For example, for managerial roles it is important that the PT report includes an executive summary, a definition of scope, and a detailed description of the testing methodology. More technical roles, on the other hand, highlighted detailed step-by-step explanations of findings and actionable recommendations as the crucial parts.
The list below highlights selected actionable findings for improving penetration testing reports to better meet client needs and expectations:
• Machine readability – Machine readability in PT reports refers to the format and structure of these documents being optimized for automated processing by software tools, rather than being exclusively human-readable, as they are typically delivered (e.g. PDF). Reports in standardized formats, such as JSON, XML, or CSV, with clearly defined schemas, could improve efficiency (a minimal illustrative example of such a finding follows this list).
• Additional resources – Including additional resources proved essential for replicating the testing process, allowing the organization to verify the vulnerability and better understand its root cause.
• Multiple mitigations – Participants expressed the need for multiple mitigation options in PT reports, rather than a single solution. Providing a variety of mitigation strategies would allow organizations to choose an approach that best fits their resources, risk tolerance, and operational constraints. This flexibility would ensure that remediation efforts are both effective and practical, accommodating different technical environments and business priorities.
• Mitigation impact – Participants emphasized the importance of including the mitigation impact in PT reports. In addition to multiple mitigation options and a preferred solution, they want a clear explanation of how each mitigation would affect the system, security posture, and business operations.
• Preferred mitigation – In addition to multiple mitigation options, participants indicated a preference for having a preferred mitigation clearly highlighted in PT reports. This approach would allow decision-makers to balance between ideal and practical mitigation strategies.
• Target groups – Label each proposed mitigation with the role typically responsible for implementing it. For example, for an issue in the development part, use the label "dev"; for a configuration problem, use the label "config". Using straightforward language, well-organized sections, and consistent terminology could make reports more accessible to both technical and non-technical stakeholders.
• Positive findings – Report positive findings alongside identified vulnerabilities. By highlighting areas where security measures are functioning correctly, reports can provide a balanced view that not only identifies vulnerabilities but also acknowledges strengths.
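As an illustration of the machine-readability and mitigation-related points above, a finding in a standardized format might look roughly like the sketch below. The field names and structure are hypothetical, not a schema proposed by the study.

```python
# Hypothetical minimal schema for a machine-readable PT finding; field names
# are illustrative, not a standard.
import json

finding = {
    "id": "PT-2025-014",
    "title": "Reflected XSS in search parameter",
    "severity": "medium",
    "cvss_vector": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:C/C:L/I:L/A:N",
    "affected_assets": ["https://app.example.com/search"],
    "target_group": "dev",                         # cf. the "Target groups" labels above
    "mitigations": [
        {"option": "Context-aware output encoding", "preferred": True,
         "impact": "Code change only, no user-facing effect"},
        {"option": "Deploy a WAF rule as a stop-gap", "preferred": False,
         "impact": "Faster to roll out, risk of false positives"},
    ],
    "positive_findings": ["CSP header present on most endpoints"],
}

print(json.dumps(finding, indent=2))
```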
Side quest (results soon)
We observed that the majority of feedback was related to the recommendations. In response, we revised and improved the recommendations in an example report and collected feedback from over 200 security professionals. While the data is still being analyzed, we anticipate that by the time of the event, the analysis will be complete. This will allow us to share whether the suggested improvements were effective from the clients' perspective—and whether any unexpected insights emerged.
Conclusion
Our study highlighted a set of actionable steps to improve PT reports with the client experience in mind. Ultimately, when clients can clearly understand and effectively implement the recommended actions, security vulnerabilities are addressed more efficiently. And when that happens, we all move one step closer to a safer and more secure tomorrow.
Want to know more?
K. Galanska, A. Kruzikova, M. P. Murumaa, V. Matyas, and M. Just, "From Reports to Actions: Bridging the Customer Usability Gap in Penetration Testing," IEEE Access, vol. 13, pp. 73975-73986, 2025, doi: 10.1109/ACCESS.2025.3561220.
Manual penetration tests don’t always reveal critical vulnerabilities — but even minor issues, when linked together, can result in significant risks. In this session, Axinom and Neverhack share highlights from a recent engagement that brought such vulnerability chains to light. You’ll also discover how a single pentest can deliver value across multiple areas within a company, turning one investment into value several times over.
When I ask the audience about 2FA phishing or stealers ... the silence is deafening. With the exception of the dude in the back row ("Stealers can't get your passwords from Chrome since ca August 2024, go home, stunthacker").
Well, "I've seen things you people wouldn't believe" - not C-beams glittering in the dark near the Tannhäuser Gate, but trying to guess organisations' password policy from leaks / stealerlogs. Much fun, not time to die, though.
So, let's run a 2FA phishing campaign live against Estonian TARA auth (with scoring) and see what we can grep from a recent freely shared stealerlog drop (as of April 2025: 3000 logs from a BreachForums rando = 183 WordPress admin cookies).
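For a flavour of the stealerlog side, the sketch below shows roughly how one might count WordPress login cookies in such a drop. It assumes the common per-victim cookie files in Netscape format and the standard wordpress_logged_in_ cookie name; the directory layout is hypothetical and real dumps vary wildly.

```python
# Rough sketch: count WordPress login cookies in an extracted stealerlog drop.
# Assumes one "Cookies*.txt" file per victim in Netscape cookie format.
from pathlib import Path

DUMP = Path("/data/stealerlogs")     # hypothetical extraction directory
hits = []

for cookie_file in DUMP.rglob("*ookies*.txt"):
    for line in cookie_file.read_text(errors="ignore").splitlines():
        # Netscape format: domain, flag, path, secure, expiry, name, value
        fields = line.split("\t")
        if len(fields) >= 7 and fields[5].startswith("wordpress_logged_in_"):
            hits.append((fields[0], cookie_file))

print(f"{len(hits)} WordPress login cookies across "
      f"{len(set(f for _, f in hits))} logs")
```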
CSS is an often overlooked aspect of web security, but in the right hands it can be extremely powerful.
This talk takes you through my journey from making silly browser games to pwning companies like Google with this fun little styling language.
In the age of AI and large-scale data processing, it’s tempting to assume that applying security practices equals good privacy. But as multiple real-world breaches have shown — from Estonia’s Asper Biogene genetic data exposure to pharmacy data leaks at Allium UPI — insufficient security controls and a lack of privacy by design can expose organizations to significant privacy risks.
This interactive workshop is tailored for security and privacy professionals whose organizations work with sensitive or large datasets, especially in the context of AI/ML training or internal analytics. We’ll break down the differences and overlaps between infosec and personal data breaches, demystify what anonymisation and pseudonymisation really mean under the GDPR, and explore how to make data useful and safe. Participants will also gain practical insights into breach response basics and how to act when things go wrong.
We’ll wrap with a practical group exercise where attendees get to “anonymise” a fictional database based on publicly available data—and see if their efforts withstand real-world re-identification threats.
KEY TOPICS:
1. How large datasets fuel AI innovation yet at the same time create regulatory risk. Why effective privacy compliance is not a checklist task but an active daily practice.
2. Key differences between infosec incidents and personal data breaches (and when they overlap).
3. Legal definitions of anonymisation and pseudonymisation, with a hands-on practical task to understand both the value and the risk of these measures.
4. Case study examples:
4.1. Asper Biogene (genetic data breach)
4.2. Allium UPI (pharmacy breach)
4.3. European Data Protection Board’s recent recommendations:
4.3.1. Guidelines 01/2025 on Pseudonymisation
4.3.2. Opinion 28/2024 on certain data protection aspects related to the processing of personal data in the context of AI models
5. What to do when a breach happens: notify, assess, contain, communicate.
PRACTICAL WORKSHOP EXERCISE:
Participants are expected to have at least one device per team. Participants are given a dataset for a machine learning exercise. Their task in teams is to:
1. Anonymise the dataset using privacy-enhancing techniques (masking, generalization, suppression, etc.); a minimal code sketch of these techniques follows the exercise list.
2. Switch files between teams and evaluate potential for re-identification based on auxiliary data.
3. Determine whether their approach met the standard of anonymisation or only pseudonymisation.
4. Present each team’s anonymisation strategy and summarize a residual risk assessment. Discuss the potential consequences of a leak of such data: would it be merely a security incident or also a personal data breach?
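To make the exercise concrete, the sketch below shows what masking, generalization, and suppression can look like in code. It uses pandas and hypothetical column names, not the workshop dataset, and whether the result is truly anonymised or merely pseudonymised is exactly the judgment the exercise asks teams to make.

```python
# Minimal sketch of suppression and generalization with pandas on a
# hypothetical dataset; column names are assumptions, not the workshop data.
import pandas as pd

df = pd.DataFrame({
    "name":       ["Mari Maasikas", "Jaan Tamm"],
    "birth_year": [1987, 1953],
    "zip_code":   ["10115", "51004"],
    "diagnosis":  ["J45", "E11"],
})

# Suppression: drop direct identifiers entirely.
df = df.drop(columns=["name"])

# Generalization: coarsen quasi-identifiers (decade of birth, 2-digit zip prefix).
df["birth_decade"] = (df["birth_year"] // 10) * 10
df["zip_region"] = df["zip_code"].str[:2] + "xxx"
df = df.drop(columns=["birth_year", "zip_code"])

# Masking would instead replace values with tokens; note that keeping a keyed
# token table makes the data pseudonymised, not anonymised.
print(df)
```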
LEARNING OBJECTIVES:
1. Understand how anonymisation supports safe AI use and data reuse.
2. Recognize when a breach is a security issue, a privacy issue, or both.
3. Learn to evaluate anonymisation effectiveness using legal and technical criteria.
4. See how access control gaps can escalate into reportable personal data breaches.
5. Get hands-on anonymisation experience and peer feedback.
SPEAKERS:
Margot Arnus - CIPP/US, Co-founder and Privacy Expert at Damus, Senior Legal Counsel at Veriff
Stella Goldman - CIPM, Co-founder and Privacy Expert at Damus, Lead Legal Counsel at Veriff
Are you feeling it...?
That relentless pressure as your attack surface expands – but your security resources just can’t keep up?
We’ve been there at Bolt, grappling with the exact same challenge. The relentless growth of digital assets, coupled with limited internal security resources, has created critical blind spots and persistent exposure to threats. While our product security team excels at developing extensive and scalable security solutions, we often lack the capacity for the deep, narrow focus required by every application and service. Traditional penetration tests, while valuable for targeted assessments, by design provide a time-boxed and limited view, often leaving vast areas of the attack surface unexamined.
Enter crowdsourced security through bug bounty programs – a powerful, indispensable complement to Bolt’s existing defenses. Imagine leveraging a global, always-on network of ethical hackers, each bringing unique expertise and a fresh perspective. Unlike the constraints of traditional pentests, these skilled researchers aren't limited by scope or time. They can relentlessly delve into our features and services, uncovering subtle, systemic issues hidden deep within our systems. This collaborative, continuous approach doesn't just bridge the security resource gap; it dramatically reduces our window of exposure, transforming vulnerability management from a reactive burden into a proactive and resilient defense effort.
Join this session to uncover:
* Strategic Integration: How crowdsourced security has enhanced our overall vulnerability management framework.
* Real-World Triumphs & Challenges: Practical insights into the challenges and undeniable benefits of running a successful bug bounty program.
* Actionable Intelligence: How to transform raw bug findings into strategic insights that identify systemic weaknesses and inform the security roadmap.
* Unique Discoveries: Why crowdsourced findings often differ from, and complement, those from internal teams or traditional pentests.
* Program Playbook: Navigating the critical decision: Is a private or public bug bounty program the right fit for an organization?
The idea that "attackers only need to succeed once" has long influenced the development of defensive strategies. This talk challenges that myth by reframing the defender’s role: not as a gatekeeper who must be perfect, but as a strategist who can disrupt the attacker’s journey at multiple points.
We’ll explore how a layered defense strategy, enhanced by detection engineering, attack surface management, and deception technologies, can shift the advantage toward defenders.
To ground these ideas in practice, we’ll look at how MITRE’s Summiting the Pyramid and Attack Flow projects help defenders visualize, prioritize, and disrupt adversary behavior across the kill chain. These tools offer actionable frameworks for mapping detection coverage and understanding attacker movement in complex environments.
Attendees will gain practical insights into designing and implementing strategic defenses that turn every layer, every alert, and every response into an opportunity to stop attackers in their tracks. Because in modern cyber defense, every step truly counts.
Generative AI promises to revolutionize how security operations teams and investigators detect and respond to threats, but how much of this promise is real and how much is just hype?
In this talk, we go beyond vendor marketing to explore what practitioners and experts really think about GenAI’s place in modern detection and response workflows. Drawing from a Delphi study I conducted with global SOC leaders and AI specialists as part of my academic research with Luleå University of Technology in Sweden, we’ll uncover:
- Where GenAI is already making an impact (and where it's not) for detection and response workflows
- Key opportunities for GenAI in threat detection, triage, and investigation
- Real-world challenges: hallucinations, trust issues, operational risks, and more
- How security analyst roles and skills are evolving in the face of GenAI adoption
- Practical considerations for integrating GenAI into existing detection and response SOC processes
Expect an honest, evidence-based discussion, free of buzzwords, and grounded in what the experts are actually experiencing on the ground.
Whether you're skeptical or optimistic about AI in detection & response workflows, this session will give you a grounded view of the path forward.
Short overview of file analysis
Brief deep dives into:
PDF Format
Office formats (DOCX, XLSX, DOC, XLS, ...)
Image formats (JPEG, PNG)
MP3/MP4
Archives (ZIP, RAR, 7z...)
For each topic we look at:
* Headers and structure basics
* How file structure has been used in attacks
Detection artifacts in the file formats, with hands-on file dissection using tools like:
binwalk
xxd / hexdump
ExifTool
oletools, pefile, pdfid, pdf-parser, and so on.
A task to understand structure and identify potentially malicious components.
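As a minimal sketch of the "headers and structure basics" step, the snippet below matches the leading magic bytes of the formats covered above. The signatures listed are standard, but real triage would lean on fuller tools such as binwalk, file, or the parsers listed above.

```python
# Tiny sketch of the "headers and structure" step: identify a file by its
# leading magic bytes before diving in with xxd, binwalk, oletools, etc.
import sys

MAGIC = {
    b"%PDF":               "PDF document",
    b"PK\x03\x04":         "ZIP container (also DOCX/XLSX, JAR, APK...)",
    b"\xd0\xcf\x11\xe0":   "OLE2 compound file (legacy DOC/XLS)",
    b"\xff\xd8\xff":       "JPEG image",
    b"\x89PNG\r\n\x1a\n":  "PNG image",
    b"ID3":                "MP3 with ID3 tag",
    b"Rar!\x1a\x07":       "RAR archive",
    b"7z\xbc\xaf\x27\x1c": "7z archive",
    b"MZ":                 "PE executable",
}
# Note: MP4's "ftyp" box sits at offset 4, so it needs a separate check.

def identify(path: str) -> str:
    with open(path, "rb") as f:
        head = f.read(16)
    for magic, name in MAGIC.items():
        if head.startswith(magic):
            return name
    return "unknown (check with xxd / binwalk)"

if __name__ == "__main__":
    for p in sys.argv[1:]:
        print(p, "->", identify(p))
```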
Bolt's product security team secures applications for over 200 million customers and 4.5 million partners across 600+ cities in 50 countries. This massive scale makes our platform a prime target for a diverse array of malicious actors, many of whom specialise in scalable, low-tech scams. We've seen an increasing professionalisation even in these "low-tech" schemes, leading to an arms race between threat actors and security measures that often unfolds within weeks, if not days.
Traditional phishing techniques are now being repurposed from email to modern chat applications. We're observing 2FA bypasses via recovery flows and constant probing for business logic issues that can be abused for quick financial gain.
During this presentation, we'll shed light on the variety of sophisticated phishing techniques we've encountered in the wild. Attendees will gain insights into:
Abused Communication Channels: Discover how in-app chat functionality and chat applications such as Telegram and WhatsApp are misused.
Reward vs Punishment: Understand persuasion techniques threat actors use to manipulate targets.
Bypassing Protections: Learn how 2FA, chat filtering and business logic checks could be bypassed.
Authentication Strengths & Weaknesses: Explore the benefits and drawbacks of existing authentication methods.
Red team testing has evolved from underground art to regulated operations, and if you're hoping to deliver these services professionally, you should know the game has completely changed. The financial sector's adoption of TIBER-EU offers a masterclass in what works in structured adversary simulation.
This talk is for practitioners delivering threat intelligence and red team testing services who want to understand how regulatory frameworks are reshaping client expectations and project dynamics. While TIBER-EU emerged from financial sector requirements, its methodologies offer valuable lessons for any industry serious about adversary simulation.
You'll discover the hidden complexities of "threat-led" testing, why many threat intelligence reports fail to drive realistic attack scenarios, and how to navigate the minefield of control teams, blue teams, and regulatory oversight. We'll explore the craft skills that separate professional adversary simulation from basic penetration testing: building credible threat actor personas, designing scenarios that test resilience rather than find vulnerabilities, and managing the delicate dance of "leg-ups" and purple teaming.
Whether you're expanding into the threat intelligence and red team testing services market, or simply curious about the professionalization of red teaming, this session offers practical insights from these complex engagements.
Think cloud security is all about stopping attackers at the gates? Think again. The biggest threats in the cloud aren’t zero-days or nation-state actors — they’re misconfigurations. Yep, the stuff we set up wrong ourselves.
After digging into the guts of hundreds of Azure-based solutions across industries, I’ve seen the same security faceplants over and over again — and they’re not just rookie mistakes. In this talk, I’ll walk through the most common cloud security pitfalls I’ve found, why they keep happening, and how to actually fix them. Whether you're a red teamer, blue teamer, or somewhere in between, you’ll walk away with practical takeaways and a few war stories from the trenches.
In this hands-on workshop, participants will walk through the core steps of a threat hunt - from forming a threat hypothesis to testing it against real-world data. You’ll learn how to frame hypotheses based on attacker behaviors, identify the right data sources, and validate your findings using structured hunting techniques. Whether you're new to threat hunting or looking to sharpen your approach, this session will give you practical skills to hunt smarter.
Administrators are meant to take care of your systems, but what happens when they go rogue?
In this gripping incident response case study, we take you behind the scenes of a real-world insider threat that targeted internal systems. What began as suspicious access patterns on the network led to the uncovering of a calculated and deeply damaging betrayal from within.
The threat actor eventually became the victim of his own attack.
Not sure how exactly to write this CFP, so it's free form. I can clarify all topics.
Title: Why is playing board games and D&D in cyber security actually useful?
ALT title: Why is playing board games and Dungeons & Dragons at work useful?
Abstract:
I have a tiny bit of experience playing tabletop security games, as well as being part of an organizing team - but never on my own, and I am no expert in game theory. I just like D&D and RPGs, and I work in cyber security.
When I reworked the incident response process in Opera and there was a need for training, I decided to do it in the form of tabletop exercises using pen and paper and fake scenarios.
In total I held 8 tabletop games (half a day each) in 4 different offices, with ~100 participants overall, 3 different main scenarios, and some variations.
My talk would cover (general + personal experience):
* Different levels of security training and their usefulness - and why we train at all
* Why tabletop/gamification
* My personal learnings and failures in designing such games: what eventually worked and what A/B testing I used, in case you want to run your own games alone.
* Company (player) failures.
* Actual examples of failures (no names disclosed), illustrating
*** Why such games are needed
*** What they teach us
*** How we still make such obvious mistakes
*** What people tended to fail at specifically (and what each specific failure illustrates)
*** What people's feedback to such training was (maybe - there were a few interesting findings)
* Key points I would recommend anyone consider while designing a game / scenario
I have permission from my employer to give the talk, discuss the examples, etc.
However, they have asked that it not be recorded and published later - only presented at the conference.