2024-12-07 – Track 1
Recent trends show that the next evolution in phishing is the abuse of AI tooling to create realistic, believable deepfake clones. Organisational resilience against deepfake phishing lags drastically behind the curve.
In this talk, we will investigate the state of the art, present case studies of real deepfake attacks, examine the practical feasibility and ease of execution of such attacks, and discuss possible solutions to these problems.
- Introduction to the current state of the art
  - An overview of what attackers are capable of with current techniques and technology (AI specific)
  - Case studies of specific incidents where deepfake phishing has been abused successfully
- Technical Overview
  - Detailed overview of how an attacker could accomplish the same results
    - Specific attention to whether this is possible without extensive training data
- Demonstration
  - Recorded demonstrations of all of the above
  - (Hardware permitting) Live demonstration of an audience member made to look like one of the authors
- Remediation
  - Possible social, corporate, and technical solutions to fight this issue
  - Tools and techniques for detection
Takeaways
- Understanding the current state of the art:
  - Attendees will gain a solid understanding of the current capabilities of deepfakes and AI models.
- Identifying and mitigating risks:
  - Participants will learn how to identify these kinds of threats.
  - Participants will gain an understanding of which technical and organisational controls can be used to mitigate them.
I am a computer engineering graduate who joined MWR CyberSec in 2021 to dedicate myself to helping people become more secure. I have a particular fascination with artificial intelligence (specifically generative AI) that has led me to this intersection of security and machine learning. I have an inquisitive nature and love questioning the security implications of emerging technologies.
Cybersecurity consultant at MWR CyberSec