Bsides Cymru 2025

Saving the Future: How to fix Artificial Intelligence before we all die because of it
2025-10-17, Tramshed Tech

Modification of most of the elements of an AI model can be trivial and completely undetectable, yet very little is being done to address these core, fundamental concerns. In addition, Large Language Models (LLMs) are now harvesting AI-generated content that is itself inaccurate, potentially leading major LLMs into a death spiral of false information.
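
As a minimal illustrative sketch of the first point (not taken from the talk), consider how little it takes to alter a serialised model. The example assumes a PyTorch checkpoint named model.pt containing a plain state_dict; the file name and the choice of layer are hypothetical.

# Illustrative sketch only: silently perturb one weight in a saved model.
# Assumes a PyTorch checkpoint "model.pt" holding a plain state_dict
# (the file name and the targeted layer are hypothetical).
import torch

state = torch.load("model.pt", map_location="cpu")

# Pick the first bias tensor and nudge a single value. The result is still a
# perfectly valid checkpoint that loads and runs without complaint; unless a
# separate hash or signature of the weights is verified at load time, nothing
# flags the change.
key = next(k for k in state if k.endswith("bias"))
state[key][0] += 0.25

torch.save(state, "model.pt")

Signing or hashing model artefacts and verifying them before load is one obvious countermeasure, but it is rarely applied in practice.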

This talk gives examples of real-life attacks against AI models and explains how we can address these issues and build better, more robust AIs to avoid future human catastrophe.


Why are we in the position that we might die because of AI? It starts with the data used to train and build that AI. Increasingly, there are concerns about the wholesale misuse and harvesting of information to build AI systems (often without permission and without any compensation for the effort of creating the original content) – whether it be technical instructions, music, art or archival information. Crucially though, none of that information is ever properly validated.

On top of that, Large Language Models – the main consumer interface with AI – will, if they don’t know the answer to a question, go out to the web, try to find the answer and present it to the user (however wrong that answer may be). So-called “LLM grooming” has already been taking place to spread nation-state propaganda, essentially infecting the results presented to users. Many people have already written off these generative AIs as unusable because of persistent, confidently wrong answers, while others, from school children to politicians, have passed off incorrect and false information as their own, with no attribution to generative AI. Other cases have resulted in dangerous information being presented to users, from unsafe medical advice to recipes that could poison them. Businesses, including some in sectors classed as Critical National Infrastructure, are experimenting with Agentic AI to run and coordinate other systems – or perhaps playing with fire. This situation clearly can’t continue.

Through real examples of poisoning attacks, manipulation of weights and other attacks against Convolutional Neural Networks (CNNs), together with demonstrations from customised LLM red-teaming tools, the audience will understand that this is not just theoretical and that human life really is at risk as AI is adopted across all sectors and industries. The audience will also learn about the methods that can be used to secure future models.
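
To give a flavour of the poisoning attacks mentioned above, here is a hedged sketch (illustrative data and parameters, not the talk’s actual tooling) of a simple label-flipping attack against an image classifier’s training set.

# Illustrative sketch only: label-flipping data poisoning of a training set.
# Assumes NumPy integer labels; class indices and the 5% fraction are
# hypothetical choices for demonstration.
import numpy as np

def poison_labels(labels: np.ndarray, source: int, target: int,
                  fraction: float = 0.05, seed: int = 0) -> np.ndarray:
    """Relabel a small fraction of `source`-class examples as `target`.

    A few percent of flipped labels can be enough to bias a CNN's decision
    boundary while overall accuracy stays high enough to escape notice.
    """
    rng = np.random.default_rng(seed)
    poisoned = labels.copy()
    idx = np.flatnonzero(labels == source)
    chosen = rng.choice(idx, size=max(1, int(len(idx) * fraction)), replace=False)
    poisoned[chosen] = target
    return poisoned

# Example: flip 5% of class-3 training labels to class-8 before training.
y_train = np.random.default_rng(1).integers(0, 10, size=1000)
y_poisoned = poison_labels(y_train, source=3, target=8)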

David is a mobile telecoms and security specialist who runs Copper Horse Ltd, a software and security company based in Windsor, UK. His company is currently focusing on research into AI model security, product security for the Internet of Things, and future automotive cyber security.

David chaired the Fraud and Security Group at the GSMA until March 2025. He authored the UK’s ‘Code of Practice for Consumer IoT Security’ in collaboration with UK government and industry colleagues, and served on the UK’s Telecoms Supply Chain Diversification Advisory Council.

From 2015 to 2022 he sat on the Executive Board of the Internet of Things Security Foundation. He has worked in the mobile industry for over twenty-five years in security and engineering roles. Prior to that, he worked in the semiconductor industry.

David holds an MSc in Software Engineering from the University of Oxford and an HND in Mechatronics from the University of Teesside. He lectured in Mobile Systems Security at the University of Oxford from 2012 to 2019 and served as a Visiting Professor in Cyber Security and Digital Forensics at York St John University.

He was awarded an MBE for services to Cyber Security in the Queen’s Birthday Honours 2019.