Tinus Green
Passionate about cybersecurity, helping others upskill, and getting involved in the cybersecurity community!
Session
Large Language Models (LLMs) are trained on vast amounts of internet data, which means their understanding of what it means to be an “expert” is largely shaped by how that word is used in forums, blogs, and online arguments. Let’s be honest: that picture isn’t always flattering. Despite this, we constantly preface our prompts with phrases like “You are a senior pentester…” or “You are the world’s best developer…”, thinking we’re nudging the LLM in the right direction.
But what if we’re actually setting it up to fail?
In this lightning talk, we’ll explore how the way we frame prompts contributes to the very problems we complain about: hallucinations, overconfidence, and incorrect output. More importantly, can we get better results by prompting LLMs to think more like real experts: the kind who research, collaborate, and aren’t afraid to say “I don’t know”?
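To make the contrast concrete, here is a minimal sketch of the two framings side by side. It assumes the OpenAI Python SDK with an API key in the environment; the model name, the prompt wording, and the `ask` helper are all illustrative examples, not material from the talk itself.

```python
# Contrasting a status-based persona prompt with a behavior-based prompt.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# The common persona framing the talk questions:
PERSONA_PROMPT = "You are the world's best senior pentester."

# An alternative framing that describes expert *behavior* instead of status:
BEHAVIOR_PROMPT = (
    "You are assisting with a penetration test. Before answering, check "
    "whether you have enough information; if not, ask clarifying questions. "
    "State your confidence, explain what your answer is based on, and say "
    "'I don't know' rather than guessing."
)

def ask(system_prompt: str, question: str) -> str:
    """Send the same question under a given system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

question = "How would you test this login form for SQL injection?"
print(ask(PERSONA_PROMPT, question))
print(ask(BEHAVIOR_PROMPT, question))
```

Running the same question under both system prompts makes it easy to compare whether the behavior-based framing actually produces more cautious, better-grounded answers than the persona framing.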
