Detections and Dragons: Creating Logic that Scales
2025-04-26, Track 1

Dragon riders - grab your flight leathers and let’s strap in for Detection flight school. What makes for a fire(breathing) detection? Where should we even start? We will dive in, discussing, head to wings to tail, how to create high-fidelity detection logic - whether you’re protecting a few resources or a few thousand. We will discuss tying in the MITRE ATT&CK framework, choosing the right sources for detection, and testing the logic with the open-source Atomic Red Team framework.


  1. Introduction (1 - 2 minutes)
    • Speakers
    • Agenda
  2. What’s the problem? (2 - 3 minutes)
    • Repeatedly receiving the same alert that provides no value
    • Coverage is lacking - finding out about a threat long after initial access
    • Spending too much time creating logic around the hypothetical new hotness vs. tried-and-true common abuses
  3. What makes for “good” detection logic? (4 - 5 minutes)
    • Quick to determine benign vs evil when analyzing
    • You don’t see it often, but when you do it is likely bad
    • May need a few exclusions, but not so many that you’re constantly adding more
    • Clear what attack you’re looking for with it - not some hypothetical “this could be bad”
    • Clearly scoped/defined logic
    • Example of bad, better, best
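The bad/better/best progression above could be sketched with a toy example. Everything here is hypothetical - the event fields, process names, and rules are illustrative only, and real detection logic would live in a SIEM or EDR query language rather than Python:

```python
# Toy process-creation events; field names and values are hypothetical.
events = [
    {"process": "powershell.exe", "cmdline": "powershell -enc SQBFAFgA...", "parent": "winword.exe"},
    {"process": "powershell.exe", "cmdline": "powershell -File backup.ps1", "parent": "taskeng.exe"},
]

# Bad: fires on every PowerShell launch -- constant, low-value noise.
def bad(e):
    return e["process"] == "powershell.exe"

# Better: narrows to encoded commands, a common obfuscation sign.
def better(e):
    return e["process"] == "powershell.exe" and "-enc" in e["cmdline"].lower()

# Best: clearly scoped to a specific abuse -- an Office app spawning
# encoded PowerShell -- so an analyst can quickly judge benign vs. evil.
OFFICE_PARENTS = {"winword.exe", "excel.exe", "outlook.exe"}
def best(e):
    return (
        e["parent"].lower() in OFFICE_PARENTS
        and e["process"] == "powershell.exe"
        and "-enc" in e["cmdline"].lower()
    )

for name, rule in [("bad", bad), ("better", better), ("best", best)]:
    print(name, sum(1 for e in events if rule(e)))
```

The "bad" rule matches both events; "better" and "best" match only the suspicious one, and "best" also makes the attack being hunted obvious to the analyst.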
  4. MITRE ATT&CK framework (9 - 10 minutes)
    • Brief intro
    • Choosing a technique
    • Earlier stages of an attack are a better place to start coverage
    • What techniques are used the most in the wild?
    • Researching technique abuse
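Choosing a technique often starts with a coverage-gap check. A minimal sketch: the ATT&CK technique IDs below are real, but the detection names and the coverage map itself are invented for illustration:

```python
# Hypothetical map of existing detections to the ATT&CK technique IDs
# they cover (the IDs are real; the coverage itself is made up).
detections = {
    "office_spawns_powershell": {"T1059.001", "T1566.001"},
    "lsass_memory_access": {"T1003.001"},
}

# Techniques to prioritize: prevalent in the wild and early in the
# attack chain, e.g. T1566.001 Spearphishing Attachment and
# T1547.001 Registry Run Keys / Startup Folder.
priorities = ["T1566.001", "T1059.001", "T1547.001", "T1003.001"]

covered = set().union(*detections.values())
gaps = [t for t in priorities if t not in covered]
print(gaps)  # prioritized techniques with no detection coverage yet
```

Here only T1547.001 falls through, so that would be the next technique to research.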
  5. Use findings to explore detection options (2 - 3 minutes)
    • Do you have the means to detect with the logs/tools you have?
    • Define your idea(s)
    • Is it going to be better to break it out into well-defined chunks?
    • Is it going to be easy to analyze?
    • Environment search to determine if exclusions are necessary
  6. Example (9 - 10 minutes)
    • Choosing technique to dig into
    • Discover abuse mechanisms
    • Be able to answer:
      • What does malicious look like?
      • What threats are using this technique?
      • Is this feasible to write detection logic based on the logs I have?
    • Define the pieces of logic most valuable to detection
  7. Test and detect (9 - 10 minutes)
    • Introduction to Atomic Red Team
      • Open source
      • Tied to MITRE ATT&CK!
      • Minimal setup
    • Running the test
    • Identify:
      • Were you notified of the activity in ways you expected?
      • Were you able to quickly analyze the alert?
  8. Key Takeaways (1 - 2 minutes)
  9. Questions (5 - 10 minutes)
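The "Identify" step in Test and detect boils down to comparing the alerts you expected an Atomic Red Team test to trigger against what actually fired. A minimal sketch, with hypothetical alert names:

```python
# Alerts we expected our new logic to raise for the test, vs. what the
# alert queue actually shows afterward (both sets are hypothetical).
expected = {"office_spawns_powershell", "encoded_powershell"}
observed = {"encoded_powershell"}

missed = expected - observed        # a logic or telemetry gap to fix
unexpected = observed - expected    # coverage you didn't know you had
print("missed:", sorted(missed))
print("unexpected:", sorted(unexpected))
```

A miss points at either broken logic or missing telemetry; an unexpected hit is worth reviewing too, since it may be noise waiting to happen.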

Rachel is a Sales Engineer at Red Canary. Before moving to Sales Engineering, she spent over three years on the Detection Engineering team building out logic and hunting for threats in a wide variety of environments.