2026-05-07, IFEN room 1, Workshops and Detection Engineering village (Building D)
Many SOCs invest in powerful risk- and AI-based tools to generate and classify their alerts, hoping to "clear out the noise" and pinpoint actual "value" in the massive amounts of data they collect. It is no secret that nowadays we collect more data in our SIEMs than we would ever have thought possible decades ago, most of which has no real operational relevance. Some even say "the SOC is dead" because this model isn't humanly bearable. Others offer flashy magic wands that promise to solve all these issues in a painless plug-and-play way, while magically reducing cost (or not).
What's the solution, then? Agentic AI? Data lakes? Cloud-first? All valuable approaches, but there is something we can also do upstream: on top of trying to clean a dirty river, decrease the pollution at its source.
This approach also mitigates a lesser-known yet very serious risk: unknown unknowns in data collection. Just as alert fatigue correlates with False Positive figures and ratios, most cybersecurity departments focus on the unsustainability of telemetry volumes and forget about False Negatives, i.e. the useful logs you should be collecting but don't know you're missing. Caring for your car's longevity and performance also means not assuming that any fuel will do and hoping for the best.
Our solution: Governance and Data Quality. It's no coincidence that NIST recently added "Govern" as a new function in its CSF. The "Identify" function gives you "informed" decisions, but it's "Govern" that adds the "deliberate" element: what to collect, why, and whether it's enough. Having no logging data-compliance framework, or having one that doesn't take business value into account (e.g. BIA, crown jewels, investments), ultimately means building Security Monitoring on sand, or focusing on scopes so narrow that only security benefits from them; this fuels a "working in silos" approach and goes against the "holistic observability" and "management buy-in" elements.
Why this talk
How many times have you been asked to onboard logs into a SIEM just by "opening the flows", without any validation? Or to develop alerts on already-provided logs without questioning them? Has a PenTest or Red Team exercise ever highlighted that you had no visibility (let alone alerting) over certain actions, despite "having the logs"? Have you ever seen a truncated log, or one coming from the future? Or a logout without its preceding login?
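Anomalies like these are cheap to check for upstream. The following is a minimal, illustrative sketch (the event shape and field names are our own invention, not a product API) of two such data-quality checks: events timestamped "in the future" and logouts with no preceding login.

```python
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)

# Illustrative events; in practice these would come from your pipeline.
events = [
    {"ts": now + timedelta(hours=2), "user": "alice", "action": "login"},
    {"ts": now - timedelta(minutes=5), "user": "bob", "action": "logout"},
]

# Check 1: events "from the future" (allowing a small clock-skew margin).
from_the_future = [e for e in events if e["ts"] > now + timedelta(minutes=1)]

# Check 2: logouts with no preceding login for the same user.
logged_in = set()
orphan_logouts = []
for e in sorted(events, key=lambda e: e["ts"]):
    if e["action"] == "login":
        logged_in.add(e["user"])
    elif e["action"] == "logout" and e["user"] not in logged_in:
        orphan_logouts.append(e["user"])

print(len(from_the_future), orphan_logouts)
```

Both checks flag pipeline defects (clock skew, dropped events) rather than attacks, which is exactly why they belong to Logs Management and not to detection content.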
Nowadays there is no gold standard for baseline or maturity assessments of log collection and coverage, except for a few governmental exceptions (e.g. OMB M-21-31) or highly prescriptive yes/no audit-level compliance frameworks that lack the granularity needed to "plug" logging and detection/analysis together seamlessly (e.g. the NIST SP 800-53 AU family). The same is true from the developers' "Security by Design" perspective, where best practices exist for narrow scopes but may never actually be enforced (e.g. the OWASP Logging Cheat Sheet).
Historically, "security" has often been treated as an elite craft and a compliance checkbox - fertile ground for buzzwords and "magic wand" tooling narratives. In our experience, every time the solution is "just a new tool", an analyst dies (joke intended; right?). "Magic wands" do not exist. A tool can help, but it cannot replace understanding: normal vs. corner cases, environment constraints, and informed decision-making context.
This matters because the industry repeatedly shows that SIEM programs are fragile in practice: expensive ingestion volumes, yet broken detections, missing fields, parsing issues, and alert overload.
Our thesis: "shift-left" inside the SOC
Instead of starting from "alerts" and hoping SOAR + AI/LLMs will fix the rest (sometimes scaling more confusion than value), we shift left by making upstream telemetry complete, useful, and normalized - the foundation of reliable detection engineering. We do so by enforcing a "Compliance Data Model" that is both the output of SIEM engineers and the input for Detection Engineers: a SIEM-vendor-agnostic meeting point on which to build Use Cases, even when you don't have the logs (yet).
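As a minimal sketch of what such a model could look like (the class, field names, and values below are hypothetical illustrations, not a published standard), a vendor-agnostic Compliance Data Model entry can be as simple as a typed record that both teams read and write:

```python
from dataclasses import dataclass

# Hypothetical Compliance Data Model entry. SIEM engineers produce these
# records; Detection Engineers consume them - even for sources that are
# not onboarded yet, so Use Cases can be drafted ahead of the telemetry.
@dataclass
class LogSourceRequirement:
    source: str                  # e.g. "windows_security" (illustrative)
    owner: str                   # who is accountable for the feed
    required_fields: list        # fields Detection Engineers can rely on
    retention_days: int          # driven by policy (e.g. regulatory tiers)
    onboarded: bool = False      # False = Use Case drafted, not deployable

cdm = [
    LogSourceRequirement("windows_security", "AD team",
                         ["user", "src_ip", "event_id"], 365),
    LogSourceRequirement("vpn_gateway", "network team",
                         ["user", "src_ip", "action"], 180, onboarded=True),
]

# Sources with Use Cases still waiting on telemetry.
missing = [r.source for r in cdm if not r.onboarded]
print(missing)
```

Because the record is plain data, it survives a SIEM migration unchanged: only the mapping from `source` to a concrete ingestion pipeline is vendor-specific.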
We will deep-dive into:
• Logs Management as a discipline / requirement: end-to-end process of collecting, storing, processing/normalizing, validating, and monitoring log data, ultimately making sure "it represents reality" - as opposed to the common "hydrant approach" of indiscriminately turning on a firehose of logs and assuming the job is done (e.g. "I’ve opened the flow. Are you getting some logs now? Yes? Great, we’re done").
• Security Monitoring as a practice that is highly dependent on Logs Management, whether in its automated form (Use Case Management, UBA, etc.) or its manual one ("free-dive" or hypothesis-based Threat Hunting, etc.), regardless of the framework you may be using (e.g. OpenTide, MITRE, FI-ISAC NL MaGMa).
• Visibility Depth vs. Width: many environments feel "well integrated and monitored" simply because one type of log is collected from all hosts; but when you lay out a matrix of which other logs are collected from where, and whether they're normalized, a clear "wide-but-shallow" picture emerges, and suddenly nobody agrees what "critical app alerting" means without the app owners at the table.
• Bridging the gap - Log Schema vs. Policy: Deciding what to log (a logging policy) is just as important as how to structure it (a data schema / taxonomy). Many teams adopt common schemas like Splunk CIM, OCSF, Elastic ECS, Microsoft ASIM, etc. to normalize fields, which is important and ensures consistency, but they cannot be used alone to audit visibility gaps. If you never send a particular log type to your SIEM, the schema won’t complain, and even if you count the number of success/failures or logs with "username" or other fields, the Logging Policy (and thus upstream checks) is still needed to set expectations and understand what is normal vs. anomalous.
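To make the "schema won't complain" point concrete, here is a toy sketch of the upstream check a Logging Policy enables and schema normalization alone cannot (the tiers, log-type names, and the policy itself are invented for illustration):

```python
# Hypothetical logging policy: per host tier, which log types we EXPECT.
# A schema (CIM/OCSF/ECS/ASIM) normalizes whatever arrives; only a
# policy can flag what never arrived at all.
policy = {
    "domain_controller": {"windows_security", "dns_query", "sysmon"},
    "web_server":        {"access_log", "error_log", "windows_security"},
}

# What the SIEM actually ingested per host tier (illustrative figures).
ingested = {
    "domain_controller": {"windows_security"},
    "web_server":        {"access_log", "windows_security"},
}

# Expected minus observed = the False Negatives nobody is alerting on.
gaps = {
    tier: sorted(expected - ingested.get(tier, set()))
    for tier, expected in policy.items()
}
for tier, missing in gaps.items():
    if missing:
        print(f"{tier}: missing {missing}")
```

Note that every ingested log here may parse perfectly against the schema; the gap only becomes visible when compared against an explicit statement of expectations.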
Useful resources for companies to draft their own Logging Policy are:
➤ Prescriptive Standards: OMB M-21-31 (U.S. federal logging requirements, which explicitly list the log categories and retention periods agencies must implement for each security tier), NIST SP 800-53 (Audit and Accountability controls, which mandate specific events that systems must log as a baseline), and the CIS Critical Security Controls (especially Safeguard 8.2, enumerating essential logs to collect to support security monitoring).
➤ Threat-Informed Frameworks: MITRE ATT&CK provides a high-level matrix of the data sources needed to detect various adversary techniques, and the open-source DeTT&CT framework can help score your log coverage against it. Even SIGMA rules include a "logsource" definition as a requirement, albeit a very high-level one. CTI-based frameworks such as Dragos' Collection Management Framework (CMF) take a similar approach. If you have an attack-range lab, more technical resources from PenTesters/Red Teamers can be leveraged, like Atomic Red Team: test techniques and adjust log verbosity until the meaningful activity is actually logged.
➤ Application Layer Logs: Logging isn’t just an IT operations concern; it starts with developers. We reference the OWASP Logging Cheat Sheet (and similar app-security guidance) which outlines what security-relevant events applications should generate - for example, input validation failures, authentication successes/failures, and access control violations. This highlights that effective logging requires collaboration between the Security/SOC and development teams (not just red&blue teams).
➤ Business Context: the above compliance standards and threat frameworks are inherently generic. They assume all servers, applications, and data are equally important, or they focus solely on the likelihood of an attack. What they completely miss is Business Impact (e.g. BIA - Business Impact Analysis, FAIR - Factor Analysis of Information Risk) - which is the exact language the Board of Directors (BoD) speaks. Each organization should craft a Logging Policy/Framework tailored to its unique context, considering its business model, "crown jewel" assets, regulatory requirements, and mix of IT vs. OT systems. For example, onboarding and normalizing upfront the logs that grant visibility over a big project enables Exploratory Data Analysis (EDA) and can even surface issues or misconfigurations before they cause harm, bringing unexpected added value/ROI to top management and ultimately earning stronger internal mandates and budgets (e.g. "We noticed 40% of users are dropping off at this specific transaction point because of a backend timeout, impacting revenue" or "A misconfiguration is causing the app to query the database 50 times per second per user, increasing API costs"). Bringing such findings to management means transitioning Security from a "cost center" to a "business enabler", providing QA and operational intelligence, not just blocking hackers.
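A toy version of the coverage scoring that tools like DeTT&CT do properly can illustrate how the threat-informed resources above combine with your collection inventory (the technique-to-data-source mapping below is invented for illustration, not authoritative ATT&CK data):

```python
# Illustrative mapping: which data sources each technique needs to be
# observable at all. Real mappings come from ATT&CK / DeTT&CT.
technique_sources = {
    "T1059 Command and Scripting Interpreter": {"process_creation"},
    "T1078 Valid Accounts": {"authentication_logs"},
    "T1071 Application Layer Protocol": {"network_traffic", "dns_query"},
}

# What this (hypothetical) environment actually collects.
collected = {"process_creation", "authentication_logs"}

# Fully observable: all required sources are collected.
visible = [t for t, src in technique_sources.items() if src <= collected]
# Blind spots: none of the required sources are collected.
blind = [t for t, src in technique_sources.items() if not (src & collected)]

print(f"observable: {len(visible)}/{len(technique_sources)} techniques")
print("blind spots:", blind)
```

The same inventory can then be re-weighted by business impact (the BIA/FAIR angle above), so a blind spot on a crown-jewel system outranks one on a test server.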
Disclaimer
We acknowledge that not every organization can overhaul its logging overnight - real-world constraints exist. The session emphasizes incremental improvement and trade-offs, helping each attendee identify a few high-impact "logging wins" they can pursue back at work. We’re not promising a silver bullet (that would go against the entire premise!); instead, attendees will leave with fresh perspectives and actionable frameworks to gradually turn their own "Ferrari" into a well-fueled security machine.
- OWASP Logging Cheat Sheet
- NIST SP 800-53, Revision 5 - AU: Audit and Accountability - Logging Compliance
- NSA's Best Practices for Event Logging & Threat Detection
- CISA / OMB M-21-31 Logging Compliance
- Dragos' Collection Management Framework (CMF) - methodology for prioritizing and managing information sources in cyber threat intelligence
- GIGO - Garbage IN, Garbage OUT
- Elastic.co on OMB M-21-31 Logging Compliance
- Florian Roth on wide-but-shallow visibility
- SOC is Dead - AI is rewriting CyberSecurity (in Italian)
- SANS Hybrid Data Collection Strategy
SOC Team Leader and hard worker, with a decade of experience across ISP, MSSP, and internal SOC environments.
SANS/GIAC GSOM Certified
Elliot is a cyber threat intelligence consultant at AmeXio. He is from New Zealand with a background in Financial Services, Technology Services and Government organisations. His expertise is in threat intelligence, threat hunting, reverse engineering, malware analysis, and incident response.