Chris Beckman
Chris Beckman is a Principal Security Engineer at TaxBit with expertise in AI security and security architecture across both emerging technology startups and public companies. His work focuses on practical security decision-making in real-world systems. Outside of work, he enjoys photography.
Session
Early in my career, I was a junior engineer at an emerging AI startup in Seattle, Washington, USA, during the first wave of commercial AI adoption, when our company suddenly became the target of an extreme and highly disruptive phishing campaign. Shortly after we received public attention as a “hot startup,” phishing volume surged to the point that it flooded employee mailboxes and interfered with normal operations. The messages were convincing enough that at one point an employee ran through the office claiming that our CEO was stranded at an airport and urgently needed financial help.
What initially felt like an uncontrollable background problem became a significant security and operational risk. Rather than accepting it as inevitable, we began analyzing the phishing emails in detail, treating them as data rather than noise. By correlating sender IP addresses and examining publicly available IP allocation and routing information, we discovered that although the emails appeared to originate from many different sources, the traffic consistently traced back to a small number of allocated IP blocks.
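The correlation step described above can be sketched in a few lines. This is a minimal illustration, not the actual tooling we used: the function name is invented, and the sample addresses are drawn from the reserved documentation ranges (203.0.113.0/24 and friends) rather than any real campaign data. The idea is simply that sender IPs which look diverse often collapse into a handful of networks once you group them by prefix.

```python
# Hypothetical sketch of the IP-correlation step: given sender IPs pulled
# from phishing email headers, group them by /24 network to see whether
# apparently diverse senders cluster into a few allocated blocks.
from collections import Counter
from ipaddress import ip_network

def cluster_senders(sender_ips, prefix=24):
    """Count how many messages fall into each /prefix network."""
    counts = Counter()
    for ip in sender_ips:
        # strict=False lets us derive the containing network from a host IP.
        net = ip_network(f"{ip}/{prefix}", strict=False)
        counts[str(net)] += 1
    return counts

# Example: six IPs that look unrelated but mostly share two /24 blocks
# (all addresses are documentation-range placeholders).
ips = ["203.0.113.10", "203.0.113.77", "203.0.113.200",
       "198.51.100.5", "198.51.100.42", "192.0.2.1"]
for net, n in cluster_senders(ips).most_common():
    print(net, n)
```

In a real investigation the grouping would be paired with WHOIS or regional-registry lookups on each resulting block, which is what eventually pointed us at the upstream operator.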
We mitigated the immediate risk by blocking those ranges at the email gateway, which dramatically reduced the volume of phishing. Digging further into the upstream infrastructure revealed that the IP space was associated with a data center in Luxembourg, operating email security and anti-spam systems. At the time, I was in the process of reclaiming my Luxembourg citizenship through ancestry on my mother's side, and the situation prompted a different line of thinking: if similar infrastructure under my supervision was being abused, I would want to know about it.
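The gateway-level block amounted to a membership test against a short list of CIDR ranges. A toy version of that check, again with invented names and placeholder documentation-range blocks rather than our real configuration, looks like this:

```python
# Illustrative only: test whether a sender IP falls inside one of the
# blocked CIDR ranges, mirroring the rule applied at the email gateway.
from ipaddress import ip_address, ip_network

# Placeholder blocks from the reserved documentation ranges.
BLOCKED_RANGES = [
    ip_network("203.0.113.0/24"),
    ip_network("198.51.100.0/24"),
]

def is_blocked(ip: str) -> bool:
    addr = ip_address(ip)
    return any(addr in net for net in BLOCKED_RANGES)

print(is_blocked("203.0.113.99"))  # True: inside a blocked /24
print(is_blocked("192.0.2.1"))     # False: outside both ranges
```

Real gateways express this as policy rules rather than code, but the effect is the same: a small number of ranges filtered out the bulk of the campaign.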
Instead of assuming malicious intent, we reached out directly to the infrastructure operator, shared sanitized examples of the phishing messages, and coordinated a responsible disclosure. Despite internal skepticism that this amounted to “talking to the attackers,” the response was professional, the issue was investigated, and the phishing activity largely stopped. We also filed a report with the regional internet registry.
Looking back, this incident shaped how I think about security problems that seem impossible or overwhelming. Not every issue is solved with more tooling or escalation. Sometimes, careful deduction paired with human communication and empathy can break deadlocks that technology alone cannot.