2024-08-07 – Florentine A
<RING, RING> 1999 called; it wants its computer security policy back.
As we arrive at the 25th anniversary of a successful Y2K response, we also arrive at the anniversary of the Melissa virus – a security event that cost an estimated $80 million. In the words of the FBI, Melissa "foreshadowed modern threats," but a quarter-century later, its core policy and legal security challenges remain unaddressed.
Security incidents now cause billions of dollars in financial losses and have potentially catastrophic impacts on public safety, national security, and critical infrastructure.
It's time to end the "Goldilocks era" of computer security policy. The 1990s beauty of the baud has now morphed into an unstable "company town" tech economy, too often powered by hype cycles and security "outages" and "glitches."
Drawing on original research into engineering catastrophes that resulted in loss of life, this talk explains how historical responses to safety shortfalls hold lessons for a more successful next quarter century of computer security.
By retelling the story of computer security using the language of safety – the traditional legal and policy lens for technologies that have the potential to kill or harm – our Wednesday keynote proposes four elements of a more successful future.
Specifically, this talk calls out the slipperiness in classifying insider attacks, the role of intent analysis in security contexts, and how "AI" exacerbates this slipperiness. It then retells the story of computer security using the language of safety – the traditional legal and policy lens used for technologies that have the potential to kill or physically harm people.
Presenting original research on more than 120 engineering catastrophes that resulted in loss of life, this talk explains how historical responses to safety shortfalls hold lessons for a more successful next quarter century of computer security. Using historical examples, it crystallizes the role of intent and knowledge in liability determinations and juxtaposes it with recent computer security enforcement. In brief, this talk demonstrates that the tech safety policy landscape – the linchpin of which is computer security – is currently out of step with the safety policy and law governing other major industries in our economy. And because almost every company is now functionally a tech company, this problem is cross-cutting. In all cases, the current security trajectory is unsustainable. Finally, this talk sets forth the four critical elements of a more successful future response through technology safety policy:
- A new federal technology safety regulator of last resort – the Bureau of Technology Safety (BoTS);
- Predictable liability determinations driven by context, harm, and intent;
- International computer security policy harmonization; and
- A set of formalized self-regulatory structures for professionals who hold themselves out as possessing security expertise, including one for a category of "Chief Technology Safety Officers."
Andrea M. Matwyshyn is an American law professor and engineering professor at The Pennsylvania State University. She is known as a scholar of technology policy, particularly as an expert at the intersection of law and computer security, and for her work with government.