Andrea M. Matwyshyn
Andrea M. Matwyshyn is an American law professor and engineering professor at The Pennsylvania State University. She is known as a scholar of technology policy, particularly as an expert at the intersection of law and computer security, and for her work with government.
Sessions
1999 called; it wants its computer security policy back.
As we arrive at the 25th anniversary of a successful Y2K response, we also arrive at the anniversary of the Melissa virus – a security event that cost an estimated $80 million. In the words of the FBI, Melissa “foreshadowed modern threats”, but a quarter-century later, its core policy and legal security challenges remain unaddressed.
Security incidents now cause billions of dollars in financial losses and have potentially catastrophic impacts on public safety, national security, and critical infrastructure.
It's time to end the “Goldilocks era” of computer security policy. The 1990s beauty of the baud has now morphed into an unstable “company town” tech economy, too often powered by hype cycles and security “outages” and “glitches.”
Through original research on engineering catastrophes that resulted in loss of life, this talk explains how historical responses to safety shortfalls hold lessons for a more successful next quarter century of computer security.
By retelling the story of computer security using the language of safety – the traditional legal and policy lens for technologies that have the potential to kill or harm – our Wednesday keynote proposes four elements of a more successful future.
We do not live in the best of all possible worlds. Effectively considering the future of AI, software safety, and security risk starts with building a shared language – one that is understandable to both the security community and policymakers. Professor Matwyshyn will guide attendees through a series of definitions, then begin a session called “Difficult Conversations,” where we will unpack some of the tough policy and legal questions that have historically presented obstacles to meaningful improvements in security. What is “safety” in the context of software? What is resilience? Which software-reliant systems are safety-critical from the perspective of users (and who is responsible for their maintenance)? How should we evolve our approach when failures in digital systems bring real-world harm? How do we create more robust structures of accountability?