OFA Symposium 2025: Open Technology Impact in Uncertain Times

Benedict Kingsbury

Benedict Kingsbury’s broad, theoretically grounded approach to international law closely integrates work in legal theory, political theory, and history. He delivered the Lauterpacht Lectures at Cambridge in November-December 2022 on International Law Futures, drawing from his current research on planetary and space law & governance issues, infrastructures as regulation, and global data law & AI. With the late Richard Stewart, he helped pioneer the field of Global Administrative Law; their most recent joint books are on Megaregulation (the TPP, 2019) and Global Hybrid and Private Governance (2023).

Kingsbury has directed the Law School’s Institute for International Law and Justice since its founding in 2002, and is the Faculty Director of the Guarini Institute for Global Legal Studies and its Global Law & Tech initiative, launched in 2018. He and NYU Professor José Alvarez served as editors-in-chief of the century-old American Journal of International Law from 2013 to 2018. Kingsbury has written on a wide range of international law topics, from indigenous peoples’ issues to interstate & investor-state arbitration, indicators & rankings, infrastructure, and genetic sequence data.

After completing his LLB with first-class honors at the University of Canterbury in New Zealand in 1981, Kingsbury was a Rhodes Scholar at Balliol College, Oxford. In 1984, he graduated at the top of his class in the MPhil program in international relations at Oxford. He subsequently completed a DPhil in law at Oxford and has taught at Oxford, Duke, Harvard, University of Tokyo, University of Paris 1, and University of Utah.


Session

18/11
15:20
30min
Legal Regulation of Open Artifacts: The Data–Software–AI Model Convergence
Marco Germanò, Benedict Kingsbury

Over the past few decades, open artifacts have flourished across distinct yet interrelated domains: open-source software (OSS), open data, open standards, and now open AI models. Each developed within its own cultures and communities, producing divergent governance approaches. Technical and business factors are increasingly melding these areas into partly fused communities of practice. These artifacts now coexist within layered digital infrastructures in which each element reinforces and depends on the others: OSS powers data pipelines; open datasets train AI models; open models store and produce data and shape software development. Innovations and vulnerabilities propagate across these layers, and interventions in one domain ripple through the others.

This paper examines how regulatory framings and legal requirements are consolidating these recursive infrastructures and drawing these domains under a shared governance optic. Regulatory pressure comes from both public and private actors. State-led interventions, often driven by national security and geopolitical priorities, are expanding the reach of cybersecurity, liability, and transparency obligations across software, data, and AI systems; examples include the EU’s Cyber Resilience Act (CRA) and AI Act, and the U.S. National Cybersecurity Strategy (NCS). Institutional and private-sector initiatives are also reshaping legal architectures: the World Bank’s licensing framework for OSS formalizes software release as part of broader data governance, while the Open Source Initiative’s (OSI) Open Source AI (OSAI) definition seeks to stabilize what “openness” means when applied to models.

The paper argues that these developments are not merely cumulative but indicative of a deeper reconfiguration in how open artifacts are articulated. Communities once grounded in distinct rationales for openness now confront shared compliance environments and overlapping regulatory demands. Should they embrace this trajectory or preserve differentiation? Responses so far have been piecemeal.
We argue this flux calls for a collective and systematic rethinking of how openness is defined and contested. Silo-specific governance mechanisms may prove unsuitable for managing cross-domain claims and interdependencies.

Open Source and AI
Main Room