Decentralizing AI Governance: A Community-Driven Alternative to Global Regulatory Fragmentation

In today’s rapidly evolving AI landscape, governance is increasingly shaped by geopolitical rivalries and institutional power plays. The world’s major powers are developing divergent regulatory frameworks, fragmenting global AI governance. This talk examines that fragmentation, analyzes its implications, and proposes community-driven alternatives that prioritize democratic participation and the public interest.

The three dominant AI governance models reflect distinctly different priorities and approaches. The United States has adopted a market-led approach with light-touch regulation that emphasizes competitiveness and voluntary industry commitments. The European Union promotes a rights-based, precautionary approach grounded in democratic values. China's state-directed model balances national priorities with stability while aligning with socialist values.

This talk will provide a comparative analysis of US, Chinese, and European AI governance
regimes, drawing on leading academic and policy literature to examine their philosophical
underpinnings, structural characteristics, and geopolitical implications. We will then critique these dominant models through the lens of power asymmetries, regulatory capture, and technocratic bias, highlighting how affected communities are routinely excluded from governance processes.

In response, we will advocate for a decentralized, community-driven model of AI governance drawing on existing proposals. Several promising alternatives demonstrate how more participatory governance frameworks might function. Barcelona's Municipal AI Strategy exemplifies city-level governance that emphasizes technological sovereignty and meaningful citizen oversight of algorithmic systems. Harvard Law School's research on Co-Governance frameworks illustrates how regulatory authority can be effectively shared among government, industry, civil society, and affected communities. Recent academic work, including "Beyond Participatory AI" (AAAI/ACM) and "Global AI governance" (Oxford Academic), provides concrete mechanisms for substantive community involvement throughout the AI lifecycle and offers insights into scaling these approaches internationally while respecting contextual differences.

This session aims to move beyond a binary view of AI governance as either state-led or market-led. Instead, it will explore what a truly inclusive, globally responsive governance framework might look like: one where community voices shape the design, deployment, and regulation of AI systems.

Please note that this session room has limited capacity, and attendance will be accommodated on a first-come, first-served basis.

Kartikeya Srivastava

Kartikeya Srivastava is a Data Scientist and Data Governance Specialist with extensive experience operationalizing compliance frameworks for AI-enabled systems in highly regulated environments, including PCAOB-governed financial auditing and public healthcare. He currently works as an Analyst at SA Health, where he advances clinical data governance frameworks and ethical analytics practices to enable privacy-preserving innovation in public healthcare. Previously, at Deloitte, he developed AI accountability mechanisms in financial auditing by establishing governance documentation and embedding regulatory controls into machine learning systems. Kartikeya is currently pursuing a Master’s in Artificial Intelligence and Machine Learning at the University of Adelaide.

Cathy Zhang