Mozilla Festival; Social Moment
Kick off Saturday night at MozFest with an easy-going evening of music, good vibes, and time to mix with fellow festival-goers.
Headliner: B Jones
Fresh off DJ Mag’s Top 100 (#94), B Jones made history as the first Spanish artist to play Tomorrowland’s Main Stage, across three editions since 2022. With releases on Tomorrowland Music, Spinnin’, and Dim Mak—plus collabs with Steve Aoki, Alok and NERVO—she’s bringing big-festival energy to our dancefloor.
Opening set: DJ Boom Boom Boom
A human-rights technologist and DJ based in Brooklyn, New York, Sarah sets the tone with a genre-hopping, feel-good opener.
Good to know
Join us from 19:30. Bars and food trucks will be open all night for food and beverages!
See you on the floor! 💃🕺
Ingrid LaFleur; Lab
How might our technologies evolve if their design principles were rooted in ancestral wisdom, communal care, and ecological balance? This Afrofuture World-build invites participants to imagine radical alternatives to today’s dominant tech paradigms by stepping into speculative, culturally grounded futures shaped by Black and African diasporic traditions.
In this facilitated session, we will challenge the “one-size-fits-all” mentality embedded in today’s design ecosystems, where efficiency often trumps equity, and abstraction erases cultural nuance. Drawing from the Dinkinesh Method, a futures-thinking framework developed by curator and Afrofuturist Ingrid LaFleur, this interactive workshop uses storytelling, system mapping, and collective dreaming to envision worlds where technology is not only inclusive but also liberatory.
We will begin by unearthing the default assumptions embedded in today’s platforms—such as surveillance-as-safety, speed-as-value, or neutrality-as-design. Then, participants will collaboratively world-build a speculative tech society where design is informed by intergenerational knowledge, spiritual sovereignty, and pluralistic cultural values. Together, we’ll co-create prototypes of systems, platforms, or rituals that embody alternate pathways: What does data stewardship look like when grounded in kinship networks? How might algorithms evolve if guided by principles of harmony, not extraction? Can machine intelligence be designed to support spiritual and emotional well-being?
This world-build is not about optimizing current systems—it’s about imagining what becomes possible when we center the sacred, the ancestral, and the collective in our design. Our goal is to make visible the hidden cultural scaffolding of dominant tech norms and illuminate futures where technology serves as a tool for restoration, joy, and sovereignty.
Open to technologists, artists, designers, organizers, and anyone curious about justice-centered futures, this Afrofuture World-build will leave participants with new design imaginaries, tangible prompts for decolonizing their practice, and the inspiration to create tech aligned with liberation.
Because the futures we envision today shape the technologies we inherit tomorrow.
Paz Peña; Forum
The socio-environmental impacts of AI data centers are increasingly gaining ground in public opinion on every continent. Although they are not the only impacts, massive energy and freshwater consumption are two of the most worrying consequences. In a space where public policy is still unclear, the communities that suffer these impacts have been key worldwide in questioning these projects.
This session is a space to discuss the options available to communities when an AI data center announces plans to set up in their territory, and thus to understand the diverse socio-environmental problems of data centers and the complexities of local and national public policies on the issue. By understanding these complexities, we can imagine new forms of public technology policy.
This session will be participatory. It will begin with a 15-minute presentation of Paz Peña's research on AI data centers in Latin America. Based on that, attendees will discuss different topics in groups, such as transparency and access to information, the power of big tech in the public policy imagination, and socio-environmental principles that should govern AI policies.
Please note that this session room has limited capacity, and attendance will be accommodated on a first-come, first-served basis.
Aksana; Installation
Bitcoin Threads is a hand-embroidered art installation that weaves the story of Bitcoin into textile form. Each piece is carefully stitched using traditional embroidery techniques to represent the resilience, decentralization, and beauty of Bitcoin culture.
Bringing the craft of cross-stitch embroidery into the digital age, 5Ksana transforms colorful threads into symbols of financial freedom and individual empowerment. Her work connects old-world craftsmanship with the modern revolution of digital money, proving that art, like Bitcoin, transcends borders and time.
Arpit Tandon; Talk
The pursuit of shareholder value has been capitalism's north star for decades. Yet this singular focus has led to concerning outcomes: declining labor share of income, widening economic inequality, and growing public disillusionment with market economies.
In this thought-provoking session, I'll challenge conventional wisdom by examining how traditional profit-maximization models have transformed open platforms like the internet into concentrated markets dominated by a handful of powerful players.
Drawing on my experience at the intersection of business innovation and emerging technologies, I'll identify the critical shortcomings of shareholder-centric approaches:
• Compensation structures that fail to distribute value fairly
• Tax frameworks that incentivize extraction over reinvestment
• Short-termism that sacrifices sustainable growth for quarterly results
• Power consolidation that stifles competition and innovation
But critique alone doesn't drive change. The heart of this talk showcases pioneering alternatives emerging across high-growth sectors including AI, Web3, and climate tech. Through concrete case studies, attendees will discover how forward-thinking companies are successfully implementing:
• Community-driven governance models
• Transparent value distribution mechanisms
• Shared ownership structures
• Equitable compensation frameworks that align all stakeholders
Participants will leave with actionable insights and practical frameworks they can immediately apply to reshape their own organizations' approaches to value creation and distribution.
Patrice Caire; Talk
How can we build AI and robots that reflect diverse ways of thinking, relating, and creating—rather than replicating dominant systems of control and exclusion?
In this interactive session, AI researcher and artist Dr. Patrice Caire invites participants into a critical and playful exploration of feminist approaches to technology. Drawing on her interdisciplinary projects—such as Cooperatives Robots, which stages surreal encounters between humans, drones and humanoids—and her research on social robots in public spaces, Dr. Caire challenges the default assumptions built into AI and robotic design.
Using live examples and stories, Dr. Caire will unpack how social machines are shaped by cultural narratives, and how artistic experimentation can reveal and rewire the values embedded in technological systems.
To bring these ideas to life, the session includes participatory activities, such as:
• “Design a Robot Persona”: Small groups quickly sketch a robot character based on values like care, resistance, or ambiguity. How does this shift their expectations of what a robot can be or do?
• “Whose Voice Is That?”: A quick-fire guessing game based on robot voice samples—inviting reflection on bias, authority, and cultural assumptions embedded in vocal interfaces.
These playful moments are designed to surface critical questions: Who gets to define intelligence? What is erased in the pursuit of "neutral" design? How can creative practices help us imagine technologies that center on complexity, difference, and accountability?
Bio (Short): Dr. Patrice Caire (PhD in Computer Science, AI) is an artist-scientist bridging technology and creative expression. As a researcher specializing in AI and social robotics, she has published over 50 scientific papers. Her multimedia installations have been exhibited at the Brooklyn Museum in New York, Luxembourg's Museum of Modern Art, and the Center for the Arts in San Francisco. This dual expertise allows her to approach AI development from both rigorous scientific and deeply human creative perspectives. Dr. Caire's work demonstrates that we can, and should, rethink AI and technology to create alternatives to harmful tech systems, making her a leading voice in the development of ethical and feminist AI. (More at https://patricecaire.com)
This session welcomes artists, scientists, technologists, educators, activists, and anyone curious about rethinking AI, tech and the machines we live with.
Mari Zumbro, Jad Esber, Mohamed Nanabhay, Tessa Brown; Talk
In this panel, companies from Mozilla Ventures “Healthy Communities” thesis areas reflect on lessons learned in building social tooling around data sovereignty and trust and safety. Mozilla Ventures’ General Partner Mohamed Nanabhay moderates a conversation between Koodos’s Jad Esber, Germ Network’s Tessa Brown, and Filament’s Tony Haile on building consumer-first products that balance data sovereignty, community, and public trust.
Nina Ajnira Karisik; Installation
At MozFest, Nina Ajnira Karisik invites you into a spring-like oasis alive with robotic butterflies powered by AI. These delicate creatures listen for meaning in your words; each time a butterfly flaps, you’ll find yourself guessing what the AI just recognized.
It’s a mystery game, a technical demo, and a meditation on how AI systems “think”, and how often they flatten or overlook the nuance that makes us human. Set within this living landscape, the butterflies embody a quiet struggle between the positive and negative forces of technology. Each flicker and flutter is both playful and profound, inviting you to wonder: which side will your words help win?
Suba Vasudevan, Katie Eyton, Marty Swant; Talk
As digital platforms continue to monetize attention, targeted advertising has become both a cornerstone of the internet economy and a flashpoint for public concern. Can we build an internet where advertising respects user autonomy, promotes safety, and earns trust? Or is the very model fundamentally flawed?
Claire Pershan; Forum
The EU has been quietly negotiating a proposed law that would force messaging services to scan private chats and risk breaking end-to-end encryption.
Privacy defenders and activists have been successful in preventing the worst, at least for now, but EU countries are still pushing the issue forward.
This urgent file has not gotten the attention it deserves, in particular outside of Europe.
That's why we'll hold space for informal discussion with privacy experts and media representatives about the EU "Chat Control" file and how it risks weakening encryption.
We will take advantage of the international journalists present at the festival to raise awareness about this concerning regulatory proposal in the EU that could have global implications.
We will discuss the technical challenges in this file, the state of play, and how we can advocate and organize together.
Daniel Harris, Gordon F B Johnson; Forum
This session will lay out an actionable plan for a post-capitalist economic system in which we all thrive, with more effective ways to participate economically so that we all gain more leisure and comfort, and less inequality and stress.
Humanity has at its command enough technology and resources that we could all live really well, not just marginally survive. All of us.
Ending starvation, poverty, homelessness, providing healthcare - all worthwhile, but far short of what we can achieve by truly, universally, and effectively aligning our work, knowledge and resources.
Capitalists have cowed us with dogma, fear and greed, steeped in an environment where we are induced to believe everything is zero-sum, dog-eat-dog competition, and "they" decide what to provide with only lip service to what we want - jobs, goods and services, housing, food, interest rates. (And by the power of the wealth they take from all the rest of us, they control the making of laws, regulations and policies, such that what ought to be illegal and not done, is allowed to be standard practice.)
We'll look at a model that turns all the norms on their head to achieve a way to interact by which we can prioritize the environment and ALL people, so we all win.
• Harnessing energy should free us from work – not put us out of our jobs so we have to fight each other while groveling for the next one.
• Mass production and tech should make things more available, cheaper, and at the same time more customized – not produce shortages of what we want that jack up prices, or overages that get wastefully thrown out.
• Communication should facilitate producing nearly exactly the right amount of everything – not leave us victims of the business cycle (the boom-and-bust that execs and owners point to in rationalizing their obscene inequality).
• Exceptional ideas upvoted to the top get facilitated, making everyone's life better – not unsupported and ignored, and not scooped up by megalithic corporations either to milk for highest profit or to shelve so they don't have to out-compete the new ideas.
https://discordapp.com/channels/909777704432324618/1420372101977604097
Please note that this session room has limited capacity, and attendance will be accommodated on a first-come, first-served basis.
Tazin Khan; Talk
In a world where 80% of cybersecurity breaches stem from human vulnerabilities—phishing, manipulation, and social engineering—our traditional frameworks are failing us. Built on Cold War logic and enterprise compliance, today’s cybersecurity systems are designed to protect infrastructure, not people.
This talk presents a new paradigm: Digital Resilience.
Drawing from original research and interviews with trauma-informed care specialists, cybersecurity leaders, educators, and gender justice advocates, I’ll introduce the Digital Resilience Framework (DRF)—a justice-centered, trauma-informed model rooted in the RISE pillars: Resilience, Inclusion, Safety, and Empowerment.
This session will explore:
• Why awareness training rooted in fear, jargon, or shame doesn’t work
• How digital safety education can be emotionally resonant and culturally grounded
• What it means to co-create security with the people most harmed by digital systems—rather than designing for compliance
From community workshops with immigrant parents to trauma-informed digital literacy for youth, I’ll share insights from Cyber Collective’s fieldwork and my thesis research on how we can build frameworks that are accessible, flexible, and human-first.
The talk concludes with a facilitated discussion exploring practical strategies for designing safety training, awareness programs, and policy that center care over control—turning digital security into a site of healing, not just hardware defense.
This session is for educators, designers, technologists, community organizers, and funders who want to reimagine what safety looks like in the digital age—and for whom.
Jamile Santana; Talk
What does it feel like to be reduced to a row in a spreadsheet?
In this interactive forum, participants will become a “living database” — physically enacting how data categorizes, simplifies, and often erases the complexity of real lives.
We’ll begin with a silent movement exercise: each person receives a random set of category labels (race, gender, education level, internet access, etc.) and is asked to position themselves in a grid on the floor. Through a series of regroupings and card-swaps, we’ll explore what happens when data doesn’t match lived experience.
Participants will then reflect on how it felt to embody inaccurate data points or lose visibility within imposed categories. We’ll invite open discussion: What was missing? Which labels felt false or violent? Who got left out?
To close, each person will propose a new data field — something they wish a system could know about them (e.g. “daily fears”, “joy triggers”, “community support”) — and contribute it to a collaborative wall imagining a new, more humane database.
This session uses no screens, tools, or devices — just our bodies, stories, and shared imagination. It’s an invitation to unlearn the default logic of datafication and dream up relational, care-centered approaches to data and technology.
This activity is ideal for people working with digital rights, governance, AI ethics, journalism, research, or activism who are rethinking how data impacts power and visibility.
Paul Aguilar, Linda Fernandez; Talk
Do you know how many trackers are in the apps you use every day? Why does a government app require access to your location 24/7? Why would you install stalkerware on your children's devices?
Datávoros is a project that investigates the voracious collection of data through mobile applications developed by governments and companies. We run tests and technical analyses to provide insights on the main security and privacy flaws in mobile applications.
This session is designed for beginners interested in mobile apps auditing, digital privacy, transparency, and accountability.
We will present the methodology used by Datávoros to analyze mobile applications, including network analysis, identification of permissions and trackers, and evaluation of security and privacy measures. Through concrete examples, we will show how these analyses have been conducted on citizen security apps, monitoring apps, dating apps, and parental control apps.
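To give a flavor of the permission-identification step, here is a minimal sketch, not the actual Datávoros toolchain, of how an audit might triage the permissions an Android app declares against an illustrative list of sensitive ones. The permission names are real Android identifiers, but the list and function are hypothetical simplifications for teaching purposes.

```python
# Minimal sketch (not the Datávoros toolchain): triage an app's declared
# Android permissions against a short, illustrative "sensitive" list.
SENSITIVE = {
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.READ_CONTACTS",
    "android.permission.RECORD_AUDIO",
    "android.permission.READ_SMS",
}

def flag_sensitive(declared):
    """Return the declared permissions that appear in the sensitive list."""
    return sorted(set(declared) & SENSITIVE)

if __name__ == "__main__":
    # Hypothetical manifest extract from an app under review.
    declared = [
        "android.permission.INTERNET",
        "android.permission.ACCESS_FINE_LOCATION",
        "android.permission.READ_CONTACTS",
    ]
    print(flag_sensitive(declared))
    # ['android.permission.ACCESS_FINE_LOCATION', 'android.permission.READ_CONTACTS']
```

In a real audit the declared list would be extracted from the APK manifest with a tool such as androguard or aapt, and flagged permissions would then be weighed against the app's stated purpose.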
In addition, we will discuss how the evidence generated by these analyses can be used by activists, journalists, and civil society organizations to demand better practices in personal data protection.
We will invite participants to learn about, contribute to, and provide feedback on our methodology.
Kartikeya Srivastava, Cathy Zhang; Talk
In today’s rapidly evolving AI landscape, governance is increasingly shaped by geopolitical rivalries and institutional power plays. The world's major powers are developing divergent regulatory frameworks, leading to a fragmented global landscape of AI governance. This talk examines this fragmentation in AI governance, analyzes its implications, and proposes community-driven alternatives that prioritize democratic participation and public interest.
The three dominant AI governance models reflect distinctly different priorities and approaches. The United States has adopted a market-led approach with light-touch regulation that emphasizes competitiveness and voluntary industry commitments. The European Union promotes a rights-based, precautionary approach grounded in democratic values. China's state-directed model balances national priorities with stability while aligning with socialist values.
This talk will provide a comparative analysis of US, Chinese, and European AI governance regimes, drawing on leading academic and policy literature to examine their philosophical underpinnings, structural characteristics, and geopolitical implications. We will then critique these dominant models through the lens of power asymmetries, regulatory capture, and technocratic bias, highlighting how affected communities are routinely excluded from governance processes.
In response, we will advocate for a decentralized, community-driven model of AI governance drawing on existing proposals. Several promising alternatives demonstrate how more participatory governance frameworks might function. Barcelona's Municipal AI Strategy exemplifies city-level governance that emphasizes technological sovereignty and meaningful citizen oversight of algorithmic systems. Harvard Law School's research on Co-Governance frameworks illustrates how regulatory authority can be effectively shared among government, industry, civil society, and affected communities. Recent academic work, including "Beyond Participatory AI" (AAAI/ACM) and "Global AI governance" (Oxford Academic), provides concrete mechanisms for substantive community involvement throughout the AI lifecycle and offers insights into scaling these approaches internationally while respecting contextual differences.
This session aims to move beyond a binary view of AI governance as either state-led or market-led. Instead, it will explore what a truly inclusive, globally responsive governance framework might look like: one where community voices shape the design, deployment, and regulation of AI systems.
Please note that this session room has limited capacity, and attendance will be accommodated on a first-come, first-served basis.
Andreu Belsunces Gonçalves; Talk
Artificial General Intelligence (AGI) has emerged as the most ambitious frontier of the artificial intelligence industry, despite its conceptual ambiguity and the lack of consensus regarding its technical feasibility. This presentation introduces the concept of AGI deep hype: a form of long-term technological overpromising rooted in structural uncertainty and projections of civilisational transformation. Unlike conventional hype cycles, which revolve around short-term breakthroughs, deep hype sustains momentum by projecting unverifiable promises far into the future.
Building on the notion of sociotechnical fictions (mediated forms of imagination within technoscientific domains that help materialise non-existent technological assemblages), this presentation argues that AGI is a product of venture capital’s speculative imagination. This mode of imagination is future-oriented and market-driven: it anticipates returns, shapes emerging industries, and legitimises its interventions through the myth of technological inevitability. It functions as both a financial strategy and a moral narrative that frames private capital as the rightful guide to humanity’s future.
In its most powerful expressions, particularly among leading U.S. venture capitalists, this speculative drive is embedded in a broader ideological project shaped by cyberlibertarian and longtermist worldviews. These narratives position AGI development not just as an economic opportunity but as a moral imperative to safeguard humanity from existential risk.
Drawing from analysis of key academic, media, and policy discourses, the presentation identifies a series of interlinked arenas of uncertainty underpinning AGI deep hype, including conceptual, temporal, economic, geopolitical and moral, among others. By mapping these uncertainties, this presentation explains how AGI deep hype relies on sociotechnical fictions with significant performative agency, shaping both expectations and infrastructure, and ultimately redefining the contours of AI governance and technological possibility.
Caleb Gichuhi; Talk
Harmful technological systems threaten democracy in 2025, as they have in the past. Scholarship has shown a relationship between social media and critical trends driving toxic polarization and populism. The collapse of truth due to disinformation and the rise of digital echo chambers driven by the algorithmic sorting of content have empowered authoritarian leaders to undermine democratic institutions. This in turn has widened perception gaps, as communities assume the worst of each other and lack the opportunities, processes, and tools to hear diverse views on public issues. Harmful technology systems can no longer be relied on to bring communities together for deliberative democracy; in an effort to unlearn them, new commitments and innovations to support public deliberation are emerging in response to these trends.
This session will showcase the findings and lessons from a digital deliberative program implemented in Sudan and Kenya using three deliberative technologies (Talk to the City, Polis, and Remesh) alongside offline engagement, highlighting how these new technologies are challenging authoritarian efforts to stifle public debate during conflict. Moving away from the harmful technology systems that have been used to engage the public in the past, the session will compare how toxic conversations are in those harmful systems with how safe and inclusive they are in deliberative technologies. Participants will also be taken through a deliberative process on a contentious issue using one of the deliberative technology tools, to fully grasp how they can be applied in a real-world setting.
Katya Hancock, Josh Thompson; Talk
Too often, tech products aimed at young people are designed for them, not with them, reinforcing narrow assumptions about what young people want, need, and can contribute. As a result, digital spaces frequently work against, rather than for, young people’s wellbeing. But not all young people are struggling with tech. Many are finding ways to connect, create, and express themselves, often in spite of default design choices that fail to reflect their realities.
This matters now more than ever. Gen Z and Gen Alpha make up one of the largest and most powerful blocs of tech users globally. They are not only shaping online culture, they are influencing the evolution of the platforms themselves. And with the rapid rise of AI, we face a critical window to shape how this transformative technology impacts a rising generation. Young people must be supported to navigate both the opportunities and risks of AI, and to develop the agency, fluency, and voice to help shape its future.
This session will explore how default design practices continue to marginalize youth perspectives, and why centering diverse youth voices is essential to building more inclusive, empowering tech. It’s about more than gaining the technical skills to build new tech; it’s also about developing a critical fluency in how tech intersects with a teen’s life. Our goal is not only to manage and minimize risks, but also to lean into the positives: supporting young people in shaping digital spaces, and AI tools, that foster meaningful connection, creativity, and agency, and helping them find their way in a tech future they help design.
We’ll share insights from Young Futures’ Youth Listening Tour and from youth-led and youth-informed nonprofits that are charting new paths for healthier, more empowering digital experiences. We’ll also explore what it takes to move from tokenistic “youth input” to true youth co-creation, and how funding and scaling youth-developed solutions can shift the broader ecosystem.
Larissa Macêdo; Lab
This session explores how Black and Indigenous Brazilian artists act as critical agents in the hacking, remixing, and reprogramming of algorithmic systems through their artistic practices shared on social media. Positioned at the encruzilhadas [crossroads] of art, artificial intelligence, and networked platforms, the activity draws upon ancestral knowledge systems, non-Western counter-colonial perspectives, and creative resistance to examine how social media can be subverted and transformed.
The focus will be on case studies and visual projects that confront the normative logics of algorithmic visibility and challenge the techno-colonial infrastructure of social media, departing from Exu (the Afro-Brazilian orixá) and the encruzilhadas [crossroads] as a metaphor for code in these systems.
Social media will be analyzed as the “boca do mundo” (mouth of the world), inspired by the figure of Exu Enugbarijó, to trace the ambivalences embedded in AI systems. The session will introduce three Afro-Brazilian experimental artistic communication procedures based on the notions of Exu Yangí, Òkòtó, and Enugbarijó. These are not only paths of resistance, but also sites of design and language reinvention that redefine the way we perceive and share artistic practices in digital networks.
The objective of the activity is to invite participants to critically and symbolically explore the encruzilhadas [crossroads] between art, AI, and social media within their own cultural and digital experiences. The aim is to challenge and expand critical design and artistic practices within digital platforms by engaging with epistemologies that have been historically marginalized.
By combining case studies, visual analysis, and a collective creative exercise, this session reflects on how Black and Indigenous artists disrupt algorithmic biases, opening space for ethical, aesthetic, and poetic intersections that reimagine the field of art and technology.
Please note that this session room has limited capacity, and attendance will be accommodated on a first-come, first-served basis.
Amanda Levendowski Tepski; Lab
Our community needs a fresh framework for imagining tech governance, which has been largely defined by bros wearing Patagonia vests, billionaires wanting to colonize Mars, and the lawyers who litigate or lobby for them. Problems with this approach are especially evident in cyberlaw. Iconic cyberlaw cases center technologies that appropriate pirated nude images to fuel search engines, ones that profit from harmful user-generated videos, and others that rely on scraped images to create a hotness ranking website of peers. But none of the coders behind those technologies lost lawsuits over legality. Rather, those tools blossomed into Google Image Search, YouTube, and Meta, all run by the companies with the power to shape tech governance.
Those tools share something else: misogyny. Coders engineered them to exploit women’s bodies. Eric Schmidt, Google’s former CEO, confessed that Google Image Search was launched to ogle Jennifer Lopez’s body in her gauzy green Grammys gown. YouTube was launched to ogle Janet Jackson’s bared breast after Justin Timberlake ripped her bodice during the Super Bowl Halftime Show. And in Congressional testimony, Mark Zuckerberg admitted that he’d launched FaceMash to ogle his women peers in the Harvard Class of 2006. Those women never gave their consent to be used for profit, and they suffered for their exploitation without compensation.
Feminist cyberlaw provides that fresh framework for imagining tech governance. Pioneered by Amanda Levendowski (me) and Meg Leta Jones, feminist cyberlaw examines how gender, race, sexuality, disability, and class shape cyberspace and the laws that govern it. Drawing on more than a decade of my cyberlaw practice and teaching, this Lab will explore how copyright emerged as the most powerful cyberlaw tool to counter oppressive tech governance. Copyright plays a role in all three of the technologies highlighted above, and this Lab explores a trio of case studies about biased artificial intelligence, invasive face surveillance, and nonconsensual intimate imagery that expose why copyright can be a powerful tool for forcing technologies to be just, not just legal.
To put feminist cyberlaw lessons into practice, participants will be invited to redact portions of legal documents using a custom Mozilla-compatible bookmarklet co-created by Georgetown Law students and faculty, empowering participants to create redactive poems that radically reimagine tech governance through feminist cyberlaw futures. (Of course, this exercise also implicates copyright.)
Mehan Jayasuriya, Nabiha Syed, Alix Dunn, Ellie Bertani; Showcase
2025 Mozilla grantees working on topics like environmental justice, data stewardship, and trustworthy AI will pitch their projects live to the audience. Attendees will vote for the most impactful project; the winning project will walk away with a prize.
Liv E.; Talk
In a world where platforms optimize for attention and extraction, our personal data becomes a commodity rather than a site of reflection. How can we unlearn the default surveillance paradigms of big tech and reclaim our data through the development of non-extractive tools that nurture agency, memory, and meaning?
We are constantly producing data through our devices. From algorithmic feeds to timelines, our digital footprints are designed for external visibility, not internal meaning. Traditional platforms are rarely designed to encourage us to revisit our past in meaningful ways. Instead, we are taught to see data as something to optimize, monetize, or ignore altogether. This commercialization of ourselves can permanently disrupt our behaviors, pushing our engagement with technology toward market norms rather than social ones.
This data, though, is created through acts of being human. Memory and identity are fluid, contextual, and unique. Mainstream platforms often present identity as a fixed profile and flatten the emotional experience of revisiting artifacts from our past, but we can unlearn surveillance culture by recognizing the act of remembering as an emotional one. With this lens, we can shift design paradigms from extractive applications to tools that support multiplicity, neurodivergence, and introspection.
These ideas challenge the traditional role individual users play in constructing algorithms. Participants will learn about the philosophy behind introspective tooling and reflect on their own relationship with personal data:
- Designing emotionally intelligent interfaces that help us reflect on and make sense of our digital memories
- Constructing identity as something that evolves and is contextual, rather than a fixed, one-size-fits-all profile
- Caring for our data the way we care for a journal or a garden: not just collecting it, but engaging with it in different ways over time
This session will explore how we can reclaim our data for ourselves and engage with our digital pasts on our own terms. Using open source tools and local generative AI agents, we'll interrogate what it means to unlearn externalized modes of digital identity and uncover the opportunity space of data introspection.
Please note that this session room has limited capacity, and attendance will be accommodated on a first-come, first-served basis.
Trei Brundrett, Sam Liebeskind; Forum
How local communities share information and build trust is essential to pluralism, democracy, and economic mobility. As vital institutions like newspapers and libraries decline, online forums like Facebook Groups and Nextdoor have stepped in to fill the void. In fact, half of US adults say they get their local news from these sources, more than from newspapers. Unfortunately, many of these groups are toxic, negative, and hard to navigate.
Here at New_ Public, we have done a lot of research in the last year on what it takes to support healthy digital spaces for local communities, and it turns out that thoughtful, skilled, and appreciated “community stewards” are a key component. These stewards are often unsung heroes in their communities. They’re volunteers managing online neighborhood groups, newsletters, and boards. We’ve developed a number of insights about what it takes to care for these stewards and their online communities so they can fuel connection and social trust on- and offline.
A lot of our work now is scaling what we’ve learned to other communities through a platform that can serve as a new vital American institution: a transformative space for local conversation and community that invests in people, practices, and platforms — not just tech.
In this session, we will share this research and our insights, along with ways we can make the internet local again: bringing people's attention to the places we are, inspiring a deeper sense of belonging, and ultimately creating healthy spaces online that help people thrive.
Dorcas Owinoh, Deril Okoth; Installation
For 12 years, I have documented my community's journey of growth through the lens of my camera, capturing the heartbeat of a community that has blossomed from a small group of 10 university students to a vibrant ecosystem of over 12,000 techies, entrepreneurs, innovators and creators.
Drawing on my rich archive of photographs, from the infancy of my community to the present day, I will showcase our story of growth, resilience, and change. Each image will represent a moment in time, meet-up by meet-up, event by event, programme by programme, that has helped shape us into a vibrant, globally recognised tech community.
This installation will be more than just a nostalgic journey. I will elaborate on the evolution of community-building based on adaptive systems. I'll look back at how we implemented human-centred design principles, prioritising collaboration and tapping into the lived experiences of our community. I will show how we used feedback from listening closely to the needs of young people, women, students, developers, and entrepreneurs within my community to shape programs, spaces, and partnerships.
My installation will show how systems changed as my community grew. How we shifted from informal meetups in coffee shops to structured meetups in a rented space/innovation hub. From spontaneous events to a well-coordinated calendar of programs aligned with long-term outcomes. How we redesigned our governance structures, introduced open-access learning tools, embraced agile development, and embedded inclusivity into the very fabric of our work.
I will showcase the evolution of online conversations about my community. I will bring up old tweets, for good measure, from the early days of "Let's meet at coffee shop X to talk about Python" to latter-day tweets like "Hosting a multinational consortium of start-ups working on climate change."
I will showcase the evolution and emergence of smaller innovation hubs from within the larger community we have built over a decade, the governance policies that we have influenced based on our human-centred approach, as well as direct investments that have benefited community members.
My installation will be a story of building more than just a tech community. It will be the story of nurturing a resilient, self-sustaining ecosystem in Kisumu, Kenya that will leave an indelible mark on festival attendees.
Pierre Depaz, Andreu Belsunces Gonçalves; Forum
Hype is a future-oriented, overpromissory sociotechnical phenomenon that plays a central role in the governance of emerging technologies. It consists of hyperbolic discourses designed to inspire confidence and persuade stakeholders of the desirability, inevitability, and urgency of technological innovation. Hype is not merely communicative excess; it is a constitutive force in techno-financial capitalism, functioning as a manufactured event that captures attention, compresses decision-making timeframes, and fuels speculative investment.
A hallmark of hype is the creation of perceived windows of opportunity that appear to be rapidly closing, generating fear of missing out and accelerating commitment before critical evaluation is possible. These dynamics contribute to what has been described as the accelerated chronopolitics of innovation, where short-term speculation dominates and longer-term considerations are sidelined.
While often short-lived and susceptible to disillusionment, hype follows a recognizable trajectory of rising expectations, peak visibility, and subsequent decline. Hype is not peripheral to technological development; it is an intrinsic force in technological emergence. It functions by instrumentalising uncertainty, projecting imagined futures as inevitable outcomes, and orchestrating collective visions through promotional discourse.
Crucially, hype is performative: it not only reflects but shapes reality. By amplifying potential benefits and downplaying limitations, it legitimises ventures, coordinates actors, and structures expectations across diverse sociotechnical fields.
Often driven by charismatic figures and supported by sensationalist media, hype generates affective intensities—excitement, desire, and hope at its peak, followed by fear, frustration, and disillusionment when promises are unmet. These emotional cycles are not collateral effects but integral to how hype operates as a mode of technopolitical governance. It mobilises resources, legitimises agendas, and produces a sense of inevitability around particular technological trajectories, ultimately shaping what futures are imagined, funded, and pursued.
Taking this into consideration, this interactive session aims to debate the role and manifestations of hype in technology governance – the interests, attention dynamics, financial flows, infrastructural consequences, power inertias, and forms of influence and exclusion it entails. We welcome actors working in the tech industry, activists, policymakers, artists, hackers, and scholars to share their experiences and case studies, the better to understand this urgent, overlooked topic at a historical moment when political, technological, and financial power are inherently linked to hype dynamics.
Please note that this session room has limited capacity, and attendance will be accommodated on a first-come, first-served basis.
Michelle Baldwin, Alexandra Stef, Sonja Miokovic, Melanie Hui; Forum
Overview:
What if AI, Web3, and blockchain weren’t tools of disruption—but of coordination, care, and community control?
In this Imagination Circle, we reframe emerging technologies not as ends in themselves, but as infrastructures for reimagining how philanthropy and capital move, who decides, who benefits, and how value is shared.
Today’s systems of funding and innovation often reinforce top-down control, extractive logics, and inequitable power dynamics. This session brings together funders, technologists, and community leaders to collectively imagine how technology can support new models of collective governance, trust-based giving, and regenerative investment.
What might it look like to build funding ecosystems rooted in solidarity, not scarcity?
Together, we will:
-Examine the default assumptions embedded in philanthropy and capital flows (efficiency, competition, donor control) and explore how they mirror extractive tech
-Explore how AI, blockchain, and Web3 can serve as coordination tools for decentralized governance, transparent distribution, and community-led decision-making
-Experiment with AI to surface the many ways communities already give, share, and generate value – from time and care, to data and local knowledge – and explore asset-generating models that recognize, value, and activate a broader ecosystem of community assets.
-Engage in imagination sprints to share hacks of new models, such as:
*A decentralized autonomous organization (DAO) for mutual aid funding
*AI-enabled tools to surface community-defined priorities and redistribute resources accordingly
*Blockchain-based systems for participatory budgeting or climate reparations
-Co-create a poetic harvest of what liberatory, tech-enabled philanthropic infrastructures could look like in 2040
We will use storytelling, speculative design, and systems thinking to explore how we can shift power from funders to communities, and from platforms to people.
Who It’s For:
No technical expertise needed—just a commitment to transformation.
Co-participants will leave with:
-A reimagined view of AI, Web3, and blockchain as tools for community coordination and systemic change
-New relationships and shared language for building post-extractive ecosystems of philanthropy and tech
-Experience creating new ideas through storytelling and collective imagination
Hosts:
Michelle Baldwin, Associate, Equity Cubed; Co-Founder AI for Social Impact Collaborative; Faculty Governance Leadership Ethics, Huron University College; SuperBenefit DAO https://www.linkedin.com/in/michellebaldwin/, in Canada
Alexandra Stef, Collective Learning & Innovation, Inspire Change https://www.linkedin.com/in/alexandra-stef/, in Spain
Sonja Miokovic, Consulting Director, Community Innovation, Tamarack Institute https://www.linkedin.com/in/sonjamiokovic/
Louis Barclay; Lab
Plenty of brilliant people are doing the serious thinking about what’s wrong with tech.
This workshop is about bringing fun to the table.
Humor is a beautiful tool to mock the powerful, show people what's wrong with tech, and make us feel marginally less depressed about the world burning around us. So let's use it!
In this workshop:
- We'll come up with ten themes (e.g. weirdos trying to live forever, or how we have zero control over our data), ten formats (fake startup, new tech law), and split into ten groups to create absurd satirical concepts/parodies using a given format and theme.
- You'll receive guidance on how to make your satirical concept slap as much as possible.
- We'll round up with lightning-fast presentations to the group about our new concepts, presenting them as straight-facedly as possible.
- And there'll be an opportunity to keep the fun going afterwards — to turn your satire into something real that goes out into the big wide world.
To put it simply — it'll be a big laugh! Bring a sense of humor, an appetite for fun, and your craziest ideas.
Who am I?
I'm Louis, a Senior Fellow at the Mozilla Foundation and the editor of Attention, a new tech publication on a mission to make tech fun again.
Attention's projects so far include:
- The Center for the Alignment of AI Alignment Centers — featured by The Verge and reposted by Timnit Gebru, Emily Bender and 'godfather of AI' Yann LeCun.
- Together with Nitya Kuthiala, The Box — the world's first anti-deepfake wearable. The Box was also featured by The Verge and has come to MozFest as an installation; go try it out!
Cecilia Ananías, Karen; Lab
One of the main weaknesses in addressing digital divides and violence is that the Internet is still conceived as a space detached from so-called real life, as if it had no physical consequences for bodies and territories. We forget that behind the Internet there is a network of submarine and terrestrial cables, devices (created with rare-earth minerals), data centers (that consume our water), and corporations. Something similar happens with survivors of technology-facilitated violence: the violence does not just remain floating in the forum or social network where it happened; it also leaves marks on their identities and bodies.
Therefore, this space seeks to deconstruct these beliefs and make visible the networks and materialities that sustain the digital space, as well as its consequences on bodies and territories. The invitation is to build a cartography of the body-territory through dialogue and with a series of materialities that will give physical form to the reflection, updating the proposal with which the Colectivo Miradas Críticas del Territorio desde el Feminismo worked with indigenous and environmental activists in Ecuador and transferring these concepts to the digital space.
Swarna Manjari, Kriti Bajpai, Beste; Installation
Welcome to the Memorial of Serial Numbers. We are half a century knee-deep in the AI revolution. Do we know what we ought to know?
It is 2050—our children learn of crusading technological empires pillaging invisible lands and extracting visible resources. Who are the fallen, you ask?
The Clickworkers.
People who once sat in tiny corners of their dimly lit homes, chipping away at microtasks: anything from marking the area of trees, to moderating live videos, to answering a questionnaire.
In this dedicated Memorial, take a walk through countless graves, pay your respects to faceless workers, and contribute donations that unite for a cause. You may notice that the graveyard is abuzz with life in the daytime. This graveyard is the work of data cooperatives, and is one of the last remaining repositories of their ancestors.
Once you find yourself dwelling in the history of these people who spent laborious hours mothering a monstrous technology that governs us today, let us turn to the resources required to build an AI empire. The construction of such towering superintelligence isn't a mere intangible string of words and ideas; it's tactile. It can be felt and seen and touched. It's the land you walk on, the water you drink, and the energy that fuels everyday life. The Clouds were here on our lands; they were hot and loud, claustrophobic, and grey.
Then comes night. The sun takes away its light and life, and the graveyard turns eerie and dark. Power haunts, and olden tales of extraction and violence resurface. It is time to uncover the true villains at play. Underneath the soil, the dark underworld still exists. Gold, diamonds, and billions. Stacked against the graves, the eternal masters of the trade know no death. They continue to skilfully extract resources, labour, and time. Pitch your best strategy as the Master or the Slave in a combat of capital. Hire or Fire, Work or Protest, Quell or Unionise in a round of cards that can build or break you with a sleight of hand. Is it all luck? Or does injustice have a strategy?
Come. Play the game to find out.
"Immaterial" is an overarching commentary on game design, amoral tech megastructures, and life itself. We apologise in advance for the frustration that you might end up feeling after the game.
Mozilla Festival; Social Moment
MozFest sessions close at 10pm. Join us the next morning to pick up your badge, if needed.
Mozilla Festival; Social Moment
Thanks for joining us at MozFest! Join us again in 2026!
Mozilla Festival; Social Moment
MozFest sessions close at 7pm. Join us the next morning to pick up your badge, if needed.
Mozilla Festival; Social Moment
Have your QR code ready and head to Badge Pickup at Main Entrance to collect your badge.
When re-entering after picking up your badge, scan your badge on the front path and walk through the Badge Pick Up area into the venue.
Venue exit is through the turret.
Mozilla Festival; Social Moment
Have your QR code ready and head to Badge Pickup at Main Entrance to collect your badge.
When re-entering after picking up your badge, scan your badge on the front path and walk through the Badge Pick Up area into the venue.
Venue exit is through the turret.
Mozilla Festival; Social Moment
Have your QR code ready and head to Badge Pickup at Main Entrance to collect your badge.
When re-entering after picking up your badge, scan your badge on the front path and walk through the Badge Pick Up area into the venue.
Venue exit is through the turret.
Scott Chipolina, Claire Godfrey, Hasan Patel; Forum
Myths are sociologically important, helping us to understand the world around us. But despite their social uses, myths are fictional. When myths are used to justify and protect a harmful profit model, we all lose out.
Monopolies in financial markets — with big tech firms seated at the head of the table — are propped up by a complex web of myths and narratives, wielded by highly trained lobbyists paid to uphold a status quo that protects excessive concentrations of economic power.
In our forum, we will look at the harmful myths that justify mass deregulation, enable private capital to influence public policy, and normalise the extraction of wealth from the many to benefit the few. As we bust the myths that uphold the status quo, we will offer a new, alternative approach that benefits wider society and reclaims economic power from vested interests.
Please note that this session room has limited capacity, and attendance will be accommodated on a first-come, first-served basis.
Ruha Benjamin, Nabiha Syed; Debate
To open Mozilla Festival 2025, Imagination: A Manifesto author Dr. Ruha Benjamin will join Nabiha Syed, Mozilla Foundation Executive Director, for a conversation that redefines unlearning as a radical act of possibility.
From the biases baked into our digital worlds to the myths that shape our laws and institutions, this conversation will explore how power operates through the stories we inherit—and how breaking free from them can unlock new ways of thinking and building.
Zoe-Alanah Robert, Sarah Radway; Installation
As part of our work to facilitate social media transparency at the Applied Social Media Lab at the Berkman Klein Center for Internet & Society, we are investigating the unintended potential privacy implications of granting device permissions to social media applications.
Smartphone apps often request data permissions from users. For example, an app might request access to a user's photos, calendar events, or contact list. These permission requests allow us to control which apps have access to our data, but not the end use of that data. Granting these permissions can seem innocuous, but what can apps learn about us through our data? And does the privacy leakage get worse when multiple device permission types are combined with modern data processing power to extract insights? In the Permissions Explorer project, we examine how bad the privacy problem is, giving users concrete insights into the surprising things that apps can glean, even from data that seems mundane. For vulnerable populations, understanding what kinds of inference capabilities are possible is all the more important.
Our hope is that by providing transparency into how users might be exposing sensitive data from mobile devices without realizing it, we can empower individuals, no matter their technical expertise/understanding, to practice good permission hygiene.
Lorenzo Porcaro; Talk
We no longer “find” music; music finds us. Streaming platforms promise endless discovery, but behind the scenes, recommender systems are reshaping our listening habits, narrowing our tastes, and limiting the visibility of underrepresented artists. What happens when we start to question these systems, and the assumptions they carry?
This talk draws from the Algorithmic Auditing for Music Discoverability (AA4MD) project, a research initiative funded by the European Commission, to explore how music recommender systems influence cultural access. Through user interviews, fieldwork, and critical analysis, the project uncovers how people engage with algorithmic curation, and where they encounter its blind spots, biases, and constraints.
We’ll take a deep dive into:
- How music recommenders work, and how they quietly shape what we hear
- Where users notice (or don’t notice) algorithmic influence in their listening
- Why diversity suffers in automated environments
- What it means to become aware of, resist, or reimagine these systems
By connecting algorithmic awareness with broader questions of cultural equity, this talk invites listeners to unlearn the neutrality of digital platforms and to rethink music discovery as a political, creative, and participatory act.
Following the talk, we’ll open up space for a collaborative discussion:
- What should a just and diverse recommender system look like?
- What role can listeners, artists, and technologists play in shaping it?
- And how do we begin to reclaim our agency in the age of algorithmic taste?
Let’s unlearn the defaults, and imagine something radically better.
Sadik Shahadu, Arehone Matodzi, Dr. Gina Moape; Talk
The default design in the current technology ecosystem is deeply shaped by Western-centric norms, able-bodied and neurotypical user assumptions, high-bandwidth environments, and English as the dominant language. These defaults create structural barriers that marginalize millions, especially those with disabilities and speakers of non-dominant languages. Accessibility is often an afterthought, and multilingual support is rarely prioritized, despite the global linguistic diversity and the digital divide in rural or underserved areas. This session will interrogate these embedded design defaults and present a conceptual framework for inclusive, locally grounded, and user-centered design that actively centers language equity and accessibility for people with disabilities. Participants will engage with case studies, personal narratives, and design prompts that surface alternative approaches and challenge the one-size-fits-all mentality. The objective of the session is to critically examine how current technologies exclude users through language and ability biases and co-create a vision for inclusive design that centers accessibility and linguistic diversity. The final conceptual framework will be documented and distributed openly to designers, developers, policy advocates, community organizations, and educational institutions working toward more inclusive technology systems. The session will start with a presentation of the overview of the technology landscape (10 min), followed by a breakout group discussion (20 min), collaborative design mapping (15 min), and open reflection and resource sharing (15 min).
Lisa LeVasseur; Talk
Can we train and foster a worldwide community of citizen scientists, certified mobile app safety inspectors, to collect the data needed to generate accurate safety labels for mobile apps? Can this community effectively shift the balance of power through app behavior transparency? We think we can, and we think it might be the only way to keep on top of the growing invisible risks in constantly changing mobile apps.
Safetypedia is a pilot project Internet Safety Labs (ISL) has been running for several months. The purpose of this session is to expose the project to a larger, worldwide community, solicit participants, and to foster dialogue on the approach. In particular in this session we will:
- Explain ISL's mobile app safety labels as seen on appmicroscope.org (example app: https://appmicroscope.org/app/1579/),
- Explain the Safetypedia project,
- Explain the ISL safety inspector certification process,
- Demonstrate the Safetypedia data collection portal,
- Explain how safety labels are generated as a combination of automations plus human research,
- Share the results of the pilot to date: how many trained and certified inspectors, how many safety labels generated,
- Discuss viability of the project as a sustainable transparency intervention, subverting deliberate opacity and mysticism surrounding technology.
Bhavya Madan, Ali Latorre; Lab
Whether it is building a new platform from scratch, launching a new feature, or troubleshooting issues, speed often wins out over sustainability for technology-aligned teams. We default to Minimum Viable Products (MVPs), creative workarounds, and quick fixes to meet deadlines. But what starts as a short-term solution can turn into long-term friction, technical debt, or systems that don't scale.
This session is grounded in real-world tech implementation, but it is accessible to anyone who has ever tried to build something sustainable under pressure. Whether you work in technology, community building, events management, or organizational change, you'll relate to the tension between "just ship it" and "make it last" as we visualize and reframe how you approach scaling what you build.
In this hands-on, cookie decorating lab, we will explore how to unlearn default design habits that favour speed at the cost of thoughtful implementation or true intention. You will decorate two cookies:
- One MVP cookie, built quickly under constraints.
- One Scaled-with-Care cookie, designed with intention, inclusion, and long-term impact in mind.
Using cookies as metaphors, we will reimagine how teams build systems that rise with people and not just timelines.
By the end of the session, participants will:
- Reflect on trade-offs between speed and sustainability in their work
- Reframe MVPs as a first step, not a final solution
- Leave with a metaphorical (and edible!) "recipe" for scaling with care
No baking or tech experience required! Just curiosity, creativity, and a willingness to decorate your way to better design.
Let’s unlearn the habit of rushing to build, and instead, bake something that lasts.
Vince Trost; Talk
Most AI memory implementations focus on storage and retrieval. The inference—what to actually store and why—is superficial. This talk introduces a different approach: solve the inference problem first by training reasoning models to produce formal logic. It's the hardest reasoning for humans, but LLMs excel at it—even more so when trained. Build the storage and retrieval system around scaffolding that logic to produce comprehensive, evolving representations. Vince Trost (Co-Founder, CEO of Plastic Labs) walks through how to use Honcho, how it reasons over data, and how developers can leverage that reasoning to solve memory, build stateful agents, and focus on building the best AI products possible. It's simple to implement; come see how.
Sarah Hinchliff Pearson, E.M. Lewis-Jong, Keoni Mahelona, Johann Diedrick, Pedro Ortiz Suarez, Dr. Gina Moape, Rashel Moritz; Forum
Join us for a conversation on bringing inclusive, representative data into AI — exploring data sovereignty, openness, and equity. We’ll talk about actual case studies that represent those values and what it really takes to build datasets for fair, representative systems. A Mozilla Festival-style deep dive: bold, curious, and unapologetically honest.
Moderator - EM Lewis-Jong
Panelists:
* Keoni Mahelona - Te Hiku Media
* Pedro Ortiz Suarez - Common Crawl Foundation
* Dr. Gina Moape - University of South Africa
* Rashel Moritz - Meta
* Johann Diedrick - Mozilla Data Collective
* Sarah Pearson - Creative Commons
Louis Barclay, Nitya Kuthiala; Installation
🚀 It’s finally out: the world’s first anti-deepfake wearable.
2025’s most-anticipated tech launch.
It’s simple, it’s elegant, it’s versatile, and it's fortunately a complete joke. Phew.
Come try The Box at MozFest!
- Keep your face safe from unwanted photos
- Choose an avatar to replace you in the real world
- Experience analog AR for the first time
Along the way, learn more about The Box's features and our pioneering founding team.
About
The Box, previously featured by The Verge, is a satirical, dystopian startup by Nitya Kuthiala and Louis Barclay that shows what could happen if we fail to stem the tide of adult deepfakes targeting women.
We hope this absurd, appalling installation will first make you laugh, and then make you think. And then make you laugh again, and then make you think again. And finally, perform those two same actions in a recursive loop for the rest of your life.
Ayşegül Güzel; Lab
Generative AI is transforming our world, but how secure, safe, and aligned is it, really? This interactive, lab-style workshop dives deep into the critical practice of AI red teaming. Forget passive learning – here, you'll roll up your sleeves and actively probe generative AI models to uncover their vulnerabilities, biases, and potential for misuse.
Drawing from real-world red teaming initiatives (like those at DEFCON and NIST) and established techniques, participants will:
- Engage in Simulated Red Teaming Exercises: Get hands-on experience testing AI models against various challenges.
- Experiment with Jailbreak & Prompt Injection Techniques: Learn and apply methods like social engineering, character adoption, encoding attacks, and typographic tricks to try and bypass AI safeguards.
- Analyze Model Responses: Collaboratively identify different types of vulnerabilities, from generating harmful content and misinformation to revealing unintended biases or security flaws.
This workshop moves beyond theory to practical application. You'll gain first-hand insight into why red teaming is crucial for AI safety, application security, and platform integrity. We'll explore how it has evolved from military strategy and cybersecurity to become an indispensable tool for evaluating frontier AI models.
Who is this for?
Technologists, developers, researchers, policymakers, students, ethicists, and any curious digital citizen interested in understanding the practical challenges of making AI safer and more trustworthy. No prior red teaming experience is required, just an inquisitive mind!
What you'll leave with:
- Practical experience in basic AI red teaming techniques.
- A deeper understanding of AI vulnerabilities and how to identify them.
- Insights into designing red teaming exercises.
- A framework for thinking critically about AI safety, ethical implications, and the urgent need for robust evaluation and safeguards in the age of generative AI.
If you can, bring a laptop with an internet connection!
Daniel Odongo; Installation
At the heart of the installation stands a needle, symbolizing the catalyst for change in entrepreneurship.
Individuals contribute ideas on how entrepreneurship can prioritize people over profit and foster local empowerment. Each idea is represented using a different color of thread and linked to the needle, creating a collaborative and living tapestry of contributions gathered over three days.
This installation highlights the collective power of small actions and the role each participant plays in shaping a more equitable economic future. The Needle of Change serves as a reminder that transformative shifts in entrepreneurship begin with individual contributions, and together, these contributions can lead to lasting impact.
Hanna Pishchyk, Julian Hauser, Masho Dzneladze; Forum
This session will explore the social costs of smart city technologies, particularly how surveillance systems, often presented as lawful and efficient, reinforce structural inequality, target marginalised communities, and undermine public trust. In a participatory, non-formal educational format, we will examine real-world examples of urban surveillance: facial recognition, predictive policing, biometric data collection, etc. Through discussion, interactive exercises, and collective reflection, we will explore how these systems work and how the narrative of "smartness" and "efficiency" often obscures their harms.
We will examine urban experiences and case studies from underrepresented regions in Europe and beyond, such as Serbia, Georgia, and Brazil, to see how surveillance technologies operate unevenly across class, race, gender, and geography. The session invites participants to reflect on questions such as: Who designs smart cities and for whom? What is lost when digital control systems are normalised? Where do care, equity, and community agency intersect in urban tech development?
During this session, we want participants to collect and share strategies for resisting, subverting, or reimagining harmful urban technologies in their local contexts. We will also map a set of shared principles for more just and inclusive tech systems in cities, grounded in lived experience, resistance, and collective imagination.
Malik Afegbua, Sougwen Chung, Dayo Lamolo; Debate
In a world where algorithms compose symphonies, AI models paint portraits, and machine systems remix culture at scale: what does it mean to be creative today? This conversation invites two visionary artists working at the edge of human and machine collaboration, Sougwen Chung and Malik Afegbua, to unlearn what we’ve been taught about creativity, authorship, and artistic bias.
Brandi Geurkink, Peter Chapman, LK Seiling, Carlos Hernández-Echevarría, Alberto Navas; Talk
Given the central role that private technology platforms play in the dissemination of information, shaping our lives online and, increasingly, offline, understanding the nature of that information is essential to advancing the common good.
Independent researchers, journalists, members of civil society, and the public all rely on access to platform data to understand and expose critical aspects of how information is produced and disseminated. While regulatory regimes – like the EU’s Digital Services Act – increasingly require digital platforms to make some data publicly available, there remains no clear agreement on what specific data should be made available, when, and in what form.
This session will describe a new Framework for High-Influence Public Digital Platform Data, recently developed by a group of experts convened by the Knight-Georgetown Institute (https://kgi.georgetown.edu/expert-working-groups/publicly-available-platform-data-expert-working-group/). The outcome of this work is a framework for the minimum baseline of platform data that should be made publicly available, under what circumstances, from which platforms, and in what format. It focuses on access to public platform data as a means to enable interested parties to understand the relationships between online platforms and individuals, communities, and societies. The new framework seeks to support the emergence of uniform, cross-industry data access expectations that allow for understanding the online information ecosystem as a whole, not in platform-specific silos mediated by highly structured access opportunities.
In addition to describing the new framework, we will focus on how diverse groups can leverage digital platform data – from climate advocates to public health experts. The session will help attendees find common ground through the need for clear access, transparency, and accountability from digital platforms. We see this as a unique bridging opportunity in a time of increasing polarization. We will also workshop with attendees different ways that they could pursue data access in their own work, focusing on specific practical applications.
Nasrat Khalid; Talk
The traditional foreign aid system is broken. It was built on outdated assumptions of centralization, dependency, and control. In a time of global crises, communities on the frontlines are often the last to receive support and the least empowered to shape how that support reaches them. This session invites participants to unlearn the deeply embedded models of humanitarian aid and explore how AidOs, a decentralized, community-driven platform created by Aseel, is transforming how help is delivered.
AidOs is a decentralized humanitarian operating system that empowers local networks to distribute aid with transparency, speed, and dignity. Through its Atalan (Heroes) Network, community members act as local aid agents, delivering packages, registering beneficiaries for digital OMID IDs, and building data systems from the ground up. This model eliminates traditional intermediaries and places agency directly in the hands of those closest to the crisis.
We’ll share how Aseel’s model is being used across Afghanistan to reach all 34 provinces with zero foreign logistical footprint. We’ll examine the role of open APIs, ethical data collection (via Ferni), and digital identity in enabling both transparency for donors and autonomy for beneficiaries. Participants will learn how AidOs is designed to be interoperable, scalable, and applicable far beyond Afghanistan, from refugee camps to climate-hit regions globally.
This talk isn’t just about showcasing a tech solution. It’s about asking hard questions:
What happens when we design humanitarian systems that don’t assume centralized power is necessary?
How can we build digital tools that serve both transparency and trust in fractured environments?
What does it mean to decolonize aid in the 21st century, not in theory, but in code, community, and practice?
Join us to explore what humanitarian aid could look like when led by the people it’s meant to serve. Walk away with a new perspective on aid systems—and an invitation to partner, invest, or collaborate in building a future where help arrives faster, fairer, and on local terms.
Alex Hanna, Udbhav Tiwari, Lauren Hendry Parsons; Debate
The debate will focus on the tension between privacy as a fundamental right and data as a public resource, as well as the balance between radical transparency and data protection. These opposing ends of the debate raise critical questions about how information is shared, controlled, and safeguarded in the digital age.
Audrey Tang, Francesca Bria, Julie Brill; Debate
This debate will explore the necessity of unlearning traditional regulatory models to make way for new approaches that reflect rapidly evolving technologies, shifting power structures, and alternative ways of governance.
Should we rethink how governments regulate innovation, or does deregulation risk chaos and exploitation? Can communities and decentralized networks create better forms of self-regulation, or do we still need centralized oversight? From AI ethics to financial systems, from platform governance to environmental policies, we will question whether regulation should always mean control—or if it can be reimagined as something more dynamic, participatory, and adaptive.
Keoni Mahelona, Luísa Franco Machado, Seyi Akiwowo; Debate
This debate explores the transformative potential of unlearning dominant ways of knowing and organizing, through the lens of Indigenous knowledge, youth-led resistance, and struggles against fixed norms and inherited hierarchies. As modern society grapples with environmental crises, systemic inequality, and structures of exclusion, we will question whether both ancestral wisdom and the lived knowledge of those pushing back—across identities, movements, and generations—can offer vital insights for reimagining our relationship with the world.
Mick Larson, Mario Del Prete; Lab
Maps are powerful tools that shape how we understand the world. Traditional maps often prioritize borders, terrain, or infrastructure, while potentially obscuring the social, economic, or digital layers that influence people’s lives. This workshop invites participants to “unlearn” conventional mapping approaches and reimagine how maps can reveal new ways of seeing the world and global challenges.
The session is presented by Giga, a joint initiative of UNICEF and ITU that aims to connect every school in the world to the Internet. To support our mission, we have built Giga Maps, an open and live map of global schools and connectivity. Connectivity is more than infrastructure: it shapes access to information, opportunity, and choice. By visualizing where schools are connected—and where they are not—a map can become a powerful tool for advocacy and insight. When combined with other data layers such as climate, demographics, or economic indicators, maps can reveal deeper patterns of understanding and possibility.
The first half of the workshop will focus on ideation. In small groups, participants will ask: what else could a map show if connectivity were the starting point? Could the “shape” of a map be reorganized around networks rather than borders? How might layering in social or environmental indicators change how we see a community? The exercise is about imagining new mapping logics rather than reproducing existing ones.
The teams will then shift to rapid prototyping. Using pen and paper, open-source platforms, or AI-based tools, participants will sketch or build visual experiments. These prototypes might combine unexpected data sets, distort scale to emphasize overlooked issues, or invent alternative ways of depicting relationships. Each group will share back, sparking dialogue on how design choices reframe meaning.
By the end, participants will have created “unlearned maps” that challenge assumptions about space, context, and connection. The workshop emphasizes visualization over analysis, offering an accessible, hands-on design thinking process. Maps, reimagined, become tools for storytelling, advocacy, and new perspectives on global challenges.
Peter Rojas, Catherine Bracy, Harry Booth; Debate
In a market where growth is the default goal, mission-driven products follow a different playbook. This debate explores what it takes to build products rooted in values—what makes them stronger, what holds them back, how teams can balance integrity with impact, and how VC money can influence the process.
This debate will ask: What would it mean to unlearn growth as the ultimate measure of success in technology? Can we imagine and build digital ecosystems rooted instead in care, accountability, and justice?
Paula Mesa Macías (Pau&Company); Lab
Your organisation’s tech stack – email provider, cloud services, communications tools, website setup – tells a story. But is it the story you want it to tell?
In this Lab, we’ll expose how default tools and platforms quietly undermine autonomy, extract data, and leave users vulnerable even when they appear secure. Then, we’ll flip the script. Participants will explore how small shifts in infrastructure can align tech choices with values like privacy, safety and digital sovereignty.
What you’ll experience:
- A live teardown of “typical” organisational tech setups (no need to share your own)
- A walkthrough of what a real audit looks like, from security gaps to ethical red flags
- Case studies showing how organisations have moved towards transparent, secure and autonomous tools without sacrificing usability
- A guided diagnostic worksheet to reflect on your own organisation’s digital risks and possibilities
Whether you’re a non-profit, cooperative, campaign group or small business, this session will help you:
- Understand where your digital infrastructure reinforces extractive defaults
- Visualise what a safer, values-aligned stack could look like
- Ask smarter questions when hiring tech support or choosing platforms
This session is accessible to non-technical participants but led by a technical expert with experience rebuilding infrastructure for privacy, cybersecurity and sustainability. You’ll leave with a concrete reflection tool to take back to your team.
Perfect for anyone responsible for digital tools but unsure where to begin making them safer.
Come curious. Leave equipped and inspired to rebuild your digital house on stronger ground.
Daniel Stone; Talk
This session invites participants to unlearn a persistent myth: that the public is too disengaged or too uninformed to participate meaningfully in decisions about AI governance. In fact, when we connect these issues to people’s everyday concerns — about fairness, jobs, safety, and accountability — we find something else entirely: a public that is deeply concerned and ready to act.
This will be one of the first opportunities for our movements to engage with the findings of the AI Regulatory Compass — two comprehensive, nationally representative public opinion studies on community attitudes toward AI governance in the United Kingdom and the United States. These groundbreaking studies offer critical insights into how public values, literacy, and attitudes shape political conditions and legislative action — and how we can strategically shape them in return.
Attendees will also gain early access to a new diagnostic tool designed to segment audiences — from our closest allies to the wider public — so that we can more effectively tailor our messages and campaigns to those who need to hear them most.
This keynote will offer clear, actionable findings on:
- How the public genuinely understands AI beyond surface-level perceptions — and how we can shift these views.
- Who people trust to manage AI risks, who they hold accountable, and how we can leverage this in our advocacy.
- The most persuasive narratives for shaping public debates and influencing government action.
- Practical recommendations to build stronger public support and trust for ambitious, public interest-oriented AI regulation.
Participants will gain unique, data-driven strategies to directly inform their advocacy, policymaking, and communication efforts, along with new tools for shifting power away from industry-led narratives and toward truly community-informed governance. The session will conclude with a live strategy clinic, where participants can raise real-world challenges and receive on-the-spot guidance on applying the findings to their own work.
Alia ElKattan, Jihyun, Gabor; Forum
As AI systems like large language models (LLMs) become embedded in daily life, shaping education, guiding decisions, and influencing public discourse, the myth of machine neutrality remains. But we believe that neutrality isn’t the absence of bias; it often mirrors the people, institutions, and agendas behind a system. Equally impactful are the values we fail to acknowledge.
This forum invites participants to unlearn the idea that LLMs are neutral tools. We’ll explore how moral, political, and economic values are woven into these systems, and what unfolds when those values go unchecked by imagining dystopia and how to prevent it.
Seven years after Survival of the Best Fit, our open-source game on hiring algorithm bias showcased at MozFest 2018, we revisit core questions in the new era of generative AI. Who defines fairness? What does bias look like today? How can communities intervene?
Workshop (3 parts)
1. Dystopia Brainstorming (Group Work)
Participants split into groups to imagine worst-case futures enabled by LLMs and near-AGI across contexts like healthcare, employment, security, and daily life, and over various timeframes (e.g., next year to the next two decades).
2. Explainer & Current Landscape (Full Group)
A concise walkthrough of how LLMs are trained, the human frameworks shaping them, real-world adversarial risks, and existing intervention and regulatory frameworks.
3. Designing Better Futures (Group Work)
Groups reconvene to devise intervention strategies: values-based guardrails, governance ideas, and design alternatives that could counteract the dystopias. This phase emphasizes that LLM design remains very much human-value driven, and thus shapeable, not inevitable, through thoughtful and representative design.
No coding required. This session is for anyone interested in the moral foundations of the technologies shaping our future.
Chris Tegho, Jazmin Morris; Installation
When TRex Dreams of Mangoes and Figs is an immersive art installation that reimagines waiting in digital spaces as a portal for imagination and collective dreaming. Inspired by the Google Chrome Dino game that appears when the internet connection is lost, the installation explores where consciousness drifts during digital "loading states". It invites the audience to enter a phygital landscape that transforms waiting into a playful, reflective experience.
The experience unfolds across two interconnected states, each using experimental, immersive technologies:
1. Loading: In this partially loaded, low-poly world, glitches and movements immerse participants in an "in-between" state, a space of fragmented anticipation.
2. Consciousness: A more vibrant, dream-like world featuring mangoes and figs, which challenge the traditional, Western landscapes seen in digital environments.
This project reinterprets digital extraction by critically engaging with the hidden infrastructures that shape online experiences, particularly those of waiting, loading, and digital liminality. Every online interaction, including moments of waiting, is connected to physical landscapes where resources are extracted. The project uses the loading state to expose hidden connections, with a sensory experience where glitching visuals and fragmented movements mirror the instability of digital access.
It focuses on the unequal distribution of internet access, particularly in regions affected by the extractive economies of digital industries. While some experience high-speed connections, others are routinely disconnected, left in liminal states of waiting. The presence of mangoes and figs challenges the dominance of Western-centric digital environments. These fruits, which thrive in tropical and subtropical regions, also serve as subtle references to the extractive histories tied to colonial agriculture, mirroring the ways in which data and resources are unevenly extracted.
The installation treats the broken link and the frozen loading screen as meaningful sites of reflection and potential instead of errors. It explores the fragility of digital presence and the physical consequences of digital memory, a reminder that even fleeting moments online are anchored to material infrastructures that corrode, glitch, and decay.
It centers the experience on waiting, a temporal void often erased from our digital narratives. It questions the assumption that digital life is seamless or permanent, and highlights how decay, slowness, and interruption can reveal truths about access, extraction, and imbalance.
By reimagining waiting and anticipation in digital spaces, it creates an opportunity for dreams, imagination, and creativity, in a subtle protest against technology's grasp on our experiences.
Karina Nemeth, hans stam; Forum
How did we come to accept that electronics must be opaque, disposable, and distant? This session invites participants to unlearn the myth of immaterial tech by reimagining electronics as something deeply local, cultural, and human.
🔹 Part 1: The Reign of Silicon
We begin with a sharp historical framing: how Silicon Valley and Shenzhen rose in parallel, driven not only by innovation but by access to raw resources, cheap engineering labor, and government-backed trade policies.
The result: homogenized design standards, disappearing public process knowledge, and devices disconnected from people and place.
Then, we shift to imagining a new model.
🔹 Part 2: City Planning for Local Electronics
Participants become city planners envisioning local electronics ecosystems.
In small, role-based groups—engineers, artisans, technicians, teachers, politicians—they’ll map:
- What’s made in their region, and why
- Who participates, from workers to public institutions
- How knowledge, pride, and repair flow through the system
🔹 Part 3: The Golden Ticket
Each participant receives a Golden Ticket—a letter-writing prompt.
They’ll write to a future child about a locally made product they helped create:
- What is it, and why does it matter?
- Who made it, and how is it cared for?
- What legacy does it carry?
They exchange tickets with another participant, and select letters will be read aloud to melt participants' hearts.
🔹 Part 4: Wall of Futures + Invitation to Build
We close with a ceremonial gathering of all visions.
Participants post their city plans and letters onto a shared Wall of Futures, then explore interactive stations:
- 🧱 LEGO Station – Model local factories or community centers
- 🗺️ Map Table – Plot future electronics hubs around the world
As a continuation of this session, we invite participants to join our offsite evening Meetup—an open gathering to bring (or discover) a piece of childhood electronics and share the story of how it shaped your imagination.
This Forum centers imagination, macroeconomic reframing, and tangible pride of place.
No engineering background needed—only a belief that technology can serve culture, not erase it.
Montse Ollés Roig, David van Walderveen, Kaspar Ravel, and Antoine Begon; Installation
The sassy fish is a game and co-creation project investigating the materiality of digital infrastructure and how it shapes our understanding of technology and its environmental impacts. Through interaction and decision-making, the user encounters statements and dilemmas posed by power and AI technologies, with the objective of saving the fish.
The game is developed with open-source tools and grounded in humour, critical environmental awareness, and literacy. It ultimately invites the audience to reflect on how current digital technology infrastructures hinder our digital agency, and to imagine a more desirable digitalisation.
Onur Alp Soner; Talk
Today, nearly every layer of our digital infrastructure, from measuring behavior to delivering messages, running deployments, making decisions, and now generating content and code, runs on multi-tenant, opaque, centralized SaaS platforms. We've inherited an architecture designed for speed and convenience, not transparency or control.
This talk challenges the idea that shared cloud services offer "security by default." Through real-world examples of cloud failures, AI misalignment, and overlooked dependencies, we'll explore how organizations lose visibility, autonomy, and accountability, often without realizing it.
We'll propose a more honest and resilient model: one that reclaims infrastructure as a boundary, not just a backend. Whether deployed on private cloud or hybrid models, this approach re-centers ownership, accountability, and trust in how systems are built and secured.
Participants will walk away with:
- A deeper understanding of how modern SaaS architecture obscures risk
- Concrete examples of where "secure by default" has failed in practice
- A practical lens for evaluating infrastructure decisions through trust, traceability, and intentional design
- A case for treating security not as a badge or a product feature, but as something you build and own
This session is for technologists, designers, digital rights advocates, public infrastructure thinkers, and anyone questioning what it really means to build on infrastructure you don't fully control.