This 60-minute discussion will focus on interactive engagement among participants, Meedan, and the National Democratic Institute (NDI) to explore what future platform accountability looks like in a world moving towards AI. With the recent releases of ChatGPT and Meta's Metaverse, how does data transparency evolve, and how do we as users protect our data and ourselves? NDI will open the discussion with three 5-minute lightning talks from experts on recent research and findings on what data transparency and access will look like with emerging AI tools on platforms. Participants will then split into smaller breakout groups to draft policy recommendations for technology platforms to ensure data transparency and access. Groups will be given varying mock scenarios in which they must draft policy recommendations for a new AI released by a large tech platform, such as Cicero, ChatGPT, or Tay. Participants will be given guiding questions to help them draft their recommendations, and at the end of the session they will be welcome to post their drafts to an interactive platform such as Mural.
Kaleigh Schwalbe is a Program Manager for Information Integrity on the Democracy and Technology team at the National Democratic Institute (NDI). She works with NDI’s staff and partners to develop tools and resources for countering mis/disinformation online, including gendered disinformation. Kaleigh previously worked at the McCain Institute for International Leadership and the Sandra Day O’Connor College of Law at Arizona State University in DC, where she managed projects and proposals focused on rule of law, transitional justice, countering disinformation, internet freedom, and cybersecurity. While there, Kaleigh led projects on tracking and countering disinformation and hate speech in Georgia, building a network of CSOs in the Balkans to track and analyze disinformation trends, and developing legal aid clinics in Pakistan. Kaleigh has a B.A. in International Relations from the University of Delaware and an M.S. in Conflict Resolution from Columbia University.
Content Moderation Lead at Meedan and Research Affiliate at the UC Irvine Center for Responsible, Ethical, and Accessible Technology. Her work translates the experiences of targets of online abuse and of community moderators into product and policy insights for social media companies and civil society organizations.