Taking on generative AI’s safety dilemma
Regulators are trying to act fast, but the AI race is already running at full speed.
Hey Checklisters!
We hope you’re staying safe and healthy.
If you’re running late, here’s your TL;DR Checklist:
✅ Increasingly, we are encountering the products of generative AI across multiple industries and within our personal lives.
✅ Technology moves faster than regulations, and way faster than bureaucracies.
✅ More effort should be put into raising public awareness about the current state of Generative AI.
Top Comment
by the Editorial Team
How do we navigate an internet that starts with a prompt and ends with AI-generated, synthetic media? Increasingly, we are encountering the products of generative artificial intelligence, or GenAI, across multiple industries and within our personal lives.
Optimists and proponents of generative AI are bullish about the prospect of every user being just one “prompt” away from the information they’re looking for.
But can prompts really get users everything they need? Like other innovative technologies, GenAI will bring about great opportunities, but it will also carry great risk.
As GenAI technologies advance exponentially, will safety measures put forward by lawmakers and companies today have any real value one year from now? This is what could be described as GenAI’s safety dilemma: Technology moves faster than regulations, and way faster than bureaucracies.
When machines ‘hallucinate,’ they cause harm
As interest in GenAI tools rose last year, AI hallucinations became a popular topic. These errors are not just amusing quirks, though. They can have significant consequences for real-world applications. Biases in training data — which can be a significant source of hallucinations — can reflect and reproduce existing societal prejudices, potentially leading to additional harm being inflicted on already disenfranchised communities.
How much harm will these hallucinations cause before GenAI tools get better at doing their jobs?
Wide adoption, uncontrolled risks
The AI-driven internet is even more opaque than previous iterations. Reporters, fact-checking organizations and other groups may struggle to determine if the content they’re seeing is real or generated by AI. After all, generative AI companies do not disclose much about the inner workings of their systems or their plans for the future.
More effort must be put into raising public awareness about the current state of GenAI, how it’s being used and how it may affect the lives and choices of individuals and groups. After all, the end user has the right to choose what world they want to live in.
Gendered disinformation, amplified by AI
How does GenAI impact women — online and offline? Gendered disinformation, broadly speaking, involves spreading false narratives laden with gender-based attacks or exploiting issues related to gender in the pursuit of various agendas. The objective of gendered disinformation is to silence women and minority groups and to hinder their participation in the media ecosystem and democratic processes.
In Pakistan, for instance, leading journalists recently faced online attacks that involved the nonconsensual use of images and doctored visuals. This is a disturbing example of ideologically motivated opponents using AI tools to spread sexist and misogynistic content, including threats of physical violence. With the proliferation of GenAI, it’s more important than ever that we protect the ability of women to engage online.
Next steps for Meedan and our partners
Here’s a small sample of what we’re doing at Meedan to address these rising concerns.
In Latin America, we partnered with the feminist organization Coding Rights, which launched the Not My A.I. project to map public-sector AI initiatives that negatively impact gender equality and intersectionality.
Our latest project, supported by the Patrick J. McGovern Foundation, will allow us to address and combat the spread of dangerous misinformation and enable the verification of synthetic media and GenAI content.
For the full story, check out our unabridged blog post about GenAI’s safety dilemma.
Define: Synthetic Data
Synthetic Data is “information that's been generated on a computer to augment or replace real data to improve AI models, protect sensitive data, and mitigate bias.”
source: IBM Research
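To make the definition above concrete, here is a minimal, illustrative sketch of how synthetic data can be fabricated to mimic the shape of a real dataset without containing any real person's information. The field names, value ranges, and distributions below are assumptions chosen for the example, not drawn from any actual dataset or from IBM's work.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

def synthesize_records(n):
    """Generate n synthetic user records with plausible but fabricated values.

    Every field is invented: no real data is augmented or replaced here,
    only the *shape* of a hypothetical dataset is imitated.
    """
    regions = ["north", "south", "east", "west"]
    return [
        {
            "age": random.randint(18, 90),           # fabricated, not a real person's age
            "region": random.choice(regions),        # drawn uniformly for simplicity
            "sessions": round(random.gauss(12, 4)),  # noisy count around an assumed mean
        }
        for _ in range(n)
    ]

# A small synthetic sample, e.g. for testing an AI pipeline without real data
sample = synthesize_records(3)
print(sample)
```

In practice, synthetic data generators are fit to the statistics of a real dataset (or produced by generative models) rather than hard-coded like this, which is what lets them help protect sensitive data and mitigate bias as the IBM definition describes.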
Townsquare
March 12, 2024
The AI Journalism Lab invites journalists to apply for a three-month, hybrid, tuition-free program to explore generative AI through theory and practice and to add meaningfully to the conversation.
March 17, 2024
Mama Cash is now accepting applications from self-led feminist organizations and initiatives that defend and advance the rights of women, girls and trans and intersex people in Africa, West Asia, East Asia, South Asia, Southeast Asia and Oceania.
April 30, 2024
The Stanford Internet Observatory Trust and Safety Research Conference is now accepting presentation proposals for the September 26-27, 2024, gathering, which aims to advance research while fostering the exchange of ideas across various disciplines.
What else we’re reading
“A Tech Accord to Combat Deceptive Use of AI in 2024 Elections”
“This accord seeks to set expectations for how signatories will manage the risks arising from deceptive AI election content created through their publicly accessible, large-scale platforms or open foundational models, or distributed on their large-scale social or publishing platforms in line with their own policies and practices as relevant to the commitments in the accord.”
(AI Elections Accord)
“The AI Series: AI and the Global South” [Video]
“Nobel Peace Prize laureate Maria Ressa and the director of the Digital Futures Lab, Urvashi Aneja, explore the impact AI is already having on communities in the Global South.”
(Al Jazeera)
“Need for data access to tackle AI-powered disinformation in the Global South”
“It is critical that African researchers are able to understand the emerging effects of AI and disinformation on the continent.”
(Tech Policy Press)
Did you miss an issue of the Checklist?
Read Checklist newsletter issues here. We've explored a diverse range of subjects, including women and gender issues, crisis response, media literacy, and elections, as well as AI and big data.
If there are updates you would like us to share from your country or region, please reach out to us at checklist@meedan.com.
The Checklist is currently read by over 1,900 subscribers. Want to share the Checklist? Invite your friends to sign up here.