Living in a time of 'fake news' laws and deepfakes
If you’ve been buried in work this week, here’s an update on all the misinformation news from around the world. And some good news... the weekend is here!
In this week’s issue of The Checklist, we're following stories about anti-fake news laws in Singapore and Ethiopia. Governments continue to argue that such laws are necessary to control the spread of misinformation, but free speech advocates have expressed concerns over the potential for abuse of that type of legislation. We also look at how deepfakes and fake faces have the potential to be weaponised online.
We’re keen to know your views on attempts at debunking misinformation around public health epidemics. In this issue we include a slightly different take on the problem.
Here's your weekly roundup!
Singapore orders Facebook to block access to States Times Review’s page under ‘fake news law’ (South China Morning Post)
On the orders of the Singapore government, Facebook has blocked Singapore-based users' access to the page of the States Times Review (STR). The Singaporean authorities have accused STR, an alternative media site critical of the government, of perpetuating misinformation. STR had been served three correction directions by the government under the Protection from Online Falsehoods and Manipulation Act (POFMA), but failed to comply with any of them. The latest correction notice was served after STR posted an article containing claims about the coronavirus (Covid-19) situation that the government deemed "entirely untrue". Facebook has expressed concerns over being forced to comply with an order to block the page of a news site.
“We’ve repeatedly highlighted this law’s potential for overreach and we’re deeply concerned about the precedent this sets for the stifling of freedom of expression in Singapore.” – A Facebook spokesperson
Ethiopia passes hate speech, disinformation legislation (Borkena)
Ethiopia's parliament has passed a law punishing "hate speech" and "disinformation" with hefty fines and long jail terms, despite rights groups saying it undermines free speech months before a major election. The new law defines hate speech as rhetoric that fuels discrimination "against individuals or groups based on their nationality, ethnic and religious affiliation, sex or disabilities". Legislators said the law is needed because existing legal provisions did not address hate speech and disinformation, and said it will not affect citizens' rights.
Ethiopia has been experiencing sometimes deadly ethnic violence since June 2018, shortly after Prime Minister Abiy Ahmed announced sweeping political reforms for which he was later awarded the Nobel Peace Prize. The government says there is a need to legislate against hate speech, because it has been partly blamed for rising ethnic violence in the East African nation. Tensions are expected to rise in advance of landmark elections due in August.
“An ill-construed law that opens the door for law enforcement officials to violate rights to free expression is no solution.” – Laetitia Bader, a senior Africa researcher at Human Rights Watch
We've Just Seen the First Use of Deepfakes in an Indian Election Campaign (Vice)
A deepfake of an Indian politician went viral on WhatsApp in early February, ahead of the state elections in Delhi, according to Vice. In the original video, Bharatiya Janata Party (BJP) President Manoj Tiwari makes a statement accusing his political opponent in Delhi of making false promises to the electorate. The original video was then deepfaked in English and Haryanvi, a popular Hindi dialect spoken in Delhi. According to Vice, the two deepfakes were shared across 5,800 WhatsApp groups in the Delhi region, reaching around 15 million people.
The BJP has partnered with a political communications firm, The Ideaz Factory, to create deepfakes that let it target voters across more than 20 languages used in India. In this particular case, the deepfakes were approved by the BJP to serve as a sort of high-tech dubbing, in which the speaker’s lips and facial expressions are synced with novel audio that utters words Tiwari had never spoken. But it’s not difficult to imagine someone faking a video to issue threats or incite hate against a specific section of the population. In a country like India, where digital literacy is nascent, even low-tech video manipulation has led to violence.
“To say only some forms of deepfakes are allowed by political parties, allows for a lot of subjectivity and interpretive power on who defines those forms.” – Tarunima Prabhakar, cofounder of Tattle, a civic tech project
How fake faces are being weaponized online (CNN Business)
Most activists harassed by online trolls are accustomed to faceless social media accounts that target them for their work and their views. Often, online trolls use fake profile pictures taken from other users’ accounts. Most of the major social media platforms have rules against using other people’s profile pictures, and impersonation complaints can be filed if identities and pictures are misused. But what if trolls use Artificial Intelligence (AI) to generate faces of people who do not exist? By doing so they can potentially avoid being reported for impersonation.
Social media activist Nandini Jammi, co-founder of Sleeping Giants, a group that campaigns for companies not to run ads on websites that allow the spread of discrimination and hate, has been attacked by one such troll. The troll, "Jessica", appearing as a woman with blonde hair and a beaming smile, claimed to have potentially embarrassing information about Jammi from an old online dating profile. "Jessica" was part of a coordinated network of around 50 accounts run by the same person or people, Twitter confirmed to CNN Business.
Last year, Phil Wang, a software engineer, created a website called "This person does not exist." Every time you visit the site you see a new face, which in most cases looks like a real person. However, the faces are created using AI. Wang cautioned that fake faces like "Jessica" may just be a small sign of what's to come.
"Faces are just the tip of the iceberg. Eventually the underlying technology can synthesize coherent text, voices of loved ones, and even video. Those social media companies that have AI researchers should allocate some time and research funds into this ever growing problem." – Phil Wang
Helping the truth catch up: Full Fact, Africa Check and Chequeado turn to research to fight bad information (Full Fact)
What are effective and creative ways to address the spread of false and misleading information? Full Fact, an independent fact-checking group in the United Kingdom, partnered with two other independent fact-checking groups, Africa Check and Chequeado, to research this issue. The objective was to equip fact-checkers with the tools to understand how misinformation travels and what tactics work best to stop it. The three groups analysed academic research and fact-checking experiments. Here are their main findings for journalists:
Make good use of visuals. Posts with pictures are far more engaging than videos and text-only posts. Jargon-free stories work best to convey information to readers.
Older people and those with modest education levels find it difficult to separate fact from fiction. We all tend to share content that is new, political and emotionally charged, and do it even when we know it’s wrong.
Media literacy initiatives can have positive results. They can make audiences think critically about issues. Fact-checkers can contribute to a culture of accuracy by guiding politicians and journalists to correct themselves.
“This research project was designed to help the truth catch up and protect more people from harm. We wanted to use the best available academic evidence, and equip fact checkers with the tools to understand how misinformation travels, and what tactics work best to stop it.” – Full Fact
Attempts at Debunking “Fake News” about Epidemics Might Do More Harm Than Good (Scientific American)
How and why do false beliefs arise during public health crises? We’ve been following efforts to debunk misinformation around the Coronavirus disease (Covid-19). We’ve also been flagging the need for credible, quality health information during such epidemic outbreaks. A study of two earlier epidemics, published just as new reports about Covid-19 continued to mount, reveals the difficulty of reversing false rumours about a health crisis.
Researchers at Dartmouth College, IE University in Spain and other institutions conducted social science experiments showing that attempts to counter false beliefs about the Zika virus with information from the World Health Organization were often counterproductive: the debunking failed to lower misperceptions and even reduced respondents’ confidence in accurate information about the epidemic.
The study may point toward public health campaigns that rely less on furnishing debunking information and more on simple messages about the best measures to adopt. The extent to which false beliefs can be corrected will require further studies of different epidemics. We’re keen to know your experience and views on this.
“The results of the study might be disheartening, but in the realm of misinformation, it is as important to figure out what existing programs don’t work as it is to figure out what programs might work.” – Adam Berinsky, Professor of Political Science at the Massachusetts Institute of Technology