Wikipedia misinformation, Facebook removals and anonymity loss in India
Hello Checklisters! A big welcome to our new subscribers; it's great to have you here! Our effort is to bring you misinformation trends and updates from around the world. This week has definitely been eventful. We're watching Facebook's suspension of misinformation-spreading accounts in Myanmar, Russia, Vietnam and Iran, and the possibility of major privacy loss for social media users in India.
This week Meedan and partners gathered around the world, for a developers retreat in Bahia, Brazil, and for meetings with collaborators in Delhi and Mumbai, India. We are building tools that help journalists and civil society organizations communicate better with the public.
Here's your weekly roundup!
400 million social media users are set to lose their anonymity in India (Economic Times)
In an effort to curb the spread of misinformation, India's lawmakers are set to put forth new rules requiring major platforms like Facebook, Instagram and TikTok to hand over the identities of users when the government requests that information.
Platforms are, unsurprisingly, not happy about this and are pushing back, arguing that the rules would violate the privacy rights recognized by India's Supreme Court (although so far no case has been tried in court). Some platforms have also warned that the rules will lead to increased censorship and surveillance.
"The requirement comes as governments around the world are trying to hold social media companies more accountable for the content that circulates on their platforms, whether it’s fake news, child porn, racist invective or terrorism-related content. India’s new guidelines go further than most other countries’ by requiring blanket cooperation with government inquiries, no warrant or judicial order required." - Economic Times
Facebook Removes Accounts in Myanmar, Vietnam, Russia and Iran for Misinformation (Wall Street Journal)
This is the company's latest effort to control disinformation and misinformation on its platforms; Facebook has removed billions of accounts before. This time the tech giant says it took down dozens of groups, pages and accounts from Russia, Iran, Vietnam and Myanmar for "coordinated inauthentic behaviour" and for "engaging in foreign or government interference...on behalf of a government or foreign actor — on Facebook and Instagram," according to a Feb. 12, 2020 press release.
"The first operation originated in Russia and primarily targeted Ukraine and its neighboring countries. The second originated in Iran and focused mainly on the US. The third network originated in Myanmar and Vietnam and targeted audiences in Myanmar. Each of them created networks of accounts to mislead others about who they were and what they were doing. We have shared information about our findings with industry partners." - Facebook
Reuters plans to help Facebook on 2020 election misinformation, but critics remain disappointed (Inverse)
While we're on the topic of Facebook misinformation, Reuters is pairing up with the company to flag false or misleading claims during the 2020 U.S. election and beyond. If content is flagged as misinformation, it will be labeled as such and demoted by the algorithm, Facebook says, but it won't be removed unless it violates Facebook's community standards.
"You're basically negotiating with the foxes to keep the hens safe," Binkowski says. "What happens is Facebook gets to control whatever output it wants and still with no transparency at all, so we fact-checkers can break our backs for the truth and their algorithms will still merrily spread disinformation all over the world and corrode every democracy it touches." - Brooke Binkowski, former managing editor of Snopes
On Wikipedia, a fight is raging over coronavirus disinformation (Wired)
Misinformation related to the novel coronavirus epidemic continues to rage across the internet. The English-language version of Wikipedia has seen a surge in articles related to the epidemic, and those articles have drawn more than 18 million readers since the beginning of January.
Wikipedia's mission of providing free online information matters to the many people who turn to it during epidemics. However, its free-to-edit collaborative model also makes it susceptible to misuse. At the moment, that editorial model means various theories about the origin of the virus are circulating on the platform: bats and snakes have both been blamed as the source, for example, and one theory even linked its spread to the Australian bushfires! Editors concerned about reliability have emphasized the use of accurate, high-quality sources.
"Early information about developing events tends to be unreliable – including, in some cases, when it comes from scientists." - Marielle Volz, a volunteer Wikipedia editor
Facebook, Amazon, Google and more met with WHO to figure out how to stop coronavirus misinformation (CNBC)
Officials from the World Health Organization met with members from some of the largest tech platforms to discuss how each company was responding to the coronavirus outbreak. The major topic of discussion was how the companies are working to tamp down the spread of misinformation. Representatives from Facebook, Amazon, Twilio, Dropbox, Alphabet’s Google, Verizon, Salesforce, Twitter and YouTube attended the meeting. Private companies including Airbnb, Kinsa and Mapbox also attended. Among the priorities the tech companies have outlined in recent weeks are efforts to work with third-party fact-checkers and public health organizations. The companies agreed to work on collaborative tools, better content and a call center where people can ask questions or get advice.
“I encouraged collaboration and innovation. During a crisis, it’s a good time for that.” - Andy Pattison, WHO
Out-of-context photos are a powerful low-tech form of misinformation (The Conversation)
Most of us have been exposed to visual misinformation in which old photographs and videos are recycled and presented as evidence of recent events. We've witnessed this in various elections as well as during outbreaks of disease. According to Lisa Fazio, a psychologist at Vanderbilt University, out-of-context photographs serve as apparent proof that an event occurred. Photos also capture attention and help people retrieve related information from memory, which makes them a particularly potent form of misinformation. And unlike deepfakes, they are incredibly simple to create.
"As consumers and users of social media, we have a responsibility for ensuring that information we share is accurate and informative. By keeping an eye out for out-of-context photographs, you can help keep misinformation in check." - Lisa Fazio, Assistant Professor of Psychology, Vanderbilt University
Open Source Investigation
VKontakte vs. Facebook: From Open White Supremacy To Stealth
Bellingcat
A look inside a "vibrant community of American racists who 'hide their power level' just enough to avoid being banned, while subtly pushing their views on friends and family."
In March 2019, Facebook banned white nationalist and white separatist content. Bellingcat's investigation reveals that far-right users are exploiting loopholes in the ban.