Online misinformation is leaching out from cheap mobile phones and free Facebook plans used by millions in the Philippines, convincing many to reject vaccinations for polio and other deadly diseases.
Childhood immunisation rates have plummeted in the country — from 87 percent in 2014 to 68 percent — resulting in a measles epidemic and the reemergence of polio last year.
A highly politicised campaign that led to the withdrawal of dengue vaccine Dengvaxia in 2017 is widely seen as one of the main drivers of the fall.
But health experts also point to an explosion of vaccination-related misinformation that has undermined confidence in all types of immunisations.
In the northern city of Tarlac, government nurse Reeza Patriarca watched in horror as Facebook posts falsely claiming that five people had died after receiving an unspecified vaccination spread through her community.
The posts, which have been shared thousands of times, went online in August, weeks after the relaunch of a World Health Organization-backed polio immunisation drive.
The Tarlac government and national health department issued statements saying no one had died, but Patriarca said the misinformation proved stronger than the truth for many parents.
“It spread like crazy. In the second week, more and more people were refusing,” said Patriarca, 27, whose health unit was administering the vaccine across nine neighbourhoods.
“Some believed the (government) explanation, others didn’t. We couldn’t force them.”
The false report in Tarlac even deterred people from getting free flu jabs in the nearby city of San Jose del Monte.
Health worker Rosanna Robianes said elderly people who would normally queue at her centre for their shot did not show up.
“They said it’s because of Facebook, that there’s a report that people who had been vaccinated in Tarlac had died,” she said.
Twitter (TWTR) is reassessing how its misinformation labels appear and reach users, the microblogging site’s head of site integrity told a news service.
The San Francisco social-media company currently attaches small blue notices to false tweets.
It is assessing how to make these signals more “overt” and “direct,” Twitter’s Yoel Roth told Reuters.
Roth made no mention of whether the changes would be implemented before the Nov. 3 U.S. election.
The changes will include testing a reddish-magenta color that is more visible, Roth told the news service.
Twitter reduces the reach of tweets that it labels for false content by limiting their visibility and not recommending them in search results, Reuters reported.
Feedback from users tells the company that they want to know whether an account has been repeatedly labeled, Roth said. Twitter will consider whether to flag users who constantly post false information, he said.
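A repeat-offender flag of the kind Roth describes can be pictured as a simple tally of labels per account. The sketch below is a hypothetical illustration in Python; the threshold, function names, and data shapes are assumptions for the example, since Twitter has not described any implementation:

```python
from collections import Counter

# Hypothetical sketch: count how often each account's tweets have been
# labeled, and surface accounts that cross a repeat-offender threshold.
# The threshold and data shapes are illustrative assumptions, not
# Twitter's actual policy.
REPEAT_OFFENDER_THRESHOLD = 3

def repeat_offenders(labeled_tweets, threshold=REPEAT_OFFENDER_THRESHOLD):
    """labeled_tweets: iterable of (account_id, tweet_id) pairs for
    tweets that received a misinformation label."""
    counts = Counter(account for account, _ in labeled_tweets)
    return {account for account, n in counts.items() if n >= threshold}

labels = [("a", 1), ("a", 2), ("a", 3), ("b", 4)]
repeat_offenders(labels)  # {"a"}
```

The point of the tally is that any single label can be contested, but a persistent pattern across an account's history is a stronger signal.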
Twitter said it had labeled thousands of posts, including some tweets by President Donald Trump.
In May, Twitter began labeling fabricated media, expanding its labels to cover coronavirus misinformation and misleading tweets about elections and civic processes.
Twitter has been criticized over the transparency of its interventions, according to Reuters. The company doesn’t keep a public list of when it has applied labels, nor does it disclose data that would let outsiders assess how those labels affect a tweet’s spread or the reactions to it.
The company consults with partners, including election officials, on its labeling.
In September, the social-media company said it would label or remove tweets claiming election victory before the results were confirmed.
Twitter has confirmed it’s working on a new feature, currently dubbed “Birdwatch,” that could let the Twitter community warn one another about misleading tweets that could cause harm.
There’s an awful lot we don’t know about the idea, including whether Twitter will actually release it to the public or how it might work in its final form, but enough has leaked out to give a pretty fair glimpse. The feature, we understand, is still early in development and would not be released ahead of the US election.
As TechCrunch notes, the existence of such a tool was first discovered in August by Jane Manchun Wong, who often digs through app code for evidence of unreleased features. At a basic level, the idea is that you’ll be able to attach a note to a misleading tweet:
Twitter is working on a moderation tool to monitor misinformations on Twitter
Moderators can flag tweets, vote on whether it is misleading, and add a note about it
(I made up my own note to show what it currently looks like) pic.twitter.com/YIa6zt58Fj
— Jane Manchun Wong (@wongmjane) August 5, 2020
And as of late September, social media consultant Matt Navarra spotted a dedicated “Add to Birdwatch” button below a piece of content he’d tweeted:
MORE INFO about Twitter’s ‘Birdwatch’ feature spotted.
Looks like it allows you to attach notes to a tweet.
May allow you to create public and private notes. pic.twitter.com/GNGEg2AmwT
— Matt Navarra (@MattNavarra) October 1, 2020
As of October 3rd, Birdwatch even appears to have its own miniature survey to fill out as you’re reporting a piece of content, with options to take either side (misleading/not misleading) in the debate about a particular piece of information, as well as to indicate how much harm you believe the tweet could cause.
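Taken together, the leaks describe a data model in which contributors attach notes to a tweet, vote on whether it is misleading, and rate its potential harm. The following is a minimal illustrative sketch of such a model; every class name, field, and scale here is an assumption for the example, not Birdwatch’s actual design, which remains unreleased:

```python
from dataclasses import dataclass, field

# Illustrative community-notes data model along the lines described
# above: contributors attach notes to a tweet, rate it misleading or
# not, and report a harm level. All names and scales are assumptions.
@dataclass
class Note:
    author: str
    text: str
    misleading: bool   # contributor's side in the misleading/not debate
    harm: int          # 0 = none .. 3 = severe (an assumed scale)

@dataclass
class TweetNotes:
    tweet_id: int
    notes: list = field(default_factory=list)

    def add_note(self, note: Note) -> None:
        self.notes.append(note)

    def consensus_misleading(self) -> bool:
        """Simple majority of contributor votes."""
        votes = [n.misleading for n in self.notes]
        return sum(votes) > len(votes) / 2

tn = TweetNotes(tweet_id=42)
tn.add_note(Note("alice", "Cited source contradicts this claim", True, 2))
tn.add_note(Note("bob", "Seems accurate to me", False, 0))
tn.add_note(Note("carol", "Misstates the study", True, 1))
tn.consensus_misleading()  # True
```

A simple majority is the crudest possible aggregation; any real system would have to weigh contributor reliability and guard against brigading.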
By Amanda Seitz and Beatrice Dupuy | Associated Press
CHICAGO — News Friday that President Donald Trump and first lady Melania Trump had tested positive for COVID-19 sparked an explosion of rumors, misinformation and conspiracy theories that in a matter of hours littered the social media feeds of many Americans.
Tweets shared thousands of times claimed Democrats might have somehow intentionally infected the president with the coronavirus during the debates. Others speculated in Facebook posts that maybe the president was faking his illness. And the news also ignited constant conjecture among QAnon followers, who peddle a baseless belief that Trump is a warrior against a secret network of government officials and celebrities that they falsely claim is running a child trafficking ring.
In the final weeks of the presidential campaign, Trump’s COVID-19 diagnosis was swept into an online vortex of coronavirus misinformation and the falsehoods swirling around this polarizing election. Trump himself has driven much of that confusion and distrust on the campaign trail, from his presidential podium and his Twitter account, where he’s made false claims about widespread voter fraud and hawked unproven cures for the coronavirus, such as hydroxychloroquine.
“This is both a political crisis weeks before the election and also a health crisis; it’s a perfect storm,” said Alexandra Cirone, an assistant professor at Cornell University who studies the effect of misinformation on government.
Facebook said Friday that it immediately began monitoring misinformation around the president’s diagnosis and had started applying fact checks to some false posts.
Twitter, meanwhile, was monitoring an uptick in “copypasta” campaigns about Trump’s illness. “Copypasta” campaigns are attempts by numerous Twitter accounts to parrot the same phrase over and over to inundate users with messaging, and they are sometimes signals of coordinated activity. The social media company said it was working to
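Detecting a campaign of that kind can start with something as simple as normalizing tweet text and counting how many distinct accounts post the same phrase. The sketch below is a hypothetical illustration, not Twitter’s actual system; the normalization rules and the account threshold are assumptions:

```python
import re
from collections import defaultdict

# A minimal sketch of one way to surface "copypasta" campaigns:
# normalize tweet text and flag phrases posted verbatim by many
# distinct accounts. Normalization and the threshold are assumptions
# for illustration, not Twitter's actual detection logic.
def normalize(text: str) -> str:
    text = text.lower()
    text = re.sub(r"https?://\S+", "", text)   # drop links
    text = re.sub(r"[^a-z0-9\s]", "", text)    # drop punctuation/emoji
    return " ".join(text.split())

def copypasta_candidates(tweets, min_accounts=50):
    """tweets: iterable of (account_id, text) pairs. Returns phrases
    posted by at least min_accounts distinct accounts."""
    accounts_by_phrase = defaultdict(set)
    for account, text in tweets:
        accounts_by_phrase[normalize(text)].add(account)
    return {phrase for phrase, accounts in accounts_by_phrase.items()
            if len(accounts) >= min_accounts}
```

Counting distinct accounts rather than raw posts matters here: one account repeating itself is noise, while hundreds of accounts posting an identical phrase is the kind of coordination signal the article describes.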