Facebook issued two statements in the past week relating to its treatment of “misinformation” — and they couldn’t have been more different.
The first was a single paragraph updating their policy on stories speculating that Covid-19 is a man-made virus — after almost every major media outlet, and yesterday even the British and American security services, finally acknowledged that it is a feasible possibility.
“In light of ongoing investigations into the origin of COVID-19 and in consultation with public health experts,” a Facebook spokesman said, “we will no longer remove the claim that COVID-19 is man-made or manufactured from our apps.”
In other words, Facebook now believes that its censorship of millions of posts in the preceding months had been in error. There was, of course, no hint of apology in its most recent statement; though its tone proved quite the contrast to Facebook’s boast last year that, in April alone, it displayed “warnings” on 50 million “pieces of content related to Covid-19”. That was just the start: in February this year, Facebook even placed a warning on a piece for UnHerd by Ian Birrell, an award-winning investigative reporter who has been writing about the origins of Covid-19 since the start of the pandemic.
“When people saw those warning labels, 95% of the time they did not go on to view the original content,” the company says. Moreover, if an article is rated “false” by their “fact checkers”, the network will “reduce its distribution”. This means the network can quietly suppress a post so that it is not widely disseminated — without the author ever being aware that censorship is taking place.
The second announcement — released on the same day — was that Facebook is now extending its policy of “shadow-banning” accounts that promote misinformation. “Starting today, we will reduce the distribution of all posts in News Feed from an individual’s Facebook account if they repeatedly share content that has been rated by one of our fact-checking partners.” So now, if you share something deemed to contain misinformation multiple times, your account could be silenced; you won’t be informed, you won’t know to what degree your content will be hidden and you won’t know how long it will last — all thanks to a group of “fact-checkers” whose authority cannot be questioned.
The fact that this announcement was made on the very same day as Facebook’s admission of error shows how unaccountable these global superpowers are, as well as the extent to which they can act as they please without fear of repercussion. Indeed, it’s hardly surprising that they have increasingly adopted the paraphernalia of governments: Facebook’s “Oversight Board” includes ex-politicians (who it appoints), has its own constitution and passes down “binding” judgements on the company.
Yet imagine if a similar error had been made by a democratic government. There would be consequences; a public inquiry, perhaps, as well as demands for a change in policy and for people to resign. But Facebook — the sixth largest company in the world, whose apps are a source of information for 3.45 billion people, over half the world’s adults — doesn’t simply continue with its programme of cleansing “misinformation”; it doubles down on it.