Distinguishing truth from falsehood is frustrating, endless, thankless work — and now Mark Zuckerberg is walking away from it.
The big picture: Facebook’s latest content-moderation pivot looks like part of a plan to win over Donald Trump as he takes power again. But the field Zuckerberg is abandoning is one he never wanted to play on in the first place.
State of play: The founders of social media giants like Facebook, Instagram, Twitter and TikTok didn’t expect to end up in what the industry came to call the “content moderation” business — and what many critics, and now Zuckerberg himself, denounce as “censorship.”
- Policing online speech costs a fortune to do right. It’s impossible to make everyone happy. You’re bound to make mistakes. And users’ wishes keep changing.
- The whole effort is a distraction from what’s always been Facebook/Meta’s top priority — boosting engagement to sell more ads.
- Meta faces huge challenges this year, particularly an April trial in the Federal Trade Commission’s suit to unwind its decade-old acquisitions of Instagram and WhatsApp. The new Trump-friendly approach to content moderation is one of many efforts to win over the new administration, which is open about rewarding friends and punishing enemies.
Zuckerberg staked out a free-speech position in a 2019 speech at Georgetown. In 2020 he said that social media networks shouldn’t try to be “arbiters of truth,” even as Facebook was ramping up its own truth arbitration.
- After taking blame for spreading misinformation during the 2016 election and violating users’ privacy during the Cambridge Analytica scandal, Facebook was under enormous pressure to clean up its act, and the company made big investments in expanding its moderation efforts.
- One key investment: a third-party fact-checking program, launched in late 2016 and expanded in the years that followed, that enlisted outside organizations from a range of political perspectives to help identify and limit the spread of potentially dangerous misinformation.
The fact-checking program has drawn fire throughout its existence.
- The kinds of topics it confronted, such as controversies over climate science, COVID-19 and vaccines, and charges of election fraud, are often matters of fact or science as well as flashpoints for partisan rage.
- Believers in fact-checking insist that there’s value to society in telling the public what is — and isn’t — authoritative information, grounded in vetted research and verifiable records, in fields like medicine and public affairs.
- Critics say there’s always another point of view that deserves to be heard, and blocking any perspective is a form of censorship.
Between the lines: Facebook tried to solve some of its content moderation headaches by setting up the independent Oversight Board.
- The company handed the Oversight Board hundreds of millions of dollars beginning in 2019 to build a kind of Supreme Court for user complaints.
- It’s been particularly effective in sorting out complex issues beyond U.S. borders.
- But it hasn’t insulated Zuckerberg and Meta from criticism from American conservatives and congressional committees.
- Notably, Meta’s announcements Tuesday failed to mention the Oversight Board at all.
Zoom out: Zuckerberg calls Meta’s new approach a “back-to-our-roots” embrace of free expression. But there’s never been any medium where absolute free speech reigned.
- Platform owners are legally obligated to obey the laws of the countries where they operate.
- In the U.S. that means dealing with laws governing what Zuckerberg describes as “legitimately bad stuff” like “drugs, terrorism, child exploitation.”
A second category where platform owners have generally felt an obligation to intervene is speech that could cause imminent harm.
- That might include death threats, violent conspiracies, or even plans to attack the U.S. Capitol.
Then there’s the category of hate speech.
- It bedevils social media owners because it’s constantly shifting and varies across cultures. But at any given time and place, some slurs violate public norms, and a global public platform can’t simply ignore them.
- Along with its other new content policies, Meta has updated its Community Standards on “hateful conduct” specifically to allow “allegations of mental illness or abnormality when based on gender or sexual orientation.” A variety of other rule changes appear to significantly loosen restrictions on what many users will consider hate speech.
A final category of content moderation — most relevant to fact-checking — is misinformation (widely shared but inaccurate info) and disinformation (misinformation with deliberate bad intent).
- Zuckerberg and other social media owners hate playing content cop in this realm and would rather let users sort such issues out for themselves.
- His plan is for Meta to copy X’s Community Notes approach, which lets users attach corrective notes to other users’ posts, with the notes displayed once enough fellow users rate them as helpful.
What we’re watching: When Elon Musk rewrote Twitter’s old content rules for X, the platform’s never-decorous conversations deteriorated further. Today you don’t have to look far on X to find posts espousing racism and antisemitism or deriding LGBTQ people.
- Musk is proud of what he’s done with X, but it hasn’t helped turn around his business.
- We don’t yet know how Zuckerberg’s version of “more free speech” will play out, but if Meta’s platforms get nastier and uglier, too, advertisers could be spooked — and users who aren’t on the MAGA side of the fence could flee.
Our thought bubble: Decades of human experience online show that running any kind of community platform is like gardening: let the weeds run wild, and they’ll choke the flowers.