On Tuesday, Mark Zuckerberg, dressed like one of Eminem's backup dancers, announced that Meta (née Facebook) would be eliminating its fact-checking department and essentially throwing in the towel on content moderation. The announcement came exactly four years to the day after Zuckerberg's decision to ban then-President Trump from Facebook (and Instagram) for using the platform to promote election lies and "cheer on" Jan. 6ers. In some ways, the timing of the announcement was more interesting than its substance. Coming the day after Congress certified Trump as the 47th president, it suggests Zuckerberg didn't want to waste any time ingratiating himself. Hold it up alongside his decision to ban Trump from Meta platforms the day after Biden was certified as the 46th president, and you could be forgiven for building a profile of Zuckerberg as a man who is quick to kowtow to whoever is in power. But announcing a change in moderation policy on the heels of the four-year anniversary of Jan. 6 is also notable because the events leading up to and culminating in Jan. 6 were an embarrassment to Facebook, revealing that even when it tried to stop the spread of disinformation, its content moderation practices were useless against the incentives of the algorithm.
In a letter to potential investors on the eve of Facebook's 2012 IPO, Zuckerberg wrote: "Facebook was not originally created to be a company. It was built to accomplish a social mission – to make the world more open and connected." By the fall of 2020, it was hard to imagine anyone still believed this. Political scientist Kevin Munger, in a New York Times op-ed, explained the shift over the last decade as the collapse of the "Palo Alto Consensus" — the consensus being, as Munger described it, that "American-made internet communication technologies (both hardware and software) should be distributed globally and that governments should be discouraged from restricting speech online." The idea was that more speech would lead to more democratic outcomes, like the Arab Spring. In practice, the "consensus" justified allowing these companies to grow quickly and scale globally without investing in time-consuming and expensive societal safeguards, and with limited government oversight. While Facebook achieved growth and connectivity (it now claims over 3 billion users), the platform's role in fueling ethnic conflict and violence in places like Myanmar and Sri Lanka undercut the credibility of Silicon Valley's professed aspiration to create more open or democratic societies. The platform's connection to events like Brexit and the general hyper-polarization of political discourse across Western countries suggested Facebook's problems might be both endemic and pervasive, and that, at least in some high-profile cases, rather than bringing people together it was driving them apart.
And so, heading into the 2020 U.S. election, Facebook was facing intense scrutiny. It made a public announcement outlining all the ways it would stop the influence of bad actors, promote transparency, and reduce misinformation. Just after the polls closed on November 3, 2020, with the results of the election still undecided, the "Stop The Steal" posts started.
In what an NPR article referred to as a "classic game of whack-a-mole," Facebook took down the initial Stop The Steal group, which already had 360,000 members, on November 5, but new groups just kept popping back up. Facebook moderators were unable to keep pace as the Stop The Steal movement proliferated across the platform, spreading lies about the election results and organizing for what would ultimately culminate in the events of Jan. 6. In an SEC complaint filed by Facebook whistleblower and former product manager Frances Haugen alleging the company misled investors about its role in Jan. 6, Haugen claims that Facebook announced its plans to combat election misinformation and violent extremism knowing "its algorithms and platforms promoted this type of harmful content, and it failed to deploy internally-recommended or lasting counter-measures." Put another way (by Josh Marshall): Zuck's "content moderation" was always a crock.
Zuckerberg's platforms can't moderate harmful content because they are designed to incentivize that kind of content. This isn't a matter of stopping disinformation, and it certainly has nothing to do with free speech. The platforms themselves are the disinformation — they distort the way we view each other by highlighting and promoting our most extreme views as though these views are the norm. And when we respond to those views, posting in disbelief or outrage about what our society is becoming, the platforms call that engagement and show us ads. You can't stop that by blocking every lie and news story that isn't true, and good luck if you think more speech will result in the algorithms moderating their preferences for extremism.
All this is to say: it's fitting that Zuckerberg chose the shadow of Jan. 6 for his big content moderation announcement. By Jan. 6 the jig was up. Now Facebook is back to its roots. Move fast and break things, am I right?! Only now we know what it's breaking.
Also on TPM… Josh Kovensky was in the Manhattan courtroom to witness a surreal hearing where Trump was sentenced to nothing and became the first convicted felon President-elect … Khaya Himmelman wrote about the predictable politicization of the LA wildfires … Kate Riga on the dubious argument being made by Republican state officials and legislators in Louisiana that since the state has moved past racial discrimination in voting, it should be able to suppress Black voting power with impunity.
In The Backchannel, Josh Marshall's take on Trump's Greenland, Panama, and Canada rhetoric: none of that is going to happen.
And on the podcast, Kate and Josh discuss Trump's attempts to evade the final dregs of legal accountability, congressional Republican dysfunction and the legacy of Jimmy Carter.
-Derick D.
(Comments or feedback? Reply directly to this email.)