Democracy dies in darkness, so let’s see if we can shed some light on this Post editorial, published yesterday under the headline: “Twitter and Facebook were right to suppress a Biden smear. But they should tell us why they did.” The piece itself opens by admitting there’s a problem:
FACEBOOK AND TWITTER don’t want to make the same mistakes that marred this country’s last presidential election, but righting old wrongs can introduce new obstacles.
Last week, Facebook reduced the distribution of a dubious story by the New York Post that smeared Democratic nominee Joe Biden, pending third-party fact-checking. Twitter blocked the URL from being shared altogether. Both platforms made the correct decision to slow what so far seem to be baseless accusations backed up by leaked emails of murky origin — yet the way the sites made that decision matters, too. The confusing and opaque process that accompanied the positive outcome threatens to render pyrrhic any victory over the forces of misinformation and meddling.
The Post has obviously taken a side here, saying the NY Post story “smeared” Biden without really justifying that claim at all. If the emails are real, is this a smear or just a legitimate story about influence peddling by the candidate’s son? In any case, it’s clear up front that the Post is aware of what’s at stake: Joe Biden’s image.
The Post then states that Twitter and Facebook made the right choice to block the story even though their reasons for doing so don’t make much sense. Specifically, Facebook claimed it was slowing distribution in advance of a fact check because of “signals” of falsehood. But as the Post admits: “The problem is that no one knows what the signals in question are…”
Well, yes, that does seem to be a problem. And Twitter’s decision to block the story was just as incomprehensible. It claimed the material was hacked and therefore needed to be blocked, but how did it know this? Once again, the Post admits, “Twitter never shared its basis for believing the materials were hacked.” In fact, after claiming this was its reason for blocking the story, Twitter changed its policy on hacked material.
So we have two social media giants reaching the same conclusion, with neither able to offer a clear reason for it. That almost leads the Post to an uncomfortable conclusion:
Allegations of partisan bias in content-moderation decisions have never been borne out by the evidence. Yet it’s much easier to launch such allegations when platforms aren’t clear about precisely what their rules are and precisely how they’re being applied.
This is the Post’s way of saying: This looks a lot like partisan bias, and that’s bad because conservatives might seize on it as proof that bias exists.
The problem, of course, is that there isn’t a better explanation than the one the Post’s editorial board is trying so hard to avoid. Partisan bias not only makes sense of what Facebook and Twitter did (protecting Biden); it also explains why the process was “confusing and opaque” (because this wasn’t a case of applying the existing rules fairly).