Manipulation, misinformation, fear and loathing are endemic to today’s social media platforms, whose engagement-driven algorithms are built to spread whatever messages tap into users’ viscera and provoke a quick “like” or an angry comment. Yet the platforms have delegated much of the work of moderating this content to overwhelmed contractors and fallible artificial intelligence software. The tide of hogwash and bile may recede when a super-spewer such as Mr. Trump is deplatformed. But the dynamics that enabled him endure.
It is those underlying dynamics, and not solely Mr. Trump’s right to use the platform, that any truly independent oversight of Facebook would address. Last month, the U.S. Senate began deliberating over how social media algorithms and design choices mold political discourse. While its hearing was inconclusive at best, it at least served notice that those mechanisms are a topic of potential regulatory interest.
Facebook endowed the Oversight Board with a measure of autonomy. It funded the board with an irrevocable trust, promised operational independence and pledged to treat its content decisions (though not its policy recommendations) as binding. Yet it did not empower the board to watch over its products or systems — only its rules and how it applies them.
That’s why some communication scholars have dismissed the board as a red herring, substituting a simulacrum of due process in certain high-profile cases for substantive reform. While the term “oversight board” suggests accountability for the institution it oversees, this board’s function is essentially the opposite: to shift accountability for Facebook’s decisions away from the company itself. The board’s power to adjudicate individual content decisions may be real, but it’s a power that Mr. Zuckerberg never wanted in the first place.
That’s not to say the board is a total sham. Wednesday’s decision aside, putting weighty decisions about online speech in the hands of an accomplished group of outsiders would seem more likely to lead to thoughtful and consistent rulings than leaving them to Facebook’s employees and executives. Both today and in previous decisions, the board has revealed itself to be nothing if not thorough, carefully documenting its rationale and the implications of its decisions for similar cases. But its impotence in holding Facebook to account, more than the ruling on Mr. Trump’s suspension, is what makes Wednesday’s announcement unsatisfying.
Among the many public comments that poured in when the board announced it would take the case, a submission from the Knight First Amendment Institute at Columbia University best articulated the crux of the matter. Warning that the board’s decision on Mr. Trump would serve as a “fig leaf” for Facebook’s own failures, the institute’s scholars implored the company to delay issuing a ruling until Facebook commissioned an independent study into its own role in the events leading up to the insurrection at the Capitol on Jan. 6.
There had been at least a sliver of hope that the board might take such a stand. One of its members, Alan Rusbridger, a British journalist, had called publicly in March for the board to examine Facebook’s algorithms, though he acknowledged it might not do so right away. “We’re already a bit frustrated by just saying ‘take it down’ or ‘leave it up,’” Mr. Rusbridger said, according to The Guardian.