Three Questions for Mark Zuckerberg
From The Real Facebook Oversight Board
Tuesday’s Guardian report that Facebook allows users to praise mass killers and violent non-state actors is shocking, but not surprising. We believe President Trump should be permanently banned, and we have been calling out Facebook for months over its lax policies toward repressive regimes and dangerous nationalists. In fact, our first shadow oversight hearing will be The Case of Steve Bannon, to determine whether his hate speech and rhetoric warrant banning him from the platform.
Facing Congress on March 25th, will Mark Zuckerberg answer for Facebook’s support for violent and hateful content? Will he codify what, if anything, is enough to get a user banned from his platforms?
Ahead of Mark Zuckerberg’s testimony before the technology subcommittee of the U.S. House Committee on Energy and Commerce, we have submitted a formal brief, which you can read here. Here are our three big questions for Facebook’s founder and CEO:
- Steve Bannon, in a Facebook Live video, called for the beheading of NIAID Director Dr. Anthony Fauci and FBI Director Christopher Wray. Last November, when asked why Bannon had not been banned for inciting violence, you told the Senate Judiciary Committee that he had not violated enough policies to be removed from Facebook. Bannon is also still under indictment for using Facebook’s products in his fraudulent “We Build the Wall” scheme (those accounts still exist: Instagram; Facebook), and he is under investigation by the Federal Trade Commission (FTC) for his misuse of Facebook user data. This sounds to us like a very bad customer of your product. What, in your mind, constitutes inciting violence? How violent a threat does a Facebook user need to make to be banned from your platforms? And how much further does Steve Bannon need to flout your company’s rules before he is banned from your platform?
- The Guardian reported yesterday that your company permits users, in some circumstances, to call for the death of public figures, and that Facebook’s content moderation standards in repressive nations favor local criminal law over basic human rights. The standards you uphold often fall well beneath U.S. human rights law. Why does Facebook favor repressive governments over international human rights on its platform? Shouldn’t accepted international human rights standards be the benchmark for any company?
- On Monday, January 11, 2021, in an interview with Reuters, Sheryl Sandberg said, “I think these events [the insurrection] were largely organized on [other platforms],” and not Facebook. That claim has been widely discredited: reports from BuzzFeed News, the Center for American Progress, the Tech Transparency Project, and Avaaz all indicate that Facebook was used at scale to amplify disinformation ahead of the January 6th insurrection. Do you stand by your COO’s comments? What are you doing to make sure another insurrection cannot be organized on your platform? And how are you auditing the groups, leaders, and page administrators that spread election disinformation and incited violence ahead of January 6th?
As a bonus question: we know likes matter to you. Facebook hired a full-time pollster to monitor your approval ratings, and as of last July your approval rating was 20%, with 56% disapproving, which is actually worse than Donald Trump’s. What does it tell you about how you are running your company, and about your impact on society, that you are viewed this way?
We look forward to Mr. Zuckerberg’s answers.
Real Facebook Oversight Board members are available for interviews and live reaction.