The Video Trump Shared of Pelosi Isn't Real. Here's Why Twitter and Facebook Should Leave It Up Anyway | Opinion

Last week, Speaker Nancy Pelosi famously ripped up her copy of President Donald Trump's State of the Union address on camera after he finished delivering it.

Later, the president retweeted (and pinned) a video based on it, one edited to make it appear as though the speaker had been ripping up pages throughout the speech, as if reacting contemptuously to each American credited by name, like Tuskegee Airman Charles McGee.

An official from the speaker's office has publicly asked Facebook and Twitter to take down the video, since it doesn't depict something that actually happened.

So should Twitter and Facebook take it down?

As a starting point for thinking about this, it helps to know that the video isn't legally actionable. It's political expression that rearranges the video sequence to make a point: that ripping up the speech at the end was, in effect, ripping up every topic the speech had covered.

And to show it in a video conveys a message far more powerful than just saying it—something First Amendment values protect and celebrate, at least if people aren't mistakenly thinking it is real.

So a first question is whether sites should even consider taking action against content that is otherwise legal. I believe they should, and they clearly do. For example, their terms of service prohibit types of nudity that the First Amendment protects.

But Facebook's and Twitter's current policies don't, and shouldn't, result in a takedown of the video here, although it is important for social media sites that have massive reach to make and enforce policies concerning manipulated content, rather than abdicating all responsibility. The platforms are clearly still figuring it out: each has recently updated its policies with the 2020 election no doubt in mind.

Facebook's policy is narrowly drawn around the targets of the manipulation being made to say words they didn't say. That wouldn't apply here, since the speaker isn't shown saying anything at all. Maybe Facebook's policy should be broader, but even if it's changed to cover actions as well as words, there should remain, as there are now, exceptions for expression that isn't real but makes a point.

Twitter's policy is different from Facebook's: it concerns media that have been "deceptively altered or fabricated," but to be removed, something has to result in "threats to physical safety or other serious harm."

Both Facebook and Twitter leave open the possibility of labeling videos like these as manipulated, rather than removing them, and that might rightly apply here. Even something that to most people clearly reads as satire or point-making, rather than as an attempt to deceive, can be taken seriously by others (Poe's law), and it might be helpful to label it accordingly, as long as that is done consistently.

While it shouldn't be a free-for-all, removing a video like this, one that most people would read as creative license rather than outright deception, would cause at least as many problems as it solves, so it shouldn't be taken down. Labeling it as manipulated would be far less intrusive, since it is.

In any case, it's entirely possible that for the speaker's purposes, a public debate around the president's actions here is as useful as any action by the platforms.

One thing that gives me pause: disinformation experts have pointed out that video is especially visceral and compelling. Social media and the tools for making so-called "deep fakes" are still in their infancy. We don't know what they're doing to us, or to society. We need to.

Meanwhile, there remain clear instances of outright disinformation that shouldn't be left to stand unchallenged. On Sunday, for example, the president retweeted a bald, unsupported claim about other politicians' sons doing business in Ukraine.

PolitiFact's write-up makes a persuasive case that the claim, broadcast to 72 million followers, is likely false and reckless; if so, it would also probably be, in context, defamatory.

That's a real problem, and one where the platforms offering the outsize megaphones have a role to play. Disinformation researchers are developing expertise around fact-checking and labeling that doesn't inadvertently reinforce the falsehoods being debunked. And the choice isn't just delete-or-not and label-or-not, but also promote-or-demote, something platforms already decide as a matter of course. So there's no "neutral" position here.

Finally, apart from what the "right" policy should be across all political discourse, there's the question of who should enforce it. It is dicey for the platforms to be the sole judges and enforcers of policy, as if it were simply a customer service issue. It doesn't match the gravity of the decisions called for here, and it makes them too powerful. Facebook has turned to third-party fact-checking with mixed results so far, and I've called for users themselves, indeed perhaps high school students, as part of their graded schoolwork, to do the policy review.

Ultimately, these should be decisions undertaken reflectively by exactly the kinds of people to whom the content is targeted.

Jonathan Zittrain is the George Bemis Professor of International Law at Harvard Law School, as well as co-founder and director of Harvard's Berkman Klein Center for Internet & Society.

This article was originally published on Medium. Republished with permission.

The views expressed in this article are the author's own.
