The Liberal Case for Facial Recognition | Opinion

The Trump years saw a flowering of open-source intelligence by non-government organizations, starting from the days of the Charlottesville rally in 2017 and extending through the January 6 riot at the U.S. Capitol. Private groups and individual social media users took it upon themselves to discern and post the identities of those who participated in the deadly events.

For all the investigations, however, notable holes remain in the story of both rallies. We still have not found out who was carrying the Nazi flag at Charlottesville, for instance. And nobody has taken the footage shot by John Earle Sullivan at the Capitol and managed to identify many of those in the front rank of the protesters as they moved from the Rotunda to the Speaker's Lobby.

What's more, the methods used to identify rioters have been inconsistent, opaque and often partisan. Right now it isn't at all clear which among the various outings of far-right individuals were the result of Facebook digging, dumps from a third party, good old source-based journalism or the use of AI technology. In any case, the mainstreaming of this kind of doxxing of people engaged in political activity is by no means confined to the right-wing margin. That just happens to be where most of the open-source watchdogs do their work today.

Identifying those who take part in violent riots, and maintaining transparency about the identification process, are important for a multitude of reasons, not least of which is the protection of authentic, nonviolent protest speech. Political groups of all kinds—including, but not limited to, extremists—have a long history of co-optation by informants and infiltrators, going back before even the FBI's COINTELPRO program in the 1960s. Both Black Lives Matter protesters last summer and Trump supporters today allege that they have witnessed attempts at incitement, and that the people who act violently at their protests are provocateurs. Revelations about the leading role of FBI informants in a plot to kidnap Michigan governor Gretchen Whitmer only reinforced these suspicions. The healthy immune response of people on the internet when confronted with encouragement to violence is "no thanks, Fed."

CHARLOTTESVILLE, VA - AUGUST 12: People gather during the "Reclaim the Park" gathering at Emancipation Park on August 12, 2020 in Charlottesville, Virginia. Community members in Charlottesville collaborated with Congregate Cville and other Charlottesville organizations to put together the "Reclaim the Park" gathering to mark the third anniversary of a far-right rally on August 12, 2017. Eze Amos/Getty Images

What if we could know the identities of violent individuals and would-be provocateurs immediately? Computers can already do this work better than an anarchist with an internet connection. Russia, Israel and China all have reasonably successful companies that perform it.

Facial recognition technology would make it easier to gauge the authenticity of a protest—the possibility of "astroturfed" incitement by external interests would be a thing of the past. And, most importantly, it would encourage accountability when things turn violent. It may sound dystopian to talk about the government getting into the business of deciding which protesters are "authentic," but the immediate problem is that some clearly are not. Opinion polling shows that nonviolent protest redounds to the success of social movements, and violent protest doesn't. The right to communicate anonymously is important, but most would agree it shouldn't extend to people who commit violent crimes.

Public deployment of facial recognition would also address the troubling one-sidedness of the institutions investigating domestic terrorism right now. When the only players capable of doing this work are either ideologically partisan or foreign-controlled, the ability to define and portray a given protest ends up in the hands of extremists like Antifa, or even hostile governments. These groups have not shown any propensity to mitigate collateral damage. A January 6 protester who did not commit violence and had no intention of storming the Capitol, or someone on the sidelines of last year's rioting who intended only to peacefully protest racial injustice, may well find himself or herself out of a job.

This technology is already being used on American soil; the question is whether the government can work with any of its current providers. SenseTime, a Chinese AI firm implicated in human rights abuses in Xinjiang, is obviously out. AnyVision, which is expanding in the U.S. after raising $235 million, is an Israeli company that has faced criticism for its links to the surveillance of West Bank residents. There's the Polish PimEyes and Russian FindFace, which have been linked to stalking. There's Clearview AI, whose founder is alleged to have alt-right ties. And there's Banjo, which has rebranded itself as safeXai after its CEO was revealed to have been involved with Klansmen.

The only antidote to partisan and foreign-controlled public facial identification is an even-handed American alternative. Facial recognition might be judiciously used in the interest of a healthy civic culture. The main question from a policy standpoint is on what terms this technology should be used by the public—whether by law enforcement alone, or by law enforcement and some segment of the public at large, for specific purposes. Right now it's used by large corporations and other countries, including several adversaries of the United States. Either the public or the U.S. government needs to reach parity.

Arthur Bloom was the online editor of The American Conservative and deputy editor of the Daily Caller. He has been published in the Washington Post, The Guardian and the New York Post.

The views expressed in this article are the writer's own.