Google's AI Drone Project With U.S. Military Provokes Outrage Among Employees

A military drone is parked in an aircraft shelter on November 17, 2015, in Indian Springs, Nevada. Isaac Brekken/Getty Images

Revelations that Google is quietly working with the U.S. military to develop technologies to analyze drone footage have reportedly provoked outrage among the tech giant's employees.

A report from Gizmodo revealed details of Google's partnership with the Department of Defense on Project Maven, an initiative that uses artificial intelligence in surveillance operations.

The project was not a secret but had not been previously reported. It came to public attention only after details from an internal Google mailing list discussing the project were leaked by employees.

Predator Drone
A U.S. Air Force MQ-1 Predator drone assigned to the California Air National Guard's 163rd Reconnaissance Wing flies near the Southern California Logistics Airport in Victorville, California, in January 2012. REUTERS/U.S. Air Force

The Department of Defense gave details of Project Maven when it was first announced last year, though Google's direct involvement was not mentioned.

"People and computers will work symbiotically to increase the ability of weapon systems to detect objects," Marine Intelligence Officer Drew Cukor said in a Department of Defense press release last year.

"Eventually we hope that one analyst will be able to do twice as much work, potentially three times as much, as they're doing now. That's our goal," he added.

Related: AI experts urge ban on weaponized AI with 'life and death powers over humans'

Google acknowledged its work with the Department of Defense, adding that it was engaged in internal discussions about how its machine learning technologies are being used.

"We have long worked with government agencies to provide technology solutions," a Google spokesperson said. "This specific project is a pilot with the Department of Defense, to provide open source TensorFlow APIs that can assist in object recognition on unclassified data."

The spokesperson added that the technology was for "non-offensive uses only," acknowledged that the use of machine learning for military applications "naturally raises valid concerns," and said the company was developing policies and safeguards around the technology's use.

Several Google employees expressed concern that the project presents ethical questions about the "development and use of machine learning," Gizmodo added. Furthermore, hundreds of artificial intelligence experts have previously warned of the dangers posed by the technology within a military context.

Final Experimental Demonstration Object Research is a bipedal robot designed by Russia's Android Technics and the Russian military research agency Advanced Research Foundation. It is capable of performing a number of complex human tasks, including firing guns and driving cars. Social Media

Open letters published in parallel last year were sent to the prime ministers of Australia and Canada, highlighting the "spectacular advances" in AI and machine learning in recent years.

"Lethal autonomous weapons systems that remove meaningful human control from determining the legitimacy of targets and deploying lethal force sit on the wrong side of a clear moral line," the open letter to Canadian Prime Minister Justin Trudeau stated.

"Canada's AI research community is calling on you and your government to make Canada the 20th country in the world to take a firm global stand against weaponizing AI," the letter indicated.