Google Won't Make AI to Murder People, But Will Still Help the U.S. Military

In the wake of backlash over its involvement in a U.S. military drone program, Google CEO Sundar Pichai on Thursday released a set of artificial intelligence principles pledging that the technology company would never "design or deploy" AI to aid weapons systems or surveillance.
Instead, the company said its AI applications would be socially beneficial, avoid bias, be tested for safety, include strong privacy protections and be accountable. Yet despite seemingly heeding the many critics of "Project Maven," the Department of Defense-led operation to use AI to analyze bulk drone surveillance footage, Google said it will still work on contracts with the U.S. government and military.
"We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas," Pichai wrote.
"These include cybersecurity, training, military recruitment, veterans' healthcare, and search and rescue," he continued. "These collaborations are important, and we'll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe."
The CEO said Google would not make technologies "that cause or are likely to cause overall harm." He wrote: "Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints." It is unclear what such benefits would be. Google did not respond to a request for comment.

Last month, hundreds of academics urged the Mountain View, California, company to abandon all work on the Maven project, which they argued could eventually be used to aid targeted killings. In April, more than 3,000 Google staffers signed a petition opposing the company's involvement in warfare technology. Gizmodo, which first reported the news, revealed that some employees had resigned over the matter.
Google initially maintained the work was for "non-offensive purposes," even though an internal project memo stated in black and white that it would help "enhance military decision-making."
Diane Greene, CEO of Google Cloud, confirmed in a blog post on Thursday that the company would not pursue follow-on contracts for Project Maven once the current contract expires in 2019. She rejected calls for Google to cancel its DoD work immediately, saying the company needed to fulfill its existing obligations.
"There has been public focus on a limited contract we entered into in September 2017 that fell under the U.S. Department of Defense's Maven initiative," Greene wrote. "This contract involved drone video footage and low-res object identification using AI, saving lives was the overarching intent."
She added: "There have been calls for Google to cancel the September 2017 contract with the [DoD]. I would like to be unequivocal that Google Cloud honors its contracts. We will not be pursuing follow on contracts […] and because of that, we are now working with our customer to responsibly fulfill our obligations in a way that works long-term for them and is also consistent with our AI principles."
The news that Google was backing out of the military operation was welcomed by some campaigners. One insider told Gizmodo, however, that the AI principles were little more than "a hollow PR statement."
"Google bosses have listened to their staff and done the right thing by backing out of Project Maven," said Jennifer Gibson, drone expert at rights group Reprieve. "They are now in a position to actually change the rules of the drone program for the better. Google should use the influence it has to set strong ethical standards that ensure the U.S. government cannot exploit life-improving technology."
Greene also stressed that Google would continue to support the "government, military and our veterans."
