Former Google Engineer Warns AI Might Accidentally Start a War: 'These Things Will Start to Behave in Unexpected Ways'

Advancements in artificial intelligence may result in "atrocities" because the technology will behave in unexpected ways, a former Google software engineer has warned.

Computer scientist Laura Nolan left Google in June last year after raising concerns about its work with the U.S. Department of Defense on Project Maven, a drone program that used AI algorithms to speed up the analysis of vast amounts of captured surveillance footage.

Speaking to The Guardian, the software engineer said the use of autonomous or AI-enhanced weapons systems that operate without meaningful human control may have severe, even fatal, consequences.

She said: "What you are looking at are possible atrocities and unlawful killings even under laws of warfare, especially if hundreds or thousands of these machines are deployed. There could be large-scale accidents because these things will start to behave in unexpected ways.

"Which is why any advanced weapons systems should be subject to meaningful human control, otherwise they have to be banned because they are far too unpredictable and dangerous."

Nolan, now a member of the Campaign to Stop Killer Robots, said Project Maven used AI to help the military separate and identify people and objects at speed, but warned that any weapons system based solely on AI would lack the ability to make real-time judgment calls.

"How does the killing machine out there on its own flying about distinguish between the 18-year-old combatant and the 18-year-old who is hunting for rabbits?" she questioned.

"If we are not careful one or more of these weapons, these killer robots, could accidentally start a flash war, destroy a nuclear power station and cause mass atrocities," Nolan added.

A 2017 government memo described Maven as "algorithmic warfare." Its objective was to "turn the enormous volume of data available to DoD into actionable intelligence and insights."

Google's contract with the U.S. Department of Defense resulted in significant backlash from academics, security experts and a slew of its own employees. In April last year, thousands of staff signed a petition against Maven, calling for its cancellation and for the publication of a clear AI policy. "We believe that Google should not be in the business of war," the petition stated.

On June 7 last year, the tech giant confirmed it would not pursue follow-on contracts for Project Maven after the existing contract, signed in September 2017, came to an end. The same day, the firm published a new set of "AI principles" that it pledged to abide by in the future.

Chief executive Sundar Pichai said that while Google would not develop AI for use in weapons, the company would continue to work alongside the U.S. government and its military. "These collaborations are important and we'll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe," he wrote in a blog post.

MQ-1B Predator
A U.S. Air Force MQ-1B Predator unmanned aerial vehicle (UAV) carrying a Hellfire missile flies over an air base after a mission in the Persian Gulf region on January 7, 2016. John Moore/Getty