Hawking, Musk, Wozniak: Ban Artificially Intelligent, Autonomous Weapons in War

Apple co-founder Steve Wozniak speaks during the EBN Congress, in Londonderry, Northern Ireland on May 30, 2013. Wozniak, physicist Stephen Hawking and Tesla founder Elon Musk have joined over 1,000 AI and robotics researchers in signing an open letter calling for a ban on "offensive autonomous weapons." Cathal McNaughton/Reuters

More than 1,000 artificial intelligence experts, including Stephen Hawking, Elon Musk and Steve Wozniak, have signed an open letter calling for a ban on fully autonomous weapons, meaning weapons capable of selecting and killing targets without human operators.

“Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is—practically if not legally—feasible within years, not decades, and the stakes are high,” the letter reads.

The authors acknowledge a common argument made in favor of the technology: that replacing soldiers with machines ultimately reduces the deploying side's battlefield casualties. But in reducing casualty rates, they argue, autonomous weapons would make armed conflict less costly to start and therefore more frequent.

They also contend that a single stocked arsenal is all that's needed to provoke a global arms race. And unlike nuclear weapons, which require costly and hard-to-obtain materials, fully autonomous weapons could be mass-produced cheaply.

“The key question for humanity today is whether to start a global AI arms race or to prevent it from starting,” the letter reads. “It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc.”

This letter is just the latest call for a preemptive ban on such weaponry. In April, a joint report released by Human Rights Watch and Harvard Law School argued that fully autonomous weapons should be prohibited by international treaty. The report, entitled "Mind the Gap: The Lack of Accountability for Killer Robots," asserts that under existing law, the humans who manufacture, program and command fully autonomous weapons would escape liability for any suffering the weapons caused.

This is also not the first time these big names in AI have warned about the technology’s future use. “Computers will overtake humans with AI at some point within the next 100 years,” Hawking said in May. “Our future is a race between the growing power of technology and the wisdom with which we use it.”

The AI experts’ open letter will be presented on Tuesday at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina.