If you think future wars will be fought against robots, you aren’t alone.
“Computers will overtake humans with AI [artificial intelligence] at some point within the next 100 years,” Stephen Hawking, the renowned theoretical physicist and cosmologist, said on Tuesday at the Zeitgeist 2015 conference in London. “When that happens, we need to make sure the computers have goals aligned with ours.”
AI refers to computer systems capable of performing tasks that normally require human intelligence; Apple’s Siri and self-driving cars are current examples.
Hawking also asserted that the present concern is who controls AI. But with technology’s rapid progression, he said, the future worry will be whether AI can be controlled at all. In December, he went a step further, saying that “the development of full artificial intelligence could spell the end of the human race.”
The ability of a machine to kill, independent of human guidance, is one of the many fears expressed in a report jointly released by Human Rights Watch and Harvard Law School in April. Its authors call for a prohibition on “the development, production and use of fully autonomous weapons through an international, legally binding instrument.”
Hawking posed another possible solution: having developers of the technology carefully coordinate advancements to ensure AI stays within our control. “Our future is a race between the growing power of technology and the wisdom with which we use it,” he said.