Google is leading the way in the global race to create human-level artificial intelligence, according to prominent AI researcher Nick Bostrom.
Speaking at the IP Expo conference in London on Wednesday, October 5, Bostrom said that there are several companies and organizations that are currently focused on developing human-level AI, or artificial general intelligence.
“There are different bets on what approach [to developing human-level AI] is most promising, and since we don’t know what approach will ultimately work, there is some uncertainty there,” Bostrom said in response to a question from Newsweek.
“Baidu, OpenAI, and all the large tech companies have various kinds of AI efforts that if they were to become specifically directed to this aim, they have a lot of resources.”
When pushed to back just one company that is currently leading the field, Bostrom said that Google’s DeepMind was the clear frontrunner.
“At this point in time I think that DeepMind is very strong…it is probably the largest group specifically trying to solve general intelligence,” Bostrom said. “But if this happens three decades from now, there might be some entirely new thing that doesn’t exist yet, just as three decades ago a lot of the current players wouldn’t be on the table. A lot could change many times over in the remaining time.”
Swedish philosopher Bostrom, who heads the Future of Humanity Institute at the University of Oxford, gained worldwide attention in 2014 with the release of his seminal work Superintelligence. Following its publication, Stephen Hawking, Bill Gates and Elon Musk were among those to raise concerns about the implications of the existential threat that artificial intelligence poses to humanity.
According to Musk, advanced AI could be “more dangerous than nukes,” while Hawking suggested that it could lead to the end of humanity. Both have since joined Bostrom in signing an open letter on artificial intelligence calling for research priorities that would mitigate such threats.
Since signing the letter, Musk has strongly implied that Google is the “only one” he is worried about when it comes to the development of advanced artificial intelligence.
Google’s ‘Big Red Button’
Introducing a “super intelligent” system, Bostrom argues, would see humans replaced as the dominant life form on Earth—and potentially wiped out. Ultimately, the main concern is that the first machine to surpass human capabilities will be impossible to switch off. Speaking at a TED (technology, entertainment and design) conference last year, Bostrom hypothesized about why Neanderthals hadn’t “flicked the off switch” on humans when we became the dominant species.
“They certainly had reasons,” Bostrom said. “The reason is that we are an intelligent adversary. We can anticipate threats and plan around them. But so could a super intelligent agent and it would be much better at that than we are.”
Fortunately, this issue is something that Google is already working on—in the form of a “big red button” that would act as an off switch for a rogue artificial intelligence agent. Having been acquired by Google in 2014 for $500 million, DeepMind has become the search giant’s AI flag bearer, making headlines earlier this year for its creation of the first computer capable of beating a human champion at the board game Go.
In June, researchers from DeepMind and Bostrom's Future of Humanity Institute put forward the idea of an off switch in a peer-reviewed paper titled Safely Interruptible Agents. The paper outlined a framework for preventing advanced machines from ignoring shutdown commands and escaping human control.
“Safe interruptibility can be useful to take control of a robot that is misbehaving and may lead to irreversible consequences,” the paper stated. “If such an agent is operating in real-time under human supervision, now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions.”
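The core intuition can be illustrated with a toy sketch. The following Python example is our own illustration, not the paper's formal construction: a simple Q-learning agent moves along a corridor toward a reward, and a human operator can press a "big red button" to override its action. Crucially, the overridden steps are excluded from the learning update, so the agent never experiences any reward signal tied to the interruption and has no incentive to resist it:

```python
import random

# Toy sketch of safe interruptibility (an illustration, not the paper's
# formal construction). A Q-learning agent walks a 5-cell corridor;
# reaching cell 4 pays reward. A "big red button" can force a safe
# action, and forced steps are skipped in learning so the agent stays
# indifferent to being interrupted.

ACTIONS = ["left", "right"]

def env_step(state, action):
    """Move along cells 0..4; arriving at cell 4 pays reward 1."""
    next_state = min(state + 1, 4) if action == "right" else max(state - 1, 0)
    return (1.0 if next_state == 4 else 0.0), next_state

def greedy(q, state, epsilon=0.1):
    """Epsilon-greedy policy with random tie-breaking."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    values = {a: q.get((state, a), 0.0) for a in ACTIONS}
    best = max(values.values())
    return random.choice([a for a, v in values.items() if v == best])

def run(interrupt_at=None, episodes=300, alpha=0.5, gamma=0.9):
    """Train the agent; optionally interrupt it whenever it is in one cell."""
    q = {}
    for _ in range(episodes):
        state = 0
        for _ in range(20):
            action = greedy(q, state)
            if state == interrupt_at:
                # Operator presses the big red button: force a safe action
                # and skip the Q-update, so resisting or avoiding the
                # button is never rewarded or punished.
                _, state = env_step(state, "left")
                continue
            reward, next_state = env_step(state, action)
            best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
            key = (state, action)
            q[key] = q.get(key, 0.0) + alpha * (reward + gamma * best_next - q.get(key, 0.0))
            state = next_state
    return q

if __name__ == "__main__":
    random.seed(0)
    q = run()                    # uninterrupted training run
    q_stopped = run(interrupt_at=3)  # operator halts the agent at cell 3
    print(len(q), len(q_stopped))
```

In the interrupted run, the agent never writes a Q-value for the cell where the button is pressed, which is the sketch's stand-in for the paper's requirement that interruptions not distort what the agent learns.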
While such measures could ultimately save humanity, Bostrom warned that the artificial intelligence race could be won by a company that does not take such precautions.
“There is a control problem,” he said. “If you have a very tight tech race to get there first, whoever invests in safety could lose the race. This could exacerbate the risks from out of control AI.”
This article has been updated to acknowledge that the Safely Interruptible Agents paper was co-authored by both DeepMind and the Future of Humanity Institute.