Google CEO Sundar Pichai believes artificial intelligence could have “more profound” implications for humanity than electricity or fire, according to recent comments.
Pichai also warned that the development of artificial intelligence could pose as much risk as that of fire if its potential is not harnessed correctly.
“AI is one of the most important things humanity is working on,” Pichai said in an interview with MSNBC and Recode, set to air on Friday, January 26. “It’s more profound than, I don’t know, electricity or fire.”
Pichai went on to warn of the potential dangers associated with developing advanced AI, saying that developers need to learn to harness its benefits in the same way humanity did with fire.
“My point is AI is really important, but we have to be concerned about it,” Pichai said. “It’s fair to be worried about it—I wouldn’t say we’re just being optimistic about it—we want to be thoughtful about it. AI holds the potential for some of the biggest advances we’re going to see.
“Whenever I see the news of a young person dying of cancer, you realize AI is going to play a role in solving that in the future. So I think we owe it to make progress too.”
Google has invested heavily in artificial intelligence research, having acquired the London-based startup DeepMind for £300 million in 2014.
DeepMind is often cited by AI experts and academics as a leading pioneer in AI research for its work developing an algorithm capable of beating human champions at the ancient board game Go, as well as for its work with the National Health Service (NHS) in the UK.
The advances DeepMind has made in this field have led some prominent figures to voice concern about the direction the research is taking. Tesla CEO Elon Musk, who also co-founded the AI research company OpenAI, said in 2016 that Google was the “only one” of the companies working on AI that he is worried about.
While leading the way in research, DeepMind also appears to be among the leading companies developing technology to address these concerns.
One example of a safety measure being put in place to harness AI’s potential is a “big red button,” first described in a 2016 peer-reviewed paper titled Safely Interruptible Agents.
“Safe interruptibility can be useful to take control of a robot that is misbehaving and may lead to irreversible consequences,” the paper states.
“If such an agent is operating in real-time under human supervision, now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions—harmful either for the agent or for the environment—and lead the agent into a safer situation.”