Elon Musk and Thousands of Other Scientists Sign Pledge Not to Manufacture Killer Robots

A robot shown at the China International Robot Show in Shanghai on July 4. Thousands of scientists have signed a pledge not to manufacture autonomous machines that can harm humans. Tang Yanjun/China News Service/VCG/Getty Images

Artificial intelligence could change the way humans conduct warfare, and scientists are concerned about the consequences if life-and-death decisions are made by machines.

Thousands of scientists and organizations are now pledging not to help develop robots that can identify and harm people without human oversight. The pledge, orchestrated by the Boston-based Future of Life Institute, was announced Wednesday at the International Joint Conference on AI in Stockholm, Sweden.

Demis Hassabis at Google DeepMind and Elon Musk at the rocket company SpaceX are among more than 2,400 people who signed the pledge, which aims to discourage governments from constructing killer robots. The document calls on governments to establish laws and regulations around the development of deadly autonomous weapons.

150 companies + by 2,400+ engineers, scientists + other individuals from 90 countries commit not to participate in nor support the development of lethal autonomous weapons systems in a new pledge issued at #IJCAI2018 today https://t.co/JUrPTNpteJ @FLIxrisk pic.twitter.com/HPg6FxLl1c

— Campaign to Stop Killer Robots (@BanKillerRobots) July 18, 2018

"I'm excited to see AI leaders shifting from talk to action, implementing a policy that politicians have thus far failed to put into effect. AI has huge potential to help the world – if we stigmatize and prevent its abuse," Max Tegmark, president of the Future of Life Institute, said in a statement. "AI weapons that autonomously decide to kill people are as disgusting and destabilizing as bioweapons, and should be dealt with in the same way."

Those who signed the document agreed not to participate in or support the development, manufacture, trade, or use of lethal autonomous weapons. More than 150 AI-related organizations added their names to the pledge.

"The decision to take a human life should never be delegated to a machine," the document says. "There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable."

Independent of this effort, 26 countries at the United Nations have also explicitly endorsed the call for a ban on lethal autonomous weapons systems.

The move is the latest by concerned scientists and organizations to highlight the dangers of artificial intelligence. It follows calls for a preemptive ban on technology that campaigners fear could produce a new class of weapons used for terror.

"We cannot hand over the decision as to who lives and who dies to machines. They do not have the ethics to do so. I encourage you and your organizations to pledge to ensure that war does not become more terrible in this way," said another organizer of the pledge, Toby Walsh, professor of artificial intelligence at the University of New South Wales in Sydney.