Efforts to Avoid a 'Suicidal' AI Arms Race are Failing, Scientists Warn

An urgent call by thousands of scientists to pause the development of AI systems for six months is about to expire. Organisers say it hasn't worked.

An urgent call by scientists to pause the development of powerful AI systems and to work out a safe way forward for technologies that could profoundly alter or even threaten human life has not worked, organisers said.

"AI labs are recklessly rushing to build more and more powerful systems, with no robust solutions to make them safe," Anthony Aguirre, Executive Director & Secretary of the Board at the U.S.-based Future of Life Institute told Newsweek, as the expiry looms of a six-month pause the institute called for that was signed by over 33,000 people — including Elon Musk, CEO of SpaceX, Tesla & X, and Apple co-founder Steve Wozniak. Aguirre described the situation as potentially a "suicidal AI arms race which everyone loses."

The March 22 letter, titled "Pause Giant AI Experiments: An Open Letter," said that AI systems with "human-competitive intelligence" can pose profound risks to society and humanity. It called for a halt to the training of AI systems more powerful than the GPT-4 technology created by OpenAI.

The letter followed barely a week after the release of GPT-4, which OpenAI described as exhibiting "human-level performance on various professional and academic benchmarks". OpenAI's ChatGPT had already taken the world by storm late last year with its human-like impression of swiftly researching any subject and producing fluent text on it. Other companies rushed to catch up, including Google with its Bard chatbot.

Some critics of the Future of Life Institute's call for a pause accused it of trying to undermine the edge that OpenAI and others had established over competitors - an accusation rejected by the institute.

"(W)e published a letter to sound the alarm on the dangers of unchecked, out-of-control AI development. Since then, threats have made headlines around the world," Aguirre said in emailed comments to Newsweek. He noted that the EU had passed its first legislation to regulate AI, the U.S. Congress had held hearings on the risks, and China had passed a law covering some kinds of AI. But the U.S. needed to create a federal agency to manage the challenge, Aguirre said.

"Polls reveal the majority of Americans fear AI's potential for catastrophe and would prefer to see a slowdown. But our letter wasn't just a warning; it proposed policies to help develop AI safely and responsibly, including licensing and auditing," he said. "Such measures are also backed by public consensus: over 80% of Americans distrust AI corporations to self-regulate, and a majority support the creation of a federal agency for oversight."

He said that AI labs acknowledge massive risks and safety concerns, yet are unable or unwilling to say when or even how such a slowdown might occur. "We need our leaders to be capable of directing AI for everyone's benefit, with the technical and legal capacity to steer and halt development when it becomes perilous," Aguirre said.

Fears over Artificial Intelligence: A screen displaying the logos of Bard AI, a conversational artificial intelligence software application developed by Google, and ChatGPT. Photo by Lionel Bonaventure/AFP via Getty Images

He urged them to attend a "AI Safety Summit" set for Nov. 1 and 2 in the U.K. at Bletchley Park, the site of ground-breaking computer science and where Enigma, a German code machine used by the Nazis, was broken during World War II. The meeting was an opportunity to make progress - and to focus on the positive aspects of AI too, Aguirre said.

"This is a global effort, and at the upcoming UK summit every concerned nation must have a seat," Aguirre said, adding that this should include China as well as the U.S.

"The ongoing arms race risks global disaster and undermines AI's huge potential for good. We must not let competition between a handful of corporations threaten our shared future," he said. "China should appreciate that it too is endangered by a suicidal AI arms race which everyone loses, and it has a security interest in mitigating threats from non-state actors," he said.

The U.K.'s Department of Science, Innovation and Technology said the U.K. aimed to bring together "key countries, as well as leading companies and researchers, and civil society, to drive targeted, rapid international action on the safe and responsible development of the technology," in a comment attributed to an unnamed government spokesperson.

Asked who was invited, the spokesperson said, "We've always said AI requires a collaborative approach, and we will work with international governments to ensure we can agree on safety measures which are needed to address the most significant risks emerging from the newest developments in AI technologies. As is routine for summits of this nature, we won't speculate on potential invitees."

The U.K. government believes the meeting will complement other forums also working to pull together a response to the challenges of AI, including the OECD, Global Partnership on AI, Council of Europe, UN, G7 and G20.

A spokesman for the Chinese embassy in D.C., Liu Pengyu, made clear China wants to be part of shaping the way forward globally, telling Newsweek: "As a principle, China believes that the development of AI benefits all countries, and all countries should be able to participate extensively in the global governance of AI."

Newsweek has reached out to the White House for comment.

In July, the Biden administration indicated it preferred light regulation, announcing a "voluntary commitment" from leading AI companies to develop the technology in safe and responsible ways.

In May, the G7 - the U.S., Canada, Germany, Italy, France, Japan, the U.K. and the European Union - said in its Hiroshima Leaders' Communique that it would work to "advance international discussions on inclusive artificial intelligence (AI) governance and interoperability to achieve our common vision and goal of trustworthy AI, in line with our shared democratic values." However, some G7 members are understood to be wary of including China in discussions before the group has a shared position on how to proceed with AI, and discussions are ongoing.
