Hibernate

Tech leaders are calling for a six-month pause on training more powerful AI systems

Elon Musk, Steve Wozniak, Yoshua Bengio, and Stuart Russell are among the 1,000+ signatories of a Future of Life Institute open letter

A smartphone displaying the ChatGPT logo rests on a computer motherboard.
Pause, please.
Photo: Dado Ruvic (Reuters)

More than 1,100 people have now signed an open letter calling for a six-month pause on the training of AI systems more powerful than OpenAI’s GPT-4. The signatories, who include tech leaders Elon Musk and Steve Wozniak and AI experts Stuart Russell and Yoshua Bengio, warn that the technology could “pose profound risks to society and humanity.”

The pause, they argue, is needed to agree on shared safety protocols that would be audited and overseen by independent experts, ensuring that systems adhering to them are safe beyond a reasonable doubt. The letter’s core demand, quoted in full below, makes clear that this is not a call to halt AI development in general.

Published online on March 22 by the Future of Life Institute, a nonprofit focused on mitigating existential risks to humanity, the letter argues that stronger governance and regulation must be put in place before society can enjoy an “AI summer.” It also warns that AI will disrupt existing political and economic systems, and that new institutions must be established to navigate those changes.

The call for caution from the tech and AI communities comes amid growing concern from Europol about the criminal applications of ChatGPT. Earlier this month, OpenAI debuted its most advanced language model to date, GPT-4, whose training data remains a mystery.

Notably, nobody from OpenAI signed the open letter, according to a report from TechCrunch, nor did anyone from Anthropic, an AI safety and research company.

Quotable: The open letter’s core demand

“These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”

What are the rules regulating AI?

The open letter’s recommendations draw on the so-called Asilomar AI Principles, a set of 23 guidelines meant to direct AI research and development. They were developed at the 2017 Asilomar Conference on Beneficial AI, organized by the Future of Life Institute and held in Pacific Grove, California.

The principles are broken down into three broad sections: research issues, ethics and values, and longer-term issues. They cover topics including AI research funding, transparency, privacy protection, avoiding an AI arms race, and working toward the common good.

The rules have been translated into five languages and signed by more than 5,700 people. Elon Musk, Stephen Hawking, and Ilya Sutskever, co-founder and research director of OpenAI, are among the signatories.

Related stories

⚖️ In a first, an Indian court turns to ChatGPT for jurisprudence

🎓 If you went to college, GPT will come for your job first

🤹 ChatGPT is getting more nuanced