LET'S HYPOTHESIZE

OpenAI sees the IAEA as the future model for regulating AI

If AI becomes smarter than humans, could an agency similar to the atomic energy regulator keep tabs on it?

Sam Altman, CEO of OpenAI. Photo: Elizabeth Frantz (Reuters)

Last week before Congress, Sam Altman, CEO of OpenAI, laid out how the US government should regulate artificial intelligence companies like his. Besides calling for a new agency to oversee AI and license the development of large-scale models, Altman advocated safety standards and auditing requirements for the technology.

But Altman, whose company created ChatGPT, didn’t stop there.

One scenario sees AI models becoming more intelligent than people, a development that would require special regulation. In a blog post published on May 22, OpenAI co-founders Altman, Greg Brockman, and Ilya Sutskever suggest the International Atomic Energy Agency (IAEA) as a blueprint for regulating “superintelligent” AI.

Under OpenAI’s proposal, such advanced AI would be subject to an international authority that can inspect systems, require audits, test for safety standards, and restrict the technology’s deployment. The first step would be for companies to draw up a list of potential requirements for countries to implement. The agency would focus on reducing existential risks, leaving more specific issues—like defining what an AI is allowed to say, for example—with individual nations.

How effective is the IAEA?

The IAEA’s mixed record in controlling the spread of nuclear technology offers little reassurance that an agency modeled on it would fare any better at regulating AI.

Headquartered in Vienna, Austria, the regulator, which works with more than 170 member countries, aims to promote the safe, secure, and peaceful use of nuclear energy. Its role also includes establishing a framework for efforts to strengthen international nuclear safety and security, and helping ensure that member nations adhere to the Nuclear Non-Proliferation Treaty of 1968.

But there have been critiques of the IAEA’s effectiveness. In one major blow, North Korea, which joined in 1974, expelled IAEA inspectors and eventually left the agency in 1994. More recently, the IAEA has had trouble keeping up with inspections, thanks to budget cuts and the growing accessibility of nuclear technology, then–director general Yukiya Amano said in 2019. These challenges suggest that an international regulator for AI could face similar headwinds.

OpenAI proposes different levels of AI regulation

Not all AI models present the same level of risk, OpenAI says. “[It’s] important to allow companies and open-source projects to develop models below a significant capability threshold” without being subjected to regulation via licenses or audits, the co-founders note in their blog post. But it’s unclear which models would fall under this umbrella, and thus outside the regulation Altman proposed to Congress.

Current AI models carry risks “commensurate with other Internet technologies” that society can probably manage, Altman and his colleagues add. The superintelligent AI systems they’re “concerned about will have power beyond any technology yet created.”

Superintelligence may sound far out, but the OpenAI executives aren’t the only ones warning about the potential dangers of advanced AI models. Their post comes after a recent letter, signed by tech leaders including Elon Musk and Apple co-founder Steve Wozniak, urged AI labs to pause the training of systems more powerful than OpenAI’s GPT-4, citing worries over “unknown unknowns.”