Several high-profile tech executives and AI researchers, including Tesla CEO Elon Musk and AI pioneer Yoshua Bengio, have called for a temporary pause in the rapid development of powerful AI tools in an open letter coordinated by the nonprofit Future of Life Institute. The letter, titled "Pause Giant AI Experiments: An Open Letter," is also signed by Apple co-founder Steve Wozniak, Stability AI CEO Emad Mostaque, and Center for Humane Technology co-founders Tristan Harris and Aza Raskin.

Proponents of the pause believe that a moratorium of six months or more would give the industry time to establish safety standards for AI design and head off the risks posed by the most dangerous AI technologies. According to Bengio, director of the Montreal Institute for Learning Algorithms, "We've reached the point where these systems are smart enough that they can be used in ways that are dangerous for society, and we don't yet understand." (WSJ)

The letter does not call for an end to all AI development, but rather a temporary halt to training systems more powerful than GPT-4, the technology recently released by Microsoft-backed startup OpenAI. That threshold would cover OpenAI's planned next-generation model, GPT-5. OpenAI CEO Sam Altman told The Wall Street Journal that the company has long prioritized safety in development and spent more than six months running safety tests on GPT-4 before its launch.

Max Tegmark, one of the letter's organizers, president of the Future of Life Institute, and a physics professor at MIT, expressed concern that AI is advancing faster than many experts believed possible just a few years ago. Tegmark said, "It is unfortunate to frame this as an arms race. It is more of a suicide race. It doesn't matter who is going to get there first. It just means that humanity could lose control of its destiny."

The Future of Life Institute began working on the letter last week, initially allowing anyone to sign without identity verification. Sam Altman's name appeared on the letter at one point but was later removed. Altman clarified that he never signed it and said OpenAI frequently works with other AI companies to develop safety standards and discuss broader concerns.

Elon Musk and Steve Wozniak have both previously voiced concerns about AI technology. Musk, a co-founder and early financial backer of OpenAI, stated on Twitter, "There are serious AI risk issues."

The open letter urges that a pause of at least six months be public, verifiable, and include all key players in the AI space. If such a pause cannot be enacted quickly, the letter calls on governments to step in and institute a moratorium. During this time, the authors argue, AI labs and outside experts could jointly develop shared safety protocols for advanced AI design, audited and overseen by independent experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt, reducing the risks associated with rapid AI development.

Yoshua Bengio, who shared the 2018 Turing Award for pioneering work on deep learning, the technology underpinning modern AI, said, "We do need to take time to think through this collectively. I don't think we can afford just to go forward and break things." The letter's proponents believe a six-month pause would give the industry "breathing room" without disadvantaging companies that choose to move cautiously.

The call for a pause comes amid widespread interest among tech companies and startups in generative AI, technology capable of creating original content in response to human prompts. The buzz around generative AI intensified after OpenAI unveiled ChatGPT, a chatbot that could provide detailed answers and generate computer code with human-like sophistication. Microsoft, Alphabet Inc.'s Google, Adobe Inc., Zoom Video Communications Inc., and Salesforce Inc. have all since introduced advanced AI tools.

Microsoft CEO Satya Nadella said last month, "A race starts today. We're going to move and move fast." This attitude has sparked concerns that rapid deployment of AI technology could lead to unintended consequences alongside its benefits.


