OpenAI co-founder and former chief scientist Ilya Sutskever announced he is launching a new AI firm focused on developing a “safe superintelligence.”
Former OpenAI researcher Daniel Levy and former Apple AI lead Daniel Gross are also co-founders of the firm, dubbed Safe Superintelligence Inc., according to the June 19 announcement.
According to the firm, superintelligence is “within reach,” and ensuring that it is “safe” for humans is the “most important technical challenge of our age.”
The firm said it intends to be a “straight-shot safe superintelligence (SSI) lab” with technology as its sole product and safety as its primary goal. It added:
“We are assembling a lean, cracked team of the world’s best engineers and researchers dedicated to focusing on SSI and nothing else.”
Safe Superintelligence Inc. said it aims to advance capabilities as quickly as possible while pursuing safety. The firm’s singular focus means that management overhead, short-term commercial pressures, and product cycles will not divert it from its goal.
“This way, we can scale in peace.”
The firm added that investors are on board with the approach of prioritizing safe development over everything else.
In a Bloomberg interview, Sutskever declined to name financial backers or state the amount raised so far, while Gross said only that “raising capital is not going to be” an issue for the company.
Safe Superintelligence Inc. will be based in Palo Alto, California, with offices in Tel Aviv, Israel.
Launch follows safety concerns at OpenAI
The launch of Safe Superintelligence follows a dispute at OpenAI. Sutskever was part of the group that attempted to remove OpenAI CEO Sam Altman from his role in November 2023.
Early reporting, including from The Atlantic, suggested that safety was a concern at the company around the time of the dispute. Meanwhile, an internal company memo suggested Altman’s attempted firing was related to a communication breakdown between him and the firm’s board of directors.
Sutskever left the public eye for months after the incident and officially left OpenAI in May. He did not cite a reason for his departure, but recent developments at the AI firm have brought the issue of AI safety to the forefront.
OpenAI employees Jan Leike and Gretchen Krueger recently left the firm, citing concerns about AI safety. Meanwhile, reports from Vox suggest that at least five other “safety-conscious employees” have left since November.
In the same Bloomberg interview, Sutskever said he maintains a good relationship with Altman and that OpenAI is aware of the new company “in broad strokes.”