Sutskever’s AI startup surges past $30B valuation, set to raise over $1B

The stealth AI firm has skyrocketed in value, attracting massive investment.

Ilya Sutskever’s secretive artificial intelligence startup, Safe Superintelligence (SSI), is raising more than $1 billion at a valuation exceeding $30 billion, according to Bloomberg.
San Francisco-based venture capital firm Greenoaks Capital Partners is leading the deal and plans to invest $500 million, per Bloomberg.
Ten days ago, Reuters reported that SSI was raising new funding at a $20 billion valuation, but that figure now appears to have understated the round's actual size.
Ilya Sutskever. (Photo: Avigail Uzi)
Sutskever, formerly OpenAI’s chief scientist and a key figure in its breakthroughs, left the company in May 2024. Shortly after, he announced the launch of Safe Superintelligence. The company initially raised $1 billion at a $5 billion valuation, largely based on Sutskever’s reputation as the scientific force behind OpenAI’s advancements. If the latest funding round materializes, SSI’s valuation will have increased sixfold within just a few months.
SSI has offices in Silicon Valley and Tel Aviv, where it has recently been expanding, hiring engineers and relocating to new office space. The company has leased space in the Midtown Tel Aviv tower, a skyscraper in the city center, but has yet to announce who will lead its operations there.
Sutskever, who immigrated to Israel as a child before moving to Canada for university, co-founded Safe Superintelligence with Daniel Levy and Daniel Gross. The startup’s initial funding round was led by Silicon Valley powerhouses Sequoia Capital and Andreessen Horowitz, along with DST Global, the investment firm of billionaire Yuri Milner.
Despite its soaring valuation, the company has yet to unveil any technology or products. So far, it has only announced hiring efforts, including roles for its Tel Aviv development center. According to Sutskever, Safe Superintelligence aims to build AI models that surpass human intelligence while remaining aligned with human interests. This mission suggests potential philosophical differences with OpenAI CEO Sam Altman, particularly regarding the risks and boundaries of advanced AI development.