Why AI innovator Ilya Sutskever left OpenAI to prioritize safety, and Israel's role in his new mission
Sutskever's new company, Safe Superintelligence, seeks to build safe AI, distancing itself from OpenAI's profit-driven model.
Dr. Ilya Sutskever has a vision. It's not the usual public vision of high-tech and artificial intelligence (AI) entrepreneurs: to develop products that will change the world and bring them to market as quickly as possible. Nor is it the non-public but implied vision of building a business that will make them billionaires, or at least millionaires. It's a different kind of vision, one that is uncommon in the technology world these days: to make a product, in the case of his new company artificial general intelligence, that is first and foremost safe, even if it takes time to reach the market.
He stated in an interview with Bloomberg that his new company's first product will be a safe superintelligence, and that the company will not do anything else until then. There is reason to be skeptical of such a statement, given past experience. However, there is also reason to believe Sutskever. In his previous role as co-founder, board member, and chief scientist of OpenAI, he risked everything to lead a move to oust co-founder and CEO Sam Altman over concerns that the company was not investing enough in safety and risk prevention in its products. The move ultimately failed.
About a month ago, realizing that nothing at the company was going to change for the better in this regard, he once again risked everything and left his secure position at OpenAI for a new venture, a company called Safe Superintelligence (SSI). The company aims to create artificial intelligence in a way that other companies, which operate on a capitalist basis, cannot. He said the company will be completely insulated from outside pressures: from having to deal with a large and complicated product, and from being stuck in a competitive rat race.
The company is expected to have deep ties to Israel. Sutskever immigrated to Israel at the age of five and grew up there until he moved to Toronto for his academic studies, though he began his bachelor's degree at the Open University of Israel before the move. One of his partners in founding SSI, Daniel Gross, was born in Jerusalem. The third partner, Daniel Levy, is not Israeli. Additionally, SSI announced that alongside its center in Palo Alto it will open a center in Tel Aviv, and it is already recruiting employees for it.
Sutskever, one of the founders of OpenAI and the company's chief scientist, announced his departure in May, just six months after leading the failed move to oust Altman. A few hours later, another senior executive at the company, Jan Leike, announced his resignation. Two days later, OpenAI announced the disbandment of the team dealing with long-term AI risks, which had been established and led by Sutskever and Leike.
Last week, Sutskever unveiled his new venture, an artificial intelligence company called Safe Superintelligence, structured so that its operations emphasize safe development in the field. “Building safe superintelligence (SSI) is the most important technical problem of our time,” read the new company's announcement. “We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence. It’s called Safe Superintelligence. SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI.”
“We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead,” the announcement read. “This way, we can scale in peace. Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.”
In the Bloomberg interview, Sutskever declined to name the company's investors or to disclose the scope of its funding. He was also vague on the details of how SSI will define what makes an AI system safe, saying it will try to achieve this through engineering breakthroughs that build safety into the AI system itself, as opposed to protections applied to the technology as it operates.
Sutskever compared the safety they are aiming for to nuclear safety, as opposed to the narrower concept of 'trust and safety,' while acknowledging that the comparison might not be particularly reassuring given serious nuclear disasters such as Chernobyl and Fukushima. He added that, at the most basic level, a safe superintelligence must be built so that it does not harm humanity on a large scale; after that, it can be a force for positive change. Some of the values it should embody, he suggested, could be those that have proven successful over the last hundred years and underpin liberal democracies, such as freedom and democracy.
Levy, who worked closely with Sutskever at OpenAI, added that he believes the time is right for a project like SSI. His vision aligns with Sutskever's: a small and skilled team that is entirely focused on the single goal of safe superintelligence.
For Sutskever, this is a second attempt to create a corporate structure for the safe development of artificial intelligence. That was the idea behind OpenAI, which began its journey as a non-profit organization with the stated goal of developing AI products in isolation from the considerations and pressures of a traditional capitalist corporation. However, the need to raise large sums of money, in the billions of dollars, together with a close business partnership with Microsoft, led to a change in OpenAI's corporate structure and the creation of a for-profit entity.
The failed attempt to oust Altman at the end of 2023 was the final nail in the coffin of the old vision. In its wake, the board of directors was replaced; besides Altman and Sutskever, the old board had consisted of directors with no business interest in the company, including prominent independent scientists and researchers in the field. As a result, OpenAI is now a much more traditional startup, with a strong CEO in Altman and a commitment to investors to produce profitable products, leaving behind any vision of development that prioritizes safety over profitability.
The big question about SSI is whether Sutskever has learned the lessons of OpenAI and managed to establish a body that is free from the usual capitalist pressures and able, over time, to truly prioritize safety over revenue and profits. A significant part of the answer depends on the identity of the investors and on their willingness to fund development that may not yield a return for a long time, if at all; in effect, investors who treat their money more as philanthropy than as an investment, and who remain committed to that philanthropic approach over time.
Despite these constraints, assessments based on the founders' identity and their standing in the industry suggest that SSI will have no difficulty raising significant sums. Gross told Bloomberg that raising capital will not be one of the company's problems.