Ilya Sutskever: "Our first product will be the safe superintelligence."
Ilya Sutskever: "Our first product will be the safe superintelligence."
The former OpenAI chief scientist's new startup aims to create superintelligent systems that prioritize safety.
"We've identified a mountain that's a bit different from what I was working [on]...once you climb to the top of this mountain, the paradigm will change... Everything we know about AI will change once again. At that point, the most important superintelligence safety work will take place," says Ilya Sutskever, OpenAI's former chief scientist, who has launched a new company called Safe Superintelligence (SSI), aiming to develop safe artificial intelligence systems that far surpass human capabilities. "Our first product will be the safe superintelligence."
SSI announced on Wednesday that it has raised $1 billion at a $5 billion valuation. The company, which currently has 10 employees, plans to use the funds to acquire computing power and hire top talent. It will focus on building a small, highly trusted team of researchers and engineers split between Palo Alto, California, and Tel Aviv, Israel.
Sutskever, 37, is one of the most influential technologists in AI. He trained under Geoffrey Hinton, known as the "Godfather of AI". Sutskever was an early advocate of scaling, the idea that AI performance improves as vast amounts of computing power are applied, which laid the groundwork for generative AI advances like ChatGPT. SSI will approach scaling differently from OpenAI, he said.
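To make the scaling idea concrete, here is a minimal sketch, using invented figures rather than data from any real training run, of how a power-law scaling curve relating training compute to model loss can be fitted and extrapolated:

    # The scaling hypothesis in miniature: model loss tends to fall as a
    # power law in training compute, loss(C) ~ a * C^(-b). All numbers
    # below are illustrative assumptions, not real measurements.
    import numpy as np

    compute = np.array([1e18, 1e19, 1e20, 1e21])  # training FLOPs (hypothetical)
    loss = np.array([3.2, 2.6, 2.1, 1.7])         # evaluation loss (hypothetical)

    # Fit log(loss) = intercept + slope * log(compute), a straight line
    # in log-log space, which corresponds to a power law.
    slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
    a, b = np.exp(intercept), -slope

    # Extrapolating the fit embodies the scaling bet: that the trend
    # continues to hold at much larger compute budgets.
    print(f"exponent b = {b:.3f}; predicted loss at 1e23 FLOPs = {a * 1e23 ** (-b):.2f}")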
Would you release AI that is as smart as humans ahead of superintelligence?
"I think the question is: Is it safe? Is it a force for good in the world? I think the world is going to change so much when we get to this point that to offer you the definitive plan of what we'll do is quite difficult.
"I can tell you the world will be a very different place. The way everybody in the broader world is thinking about what's happening in AI will be very different in ways that are difficult to comprehend. It's going to be a much more intense conversation. It may not just be up to what we decide, also."
How will SSI decide what constitutes safe AI?
"A big part of the answer to your question will require that we do some significant research. And especially if you have the view as we do, that things will change quite a bit... There are many big ideas that are being discovered.
"Many people are thinking about how as an AI becomes more powerful, what are the steps and the tests to do? It's getting a little tricky. There's a lot of research to be done. I don't want to say that there are definitive answers just yet. But this is one of the things we'll figure out."
On the scaling hypothesis and AI safety
"Everyone just says 'scaling hypothesis'. Everyone neglects to ask, what are we scaling? The great breakthrough of deep learning of the past decade is a particular formula for the scaling hypothesis. But it will change... And as it changes, the capabilities of the system will increase. The safety question will become the most intense, and that's what we'll need to address."
On open-sourcing SSI’s research
"At this point, all AI companies are not open-sourcing their primary work. The same holds true for us. But I think that hopefully, depending on certain factors, there will be many opportunities to open-source relevant superintelligence safety work. Perhaps not all of it, but certainly some."
On other AI companies' safety research efforts
"I actually have a very high opinion about the industry. I think that as people continue to make progress, all the different companies will realize — maybe at slightly different times — the nature of the challenge that they're facing. So rather than say that we think that no one else can do it, we say that we think we can make a contribution."