OpenAI co-founder starts new company to build ‘safe superintelligence’ — here’s what that means
Here's what Ilya is doing next
One of OpenAI’s co-founders, who also served as its chief scientist until last month, has started a new company with the sole aim of building ‘safe superintelligence.’
Ilya Sutskever is one of the most important figures in generative AI, having played a central role in developing the models that led to ChatGPT.
In recent years his focus has been on superalignment, specifically trying to ensure superintelligent AI does our bidding, not its own. He was one of the board members who voted to fire Sam Altman in late 2023, before stepping down from the board himself when Altman returned.
That is the work he hopes to continue at his new company, SSI Inc., the first AI lab to skip artificial general intelligence (AGI) and aim straight for the sci-fi-inspired super brain. “Our team, investors, and business model are all aligned to achieve SSI,” the company wrote on X.
The founders are Sutskever; Daniel Gross, a former Apple AI lead turned investor in AI products; and Daniel Levy, a former OpenAI optimization lead and expert in AI privacy.
What is superintelligence?
From SSI’s announcement on X, June 19, 2024: “Superintelligence is within reach. Building safe superintelligence (SSI) is the most important technical problem of our time. We’ve started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence. It’s called Safe Superintelligence…”
Artificial superintelligence (ASI) is AI with beyond-human levels of intelligence. “At the most fundamental level, this superintelligent AI has cutting-edge cognitive functions and highly developed thinking skills more advanced than any human,” according to IBM.
Unlike AGI, which is generally defined as matching or slightly exceeding human intelligence, ASI would need to be significantly more intelligent in all areas, including reasoning and cognition.
There is no strict definition of superintelligence, and each company working on advanced AI interprets it differently. There is also disagreement over how long it will take to achieve this level of technology, with some experts predicting decades.
One hallmark of superintelligence would be an AI capable of improving its own intelligence and capabilities, widening the gap between humans and AI even further.
How do you ensure superintelligence is safe?
The problem with creating an AI model more intelligent than humanity is that it could be difficult to control or to stop from outsmarting us. If it isn’t properly aligned with human values and interests, it could even opt to destroy humanity.
To solve this, every company working on advanced AI is also developing alignment techniques. These approaches range from systems that sit on top of the AI model to safeguards trained alongside it, the latter being the SSI Inc approach.
SSI says that focusing exclusively on superintelligence will allow it to ensure the technology is developed alongside alignment and safety. “SSI is our mission, our name, and our entire product roadmap, because it is our sole focus,” the company wrote on X.
“We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs,” the company added. “We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.”
More from Tom's Guide
- OpenAI’s 'superintelligent' AI leap nearly caused the company to collapse — here’s why
- OpenAI is building next-generation AI GPT-5 — and CEO claims it could be superintelligent
- Governments unveil new AI security rules to prevent superintelligence from taking over the world
