OpenAI co-founder starts new company to build ‘safe superintelligence’ — here’s what that means

(Image credit: Adobe Firefly/Future AI)

One of OpenAI’s co-founders, who also served as its chief scientist until last month, has started a new company with the sole aim of building ‘safe superintelligence.’

Ilya Sutskever is one of the most important figures in the world of generative AI, having helped develop the models that led to ChatGPT.

What is Superintelligence?

Artificial superintelligence (ASI) is AI with beyond-human levels of intelligence. “At the most fundamental level, this superintelligent AI has cutting-edge cognitive functions and highly developed thinking skills more advanced than any human,” according to IBM.

Unlike AGI, which is generally defined as matching or modestly exceeding human intelligence, ASI would be significantly more intelligent in all areas, including reasoning and cognition.

There is no strict definition of superintelligence, and each company working on advanced AI interprets it differently. There is also disagreement over how long it will take to reach this level of technology, with some experts predicting it could be decades away.

One hallmark of superintelligence would be an AI capable of improving its own intelligence and capabilities, further widening the gap between humans and AI.

How do you ensure Superintelligence is safe?

The problem with creating an AI model more intelligent than humanity is that it could be difficult to keep it controlled or to stop it from outsmarting us. If it isn’t properly aligned with human values and interests, it could even opt to destroy humanity.

To solve this, every company working on advanced AI is also developing alignment techniques. These approaches range from systems that sit on top of the AI model to safeguards trained alongside it. The latter is the SSI Inc approach.

SSI says that focusing exclusively on superintelligence will allow them to ensure it is developed alongside alignment and safety. “SSI is our mission, our name, and our entire product roadmap, because it is our sole focus,” they wrote on X.

“We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs,” the company added. “We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.”

Ryan Morrison

As the former AI Editor for Tom's Guide, Ryan wielded his vast industry experience with a mix of skepticism and enthusiasm, unpacking the complexities of AI in a way that could almost make you forget about the impending robot takeover.
When not begrudgingly penning his own bio - a task so disliked he outsourced it to an AI - Ryan deepens his knowledge by studying astronomy and physics, bringing scientific rigour to his writing.