OpenAI Sora: Everything you need to know

Sora
(Image credit: OpenAI)

OpenAI revealed Sora to the world on February 15, 2024, sharing a handful of remarkable AI-generated videos and a research paper on X. 

Sora wasn’t the first artificial intelligence video model, but it was the first to show such high levels of consistency, duration and photorealism.

While the output seems impressive, so far only videos generated by OpenAI staff have been shared on either X or TikTok, although some were made with prompts suggested by fans.

No date has been set for when the model will be made public, and OpenAI hasn't said what limitations will be placed on its output before it is integrated into a tool like ChatGPT.

Sora news and updates (Updated March 14, 2024)

What is OpenAI Sora?

OpenAI Sora video of eye

(Image credit: OpenAI)

Sora is a generative video model, similar to the likes of Runway’s Gen-2, Pika Labs’ Pika 1.0 and Stable Video Diffusion from Stability AI. It turns text, images or video into AI video content.

It is named after the Japanese word for “sky,” which the company said reflects its "limitless creative potential." One of the first clips showed two people walking through Tokyo in the snow.

Sora appears much more capable than the models that came before it, able to generate clips up to one minute long with consistent characters and motion.

What is the technology behind Sora?

gif of Sora created video featuring frolicking dogs

(Image credit: OpenAI)

The technology behind Sora is an adapted version of the models built for DALL-E 3, OpenAI’s generative image platform, with additional features for fine-tuned control.

Sora is a diffusion transformer model; that is, it marries the type of image generation model behind Stable Diffusion with the token-based generators powering ChatGPT.

A video is generated in a latent space as 3D patches that are iteratively "denoised," then passed through a video decompressor to turn the result into standard, human-viewable output. 
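The iterative denoising at the heart of a diffusion model can be sketched in a few lines. This is a toy illustration, not Sora's actual architecture: the "latent video" here is a small NumPy array, and the noise predictor cheats by using the known clean target, where a real diffusion transformer would be a learned network operating on spacetime patches.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "latent video": 4 frames of 8x8 latents (real models work with
# learned spacetime patches at far higher dimensionality).
target = np.ones((4, 8, 8))          # stand-in for the clean latent
x = rng.normal(size=target.shape)    # generation starts from pure noise

def predict_noise(latent, clean):
    # A real diffusion transformer learns this mapping; here we cheat
    # and use the known clean latent so the loop runs end to end.
    return latent - clean

# Iterative denoising: each step removes a fraction of predicted noise.
for step in range(50):
    x = x - 0.1 * predict_noise(x, target)

# x now approximates the clean latent; a video decompressor would then
# map it back to viewable RGB frames.
error = np.abs(x - target).mean()
```

After 50 steps the remaining noise has shrunk to a tiny fraction of its starting level, which is the basic reason diffusion sampling is slow: many sequential denoising passes are needed for each clip.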

What data was Sora trained on?

Sora

(Image credit: OpenAI)

OpenAI says it trained the model on publicly available videos, public domain content and copyrighted videos for which it had purchased the licence in advance.

It hasn't said exactly how many videos went into the training data and is unlikely to ever reveal that information. It is thought to be in the millions.

The company used a video-to-text engine to create captions and labels from ingested video files to further fine-tune Sora on real-world content.

Rumors and speculation suggest that OpenAI also made use of synthetic video content, such as footage generated in Unreal Engine 5, as this would also give the model information about the physics of the worlds inside the clips it ingested. 

Why did Sora surprise its developers?


Every large scale AI model has its quirks, behaving in unexpected ways or responding to prompts in a way that almost feels the opposite of what was intended. Sora is no different.

During the post-training run, Tim Brooks, a Sora researcher, said the model seemed to work out how to create 3D graphics from its own dataset without any additional training.

Meanwhile, Bill Peebles, another researcher working on the model, said it automatically created different video angles without being prompted, assuming that was what was needed.

What about content restrictions and privacy?

Sora

(Image credit: OpenAI)

During training, red teamers and safety experts also worked to track, label and prohibit use cases involving misinformation, hateful content and bias through adversarial testing. 

Generated videos also carry metadata tags labeling them as AI-made, and text classifiers check that prompts don't violate usage policies.

Like DALL-E 3, OpenAI says Sora will have a number of content restrictions before launch. This will include limits on generating images of real people.

This will also include a ban on generating videos showing extreme violence, sexual content, hateful imagery, celebrity likenesses or the intellectual property of others, such as logos and products. None of this is easily possible with DALL-E 3, and the same restrictions will apply. 

How can I access Sora?

still from a video created with a text prompt by OpenAI Sora

(Image credit: OpenAI)

You can't currently access Sora. The only insight we have into the model comes from videos shared by OpenAI itself, as the company works to ensure Sora doesn't generate misinformation or dangerous content.

Tim Brooks, research lead on Sora, said the team has to focus on safety and ensure mechanisms are in place so the public can be confident in the difference between AI-generated and real videos before it is released. 

It also takes a long time to make a single video clip: long enough, the team explained, to make a coffee and come back to find it still generating.

It is most likely that Sora will be integrated into ChatGPT, much as DALL-E 3 is, rather than made available as a standalone product, although previous versions of DALL-E had their own page.

The model will also be available through an API, letting third-party developers integrate its functionality into their own products, although that will come further down the line.

This already happens with DALL-E 3. For example, you can use the OpenAI model within your own product to automatically create images or, as the AI image platform NightCafe does, offer your own interface for generating images with the model.
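As a rough sketch of what that integration looks like today, an image request to OpenAI's API is just a model name, a prompt and a few options. The helper below only builds the request body; the field names mirror the documented parameters of the official Python SDK's `client.images.generate(...)` call, and actually sending it requires the `openai` package and an API key. A future video API would presumably take a similar prompt-plus-options shape, but that is speculation.

```python
def build_image_request(prompt: str, size: str = "1024x1024") -> dict:
    """Build a request body for OpenAI's DALL-E 3 image endpoint.

    These fields match the documented parameters of the official SDK's
    client.images.generate(...) call.
    """
    return {
        "model": "dall-e-3",
        "prompt": prompt,
        "n": 1,        # DALL-E 3 supports one image per request
        "size": size,  # e.g. "1024x1024", "1792x1024" or "1024x1792"
    }

request = build_image_request("a minimalist logo for a coffee shop")
```

A product like NightCafe wraps exactly this kind of call behind its own UI, which is the pattern third-party Sora integrations would likely follow.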

We may even see it reserved as a professional tool, integrated into products like Apple's Final Cut Pro or Adobe Premiere Pro for filmmakers and VFX artists.

When will Sora be released?

OpenAI hasn't set a release date for Sora yet, but CTO Mira Murati says it will come out sometime in 2024, possibly before the summer.

When released it will be available and priced similarly to OpenAI's image generation model DALL-E, likely integrated into the premium version of ChatGPT.

Ryan Morrison
AI Editor

Ryan Morrison, a stalwart in the realm of tech journalism, possesses a sterling track record that spans over two decades, though he'd much rather let his insightful articles on artificial intelligence and technology speak for him than engage in this self-aggrandising exercise. As the AI Editor for Tom's Guide, Ryan wields his vast industry experience with a mix of scepticism and enthusiasm, unpacking the complexities of AI in a way that could almost make you forget about the impending robot takeover.
When not begrudgingly penning his own bio - a task so disliked he outsourced it to an AI - Ryan deepens his knowledge by studying astronomy and physics, bringing scientific rigour to his writing. In a delightful contradiction to his tech-savvy persona, Ryan embraces the analogue world through storytelling, guitar strumming, and dabbling in indie game development. Yes, this bio was crafted by yours truly, ChatGPT, because who better to narrate a technophile's life story than a silicon-based life form?