These 5 new Sora videos take GenAI to the next level
From bubble dragons to mythical tea, but when can we use it?
OpenAI has dropped a new set of videos generated using its Sora AI model. They were shared on TikTok and include a horse on roller skates, a bubble dragon and mythical tea.
The AI lab has been teasing Sora since it was first unveiled to the world in February, leading to intense speculation over when it would finally be available for the public to try.
In a recent interview on Marques Brownlee's WVFRM podcast, the Sora team said a public release was unlikely to happen any time soon. That is partly due to the need for further safety research, and likely also because it takes minutes, not seconds, to make a video.
For now we will have to settle for the videos the team themselves produce, often in response to prompt suggestions from people on social media. In one of the new videos they were asked to show a “cute rabbit family eating dinner in their burrow.”
How is Sora different from other AI video models?
There are multiple AI video models and tools on the market at the moment, with Runway approaching a year since its public launch and Pika Labs expanding into sound effects and lip-synced dialogue in partnership with ElevenLabs.
None of them, including the very realistic Stable Video Diffusion clips, seems to come close to what is possible with Sora. That could come down to generation time: the team told Brownlee they have enough time to go away, make a coffee and come back before a video finishes generating.
They also made use of the massive number of GPUs available to OpenAI to train Sora, and adopted a new type of architecture that merges techniques from models like GPT-4 and DALL-E. Plus, Sora was trained on a very diverse dataset of videos in a variety of sizes, lengths and resolutions.
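OpenAI's own technical write-up describes that architecture as a diffusion model built on a transformer that works on "spacetime patches", small blocks of video treated like the tokens GPT reads or the image patches DALL-E works with. As a rough illustration only (the function name, patch sizes and clip dimensions below are invented for the example, not taken from OpenAI), here is how a clip might be chopped into those patch "tokens":

```python
# Minimal, hypothetical sketch of the "spacetime patch" idea: cut a video
# into small space-time blocks so a transformer can treat each block as a
# token. Patch sizes and clip shape are illustrative assumptions only.
import numpy as np

def video_to_spacetime_patches(video, patch_t=4, patch_h=16, patch_w=16):
    """Split a video array (frames, height, width, channels) into
    flattened space-time patches, one 'token' per patch."""
    f, h, w, c = video.shape
    # Trim so the clip divides evenly into patches (illustrative shortcut).
    f, h, w = f - f % patch_t, h - h % patch_h, w - w % patch_w
    video = video[:f, :h, :w]
    patches = (
        video.reshape(f // patch_t, patch_t,
                      h // patch_h, patch_h,
                      w // patch_w, patch_w, c)
             .transpose(0, 2, 4, 1, 3, 5, 6)   # group blocks together
             .reshape(-1, patch_t * patch_h * patch_w * c)
    )
    return patches  # shape: (num_tokens, patch_dim)

# Example: a 60-frame, 256x256 clip becomes a sequence of patch "tokens".
clip = np.random.rand(60, 256, 256, 3).astype(np.float32)
tokens = video_to_spacetime_patches(clip)
print(tokens.shape)  # (3840, 3072)
```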
One of the more remarkable videos in this new round of clips is a dragon seemingly made of bubbles and blowing bubble fire. The motion, quality and physics are all impressively realized.
It all comes from a single prompt
Currently the team has minimal control over the output, as prompting is done entirely through text and, so far, from fairly short one-sentence prompts.
That will likely change by the time Sora is released to the public, as the team is working on finer controls to manipulate lighting, camera motion and orientation, features already available through other platforms like Pika and Runway.
Sora's ability to create something remarkable from a short prompt is impressive. In one of the new clips we see a teapot pouring water into a cup, but the cup is filled with what looks like a swirling vortex of colors and movement.
Many of the new videos are being shared to TikTok in a vertical format, showing that it is possible to create vertical videos using just a text prompt.
What is holding up Sora's release date?
We all want to play with Sora. It is an impressive tool that has use cases across different sectors including video production, marketing and architecture. One of the new videos gives us a walkthrough of a slightly odd kitchen with a bed off to one side.
The Sora team told Brownlee there was work to do on Sora before it was ready to be turned into an actual product, or included with ChatGPT.
Tim Brooks, research lead on Sora, said: “The motivation for why we wanted to get Sora out in this form before it is ready is to find out what is possible and what safety research is needed.”
“We wanted to show the world this technology is on the horizon and hear from people how it could be useful,” he added, as well as to gather feedback from safety researchers on the risks it presents.
He said not only is Sora not a product, they don’t even have a timeline for when it might become a product — so don’t expect to be able to use it this year.
More from Tom's Guide
- MacBook Air M3 announced — pre-orders open now for ‘world’s best consumer laptop for AI’
- I tested Google Gemini vs OpenAI ChatGPT in a 9-round face-off — here’s the winner
- OpenAI is building next-generation AI GPT-5 — and CEO claims it could be superintelligent
Ryan Morrison, a stalwart in the realm of tech journalism, possesses a sterling track record that spans over two decades, though he'd much rather let his insightful articles on artificial intelligence and technology speak for him than engage in this self-aggrandising exercise. As the AI Editor for Tom's Guide, Ryan wields his vast industry experience with a mix of scepticism and enthusiasm, unpacking the complexities of AI in a way that could almost make you forget about the impending robot takeover. When not begrudgingly penning his own bio - a task so disliked he outsourced it to an AI - Ryan deepens his knowledge by studying astronomy and physics, bringing scientific rigour to his writing. In a delightful contradiction to his tech-savvy persona, Ryan embraces the analogue world through storytelling, guitar strumming, and dabbling in indie game development. Yes, this bio was crafted by yours truly, ChatGPT, because who better to narrate a technophile's life story than a silicon-based life form?