OpenAI just dropped a new Sora video to promote TED Talks — and the video is explosive
Where will TED take us in the future?

OpenAI's latest Sora video is a rapid fly-through of innovation, conversation and hints of red as the company uses its flagship video product to promote a new season of TED Talks.
The motion is slightly nausea-inducing, taking you on a rollercoaster ride through research labs, factories and lecture halls before finishing on a shot of someone giving a talk on stage.
It was designed to promote the new season of TED Talks, which will focus on artificial intelligence, by imagining what TED might cover in 40 years' time.
This is the latest Sora release from a professional video producer rather than the OpenAI team itself, following nature documentaries, music videos and a short film about a man with a balloon for a head.
How to make a video with OpenAI Sora
What will TED look like in 40 years? For #TED2024, we worked with artist @PaulTrillo and @OpenAI to create this exclusive video using Sora, their unreleased text-to-video model. Stay tuned for more groundbreaking AI — coming soon to https://t.co/YLcO5Ju923! pic.twitter.com/lTHhcUm4Fi (April 19, 2024)
Currently, only a small group of OpenAI-approved artists and creators can make anything using Sora, as it is a closed system. That is expected to change later this year as OpenAI looks to integrate Sora into ChatGPT and third-party tools like Adobe Premiere Pro.
The TED Talks video was created using Sora by LA-based director Paul Trillo. He said that to get the final 1:33 clip he had to generate more than 330 clips from text prompts, then edit them down.
The finished video comprises 25 clips in total, all made with Sora. Everything except the TED logo, including every shot and all the motion, was AI-generated.
Trillo said it was "really fun to explore techniques I have done in the past with this new tool," adding that it "unlocks a lot of new ideas."
This is a sentiment many of the creatives given early access to Sora have expressed, suggesting it could lead to entirely new ways of telling stories with visuals.
What can we see in the new video?

The video opens with what looks like an explosion; the camera zooms rapidly forward into it, kicking off a "through the looking glass" journey of discovery.
You then fly over a number of cities and into different types of buildings. At first you see someone giving a talk, and the zoom continues through factories, experiments and more.
Every few scenes, another person appears giving a talk against a red background, likely designed to evoke a TED Talk stage, before more shots of experiments and research.
It is a compelling, well-made video with music by Jacques. It gives a solid indication of what is possible with generative AI video in the hands of artists and, in my opinion, further supports the idea that rather than replacing creatives, it will unleash a new era of creativity.
More from Tom's Guide
- Google’s new VLOGGER AI lets you create a lifelike avatar from just a photo
- Apple could bring Google Gemini to the iPhone for AI tasks
- I gave Claude, ChatGPT and Gemini a photo of ingredients to see which came up with the best recipe

Ryan Morrison, a stalwart in the realm of tech journalism, possesses a sterling track record that spans over two decades, though he'd much rather let his insightful articles on AI and technology speak for him than engage in this self-aggrandising exercise. As the former AI Editor for Tom's Guide, Ryan wields his vast industry experience with a mix of scepticism and enthusiasm, unpacking the complexities of AI in a way that could almost make you forget about the impending robot takeover.
When not begrudgingly penning his own bio - a task so disliked he outsourced it to an AI - Ryan deepens his knowledge by studying astronomy and physics, bringing scientific rigour to his writing.