Runway unveils Gen-3 — AI video just took a big leap forward
Promises better motion and realism

Runway, one of the first AI video generation platforms to launch publicly, has unveiled the third generation of its model, and it's a huge step forward for the technology that could make it one of the best AI video generators yet.
Where OpenAI says its end goal is artificial general intelligence, Runway's is general world models: AI systems that can build an internal representation of an environment and use it to simulate events inside that environment.
Gen-3 Alpha, the new model from Runway, is the closest the startup has come to achieving that long-term ambition. The company says it will power all image-to-video and text-to-video tools on the Runway platform, as well as Motion Brush and other features such as text-to-image.
Runway: How does Gen-3 differ from Gen-2?
Runway announced the model on X on June 17, 2024: "Introducing Gen-3 Alpha: Runway's new base model for video generation. Gen-3 Alpha can create highly detailed videos with complex scene changes, a wide range of cinematic choices, and detailed art directions." (https://t.co/YQNE3eqoWf)
Runway hasn't said when Gen-3 will replace the current Gen-2 models, but it did confirm new safeguards for Gen-3, including improved visual moderation and support for the C2PA provenance standard, which makes it easier to trace the origin of different types of media.
This is the latest in a new generation of AI video models offering longer clips and improved motion, joining OpenAI's Sora, Luma Labs' Dream Machine and Kling.
Runway says Gen-3 is the first in a series of models trained on new infrastructure built specifically for large-scale multimodal training, which improves fidelity, consistency and motion.
One of the lessons learned from Sora is that scale matters above most other things, so adding more compute and data can significantly improve the model.
What does Gen-3 look like?
In a follow-up post the same day, Runway said: "This leap forward in technology represents a significant milestone in our commitment to empowering artists, paving the way for the next generation of creative and artistic innovation. Gen-3 Alpha will be available for everyone over the coming days." The accompanying clip was generated from the prompt "A slow cinematic push…"
The new model was trained on video and images at the same time, which Runway says will improve visual quality from text-to-video prompts.
The new model will also power new tools offering more fine-grained control over things like structure, style and motion.
I haven't had the chance to try Gen-3 myself, and it is still in alpha, but the videos shared so far seem to show a significant improvement in motion and prompt adherence.
Each video is about ten seconds long, roughly twice the length of a default Luma clip and similar to Sora videos. It is also nearly three times the length of current Runway Gen-2 videos.
1. Taking the train

Prompt: "Subtle reflections of a woman on the window of a train moving at hyper-speed in a Japanese city."
2. Spaceman in the city

Prompt: "An astronaut running through an alley in Rio de Janeiro."
3. An underwater community

Prompt: "FPV flying through a colorful coral lined streets of an underwater suburban neighborhood."
4. Hot air balloon

Prompt: "Handheld tracking shot at night, following a dirty blue ballon floating above the ground in abandon old European street."
5. The big picture

Prompt: "An extreme close-up shot of an ant emerging from its nest. The camera pulls back revealing a neighborhood beyond the hill."
6. Realistic people

Prompt: "Zoom in shot to the face of a young woman sitting on a bench in the middle of an empty school gym."
7. Drone through a castle

Prompt: "A FPV drone shot through a castle on a cliff."
More from Tom's Guide
- Apple is bringing iPhone Mirroring to macOS Sequoia — here’s what we know
- iOS 18 supported devices: Here are all the compatible iPhones
- Apple Intelligence unveiled — all the new AI features coming to iOS 18, iPadOS 18 and macOS Sequoia

Ryan Morrison, a stalwart in the realm of tech journalism, possesses a sterling track record that spans over two decades, though he'd much rather let his insightful articles on AI and technology speak for him than engage in this self-aggrandising exercise. As the former AI Editor for Tom's Guide, Ryan wields his vast industry experience with a mix of scepticism and enthusiasm, unpacking the complexities of AI in a way that could almost make you forget about the impending robot takeover.
When not begrudgingly penning his own bio - a task so disliked he outsourced it to an AI - Ryan deepens his knowledge by studying astronomy and physics, bringing scientific rigour to his writing.