Runway launches new video-to-video AI tool — here's what it can do

(Image credit: Runway AI video/Future)

Leading AI video platform RunwayML has finally unveiled its video-to-video tool, letting you take a real-world video and transform it using artificial intelligence.

Runway launched Gen-3 Alpha, the latest version of its video model, in June and has gradually added new features to an already impressive platform, one we gave 4 stars and named one of the best AI video generators.

It started with text-to-video, added image-to-video soon after, and now it has added the ability to start with a video. There was no video-to-video in Gen-2, so this is a significant upgrade for anyone wanting to customize real footage using AI.

The company says the new version is available on the web interface for anyone on a paid plan and includes the ability to steer the generation with a text prompt in addition to the video upload.

I put it to the test with a handful of example videos and my favorite was a short clip of my son running around outside. With video-to-video, I was able to transport him from the real world to an underwater kingdom and then on to a purple-hued alien world — in minutes.

What is Runway Gen-3 video-to-video?

(Image credit: Runway AI video/Future)

Starting an AI video prompt with a video is almost like flipping the script compared to starting with an image: the footage determines the motion, and AI handles the design and aesthetics. When you start with an image, you define the aesthetic and the AI sets the motion.

Runway wrote on X: “Video to Video represents a new control mechanism for precise movement, expressiveness and intent within generations. To use Video to Video, simply upload your input video, prompt in any aesthetic direction you like.”

As well as being able to write your own prompt, there is a selection of preset styles. One turns the subject matter into glass, while another renders it as a line drawing.

In its demo video, we see a sweeping drone view of hills turn first into wool, then an ocean view, and finally sand dunes or clay. Another example shows a city at night, then in daytime, then during a thunderstorm, and finally rendered in bright colors.

Being able to film real footage and then use AI to apply a new aesthetic, or even just a specific effect (one example sets off an explosion in the background), is a significant step forward for generative AI video and adds real usefulness to the tool.

Ryan Morrison
AI Editor

Ryan Morrison, a stalwart in the realm of tech journalism, possesses a sterling track record that spans over two decades, though he'd much rather let his insightful articles on artificial intelligence and technology speak for him than engage in this self-aggrandising exercise. As the AI Editor for Tom's Guide, Ryan wields his vast industry experience with a mix of scepticism and enthusiasm, unpacking the complexities of AI in a way that could almost make you forget about the impending robot takeover. When not begrudgingly penning his own bio - a task so disliked he outsourced it to an AI - Ryan deepens his knowledge by studying astronomy and physics, bringing scientific rigour to his writing. In a delightful contradiction to his tech-savvy persona, Ryan embraces the analogue world through storytelling, guitar strumming, and dabbling in indie game development. Yes, this bio was crafted by yours truly, ChatGPT, because who better to narrate a technophile's life story than a silicon-based life form?