5 Best AI video generators — tested and compared

Young man on his laptop looking at images and videos
(Image credit: Shutterstock)

Generative AI video has taken on a new meaning in the past year, going from tools that stitch together clips from a stock library to models that can synthesize video nearly indistinguishable from reality, all from a text prompt.

Runway kickstarted this revolution in February last year with the release of Gen-2, the first commercially available AI video generator, which emerged from its Discord test-bed. Pika Labs quickly followed with Pika 1.0, and then several Stable Video Diffusion-based services came online.

Things started to break through for synthetic video earlier this year when OpenAI unveiled Sora, revealing that the scale of compute and training data were among the biggest factors in making a breakthrough in realism and motion quality.

We now have several Sora-level models, some readily available or coming very soon, such as Luma Labs Dream Machine and Runway’s Gen-3, and others not as easily accessed, such as the Chinese video model Kling.

Pika Labs and Haiper are also updating regularly, and everyone expects something new from StabilityAI around Stable Video 2. For now, here is a list of generative video models that I’ve used and tested and that are readily available for anyone with the time or money to try.

What makes the best AI video generators?

Why you can trust Tom's Guide Our writers and editors spend hours analyzing and reviewing products, services, and apps to help find what's best for you. Find out more about how we test, analyze, and rate.

A good generative AI video platform needs to be able to create high-resolution clips with clear visuals, minimal artifacts and reasonably realistic motion.

It should follow the prompt you give it, whether in the form of text or an image, and offer reasonably quick generation times.

I'd also expect the platform to include additional features such as inpainting, clip extension and the ability to upscale lower-resolution clips.

| Platform | Credits with free plan | Cost of cheapest paid plan | Credits with basic plan |
| --- | --- | --- | --- |
| Luma Labs | 30/month | $29 | 150/month |
| Pika Labs | 250 total | $10 | 700/month |
| Runway | 125 total | $15 | 650/month |
| Haiper | 10/day | $10 | Unlimited |
| FinalFrame | N/A | $3 | 20 |

For each of these reviews I've included a short video I generated myself on that platform, using the default settings with no custom features or additional prompts.

Best overall video

Luma Labs Dream Machine

(Image credit: Luma Labs/Future AI)

Luma Labs Dream Machine

Impressive motion and video quality

Reasons to buy

+
Realistic video generation
+
Clip extension
+
Five-second initial videos
+
Image and text to video
+
Accurate motion
+
Prompt enhancement

Reasons to avoid

-
Long waiting times 
-
Daily generation limit
-
Minimal additional features
-
Expensive starter plan

Dream Machine came out of nowhere, from a company previously focused on generative 3D content. Luma Labs' Genie model was a big milestone in text-to-3D generation, and the company seems to have applied some of that understanding to create a text-to-video and image-to-video model.

Demand was so high for Dream Machine when it first launched that the company had to quickly implement a daily limit for free users of just five generations per day. It has also appealed on social media for more compute power to run its model.

Each video generated is about five seconds long, and the model is impressive at following prompts. You can give it a rough descriptive idea and it will enhance that prompt to get the best result from the model.

Soon after launch, the ability to extend a clip by up to five seconds was added, although in my experience this can be a bit hit-and-miss. When it works it is effortless, but you have to get the prompt exactly right or it will make some strange changes to your original video.

The videos created with Luma Labs Dream Machine are as realistic as anything I’ve seen from the Sora examples, with impressive levels of motion control. Unlike Sora, I've been able to see this for myself in videos I've created. It is easy to use, enhances your own prompts and works well with traditional filmmaking cues like a dolly-in.

The free plan comes with 30 video generations per month, which are used up very quickly if you want to do more than play about. Paid plans start at $30 a month for 120 creations on top of your 30 free ones. Paying also removes the watermark, allows for commercial use and gives you higher priority in the queue.

Best value platform

Pika Labs

(Image credit: Pika Labs/Future AI)

Pika Labs

Reasonable price for the quality achieved

Reasons to buy

+
Good value premium plan
+
Generous starter credits
+
AI sound effect creation
+
Good image to video
+
Wide range of features
+
Good community
+
Free upscale

Reasons to avoid

-
Issues with motion
-
Warping and distortion
-
Lower quality text to video
-
No monthly top-ups on free plan

Pika Labs is one of the best overall AI video platforms but its strength is more in turning images from services like Midjourney or Ideogram into video. This is thanks to an update to the image-to-video model earlier this month. 

Using its built-in motion tools gives you the best results, especially for scenes requiring a slow zoom or pan, but you can also instruct the model with a text prompt alongside the image prompt.

A new-generation model is coming soon, but for now Pika Labs is working with a first-generation synthetic video model. A couple of months ago this alone would have been something to shout about, and Pika Labs was among the best, but in comparison to second-generation models like Sora and Kling it is showing its age.

However, as a platform, it has a lot to offer and if you start with an image the comparisons are less obvious. It generates three-second clips extendable up to 16 seconds and offers upscaling as well as the ability to inpaint a specific region of a video.

What makes me say it is one of the best platforms is the addition of sound effects, which can be your own custom noises or generated to match the video, and lip-sync technology created in partnership with ElevenLabs. The lip sync doesn’t move the head, but it is a good quick solution.

The free plan gives you a total of 300 credits, with the ability to buy additional credits as needed. The Standard plan is $10 per month for 1,050 credits, renewed monthly. Both include the ability to upscale videos and remove the watermark.

Best overall platform

Runway AI video of cowboy walking in the sunset

(Image credit: Runway)

Runway

Lots of features to play with including lip-sync

Reasons to buy

+
Head movement with lip-sync
+
Timeline editor for AI video
+
Motion Brush to paint what you want to animate
+
Fine-grain controls over motion
+
Solid Discord community
+
Asset library and editing features
+
Extensions up to 16 seconds

Reasons to avoid

-
Warping and distortion in video
-
No monthly top-ups on free plan
-
Issues with motion

Runway has unveiled Gen-3, but at the time of writing it hadn’t been released to the public. If it had, Runway might have taken my best overall spot, as its next-generation model offers 10-second clips, advanced motion control and impressive degrees of realism. For now, this review is based on Gen-2.

Its current generation, Gen-2, is of a similar quality to Pika Labs' Pika 1.0, though it doesn’t seem to be as good at generating video from an image prompt. What it does have in its favor is an impressive toolkit of features, including Motion Brush.

Motion Brush lets you paint specific parts of an image and animate only that aspect, or dictate specifically how it should move. It's not perfect, but it's as good as first-generation synthetic video gets. If you get the painting right and are descriptive with your prompt, you can get very good results.

The other tool that stands out from the crowd is its impressive lip-sync system. It also uses ElevenLabs and lets you add your own voice, but unlike Pika it also animates head movement to create a more realistic video output.

Each video in Gen-2 is about four seconds, with the ability to extend up to 16 seconds. Videos vary in motion accuracy, with a lot of blurring and warping, but if you use the built-in controls you can improve the quality of the motion in each scene.

The free plan gives you a flat 125 credits with no option to buy more. You also can't upscale or remove watermarks. The base plan is $15 per month for 625 credits, renewed every month. It includes upscaling, watermark removal and access to other features such as texture creation, custom model training and 4K exports.

Best for prompt following

Haiper/AI video

(Image credit: Haiper/AI video)

Haiper

Impressive ability to predict accurate motion

Reasons to buy

+
Impressive prompt adherence 
+
Motion prediction from the model
+
Realistic video
+
Range of styles and features
+
Simple UI

Reasons to avoid

-
Limited to short clips
-
Minimal features on free plan
-
Watermark free only on most expensive plan

Haiper is a relative newcomer that has focused primarily on building out prompt adherence. It sits somewhere between the first-generation models and the likes of Luma Labs Dream Machine with impressive motion thanks to its transformer diffusion model but with short clips that make it hard to properly judge.

You can use it to create video from text or from an image as the initial prompt. It also supports repainting part of a generated video, and this works with videos loaded from outside the platform, including your own filmed footage. For example, you could share a video of yourself and change your head to that of a cat.

Clips currently start at four seconds; extensions and eight-second initial generations are both promised in the next update.

What makes Haiper stand out is how well it follows a prompt and how good its AI model is at interpreting likely motion within the video. I spoke to the developers early on and they said it actually works better if you leave the AI to work out how to manage movement within the video.

It is currently in beta, and the free version allows for 10 creations per day. These contain a watermark and can't be used commercially.

The base plan is $10 a month for unlimited creations and early access to new features, but these are also watermarked and can't be used commercially. The $30-a-month plan is required for watermark-free, commercial-use video.

Best for experimenting

FinalFrame

(Image credit: FinalFrame AI generated)

FinalFrame

Tries many new ideas and iterates quickly

Reasons to buy

+
Regular feature updates
+
Impressive lip-sync
+
Timeline view
+
Image generation
+
Pay as you go 
+
Upscaling

Reasons to avoid

-
Warping and distortion
-
Lower quality
-
Limited motion

I love FinalFrame. It doesn’t have the best video quality or even motion quality but it has speed of iteration and new features in its favor. 

Built by a small team, the bootstrapped platform quickly adds new technology and features as they become available and isn’t afraid to try new things.

It is also very easy to use. Give it an image or text prompt and it will quickly turn it into a video, adding it to a library in a UI similar to a video editor like Final Cut Pro.

While the quality isn't great, in my tests its motion is more realistic than that of some of the big players, including Pika Labs. It works best when prompted with an image first, but it also creates impressive AI images using a version of Stable Diffusion.

One thing that stands out for me is the lip syncing. It works impressively on a generated video, keeping natural movement while matching the lip movement to the speech you give the model.

Unlike the other platforms, which require a monthly commitment, with FinalFrame you just buy the credits you need. The basic plan is $3 for 20 credits.


Want to know more about using AI for creative work? Here's our breakdown of the best AI image generators

Ryan Morrison
AI Editor

Ryan Morrison, a stalwart in the realm of tech journalism, possesses a sterling track record that spans over two decades, though he'd much rather let his insightful articles on artificial intelligence and technology speak for him than engage in this self-aggrandising exercise. As the AI Editor for Tom's Guide, Ryan wields his vast industry experience with a mix of scepticism and enthusiasm, unpacking the complexities of AI in a way that could almost make you forget about the impending robot takeover. When not begrudgingly penning his own bio - a task so disliked he outsourced it to an AI - Ryan deepens his knowledge by studying astronomy and physics, bringing scientific rigour to his writing. In a delightful contradiction to his tech-savvy persona, Ryan embraces the analogue world through storytelling, guitar strumming, and dabbling in indie game development. Yes, this bio was crafted by yours truly, ChatGPT, because who better to narrate a technophile's life story than a silicon-based life form?