7 best OpenAI Sora alternatives for generating AI videos

Pika Labs lip sync video
(Image credit: Pika Labs)

OpenAI’s Sora is one of the most impressive AI tools I’ve seen in years of covering the technology, but only a handful of professional creatives have been given access so far.

We’ve seen dozens of impressive videos, from a documentary about an astronaut to a music video about watching the rain. We’ve even seen a short film about a man with a balloon head.

Mira Murati, OpenAI’s Chief Technology Officer, says we should get access to Sora at some point this year, but warned that could change if the company isn’t able to tackle safety issues with the model before November.

Alternatives to Sora already available

While you’re waiting for Sora, there are several impressive AI video tools already available that can create a range of clips, styles and content. These Sora alternatives include Pika Labs and Runway, among others.

The main limitation of the current generation of AI video tools is duration: most can’t produce more than 3-6 seconds of consistent motion, and some struggle to go beyond 3 seconds.

Despite those limitations, these video tools produce impressive results and add new features to work around them almost daily. Many also offer lip syncing, sound effects and voice-over tools that won’t be available in Sora from day one.

Runway

Runway AI video

(Image credit: Runway)

Runway is one of the biggest players in this space. Before OpenAI unveiled Sora, Runway had some of the most realistic and impressive generative video content, and it remains very impressive.

Runway was the first to launch a commercial synthetic video model and has been adding new features and improvements over the past year. This includes a major boost in quality and motion consistency in its Gen-2 model since Sora was first unveiled.

Highlights include very accurate lip syncing from an image, which also animates head and eye movements to add to the overall realism. This comes with synthetic voices from ElevenLabs or the ability to record or upload your own voice.

Runway’s standout feature is Motion Brush, which lets you select a specific region of a video and animate only that part, or select multiple regions and animate them independently.

Runway has a free plan with 125 credits. The standard plan is $15 per month.

Pika Labs

Pika Labs lip sync video

(Image credit: Pika Labs)

Pika Labs is one of the two major players in the generative AI video space alongside Runway. Its Pika 1.0 model can create video from images, text or other video, as well as extend a video to up to 12 seconds — although the more you extend it, the worse the motion becomes.

Pika launched last year to a lot of fanfare, sharing a cartoon version of Elon Musk and an impressive inpainting ability that allows you to replace or animate a specific region of a clip.

Pika Labs offers negative prompting and fine control over the motion in the video. It also features sound effects, which can either be generated from a text prompt or aligned to the video, as well as lip sync.

The lip syncing from Pika Labs can be added to video content, so you can have it generate a video from, say, a Midjourney image, then animate its lips and give it a voice. Or, as I did in one experiment, you can animate action figures.

Pika Labs has a free plan with 300 credits. The standard plan is $10 per month.

Stable Video

Stable Video

(Image credit: StabilityAI/AI generated)

Built by Stability AI on top of its Stable Video Diffusion model, Stable Video is currently in a closed beta and is one of the better implementations of the model. It is also one of the few SVD platforms that offers fine control over the motion rather than just letting you set a motion amount.

You can generate from an image or from text, specify an aspect ratio or a style, and set the camera to either locked or shake, along with other camera motion controls. When you generate from text, Stable Video offers four candidate images to choose from as the starting frame to animate.

Stable Video is currently in beta and no pricing details have been released.
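
If you would rather not wait for the beta, the Stable Video Diffusion model the service is built on is openly released, so anyone with a capable GPU can experiment with it directly through Hugging Face’s diffusers library. The snippet below is a minimal sketch using the public img2vid-xt checkpoint; the settings shown are illustrative assumptions, not a description of how the hosted Stable Video product works behind the scenes.

# Minimal sketch: animate a single image with the open Stable Video Diffusion model.
# Requires a CUDA GPU and the diffusers library; the model ID and parameter values
# are assumptions for illustration, not the hosted Stable Video service's settings.
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# Load the still image to animate and resize it to the model's preferred resolution.
image = load_image("starting_frame.png").resize((1024, 576))

frames = pipe(
    image,
    decode_chunk_size=8,   # decode frames in chunks to keep VRAM use manageable
    motion_bucket_id=127,  # rough "motion amount" control; higher means more movement
).frames[0]

export_to_video(frames, "stable_video_clip.mp4", fps=7)

The motion_bucket_id parameter is the closest thing the open model has to the single motion-amount slider mentioned above, which is why hosted platforms layer their own camera and region controls on top.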

Leonardo and NightCafe

Leonardo AI

(Image credit: Leonardo AI)

Stable Video Diffusion is an open model, which means it can be commercially licensed and adapted by other companies. Two of the best examples of this come from Leonardo and NightCafe, two AI image platforms that offer a range of models including Stable Diffusion itself.

Branded as Motion by Leonardo and Animate by NightCafe, these features essentially do the same thing: take an image you’ve already made on the platform and make it move. You can set the degree of motion, but there are minimal options for other controls.

NightCafe's base plan is $6 per month for 100 credits. 

Leonardo has a free plan with 150 creations per day. The basic plan is $10 per month.

FinalFrame

FinalFrame

(Image credit: FinalFrame/AI generated)

This is a bit of a dark horse in the AI video space with some interesting features. A relatively small, bootstrapped company, FinalFrame comfortably competes on quality and features with the likes of Pika Labs and Runway as it builds toward a “total platform.”

The name stems from the fact that FinalFrame builds the next clip based on the final frame of the previous video, improving consistency across longer generations. You can generate or import a clip, then drop it onto the timeline to create a follow-on clip or build out a full production.

The startup recently added lip syncing and sound effects for certain users, along with an audio track in the timeline view for adding those sounds to your videos.

FinalFrame requires the purchase of credit packs, which last a month. The basic pack is 20 credits for $2.99.

Haiper

Haiper

(Image credit: Haiper AI video)

A relative newcomer with its own model, Haiper takes a slightly different approach from other AI video tools, focusing on an underlying model and training dataset that are better at following the prompt rather than offering fine-tuned control over the motion.

The default mode doesn’t even allow you to change the motion level. It assumes the AI will understand the level of motion from the prompt, and for the most part, it works well. In a few tests, I found that leaving the motion setting at its default worked better than any value I could set.

Haiper is currently free, with no pricing information released.

LTX Studio

AI video from LTX Studio

(Image credit: LTX Studio/AI Video)

Unlike the others, this is a full generative content platform, able to create a multi-shot, multi-scene video from a text prompt. LTX Studio generates images, video, voice-over, music and sound effects, and it can produce all of them at the same time.

The layout is more like a storyboard than the usual prompt box and video player of the other platforms. When you generate a video, LTX Studio lets you go in and adapt any single element, including changing the camera angle or pulling in an image from an external application to animate.

I don’t find LTX Studio handles motion as well as Runway or Stable Video, often generating unsightly blurring or warping, but those are issues the others have started to resolve and something LTX Studio owner Lightricks will likely tackle over time. It also doesn’t have lip sync, but that is likely to come at some point in the future.

LTX Studio is in beta with a waiting list. No pricing information is available, but the beta is free to use.

Ryan Morrison
AI Editor

Ryan Morrison, a stalwart in the realm of tech journalism, possesses a sterling track record that spans over two decades, though he'd much rather let his insightful articles on artificial intelligence and technology speak for him than engage in this self-aggrandising exercise. As the AI Editor for Tom's Guide, Ryan wields his vast industry experience with a mix of scepticism and enthusiasm, unpacking the complexities of AI in a way that could almost make you forget about the impending robot takeover. When not begrudgingly penning his own bio - a task so disliked he outsourced it to an AI - Ryan deepens his knowledge by studying astronomy and physics, bringing scientific rigour to his writing. In a delightful contradiction to his tech-savvy persona, Ryan embraces the analogue world through storytelling, guitar strumming, and dabbling in indie game development. Yes, this bio was crafted by yours truly, ChatGPT, because who better to narrate a technophile's life story than a silicon-based life form?
