Pika Labs' new generative AI video tool unveiled — and it looks like a big deal

In promoting Pika 1.0, the company generated a cartoon of Elon Musk
(Image credit: Pika Labs)

Generative artificial intelligence company Pika Labs has unveiled its latest model, Pika 1.0. It builds on earlier versions and represents a significant step up in AI video generation.

Dubbed an "idea-to-video" model, it can produce content in a range of styles and allows for editing existing video clips by painting over objects, people, or even whole scenes.

A promotional video for Pika 1.0 shows clothing being changed on the fly, the style of a video clip being updated, and even real people, such as Elon Musk, being depicted as cartoon characters.

Multimodal video model


The multimodal AI model lets you turn a text prompt, image, video, or even object within a clip into something entirely new at the press of a button.

Like the earlier, more limited versions of Pika Labs' technology, Pika 1.0 will be available on the Pika Discord server — but for the first time it will also be offered on the Pika.art website.

"Our vision is to enable anyone to be the director of their stories and to bring out the creator in all of us"

Demi Guo, Pika co-founder and CEO

The company has started rolling it out to people signing up for the waiting list, with the full rollout expected to take a few weeks. Demand for the new tool has caused the Pika website to become unresponsive several times in the 24 hours since the announcement.

“My Co-Founder and I are creatives at heart. We know firsthand that making high-quality content is difficult and expensive, and we built Pika to give everyone, from home users to film professionals, the tools to bring high-quality video to life,” said Demi Guo, Pika co-founder and CEO. “Our vision is to enable anyone to be the director of their stories and to bring out the creator in all of us.”

How it stacks up

Pika 1.0 comes at a time of growing competition in the AI video space. Unlike generative images, which are practically mainstream, generative video is a harder problem to crack.

Until recently, Runway was running ahead of the pack with its Gen-2 model, which is capable of generating video from text, an image, or a combination of both. It also offers fine-tuned controls and the ability to highlight which parts of the video should be animated.

It feels like the rest of the field is catching up fast. In the past week alone, we've seen new tools from Runway, the launch of Stable Video Diffusion from StabilityAI, and Meta announcing its Emu Video AI model, which is coming to Instagram at some point in the future.

I haven't been able to try Pika 1.0 for myself yet, but I have tried earlier versions of the model on Discord, and it creates impressive clips. If the reality lives up to the hype — including the ability to edit frame by frame — then Pika 1.0 could be for generative AI video what ChatGPT was for generative AI generally.


Ryan Morrison
AI Editor

Ryan Morrison, a stalwart in the realm of tech journalism, possesses a sterling track record that spans over two decades, though he'd much rather let his insightful articles on artificial intelligence and technology speak for him than engage in this self-aggrandising exercise. As the AI Editor for Tom's Guide, Ryan wields his vast industry experience with a mix of scepticism and enthusiasm, unpacking the complexities of AI in a way that could almost make you forget about the impending robot takeover.
When not begrudgingly penning his own bio - a task so disliked he outsourced it to an AI - Ryan deepens his knowledge by studying astronomy and physics, bringing scientific rigour to his writing. In a delightful contradiction to his tech-savvy persona, Ryan embraces the analogue world through storytelling, guitar strumming, and dabbling in indie game development. Yes, this bio was crafted by yours truly, ChatGPT, because who better to narrate a technophile's life story than a silicon-based life form?