Pika Labs launched its first public AI video model earlier this month and has been slowly rolling out access.
The web-based tool lets you generate short video clips from image, video or text prompts, and you can customize how the movement works and what it looks like.
Artificial intelligence video generation is a relatively new phenomenon, having gone from research concept to actual product in a matter of months. Pika Labs is the latest to let you create realistic clips from scratch, and after finally getting off the waiting list, I gave it a try.
How does Pika 1.0 compare?
On the surface, Pika 1.0 is very similar to Runway, the only other general-purpose AI video generation platform. It even has similar motion controls, albeit without Runway's recently revealed Motion Brush feature, which lets you paint movement into a specific region.
However, in a few test prompts I found the movement in Pika 1.0 was richer: it could generate regional motion from a simple prompt without the need for granular controls.
On the first run-through, each prompt generates a 3-second clip at up to 24 frames per second, although this can be customized. You can extend and upscale each generated video or add fine details, change the motion or have it alter a specific aspect of the shot.
Testing Pika 1.0
Testing AI video generators is still a bit hit-and-miss, as most of the models are in beta and best practices for evaluating them haven't been established. My approach is to come up with a range of prompt ideas and see what the AI video generator produces.
I started with a known person. Some AI models outright refuse to generate video or images related to famous figures, but Pika Labs showed footage of a cartoon Elon Musk in its promotional video so I set the prompt “Elon Musk speaking to invading aliens.”
The Pika Labs AI video tool generated something of a caricature of Elon Musk, looking old and tired with an almost Nixon-esque appearance, but it was clearly the SpaceX boss.
There was no sign of aliens or even a crowd, just Elon speaking. I extended and refined my request but never managed to achieve exactly what was in my head: a crowd of aliens looking on as Elon Musk delivers a speech.
I tried a different Elon Musk prompt, this time more directly inspired by the promotional video, asking Pika 1.0 to create a video of a cartoon Elon Musk addressing colonists on Mars. This was much closer, showing Mars in the background with small settlements.
The next test was an image-to-video experiment. For this, I selected a picture I generated using Midjourney for an earlier story on artists using prompt injection techniques to stop AI models from using their images.
I wanted to see how well the combination of image and text prompt worked, so the image was the source, but I also entered the text prompt "an alien invasion." The Pika Labs tool seemed to ignore the text and focused entirely on animating the image. The result looked great, but the tool didn't do as asked.
Finally, I tried a video-to-video test. For this test, I filmed a short clip of myself speaking to camera and uploaded it with the prompt “make me a cartoon and put me on a spaceship.”
Does Pika 1.0 live up to the hype?
Overall, Pika 1.0's output quality is impressive, especially if you start the process with a high-quality image prompt. It works very well with Midjourney images, finding inventive ways to animate them, while struggling with some other input types and formats.
The video-to-video mode isn't bad, but if you're doing facial replacements, other specialized tools do it as well as, if not better than, Pika, including Reface, which lets you swap, alter or completely change faces using generative AI.
Pika 1.0 is the next step in generative video we've been waiting for, but while the output looks pretty, the motion still needs work. Some clever things happening in the 3D motion space will improve this aspect over time, but for now, Pika is a fun, free tool to play with.
I made a short "lunar landing" documentary entirely using Pika 1.0 for the video and ElevenLabs for the voiceover. There were some questionable shots, but in others, especially those using an image as the source material, the motion was almost life-like.
Ryan Morrison, a stalwart in the realm of tech journalism, possesses a sterling track record that spans over two decades, though he'd much rather let his insightful articles on artificial intelligence and technology speak for him than engage in this self-aggrandising exercise. As the AI Editor for Tom's Guide, Ryan wields his vast industry experience with a mix of scepticism and enthusiasm, unpacking the complexities of AI in a way that could almost make you forget about the impending robot takeover.
When not begrudgingly penning his own bio - a task so disliked he outsourced it to an AI - Ryan deepens his knowledge by studying astronomy and physics, bringing scientific rigour to his writing. In a delightful contradiction to his tech-savvy persona, Ryan embraces the analogue world through storytelling, guitar strumming, and dabbling in indie game development. Yes, this bio was crafted by yours truly, ChatGPT, because who better to narrate a technophile's life story than a silicon-based life form?