Pika Labs was formed out of Stanford University's AI lab, offering an early version of its model through Discord.
The new release comes off the back of a $55 million investment round and will also be available through a web platform for the first time.
The company described its goal as wanting to "enable everyone to be the director of their own stories and to bring out the creator in each of us."
Starting with a waiting list
Pika Labs is starting with a waiting list for version 1.0. The first users will be those making the most active and impressive use of the earlier model through Discord. That rollout starts this week. Once the model is "stable", the company will begin rolling it out to others on the list.
There are already half a million Pika users accessing the original model through Discord and producing millions of videos per week. Pika 1.0 is a significant upgrade adding the ability to generate videos in a range of styles including 3D animation, anime and cinematic.
A promotional video for Pika 1.0 shows clothing being changed on the fly, the style of a video clip being updated, and even real people such as Elon Musk being depicted as cartoon characters.
What can it be used for?
🌟AI 'Modify Region' Example🌟 https://t.co/JHRrintgm5 pic.twitter.com/mVnW1oDDoU — December 5, 2023
The company has started to demonstrate a handful of its more distinctive features, including modifying specific regions within a video.
A short clip shared on X shows a woman typing on a phone as the background is changed to include a Christmas tree. Her clothing is then changed to put her in a Christmas jumper.
This suggests the model will have advanced in-scene editing capabilities, generating new objects within an existing shot. This was previously only possible in image AI, first demonstrated by OpenAI with DALL-E 2.
It will also include the ability to edit existing videos using natural language prompts, something also promised with Emu Video from Meta and available to a lesser extent with Runway.
What Pika 1.0 promises is generative video on par with the advances in generative images, allowing users to turn any idea into a short film.
Whether you want to turn a photo you've taken into a cartoon or simply make a video of a relative more festive, the proof will be in actual use.
I haven’t been able to try the model myself yet, so I can’t say how long generation takes, how accurate it is, or what the model's limits are. If it lives up to its promise, it will change how we make videos forever.
Ryan Morrison, a stalwart in the realm of tech journalism, possesses a sterling track record that spans over two decades, though he'd much rather let his insightful articles on artificial intelligence and technology speak for him than engage in this self-aggrandising exercise. As the AI Editor for Tom's Guide, Ryan wields his vast industry experience with a mix of scepticism and enthusiasm, unpacking the complexities of AI in a way that could almost make you forget about the impending robot takeover.
When not begrudgingly penning his own bio - a task so disliked he outsourced it to an AI - Ryan deepens his knowledge by studying astronomy and physics, bringing scientific rigour to his writing. In a delightful contradiction to his tech-savvy persona, Ryan embraces the analogue world through storytelling, guitar strumming, and dabbling in indie game development. Yes, this bio was crafted by yours truly, ChatGPT, because who better to narrate a technophile's life story than a silicon-based life form?