Midjourney drops surprise v6.1 update — now humans look more real than ever
Better-looking skin
Midjourney, the leading artificial intelligence image generation platform, has dropped a surprise update to its core model. In version 6.1, human skin looks more natural and rendered text is more legible.
There was some speculation the company would release v6.5 at some point this summer, but it seems it has gone for a more iterative approach instead, with v6.2 due in the coming weeks.
The biggest changes are to people, specifically how the model handles the depiction of arms, legs, hands and bodies. Its texture mapping has also been upgraded to offer new skin textures.
Having briefly played with the new version, I'd say v6.1 feels more like a major upgrade than its iterative numbering suggests. The improved text rendering alone is worth the change.
What is new in Midjourney v6.1?
Overall, there are changes to every aspect of the model. Subtle upgrades to each area improve image quality by reducing pixel artifacts, enhancing textures and improving how the model handles certain styles, such as 8-bit and retro designs.
Midjourney says the new model is 25% faster when running a standard job, and upgrades to the personalization model allow for improved nuance, surprise and accuracy over v6.
The company wrote on X: “V6.1 greatly improves image quality, coherence, text, and comes with brand-new upscaling and personalization models,” adding that “it’s smarter, faster, clearer, and more beautiful. We hope you enjoy our best model yet.”
One of the new changes is to how the upscaler works, offering better image and texture quality to improve the overall look and feel. There is also a new --q 2 mode that takes much longer to run but adds more texture to further improve the realism of an image.
Small image features are also more precise, detailed and correct, which is especially noticeable in eyes, small faces and faraway hands.
The feature I’m most excited about is improved text accuracy. This is something all AI models struggle with, but Midjourney says that if you put words in quotation marks within a prompt, it will accurately render those words on the image.
How well does Midjourney v6.1 work?
To use Midjourney v6.1, simply add --v 6.1 to the end of your prompt. This works in both the web and Discord versions and will switch the model you are using. I ran a few tests, and the most obvious changes are to skin and text rendering.
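As a purely hypothetical illustration of the syntax (not one of the prompts from my tests), something like “A portrait of a violinist in soft window light --v 6.1 --q 2” would select the new model, with the optional --q 2 flag opting into the slower, more textured quality mode.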
For the first test, I gave it the prompt: “A poster for a movie called "Cats in Space" where the subheadline is "They are feline good" showing cats on the moon in a spacesuit.” This was detailed enough to direct the model and included text requirements.
The poster came out better than I could have expected, although only two versions had the right style and one of those had an accurate rendering of the headline and subheadline.
I then asked it to show a “Wide shot of a woman playing a public free piano in a train station.” This was vague enough that if prompt following were off, it would have produced something bizarre, but it didn’t: I got exactly what I was hoping for, although one version almost had her on the tracks.
Finally, I had Midjourney v6.1 generate an image of a woman and animated it using the impressive new Runway Gen-3 Alpha image-to-video functionality. The result is one of the most real-looking AI images and videos I've created to date.
Overall, I think this is a notable improvement to Midjourney, offering subtle but significant changes in areas the previous model struggled with, and it is a great sign of what is to come in v7.
More from Tom's Guide
- OpenAI is paying researchers to stop superintelligent AI from going rogue
- Exclusive: AI breakthrough could let your next running shoes learn and adapt to how you move
- Meet Alter3 — the creepy new humanoid robot powered by OpenAI GPT-4