Google drops new Gemini model and it goes straight to the top of the LLM leaderboard

(Image: Google Gemini. Image credit: Google)

Google is constantly updating Gemini, releasing new versions of its AI model family every few weeks. The latest is so good it went straight to the top of the Imarena Chatbot Arena leaderboard — toppling the latest version of OpenAI's GPT-4o.

Update: Gemini under fire after telling user to die.

Previously known as the LMSys arena, the Chatbot Arena is a platform that lets AI labs pit their best models against one another in blind head-to-head matchups. Users vote on the better response but don't find out which model is which until after they've voted.

The new model from Google DeepMind, catchily named Gemini-Exp-1114, has matched the latest version of GPT-4o and exceeded the capabilities of OpenAI's o1-preview reasoning model.

The top 5 models in the arena are all versions of OpenAI or Google models. The first model on the leaderboard not made by either of those companies is xAI's Grok 2.

The success of this new model comes as Google finally releases a Gemini app for iPhone, which beat the ChatGPT app in our Gemini vs. ChatGPT 7-round face-off.

How well does the new model work?

The latest Gemini model seems to perform particularly well at math and vision tasks, which makes sense, as these are areas in which Gemini models have traditionally excelled.

Gemini-Exp-1114 isn't currently available in the Gemini app or website. You can only access it by signing up for a free Google AI Studio account (the platform aimed at developers wanting to try new ideas).

I'm also not sure whether this is a version of Gemini 1.5 or an early glimpse of Gemini 2, which is expected next month. If it is the latter, the improvement over the previous generation might not be as dramatic as some expected.

However, it is doing well in technical and creative areas according to benchmarks, which ties in to the idea that it's going to be useful for reasoning and for managing agents. It ranks first in math, hard prompts, creative writing and vision.

Unlike other benchmarks, the Chatbot Arena is based on human perceptions of performance and output quality rather than rigid testing against fixed datasets.

Whether this is just a new version of Gemini 1.5 Pro or an early insight into the capabilities of Gemini 2, it's going to be an interesting few months in AI land.

More from Tom's Guide

Ryan Morrison
AI Editor

Ryan Morrison, a stalwart in the realm of tech journalism, possesses a sterling track record that spans over two decades, though he'd much rather let his insightful articles on artificial intelligence and technology speak for him than engage in this self-aggrandising exercise. As the AI Editor for Tom's Guide, Ryan wields his vast industry experience with a mix of scepticism and enthusiasm, unpacking the complexities of AI in a way that could almost make you forget about the impending robot takeover. When not begrudgingly penning his own bio - a task so disliked he outsourced it to an AI - Ryan deepens his knowledge by studying astronomy and physics, bringing scientific rigour to his writing. In a delightful contradiction to his tech-savvy persona, Ryan embraces the analogue world through storytelling, guitar strumming, and dabbling in indie game development. Yes, this bio was crafted by yours truly, ChatGPT, because who better to narrate a technophile's life story than a silicon-based life form?