Anthropic looks to beat GPT-5 and Grok 4 with this one major upgrade

GPT-5 might be the big talking point in AI right now, but Anthropic’s Claude is looking for ways to fight back in a crowded market. The company’s latest move is a major increase in prompt length: the context window, meaning the amount of text the model can consider at once, has been raised to 1 million tokens. The feature, available exclusively to enterprise customers, is partly an effort to bring developers over to Anthropic’s tools.
That is, as it sounds, absolutely massive: it translates to roughly 750,000 words. It is about five times Claude’s previous limit and more than double what GPT-5 currently offers. However, the new capability will only be made available through Anthropic’s cloud partners, including Amazon Bedrock and Google Cloud’s Vertex AI, which means it will only reach a small slice of Anthropic’s users.
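For developers wondering what a request that size looks like in practice, here is a minimal sketch using Anthropic’s Python SDK. The model identifier and the long-context beta flag in this example are assumptions for illustration only; check Anthropic’s documentation for the exact values your account supports.

```python
# Minimal sketch: sending a very large prompt to Claude via Anthropic's Python SDK.
# The model ID and beta flag below are illustrative assumptions; confirm the real
# identifiers in Anthropic's documentation before relying on them.
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Load a large document set or codebase. At roughly 0.75 words per token,
# a full 1-million-token prompt works out to the ~750,000 words cited above.
with open("entire_codebase.txt") as f:
    big_context = f.read()

response = client.beta.messages.create(
    model="claude-sonnet-4-20250514",   # assumed model identifier
    betas=["context-1m-2025-08-07"],    # assumed long-context beta flag
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": f"{big_context}\n\nSummarize the main modules in this codebase.",
    }],
)
print(response.content[0].text)
```

The point of the larger window is that an entire project like this can go into a single request, rather than being chopped up and summarized in pieces.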
Developer tools are an area where Anthropic has seen significant growth in recent years, building one of the more successful business-focused AI offerings and selling it to partners including Microsoft’s GitHub Copilot, Windsurf, and Anysphere’s Cursor.
However, while Anthropic has a firm grip on this market right now, the competition is heating up, even with the new, longer context window. Both Grok 4 and GPT-5 claim to offer some of the best coding capabilities of any AI tool available today.
With the rollout of GPT-5, OpenAI, which is often people’s first choice, could steal away business. OpenAI has largely been a consumer-focused brand, whereas Anthropic makes much of its money on the business side, but CEO Sam Altman has shown interest in the enterprise market, too.
To keep pace with both Grok and ChatGPT, Anthropic recently announced Claude Opus 4.1, which brought improvements to the model’s coding capabilities.
Right now, the jump in context length does give Anthropic a major advantage. However, it isn’t an entirely unique feature: Google’s Gemini 2.5 Pro offers a 2-million-token context window, and Meta’s Llama 4 Scout goes up to a whopping 10 million tokens.
Standing out in this market is challenging. The larger context window is a meaningful improvement, but on its own it is unlikely to be decisive, especially since research suggests that models often struggle to make good use of extremely long prompts, even when the context window technically allows them. Either way, Anthropic is looking for ways to stay competitive.

Alex is the AI editor at Tom’s Guide. Dialed into all things artificial intelligence right now, he knows the best chatbots, the weirdest AI image generators, and the ins and outs of one of tech’s biggest topics.
Before joining the Tom’s Guide team, Alex worked for the brands TechRadar and BBC Science Focus.
He was highly commended in the Specialist Writer category at the BSMEs in 2023 and was part of a team that won Best Podcast at the BSMEs in 2025.
In his time as a journalist, he has covered the latest in AI and robotics, broadband deals, the potential for alien life, the science of being slapped, and just about everything in between.
When he’s not trying to wrap his head around the latest AI whitepaper, Alex pretends to be a capable runner, cook, and climber.