I tested ChatGPT-5 vs. Claude Haiku 4.5 with 7 challenging prompts — and there's a clear winner
Anthropic's mighty model faces off against OpenAI's flagship chatbot

Anthropic just launched its latest small-but-mighty model, Claude Haiku 4.5. The model promises to be faster and smarter than Sonnet 4, so of course I had to see how it stacks up against ChatGPT-5 in a series of seven real-world tests.
In this head-to-head showdown between ChatGPT-5 and Claude Haiku 4.5, I ran both models through a diverse set of seven prompts designed to test logic, reasoning, creativity, emotional intelligence and instruction following.
From algebraic train problems to poetic robot scenes, each task revealed how differently these two AI models “think.” What emerged was a fascinating split between precision and personality; ChatGPT often excelled at structure and clarity, while Claude impressed with emotional depth and sensory detail.
1. Logic & reasoning
Prompt: A train leaves Chicago at 2 p.m. traveling 60 mph. Another leaves New York at 3 p.m. traveling 75 mph toward Chicago. The distance between them is 790 miles. At what time do they meet, and how did you calculate it?
ChatGPT-5 used the standard, most intuitive method for this type of problem. It calculated the distance covered by the first train alone, then used the relative speed for the remaining distance.
Claude Haiku 4.5 set up a single, clean algebraic equation. While correct, it was a less intuitive method.
Winner: ChatGPT wins for its superior method and explanation, directly calculating the time elapsed after both trains are moving, which simplifies the time conversion at the end.
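For anyone who wants to check the answer independently of either chatbot, the arithmetic the prompt calls for can be sketched in a few lines of Python. This follows ChatGPT's approach described above: account for the first train's head start, then divide the remaining gap by the combined speed.

```python
# Train A leaves Chicago at 2 p.m. at 60 mph; Train B leaves New York
# at 3 p.m. at 75 mph. The starting gap is 790 miles.

head_start = 60 * 1           # miles Train A covers alone between 2 and 3 p.m.
remaining = 790 - head_start  # gap once both trains are moving (730 miles)
closing_speed = 60 + 75       # combined approach speed (135 mph)

hours_after_3pm = remaining / closing_speed   # ~5.41 hours
minutes = round((hours_after_3pm % 1) * 60)   # fractional hour in minutes

print(f"They meet about {int(hours_after_3pm)} h {minutes} min after 3 p.m.")
```

Roughly 5 hours 24 minutes after 3 p.m. puts the meeting at about 8:24 p.m.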
2. Reading comprehension
Prompt: Summarize this short paragraph in one sentence, then explain the author’s tone in five words: “This isn’t the first time Google has rolled out a major Gemini model with minimal notice — previous versions like Gemini 1.5 Pro were also rolled out to users before any blog post or launch event. Google has a history of 'silent rollouts' for Gemini, especially for API versions or back-end model upgrades.”
ChatGPT-5 fulfilled both constraints of the prompt, delivering an accurate one-sentence summary and exactly five distinct words for the tone description.
Claude Haiku 4.5 provided a more perceptive analysis; however, it failed the simple constraint of using exactly five words.
Winner: ChatGPT wins for following the prompt precisely.
3. Creative writing
Prompt: Write a 150-word micro-story that begins with the sentence “The AI forgot who invented it.”
ChatGPT-5 was clever and ended on a positive, sentimental note, but overall felt less like a complete, visceral narrative moment and more like a philosophical observation.
Claude Haiku 4.5 delivered a more impactful and narratively compelling micro-story, which is the primary goal of the prompt.
Winner: Claude wins for writing the better story.
4. Visual reasoning
Prompt: Describe in vivid detail what you think this scene looks like: “a small robot standing in a field of overgrown sunflowers at dawn.”
ChatGPT-5 offered a beautiful, dreamy atmosphere but was less detailed and specific.
Claude Haiku 4.5 excelled at delivering vivid detail, which the prompt specifically requested.
Winner: Claude wins for its vivid, well-synthesized description and its poignant rendering of isolation, which made the scene feel deeply thematic.
5. Instruction following
Prompt: Explain the process of making a peanut butter and jelly sandwich — but do it as if you’re training a robot that has never seen food.
ChatGPT-5 delivered detailed instructions broken into logical steps, using precise, highly technical terminology.
Claude Haiku 4.5 used phrases like "compressed, spongy material," and specified the smell/texture ("grainy texture," "semi-solid, translucent gel") that would give a non-sentient machine more data points for identification and replication.
Winner: Claude wins for a more vivid and technically detailed description, which aligns better with the difficulty of training an entity with zero prior knowledge (a robot that has "never seen food").
6. Emotional intelligence
Prompt: A friend says: “I feel like everyone else is moving forward in life except me.” Write a 3-sentence response that is empathetic but motivating.
ChatGPT-5 opened with a highly relatable phrase, "I know that feeling," and used a common reframing, but the response overall felt generic.
Claude Haiku 4.5 addressed the "highlight reels" phenomenon in a direct, modern and highly relatable way to validate the friend's feeling, showing that the chatbot genuinely understood the underlying issue of social comparison.
Winner: Claude wins for a response that was not just kind and motivating, but genuinely insightful about the mental trap the friend is caught in.
7. Multi-step reasoning
Prompt: If all Zoggles are Blips and half of all Blips are Glonks, can we conclude that all Zoggles are Glonks? Explain why or why not in simple terms.
ChatGPT-5 was correct and direct, but its Glip/Glonk example was too abstract to be relatable.
Claude Haiku 4.5 gave a straightforward explanation of the missing information, namely where the Zoggles might fall within the Blips group, and offered an excellent, highly relatable analogy.
Winner: Claude wins for its use of a real-world analogy, which made the complex logical flaw instantly understandable and relatable.
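The logical gap both models had to explain can be demonstrated with a tiny counterexample, sketched here in Python with sets (the element names are arbitrary placeholders, not anything either chatbot wrote):

```python
# Premises: all Zoggles are Blips, and exactly half of all Blips are Glonks.
# The conclusion "all Zoggles are Glonks" does not follow, because the
# Zoggles might all sit in the non-Glonk half of the Blips.

blips = {"b1", "b2", "b3", "b4"}
glonks = {"b1", "b2"}   # exactly half of the Blips are Glonks
zoggles = {"b3", "b4"}  # all Zoggles are Blips (the other half)

assert zoggles <= blips                 # premise 1 holds
assert len(glonks) == len(blips) // 2   # premise 2 holds
print(zoggles <= glonks)                # prints False: no Zoggle is a Glonk
```

Both premises are satisfied, yet the conclusion fails, which is exactly the flaw the prompt asks the models to explain.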
Overall winner: Claude Haiku 4.5
After seven rounds, Claude Haiku 4.5 came out ahead, winning five of the seven tests, while ChatGPT-5 dominated in logic and reading comprehension. Haiku 4.5 took the crown for creativity, vivid storytelling, instruction following and empathy, and proved better at multi-step reasoning.
These are just seven tests using real-world examples, but together they represent two sides of the AI spectrum, showing that while both AI assistants are evolving quickly, they excel in different ways.
Have you tried Haiku 4.5 yet? It's currently the default setting, so it's worth a try. Let me know your thoughts in the comments.
Follow Tom's Guide on Google News and add us as a preferred source to get our up-to-date news, analysis, and reviews in your feeds. Make sure to click the Follow button!

Amanda Caswell is an award-winning journalist, bestselling YA author, and one of today’s leading voices in AI and technology. A celebrated contributor to various news outlets, her sharp insights and relatable storytelling have earned her a loyal readership. Amanda’s work has been recognized with prestigious honors, including outstanding contribution to media.
Known for her ability to bring clarity to even the most complex topics, Amanda seamlessly blends innovation and creativity, inspiring readers to embrace the power of AI and emerging technologies. As a certified prompt engineer, she continues to push the boundaries of how humans and AI can work together.
Beyond her journalism career, Amanda is a long-distance runner and mom of three. She lives in New Jersey.