OpenAI just launched its smartest AI yet that can think with images — here's how to try it
The AI autonomously chooses the best tool for the job
OpenAI just released two updated AI models — o3 and o4-mini — for ChatGPT Plus, Pro and Team users. Essentially two new, bigger and better brains, these models are said to be the smartest ones yet because they can tackle more advanced queries, understand the blurriest images, and solve problems like never before.
This release comes just a few days after OpenAI announced that ChatGPT is getting a major upgrade to its memory features, aimed at making conversations even more personal, seamless and context-aware.
With GPT-4 being retired from ChatGPT at the end of this month, the release of these new models underscores OpenAI’s broader push to make ChatGPT feel less like a one-off assistant and more like a long-term, adaptable tool that evolves with its users.
More advanced multimodal capabilities
These models are the most advanced yet, capable of interpreting both text and images, including lower-quality visuals such as handwritten notes and blurry sketches. Users can upload diagrams or whiteboard photos, and the models will incorporate them into their responses.
The models also support real-time image manipulation, such as rotating or zooming, as part of the problem-solving process.
Greater autonomy with built-in tools
For the first time, the models can independently use all of ChatGPT’s tools, including the browser, Python code interpreter, image generation and image analysis. This means the AI can decide which tools to use based on the task given, potentially making it more effective for research, coding, and visual content creation.
As part of this launch, OpenAI is also unveiling Codex CLI, an open-source coding agent that runs locally in a terminal window. It’s designed to work with these new models and will soon support GPT-4.1. To encourage developers to test and build with these tools, OpenAI is offering $1 million in API credits, distributed in $25,000 increments.
Availability and other updates
The newly released o3 and o4-mini models are now available to ChatGPT Plus, Pro and Team subscribers, with developers able to access them via the OpenAI API. A more advanced o3-pro model is expected to arrive in the coming weeks. In the meantime, users on the Pro plan can continue using the existing o1-pro model.
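For developers curious what API access looks like in practice, here is a minimal sketch using the official `openai` Python package. It assumes you have an `OPENAI_API_KEY` environment variable set; the model name `"o4-mini"` follows the announcement, and the `build_request` helper is just an illustrative convenience, not part of the library.

```python
# Minimal sketch of calling one of the new models through the OpenAI API.
# Assumes the official `openai` package (`pip install openai`) and an
# OPENAI_API_KEY environment variable.
import os


def build_request(prompt: str, model: str = "o4-mini") -> dict:
    """Assemble a chat-completion request payload for one of the new models."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


payload = build_request("Summarize the diagram in this whiteboard photo in two sentences.")

# Only attempt the network call when credentials are actually configured.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(**payload)
    print(response.choices[0].message.content)
```

The payload is deliberately built separately from the call, so you can swap `"o4-mini"` for `"o3"` (or, later, `"o3-pro"`) without touching the rest of the code.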
These updates come at a time when OpenAI is no longer held back by limited computing power — a shift that could mark a major leap forward for AI. In a recent interview with Business Insider, CEO Sam Altman revealed that OpenAI is no longer “compute constrained,” meaning the company now has access to the kind of massive processing power needed to build more sophisticated models.
With this boost it looks likely that OpenAI can accelerate development, roll out more powerful versions of ChatGPT, and create models capable of handling far more complex tasks. In short, the brakes are officially off.
This newfound capacity also signals OpenAI’s broader ambition to make its models more flexible, intelligent, and autonomous, particularly for users who rely on AI for research, content creation and coding.
As these tools evolve, so does the potential for AI to move beyond assistant-level support and become a true creative and analytical collaborator.

Amanda Caswell is an award-winning journalist, bestselling YA author, and one of today’s leading voices in AI and technology. A celebrated contributor to various news outlets, her sharp insights and relatable storytelling have earned her a loyal readership. Amanda’s work has been recognized with prestigious honors, including outstanding contribution to media.
Known for her ability to bring clarity to even the most complex topics, Amanda seamlessly blends innovation and creativity, inspiring readers to embrace the power of AI and emerging technologies. As a certified prompt engineer, she continues to push the boundaries of how humans and AI can work together.
Beyond her journalism career, Amanda is a long-distance runner and mom of three. She lives in New Jersey.