OpenAI to make GPT-4o Advanced Voice available by the end of the month to a select group of users
Sam Altman, OpenAI CEO, says the alpha release lands this month
OpenAI CEO Sam Altman says the first users will start to get access to GPT-4o Advanced Voice in the next couple of weeks, but this will be a limited "alpha" rollout.
The company is testing the full capabilities of GPT-4o, a new type of Omni model released during its Spring Update in May. Unlike GPT-4, this natively multimodal model can understand speech directly without converting it into text.
This makes GPT-4o both faster and significantly more accurate when acting in the role of voice assistant, even allowing it to pick up on tone and vocal intonations during a conversation.
Users have been waiting patiently for access, but OpenAI says safety testing must be completed first. Some have briefly gained access, and there have been multiple demos of its capabilities, but most users won’t get it until later this year.
What is GPT-4o Advanced Voice?
"alpha starts later this month, GA will come a bit after" — Sam Altman, July 18, 2024
GPT-4o Advanced Voice is an entirely new type of voice assistant, similar to but larger than the recently unveiled French model Moshi, which argued with me over a story.
In demos of the model, we’ve seen GPT-4o Advanced Voice create custom character voices, generate sound effects while telling a story and even act as a live translator.
This native speech ability is a significant step in creating more natural AI assistants. In the future, it will also come with live vision abilities, allowing the AI to see what you see.
Other use cases for Advanced Voice include having it act as a very patient language teacher, correcting your pronunciation directly and helping you improve your accent.
“ChatGPT’s advanced Voice Mode can understand and respond with emotions and non-verbal cues, moving us closer to real-time, natural conversations with AI. Our mission is to bring these new experiences to you thoughtfully,” OpenAI said in a statement last month.
Why the delay in launching GPT-4o Advanced Voice?
OpenAI is one of the more cautious artificial intelligence labs, taking significant time to safety test, verify and put guardrails in place for any major new model.
Altman has also called for regulation of frontier-style models like the upcoming GPT-5 or world models like Sora due to the risk they present to society. This caution has allowed other companies to begin to catch up with OpenAI, and GPT-4 is no longer the only top-tier model.
The company was concerned that GPT-4o Advanced Voice, without appropriate guardrails, could share potentially harmful information or be misused in unexpected ways. To address this, it is gradually releasing the feature to trusted users first, then more widely over time.
“As part of our iterative deployment strategy, we'll start the alpha with a small group of users to gather feedback and expand based on what we learn,” a spokesperson explained.
“We are planning for all Plus users to have access in the fall. Exact timelines depend on meeting our high safety and reliability bar. We are also working on rolling out the new video and screen sharing capabilities we demoed separately, and will keep you posted on that timeline.”
More from Tom's Guide
- I just tried Runway’s new AI voiceover tool — and it’s way more natural sounding than I expected
- Hume AI brings its creepy emotional AI chatbot to iPhone
- ChatGPT Voice could change storytelling forever — new video shows it creating custom character voices

Ryan Morrison, a stalwart in the realm of tech journalism, possesses a sterling track record that spans over two decades, though he'd much rather let his insightful articles on AI and technology speak for him than engage in this self-aggrandising exercise. As the former AI Editor for Tom's Guide, Ryan wields his vast industry experience with a mix of scepticism and enthusiasm, unpacking the complexities of AI in a way that could almost make you forget about the impending robot takeover.
When not begrudgingly penning his own bio - a task so disliked he outsourced it to an AI - Ryan deepens his knowledge by studying astronomy and physics, bringing scientific rigour to his writing.