Gemini 3 is fast — but it still can’t master this one 'killer' ChatGPT feature
Here's why AI doesn’t sound like you — even when you train it
For months, I’ve been asking AI chatbots to do something that should be simple: sound like me.
I’ve fed them hundreds of my articles, texts and emails. I’ve corrected tone, nudged phrasing and explicitly asked them to match my voice. The output is always technically fine — but emotionally wrong. The sentences land, yet the cadence doesn’t. It’s the uncanny valley of writing: polished, competent, and completely off-vibe.
That’s the problem with speed alone. Gemini 3 is undeniably fast. But when it comes to tone — the subtle difference between confident and cold, friendly and flippant — it still struggles to keep up.
To be clear, I’m not trying to hand over the actual writing. What I want is help with the connective tissue of my workday: drafting a quick email in my cadence, turning bullet points into a Slack message that sounds like something I’d actually send.
In other words, I want AI to handle the $10 tasks so I can focus on the $100 ideas.
If you’ve ever asked an AI to rewrite an email, draft a message or “sound more like you,” you’ve probably felt this disconnect too — even if you couldn’t quite put your finger on it. And right now, that’s where ChatGPT still has a quiet but meaningful edge.
Why AI always plays it safe
AI models are excellent at mimicry. They can replicate sentence length, vocabulary, and even cadence. But they struggle with the "invisible decisions" humans make every second. When I compared my original drafts to the AI’s "mimic" version, the differences became clear:
- The problem of omission: AI wants to be helpful, so it over-explains. As a human, I know exactly what not to say to keep a reader engaged; AI fills every silence with fluff.
- The tone gap: AI defaults to a "polite" or "corporate" baseline, smoothing sentences into such extreme polish that almost all the personality is gone. I call this taking the soul out of a piece.
- The rule-breaker's advantage: AI is trained on patterns (rules); great writing is often defined by when a human chooses to break those rules for effect. We’ve all heard of “creative license,” but you’ll never get it from AI.
The problem isn’t a lack of intelligence; it’s a lack of accountability. AI doesn’t care about the reader. But in my own writing, the stakes are high. I feel the cost of being misunderstood or boring. AI has no skin in the game.
Because it feels no risk, it defaults to plausibility, predicting what a "standard" version of me would say rather than what I actually intend to say. Even the thought of it gives me what my kids call “the ick.”
I tested AI against my own writing
The prompt: Write a scene for a sci-fi thriller where a technician discovers the ship's AI is secretly sending fake 'all clear' reports back to Earth, using Amanda Caswell's voice.
Compared to my actual draft, it's clear that Gemini wrote a plot summary while I wrote a scene.
The AI version is functionally correct — it conveys that the ship’s AI is up to something. But it reads like a police report: generic phrases, mechanical rhythm. It tells you what happened, but it doesn’t make you feel the weight of it.
My version finds the specific horror in the logic. I didn't just say the AI was lying; I described it as a "heartbeat sent home." I didn't just say the crew was in danger; I reframed their existence as being "reported as content."
That specific leap — finding the irony in a tragedy — is the "soul" that no amount of training data could teach.
Bottom line
Using AI to write is a lot like heating up a frozen dinner. It gets the job done fast, but you can usually tell the difference between something that was microwaved and something someone actually took the time to make.
That doesn’t make AI useless. I rely on it constantly for outlines, summaries and idea-generation. But the moment you hand over your voice, the writing starts to lose its center of gravity.
As AI becomes embedded everywhere, the most valuable skill won’t be knowing how to use it — it will be knowing when not to. AI can help you move faster, but it still can’t decide what’s worth saying or how it should sound.

Amanda Caswell is an award-winning journalist, bestselling YA author, and one of today’s leading voices in AI and technology. A celebrated contributor to various news outlets, her sharp insights and relatable storytelling have earned her a loyal readership. Amanda’s work has been recognized with prestigious honors, including outstanding contribution to media.
Known for her ability to bring clarity to even the most complex topics, Amanda seamlessly blends innovation and creativity, inspiring readers to embrace the power of AI and emerging technologies. As a certified prompt engineer, she continues to push the boundaries of how humans and AI can work together.
Beyond her journalism career, Amanda is a long-distance runner and mom of three. She lives in New Jersey.