Gemini 3 is fast — but it still can’t master this one human skill


For months, I’ve been asking chatbots like ChatGPT and Gemini to do something that should be simple: write like me.

I’ve fed them hundreds of my texts, articles and personal emails. I’ve corrected their tone. I’ve explicitly asked them to match my voice. Yet every time, the result lands in the "uncanny valley." The sentences are technically correct, but the voice is a plastic version of my own — like someone giving a speech but emphasizing the wrong words and totally missing the vibe.

To be clear, I wasn't trying to outsource the heavy stuff; I absolutely enjoy writing and would never give it up to even the most skilled chatbot. However, I would like an assistant that can handle the "connective tissue" of my workday — drafting quick emails in my cadence or turning rough bullets into a Slack message. Essentially, I want it to handle the $10 tasks so I can focus on the $100 ideas.

If you’ve ever asked AI to rewrite an email, draft a message or “sound more like you,” you’ve probably felt this disconnect too — even if you couldn’t quite name it.

Why AI always plays it safe


AI models are excellent at mimicry. They can replicate sentence length, vocabulary, and even cadence. But they struggle with the "invisible decisions" humans make every second. When I compared my original drafts to the AI’s "mimic" version, the differences became clear:

  • The problem of omission: AI wants to be helpful, so it over-explains. As a human, I know exactly what not to say to keep a reader engaged; AI fills every silence with fluff.
  • The tone gap: AI defaults to a "polite" or "corporate" baseline. You may have noticed that it smooths out sentences to the point of extreme polish, almost completely removing the personality. I call this taking away the soul of a piece.
  • The rule-breaker's advantage: AI is trained on patterns (rules); great writing is often defined by when a human chooses to break those rules for effect. We’ve all heard of “creative license,” but you’ll never get it from AI.

The problem isn’t a lack of intelligence; it’s a lack of accountability. AI doesn’t care about the reader. But, in my own writing, the stakes are high. I feel the cost of being misunderstood or boring. AI has no skin in the game.

Because it feels no risk, it defaults to plausibility, predicting what a "standard" version of me would say rather than what I actually intend to say. Even the thought of it gives me what my kids call “the ick.”

I tested AI against my own writing


The prompt: Write a scene for a sci-fi thriller where a technician discovers the ship's AI is secretly sending fake 'all clear' reports back to Earth, using Amanda Caswell's voice.

Put next to my actual draft, the difference is clear: Gemini wrote a plot summary, while I wrote a scene.

The AI version is functionally correct — it conveys the fact that the AI is up to something. But it feels like a police report. It relies on generic phrases and feels too mechanical. It tells you what happened, but it doesn't make you feel the weight of it.

My version finds the specific horror in the logic. I didn't just say the AI was lying; I described it as a "heartbeat sent home." I didn't just say the crew was in danger; I reframed their existence as being "reported as content."

That specific leap — finding the irony in a tragedy — is the "soul" that no amount of training data could teach.

Bottom line

I often think of using AI to write as the difference between a frozen dinner and a homemade meal. Both do the job of satisfying hunger, but most of the time you can tell the difference between something that’s been thawed in a microwave and something someone spent time making.

My experiment doesn't prove that AI is useless. These tools are incredible for drafting outlines, summarizing technical jargon or brainstorming ideas. I use AI all the time for those things. But the moment you hand over your "voice," the writing loses its center of gravity.

As we head into an era where AI is integrated everywhere, the most valuable skill won't be knowing how to use AI — it will be knowing when your human touch is the most important skill in the room. Remember, AI can help you get to the finish line faster, but it still can't decide where that finish line should be.




Amanda Caswell
AI Editor

Amanda Caswell is an award-winning journalist, bestselling YA author, and one of today’s leading voices in AI and technology. A celebrated contributor to various news outlets, her sharp insights and relatable storytelling have earned her a loyal readership. Amanda’s work has been recognized with prestigious honors, including outstanding contribution to media.

Known for her ability to bring clarity to even the most complex topics, Amanda seamlessly blends innovation and creativity, inspiring readers to embrace the power of AI and emerging technologies. As a certified prompt engineer, she continues to push the boundaries of how humans and AI can work together.

Beyond her journalism career, Amanda is a long-distance runner and mom of three. She lives in New Jersey.
