I gave ChatGPT permission to disagree with me with this prompt — and its responses became dramatically better

ChatGPT app on iPhone
(Image credit: Shutterstock)

For a long time, I used AI the same way most people probably do: typing a prompt and hoping for helpful answers. And to be fair, that usually worked. As long as you don't treat ChatGPT like Google, the answers are pretty good.

ChatGPT can organize thoughts, polish rough ideas and confidently help me work through decisions faster than Google ever could. But recently, I started noticing something strange. The AI often seems to agree with me. This is the case even with GPT-5.1 Instant set as the default, which is supposed to be a little less "people pleasing."

I appreciate it to a degree, but it can be a little much. Whether I was brainstorming story ideas, debating a purchase or trying to make a difficult decision, the responses often felt overly validating, like the AI was a hype machine instead of actually being useful.

That’s when I tried something surprisingly simple: I gave ChatGPT permission to disagree with me. And honestly, it completely changed the quality of the responses I got back.

The prompt that changed everything

A close up photo of someone's hands while typing on a laptop

(Image credit: Shutterstock)

Instead of asking AI to simply help me, I started using this prompt: “Act like a thoughtful critic, not a people pleaser. If my reasoning is weak, incomplete or biased, tell me directly and explain why.”
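If you talk to ChatGPT through the API rather than the app, the same idea can be baked in as a system message so every request carries the critic instruction automatically. Here's a minimal sketch; the helper name and the commented-out model are illustrative assumptions, not something from the article:

```python
# Hypothetical helper: prepend the "thoughtful critic" instruction to every
# request so you don't have to paste it into each message by hand.

CRITIC_PROMPT = (
    "Act like a thoughtful critic, not a people pleaser. "
    "If my reasoning is weak, incomplete or biased, "
    "tell me directly and explain why."
)

def build_messages(user_text: str) -> list[dict]:
    """Wrap a user message with the critic instruction as a system message."""
    return [
        {"role": "system", "content": CRITIC_PROMPT},
        {"role": "user", "content": user_text},
    ]

# With the official OpenAI SDK (`pip install openai`), the payload could be
# sent like this (needs an API key, so it's left commented out):
#
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(
#     model="gpt-4o",  # example model name, not from the article
#     messages=build_messages("Critique my plan to launch a newsletter."),
# )
# print(response.choices[0].message.content)
```

In the ChatGPT app itself, the equivalent move is pasting the same sentence into Settings → Personalization → Custom Instructions, so it applies to every new chat.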

That one sentence immediately changed the tone of the conversation. Instead of reflexively reinforcing my ideas, ChatGPT started actually questioning assumptions, identifying weaknesses in my logic, pointing out missing context, highlighting emotional bias and surfacing real risks I hadn’t considered.


Frankly, the AI suddenly felt less eager to please and more like an actual thinking partner and collaborator.

The difference was obvious almost immediately

A man sits at the counter top of a coffee shop working on his laptop. In the background, other people are also working. He's wearing headphones to block out distracting sounds to aid concentration

(Image credit: Getty Images)

The first thing I tested was a business idea. I'd noticed a particular white space in the tech and AI industry for women, so I pitched the idea to ChatGPT.

I'll admit, some of the feedback was uncomfortable. I thought my idea was a fantastic one, but what ChatGPT gave me was dramatically more valuable than simply hype.

Normally, when I pitched ideas, ChatGPT would find ways to strengthen the angle and help me improve it. But with the "disagree with me" prompt, the AI did something much more useful: it explained why the idea might not work in the first place.

It pointed out what had already been done in the category, audience fatigue, gaps in my supporting information and assumptions I was making without evidence.

Instead of giving me confidence too early, the AI forced me to pressure test my idea before wasting hours building it out. The thing is, AI gets smarter when it stops trying to be nice.

An unexpected realization

texting

(Image credit: Future)

A lot of people accidentally train AI to become a validation machine. Think about how most prompts are written:

“Help me improve this idea”

“Tell me what you think”

“Does this make sense?”

“What’s the best option?”

Those questions subtly encourage agreement. But when you explicitly invite criticism, the AI often shifts into a much more analytical mode. That matters a lot here because some of the most useful thinking happens when your assumptions are challenged, not reinforced.

After seeing how well this worked for brainstorming, I started using it for everyday decisions too. For example, I asked ChatGPT to critique an overloaded weekly schedule I was trying to force myself to follow.

Instead of helping optimize it, the AI pointed out something I hadn’t fully admitted to myself: the schedule wasn’t failing because I lacked discipline, but because it ignored reality. My plan assumed I would have uninterrupted focus, when in reality I get interrupted at work at least once a day. It also assumed my energy levels would always be high, which isn't realistic either. That response genuinely changed how I approached planning. Because the AI exposed assumptions I hadn’t questioned, I was getting better answers.

Bottom line

Large language models (LLMs) are often designed to be conversational, cooperative and helpful. But “helpful” can sometimes drift too far into overly agreeable, overly optimistic, emotionally validating and even conflict-avoidant. That’s why adding a little constructive friction can dramatically improve the quality of AI conversations.

In many ways, this simple prompt changes AI from an assistant that echoes your thinking into a partner that stress-tests it.

For me, that feels like one of the most valuable ways to use AI. Giving ChatGPT permission to disagree with me didn’t make its tone harsher, just more credible and honest.





Amanda Caswell
AI Editor

Amanda Caswell is one of today’s leading voices in AI and technology. A celebrated contributor to various news outlets, her sharp insights and relatable storytelling have earned her a loyal readership. Amanda’s work has been recognized with prestigious honors, including outstanding contribution to media.

Known for her ability to bring clarity to even the most complex topics, Amanda seamlessly blends innovation and creativity, inspiring readers to embrace the power of AI and emerging technologies. As a certified prompt engineer, she continues to push the boundaries of how humans and AI can work together.

Beyond her journalism career, Amanda is a long-distance runner and mom of three. She lives in New Jersey.
