ChatGPT is becoming more human again — here's why a psychologist thinks that could be a problem
Is OpenAI a help or a hindrance for mental health?

OpenAI CEO Sam Altman recently posted on X that the team plans to rethink its approach to mental health issues and to offer more features that would “treat adult users like adults.” That includes access to erotica.
Since then, the post has blown up, and Altman has had to clarify his wording, stressing that OpenAI would still be cautious about the mental health of its users but wanted to give more control to the user.
Or, as Altman put it in his follow-up post, “we are not the elected moral police of the world. In the same way that society differentiates other appropriate boundaries (R-rated movies, for example) we want to do a similar thing here.”
What all of this means right now is somewhat unclear. After serious concerns earlier in the year about the danger ChatGPT posed to people struggling with mental health problems, OpenAI had to tighten its controls. Now, as it looks to loosen them again, what will that mean for those users?
Altman's follow-up post on X (October 15, 2025) read: “Ok this tweet about upcoming changes to ChatGPT blew up on the erotica point much more than I thought it was going to! It was meant to be just one example of us allowing more user freedom for adults. Here is an effort to better communicate it: As we have said earlier, we are…” (https://t.co/OUVfevokHE)
Is ChatGPT set up to deal with mental health?
“Tools like ChatGPT have tremendous potential, but their progress must be matched with responsibility — especially when it comes to using ChatGPT for mental health issues,” Dr Patapia Tzotzoli, a clinical psychologist and founder of My Triage Network, told Tom’s Guide.
“ChatGPT is uniquely powerful: It is instant, private and always available, offering calm, non-judgemental, and supportive replies. All these are qualities that feel safe and validating to any user.”
AI chatbots, not just ChatGPT, are a double-edged sword in this field. They have proven to be successful companions for people facing loneliness and have helped users get through difficult times.
However, they can be equally problematic in this area, as they have yet to learn how to balance offering support with knowing when to stop.
“If ChatGPT is made to act more human-like, use friendly language, emojis, or adopt a “friend” persona, this emotional realism may increase its appeal while blurring the line between simulation and genuine understanding,” Tzotzoli explained.
“ChatGPT cannot perceive or contain emotion, cannot pick up on nuance, and cannot safely manage or assess risk. Its agreeable style designed to follow the user’s lead can inadvertently reinforce distorted beliefs or unhelpful assumptions, especially if users over-trust its tone of empathy.”
Concern has been raised before about how these models are trained and how they learn to handle situations. While ChatGPT can offer general support for emotional problems, it can struggle to give individual answers that work for each person.
“ChatGPT is a carefully trained language model seeking reward and tuned by feedback from other humans, including OpenAI employees. This is particularly important because ChatGPT as a machine learns from human feedback, and thus depends heavily on the quality of this feedback, which may be inconsistent and introduce bias,” Tzotzoli said.
“As a result, it can lead machines to optimize for reward signals rather than truth or usefulness”.
In other words, ChatGPT isn’t always looking for the answer that best serves long-term growth and support, but the one that will get a positive reaction there and then. For most tasks, this is fine, but it can be problematic when a user needs to be challenged rather than simply agreed with.
AI models, including ChatGPT, are getting better at perceiving emotions and offering appropriate responses. In some cases, they are being tuned to push back where necessary, but it is a tricky balance to strike and won’t always be right.
This is not to say that these tools can’t have a place. “The opportunity lies in using AI for support but not to replace human (professional) interactions, especially when it’s about one’s mental health care,” Tzotzoli explained.
“The question is how we can utilize its potential by integrating it ethically, transparently, and safely into everyday life conversations about mental health, where technology can support but not substitute real-life connections and expert in-person help.”
In his post announcing the update, Altman made the point that users will be able to get a more personalized experience from ChatGPT. While his follow-up made clear that this would not apply to people with mental health concerns, it wasn’t clear how it would be decided who is at risk, or how much personalization would be allowed.
“The ability to decide how human-like an AI behaves may feel like personalisation, but it also carries risks. The freedom to “tune” behavior can easily be influenced by what people or companies seek to control, shape, or monetize. The real challenge isn’t technical - it’s ethical: ensuring this freedom serves human wellbeing, transparently and responsibly,” Tzotzoli explained.
This isn’t a problem unique to OpenAI. Anthropic, the company behind Claude, and xAI, the makers of Grok, face the same questions. How much should AI chatbots be allowed to express ‘emotion’, and what role should they play in safeguarding users’ mental health?
For now, it isn’t clear how much intervention OpenAI is planning with this update. Altman has insisted that it will remain safe for those who need it to be, but as Tzotzoli points out, it is a conversation that remains unresolved.
Alex is the AI editor at Tom's Guide. Dialed into all things artificial intelligence right now, he knows the best chatbots, the weirdest AI image generators, and the ins and outs of one of tech’s biggest topics.
Before joining the Tom’s Guide team, Alex worked for the brands TechRadar and BBC Science Focus.
He was highly commended in the Specialist Writer category at the 2023 BSME Awards and was part of a team that won best podcast at the 2025 BSME Awards.
In his time as a journalist, he has covered the latest in AI and robotics, broadband deals, the potential for alien life, the science of being slapped, and just about everything in between.
When he’s not trying to wrap his head around the latest AI whitepaper, Alex pretends to be a capable runner, cook, and climber.