ChatGPT will still offer medical and legal advice — despite what rumors suggest


With the release of GPT-5, OpenAI made it clear that ChatGPT was meant to be a place to get medical advice and assistance with health queries. However, a recent change to the chatbot's terms and conditions has many users questioning whether this is still the case.

Likewise, ChatGPT users took to X in droves to claim the chatbot would no longer give legal advice. The claim stems from a line in ChatGPT's terms and conditions that lists, among prohibited uses: “Provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”

Since this rumor started circulating, Karan Singhal, the head of health AI at OpenAI, posted on X, saying: “Not true. Despite speculation, this is not a new change to our terms. Model behavior remains unchanged. ChatGPT has never been a substitute for professional advice, but it will continue to be a great resource to help people understand legal and health information.”

What has changed?


So if ChatGPT continues to offer advice in these areas, what does this change to the terms of service actually mean?

While ChatGPT will continue to offer this advice, the updated wording suggests that users shouldn't act on it in ways that could harm others without first consulting a legitimate professional.

In other words, because ChatGPT isn't a medical or legal professional itself, its advice shouldn't be applied to someone else who could be affected by the outcome.

This is likely aimed at limiting users who present themselves as lawyers or medical professionals while relying on ChatGPT as their source of information.

It is similar to the company’s previous usage policy, which said that users shouldn’t perform activities that “may significantly impair the safety, well-being, or rights of others”.

A cautious approach


While ChatGPT does still offer medical advice, OpenAI is becoming more cautious about the advice it gives and the way the chatbot interacts with certain users.

Last week, OpenAI published a long document detailing major changes made in ChatGPT’s responses to sensitive conversations. OpenAI claims it worked with more than 170 mental health experts to help ChatGPT more reliably recognize signs of distress.

This comes after Sam Altman, CEO of OpenAI, recently claimed that the company would be relaxing guardrails around mental health to make the model more accessible to everyone. The mental health update re-routes sensitive conversations and suggests taking breaks if users seem distressed.

While this update is separate from the question of offering medical advice, it does list changes OpenAI has made to how ChatGPT responds to psychosis, mania, and other severe mental health symptoms.

Alex Hughes
AI Editor

Alex is the AI editor at TomsGuide. Dialed into all things artificial intelligence in the world right now, he knows the best chatbots, the weirdest AI image generators, and the ins and outs of one of tech’s biggest topics.

Before joining the Tom’s Guide team, Alex worked for the brands TechRadar and BBC Science Focus.

He was highly commended in the Specialist Writer category at the 2023 BSME Awards and was part of a team that won best podcast at the 2025 BSMEs.

In his time as a journalist, he has covered the latest in AI and robotics, broadband deals, the potential for alien life, the science of being slapped, and just about everything in between.

When he’s not trying to wrap his head around the latest AI whitepaper, Alex pretends to be a capable runner, cook, and climber.
