Claude AI can now terminate a conversation — but only in extreme situations

Anthropic has made a lot of noise about safeguarding in recent months, implementing new features and conducting research projects into how to make AI safer. And its newest feature for Claude may be its most distinctive yet.

Both Claude Opus 4 and 4.1 (Anthropic’s two newest models) now have the ability to end conversations in the consumer chat interface. While this won’t be a commonly used feature, it is being implemented for rare, extreme cases of “persistently harmful or abusive user interactions.”

In a blog post exploring the new feature, the Anthropic team said, “We remain highly uncertain about the potential moral status of Claude and other LLMs, now or in the future. However, we take the issue seriously.”

During pre-deployment testing of its latest models, Anthropic performed model welfare assessments. These included an examination of Claude’s self-reported and behavioral preferences, which revealed a robust and consistent aversion to harm.

In other words, Claude would actively shut down or refuse to take part in these conversations. Examples included requests for sexual content involving minors and attempts to solicit information that could enable large-scale violence or acts of terror.

In many of these cases, users persisted with harmful requests or abuse even after Claude refused to comply. The new feature, which lets Claude end the conversation outright, is intended to provide a safeguard in those situations.

Anthropic explains that Claude won’t use this ability in situations where users might be at imminent risk of harming themselves or others.

“In all cases, Claude is only to use its conversation-ending ability as a last resort when multiple attempts at redirection have failed and hope of a productive interaction has been exhausted, or when a user explicitly asks Claude to end a chat,” the Anthropic team goes on to say in the blog post.

“The scenarios where this will occur are extreme edge cases — the vast majority of users will not notice or be affected by this feature in any normal product use, even when discussing highly controversial issues with Claude.”

While the user will no longer be able to send new messages in an ended conversation, they are not blocked from starting another one on their account. And to address the potential loss of a long-running thread, users can still edit and retry previous messages to create a new branch of the conversation.

This implementation is unique to Anthropic. ChatGPT, Gemini and Grok, Claude’s three closest competitors, offer nothing similar, and while they have all introduced other safeguarding measures, none has gone as far as this.

Alex Hughes
AI Editor

Alex is the AI editor at Tom’s Guide. Dialed into all things artificial intelligence, he knows the best chatbots, the weirdest AI image generators, and the ins and outs of one of tech’s biggest topics.

Before joining the Tom’s Guide team, Alex worked for TechRadar and BBC Science Focus.

He was highly commended in the Specialist Writer category at the BSMEs in 2023 and was part of a team that won best podcast at the BSMEs in 2025.

In his time as a journalist, he has covered the latest in AI and robotics, broadband deals, the potential for alien life, the science of being slapped, and just about everything in between.

When he’s not trying to wrap his head around the latest AI whitepaper, Alex pretends to be a capable runner, cook, and climber.
