Your Claude chats are being used to train AI — here's how to opt out

Claude on phone with Anthropic logo in the background
(Image credit: Shutterstock)

Anthropic now uses Claude conversations to train its AI models unless you manually opt out. The company changed its privacy policy last month, reversing its previous stance of not using chat data for training purposes.

Now that Claude 4.5 is here, the new policy affects all free and paid Claude users. Your chats and coding sessions could become training data for future models unless you turn off a specific setting. Anthropic enabled this by default, so you need to take action to protect your privacy.

The opt-out process is straightforward and takes less than a minute. Here's exactly where to find the setting and how to disable it.

What this policy covers

(Image: © Anthropic)

Both regular chats and coding sessions are included in the training policy. If you use Claude for programming help, those conversations become training data unless you opt out.

Your complete chat history isn't automatically included, either. Older conversations only become training data if you reopen them after October 8, 2025.

Anthropic also extended its data retention period from 30 days to five years for users who allow training, so those conversations remain stored much longer than before. Opting out keeps your account on the shorter retention period.

Who needs to opt out?

(Image: © Anthropic)

Free and paid personal Claude users must opt out manually if they want privacy. The setting applies to all standard accounts regardless of subscription level.

Commercial, government, and educational account users are automatically exempt. Enterprise accounts aren't affected by the policy change, and conversations from these users won't be used for training.

New users see the choice during signup, but the training option is enabled by default. You need to actively disable it even when creating a new account.

How to opt out of Claude AI training on your chats

1. Access Privacy Settings

(Image: © Tom's Guide)

Open Claude and click your profile icon or name to access account settings. The Privacy Settings option appears in the dropdown menu or settings panel.

Then click Privacy in the menu. This section controls all privacy-related features, including data-training permissions.

2. Find the training toggle

(Image: © Tom's Guide)

Locate the setting labeled Help improve Claude in Privacy Settings. This controls whether Anthropic uses your conversations to train AI models.

Check the current position of the toggle switch. If it's turned on (positioned to the right and highlighted), Anthropic is currently using your chats for training.

3. Turn off data training

(Image: © Tom's Guide)

Click the toggle to switch it off so it moves to the left and turns gray. This disables training on your conversations.

The change takes effect immediately once you turn off the toggle. New chats and any old conversations you revisit after this point won't be used for training.

Kaycee Hill
How-to Editor

Kaycee is Tom's Guide's How-To Editor, known for tutorials that skip the fluff and get straight to what works. She writes across AI, homes, phones, and everything in between — because life doesn't stick to categories and neither should good advice. With years of experience in tech and content creation, she's built her reputation on turning complicated subjects into straightforward solutions. Kaycee is also an award-winning poet and co-editor at Fox and Star Books. Her debut collection is published by Bloodaxe, with a second book in the works.
