Claude Chats Will Train AI: Here’s How to Opt Out

Summary
– Anthropic will begin using user conversations with Claude as training data starting October 8, unless users opt out, reversing its previous policy of not training on chats.
– Users must actively disable the automatically enabled toggle in Privacy Settings to prevent their new chats and coding tasks from being used to train future models.
– The policy change aims to improve Claude by leveraging real-world interaction data to identify useful and accurate responses, enhancing model performance over time.
– Anthropic has extended its data retention period from 30 days to five years for all users, regardless of their training opt-out choice.
– Commercial-tier users, such as those with government or educational licenses, are exempt from this change and will not have their conversations used for training.
Anthropic has announced a significant shift in its data handling practices: conversations with its Claude chatbot will soon be used to train its large language models. The change, effective October 8, means user chats and coding tasks will automatically become training data unless individuals proactively opt out, a notable departure from the company's previous practice of not training on these interactions.
The company justifies this policy update by explaining that real-world user data offers invaluable insights for enhancing AI performance. According to Anthropic, input from actual interactions helps identify which responses are most accurate and helpful, ultimately leading to a more capable chatbot. To realize these benefits, the firm needs a continuous stream of new conversational data to refine its models over time.
Originally planned for September 28, the implementation was postponed to provide users additional time to understand their options and to ensure a seamless technical transition. Both new and existing users are being notified through sign-up prompts and pop-up messages detailing the revised terms.
During account setup or when encountering the notification, users see a toggle switch labeled “Allow the use of your chats and coding sessions to train and improve Anthropic AI models.” This setting is enabled by default, so anyone who accepts the updated terms without adjusting it will be included in the training program. For those who prefer to exclude their data, the option to disable this feature is located in Privacy Settings under “Help improve Claude.” Sliding the switch to the off position will prevent future chats from being used for training.
Note that the new policy applies only to new conversations and to old chats that are reopened after it takes effect. If you do not opt out, any new discussion or revisited thread becomes eligible for training, but your historical archive will not be used unless you actively return to a previous conversation.
Alongside the training policy, Anthropic has also extended its data retention period. User data will now be stored for up to five years, a substantial increase from the previous 30-day standard. This extended retention applies regardless of whether a user consents to model training.
The policy update affects both free and paid individual users, though commercial, government, and educational accounts are exempt. Conversations from these enterprise-tier plans will not be used to train future models.
Claude has become popular among software developers for its coding assistance capabilities. Since the new policy encompasses coding sessions as well as standard chats, Anthropic stands to collect a significant volume of technical data for model enhancement.
Before this update, Claude was among the few leading AI chatbots that did not automatically use conversations for training. In contrast, services like OpenAI’s ChatGPT and Google’s Gemini typically include user data in model training by default for personal accounts, requiring users to opt out if they wish to avoid participation.
Choosing to opt out can help preserve privacy, especially for sensitive or proprietary discussions. However, it is worth remembering that publicly shared information—such as social media posts or online reviews—is often collected by AI companies as training material, regardless of individual privacy settings.
(Source: Wired)