Anthropic Will Start Training Its AI Models on Chat Transcripts

Introduction
Artificial Intelligence companies are moving rapidly toward data-driven model improvement, and Anthropic—the creator of Claude AI—is the latest to join the trend. The company has officially announced that, starting September 28, 2025, it will begin using user chat transcripts and coding sessions to train its AI models.
This change will apply to all consumer-facing tiers of Claude (Free, Pro, Max, and Claude Code), but business and enterprise accounts remain excluded. While Anthropic assures users that the process includes privacy safeguards and automated data anonymization, the decision has sparked debates around user consent, transparency, and digital ethics.
As Anthropic evolves its AI platform, now extending to training models on chat transcripts, it's worth exploring how the ecosystem connects. For instance, developers using Claude within their browsers can check out our guide on the new Anthropic Claude AI agent for Chrome to see how Claude is being embedded directly into online workflows. Meanwhile, in the broader AI ethics conversation, our coverage of the Anthropic copyright settlement provides deeper context on how these training decisions intersect with legal and creative rights. And for those engaged in long-term use of Claude, our post on Claude's memory feature explores how session continuity, privacy, and context retention play out in real use cases. Together, these posts paint a full picture of the present state and future potential of Claude's growing ecosystem.
In this article, we’ll explore the policy details, its impact on users, privacy concerns, industry reactions, and what it means for the future of AI.
Who Is Anthropic?
Anthropic was founded in 2021 by former OpenAI researchers with a mission to build safe and reliable AI systems. Its flagship model, Claude AI, has grown into one of the most popular alternatives to ChatGPT, praised for its safety-first principles, longer context handling, and human-like conversations.
Until now, Anthropic positioned itself as a privacy-conscious AI company, limiting data retention and emphasizing user trust. However, with growing competition from OpenAI, Google DeepMind, and Meta AI, the company appears to be shifting strategies by using real-world consumer data to enhance model performance.
What Exactly Is Changing?
Starting September 28, 2025:
- Chat & Code Data Training – New or resumed chat transcripts and coding sessions will be used to train Claude AI models.
- Retention Period – If users accept the new terms, their data may be stored for up to five years.
- Opt-Out Option – Users can opt out via a settings toggle or the initial pop-up prompt. However, the toggle that permits training is small and switched on by default, which may lead to unintentional consent.
- Consumer Tiers Only – Applies to Claude Free, Pro, Max, and Claude Code. Excludes Claude for Work, Claude for Education, Claude for Teams, and API users on Amazon Bedrock and Google Vertex AI.
- No Retroactive Data Use – Only new sessions or chats resumed after September 28 will be included. Past inactive sessions remain excluded.
- Privacy Measures – Anthropic claims it will not sell user data and uses automated systems to remove sensitive personal information (a minimal illustrative sketch of this kind of redaction follows this list).
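Anthropic has not described its anonymization pipeline in detail. As a rough illustration of what automated removal of personal identifiers can look like, here is a minimal, hypothetical Python sketch; the regex patterns, placeholder labels, and `redact` function are assumptions for illustration only, not Anthropic's actual system, which would be far more sophisticated (e.g., context-aware entity recognition).

```python
import re

# Hypothetical, simplified redaction pass. Real anonymization pipelines use
# much more than regular expressions, but the basic idea is the same:
# detect likely personal identifiers and replace them with placeholder tokens.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace matches of simple PII patterns with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

if __name__ == "__main__":
    sample = "Contact me at jane.doe@example.com or +1 (555) 123-4567."
    print(redact(sample))
    # Output: Contact me at [EMAIL] or [PHONE].
```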
Why Is Anthropic Making This Move?
1. Better Model Performance
Real-world user conversations help AI models learn more natural dialogue patterns, spot errors, and improve reasoning. Training on live, diverse interactions gives models stronger adaptability.
2. Staying Competitive
OpenAI’s ChatGPT and Google Gemini already rely on user data (unless opted out). By adopting a similar policy, Anthropic ensures its Claude models don’t fall behind.
3. Balancing Scale and Safety
Anthropic argues that anonymized large-scale data is essential to building safer, more robust AI—especially in complex areas like programming assistance, legal reasoning, and multi-step problem solving.
Privacy Concerns & Risks
Despite Anthropic’s assurances, privacy advocates warn of several risks:
- Inadvertent Consent – Many users may click "Accept" without noticing the small opt-out toggle.
- Data Sensitivity – Even anonymized data may expose personal identifiers, proprietary coding logic, or confidential workplace conversations.
- Power Imbalance – Tech companies benefit from massive user datasets, while users lose control and ownership of their own digital interactions.
- Legal Challenges – The new policy may invite scrutiny under GDPR (Europe), CCPA (California), and India's DPDP Act.
How Anthropic’s Policy Compares to Others
| Company | Policy on Training Data | Opt-Out Available? | Retention Period |
|---|---|---|---|
| Anthropic | Uses chat transcripts & code sessions (consumer only) | Yes, toggle-based | 5 years |
| OpenAI | Uses chats for training unless opted out | Yes | Unclear |
| Google | Uses Bard/Gemini data for AI improvement | Limited | Varies |
| Meta | Uses public & user data for LLaMA & Meta AI | Very limited | Ongoing |
This table highlights that Anthropic’s policy is aligned with industry norms, but its five-year retention period is notably long.
Benefits for Users
While the policy raises concerns, there are potential upsides:
- Smarter Claude Models – Users may see improvements in conversation quality, context handling, and coding accuracy.
- Safer AI Responses – More real-world data allows Anthropic to better filter bias, toxicity, and harmful outputs.
- Rapid Model Updates – Training with consumer data accelerates Claude’s evolution compared to relying only on curated datasets.
How to Opt Out (Step by Step)
If you value privacy and don’t want your data included:
- Wait for the Pop-up – On or after Sept 28, you’ll see a notice about data usage.
- Locate the Toggle – The data-use switch in the notice is small and set to "On" by default.
- Disable It – Switch it to "Off" before clicking Accept to opt out.
- Change in Settings – If you already accepted, you can later visit Settings → Privacy → Data Use to update preferences.
- Remember the Limitation – Opting out prevents future data use, but not data already stored.
Industry Reactions
- Privacy Experts: Warn that default opt-ins erode user autonomy.
- Developers: Concerned about exposing proprietary code in coding sessions.
- AI Ethicists: Argue policies need clearer language and true consent mechanisms.
- Competitors: Likely to adopt similar policies, creating an industry-wide trend.
What It Means for the Future of AI
This move shows a clear direction for consumer AI platforms:
- More integration of user data into model training.
- Longer data retention policies, raising privacy stakes.
- Shifting responsibility to users to opt out instead of opting in.
- Emergence of trust as a differentiator—the AI company that balances performance with privacy may win long-term loyalty.
FAQ
Q1: Does this apply to Claude API users?
No, API and enterprise tiers are excluded.
Q2: Can Anthropic sell my data?
No, Anthropic states it does not sell user data.
Q3: Can I delete old data?
No. Once you accept, data already collected remains stored for the retention period of up to five years, even if you later opt out.
Q4: Why five years?
Anthropic claims longer retention helps in long-term safety alignment research.
Conclusion
Anthropic’s policy shift is a major milestone in consumer AI development. While the company emphasizes privacy protections and user control, the default opt-in mechanism raises valid concerns.
For users, this is a reminder that reading the fine print matters. As AI becomes increasingly integrated into daily life, balancing innovation with privacy will remain a critical challenge.