Claude AI Data Training Deadline: Must Decide by September 28

Anthropic requires Claude users to decide by September 28 whether to allow their conversation data to be used for AI model training, marking a major shift in the company's privacy policy

Claude AI privacy policy update and data training deadline

Major Policy Shift

Anthropic announced that Claude users must decide by September 28 whether to allow their conversation data to be used for AI model training. The change marks a significant departure from the company's previous privacy-first approach.

Previously, Anthropic did not use consumer chat data for model training, with user prompts and conversation outputs automatically deleted from backend systems within 30 days. Now, the company seeks to leverage user conversations and programming sessions to train AI systems, extending data retention from 30 days to five years for users who don’t opt out.

Scope and User Types

This new policy applies to Claude Free, Pro, and Max users, including those using Claude Code. However, enterprise customers using Claude Gov, Claude for Work, Claude for Education, or API access are not affected.
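
Because API access is exempt from the new consumer policy, developers who want to keep their usage outside the training pipeline can interact with Claude through the API instead. Below is a minimal sketch using Anthropic's official Python SDK; the model name and prompt are illustrative only, so check Anthropic's documentation for current models and terms:

```python
import anthropic

# The client reads the ANTHROPIC_API_KEY environment variable by default.
client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model name; verify against current docs
    max_tokens=256,
    messages=[{"role": "user", "content": "Summarize the data retention terms for API usage."}],
)
print(response.content[0].text)
```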

Reports indicate that users who ignore the prompt will be unable to continue using Claude after September 28 until they make a choice in the model training settings.

How to Make Your Choice

Existing users will see a popup titled “Consumer Terms and Policy Update.” It pairs a large blue “Accept” button with a smaller toggle for training permissions that is switched on by default, so users who want to opt out must turn the toggle off before accepting.

Users can change their choice anytime in privacy settings, but once data is used for training, it cannot be withdrawn. Importantly, only future chats and code will be affected; past conversation content won’t be used unless users reopen those conversations.

This policy change reflects broader trends in the AI industry, with companies seeking more user data to improve model performance. As AI competition intensifies, high-quality training data becomes increasingly valuable.

TechCrunch characterizes the move as a break with Anthropic’s earlier privacy-first positioning, bringing its data practices in line with those of other AI companies.

Privacy Protection Recommendations

For privacy-conscious users, experts recommend:

  1. Check Settings Immediately: Confirm privacy preferences before the September 28 deadline
  2. Regularly Review Conversations: Delete sensitive conversations you don’t want used for training
  3. Consider Alternatives: Evaluate privacy policies of other AI services
  4. Backup Important Data: Save important conversation records (a minimal local-backup sketch follows this list)
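
For the backup recommendation above, here is a minimal local-archiving sketch. It is a hypothetical helper, not an Anthropic tool: it assumes you have already copied the conversation content out of the Claude interface, and simply writes it to a timestamped JSON file:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def backup_conversation(title: str, messages: list[dict], backup_dir: str = "claude_backups") -> Path:
    """Save a conversation transcript as a timestamped JSON file in backup_dir."""
    folder = Path(backup_dir)
    folder.mkdir(exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    out_file = folder / f"{stamp}_{title}.json"
    out_file.write_text(json.dumps({"title": title, "messages": messages}, ensure_ascii=False, indent=2))
    return out_file

# Example transcript with hypothetical content, copied out of the Claude interface by hand.
saved = backup_conversation(
    "retention-policy-questions",
    [
        {"role": "user", "content": "What changed in the retention policy?"},
        {"role": "assistant", "content": "Retention extends to five years unless you opt out."},
    ],
)
print(f"Saved backup to {saved}")
```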

Data Usage Transparency

Anthropic states that opted-in user data will be used to improve Claude’s performance and safety. However, specific details about data usage methods and training processes haven’t been fully disclosed.

User-deleted chat records won’t be used for training, providing some degree of control. However, data already used for training cannot be removed from trained models.

Market Reaction and Future Outlook

This policy change has sparked widespread discussion in the AI community. Supporters believe it will help improve AI model quality, while critics worry about privacy erosion.

As AI technology continues developing, balancing user data value with privacy protection will remain an ongoing industry challenge. Anthropic’s policy adjustment may set a precedent for similar changes by other AI companies.

For Claude users, September 28 is not just a deadline, but an important moment to reassess privacy expectations for AI services. User choices will directly impact future AI model development and data usage practices.

Author: Drifter

Updated: September 28, 2025, 6:30 AM