
Your AI Conversations Are Not Private — Here's What ChatGPT, Gemini, and Claude Actually Store


The Privacy Illusion

Every day, hundreds of millions of people type their most sensitive information into AI chatbots: passwords, medical symptoms, legal questions, financial details, relationship problems, business secrets, and personal confessions. The conversational interface feels private, like talking to a trusted advisor in a closed room.

It's not. That room has recording equipment, an audience of engineers, and a data retention policy measured in months or years.

In January 2026, it's worth examining exactly what the three largest AI platforms, ChatGPT (OpenAI), Gemini (Google), and Claude (Anthropic), actually do with your conversations.

What ChatGPT Stores

Retention: For Free and Plus users, ChatGPT retains your full conversation history indefinitely unless you manually delete conversations. After deletion, data is purged within 30 days unless a legal hold applies.

Training: By default, your conversations may be used to train future models unless you disable the "Improve the model for everyone" toggle in settings. Disabling this does not delete previously collected data.

Human review: OpenAI employees and contractors may review your conversations for safety, quality, and compliance purposes.

Temporary chats: A "temporary chat" mode exists that isn't saved to history and isn't used for training, but data is still retained for up to 30 days "for safety purposes."

The NYT lawsuit revelation: In May 2025, a federal court ordered OpenAI to preserve all ChatGPT user conversations indefinitely, including ones users had explicitly deleted. This exposed a fundamental truth: even "deleted" data may persist when legal obligations arise. OpenAI says it has since returned to standard 30-day deletion, but the precedent is set.

Enterprise: Business and Enterprise users get stronger protections: no training by default, data isolation, and SOC 2 compliance. But consumer users get none of these guarantees.

What Google Gemini Stores

Retention: When conversation history is enabled (the default), Gemini retains conversations for 18 months. When history is off, conversations are still stored for 72 hours.

Training: Free users' conversations are used to improve Google AI models by default. Paid subscribers are typically excluded unless they opt in.

Human review: Google employees review conversations and associated data. Reviewed data is retained for up to 3 years, disconnected from your account but still stored.

Personalization (2025): Google introduced a feature where Gemini learns from your past conversations to personalize future responses. This is enabled by default. Your conversation patterns, preferences, and details are actively analyzed and retained.

The Gmail controversy (Jan 2026): A class-action lawsuit alleges that Google automatically opted users into allowing Gmail to access private messages and attachments to train AI models. Google denies this, but the lawsuit exposes the tension between AI training needs and user privacy.

Regional disparity: Users in the EU, UK, and Japan have these features disabled by default. American users don't, creating a two-tiered privacy system where US users receive less protection.

What Claude (Anthropic) Stores

Retention: Default data retention is 30 days for most users. However, since August 2025, users who opt into data sharing for model improvement face a 5-year retention period.

Training: Anthropic previously differentiated itself by not using consumer conversations for training. In August 2025, they reversed this stance, giving users the choice to share data for model improvement. This is opt-in, but the shift signals the industry direction: every AI company eventually needs your data.

The 5-year window: If you opt in, your conversations are retained in de-identified form for up to 5 years in training pipelines. That's five years of your prompts, questions, and pasted content existing in Anthropic's systems.

API users: API retention dropped to 7 days as of September 2025. Enterprise customers can negotiate Zero-Data-Retention (ZDR) agreements.

The Real Problem: What People Actually Share

The data retention policies matter because of what people actually type into AI chatbots:

Passwords and credentials. "Help me generate a secure password" followed by "also, my current password is..."

Medical information. "I've been experiencing these symptoms..." followed by detailed personal health data that becomes part of a training dataset.

Legal matters. "I'm being sued for..." or "My employer did this...": privileged information entered into a system reviewed by human moderators.

Financial details. "Here's my financial situation..." including account numbers, income, and investment details.

Business secrets. "Review this contract" with full proprietary terms attached. "Here's our product roadmap" with unreleased plans.

Personal confessions. The conversational interface encourages sharing. People tell AI chatbots things they wouldn't tell their closest friends, and that data is stored, reviewed, and potentially used for training.

A 2025 Stanford HAI study found that users routinely overshare sensitive information with AI chatbots, often not realizing this data is stored and may be reviewed by humans.

Why This Matters

Data breaches happen. Every company gets breached eventually. OpenAI, Google, and Anthropic are high-value targets. When (not if) they are breached, your conversations could be exposed.

Legal discovery applies. As the NYT lawsuit showed, courts can order AI companies to preserve and produce user data. Your conversations with an AI chatbot could be subpoenaed.

Policies change. Anthropic went from "we don't train on your data" to "opt-in 5-year retention" in a single policy update. Today's privacy promise is tomorrow's deprecated feature.

Human review is real. Your conversations may be read by employees and contractors for safety review, quality assurance, or compliance. The "private" feeling of a chatbot conversation is an illusion.

De-identification is imperfect. "De-identified" data can often be re-identified, especially when conversations contain unique personal details, which they almost always do.

What You Should Do Instead

The solution isn't to stop using AI tools; they're genuinely useful. The solution is to be intentional about what you share.

Never paste passwords or credentials into any AI chatbot. Instead, use a self-destructing one-time message. Send the password via an encrypted link that auto-deletes after one view (like zkChat's OTM feature at zkchat.org/otm), then reference it in your AI conversation without including the actual credential.
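
How do one-time message tools work? As a rough illustration, here's a minimal Python sketch of the general pattern. Everything in it (the endpoint URL, the in-memory store, the function names) is a hypothetical placeholder rather than zkChat's real API; the point is that the server only ever stores ciphertext, the decryption key travels in the URL fragment (which browsers never send to servers), and the message is destroyed on first read.

```python
# Minimal sketch of the one-time-message pattern; hypothetical, not zkChat's API.
# The "server" (an in-memory dict here) stores only ciphertext. The key lives
# in the URL fragment, which browsers never transmit to the server.
import uuid
from cryptography.fernet import Fernet  # pip install cryptography

_SERVER_STORE: dict[str, bytes] = {}  # stand-in for the service's database

def create_one_time_link(secret: str) -> str:
    key = Fernet.generate_key()                        # generated client-side
    ciphertext = Fernet(key).encrypt(secret.encode())
    msg_id = uuid.uuid4().hex
    _SERVER_STORE[msg_id] = ciphertext                 # server never sees the key
    return f"https://example.invalid/otm/{msg_id}#{key.decode()}"

def read_once(link: str) -> str:
    path, key = link.split("#", 1)
    msg_id = path.rsplit("/", 1)[-1]
    ciphertext = _SERVER_STORE.pop(msg_id)             # pop: gone after one view
    return Fernet(key.encode()).decrypt(ciphertext).decode()

link = create_one_time_link("hunter2")
print(read_once(link))  # "hunter2"; a second read_once(link) raises KeyError
```

A real service adds expiry timers and server-side deletion guarantees, but the core idea is the same: nothing the server stores is readable without the fragment.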

Don't share raw legal documents. If you need AI help with a legal matter, anonymize the details first. Replace real names, companies, and dates with placeholders.
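
To make "anonymize first" concrete, here is a small Python sketch of placeholder substitution. The names, mapping, and date pattern are invented for the example; a real anonymization pass would need much broader coverage (addresses, case numbers, email addresses, and so on).

```python
# Illustrative placeholder substitution before pasting text into a chatbot.
# The mapping and regex below are examples only, not a complete anonymizer.
import re

REPLACEMENTS = {
    "Jane Doe": "[PERSON_A]",
    "Acme Corp": "[COMPANY]",
}

def anonymize(text: str) -> str:
    for real, placeholder in REPLACEMENTS.items():
        text = text.replace(real, placeholder)
    # Crude scrub for ISO-style dates; real documents need broader patterns.
    return re.sub(r"\b\d{4}-\d{2}-\d{2}\b", "[DATE]", text)

print(anonymize("Jane Doe is suing Acme Corp over a 2025-03-14 incident."))
# -> [PERSON_A] is suing [COMPANY] over a [DATE] incident.
```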

Be careful with medical information. Use general descriptions instead of specific personal details. "What are treatment options for condition X?" is safer than sharing your full medical history.

Use temporary/incognito modes. Both ChatGPT and Gemini offer modes that reduce (but don't eliminate) data retention. Use them for sensitive queries.

Disable training toggles. In all three platforms, disable the setting that allows your data to be used for model training. This won't prevent storage, but limits one use of your data.

For sensitive text sharing, use end-to-end encrypted tools. If you need to share a sensitive document, password, or message with a colleague, don't paste it into a chat with an AI. Use a zero-knowledge encrypted channel where the server never sees your content and data is destroyed after use (the one-time-message pattern sketched earlier).

The Bigger Picture

AI companies face a fundamental tension: they need user data to improve their models, but users expect privacy. Every major AI company has moved toward more data collection over time, not less. The trajectory is clear.

The conversational interface of AI chatbots creates a false sense of intimacy. When you tell ChatGPT something, you're not confiding in a private advisor; you're submitting data to a corporate system with retention policies, human reviewers, and legal obligations.

Privacy isn't about avoiding technology. It's about choosing the right tool for the job. Use AI chatbots for what they're great at, but keep your truly sensitive communications in channels designed for privacy: end-to-end encrypted, zero-knowledge, ephemeral by design.

Your AI conversations are not private. Act accordingly.