
DRD 57: The AI Tool You Use Every Day Is Legally Required to Share Your Data With the US Government (And How to Move Your Data From One AI to Another)

You type a lot into ChatGPT.

Patient-related queries. Unpublished research ideas. Business strategies. Personal dilemmas. Professional opinions you haven't shared with anyone else yet.

Now ask yourself a question most people haven't thought to ask: Where does all of that actually go?


This post isn't about fear. It's about information — the kind that helps you make a clear-headed decision about which AI tools you trust with your professional thinking. I'll explain the OpenAI–US military deal in plain language, what it could mean for your data, why some professionals are now looking at alternatives like Claude, and — most practically — how to switch without losing all the context and memory you've built up over months of use.


No technical background needed. This is written for doctors, teachers, and professionals who use AI tools but haven't had time to dig into the fine print.

If you just want the prompt to export your memory, skip ahead to Step 1; the explanation will still be here when you come back.


The OpenAI–US Department of Defense Deal: What Actually Happened

For most of its history, OpenAI's usage policies explicitly prohibited military and warfare applications. That was a stated boundary.

In early 2024, OpenAI quietly updated those policies — removing the explicit ban on military use. Shortly after, OpenAI confirmed a working relationship with DARPA, the US military's advanced research agency, initially focused on cybersecurity projects. By 2025, the partnership had expanded: OpenAI was formally working with the US Department of Defense.

This is not speculation. OpenAI's own communications confirmed these partnerships. The question that followed — and that matters enormously for users outside the United States — is what this means for the data sitting on OpenAI's servers.


What Data Could Actually Be Accessed — And By Whom

Here's where it gets specific, and where most coverage stays vague.


OpenAI's servers are based in the United States. This is the foundational fact. Because of this, everything on those servers falls under US law — regardless of where you are in the world when you type your message.


The CLOUD Act (2018) is the law that matters here. It stands for Clarifying Lawful Overseas Use of Data Act. Under this law, US government agencies — including law enforcement, intelligence services, and defence agencies — can legally compel US-based tech companies to hand over data stored on their servers, even if that data belongs to users in other countries.

What does this mean practically?

  • A doctor in India writing clinical queries into ChatGPT is using a US-based service.

  • A researcher in Nigeria sharing unpublished findings is using a US-based service.

  • A business owner in the UK discussing strategy with an AI assistant is using a US-based service.

Under the CLOUD Act, none of these users have the same legal protections they might assume. Their own country's privacy laws — GDPR in Europe, for instance — may offer some protections, but those protections are not absolute when the data physically sits on servers under US jurisdiction.



What specific data is at risk?

ChatGPT stores several layers of data about you:

  1. Conversation history — every message you've typed and every response you've received, retained by default unless you manually delete them.

  2. Memory data — if you use ChatGPT's memory feature, it stores a summarised profile of you: your profession, preferences, projects, communication style, and personal details you've shared over time.

  3. Account data — your email, device information, usage patterns, and metadata about when and how often you use the tool.

  4. Behavioural data — the patterns of what you ask, what topics you return to, what you hesitate on and then delete.


Under a government data request, all of this is theoretically accessible. OpenAI's own privacy policy acknowledges they may comply with valid legal requests — and they don't guarantee they will resist them.


This doesn't mean your conversations are being actively read by a military analyst. That's not how it works. But the legal framework exists for that data to be accessed if a case is made for it — and users in other countries have limited recourse if that happens.


Why Claude Is Being Discussed as an Alternative

Claude is built by Anthropic, a US-based AI safety company. It's important to be honest here: Anthropic is also a US company, so it is also subject to US jurisdiction and the CLOUD Act. No AI tool hosted in the US is completely outside this framework.


However, two things distinguish Anthropic's public stance:

First, when approached by the US Department of Defense with a request to remove safety restrictions on Claude for military applications, Anthropic declined. This was a documented, public decision — choosing principle over a large government contract.

Second, Anthropic has been unusually transparent in publishing its approach to AI safety, data handling, and the reasoning behind product decisions. Transparency isn't immunity — but it creates accountability in a way that vague corporate policy language does not.


There are also other alternatives worth knowing:

  • Mistral AI (France-based) — falls under EU jurisdiction and GDPR, which offers stronger user protections than US law in most situations.

  • Gemini (Google) — also US-based, with similar jurisdictional caveats as OpenAI.

  • Local/open-source models like LLaMA, which run on your own device and mean your data never leaves your machine. More complex to set up, but the most private option available.

For most professionals who want strong capability without technical setup complexity, Claude is currently the most practical alternative to ChatGPT.


The Real Reason People Don't Switch (It's Not What You Think)

Most people who try a new AI tool come back to ChatGPT within a week. The reason they give: "It just doesn't feel as good."

That's true, but the cause is misdiagnosed.


ChatGPT doesn't feel better because of superior technology.

It feels better because it knows you. Over months of conversations, it has quietly built a picture of how you think, what you work on, how you write, and what you need. Every response is calibrated against that profile.

Switch to any new AI tool and you're talking to something that knows nothing about you. The responses feel generic because they are generic — for now.


The solution is straightforward: export your memory from ChatGPT and paste it directly into your new tool to give it immediate context. This brings your new AI up to speed in a single conversation rather than waiting months.

Here is exactly how to do it.


Step 1: Export Your Memory from ChatGPT

Open ChatGPT and paste this prompt. Copy it exactly as written:

Export all of my stored memories and any context you've learned about me from past conversations. Preserve my words verbatim where possible, especially for instructions and preferences.

## Categories (output in this order):
1. **Instructions**: Rules I've explicitly asked you to follow going forward — tone, format, style, "always do X", "never do Y", and corrections to your behavior. Only include rules from stored memories, not from conversations.
2. **Identity**: Name, age, location, education, family, relationships, languages, and personal interests.
3. **Career**: Current and past roles, companies, and general skill areas.
4. **Projects**: Projects I meaningfully built or committed to. Ideally ONE entry per project. Include what it does, current status, and any key decisions. Use the project name or a short descriptor as the first words of the entry.
5. **Preferences**: Opinions, tastes, and working-style preferences that apply broadly.

## Format:
Use section headers for each category. Within each category, list one entry per line, sorted by oldest date first. Format each line as:
[YYYY-MM-DD] - Entry content here.
If no date is known, use [unknown] instead.

## Output:
- Wrap the entire export in a single code block for easy copying.
- After the code block, state whether this is the complete set or if more remain.

ChatGPT will generate a structured document with everything it has stored about you — your background, your work, your communication preferences, and your projects.

Copy the entire output and save it somewhere accessible.
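If you're comfortable with a few lines of Python, here's an optional sketch for saving the export as a text file with the Step 2 briefing line already attached, so it's ready to paste into a new tool in one go. The file name and sample text are placeholders, not anything ChatGPT or Claude requires:

```python
from pathlib import Path

# Briefing line from Step 2, prepended so the export is ready to paste
# into a new tool as a single block.
BRIEFING = (
    "This is a memory export from another AI tool. Please read all of "
    "this and use it as context about who I am, how I work, and what I need."
)

def prepare_briefing(export_text: str) -> str:
    """Return the briefing line followed by the raw memory export."""
    return BRIEFING + "\n\n" + export_text

# Stand-in export for illustration; paste your real ChatGPT output here.
sample_export = "## Identity\n[unknown] - Transfusion medicine specialist."

# File name is a placeholder; any text file in any folder works.
Path("ai_memory_briefing.txt").write_text(
    prepare_briefing(sample_export), encoding="utf-8"
)
```

A plain text file also works perfectly well; the only point is keeping the briefing line and the export together so you can paste them in one step.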


Step 2: Use That Export to Set Up Your New Tool

Go to claude.ai and open a new conversation.

Paste the exported memory and add this line above it:

"This is a memory export from another AI tool. Please read all of this and use it as context about who I am, how I work, and what I need."

Claude will now have a working profile of you from the very first conversation. Responses will feel tailored rather than generic from the start.




Step 3: Activate Memory and Add Your Preferences

In Claude, go to Settings → Capabilities → Memory and ensure memory is turned on. This allows Claude to build and retain a profile of you across separate conversations.

In that first conversation, also state explicitly:

  • Your professional role

  • How you like information presented (detailed or concise, with or without examples)

  • Anything you want Claude to always or never do

  • Domain-specific context relevant to your work

Think of this as onboarding a new colleague. A clear briefing upfront saves weeks of back-and-forth.


Step 4: Give It Two Weeks Before Judging

This is the most important step — and the one most people skip.

Every time a response misses the mark, correct it. "Too long." "I'm a clinician, don't over-explain the basics." "Use simpler language." These corrections stack. Each one improves every conversation that follows.

By the end of two weeks, you'll have a tool that understands your context, your style, and your standards. The quality difference from week one to week two is significant.


A Balanced View: What ChatGPT Still Does Well

This is a guide, not a verdict against ChatGPT.

It has genuine strengths. Image generation with DALL-E is strong. Its plugin ecosystem and third-party integrations are more mature. For development work, OpenAI's API infrastructure and documentation are extensive. For some creative tasks, it remains excellent.


The data privacy concern doesn't make ChatGPT useless. It makes it a tool you should use with clear awareness — knowing what you share and what the implications could be.


The professionals most affected by these concerns are those who routinely type sensitive or professionally significant information into AI tools without thinking about where it goes. If that's you, the time to think about it is now — not after something goes wrong.


Your Action Plan

You don't need to delete ChatGPT. You don't need to commit to anything before you've tested it. What you do need is to understand the tools you use every day.


Here's what to do:

  1. Today: Open ChatGPT, paste the export prompt, save the output somewhere

  2. This week: Create a Claude account (free tier available), run the three steps above

  3. Two weeks in: Evaluate which tool actually serves your work better

  4. Ongoing: Check the privacy policies of any AI tool you use regularly — they change, and the changes matter


The data you share with AI tools is increasingly the data of your professional thinking. Who holds that data is not a small question. It's basic professional hygiene in 2025.

If this gave you clarity, share it with one colleague who uses AI tools but hasn't thought about the privacy angle yet. For more practical guides on using AI in professional life, subscribe at ThirdThinker.com.


Questions or pushback? Drop them in the comments — I read every one.


thirdthinker

Dr. Arun V. J. is a transfusion medicine specialist and healthcare administrator with an MBA in Hospital Administration from BITS Pilani. He leads the Blood Centre at Malabar Medical College. Passionate about simplifying medicine for the public and helping doctors avoid burnout, he writes at ThirdThinker.com on healthcare, productivity, and the role of technology in medicine.

©2023 by thirdthinker. Proudly created with Wix.com
