🤖 Guide AI Security · 7 min read · General · November 10, 2025

Is Your AI Assistant Leaking Your Secrets?

Every time you type into ChatGPT or Copilot, where does it go? This guide explains what AI assistants do with your data and what you should never type into one.


The Moment You Hit Send

You’re at your desk. You paste a contract into ChatGPT and ask it to summarise the key points. It works brilliantly. But here’s the question nobody asks out loud: where did that contract just go?

The honest answer: to a server you don’t control, owned by a company whose terms you probably never read, which may use your words to train future versions of its AI.

This isn’t a scare story. It’s just how most AI assistants work by default — and knowing this takes five minutes, but it can protect you from a serious mistake.


How AI Assistants Actually Work

When you type a message into a chatbot like ChatGPT or Gemini, your text travels across the internet to a company’s data centre. Their AI model reads it, generates a reply, and sends it back to you. Simple enough.
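
In code, the round trip looks something like this: a minimal Python sketch using OpenAI’s public chat completions endpoint as the example (the API key and model name are placeholders). The point to notice is that your full message travels, verbatim, in the request body to the provider’s server:

    # What a chatbot client does when you hit send. Your message sits
    # in plain form inside the JSON body of this HTTPS request.
    import requests

    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": "Bearer YOUR_API_KEY"},  # placeholder key
        json={
            "model": "gpt-4o-mini",  # placeholder model name
            "messages": [
                # Everything you paste ends up here, verbatim.
                {"role": "user", "content": "Summarise this contract: ..."}
            ],
        },
        timeout=30,
    )
    print(resp.json()["choices"][0]["message"]["content"])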

What’s less visible is what happens to your message after that:

  • It gets stored. Most services keep your conversation history to improve your own experience (showing past chats) or to review for safety.
  • It may be used for training. Some AI providers use conversations to improve their models. OpenAI, for example, uses free-tier conversations for training unless you opt out. Paid tiers often have different policies.
  • Employees may read it. AI companies employ human reviewers to check flagged conversations for safety. Your message could be one of them.
  • It sits on a breach target. Any large cloud service holding millions of users’ conversations is a valuable target for hackers.

None of this is hidden — it’s in the terms of service. But nobody reads the terms of service.


The 5 Things You Should Never Type Into a Public AI

1. Passwords, API keys, or login credentials

It sounds obvious, but it happens constantly — especially when developers paste code to ask for debugging help. If your code contains a real API key, you’ve just sent it to a third-party server.
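
One concrete guard: run a quick check before you paste. This is an illustrative sketch, not a real secret scanner; the patterns below cover only a few common credential formats:

    # Illustrative pre-paste check: scan a snippet for strings that
    # look like credentials. These patterns are examples, not an
    # exhaustive scanner.
    import re

    SECRET_PATTERNS = [
        re.compile(r"sk-[A-Za-z0-9]{20,}"),  # OpenAI-style API keys
        re.compile(r"AKIA[0-9A-Z]{16}"),     # AWS access key IDs
        re.compile(r"(?i)(password|secret|token)\s*[:=]\s*\S+"),  # inline creds
    ]

    def looks_sensitive(snippet: str) -> bool:
        """Return True if the snippet appears to contain a credential."""
        return any(p.search(snippet) for p in SECRET_PATTERNS)

    snippet = 'API_KEY = "sk-abc123def456ghi789jkl012"'
    if looks_sensitive(snippet):
        print("Stop: redact credentials before pasting this anywhere.")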

2. Patient or medical information

Healthcare providers are bound by strict privacy laws (HIPAA in the US, similar rules elsewhere). Pasting patient notes, test results, or diagnoses into a public AI tool can put you in legal jeopardy — and it puts the patient’s privacy at risk.

3. Your company’s financial data

Revenue figures, salary information, acquisition talks, or pending litigation details are exactly the kind of information that can cause serious damage if it leaks — even accidentally.

4. Personally identifiable information (PII)

Full names combined with addresses, dates of birth, passport numbers, or bank account details should never go into a public AI. Even if you’re just asking the AI to “format this into a letter,” the data goes with it.

5. Confidential contracts or NDAs

Legal documents often contain pricing, exclusivity clauses, and intellectual property details. Your NDA might explicitly forbid sharing the contents with any third party — which includes an AI company’s servers.


“But I Use the Paid Version — Isn’t That Safer?”

Paid plans typically offer better privacy controls. OpenAI’s ChatGPT Team and Enterprise plans, for example, do not use your data for training by default, and data is handled under stricter terms.

But “safer” is not the same as “completely private.” Your data still travels to their servers. It’s still subject to legal requests from governments. It’s still a potential breach target.

The rule of thumb: even on a paid plan, treat a public AI like a public coffee shop’s Wi-Fi. Convenient, useful, but not the place for your most sensitive information.


Enterprise AI Tools: A Different Story

Many organisations are now deploying AI tools that run inside their own environment — either on their own servers or within a private cloud tenancy. Microsoft’s Copilot for Microsoft 365, for example, is designed so your company’s data stays within your existing Microsoft 365 tenant and is governed by your own data policies.

Similarly, tools like Azure OpenAI Service let you use GPT models without Microsoft using your prompts for training.
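
For illustration, here is a hedged sketch using the openai Python package’s AzureOpenAI client; the endpoint, key, API version, and deployment name are all placeholders for your own Azure configuration:

    # A sketch of calling a model through Azure OpenAI, where prompts
    # stay inside your own Azure tenancy. All values below are
    # placeholders for your configuration.
    from openai import AzureOpenAI

    client = AzureOpenAI(
        azure_endpoint="https://your-resource.openai.azure.com",
        api_key="YOUR_AZURE_KEY",
        api_version="2024-02-01",
    )

    resp = client.chat.completions.create(
        model="your-gpt-deployment",  # the deployment you created in Azure
        messages=[{"role": "user", "content": "Summarise this policy: ..."}],
    )
    print(resp.choices[0].message.content)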

These “enterprise-grade” setups offer meaningful privacy advantages, but they require proper configuration. An IT team that enables Copilot but doesn’t configure data governance policies has just given employees an AI tool with no guardrails.


A Simple Personal Policy for Using AI Safely

You don’t need to be a security professional to protect yourself. Just apply these rules:

  1. Anonymise before pasting. Replace real names, company names, and specific figures with placeholders. “Acme Ltd agreed to pay $4.2M” becomes “Company X agreed to pay $Y.” (A sketch of this follows the list.)

  2. Turn off chat history. Both ChatGPT and Gemini have settings that limit whether your conversations are stored and used to improve their models. Turn them off.

  3. Check your workplace policy. Many organisations now have AI usage policies. If yours doesn’t, ask IT. You might be surprised what’s already been decided.

  4. Use a local model for sensitive tasks. Tools like Ollama let you run capable AI models entirely on your own laptop — nothing leaves your machine. It’s not as polished as ChatGPT, but it’s private. (See the second sketch after the list.)

  5. Assume your message could be read by a person. Not because it will be, but because it’s the right mental model for deciding what to type.
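
To make rule 1 concrete, here is a minimal sketch of placeholder substitution in Python. The patterns (one party name, rough monetary figures, one date format) are illustrative; a real document needs its own list, and the output still deserves a manual read before you paste it:

    # Rule 1 in practice: swap identifying details for placeholders
    # before pasting text into a public AI. Patterns are illustrative.
    import re

    REPLACEMENTS = [
        (re.compile(r"Acme Ltd"), "Company X"),           # known party names
        (re.compile(r"\$\d[\d,.]*[MKB]?"), "$Y"),         # monetary figures
        (re.compile(r"\b\d{2}/\d{2}/\d{4}\b"), "[DATE]"), # dd/mm/yyyy dates
    ]

    def anonymise(text: str) -> str:
        """Apply each placeholder substitution in turn."""
        for pattern, placeholder in REPLACEMENTS:
            text = pattern.sub(placeholder, text)
        return text

    print(anonymise("Acme Ltd agreed to pay $4.2M on 01/03/2025."))
    # -> Company X agreed to pay $Y on [DATE].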
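
And for rule 4, the same kind of request served entirely from your own machine with the ollama Python package. This assumes Ollama is installed and a model such as llama3 has already been pulled locally:

    # Rule 4 in practice: a local model via Ollama. The prompt never
    # leaves this machine. Assumes `ollama pull llama3` has been run.
    import ollama

    resp = ollama.chat(
        model="llama3",  # any locally pulled model
        messages=[{"role": "user", "content": "Summarise this contract: ..."}],
    )
    print(resp["message"]["content"])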


The Takeaway

AI assistants are genuinely useful. They save time, help with writing, and can explain complex topics in plain English. The goal here isn’t to make you afraid of them — it’s to help you use them intelligently.

The rule is simple: don’t put anything into a public AI that you wouldn’t be comfortable seeing printed in a newspaper.

If you’re a business owner or IT manager wondering whether your organisation is handling AI tools safely, talk to our team. We help companies put the right policies and controls in place before an incident forces the conversation.

Want to protect your organisation?

Talk to our certified security team and get tailored advice for your business.

Get in Touch