AI Security · ChatGPT · SMB

Is ChatGPT safe for my small business?

Jaymar L. · 5 min read

A client emailed me last week: "We've been using ChatGPT to draft client proposals. Should I be worried?" The honest answer is: it depends on which plan you're using, and more importantly, what data your team is pasting in.

Here's the plain-English breakdown.

The four plans and what they actually do with your data

Free / Plus (personal). By default, OpenAI uses your conversations to improve its models. That means the content of your chats—including anything you paste in—can be used as training data. There is an opt-out in settings ("Improve the model for everyone"), but training is on by default and most users never touch the toggle.

ChatGPT Team. Conversations are not used for training by default. You also get a shared workspace with some admin controls. This is a meaningful step up in data handling, but it's not a zero-risk environment—your data still flows through OpenAI's infrastructure, and you're operating under their standard terms.

ChatGPT Enterprise. Data is not used for training, a Business Associate Agreement (BAA) is available for HIPAA-adjacent use cases, and you get finer control over data retention. This is what you'd look at if you're in healthcare, legal, or finance and your data carries regulatory weight.

The version your team is using right now almost certainly isn't Enterprise.

What should never go into any AI tool

Regardless of plan, some categories of data should stay out of public-facing AI tools entirely:

  • Client PII. Names, addresses, Social Security numbers, email addresses paired with account details. Even if your prompt isn't being used for training today, you're still transmitting it to a third party and their subprocessors.
  • Financial records. Tax returns, bank statements, payroll data. Not a ChatGPT problem specifically—this applies to any SaaS tool with AI features.
  • Contract terms and NDAs. Confidentiality provisions often cover "any third-party tool." If your client's NDA says you won't share their information with third parties, feeding their contract to ChatGPT could be a breach.
  • Source code with credentials. API keys, database connection strings, .env files. People paste these constantly. If the key is active, rotate it immediately. (A simple pre-paste check is sketched just after this list.)
  • HR and personnel data. Performance reviews, compensation discussions, termination memos.
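
If you want a guardrail rather than just a rule, a small pre-paste check catches the obvious cases. This is a minimal sketch under my own assumptions—the patterns are illustrative, will miss plenty, and are no substitute for a real DLP tool:

```python
import re

# Illustrative patterns only: a starting point, not a complete rule set.
PATTERNS = {
    "email":       re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "aws_key":     re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_key": re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*\S+"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of any patterns found in text, before you paste it anywhere."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

draft = "Send the report to jane@example.com; api_key=sk-live-abc123"
hits = flag_sensitive(draft)
if hits:
    print("Hold on, possible sensitive data:", ", ".join(hits))
```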

The rule of thumb I give clients: if you'd hesitate to read it aloud in a coffee shop, don't paste it into a free AI tool.

What's actually low-risk

Not everything is dangerous. These use cases are generally fine even on a personal ChatGPT plan:

  • Drafting generic templates (then filling in client-specific details yourself)
  • Summarizing public articles or your own blog posts
  • Brainstorming marketing angles using hypothetical examples
  • Formatting and proofreading copy that contains no client data
  • Writing SQL or code against a fictional dataset

The pattern is: generalize first, then personalize offline.
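
Concretely, that can look like this: ask the model for a generic template with placeholders, then substitute the real details on your own machine. A hypothetical sketch (the placeholder names are mine, not a standard):

```python
from string import Template

# Step 1 (in ChatGPT): ask for a generic template with placeholders only.
# Step 2 (on your machine): substitute the real client details locally.
proposal = Template(
    "Dear $client_name,\n\n"
    "Following our call on $meeting_date, we propose a $project_scope "
    "engagement at $monthly_rate per month.\n"
)

# The real details never leave your laptop.
print(proposal.substitute(
    client_name="Acme Co.",
    meeting_date="March 4",
    project_scope="bookkeeping cleanup",
    monthly_rate="$1,500",
))
```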

How to lock your team down (without banning the tool)

Banning AI tools rarely works. People find workarounds; the same prompts just move to personal devices instead of work laptops, and now you have even less visibility. A better approach:

Write a one-page AI acceptable-use policy. Define which tools are approved, which data categories are off-limits, and what the process is for reporting a mistake. Keep it short enough that someone reads it.

Require the Team plan minimum. If your team is using ChatGPT professionally, the $25/user/month Team plan is the minimum defensible option. Upgrade the whole team, not just the power users.

Do a data inventory before you expand. Before you let ChatGPT touch a new category of work—client financial summaries, HR workflows, medical notes—ask: what data will flow through this, where does it go, and is that documented?
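
"Documented" doesn't have to mean a compliance binder. Even a simple structured record beats tribal knowledge—here's one hypothetical shape for it (the field names are my own):

```python
from dataclasses import dataclass

@dataclass
class AIDataFlow:
    workflow: str               # what the team does
    data_categories: list[str]  # what flows through it
    tool: str                   # where the data goes
    plan: str                   # under which terms
    signed_off: bool            # has someone actually approved this?

inventory = [
    AIDataFlow("Proposal drafting", ["generic templates"], "ChatGPT", "Team", True),
    AIDataFlow("Financial summaries", ["client financial records"], "ChatGPT", "Team", False),
]

for flow in inventory:
    status = "OK" if flow.signed_off else "NEEDS REVIEW"
    print(f"[{status}] {flow.workflow} -> {flow.tool} ({flow.plan} plan)")
```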

Train once, reinforce quarterly. A fifteen-minute walkthrough of what can and can't go into AI tools is more effective than a dense policy document nobody reads. Block time for it.

Log the incidents. If someone pastes something they shouldn't have, you want to know. Build a simple way for people to report it without fear of punishment—so small mistakes don't become big ones.
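
The reporting channel doesn't have to be fancy. A shared form works; so does something as small as this sketch (the file name and fields are my own invention):

```python
import csv
from datetime import date
from pathlib import Path

LOG = Path("ai_incidents.csv")  # hypothetical location; keep it somewhere the team can reach

def report_incident(what_was_shared: str, action_taken: str, reporter: str = "anonymous") -> None:
    """Append one incident row. No blame fields, on purpose."""
    is_new = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["date", "what_was_shared", "action_taken", "reporter"])
        writer.writerow([date.today().isoformat(), what_was_shared, action_taken, reporter])

report_incident("client email addresses in a draft", "deleted the chat, notified manager")
```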

What to do today if your team is already using it

  1. Check which plan everyone is on. Go to Settings and confirm.
  2. Opt out of training if you're on Free or Plus. Turn off "Improve the model for everyone" (Settings → Data Controls).
  3. Audit what's been shared. Ask your team directly, and assume the answer is more than you expect. (A quick export spot-check is sketched after this list.)
  4. Rotate any credentials that may have been pasted into any AI tool, ever.
  5. Send a one-paragraph email to your team with the three data categories that are off-limits, starting now. Perfect policy later; practical guidance today.
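
For steps 3 and 4, ChatGPT's data export (Settings → Data Controls → Export data) typically includes a conversations.json file you can spot-check for key-shaped strings. A rough sketch, with illustrative patterns only—a real audit would use a proper secret scanner:

```python
import re
from pathlib import Path

# Illustrative key shapes only; this will miss anything nonstandard.
KEY_PATTERNS = {
    "AWS access key":   re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "OpenAI-style key": re.compile(r"\bsk-[A-Za-z0-9_-]{20,}\b"),
    "password/token":   re.compile(r"(?i)(password|secret|token)\s*[:=]\s*\S+"),
}

text = Path("conversations.json").read_text(encoding="utf-8")
found = [name for name, rx in KEY_PATTERNS.items() if rx.search(text)]
if found:
    print("Possible credentials in past chats, rotate now:", ", ".join(found))
else:
    print("No obvious key shapes found (absence of obvious ones only).")
```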

If this is on your mind, the Vendor Security Question Checklist (free download) walks through what to ask any AI vendor before signing—including OpenAI. It takes ten minutes to fill out and surfaces the right questions before you're locked into a contract.

Want to talk through your situation?

A free 30-minute call to discuss where AI is already touching your business and what to do about it. No pitch deck.

Book a free call