How to safely use AI at work — the policy every company needs (and most haven't written)

AI tools are now in every team's workflow. Most companies have no policy. The result: leaked customer data, leaked source code, leaked strategic plans, leaked salary data. Here is the practical guide to writing an AI usage policy that actually gets followed.

Your engineering team uses Cursor. Marketing uses ChatGPT. Sales uses an AI note-taker. HR uses Claude for policy drafting. Customer support uses Intercom Fin. Operations uses Zapier AI.

Six different AI tools. Six different vendors with access to your data. No policy. The first time something goes wrong — a customer's PII in someone's ChatGPT history, a strategic plan summarized through an AI tool that retains data, a salary table ingested by an AI assistant — you find out the hard way that "everyone uses AI" is an unanswered governance question.

This is the practical guide to writing an AI usage policy, in the right shape for solo founders and small teams in 2026: neither bureaucratic enterprise governance nor zero-policy cowboy mode.

TL;DR

The policy that works at solo-founder + small-team scale is:

1. Allowlist of approved AI tools. Anything not on the list requires approval.
2. Per-data-class rules. Customer data, source code, financial data, HR data — different rules for each.
3. Vendor data-handling requirements. Tools must offer "no training on our data" + reasonable retention.
4. Logging and audit. Approved tools' usage is logged; periodic review catches drift.
5. Incident response. When something leaks, the steps are pre-defined.

A 1-page document covers the above. Skip the 50-page enterprise template; nobody at your scale will read it.

Why a policy matters

The default failure modes:

### Customer data in someone's ChatGPT history

A support agent asks ChatGPT for help drafting a response to a customer ticket. They paste the customer's full message — including their email, support history, sometimes their account details. ChatGPT retains this in chat history, and depending on the user's settings, OpenAI may use it to improve future models.

If your customer is GDPR-protected and you've shared their data with OpenAI without a data-processing agreement, you're in violation. The customer doesn't know; you don't know; the regulator finds out when something else triggers an audit.

### Source code in an AI assistant's training set

An engineer uses an AI tool that promised "no training on your data." Six months later, the vendor changes their policy or has an incident; the engineer's prompts (including pasted source code) become training data. Your competitive advantage walks out the door.

### Strategic plans summarized via an AI note-taker

A founder uses an AI note-taker to summarize their fundraising-strategy meeting. The note-taker stores the transcript; the storage isn't end-to-end encrypted; any vendor employee with database access could read it.

### Salary data ingested by an AI tool

HR uses an AI tool to draft compensation policy. They paste the company's full salary table. The tool retains it. Six months later a former employee discovers their compensation was discussed in an AI tool's chat history.

These are real failure modes that have happened. None requires malice; just no policy.

The 1-page policy

Here's the template. Customize the company name; adjust the data classes for your specific business; commit to enforcement.

---

[Company Name] AI Usage Policy

*Last updated: [date]*

### Approved AI tools

The following AI tools are approved for company use:

  • Cursor / GitHub Copilot — engineering work, with the constraint below
  • ChatGPT (Team plan) / Claude Pro — general writing, research, drafting
  • [Specific tools your team actually uses] — list each with its data-handling commitment

If you want to use a tool not on this list, request approval before using it. Approval requires:

  • The tool's data-handling commitment (training, retention, encryption)
  • The data class you intend to use it with
  • Written sign-off from [the person responsible at your scale — for solo founders, you]

### Data classes

| Class | Examples | Rules |
|-------|----------|-------|
| Public | Marketing copy, blog posts, public website content | Any approved tool, any settings |
| Internal | Strategy docs, internal wikis, meeting notes | Approved tools with "no training on our data" enabled |
| Customer data | User emails, support tickets, account details, PII | Only tools with signed DPA + minimal retention |
| Source code | Production code, infrastructure config | Only tools with explicit "no training on this" commitment + audit log |
| Financial / HR | Salary, financial projections, hiring decisions | Approved tools only; never paste raw tables; redact identifiers before any AI use |
| Customer secrets | API keys, credentials, customer-uploaded data | NEVER paste into any AI tool |

### Tool data-handling requirements

For a tool to be approved for any data class above "Public," it must:

  • Offer a "do not train on our data" setting (and we have it enabled)
  • Have a published retention policy with a maximum retention duration
  • Encrypt data at rest and in transit
  • Be subject to a Data Processing Agreement (DPA) for "Customer data" or above
  • Have a security disclosure / responsible-disclosure policy

### Logging and audit

Approved tools used for "Customer data" or higher must have:

  • An audit log of who accessed what
  • Quarterly review by [person responsible]

### Incident response

If you suspect something inappropriate has been shared with an AI tool:

1. Stop immediately — don't continue the session, don't send a follow-up "I didn't mean that" message
2. Notify [responsible person] within 24 hours
3. Request data deletion from the vendor (most have a self-serve flow)
4. Document what was shared, with whom, when

If the data class was "Customer data" or higher, the incident may trigger breach-notification obligations. Don't assume; check with [responsible person].

---

That's the entire policy. 700 words. People will actually read it.
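If you want the allowlist in a form a script or pre-commit hook can check, the policy translates almost directly into code. Here's a minimal sketch in Python; the tool names, class labels, and the `is_allowed` helper are illustrative, not part of the policy itself:

```python
# Hypothetical machine-readable copy of the allowlist above.
# Tool names and class labels are illustrative; adjust to your own policy.
APPROVED_TOOLS = {
    "cursor": {"public", "internal", "source_code"},
    "chatgpt_team": {"public", "internal"},
    "claude_pro": {"public", "internal"},
}

def is_allowed(tool: str, data_class: str) -> bool:
    """True only if the tool is approved for this data class."""
    if data_class == "customer_secrets":
        return False  # hard ban: never paste secrets into any AI tool
    return data_class in APPROVED_TOOLS.get(tool, set())

assert is_allowed("cursor", "source_code")
assert not is_allowed("chatgpt_team", "customer_data")  # no DPA on record
assert not is_allowed("cursor", "customer_secrets")     # hard ban wins
```

A machine-readable copy isn't required at this scale, but it keeps the allowlist and whatever enforcement you later add from drifting apart.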

Why the 1-page version works

Most AI policy templates online are 30-50 pages, copied from enterprise governance frameworks. They're written for compliance lawyers + procurement teams; they're not designed to change behavior at your scale.

The 1-page version trades comprehensiveness for adoption. People who use AI tools daily will read 700 words; they will not read 50 pages.

The corner cases the 1-page version doesn't cover (multi-jurisdiction data residency, automated PII detection in tool outputs, AI-tool-bill-of-materials reporting) matter once you have 100+ employees and a real GRC function. At solo-founder + small-team scale, the 1-page version is the right level.

What changes when you actually need to enforce this

The policy is the documentation. Enforcement is what changes behavior.

### Solo founder enforcement

Your enforcement is "I made the rule, I follow it, my one teammate follows it because we trust each other." The policy exists for the moments when something goes wrong and you need to know what to do.

### 5-15 person team enforcement

You appoint someone (the founder, or the technical co-founder, or — when you have one — the operations lead) as the responsible person for AI tool decisions. They approve new tools, run quarterly reviews, handle incidents.

The "appointed person" pattern works because someone owns it. Without an owner, the policy degrades into "we have a doc somewhere" and behavior drifts.

### Scaling past 15

Around 15-30 employees, you formalize. AI committee, periodic training, automated policy enforcement (DLP that catches PII in outbound traffic to AI vendors, browser extensions that warn before paste). At that scale, the 1-page policy expands into a real document set.
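To make the automated layer concrete: at its simplest, the "warn before paste" check is pattern matching on outbound text. A toy sketch; the patterns and the `scan_for_pii` name are illustrative, and real DLP products go well beyond regex:

```python
import re

# Naive patterns for the most common leak shapes; illustrative, not exhaustive.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\b(?:sk|ghp|xoxb)[-_][A-Za-z0-9]{16,}"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_for_pii(text: str) -> list[str]:
    """Return the names of every pattern found (empty list = nothing flagged)."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(text)]

hits = scan_for_pii("Ticket from jane@example.com about billing")
if hits:
    print("Warning: possible " + ", ".join(hits) + " in text bound for an AI tool")
```

Even this naive version catches the most common failure in the stories above: a customer email address pasted into a general-purpose chat tool.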

For most readers of this post, the 1-page version is the right scope. Scale up when you scale up.

Common AI policy mistakes

### Mistake 1 — banning all AI use

You'll have a faction inside the company that uses AI tools in secret because the official policy banned them. The data-leak risk is higher than with approved use: secret use has no logging and no guardrails.

The fix: explicit approval for tools that meet the data-handling bar. Banning everything just pushes use underground.

### Mistake 2 — approving tools without checking data handling

A tool's marketing page says "we don't train on customer data." The actual terms-of-service contradict this. Or the policy is "we don't train" but retention is unlimited.

The fix: read the actual terms-of-service before approving. Do not rely on marketing claims.

### Mistake 3 — no incident response plan

Something will leak. When it does, the people who caused it need to know what to do. Without a plan, the response is panic + delay; the delay makes everything worse.

The fix: explicit 4-step incident response in the policy. Stop, notify, request deletion, document.
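If it helps to make the four steps concrete, here's a hypothetical shape for the incident record; the field names are illustrative, and a shared spreadsheet works just as well:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AIIncident:
    """One record per suspected leak; fields mirror the four steps."""
    tool: str
    data_class: str
    description: str                    # step 4: what was shared, with whom, when
    occurred_at: datetime
    notified_responsible: bool = False  # step 2: within 24 hours
    deletion_requested: bool = False    # step 3: vendor self-serve flow

incident = AIIncident(
    tool="chatgpt_team",
    data_class="customer_data",
    description="Full support ticket pasted, including customer email",
    occurred_at=datetime.now(timezone.utc),
)
```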

### Mistake 4 — letting "approved" tools drift to broader use

The tool was approved for marketing copy. Three months later, sales is using it for customer-facing pitches that include customer data. The original approval no longer covers the actual use.

The fix: quarterly review. Reapprove tools against current actual use, not original approved use.
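One way to run that review, assuming your approved tools export a usage log with a tool name and a data-class tag per entry (the log format, column names, and `find_drift` helper here are all hypothetical):

```python
import csv
from collections import defaultdict

# Approved scope per tool, copied from the allowlist at review time.
APPROVED_SCOPE = {
    "marketing_ai": {"public"},
    "cursor": {"public", "internal", "source_code"},
}

def find_drift(log_path: str) -> dict[str, set[str]]:
    """Map each tool to the data classes seen outside its approved scope."""
    drift: dict[str, set[str]] = defaultdict(set)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):  # expects columns: tool, data_class
            if row["data_class"] not in APPROVED_SCOPE.get(row["tool"], set()):
                drift[row["tool"]].add(row["data_class"])
    return dict(drift)

# A non-empty result, e.g. {"marketing_ai": {"customer_data"}}, is exactly
# the "approved for marketing copy, now used on customer data" drift above.
```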

What about the security of AI features in YOUR product?

The above is about your team's use of OTHER companies' AI tools. The other side is the AI features in YOUR product — chatbots, agents, assistants you ship to customers.

That side has its own security envelope: prompt injection, RAG poisoning, MCP scope abuse, data exfiltration through chained tool calls. The defenses are structural; the failure modes are predictable.

Securie covers the AI-features-in-your-product side with Day-1 production-validated specialists. The llm-safety stack wraps every customer-facing LLM call with Llama Guard 4; rag-guard catches retrieval-time attacks; mcp-guard scope-guards tool surfaces; the cost-firewall caps per-user spend.

Both sides matter. The 1-page policy covers your team's use of external AI; Securie covers the AI features in what you build.
