AI Usage Policy template — startup-friendly acceptable use

A copy-edit-and-publish AI acceptable-use policy. Covers approved tools, shadow-AI risks, training-data opt-outs, customer-data handling, and the questions every enterprise security questionnaire asks. Aligns with NIST AI RMF + EU AI Act transparency basics. Not legal advice — customize and have counsel review before publishing.

How to use

Replace the {{COMPANY}} and {{APPROVED_TOOLS}} placeholders and fill in the contact details. Have counsel review. Publish to /legal/ai-usage-policy and link it from your security trust center + employee handbook.

Template (markdown)

Copy-paste and replace the {{PLACEHOLDERS}}:
# {{COMPANY}} — AI Usage Policy

**Last updated**: {{DATE}}
**Owner**: {{POLICY_OWNER}}
**Audience**: All {{COMPANY}} employees, contractors, and consultants

## 1. Why this policy exists

AI tools — generative AI assistants, code copilots, agentic systems — are powerful productivity multipliers and a new class of security and privacy risk. This policy describes how {{COMPANY}} expects you to use AI tools, what is forbidden, and where to ask questions.

This policy is informed by NIST AI RMF (AI 100-1) and the EU AI Act's transparency basics. It applies to every {{COMPANY}} employee and to every contractor with access to {{COMPANY}} systems or data.

## 2. Approved tools

The following AI tools are approved for {{COMPANY}} work:

{{APPROVED_TOOLS}}
- Example: GitHub Copilot (paid, with org-level training-opt-out enabled)
- Example: ChatGPT Enterprise (paid, with zero data retention)
- Example: Claude.ai Team plan (paid, with org-level workspace)
- Example: Cursor IDE (paid, with privacy mode enabled)

Tools NOT on this list are NOT approved without prior review by {{SECURITY_CONTACT}}. This includes — explicitly — personal-account ChatGPT/Claude/Gemini, free-tier AI tools, and AI tools that train on user input by default.

## 3. Shadow AI is forbidden

You may NOT:
- Use a personal AI account to process {{COMPANY}} data
- Paste {{COMPANY}} source code, customer data, or internal documents into any non-approved AI tool
- Install MCP servers or agentic-AI extensions on {{COMPANY}}-issued devices without {{SECURITY_CONTACT}} approval
- Use AI tools whose training-data opt-out is not configured

Why: 47% of GenAI users access tools via personal accounts (Zscaler ThreatLabz 2026). Source code, regulated data, and IP routinely leak this way. Shadow-AI breaches cost on average $670K more per incident than other AI breaches.

## 4. Training-data opt-out

For every approved AI tool, the following must be enabled:
- Workspace / organization training-data opt-out
- Conversation retention set to the minimum the tool supports
- Where available, zero-data-retention contracts (ChatGPT Enterprise, Claude Team, etc.)

If you cannot find the opt-out for a tool you need to use, do not use it until {{SECURITY_CONTACT}} confirms.

## 5. What you must NEVER paste into ANY AI tool

- Customer PII (names, emails, addresses, payment details)
- Customer health information (PHI under HIPAA)
- {{COMPANY}} secrets (API keys, passwords, certificates, .env contents)
- Source code from any private {{COMPANY}} repository unless the tool is explicitly approved for code (Cursor / Copilot / Claude Code with workspace opt-out)
- Customer-uploaded files
- Information protected by NDA
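
The list above relies on judgment in the moment; a small pre-share check can catch the most obvious cases before a snippet leaves your machine. A minimal, illustrative sketch in Python (the patterns below are a tiny sample, not {{COMPANY}}'s scanner of record; a dedicated tool such as gitleaks or trufflehog is the real control):

```python
"""Illustrative pre-share check for section 5: scan a snippet for obvious
secret patterns before it goes anywhere near an AI tool. The regexes are
a small, non-exhaustive sample for illustration only."""
import re
import sys

SECRET_PATTERNS = {
    "AWS access key": r"AKIA[0-9A-Z]{16}",
    "private key header": r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----",
    "generic API key / token assignment":
        r"""(?i)(api[_-]?key|secret|token)\s*[:=]\s*['"][^'"]{12,}""",
}

def find_secrets(text: str) -> list[str]:
    """Return the names of all patterns that match somewhere in `text`."""
    return [name for name, pattern in SECRET_PATTERNS.items()
            if re.search(pattern, text)]

if __name__ == "__main__":
    snippet = sys.stdin.read()
    hits = find_secrets(snippet)
    if hits:
        print("Do NOT paste this; possible secrets found:", ", ".join(hits))
        sys.exit(1)
    print("No obvious secrets found (not a guarantee of safety).")
```

Saved as, say, `check_snippet.py`, it can be run against the clipboard with `pbpaste | python3 check_snippet.py` on macOS.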

## 6. AI-generated code review

Code generated by AI assistants (Copilot, Cursor, Claude Code, Lovable, Bolt, v0, Replit, etc.) MUST be reviewed before merge. The reviewer must:
- Verify the code does not introduce known vibe-coding-class vulnerabilities (BOLA, leaked secrets, missing RLS, prompt injection, hallucinated packages — see /guides for each)
- Verify any new dependencies pass the slopsquatting heuristic (first-publish date, downloads, GitHub repo, publisher)
- Verify auth checks are intact at every layer the AI touched
- Run the relevant security scan ({{SECURITY_SCAN_TOOL}}) before approval

AI-suggested install commands (npm, pip, gem, go) MUST be verified against the registry before running on a {{COMPANY}} machine.
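
As an illustration of the registry check above, the sketch below queries the public npm registry for a package's first-publish date, last-month downloads, and repository link. The thresholds are placeholder values, not {{COMPANY}} policy, and the same idea translates to PyPI, RubyGems, or the Go module proxy:

```python
"""Rough pre-install check for npm packages (section 6 slopsquatting heuristic).
Thresholds are illustrative placeholders, not policy values."""
import json
import sys
import urllib.request
from datetime import datetime, timezone

MIN_AGE_DAYS = 90      # illustrative: very young packages deserve a closer look
MIN_DOWNLOADS = 1000   # illustrative: last-30-day download floor

def fetch_json(url: str) -> dict:
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

def check(pkg: str) -> list[str]:
    """Return human-readable warnings for `pkg` (scoped names need URL-encoding)."""
    warnings = []
    meta = fetch_json(f"https://registry.npmjs.org/{pkg}")

    created = datetime.fromisoformat(meta["time"]["created"].replace("Z", "+00:00"))
    age_days = (datetime.now(timezone.utc) - created).days
    if age_days < MIN_AGE_DAYS:
        warnings.append(f"first published only {age_days} days ago")

    downloads = fetch_json(
        f"https://api.npmjs.org/downloads/point/last-month/{pkg}"
    )["downloads"]
    if downloads < MIN_DOWNLOADS:
        warnings.append(f"only {downloads} downloads in the last month")

    if "repository" not in meta:
        warnings.append("no source repository listed")

    return warnings

if __name__ == "__main__":
    for pkg in sys.argv[1:]:
        problems = check(pkg)
        print(f"{pkg}: {'REVIEW: ' + '; '.join(problems) if problems else 'ok'}")
```

A failing check does not mean the package is malicious, only that it needs the human review this section requires.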

## 7. AI features in {{COMPANY}} products

Any AI feature shipped in a {{COMPANY}} product must:
- Have a documented threat model covering OWASP LLM Top 10
- Pass an AI red-team CI gate with a prompt-injection-resistance score of >= 0.90
- Wrap LLM calls in input + output safety filtering (Llama Guard 4 or equivalent)
- Emit AIBOM (CycloneDX 1.6) + model card on release
- Disclose to users that AI is being used + what data flows to the model

If your feature is high-risk under EU AI Act Annex III (credit, employment, education, law enforcement, etc.), additional Article 11 + Annex IV documentation is required. See /guides/eu-ai-act-checklist-for-startups.
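
For the input and output filtering requirement above, a minimal wrapper sketch follows. `classify_prompt`, `classify_response`, and `call_llm` are hypothetical stand-ins: in practice the two classifier hooks would call Llama Guard 4 or an equivalent safety model, and `call_llm` would be the product's actual model client.

```python
"""Minimal sketch of wrapping an LLM call in input + output safety filtering
(section 7). All three hooks below are placeholders for illustration."""
from dataclasses import dataclass

@dataclass
class FilterResult:
    allowed: bool
    reason: str = ""

def classify_prompt(text: str) -> FilterResult:
    # Hypothetical hook: send `text` to the safety classifier (e.g. Llama
    # Guard 4) and map its verdict. Hard-coded to "allowed" for illustration.
    return FilterResult(allowed=True)

def classify_response(text: str) -> FilterResult:
    # Hypothetical hook: same idea, applied to model output before it
    # reaches the user.
    return FilterResult(allowed=True)

def call_llm(prompt: str) -> str:
    # Placeholder for the product's actual model client.
    return "model output goes here"

def guarded_completion(user_prompt: str) -> str:
    pre = classify_prompt(user_prompt)          # input filter
    if not pre.allowed:
        return "Request blocked by safety policy."

    raw = call_llm(user_prompt)

    post = classify_response(raw)               # output filter
    if not post.allowed:
        return "Response withheld by safety policy."

    return raw
```

Both filter verdicts are worth logging (without the raw content, where that would itself leak data) so the red-team CI gate above has real traffic to measure against.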

## 8. Reporting

If you suspect:
- A {{COMPANY}} secret was pasted into an unapproved AI tool — report immediately to {{SECURITY_CONTACT}}
- An AI-generated PR introduced a security bug — file an internal report
- An AI agent in production behaved adversarially — escalate per the incident response playbook (/templates/incident-response-playbook)

Reporting is not punitive. The faster we know, the faster we mitigate.

## 9. Training and updates

This policy is reviewed every 6 months and updated as the AI landscape evolves. Material changes are communicated to all employees with at least 30 days' notice. Annual training is required for all employees with access to {{COMPANY}} systems.

## 10. Contact

Questions, exceptions, or new-tool review requests: {{SECURITY_CONTACT}}.

---

*This template is a starting point provided by Securie; customize it for {{COMPANY}}'s context. Not legal advice — please have counsel review before publishing.*