
How to Write a Practical AI Policy (That's Actually GDPR-Friendly)

A step-by-step guide to creating an AI usage policy that protects your business, satisfies regulators, and doesn't kill innovation.

15 December 2025 · 7 min read

Why You Need an AI Policy (Even If You Think You Don't)

If anyone in your company uses ChatGPT, Copilot, or any AI tool, you already have an AI practice. You just don't have a policy governing it.

That's a problem. Without clear guidelines, employees make their own decisions about what data to feed into AI systems, which tools to use, and how to apply the outputs. This creates risk: legal, reputational, and operational.

We've seen it firsthand. A marketing team pasting customer emails into ChatGPT for "quick summaries." A finance analyst uploading revenue spreadsheets to get help with formulas. A developer feeding proprietary code into a public code assistant. None of it malicious. All of it risky.

A good AI policy doesn't ban AI. It channels it. Here's how to write one that works in practice.

Step 1: Audit Current AI Usage

Before writing a single rule, find out what's already happening. Send a short survey to every team:

  • What AI tools are you currently using?
  • What data do you input into these tools?
  • What decisions do you make based on AI outputs?

You'll be surprised. In most organisations, 30-60% of employees are already using AI tools, often without IT's knowledge. This isn't a failure of governance; it's a sign that people are trying to be more productive. Your policy should support that instinct while managing the risk.

During the audit, pay particular attention to "shadow AI": tools employees signed up for on their own, browser extensions that process page content, or AI features embedded in existing software that were enabled by default. These are easy to miss and often the biggest source of unmanaged data exposure.
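If you want to capture the audit findings somewhere more structured than a shared spreadsheet, a minimal inventory record is enough. The sketch below is illustrative only: the field names and the example entry are assumptions, not a prescribed format.

    from dataclasses import dataclass, field

    @dataclass
    class AIToolUsage:
        """One row in the AI usage inventory, filled in from survey answers."""
        tool: str                  # e.g. "ChatGPT", "GitHub Copilot"
        team: str                  # who reported using it
        data_types: list[str] = field(default_factory=list)  # what data goes in
        decisions: list[str] = field(default_factory=list)   # what the output is used for
        sanctioned: bool = False   # approved by IT, or shadow AI?

    inventory = [
        AIToolUsage("ChatGPT", "Marketing",
                    data_types=["customer emails"],
                    decisions=["campaign copy"],
                    sanctioned=False),
    ]

    # Shadow AI first: unsanctioned tools are usually the biggest source of exposure.
    shadow_ai = [entry for entry in inventory if not entry.sanctioned]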

Step 2: Classify Your Data

Not all data carries the same risk. Create a simple classification:

  • Public: Marketing content, published reports, general knowledge
  • Internal: Process documents, non-sensitive business data
  • Confidential: Customer data, financial records, HR information
  • Restricted: Personal data (GDPR), trade secrets, legal privileged material

Your policy should map these classifications to permitted AI tools. For example: public and internal data can go into commercial AI tools, but confidential and restricted data requires approved, private AI systems only.

This is where most policies fail. They either ban everything (so people ignore the policy) or allow everything (so there are no guardrails). The classification approach gives people a clear decision framework: check the data type, check the tool, proceed or don't.
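To make that decision framework concrete, here is a minimal sketch of the check in Python. The tool names and classification ceilings are placeholders, not a recommendation about any specific product.

    from enum import IntEnum

    class Classification(IntEnum):
        PUBLIC = 1
        INTERNAL = 2
        CONFIDENTIAL = 3
        RESTRICTED = 4

    # Highest classification each tool is cleared for (illustrative values only).
    TOOL_CEILING = {
        "public-chatbot": Classification.INTERNAL,          # commercial AI tool
        "approved-private-llm": Classification.RESTRICTED,  # approved, private system
    }

    def allowed(tool: str, data: Classification) -> bool:
        """Check the data type, check the tool, proceed or don't."""
        ceiling = TOOL_CEILING.get(tool, Classification.PUBLIC)  # unknown tool: public data only
        return data <= ceiling

    assert allowed("public-chatbot", Classification.INTERNAL)
    assert not allowed("public-chatbot", Classification.CONFIDENTIAL)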

Make the classification visible. A one-page cheat sheet pinned in the office or bookmarked in Slack will do more than a 30-page document buried in SharePoint.

Step 3: Define Approved Tools

Maintain a short list of approved AI tools. For each tool, specify:

  • What data classifications it can process
  • Who has access
  • What the tool's data retention policy is
  • Whether the provider uses your data for training
  • Where the data is processed and stored geographically

Under GDPR, if a tool uses your input data for model training, and that input contains personal data, you have a potential compliance issue. Choose tools with clear data processing agreements.

Review your approved tool list quarterly. AI tools change their terms of service, data handling practices, and features frequently. What was compliant six months ago may not be today. Assign someone in your team to own this review. It doesn't need to be a full-time role, but it does need to be someone's responsibility.
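One lightweight way to keep that register reviewable is to store it as structured data rather than prose. The entry below is a hedged sketch: every value is invented for illustration and says nothing about any real vendor's terms.

    from datetime import date, timedelta

    # Illustrative register entry; all values are placeholders.
    APPROVED_TOOLS = [
        {
            "name": "example-assistant",
            "max_classification": "internal",      # highest data class it may process
            "access": ["marketing", "sales"],      # who may use it
            "retention": "30 days",                # provider's stated retention
            "trains_on_inputs": False,             # does the provider train on your data?
            "processing_region": "EU",             # where data is processed and stored
            "last_reviewed": date(2025, 10, 1),
        },
    ]

    def overdue_for_review(entry: dict, today: date | None = None) -> bool:
        """Flag entries not reviewed within roughly one quarter (92 days)."""
        today = today or date.today()
        return today - entry["last_reviewed"] > timedelta(days=92)

    stale = [t["name"] for t in APPROVED_TOOLS if overdue_for_review(t)]

A check like the last line makes the quarterly review harder to forget: overdue entries surface themselves.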

Step 4: Set Output Guidelines

AI outputs need human review. Your policy should specify:

  • Never publish AI-generated content without human review, especially anything customer-facing, legal, or financial.
  • Disclose AI involvement where relevant (e.g., in client deliverables or regulated communications).
  • Verify facts. AI systems hallucinate. Build fact-checking into your workflow for any output that will influence decisions.
  • Document AI involvement in key decisions. If an AI tool helped draft a contract clause, analyse financial data, or recommend a course of action, note that somewhere (a minimal record sketch follows below). This becomes important if the decision is later questioned or audited.

A good rule of thumb: treat AI output like advice from a very fast intern. It's often good, sometimes excellent, but you wouldn't let an intern sign off on a client proposal without review.
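For the "document AI involvement" point, the record doesn't need to be elaborate. A minimal sketch, with fields chosen purely as an example:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class AIDecisionRecord:
        """A brief note of where AI contributed to a decision or deliverable."""
        when: datetime
        tool: str        # which approved tool was used
        task: str        # what the AI actually did
        reviewer: str    # the human who reviewed and signed off
        verified: bool   # were the factual claims checked?

    record = AIDecisionRecord(
        when=datetime(2025, 12, 15, 14, 30),
        tool="example-assistant",
        task="summarised Q3 revenue variance for a board memo",
        reviewer="finance lead",
        verified=True,
    )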

Step 5: Address GDPR Specifically

If you operate in the EU or handle EU residents' data, your AI policy must address:

  • Lawful basis: What's your legal basis for processing personal data through AI tools? Legitimate interest is common, but you need to document it.
  • Data minimisation: Only input the minimum personal data necessary. If you need AI to summarise a customer complaint, strip out the personal identifiers first (a minimal redaction sketch follows this list).
  • Right to explanation: If AI influences decisions about individuals (hiring, credit, service eligibility), can you explain how the decision was reached? This is especially critical under the EU AI Act, which introduces additional obligations for high-risk AI systems.
  • Data transfers: Does the AI tool transfer data outside the EU? If so, what safeguards are in place? Standard contractual clauses, adequacy decisions, or equivalent mechanisms must be documented.
  • DPIA requirement: For high-risk AI processing, a Data Protection Impact Assessment may be required under Article 35 of GDPR. If you're processing personal data at scale, profiling individuals, or using AI for automated decision-making, check whether a DPIA applies.
  • Vendor due diligence: Your data processor obligations extend to AI tool providers. Ensure you have a proper Data Processing Agreement in place with each provider, and verify their compliance claims aren't just marketing copy.
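Of these, data minimisation is the point most amenable to tooling: strip obvious identifiers before text ever leaves your environment. The sketch below is deliberately naive, two regexes catching emails and phone-like numbers; real redaction needs proper PII detection, and the example complaint is invented.

    import re

    # Rough patterns for emails and phone-like digit runs. A starting point,
    # not a complete PII filter.
    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
    PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

    def minimise(text: str) -> str:
        """Replace obvious identifiers before the text is sent to an AI tool."""
        text = EMAIL.sub("[email]", text)
        text = PHONE.sub("[phone]", text)
        return text

    complaint = "Customer jan.devries@example.com (06 12345678) reports a billing error."
    print(minimise(complaint))
    # -> "Customer [email] ([phone]) reports a billing error."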

Step 6: Make It Practical

The best AI policy is one people actually follow. Keep it:

  • Short: 2-3 pages maximum for the core policy. Link to detailed annexes for those who want the specifics.
  • Clear: No legal jargon. Use examples. "Don't paste customer emails into ChatGPT" is clearer than "Refrain from processing personal data through non-approved AI systems."
  • Living: Review quarterly. AI tools change fast, and your policy should keep pace. Version it clearly so people know they're reading the latest.
  • Supportive: Frame it as "here's how to use AI safely" not "here's everything you can't do." People will work around restrictive policies. They'll embrace supportive ones.
  • Enforceable: Include consequences for violations, but be proportionate. An employee who accidentally pastes a customer name into ChatGPT needs training, not termination. Repeated, wilful violations of data handling rules are a different matter.

Template Structure

Here's a practical structure for your AI policy document:

  1. Purpose & scope. Why this policy exists and who it applies to
  2. Approved tools list. Updated quarterly, with links to each tool's data processing terms
  3. Data classification matrix. What can go where, with examples for each category
  4. Usage guidelines. Do's and don'ts with concrete scenarios from your business
  5. Output review requirements. Who reviews what, and what sign-off is needed
  6. Incident reporting. What to do if something goes wrong (data leak, hallucinated output sent to a client, etc.)
  7. Training requirements. Who needs to complete AI awareness training, and how often
  8. Review schedule. When the policy gets updated, and who owns the review

Getting Started

Don't aim for perfect. Start with a v1 that covers the basics, share it with your team, and iterate. A simple policy that people follow is infinitely better than a comprehensive one that sits in a drawer.

Run a 30-minute briefing when you roll it out. Walk through real examples. Show your team what's okay, what isn't, and where to go if they're unsure. Then make it easy to find: pin it in your Slack workspace, add it to your onboarding checklist, and reference it in your employee handbook.

Need help creating an AI policy tailored to your organisation? Get in touch. We can have a draft ready for review within a week.