Do I need an AI Acceptable Use Policy (AI AUP) in my business?
A Supplier of IT Services should implement an AI Acceptable Use Policy (AI AUP) whenever they provide, use, or manage AI systems, tools, or capabilities as part of their services—especially if those AI systems could impact clients, users, data, or business outcomes.
Here’s a breakdown of when and why an IT services supplier needs an AI AUP:
1. When the Supplier Provides AI-Enabled Services
If the supplier offers or integrates AI features (e.g., chatbots, predictive analytics, generative AI tools, automated decision-making), an AI AUP defines:
- Acceptable vs. prohibited uses of AI by clients or end-users.
- Responsibilities for compliance with applicable laws and regulations (e.g., GDPR, the EU AI Act).
- Restrictions on using AI for harmful or unethical purposes (e.g., discrimination, surveillance, misinformation).
Example:
A managed services provider offering AI-based threat detection must ensure clients do not repurpose the model for unauthorized surveillance.
2. When the Supplier Manages or Hosts AI Systems for Clients
If the supplier maintains or deploys AI infrastructure, they’re responsible for setting terms around:
- Model governance (monitoring bias, drift, and misuse).
- Access controls for AI capabilities.
- Auditability and transparency obligations.
Example:
An IT company managing cloud AI platforms for clients should define acceptable configurations and model usage standards.
3. When Required by Contracts or Regulations
AI AUPs are increasingly mandated by:
- Client contracts (especially in regulated sectors like finance or healthcare).
- Legal frameworks and voluntary standards (e.g., the EU AI Act, the NIST AI Risk Management Framework, ISO/IEC 42001 for AI management systems).
- Cybersecurity certifications and attestations (e.g., SOC 2, ISO/IEC 27001 and its AI-related extensions).
4. When Protecting Reputation and Liability
An AI AUP can help limit liability if clients or employees misuse AI. It sets expectations for:
- Ethical AI use.
- Attribution of responsibility.
- Procedures for reporting or remediating AI misuse.
In short
A Supplier of IT Services should implement an AI Acceptable Use Policy when:
- They use or offer AI in any capacity (internally or externally).
- They handle client data or integrate third-party AI tools.
- They must meet legal, regulatory, or contractual obligations.
- They want to manage reputational, ethical, and compliance risks.
How to use this document: either add it to the list of policies published on your website, or, if your client is using Copilot or any other AI tool that you manage on its behalf, share the policy directly with that client.