If you’ve been following AI policy updates, you’ve likely seen major AI providers, such as OpenAI, clarifying that their systems shouldn’t be used to deliver legal advice without a licensed professional involved. This is a sensible boundary. While large language models (LLMs) excel at summarisation and pattern recognition, legal work, especially around contracts, demands precision, accountability, and jurisdiction-specific expertise that today’s general-purpose LLMs aren’t built to deliver.

This post explains why LLMs are risky for legal workflows, what OpenAI’s October 2025 policy clarifications mean in practice, and how Cloud Contracts 365 takes a fundamentally different approach: no LLMs in the core decisioning loop and a lawyer-in-the-loop for critical matters.

Quick note on OpenAI’s updated usage policies

OpenAI’s unified Usage Policies (effective October 29, 2025) explicitly prohibit using their services for:

  • “provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional”

  • “automation of high-stakes decisions in sensitive areas without human review … [including] legal [and] medical”

Source: OpenAI Usage Policies

The company’s changelog also notes: “2025-10-29: We've updated our Usage Policies to reflect a universal set of policies across OpenAI products and services.” See OpenAI Usage Policies.


The core problem: Legal advice is not “best-effort text”

LLMs generate plausible language, not guaranteed-accurate legal outcomes. That gap matters in law, where the cost of a single mistake can be high.

  • Hallucinations and omissions

    • LLMs can produce confident but incorrect statements or miss critical edge cases.

    • Contracts hinge on exact language, defined terms, cross-references, and governing law, where “close enough” is not acceptable.

  • Jurisdictional nuance

    • Enforceability depends on jurisdiction, industry, and regulatory context.

    • General-purpose models aren’t reliably tuned to apply the right jurisdictional standard or keep up with fast-changing rules.

  • Confidentiality and privilege

    • Depending on configuration, prompts and outputs may be processed and logged in ways that complicate confidentiality, privilege, and data residency obligations.

  • Explainability and accountability

    • You need to know why a clause changed, what precedent supports it, and who is responsible for the decision.

    • LLMs don’t natively provide a chain of custody for edits or a defensible rationale tied to legal precedent.

  • Version control and traceability

    • Legal work requires audit trails, redlines, approvals, and attributable decisions, structures that generic chat interfaces don’t enforce by default.

  • Risk allocation and duty of care

    • Clients expect a duty of care. If an AI tool suggests a clause that causes damage, who is accountable?

Most AI vendors explicitly disclaim responsibility for professional outcomes; OpenAI’s policy draws a bright line on licensed advice and high-stakes decisions: see OpenAI Usage Policies.



Contracts magnify these risks

Contracts are dense, interdependent, and business-critical. A small drafting error can:

  • Shift liability or indemnification exposure

  • Break compliance with customer, vendor, or regulatory obligations

  • Create ambiguity that is expensive to litigate

  • Delay deals and revenue recognition

Because contracts are also repetitive and voluminous, teams are tempted to “speed things up” with LLMs. But without strict guardrails and qualified review, you risk accelerating errors rather than improving outcomes.


What the OpenAI policy clarifications mean for you

  • Expect tools to steer away from tailored legal advice

    • You may get general guidance (“what is indemnification?”) but not situation-specific drafting advice (“which indemnity clause should we accept in this deal under New York law?”) without a professional in the loop.

    • Policy text: “provision of tailored advice that requires a license … without appropriate involvement by a licensed professional.” OpenAI Usage Policies

  • High-stakes legal decisions require human review

    • Policy text prohibits “automation of high-stakes decisions in sensitive areas without human review … [including] legal.” OpenAI Usage Policies

  • Disclaimers won’t protect your business from risk

    • Even if a provider disclaims liability, your organisation still bears the consequences of a bad clause or missed compliance requirement.

  • Compliance and procurement scrutiny will increase

    • Legal, InfoSec, and Procurement teams are asking tougher questions about data handling, auditability, and professional oversight when evaluating AI in legal workflows.


The Cloud Contracts 365 approach: No LLMs in the core, lawyer in the loop

Cloud Contracts 365 is purpose-built for contracting with a focus on safety, compliance, and outcomes.

What sets us apart:

  • No LLMs in the core decisioning loop

    • We don’t rely on generative models to draft or review contracts.

    • Instead, deterministic workflows and validated contract libraries drive the process.

  • Lawyer in the loop

    • A qualified lawyer’s insights drive accurate reviews.

    • A lawyer is on hand should you need help with more complex matters.


 

Bottom line

LLMs are powerful, but they’re not suitable for delivering legal advice or making binding contract decisions.

OpenAI’s Usage Policies underline this with explicit prohibitions on licensed advice without professional involvement and on automating high-stakes legal decisions without human review: see OpenAI Usage Policies.

Cloud Contracts 365 gives you the speed and consistency you want, without putting generative AI in control of your legal risk, and with a lawyer in the loop when it matters.


See it in action

Ready to modernise contracting safely? Book a demo of Cloud Contracts 365 and see how our no-LLM, lawyer-in-the-loop approach accelerates deals while protecting your business.

Book your demo here