A major professional services firm recently refunded part of a government engagement fee after submitting a report containing fabricated citations, fake academic references, and a court judgment that did not exist — all generated by an AI language model, none caught before submission. The incident made headlines as an AI governance story.

Practitioners across security advisory, ESG consulting, and technical due diligence are recognising something more specific: the same failure mode is appearing throughout the industry, in client deliverables and in the briefs that clients send to advisers.

The RFP Problem Nobody Is Naming

In security advisory, a pattern has become unmistakeable. RFPs arrive with scope documents that describe threat environments too generic to fit any specific location, reference frameworks that do not apply to the asset type, and specify deliverables that mirror consulting jargon without connecting to the organisation's actual risk profile. The brief was written by a language model. And nobody reviewed it with enough domain knowledge to notice.

The consequence is not just a poorly scoped engagement. It is a procurement process built on an AI-generated misunderstanding of what the organisation actually needs, which the winning adviser then delivers against without challenge, producing a report that looks complete but is analytically hollow.

The same pattern appears in ESG. Materiality assessment briefs written at a level of generality that suggests the client copied a framework description. Sustainability reporting requests asking for GRI alignment without specifying which standards apply to the organisation's sector. TCFD-aligned climate risk requests from organisations that have not identified their material climate exposures. Each brief reveals that nobody who understands the subject wrote it.

Why This Is a Governance Failure, Not a Technology Problem

AI language models generate plausible content by pattern-matching to training data. They do not flag uncertainty; they produce confidence. In contexts where specialist domain knowledge is required to evaluate output quality, and that knowledge is absent from the review, hallucinations pass unchallenged.

In security advisory, ESG consulting, and technical due diligence, that specialist knowledge is exactly what is required. A TVRA scope written without location-specific threat intelligence is a template, not an assessment. An ESG materiality brief written without sector-specific impact analysis is a framework description, not a brief. When neither side of the transaction has a practitioner in the loop, the result is a closed loop of automated plausibility: expensive, well-formatted, and wrong.

The Standard That Actually Applies

The principle is straightforward: AI in professional services is a drafting and research acceleration tool. It is not an analytical tool, a judgement tool, or an accountability tool. Every substantive claim in a professional deliverable — every scope statement, every risk assessment, every framework recommendation — requires a practitioner with domain expertise to own it explicitly.

Organisations managing this well apply a simple rule: AI drafts, practitioners decide, and the decision is documented. The organisations that are not managing it well are discovering the gap in a security incident that a correctly scoped engagement would have prevented, or in a regulatory finding on a disclosure that nobody with relevant expertise reviewed.
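
What "documented" means in practice can be as lightweight as a sign-off record attached to every AI-drafted claim in a deliverable. The sketch below is a minimal illustration under assumed conventions, not a reference to any particular tool or firm's process; every name in it (DraftRecord, sign_off, release_ready) is hypothetical.

    # Minimal sketch of "AI drafts, practitioners decide, decision documented".
    # All names are hypothetical illustrations, not an existing tool or API.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class DraftRecord:
        """Provenance for one substantive claim in a deliverable."""
        claim: str                       # scope statement, risk rating, framework recommendation
        ai_drafted: bool                 # did a language model produce the first draft?
        reviewer: str | None = None      # named practitioner who owns the claim
        verified_sources: list[str] = field(default_factory=list)
        signed_off_at: datetime | None = None

        def sign_off(self, reviewer: str, sources: list[str]) -> None:
            """A named practitioner takes explicit ownership of the claim."""
            if not sources:
                raise ValueError("sign-off requires at least one verified source")
            self.reviewer = reviewer
            self.verified_sources = sources
            self.signed_off_at = datetime.now(timezone.utc)

    def release_ready(records: list[DraftRecord]) -> bool:
        """A deliverable ships only when every AI-drafted claim has an owner."""
        return all(r.reviewer and r.signed_off_at for r in records if r.ai_drafted)

The point is not the code; it is the release gate. An AI-drafted claim with no named reviewer and no timestamp blocks the deliverable, which is exactly the accountability the rule demands.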
