
FCA Consumer Duty and AI: What Regulated Firms Must Evidence in 2026

Consumer Duty does not prohibit AI. It requires you to govern it — and to demonstrate that governance when the FCA asks. Here is what that obligation means in practice, and what a compliant evidence trail looks like.

Regulation · 8 May 2026 · Inference Agents

Most compliance officers at regulated firms are grappling with the same situation: staff are using AI tools — ChatGPT, Claude, Microsoft Copilot, and various specialist tools — to draft client communications, research investment options, and produce suitability letters faster than was previously possible. The FCA has been clear that it does not intend to prohibit this. But Consumer Duty creates a hard obligation: you must be able to demonstrate that every client interaction, including those assisted by AI, delivered good outcomes.

The question is not whether your firm can use AI. It is whether your firm can evidence its AI use to the FCA's standard.

What Consumer Duty actually requires

FCA Consumer Duty (PS22/9, in force since July 2023) introduced a new Consumer Principle (Principle 12) requiring firms to act to deliver good outcomes for retail customers. The four Consumer Duty outcomes — products and services, price and value, consumer understanding, and consumer support — each require firms to monitor, assess, and evidence compliance on an ongoing basis.

For AI-assisted interactions, this creates a specific evidential obligation. The FCA expects firms to:

  • Identify which client interactions are being assisted or influenced by AI tools
  • Demonstrate that those interactions delivered fair value and appropriate outcomes
  • Maintain records sufficient to enable FCA review, FOS complaint investigation, or internal audit
  • Evidence that appropriate data handling controls were in place during AI use

A suitability letter drafted using an AI assistant is not inherently non-compliant. But if an FCA supervisor asks to see the process by which that letter was produced — what the AI was given, what it produced, what policy guardrails were applied, and how client data was handled — and you cannot answer that question with structured evidence, you have a compliance gap.

The obligation is to demonstrate compliance, not to avoid the technology. Regulated firms that prohibit AI use do not automatically satisfy Consumer Duty — they simply shrink the surface area of the problem. Firms that deploy AI thoughtfully and maintain structured evidence of that deployment are in a stronger compliance position than firms that ban it and hope no one notices staff using personal subscriptions.

Why manual logs are not sufficient

A common initial response from compliance teams is to instruct staff to copy-and-paste their AI conversations into a shared drive folder, or to maintain a log of AI tool usage alongside the client file. This is better than nothing. It is not, however, what the FCA means by adequate records.

The problems with manual logging are structural:

  • Completeness. Manual logging depends on individual staff members remembering to log every interaction. In practice, routine uses — quick fact-checks, draft paragraph generation, research queries — will go unlogged. You cannot evidence what was not recorded.
  • Integrity. A document on a shared drive can be edited after the fact. A regulator reviewing your records cannot determine whether the log reflects what actually happened. Manual logs are not tamper-evident.
  • PII exposure. If staff are copying client data into AI prompts and then copying those prompts into a shared folder, you have a secondary GDPR problem: a collection of client personal data in unstructured format, outside the client management system, with no access controls proportionate to the sensitivity of the data.
  • Queryability. If the FCA asks for all AI-assisted interactions involving a specific client, or all interactions where a model produced text about a particular product type, you need to be able to run that query. A folder of text files does not support that.

A compliant AI audit trail is structured, automatic, tamper-evident, and queryable. It is built at the infrastructure level — not by asking staff to keep notes.
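
To make "queryable" concrete, here is a minimal sketch of the client-scoped query described above, run against a structured store. The JSONL layout, field names, and file path are illustrative assumptions for the sketch, not a prescribed schema:

```python
import json
from datetime import datetime, timezone

# Illustrative only: the file path, field names, and record layout are
# assumptions, not a prescribed schema. Each line of the store is one
# JSON record describing a single AI interaction, with timestamps
# assumed ISO-8601 with offset, e.g. "2026-05-08T09:00:00+00:00".
AUDIT_LOG = "audit_log.jsonl"  # hypothetical path

def interactions_for_client(client_ref: str, since: datetime) -> list[dict]:
    """Return every logged AI interaction for one client since a given
    date -- the kind of query a folder of text files cannot answer."""
    matches = []
    with open(AUDIT_LOG) as f:
        for line in f:
            record = json.loads(line)
            ts = datetime.fromisoformat(record["timestamp"])
            if record["client_ref"] == client_ref and ts >= since:
                matches.append(record)
    return matches

# Example: every AI-assisted interaction for client C-1042 this year.
results = interactions_for_client(
    "C-1042", datetime(2026, 1, 1, tzinfo=timezone.utc)
)
```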

SMCR: personal accountability does not disappear


Under the Senior Managers and Certification Regime, a named Senior Manager cannot absolve themselves of accountability by pointing to an AI tool. The FCA's position is clear: delegating a decision to an algorithm does not remove personal responsibility for the outcome of that decision.

This is a critical point for FCA-authorised IFA and wealth management firms. If an AI-assisted suitability letter turns out to contain unsuitable recommendations — because the model misunderstood the client's risk profile, or produced a plausible-but-wrong analysis — the named principal is personally accountable. Not the software vendor. Not the model provider. The Senior Manager whose firm deployed the tool and whose name is on the advice.

This does not mean AI cannot be used in suitability processes. It means that the Senior Manager must be able to demonstrate they exercised appropriate oversight of the AI's role in that process. That oversight has to be evidenced in records that predate any complaint or regulatory query — not reconstructed after the fact.

What a compliant AI evidence trail looks like

At the infrastructure level, a compliant AI evidence trail for Consumer Duty and SMCR purposes has four components:

1. Interception at the API level

Every prompt sent to an AI model and every response received should be captured at the point of transmission — before the model processes the request, and again on the way back. This cannot be done reliably at the application layer (staff can bypass application-level controls). It requires a proxy that sits between your AI tools and the model providers.
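
To illustrate, here is a minimal sketch using the official OpenAI Python client. The gateway address is a hypothetical placeholder; the point is that routing through a proxy is a one-line base URL change, after which every request and response transits infrastructure you control:

```python
from openai import OpenAI

# Before: the tool talks directly to the model provider.
# client = OpenAI(api_key="...")  # default base URL, no interception

# After: the same tool, pointed at the compliance gateway.
# "https://gateway.example.internal/v1" is a hypothetical placeholder;
# substitute your gateway's actual address.
client = OpenAI(
    api_key="...",                                   # unchanged
    base_url="https://gateway.example.internal/v1",  # the one-line change
)

# From here, nothing changes for the user. Every prompt below passes
# through the gateway on the way out, and every completion passes
# through it on the way back, so both can be captured and logged.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Draft a client letter..."}],
)
```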

2. PII detection before the model sees it

GDPR Article 5 requires that personal data be processed lawfully, fairly, and transparently. Sending a client's National Insurance number, account number, or medical history to a third-party AI model without appropriate controls puts that lawfulness in question. A compliant gateway detects and handles UK-format PII — NI numbers, sort codes, postcodes, account numbers — before the prompt reaches the model, not after.
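
As a rough sketch of what UK-format detection involves, the patterns below match the formats named above. They are deliberately simplified illustrations; a production detector needs validation rules (for example, disallowed NI number prefixes) and contextual checks to limit false positives:

```python
import re

# Simplified illustrative patterns for the UK formats named above.
UK_PII_PATTERNS = {
    "ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
    "sort_code": re.compile(r"\b\d{2}-\d{2}-\d{2}\b"),
    "account_number": re.compile(r"\b\d{8}\b"),
    "postcode": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b"),
}

def redact(prompt: str) -> str:
    """Replace detected UK PII with typed placeholders before the
    prompt leaves the gateway for the model provider."""
    for label, pattern in UK_PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()}]", prompt)
    return prompt

print(redact("NI QQ123456C, sort code 12-34-56, account 12345678, SW1A 1AA"))
# -> "NI [NI_NUMBER], sort code [SORT_CODE], account [ACCOUNT_NUMBER], [POSTCODE]"
```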

3. Tamper-evident logging with cryptographic verification

Each interaction should be logged to an append-only audit chain in which each record is cryptographically linked to the previous one. HMAC-SHA256 chaining means that any retrospective modification of the log — even a single character — produces a verification failure that cannot be disguised without access to the signing key. The FCA, FOS, or your own audit team can verify at any point that the log has not been altered.
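
A minimal sketch of the chaining idea, assuming JSON records and a placeholder key (a real deployment would use a managed secret): each record's MAC covers the previous record's MAC, so editing any entry breaks verification of every entry after it:

```python
import hmac, hashlib, json

KEY = b"demo-key-replace-with-managed-secret"  # placeholder, not a real KMS

def mac(prev_mac: str, record: dict) -> str:
    """Chain MAC: covers the previous MAC plus this record's canonical JSON."""
    payload = prev_mac.encode() + json.dumps(record, sort_keys=True).encode()
    return hmac.new(KEY, payload, hashlib.sha256).hexdigest()

def append(chain: list, record: dict) -> None:
    prev = chain[-1]["mac"] if chain else "genesis"
    chain.append({"record": record, "mac": mac(prev, record)})

def verify(chain: list) -> bool:
    """Recompute every link; any retrospective edit surfaces here."""
    prev = "genesis"
    for entry in chain:
        if entry["mac"] != mac(prev, entry["record"]):
            return False
        prev = entry["mac"]
    return True

chain = []
append(chain, {"ts": "2026-05-08T09:00:00Z", "event": "prompt", "user": "adviser-17"})
append(chain, {"ts": "2026-05-08T09:00:02Z", "event": "response", "model": "gpt-4o"})
assert verify(chain)

chain[0]["record"]["user"] = "adviser-99"  # tamper with a single field...
assert not verify(chain)                   # ...and verification fails
```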

4. Policy enforcement at the edge

Consumer Duty requires firms to apply appropriate guardrails to client interactions. An AI model that will produce whatever the user asks — regardless of regulatory constraints, firm policy, or client suitability context — is not a governed AI tool. Policy rules applied at the proxy level, before the request reaches the model, enforce firm policy on every interaction without depending on staff remembering to apply them manually.
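
As a sketch of what edge enforcement looks like, assuming a simple illustrative rule set (the rule names, patterns, and actions here are invented for the example, not the gateway's actual rule language): the gateway evaluates every outbound prompt against firm policy and decides before the model is ever reached:

```python
import re

# Invented example rules; a firm's real policy set would be richer.
POLICY_RULES = [
    {"name": "no-guaranteed-returns",
     "pattern": re.compile(r"guaranteed returns?", re.IGNORECASE),
     "action": "block"},
    {"name": "flag-vulnerable-client-context",
     "pattern": re.compile(r"\bvulnerab(le|ility)\b", re.IGNORECASE),
     "action": "flag"},
]

def enforce(prompt: str) -> tuple[str, list[str]]:
    """Run every rule before the request leaves the gateway.
    Returns the verdict and the names of any rules that fired."""
    fired = [r["name"] for r in POLICY_RULES if r["pattern"].search(prompt)]
    blocked = any(r["action"] == "block" and r["name"] in fired
                  for r in POLICY_RULES)
    return ("blocked" if blocked else "allowed", fired)

verdict, fired = enforce("Draft a letter promising guaranteed returns of 12%")
print(verdict, fired)  # blocked ['no-guaranteed-returns']
```

Because the rules run at the proxy, they apply to every tool routed through it, and every block or flag decision lands in the same audit chain as the interaction itself.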

The Inference Agents compliance gateway sits between your API-connected AI tools and the model providers. You configure FCA Consumer Duty and SMCR guardrails once. Your team keeps working exactly as before — one configuration change per tool (a base URL change), no new interface, no training. The gateway intercepts every interaction, applies your policy rules, protects client PII, and builds the tamper-proof audit chain automatically. Get early access →

EU AI Act Article 12: the August 2026 deadline

While Consumer Duty and SMCR are the immediate regulatory drivers for most UK regulated firms, EU AI Act Article 12 creates an additional obligation for firms with EU operations or EU-resident clients. Article 12 requires that high-risk AI systems — a category that, in financial services, captures uses such as creditworthiness assessment — automatically record events in logs throughout the system's lifetime, so that the system's operation remains traceable.

The key requirement is automatic generation. Standard application logs — which most AI tool providers do offer — are unlikely to be sufficient on their own: Article 12's traceability purpose demands records whose integrity can be verified, which in practice means tamper-evident, cryptographically chained logging. Enforcement begins August 2026.

UK firms without EU clients or operations are not directly subject to the EU AI Act. However, UK government policy on AI regulation is actively developing, and the gap between UK and EU requirements for regulated sectors is expected to narrow. Firms that build compliant audit infrastructure now for Consumer Duty will find that the same infrastructure takes them most of the way to Article 12 readiness.

Practical steps for regulated firms

If your firm is actively using AI tools — or staff are using them in ways that are not yet formally governed — the compliance programme should address four things:

  1. Inventory. Identify every API-connected AI tool in use across the firm. This is different from browser-based tools (ChatGPT.com, Claude.ai) — API-connected tools can be governed at the infrastructure level. Browser-based use requires policy controls.
  2. PII governance. Review what client data is entering AI prompts and whether current controls are sufficient under GDPR. This is often the highest-risk finding from an initial AI audit.
  3. Policy definition. Document what AI tools are permitted to do and not do. For FCA-authorised firms, this should reference Consumer Duty outcomes and SMCR accountability requirements explicitly.
  4. Evidence infrastructure. Put in place the logging and audit chain infrastructure that makes your policy enforceable and evidenceable — not just documented.

The last step is where most firms currently have the largest gap. Policy documents without enforcement infrastructure are not evidence of compliance — they are evidence of intent. Consumer Duty asks for demonstrated outcomes, not stated intentions.

Regulated firms looking to address this gap can apply for early access to the Inference Agents compliance gateway, which handles steps three and four at the infrastructure level, with Consumer Duty and SMCR guardrails pre-loaded. See pricing for usage-based plans designed to scale with your firm's AI activity.

Ready to govern your AI activity?

Apply for early access. We are onboarding a small number of FCA-authorised firms. No commitment required.

Get Early Access