Regulation

SMCR and AI:
Why Your Senior Managers Are Still Personally Liable

Delegating a task to an AI tool does not remove the personal accountability that goes with it. Under SMCR, the named Senior Manager is still responsible — and must be able to evidence appropriate oversight when the FCA asks.

SMCR · 8 May 2026 · Inference Agents

Across FCA-authorised firms — IFAs, mortgage brokers, insurance brokers, wealth managers — the pattern is the same. Staff are using AI tools to draft suitability letters faster, handle routine client queries, research product options, and generate initial paragraphs of compliance documentation. The tools are productive. They are also creating a compliance gap that most firms have not yet closed.

The gap is not about whether AI can be used. The FCA has not prohibited AI in regulated activities. The gap is about accountability. When an AI tool assists with a regulated function and something goes wrong, Senior Managers at FCA-authorised firms face a question they are currently not well-positioned to answer: what oversight did you exercise over that AI interaction, and where is the evidence?

What SMCR actually requires

The Senior Managers and Certification Regime, which applies to all FCA-authorised firms, assigns personal accountability to named individuals for specific areas of the firm's business. Prescribed Responsibilities — governance of regulatory obligations, oversight of the firm's risk framework, responsibility for client-facing functions — are allocated to named Senior Managers in their Statements of Responsibilities.

The critical point is that these responsibilities attach to people, not to processes. A Senior Manager can delegate a task — drafting a suitability letter, handling an initial client assessment, managing a client onboarding process — to a member of staff, a third-party supplier, or an AI tool. What they cannot delegate is the accountability that goes with that task. SMCR is explicit on this. If the outcome of a delegated task is unsuitable, harmful, or non-compliant, the named Senior Manager is personally responsible.

SMCR Accountability Principle

A Senior Manager is personally accountable for the performance of the functions allocated to them in their Statement of Responsibilities. This accountability is not diminished by the use of AI tools, automated systems, or third-party software. The regime requires Senior Managers to take reasonable steps to ensure that their areas of the business are run effectively and in compliance with regulatory requirements — and to be able to evidence those steps.

The delegation fallacy

The assumption that is causing compliance gaps at many FCA-authorised firms is a version of the following logic: "We are using AI to assist — not replace — human judgement. The adviser reviews the AI output before it goes to the client. Therefore, accountability rests with the reviewing adviser."

This logic is not wrong in principle. Human oversight of AI outputs is a genuine accountability control. But it only works as a compliance argument if that human review is evidenced — contemporaneously, in structured records, at a level of detail that a regulator can evaluate. The problem is that most firms have not put the infrastructure in place to generate that evidence automatically.

Consider what the FCA would need to see to accept "human oversight of AI" as a sufficient SMCR control:

  • The AI input (the prompt given, including any client data that was included)
  • The AI output (exactly what the model produced, before any editing)
  • The policy guardrails that were applied to that interaction
  • Evidence that a named individual reviewed the output before it reached the client
  • Confirmation that the client PII in the prompt was handled within GDPR and the firm's data policies

Assembled manually after a complaint, these records will not withstand scrutiny. An FCA supervisor conducting an SMCR accountability review is looking for evidence created at the time of the interaction, not retrospectively compiled in response to their query.
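To make the evidential standard concrete, a single structured audit record covering the elements above might look like the following sketch. The field names are illustrative, not a prescribed FCA schema.

```python
# A hypothetical structured audit record for one AI interaction.
# Field names are illustrative, not a prescribed FCA schema.
audit_record = {
    "interaction_id": "a1b2c3d4",                 # unique ID assigned at capture time
    "timestamp": "2026-05-08T14:32:07Z",          # created contemporaneously, never backfilled
    "prompt": "<full AI input, including any client data>",
    "response": "<verbatim model output, before any human editing>",
    "policy_checks": ["pii_scan", "scope_check"], # guardrails applied to this interaction
    "pii_findings": [],                           # client PII detected, and how it was handled
    "reviewed_by": "j.smith",                     # named individual who reviewed the output
    "review_outcome": "approved_with_edits",      # what happened before it reached the client
}
```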

Where the risk is highest

Suitability letters and advice documents

AI-assisted suitability letter drafting is now common in IFA practices. The adviser provides the client information and risk profile, the AI produces a first draft, and the adviser reviews and edits. The adviser's name is on the output that reaches the client. Under SMCR, the Senior Manager responsible for that adviser's function is personally accountable for the adequacy of the suitability process — including the AI's role in it.

If the AI produced a letter that mischaracterised the client's risk profile, or included product recommendations that were unsuitable given the client's circumstances — and the reviewing adviser did not catch it — the SMCR question is not "did the AI make a mistake?" It is "did the Senior Manager's oversight framework ensure that AI mistakes were caught before they reached clients?" If the answer is no, the accountability exposure is personal.

Client communications and query handling

AI tools used to draft or respond to client communications create a different but related risk. Consumer Duty requires firms to demonstrate good outcomes for consumer understanding — clients must be given information they can act on, in a form they can understand. If an AI tool is generating client communications, the firm must be able to evidence that every communication met this standard, not just the ones that resulted in complaints.

For mortgage and insurance brokers, this risk is acute. MCOB 4.7A requires that mortgage recommendations are suitable and that the suitability assessment is documented. If AI is involved in any part of the suitability process, that involvement must be visible in the firm's records and attributable to a named function for SMCR purposes.

Insurance underwriting and automated decisions

For insurance brokers and firms using AI in underwriting or pricing decisions, GDPR Article 22 adds a further layer. Article 22 restricts solely automated decision-making that produces legal or similarly significant effects for individuals. Automated insurance pricing and coverage decisions can fall squarely within scope. Firms must maintain meaningful human oversight of every such decision and be able to demonstrate that oversight in records. SMCR means a named Senior Manager is accountable for ensuring that oversight framework is in place — and is working.

GDPR Article 22 and SMCR: a compounding obligation

It is worth being precise about how GDPR Article 22 and SMCR interact, because the combination is more demanding than either regulation alone.

GDPR Article 22 requires that decisions producing legal or similarly significant effects for individuals are not made on a solely automated basis: human oversight must be real and meaningful, not nominal. A process that routes an AI recommendation to an adviser who rubber-stamps it without genuine review does not satisfy Article 22.

SMCR requires that the Senior Manager responsible for a function can evidence that the function was performed appropriately. For any function involving Article 22-relevant AI decisions, this means the Senior Manager must be able to demonstrate not just that a human was in the loop, but that the human review was substantive and documented on each occasion.

The combination means this: a firm cannot satisfy both GDPR Article 22 and SMCR with a policy that says "advisers review AI outputs before use." That policy needs to be backed by infrastructure that generates contemporaneous evidence of every review — the AI input, the AI output, the reviewing individual, and the outcome. Policy without evidence is intent. SMCR requires demonstrated performance.

What adequate oversight evidence looks like

For an FCA-authorised firm to satisfy its SMCR obligations with respect to AI-assisted regulated activity, the oversight evidence needs to meet four criteria.

1. Automatic, not manual

Evidence of AI use must be generated automatically at the point of every interaction. Manual logging — asking staff to copy their AI conversations into a shared folder — will not produce complete records. In practice, routine AI use will go unlogged. A compliance proxy that sits between the firm's AI tools and the model providers captures every interaction at the infrastructure level, without depending on individual staff behaviour.

2. Tamper-evident

Records that can be edited after the fact do not satisfy an SMCR accountability review. Supervisors need to be able to verify that the records reflect what actually happened — not what the firm wished had happened after a complaint was raised. HMAC-SHA256 cryptographic chaining ensures that any retrospective modification of the audit log is detectable. The chain cannot be altered without producing a verification failure.
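As a minimal sketch of the chaining technique (illustrative, not Inference Agents' production implementation): each record's MAC is computed over the record plus the previous record's MAC, so editing any earlier entry breaks every subsequent link.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"stored-in-an-hsm-or-kms"  # illustrative; key management is the hard part


def chain_mac(record: dict, prev_mac: str) -> str:
    """MAC over the serialised record plus the previous entry's MAC."""
    payload = prev_mac.encode() + json.dumps(record, sort_keys=True).encode()
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()


def append_entry(log: list[dict], record: dict) -> None:
    """Append a record with a MAC chained to the previous entry."""
    prev_mac = log[-1]["mac"] if log else ""  # empty string as the genesis value
    log.append({"record": record, "mac": chain_mac(record, prev_mac)})


def verify_chain(log: list[dict]) -> bool:
    """Recompute every link; any retrospective edit produces a mismatch."""
    prev_mac = ""
    for entry in log:
        expected = chain_mac(entry["record"], prev_mac)
        if not hmac.compare_digest(expected, entry["mac"]):
            return False  # tamper detected at this entry
        prev_mac = entry["mac"]
    return True
```

Because each MAC depends on every MAC before it, deleting or editing a single historical entry invalidates the remainder of the chain, which is what makes retrospective alteration detectable.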

3. PII-protected

Every AI interaction involving client data is simultaneously a GDPR compliance question. Client NI numbers, account details, health information, and financial circumstances entering an AI prompt must be handled within the firm's data governance framework before the prompt reaches a third-party model provider. A compliant gateway applies PII detection to UK-format data — postcodes, sort codes, NI numbers, account numbers — at the infrastructure level, before transmission.
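To give a sense of what infrastructure-level detection involves, here is a deliberately simplified sketch using regular expressions for a few UK formats. A production gateway needs checksum and context validation well beyond bare pattern matching.

```python
import re

# Deliberately simplified UK-format patterns; production detection needs
# checksum and context validation (an 8-digit number alone is ambiguous).
UK_PII_PATTERNS = {
    "ni_number":  re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.IGNORECASE),
    "postcode":   re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b", re.IGNORECASE),
    "sort_code":  re.compile(r"\b\d{2}-\d{2}-\d{2}\b"),
    "account_no": re.compile(r"\b\d{8}\b"),
}


def scan_prompt(text: str) -> dict[str, list[str]]:
    """Scan an outbound prompt and report matches per PII category."""
    findings = {}
    for name, pattern in UK_PII_PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            findings[name] = matches
    return findings
```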

4. Queryable for regulatory export

When the FCA asks for all AI-assisted interactions involving a specific client, or all interactions where a particular function was involved, the firm must be able to run that query against structured audit records. A folder of text files or a spreadsheet maintained by a compliance officer does not support this. The audit log needs to be structured, indexed, and exportable.
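As an illustration of what "queryable" means in practice, a structured store turns that FCA request into a single query. The schema below is hypothetical.

```python
import sqlite3

# Hypothetical schema: one row per AI interaction, indexed for regulatory queries.
conn = sqlite3.connect("audit.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS interactions (
        interaction_id TEXT PRIMARY KEY,
        timestamp      TEXT NOT NULL,
        client_ref     TEXT,
        smcr_function  TEXT,
        reviewed_by    TEXT,
        mac            TEXT NOT NULL   -- link in the tamper-evident chain
    )
""")
conn.execute("CREATE INDEX IF NOT EXISTS idx_client ON interactions (client_ref)")

# "All AI-assisted interactions involving a specific client" becomes one query:
rows = conn.execute(
    "SELECT * FROM interactions WHERE client_ref = ? ORDER BY timestamp",
    ("CLIENT-0042",),
).fetchall()
```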

The Inference Agents compliance gateway addresses all four requirements at the infrastructure level. One configuration change per AI tool — a base URL change — routes every interaction through the gateway. Staff keep working exactly as before. Every prompt and response is intercepted, PII-scanned, policy-checked, and logged to a tamper-evident HMAC-SHA256 audit chain. Senior Managers get the evidence trail SMCR requires without building it manually. See pricing for usage-based plans that scale with your firm's AI activity.
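For an API-connected tool built on, say, the OpenAI Python SDK, that single configuration change might look like this (the gateway URL is a placeholder):

```python
from openai import OpenAI

# Before: the tool talks to the model provider directly.
# client = OpenAI(api_key="sk-...")

# After: one base-URL change routes every request through the compliance gateway.
# (Placeholder URL; your gateway endpoint will differ.)
client = OpenAI(
    api_key="sk-...",
    base_url="https://gateway.example.com/v1",
)

# Staff-facing code is unchanged; interception, PII scanning, and logging
# happen at the gateway before the request reaches the model provider.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Draft a suitability letter..."}],
)
```

Whether a given tool exposes a configurable base URL depends on the tool; most SDK-based integrations do.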

Practical steps for FCA-authorised firms

For firms that are actively using AI tools and have not yet closed the SMCR accountability gap, the programme is straightforward in structure, even if the implementation requires care:

  • Map your AI use. Identify every API-connected AI tool in use across the firm. This is distinct from browser-based tools (ChatGPT.com, Claude.ai) — API-connected tools can be governed at the infrastructure level immediately. Browser-based use requires a policy layer.
  • Allocate SMCR responsibility. Ensure that every AI-assisted regulated function has a named Senior Manager in whose Statement of Responsibilities AI governance sits explicitly. "AI oversight" should not be an implied responsibility — it should be named.
  • Review PII governance. Assess what client data is currently entering AI prompts and whether current controls are adequate under GDPR and your firm's data policies. This is typically the highest-risk finding from an initial AI audit.
  • Build the evidence infrastructure. Put in place the logging, PII detection, and audit chain that make your oversight framework evidenceable — not just documented. Policy without infrastructure is not an SMCR defence.
  • Document the framework. Update your SMCR governance documentation to reflect AI oversight responsibilities, the controls in place, and the evidence those controls generate. This documentation should be reviewed and signed off by the relevant Senior Manager.

Regulated firms ready to address the accountability gap can apply for early access to the Inference Agents compliance gateway. The gateway intercepts, scans, and logs every AI interaction from your API-connected tools — generating the tamper-evident audit chain that SMCR oversight requires.

Ready to govern your AI activity?

Apply for early access. We are onboarding a small number of FCA-authorised firms. No commitment required.

Get Early Access