Three months is not long. For regulated professional services firms that use AI tools in client-facing processes — financial advice, legal analysis, mortgage suitability assessments, insurance decisions — Article 12 of the EU AI Act creates a specific and non-negotiable infrastructure obligation. If your firm serves EU-resident clients or maintains any EU operations, this applies to you.
The question is not whether to comply. It is whether you understand exactly what compliance requires and whether you have time to build it before the deadline.
What Article 12 actually requires
Article 12 of the EU AI Act is titled "Record-keeping." It requires that high-risk AI systems be designed and built so that they automatically generate logs of their operation throughout their lifetime. The key word is automatically.
The regulation specifies that logging must:
- Be automatic — generated by the system itself, not by staff manually recording interactions
- Cover events throughout the system's operation that are relevant to identifying risks to health, safety, and fundamental rights
- Be tamper-proof — structured so that any retrospective modification is detectable
- Be retained for the lifetime of the AI system, or longer where required by sector-specific regulations
The regulation explicitly rules out manual logging as a compliance method. A shared document where staff copy-paste their AI conversations is not a tamper-proof log. Neither is a folder of screenshots. In practice, the requirement points to cryptographic tamper-evidence: a log structure where any alteration, even a single character, produces a verification failure that cannot be disguised.
The obligation runs to the infrastructure, not the application. You cannot satisfy Article 12 by asking your AI tool vendor whether they maintain logs. The obligation sits with the deploying organisation — your firm — to ensure that every AI interaction in a high-risk application is captured to a tamper-evident record that your firm controls and can produce on demand.
Who Article 12 applies to
The EU AI Act applies to providers and deployers of AI systems used in the EU. After Brexit, UK firms are not automatically within scope — but the scope is broader than many assume.
The EU AI Act applies to any organisation deploying an AI system where that system is used in connection with individuals located in the EU, or where the deploying organisation is itself established in the EU. UK firms with EU-resident clients, EU branch offices, or EU-facing operations are within scope for those activities.
For regulated UK professional services firms, the relevant question is: do any of your AI-assisted processes involve EU-resident clients? For IFA and wealth management firms, law firms with EU client relationships, insurance brokers underwriting EU-resident policyholders, or mortgage brokers assisting EU nationals purchasing in the UK — the answer is likely yes for at least a subset of interactions.
Purely domestic UK firms with no EU clients or operations are not directly subject to the EU AI Act. However, two additional factors are worth noting:
- UK regulatory convergence. The UK government has signalled its intent to introduce AI governance requirements for regulated sectors, and FCA Consumer Duty already creates overlapping obligations. Firms that build EU-compliant infrastructure now are likely to find it satisfies UK requirements as they develop, with little additional effort.
- Client expectations. EU-resident individuals and institutional counterparties increasingly expect their service providers to meet EU AI Act standards regardless of where the provider is domiciled. Compliance is becoming a commercial requirement as well as a regulatory one.
High-risk AI systems: which ones apply to regulated firms
Article 12 applies specifically to AI systems classified as high-risk under Annex III of the EU AI Act. For regulated professional services firms, the most relevant classifications are:
Financial services
AI systems used in creditworthiness assessment and credit scoring for individuals are explicitly high-risk. More broadly, AI systems that influence or assist decisions about individual access to financial services — including investment suitability, insurance underwriting, and mortgage eligibility assessments — fall within this category.
Legal and compliance processes
AI systems used to assist in the administration of justice and legal processes are classified as high-risk. This covers AI tools used in legal research, document review, and case analysis where the output influences decisions about individuals. Law firms using AI to assist with matter work involving EU-resident clients should treat those interactions as within scope.
The internal use question
AI tools used purely for internal administrative tasks — drafting internal memos, summarising research that does not involve individual clients, generating standard clauses — may fall outside Annex III. The test is whether the AI output influences a decision about a specific individual. If it does, the interaction is within scope. If it is entirely internal and involves no individual's data or interests, it may not be. The safer position is to treat all API-connected AI tool usage as within scope and apply governance uniformly.
What "tamper-proof" means technically
Article 12 does not specify a particular technical implementation for tamper-proof logging. However, the functional requirement is clear: the log must be structured so that any modification after the fact is detectable. This rules out standard database logs where records can be updated or deleted, and it rules out flat file logs where entries can be edited.
The industry-standard approach for tamper-evident logging is cryptographic chaining. Each log entry is signed with an HMAC (hash-based message authentication code) that incorporates the content of the entry and the MAC of the previous entry. The result is a chain: if any entry is modified, its MAC no longer verifies, and because each MAC is bound to its predecessor, the alteration cannot be concealed without re-signing every subsequent entry, which is impossible without the signing key. Verification is deterministic: any party holding the signing key can recompute the chain and confirm it is intact.
HMAC-SHA256 chaining provides a verifiable, independently auditable record that meets the functional requirement of Article 12. It is also a well-established pattern in audit logging, and one that regulators and auditors understand.
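The chaining scheme described above can be sketched in a few lines of Python. This is an illustration of the technique, not a reference implementation: the key handling, in-memory storage, and record shape here are assumptions, and a production system would use managed key storage and durable, append-only persistence.

```python
import hmac
import hashlib
import json

# Illustrative only: a real deployment keeps the signing key in an HSM
# or secrets manager, never in source code.
SECRET_KEY = b"firm-held-signing-key"

def append_entry(chain: list[dict], record: dict) -> None:
    """Append a record, binding it to the previous entry's MAC."""
    prev_mac = chain[-1]["mac"] if chain else "genesis"
    payload = json.dumps({"record": record, "prev": prev_mac}, sort_keys=True)
    mac = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    chain.append({"record": record, "prev": prev_mac, "mac": mac})

def verify_chain(chain: list[dict]) -> bool:
    """Recompute every MAC in order; any altered entry fails verification."""
    prev_mac = "genesis"
    for entry in chain:
        payload = json.dumps({"record": entry["record"], "prev": prev_mac},
                             sort_keys=True)
        expected = hmac.new(SECRET_KEY, payload.encode(),
                            hashlib.sha256).hexdigest()
        if entry["prev"] != prev_mac or not hmac.compare_digest(entry["mac"],
                                                                expected):
            return False
        prev_mac = entry["mac"]
    return True

chain: list[dict] = []
append_entry(chain, {"prompt": "Assess suitability for...", "response": "..."})
append_entry(chain, {"prompt": "Draft client letter...", "response": "..."})
assert verify_chain(chain)

# Retrospectively editing a single field breaks verification.
chain[0]["record"]["response"] = "edited"
assert not verify_chain(chain)
```

Note that the MAC of each entry covers the previous MAC, so an attacker who edits one entry would have to re-sign every later entry to hide the change, which requires the firm-held key.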
The practical implication: Article 12 compliance requires a logging layer that sits outside the AI tool itself, is controlled by your firm, and implements cryptographic chaining. A vendor-provided conversation history — even one that cannot be edited by the user — does not satisfy this, because the vendor controls the data and the chain verification.
The overlap with FCA Consumer Duty and SMCR
For FCA-authorised firms, Article 12 does not exist in isolation. It overlaps substantially with obligations already created by FCA Consumer Duty and the Senior Managers and Certification Regime.
Consumer Duty requires firms to evidence good outcomes for every client interaction — including AI-assisted ones. SMCR makes named Senior Managers personally accountable for outcomes even where those outcomes were produced by or assisted by an AI tool. Both create demand for the same infrastructure: a structured, queryable, tamper-evident record of every AI interaction in a client-facing process.
The same infrastructure satisfies both regimes. A compliance proxy that intercepts every API-connected AI interaction, applies firm policy rules, protects client PII, and writes each interaction to a tamper-evident HMAC-SHA256 audit chain satisfies Article 12 logging requirements and FCA Consumer Duty evidential requirements simultaneously. Firms with both FCA authorisation and EU client exposure do not need two separate systems.
See our article on FCA Consumer Duty and AI governance for a detailed breakdown of the UK-side obligations and what a compliant evidence trail looks like for IFA, legal, and insurance firms.
What UK regulated firms must do now
With enforcement beginning in August 2026, the practical steps for regulated firms within Article 12 scope are:
1. Identify your in-scope AI activity
Map every API-connected AI tool in use across the firm. The distinction from browser-based AI tools (ChatGPT.com, Claude.ai) matters: API-connected tools can be governed at the infrastructure level, while browser-based use cannot be intercepted by a proxy and must be governed through an acceptable use policy instead. Your Article 12 compliance programme should treat API-connected tools as the primary technical control, paired with a firm-wide acceptable use policy covering personal and browser-based AI subscriptions.
2. Determine which interactions involve individuals in scope
For each API-connected AI tool, identify whether its outputs influence decisions about specific individuals — particularly EU-resident clients or counterparties. Financial advice tools, legal analysis tools, and any tool whose output goes into a client-facing document should be treated as in scope.
3. Implement a tamper-evident logging layer
Deploy a compliance proxy that intercepts every AI API call and logs it to a tamper-evident chain. The proxy should be configured to capture: the prompt sent, the model's response, the timestamp, the user or team associated with the interaction, and any policy rules applied. The log should be held by your firm, not the AI tool vendor.
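As an illustration of the capture fields listed above, each intercepted call might be serialised into a record like the following sketch. The field names and the `AuditEntry` type are assumptions for illustration, not a prescribed schema:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AuditEntry:
    """One intercepted AI interaction, as captured at the proxy."""
    timestamp: str                      # UTC, ISO 8601
    user: str                           # adviser or team identifier
    tool: str                           # which API-connected AI tool
    prompt: str                         # the prompt sent to the model
    response: str                       # the model's response
    policy_rules: list[str] = field(default_factory=list)  # rules applied

entry = AuditEntry(
    timestamp=datetime.now(timezone.utc).isoformat(),
    user="adviser-042",
    tool="drafting-assistant",
    prompt="Summarise the client's attitude to risk...",
    response="...",
    policy_rules=["pii-redaction", "eu-client-flag"],
)

# Serialised form, ready to be appended to the firm-held audit chain.
record = asdict(entry)
```

Each serialised record would then be appended to the tamper-evident chain held by the firm, so the capture step and the chaining step stay decoupled.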
4. Add PII detection at the gateway
GDPR obligations apply in parallel to Article 12. Before a prompt containing client personal data reaches an AI model, UK-format PII — National Insurance numbers, sort codes, account numbers, postcodes — should be detected and handled appropriately. This is a gateway function, not an application function: it must operate before the model sees the data, not after.
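A minimal sketch of gateway-side detection for the UK formats mentioned above. The patterns are deliberately simplified assumptions for illustration: a production gateway needs validated pattern sets, checksum validation where formats allow it, and contextual checks to control false positives (an eight-digit number is not always an account number):

```python
import re

# Illustrative UK-format PII patterns; not exhaustive or production-grade.
UK_PII_PATTERNS = {
    "ni_number": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.I),
    "sort_code": re.compile(r"\b\d{2}-\d{2}-\d{2}\b"),
    "account_number": re.compile(r"\b\d{8}\b"),
    "postcode": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b", re.I),
}

def detect_pii(prompt: str) -> dict[str, list[str]]:
    """Scan a prompt before it reaches the model; return matches by category."""
    hits: dict[str, list[str]] = {}
    for name, pattern in UK_PII_PATTERNS.items():
        matches = pattern.findall(prompt)
        if matches:
            hits[name] = matches
    return hits

hits = detect_pii(
    "Client NI number AB123456C, sort code 12-34-56, "
    "account 12345678, postcode SW1A 1AA"
)
```

The point of the sketch is the placement, not the regexes: `detect_pii` runs at the gateway, on the outbound prompt, so redaction or blocking decisions are taken before any personal data leaves the firm's perimeter.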
5. Document your governance framework
Article 12 compliance is an infrastructure requirement, but it sits within a broader AI governance framework. Regulators — whether the FCA or an EU supervisory authority — will also expect to see a policy document describing which AI tools are used, for what purposes, what controls are applied, and how oversight is exercised. The infrastructure is the evidence. The policy document is the governance.
Regulated firms looking to address the logging and PII requirements at the infrastructure level can apply for early access to the Inference Agents compliance gateway. The gateway intercepts all API-connected AI tool usage, applies firm policy rules, detects UK PII, and writes every interaction to an HMAC-SHA256 tamper-evident chain — with a single configuration change per AI tool. See pricing for usage-based plans that scale with your firm's AI activity.