Solicitors at UK law firms are using AI tools to draft documents, research case law, summarise client files, and prepare correspondence. The productivity benefit is real. So is the compliance exposure — and most firms have not yet put the governance infrastructure in place to manage it.
The SRA has been clear that its Code of Conduct 2019 applies to the use of AI tools where those tools are used in connection with legal practice. It has also signalled, through its AI thematic reviews and published guidance, that firms should expect scrutiny of how they are governing AI use — particularly where client confidential information is involved. The question for practice managers, COLPs, and senior partners is not whether the SRA's attention will reach AI governance, but whether the firm can demonstrate that it has addressed it before that attention arrives.
SRA Code obligations and AI: what applies
The SRA Code of Conduct 2019 does not mention AI specifically. It does not need to. The obligations it imposes are outcomes-based and apply to the means by which those outcomes are — or are not — achieved. When an AI tool is used to assist with client work, the firm's existing obligations apply to that use.
Three areas of the Code are directly engaged by AI use in legal practice.
Rule 6.3 — client confidentiality
Rule 6.3 requires firms to maintain the confidentiality of all information about a client. This obligation applies regardless of the form in which the information is held or the tool used to process it. When a fee earner copies client matter information into a prompt sent to ChatGPT, Claude, Copilot, or any other AI tool, that information is transmitted to a third-party system. The firm must be able to demonstrate that appropriate safeguards were in place to maintain confidentiality — including contractual controls with the AI provider, technical controls over what was transmitted, and a record of how the information was handled.
The SRA's position is that client confidentiality obligations are not disapplied by the use of technology. A document transmitted unencrypted to a third-party provider is a confidentiality breach. So, in principle, is client matter information entering an AI tool that lacks an appropriate data processing agreement or whose training data policy does not exclude customer inputs.
Rule 6.3 itself is direct: you keep the affairs of current and former clients confidential unless disclosure is required or permitted by law or the client consents. The obligation applies to all information about a client, not only to documents formally designated as confidential.
Rule 2.1 — effective governance
Rule 2.1 requires firms to have effective governance structures, arrangements, systems and controls in place to ensure they, and the people they employ or work with, comply with their obligations under the SRA's regulatory arrangements. For firms using AI tools, this means AI governance cannot be ad hoc. There must be a documented policy, technical controls that enforce it, and records that evidence compliance.
A firm that relies on individual fee earners to exercise discretion about what client data they put into AI tools — without technical guardrails or any monitoring — does not have effective systems and controls for AI governance purposes. Policy without enforcement infrastructure is not a governance framework. The COLP is personally responsible for ensuring that the firm's governance arrangements meet this standard.
Personal accountability of the COLP
The Compliance Officer for Legal Practice is responsible for ensuring the firm's compliance with SRA obligations. For AI governance, this means the COLP must be able to demonstrate — not simply assert — that the firm's use of AI tools is within the scope of its professional obligations. When the SRA makes enquiries about a firm's AI use, the COLP will be the named individual who must account for it.
Legal professional privilege and AI tools
Legal professional privilege — both legal advice privilege and litigation privilege — can attach to AI-assisted work product, subject to the usual tests. The product must have been created in connection with the giving of legal advice or in connection with actual or anticipated litigation, and the dominant purpose of the communication must be legal advice or litigation preparation.
The involvement of AI tools does not, of itself, destroy privilege. But it creates a risk that firms should actively manage. Communications that pass through a third-party system may, in certain circumstances, be considered to have lost their privileged character if the transmission of the underlying information to that third party was not itself covered by appropriate protections. This is a fact-specific question, and the law in this area is still developing. But the prudent approach is to ensure that client data entering AI tools is subject to the same confidentiality controls — contractual and technical — that govern any other third-party disclosure in the course of legal work.
The practical risk: A fee earner copies a client's privileged draft agreement into an AI tool to request a plain-English summary. The AI provider's terms do not exclude customer inputs from training data. If the firm cannot demonstrate that contractual and technical safeguards were in place at the time of the interaction, it has a potential privilege and confidentiality problem — and no record to show that it managed the risk. The absence of a record is itself the compliance failure.
GDPR when client PII enters AI prompts
Every time a fee earner includes a client's name, address, NI number, financial position, health information, or any other personal data in an AI prompt, that prompt transfers personal data to a third-party data processor. UK GDPR applies.
The firm must have:
- A lawful basis for that processing (typically a contractual necessity or legitimate interests basis, depending on the activity)
- A data processing agreement with the AI provider that meets the requirements of UK GDPR Article 28
- A privacy notice that accurately describes to clients how their data may be processed using AI tools
- Technical controls to identify and manage personal data at the point of each AI interaction — before it is transmitted
The ICO's guidance on generative AI makes clear that organisations using AI tools to process personal data are data controllers in respect of that processing and must comply with all applicable UK GDPR obligations. For law firms, this overlaps directly with SRA confidentiality obligations: client personal data entering an AI tool without the appropriate safeguards is simultaneously a data protection failure and a professional conduct failure.
UK GDPR Article 22 and automated legal analysis
Where AI tools are used to produce outputs that inform decisions with legal or significant effects for individuals — an automated risk assessment, a document classification that determines how a matter is handled, a compliance check that flags or excludes a transaction — UK GDPR Article 22 may be engaged. Article 22 restricts solely automated decision-making that produces such effects, and requires that human oversight of those decisions is real, not nominal. For law firms using AI in matter management or due diligence workflows, this means the human review of AI outputs must be evidenced at each instance — not just documented as a policy.
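As a sketch of what evidencing review "at each instance" can look like, the following Python fragment records a named reviewer's decision against a specific AI output so it can be appended to the firm's audit trail. The record structure and field names are illustrative assumptions, not a prescribed format.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class HumanReviewRecord:
    """Evidence that a named person reviewed a specific AI output."""
    interaction_id: str   # links the review to the logged AI interaction
    reviewer: str         # the fee earner or supervisor who reviewed it
    decision: str         # e.g. "accepted", "amended", "rejected"
    notes: str            # what was checked or changed, and why
    reviewed_at: str      # UTC timestamp of the review

def record_review(interaction_id: str, reviewer: str,
                  decision: str, notes: str) -> str:
    """Serialise a review record for appending to the audit log."""
    record = HumanReviewRecord(
        interaction_id=interaction_id,
        reviewer=reviewer,
        decision=decision,
        notes=notes,
        reviewed_at=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(record))

# Example: a solicitor amends an AI-drafted risk assessment before use.
print(record_review("int-000482", "j.smith", "amended",
                    "Corrected limitation period; approved for client file."))
```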
EU AI Act Article 12 and the direction of travel
EU AI Act Article 12 requires high-risk AI systems to automatically record events in logs throughout the system's lifetime. Enforcement of the high-risk obligations begins in August 2026. UK firms with EU-based clients or cross-border practices may be within scope. Regardless of direct applicability, Article 12 signals the regulatory standard that is emerging across jurisdictions: automatic, tamper-evident logging of every AI interaction at the infrastructure level.
The SRA has not yet adopted equivalent mandatory logging requirements for law firm AI use. But its supervisory focus is moving in this direction. Firms that have already put compliant logging infrastructure in place will be better positioned when the SRA's expectations crystallise — rather than building retrospectively under regulatory pressure.
What adequate AI governance evidence looks like
When the SRA reviews a firm's AI governance — whether as part of a thematic review, a supervisory engagement, or in response to a complaint — the evidence it will look for falls into four categories.
1. A documented AI use policy
The firm must have a written policy governing the use of AI tools in legal practice. At minimum it should cover:
- which tools are approved for use with client data
- what categories of client information may and may not be entered into AI tools
- the data processing agreement status of each approved tool
- the review process required before AI-assisted output is used in client work
- the fee earner and COLP responsibilities for compliance
2. Technical controls that enforce the policy
A written policy that relies entirely on individual fee earner discretion is not an effective governance control. Firms should implement PII detection at the infrastructure level — a compliance gateway that intercepts every AI interaction involving client data, scans for personal data in UK formats (NI numbers, sort codes, postcodes, account numbers), and applies policy controls before the prompt is transmitted to the AI provider.
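To make the detection step concrete, here is a minimal Python sketch of pattern-based scanning for the UK formats named above. The patterns are illustrative assumptions: a production scanner would add validation (for example, the disallowed NI number prefixes) and contextual checks to limit false positives.

```python
import re

# Illustrative patterns for common UK personal-data formats. A production
# scanner would add stricter validation and context-aware checks.
UK_PII_PATTERNS = {
    "ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
    "sort_code": re.compile(r"\b\d{2}-\d{2}-\d{2}\b"),
    "postcode": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b"),
    "account_number": re.compile(r"\b\d{8}\b"),
}

def scan_prompt(prompt: str) -> dict[str, list[str]]:
    """Return suspected personal data found in an outbound prompt."""
    return {
        label: matches
        for label, pattern in UK_PII_PATTERNS.items()
        if (matches := pattern.findall(prompt))
    }

hits = scan_prompt("Client AB123456C, sort code 20-00-00, account 12345678.")
if hits:
    # Apply policy before transmission: block, redact, or flag for review.
    print(hits)  # e.g. {'ni_number': ['AB123456C'], 'sort_code': ['20-00-00'], ...}
```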
3. A tamper-evident audit log
Every AI interaction involving client matter data should be logged automatically, with the prompt, the response, the PII scan result, and the policy controls applied at the time. The log must be tamper-evident — records assembled retrospectively, or maintained in a spreadsheet that can be edited without detection, will not withstand scrutiny. HMAC-SHA256 cryptographic chaining ensures that any modification of the audit record produces a verifiable failure in the chain, making retrospective alteration detectable.
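The chaining mechanism itself is simple enough to sketch. The following Python fragment is a minimal illustration with an assumed record shape and an inline signing key; a real gateway would hold the key in an HSM or secrets manager and persist records durably.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"illustrative-key"  # in practice: an HSM or secrets manager

def append_record(log: list[dict], entry: dict) -> None:
    """Append an entry whose MAC covers both the entry and the previous MAC."""
    prev_mac = log[-1]["mac"] if log else "genesis"
    payload = json.dumps({"prev": prev_mac, "entry": entry}, sort_keys=True)
    mac = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    log.append({"entry": entry, "prev": prev_mac, "mac": mac})

def verify_chain(log: list[dict]) -> bool:
    """Recompute every MAC; any edit, deletion, or reordering breaks the chain."""
    prev_mac = "genesis"
    for record in log:
        payload = json.dumps({"prev": prev_mac, "entry": record["entry"]},
                             sort_keys=True)
        expected = hmac.new(SIGNING_KEY, payload.encode(),
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, record["mac"]):
            return False
        prev_mac = record["mac"]
    return True

log: list[dict] = []
append_record(log, {"matter": "SMITH-2024-017", "event": "prompt logged"})
append_record(log, {"matter": "SMITH-2024-017", "event": "response logged"})
assert verify_chain(log)

log[0]["entry"]["event"] = "edited later"  # retrospective alteration...
assert not verify_chain(log)               # ...is detectable
```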
4. Queryable records for matter-level export
In the event of a complaint, a regulatory query, or a matter dispute, the firm must be able to retrieve all AI interactions relating to a specific client matter. This requires structured, indexed records — not a folder of exported chat logs. A compliance gateway that logs interactions by client reference, fee earner, and date supports the kind of targeted query that a COLP needs to respond to a supervisory enquiry or a client complaint.
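As a sketch of what structured, indexed records make possible, the following Python fragment uses SQLite with a hypothetical schema; the table and column names are assumptions for illustration, not the gateway's actual data model.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # illustrative; a real store would persist
conn.execute("""
    CREATE TABLE ai_interactions (
        id          INTEGER PRIMARY KEY,
        matter_ref  TEXT NOT NULL,   -- client matter reference
        fee_earner  TEXT NOT NULL,
        occurred_at TEXT NOT NULL,   -- ISO 8601 UTC timestamp
        prompt      TEXT,
        response    TEXT,
        pii_scan    TEXT,            -- scan result recorded at the time
        mac         TEXT             -- link into the tamper-evident chain
    )
""")
conn.execute(
    "CREATE INDEX idx_matter ON ai_interactions (matter_ref, occurred_at)"
)

# Matter-level export for a supervisory enquiry: every AI interaction on
# one matter, in chronological order, ready for review or export.
rows = conn.execute(
    """SELECT occurred_at, fee_earner, prompt, response, pii_scan
       FROM ai_interactions
       WHERE matter_ref = ?
       ORDER BY occurred_at""",
    ("SMITH-2024-017",),
).fetchall()
```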
The Inference Agents compliance gateway addresses all four requirements at the infrastructure level. One configuration change per AI tool routes every interaction through the gateway, as sketched below. Fee earners continue working as before. Every prompt and response is intercepted, PII-scanned, and logged to a tamper-evident HMAC-SHA256 audit chain that the COLP can query and export for regulatory or matter purposes. Pricing is usage-based: the gateway scales with your firm's AI activity, with no fixed fee for low-volume practices.
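By way of illustration only, for a tool that uses an OpenAI-compatible client, that single configuration change could look like the sketch below. The gateway URL and the matter-reference header are assumptions for the example, not documented product settings.

```python
from openai import OpenAI

# Hypothetical gateway endpoint and header; the fee earner's workflow is
# otherwise unchanged, since the gateway proxies requests to the provider.
client = OpenAI(
    base_url="https://aigateway.examplefirm.co.uk/v1",
    default_headers={"X-Matter-Ref": "SMITH-2024-017"},
)

# Requests now pass through the gateway, which scans, logs, and forwards.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarise the attached clause."}],
)
```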
Practical steps for law firm COLPs
For COLPs who are building or reviewing their firm's AI governance framework, the following steps reflect the SRA's own published guidance on AI risk management for law firms.
- Audit current AI tool use. Identify every AI tool in use across the practice — approved and unapproved. Fee earners who have adopted AI tools informally (personal subscriptions, browser extensions, tools installed on personal devices) represent a governance gap that must be addressed through policy rather than technical controls alone.
- Review data processing agreements. For every AI tool approved for use with client data, confirm that a UK GDPR Article 28 data processing agreement is in place and that the provider's terms do not permit use of customer inputs as training data without explicit consent.
- Update the firm's privacy notice. If the firm is using AI tools to process client personal data and the privacy notice does not describe this, it should be updated. Clients have a right to understand how their personal data is processed.
- Implement technical controls. Put a compliance gateway in place that logs AI interactions involving client data, applies PII detection, and generates a tamper-evident audit record. A policy without technical enforcement is not an effective control under Rule 2.1.
- Document COLP accountability. Ensure the COLP's responsibilities for AI governance are recorded in the firm's management structure documentation. The COLP should be able to demonstrate that they have exercised oversight of the firm's AI governance arrangements — not simply that a policy document exists.
COLPs and practice managers ready to address the AI governance gap can apply for early access to the Inference Agents compliance gateway. The gateway intercepts, scans, and logs every AI interaction from your API-connected tools — producing the tamper-evident audit record that SRA and GDPR governance requires.