Discussions about artificial intelligence (AI) in legal practice tend to be deeply polarised. Some lawyers believe AI is the future of the profession, set to transform everything from legal research to drafting and case strategy. Others see it as unpredictable, risky and potentially even dangerous.
As with most technological advancements, the reality likely lies somewhere in between. AI is neither a magic bullet nor an existential threat—it is a tool. Whether that tool is helpful or harmful depends on how it is understood, deployed and monitored.
One thing is certain though: the next generation of lawyers will be AI-literate. They will expect AI to be part of their daily workflow, just as legal research databases and digital case management systems are today. Those currently practising therefore need to engage with AI now—not to blindly adopt it or reject it, but to develop an informed, critical approach to its use.
The uncertainty surrounding AI use is why I was delighted to present a seminar on ‘Generative AI for Lawyers’ recently—to provide context, foster discussion and help legal professionals better understand generative AI (GenAI) in particular.
My goal for the seminar was not to persuade or dissuade anyone from using GenAI, but rather to encourage informed decision-making based on a clear-eyed understanding of AI’s potential and its pitfalls. A summary of the key points we discussed and the interactive demos we looked at during the seminar follows.
Key regulatory and ethical considerations
Although AI is not explicitly regulated for legal practice, existing professional rules remain highly relevant. The Solicitors Regulation Authority (SRA) and Bar Standards Board (BSB) have both emphasised that lawyers must ensure AI use aligns with key professional obligations, including:
- Competence: Lawyers must understand AI’s capabilities and limitations to use it responsibly. AI-generated content must be reviewed and verified before being relied upon.
- Integrity: Lawyers must use AI in an honest and ethical manner, ensuring that its use does not mislead clients, the courts or third parties. AI-generated content should not be presented as independently verified legal advice without proper review, and lawyers must remain transparent about its limitations.
- Confidentiality: Sensitive information must not be fed into AI tools unless the lawyer has control over how it is processed and is confident that confidentiality (and privilege, if relevant) will be preserved.
- Clients’ best interests: Lawyers must assess whether using AI in a particular case serves the client’s best interests. This includes considering factors such as accuracy, transparency and potential risks associated with AI tools.
Recent US-focused commentary from Thomson Reuters and Reed Smith distils certain ethical considerations for AI use into a “Seven Cs” framework, of which two principles resonate particularly well with the UK regulatory landscape and the concerns we discussed in the seminar:
- Competence – Lawyers don’t need to be AI engineers, but they must understand enough about how the technology works to be able to assess AI-generated content critically.
- Confirmation – AI outputs should never be taken at face value; results need independent verification.
The data dilemma: risks of confidentiality and accuracy
Lawyers need to be aware of the potential sources of data that GenAI tools might rely on when producing outputs. Beyond the risk that training data might improperly include proprietary or copyrighted material, which could inadvertently be reproduced in a lawyer’s work, lawyers may also unknowingly contribute their own or their client’s confidential information to the AI model through the questions they ask or the details they input or upload, if that process is not properly managed. Depending on the task at hand, mitigating this risk may require limiting AI use to in-house tools or to tools with contractual guarantees over data processing. In short, AI-related data handling deserves the same caution as GDPR compliance when sensitive information is involved.
One of the most pressing issues that flows from this is how GenAI processes and stores data. Unlike traditional databases, AI models do not store records of data verbatim. Instead, they break information down into mathematical relationships between words and concepts. This leads to several critical issues:
- Confidential information can be leaked. If a lawyer inputs privileged data into an AI tool, it could theoretically be reproduced in a future output, leading to data breaches.
- Data cannot easily be deleted. If an AI model has been trained on incorrect or sensitive information, there is no straightforward way to remove it.
- AI prioritises coherence over accuracy. A generative AI tool doesn’t “know” what is true—it simply predicts the statistically most likely next word in a sequence. This means it can produce completely fabricated but highly plausible-sounding outputs—often referred to as “hallucinations.”
The risk of unreliable or biased outputs means that AI must be used cautiously, particularly in legal research and when drafting. AI can make errors, misunderstand legal context, or misinterpret sources. Lawyers should therefore always approach AI-generated outputs with scrutiny, treating them as starting points for further review rather than final answers.
The five-step approach for structured AI use
To reduce risks and maximise AI’s potential, I adopt a structured approach when interacting with AI tools. My simple yet effective framework involves five steps:
1. Define the task
Be explicit about what you need. Instead of asking an AI tool to “summarise this contract”, specify, “Summarise the key indemnity and limitation of liability clauses”. Break a task down into multiple sub-tasks if need be. Well-defined requests lead to more relevant responses.
2. Set the parameters
Clarify any constraints. Should the summary be under 200 words? Should it cite specific clause numbers? Should the tone be appropriate for a judge or the lay client? Setting parameters helps ensure AI delivers a tailored response rather than a generic one.
3. Provide context
AI tools work better when they have additional context. If you are asking for a contract summary, specifying whether it is governed by UK law or international trade agreements can improve the response. If your client has an unusual or unexpected risk appetite, providing this context can also help ensure the AI tool’s response takes that into account.
4. Validate the output
Never assume AI-generated content is correct. Cross-check AI outputs against authoritative sources, such as legislation, case law or practice notes. If AI generates legal citations, check the actual cases to ensure they are real and relevant.
5. Iterate and refine
If the initial response is lacking, refine your prompt. Adjusting the input can lead to significant improvements in accuracy and usefulness. AI tools work best when users engage in an iterative process, refining their requests based on the outputs received.
I think that the way to get a good AI response is a bit like ordering a pizza. If you simply say “I want a pizza” to the person behind the counter at a pizza shop, you might not get what you expect. But if you break your requirements down and specify each of the details—e.g. “I want a vegetarian pizza for two people who don’t like tomatoes”—then you should obtain a better outcome.
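Applied to legal work, a single prompt drawing the steps together might read as follows (the document and requirements here are purely illustrative): “Summarise the key indemnity and limitation of liability clauses in the attached supply contract, which is governed by English law. Keep the summary under 200 words, cite the relevant clause numbers, and write in plain language suitable for a lay client.” The output would then be validated against the contract itself, and the prompt refined if the response falls short.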
Following a structured approach, such as the one set out above, helps ensure AI is used effectively and can help reduce the risks associated with unreliable outputs.
Using templates to improve AI accuracy
A particularly effective way to control AI outputs is by using structured input templates. Instead of feeding AI unstructured text and hoping for a useful response, lawyers can guide AI’s processing by providing a structured framework.
One powerful technique is to upload an Excel spreadsheet alongside a document that needs analysis. The spreadsheet should contain column headings representing the key categories of information to extract. By way of simplified example, these headings might include:
- Claimant’s allegation with paragraph number(s)
- Claimant’s evidence with paragraph number(s)
- Legal issue(s) raised
- Defendant’s response with paragraph number(s)
- Defendant’s evidence with paragraph number(s)
The AI tool can then be instructed to analyse the document and to populate the spreadsheet accordingly.
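For those who prefer to build such a template programmatically rather than by hand, the following is a minimal Python sketch. It assumes the openpyxl library is installed, and the file name is illustrative only:

```python
# Minimal sketch: create an Excel template whose column headings
# tell the AI tool which categories of information to extract.
# Assumes openpyxl is installed (pip install openpyxl).
from openpyxl import Workbook

headings = [
    "Claimant's allegation with paragraph number(s)",
    "Claimant's evidence with paragraph number(s)",
    "Legal issue(s) raised",
    "Defendant's response with paragraph number(s)",
    "Defendant's evidence with paragraph number(s)",
]

workbook = Workbook()
sheet = workbook.active
sheet.title = "Analysis template"
sheet.append(headings)  # the headings become the first row
workbook.save("analysis_template.xlsx")  # illustrative file name
```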
This approach ensures that:
- The AI tool extracts relevant data in a structured and organised manner.
- Lawyers can quickly review, validate and refine AI-generated outputs.
This technique is particularly useful when analysing contracts, statutes and judgments, as it allows for efficient data extraction while facilitating legal oversight.
Lawyers must, however, remain alert to, and safeguard against, the risk that any documentation they upload or information they provide could become part of the AI tool’s model.
Validation: The critical safeguard against AI errors
Even when using structured prompting, AI outputs must be validated before being relied upon. Potential validation techniques include:
- Asking the AI tool to cite its sources, and then verifying those sources independently.
- Requesting paragraph or case references for legal citations, cross-checking them with the corresponding documents.
- Running counterarguments through the AI tool to see if it produces equally strong reasoning for an opposing view.
- Avoiding closed questions such as “Is this correct?”, since tools can wrongly yet confidently confirm incorrect answers when asked. Real-world failures such as Mata v Avianca, in which a US lawyer relied on fabricated AI-generated case law, illustrate the risks.
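For instance, rather than seeking confirmation, one might prompt: “What are the strongest arguments against the analysis above, and which authorities support them?”, and then check any authorities cited against the original sources.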
Validation is not optional. It is an essential step in responsible AI use.
Final thoughts: AI as a tool, not a replacement
Generative AI is neither a threat to be feared nor a shortcut to be blindly embraced. It is a powerful tool that, when used with care, diligence and professional judgement, can bring efficiencies and innovation to legal practice.
Regulators have made it clear that lawyers remain fully responsible for the legal advice they provide—AI does not change that. What AI does offer is a way to enhance legal work, provided it is used responsibly, ethically and with a critical eye.
Lawyers who ignore AI risk being left behind as the profession evolves. The key is to engage thoughtfully, ensuring that when AI tools are used, they are used as aids, not substitutes, for legal expertise.
Further information
For more information from the 36 Stone team, contact clerks@36stone.co.uk
