Master of the Rolls Sir Geoffrey Vos highlights both the potential benefits and dangers of AI in the legal sector, calling for caution amidst rapid adoption and ethical concerns.
One of the UK’s most senior judges, Master of the Rolls Sir Geoffrey Vos, has publicly reflected on the growing role of artificial intelligence (AI) in the legal sector, highlighting both its transformative potential and inherent risks. Speaking at the Legal Geek Conference, Sir Geoffrey likened AI to a “chainsaw” — a powerful tool that, in the right hands, can streamline legal processes but is “super dangerous” if misused. He acknowledged that AI, especially large language models (LLMs), can be highly effective for drafting contracts and researching legal matters, significantly reducing time spent on routine tasks.
Sir Geoffrey also contemplated the possibility of AI making judicial decisions, observing that the technology could theoretically resolve cases in minutes that currently take years of human effort. Despite this capability, he urged caution, asserting that judges’ rulings are final and typically irreversible, underscoring the crucial human elements of empathy, insight, and nuanced judgment that AI cannot replicate. He further noted that AI systems reflect the state of intelligence at a fixed moment in time and may not adapt to evolving legal thought and societal norms, making reliance on them for long-term judicial decisions problematic.
The rapid expansion of AI use in law, however, has brought significant challenges. Earlier this year, the High Court intervened after uncovering instances in which AI tools had been misused to fabricate legal citations and case law. In one notable case, claimants in a substantial damages lawsuit against Qatar National Bank submitted 18 fictitious case citations, part of a larger body of material generated with publicly available AI tools such as ChatGPT. Similarly, in a regulatory case, a lawyer cited non-existent legal precedents several times, though the lawyer denied deliberately using AI. These incidents drew stern warnings from Dame Victoria Sharp, President of the King's Bench Division, who highlighted the damage AI misuse could do to public trust in the justice system. She emphasized that knowingly submitting false material to court could amount to contempt of court or, in the most serious cases, the criminal offence of perverting the course of justice. The rule of law depends on accuracy and integrity, and although AI tools can produce coherent and plausible text, that content may be entirely incorrect.
Legal experts such as barrister Tahir Khan point out that many errors arise from reliance on general-purpose AI platforms rather than specialised tools designed for the legal industry, such as those provided by LexisNexis. Khan stresses that, whichever tools are used, ultimate responsibility for validating any output rests with the lawyer.
Industry data corroborates the growing integration of AI in legal work. Surveys conducted by LexisNexis indicate a substantial rise in generative AI adoption among UK lawyers, from 11% in mid-2023 to 26% by January 2024, with further growth expected. While large firms and academic institutions lead this trend, in-house legal teams are increasingly adopting AI tools. Despite this uptake, cultural acceptance within law firms remains mixed. A recent LexisNexis study found that although 61% of lawyers now use AI in daily work, only 17% perceive it as fully integrated into their firm's strategy and operational framework. Adoption is often slowed by caution and a lack of confidence, with legally focused AI platforms inspiring greater confidence in outcomes.
The rise of AI in legal settings also raises cybersecurity concerns. Nearly half of surveyed lawyers worry about confidential client data leaking through opaque AI systems, especially given that over 65% of law firms have experienced cyber incidents. These risks underline the need for law firms to invest not only in AI but also in staff training and robust cyber defence protocols to protect sensitive information.
At the same time, experts caution the legal profession not to over-rely on AI’s analytical power. While AI excels at processing large volumes of data efficiently, it lacks the ability to perform complex legal reasoning or craft the nuanced strategies that experienced lawyers bring to litigation and client advising. Misapplication of AI, particularly in areas requiring deep interpretation or strategic innovation, can result in costly errors or missed details. Tailored AI solutions and rigorous human oversight remain essential to ensure accuracy and maintain the high professional standards required in legal practice.
The increasing prominence of AI in UK law is part of broader national legal tech developments. The UK government is actively addressing AI-driven harms in other sectors, notably criminalizing the use of AI-generated child sexual abuse material and deepfake content, demonstrating a commitment to balancing innovation with ethical safeguards.
In summary, AI’s integration into the UK legal system offers significant efficiency gains and research advantages, as noted by senior jurists and industry analyses. Yet, the judiciary and legal professionals alike are aware of the high stakes involved in preserving justice’s human elements and the system’s integrity. Ongoing vigilance, specialised tools, ethical adherence, and robust security measures are vital to harness AI’s potential responsibly and prevent its pitfalls from undermining the rule of law.
📌 Reference Map:
- Paragraph 1–3 – [1] (The Independent)
- Paragraph 4–6 – [1] (The Independent), [2] (Reuters)
- Paragraph 7–8 – [1] (The Independent), [3] (LexisNexis), [5] (LexisNexis report)
- Paragraph 9 – [6] (HSBC Corporate Insights)
- Paragraph 10 – [7] (Legal Futures)
- Paragraph 11 – [4] (Reuters on AI and child abuse material)
- Paragraph 12 – [1] (The Independent), [5] (LexisNexis report), [7] (Legal Futures)
Source: Noah Wire Services
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We've since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score: 8
Notes: The narrative is recent, published on 21 October 2025. However, similar discussions by Sir Geoffrey Vos on AI in the legal sector have been reported earlier, notably in speeches from June 2023 and February 2025. ([judiciary.uk](https://www.judiciary.uk/speech-by-the-master-of-the-rolls-to-the-bar-council-of-england-and-wales/?utm_source=openai)) The Independent article may be summarising or referencing these prior statements. No evidence of recycled content from low-quality sites or clickbait networks was found.
Quotes check
Score: 7
Notes: The article includes direct quotes attributed to Sir Geoffrey Vos, such as likening AI to a 'chainsaw' and discussing its potential to make court decisions in minutes. These quotes appear to be original to this report, with no exact matches found in earlier publications. However, similar themes have been addressed in his previous speeches. ([judiciary.uk](https://www.judiciary.uk/speech-by-the-master-of-the-rolls-to-the-bar-council-of-england-and-wales/?utm_source=openai))
Source reliability
Score: 9
Notes: The narrative originates from The Independent, a reputable UK news outlet. The quotes attributed to Sir Geoffrey Vos are consistent with his known positions on AI in the legal sector, as evidenced by his prior speeches. ([judiciary.uk](https://www.judiciary.uk/speech-by-the-master-of-the-rolls-to-the-bar-council-of-england-and-wales/?utm_source=openai))
Plausibility check
Score: 8
Notes: The claims about AI's potential to expedite court decisions align with ongoing discussions in the legal community about AI's role in the judiciary. The narrative also references real incidents of AI misuse in legal contexts, such as the submission of fictitious case citations. The tone and language are appropriate for the subject matter and region.
Overall assessment
Verdict (FAIL, OPEN, PASS): PASS
Confidence (LOW, MEDIUM, HIGH): HIGH
Summary: The narrative is recent and originates from a reputable source, with quotes consistent with Sir Geoffrey Vos's known positions on AI in the legal sector. While similar themes have been addressed in his prior speeches, the specific quotes in this report appear to be original. The claims are plausible and supported by real incidents of AI misuse in legal contexts.