
Insurers are shifting their focus from cyber risks to emerging AI-related liabilities in professional services, demanding stricter governance frameworks as firms adopt AI tools amid regulatory and security challenges.

Visits to professional indemnity insurers in London last month revealed a significant shift in focus: away from cyber risks and towards emerging concerns around artificial intelligence (AI). Insurers are grappling with the new risk landscape AI creates and are eager to learn how professional services firms, particularly in law, are managing the associated risks. The rapidly increasing adoption of AI in business processes presents opportunities for automation and efficiency, but also notable liability concerns and operational challenges.

The firm behind these insights invested early in AI-assisted legal tools such as AuthorDocs and has developed its own in-house generative AI chatbot, giving it first-hand experience of both the promise and the pitfalls of the technology. For insurers, AI raises the question of what happens when a client says “we use AI”: what processes are in place, what risks exist, and how might these translate into liability? These issues have become particularly pertinent for professional services, where outputs directly affect client outcomes and regulatory compliance.

A primary concern for insurers is whether firms use AI-generated work product without adequate human verification. The New Zealand case of Wikeley v Kea Investments Ltd highlights the dangers: there, AI-generated legal submissions included fabricated case citations, misleading the court. Similar cases in England and Wales have led to regulatory scrutiny and sanctions against lawyers who failed to properly check AI-compiled documents. These incidents underscore the ethical and professional breaches possible with unchecked AI use, ranging from regulatory penalties to potential professional liability claims if clients suffer because of erroneous AI-driven advice.

Confidentiality risk compounds these issues. Generative AI systems are trained on extensive data and may incorporate client information in ways that are not fully controlled or secured. Users risk exposing sensitive or privileged data to AI providers whose terms may not guarantee confidentiality, creating vulnerabilities for information leaks or unintended sharing. This risk extends beyond law firms to other professional services, and firms are seeking solutions such as proprietary AI tools to maintain control over confidential data. However, in-house systems may lack the sophistication of specialist providers, forcing a trade-off between confidentiality and capability.

Insurers now expect insured firms to have formal policies governing AI use. Such frameworks should specify approved tools, provide staff training on AI limitations such as “hallucinations” (where AI generates plausible but inaccurate information), and mandate human review of any AI-assisted work product. Surveys indicate most businesses lack comprehensive AI governance: the 2024 Datacom AI Index report, for example, found that only 13% had audit assurance and governance frameworks, while fewer than half provided workforce policies on AI use. This gap heightens insurer concerns over uncontrolled AI risks.

Government guidance, such as New Zealand’s Public Service AI Framework, offers a model focusing on safe, transparent, and accountable AI adoption with governance, security, skills development, and bias mitigation. Insurers are expected to demand similar rigor as AI use grows more prevalent and complex, scrutinising how firms manage operational oversight, data protection, and compliance with emerging AI regulations.

Insurers are likely to question the nature and scope of AI use within insured firms. Important factors include whether AI supports routine administrative tasks or critical decision-making with greater liability potential, the provenance and training data of AI systems, governance structures ensuring responsibility and human oversight, security measures protecting confidential data, and awareness of applicable regulations. Firms unable to demonstrate robust AI risk management may face higher premiums, reduced coverage, or policy exclusions specifically addressing AI-related losses.

Beyond professional indemnity, AI and deepfake technologies introduce wider insurance challenges. Deepfakes have been used in sophisticated scams involving financial transfers, as in a Hong Kong case where US$25 million was fraudulently diverted. Such risks mean companies need to implement verification controls and secure adequate cover across commercial, cyber, media liability, directors and officers, and errors and omissions policies. The evolving legal and regulatory environment demands that businesses remain vigilant, continuously reviewing coverage to address AI’s multifaceted risks.

In the legal sector, experts emphasise that AI adoption must be coupled with effective internal policies to mitigate potential errors, breaches of confidentiality, failures in informed consent, and intellectual property infringements. Cases of sanction-worthy conduct, like lawyers submitting AI-generated briefs with fabricated sources, reinforce the imperative for rigorous human validation. Law firms and other professional service providers must align AI use with ethical standards and regulatory expectations to shield themselves from liability and reputational damage.

Ultimately, insurers view AI as a double-edged sword, offering operational efficiency but introducing new liability, regulatory, and security risks. Those firms that proactively develop disciplined governance frameworks, train their workforce, implement stringent oversight, and transparently manage AI risks will be better positioned to respond to insurer inquiries and secure more favourable insurance terms. As AI becomes more embedded in professional practice, the insurance industry will continue to advance its approaches to risk assessment, coverage terms, and premium setting to reflect the complex AI risk environment.

📌 Reference Map:

  • [1] (MinterEllison) – Paragraphs 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12
  • [2] (Reuters) – Paragraph 4
  • [3] (Reuters) – Paragraph 13
  • [4] (Reuters) – Paragraph 13
  • [5] (Lockton) – Paragraph 12
  • [6] (Legal Dive) – Paragraphs 5, 12
  • [7] (IBM) – Paragraphs 9, 10

Source: Noah Wire Services

Noah Fact Check Pro

The draft above was created using the information available at the time the story first
emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed
below. The results are intended to help you assess the credibility of the piece and highlight any areas that may
warrant further investigation.

Freshness check

Score:
8

Notes:
The narrative was published on 10 November 2025, making it current. However, similar discussions on AI risks in the legal sector have been reported in recent months, such as the Reuters article from July 2025. ([reuters.com](https://www.reuters.com/legal/legalindustry/innovation-exposure-artificial-intelligence-risks-legal-professionals-2025-07-14/?utm_source=openai)) This suggests that while the content is fresh, the topic has been covered recently. Additionally, the article references a New Zealand case from 2024, indicating that some information may be recycled. Overall, the freshness score is high, but there is some overlap with recent coverage.

Quotes check

Score:
9

Notes:
The article includes direct quotes from the New Zealand case of Wikeley v Kea Investments Ltd [2024] NZCA 609. These quotes are specific to the case and do not appear to be reused from other sources. The use of direct quotes from a recent legal case adds originality to the content.

Source reliability

Score:
9

Notes:
The narrative originates from MinterEllison, a reputable law firm with expertise in the field. This enhances the credibility of the information presented.

Plausibility check

Score:
8

Notes:
The article discusses the increasing focus of insurers on AI risks in the legal sector, highlighting concerns such as the use of AI without proper verification and the handling of confidential client data. These concerns are consistent with recent industry reports and legal cases, such as the Reuters article from July 2025. ([reuters.com](https://www.reuters.com/legal/legalindustry/innovation-exposure-artificial-intelligence-risks-legal-professionals-2025-07-14/?utm_source=openai)) The inclusion of a specific legal case from 2024 adds credibility to the claims made. However, the article’s focus on a New Zealand case may limit its applicability to other jurisdictions.

Overall assessment

Verdict (FAIL, OPEN, PASS): PASS

Confidence (LOW, MEDIUM, HIGH): HIGH

Summary:
The narrative is current and originates from a reputable source, enhancing its credibility. While similar topics have been covered recently, the inclusion of specific legal cases and direct quotes adds originality. The concerns raised are plausible and supported by recent industry reports and legal cases. Therefore, the overall assessment is positive.
