A landmark ruling in New York clarifies that material generated through public AI chatbots is not protected by attorney-client privilege or the work-product doctrine, a cautionary signal for legal professionals using generative AI tools.
A federal judge in New York has ruled that material generated through a public AI chatbot is not protected by attorney-client privilege or the work-product doctrine, a decision that could shape how lawyers and clients use generative AI in litigation. According to reporting by legal publishers and commentary from law firms, the case arose after a defendant used an AI system to help develop defence theories and later sought to shield the resulting exchanges from disclosure. The court rejected that effort, finding that the communications were not confidential in the way the privilege rules require.
The ruling, issued by U.S. District Judge Jed S. Rakoff in the Southern District of New York, turns on a familiar principle rather than a novel rule for new technology: privilege depends on secrecy. Legal commentators have said the defendant had used a publicly accessible chatbot without lawyer involvement, meaning the information was shared with an outside platform rather than exchanged privately with counsel. On that basis, the court treated the AI system as a third party, which undermined any claim that the material stayed within the protected attorney-client relationship.
The decision is also notable because it extends beyond privilege to work-product protection. Akerman reported that the judge concluded the AI-related material was discoverable even though the defendant later passed the results to counsel. Other legal analyses said the court’s reasoning emphasised that AI tools are not licensed legal advisers and therefore do not fit neatly within doctrines built around lawyer-client communications. The result is one of the first federal rulings to squarely address whether conversations with generative AI can be kept out of litigation, and it came down firmly on the side of disclosure.
For lawyers and in-house teams, the practical message is cautionary rather than prohibitive. The case does not suggest that AI cannot be used in legal work, but it does underline the risk of feeding sensitive facts, strategy or draft arguments into consumer-facing tools that may store, process or reuse inputs. As several law firms have noted, public AI services often reserve rights over user data, making it harder to argue that a client had a reasonable expectation of privacy.
That leaves companies and individuals facing a simple but significant rule: if confidential information is shared with a third-party AI system, traditional privilege protections may be lost. The broader implication of the ruling is that courts are likely to apply established confidentiality standards to new technology, rather than carving out special treatment for artificial intelligence.
Source: Noah Wire Services
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score: 10
Notes: The article is based on a recent ruling by U.S. District Judge Jed S. Rakoff, dated February 10, 2026, with a written opinion issued on February 17, 2026. ([akerman.com](https://www.akerman.com/en/perspectives/court-rules-that-information-disclosed-by-layperson-to-ai-tools-is-not-protected-by-attorney-client-or-work-product-privileges.html?utm_source=openai))
Quotes check
Score: 10
Notes: The article includes direct quotes from Judge Rakoff’s ruling and other legal analyses. These quotes are consistent across multiple reputable sources, confirming their authenticity. ([akerman.com](https://www.akerman.com/en/perspectives/court-rules-that-information-disclosed-by-layperson-to-ai-tools-is-not-protected-by-attorney-client-or-work-product-privileges.html?utm_source=openai))
Source reliability
Score: 8
Notes: The article is published on the Bergeron Clifford website, which appears to be a law firm blog. While law firm blogs can provide insightful analyses, they may also have inherent biases. Cross-referencing with independent news outlets would enhance reliability.
Plausibility check
Score: 10
Notes: The claims made in the article align with the known facts of the case and the legal principles involved. The ruling by Judge Rakoff is consistent with established legal doctrines regarding attorney-client privilege and the use of AI tools.
Overall assessment
Verdict (FAIL, OPEN, PASS): PASS
Confidence (LOW, MEDIUM, HIGH): MEDIUM
Summary: The article provides a timely and accurate summary of Judge Rakoff’s ruling, with direct quotes and consistent information across multiple sources. However, reliance on a single law firm’s blog introduces potential bias, and consulting independent news outlets would enhance the verification process.
