A group of authors led by Pulitzer winner John Carreyrou has sued six major AI companies, claiming they trained their models on pirated books without permission, and is seeking billions in damages as industry battles over fair use intensify.
A group of authors led by John Carreyrou, the Pulitzer Prize-winning writer behind “Bad Blood”, has launched a fresh legal challenge against six major AI companies, accusing them of building large language models on pirated books without permission or payment. The lawsuit, filed in the Northern District of California in December 2025, names Anthropic, OpenAI, Google, Meta, xAI and Perplexity and claims the firms relied on material taken from shadow libraries including LibGen, Z-Library and OceanofPDF.
The plaintiffs say they are not interested in a modest payout per title, but in damages that reflect the scale of the alleged copying. According to the complaint, each defendant could owe up to $150,000 per work, the Copyright Act’s statutory maximum for willful infringement, which would push potential damages to $900,000 for a single book across all six companies. The authors cite that figure to argue that the dispute is not about a technical copyright lapse, but about what they describe as a wholesale extraction of value from their work.
The case builds on a key ruling in June 2025 by U.S. District Judge William Alsup in the Bartz v. Anthropic litigation. TechCrunch reported that Alsup held training on legally acquired books could qualify as fair use, but the judge drew a sharper line around material downloaded from pirate sources. That distinction matters here, because the new plaintiffs are leaning on the argument that fair use cannot excuse the use of books that were obtained unlawfully in the first place.
Their complaint also reflects growing frustration with the proposed $1.5 billion Anthropic settlement, which works out to roughly $3,000 per title for about 500,000 books. The authors say that sum is only a small fraction of the Copyright Act’s statutory ceiling and falls far short of what they believe the law allows. In a separate development, the Authors Guild has praised the settlement for those who remain in the class, while the opt-out plaintiffs have chosen to press ahead with their own claims.
The lawsuit arrives as AI copyright battles broaden well beyond books. Reuters and other outlets have reported that music publishers sued Anthropic in January 2026, seeking more than $3 billion over alleged lyric piracy, while the New York Times continues its case against OpenAI. With evidence from the Anthropic litigation now being cited in other disputes, legal experts say the industry is entering a more confrontational phase in which the use of copyrighted material for AI training is likely to be tested case by case, and at much greater scale.
Source: Noah Wire Services
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score: 6
Notes:
The article reports on a lawsuit filed in December 2025, which is recent. However, the earliest known publication date of substantially similar content is December 23, 2025, which indicates the narrative may be recycled. The article includes updated data but reuses older material, raising concerns about its originality. Given these factors, the freshness score is reduced.
Quotes check
Score: 5
Notes:
The article includes direct quotes attributed to the lawsuit, but no online matches were found for them. Without independently verifiable sources, the authenticity and credibility of these quotes remain questionable.
Source reliability
Score: 4
Notes:
The article originates from a niche publication, the Creative Learning Guild, which may not be widely recognized. The lead source appears to be summarising or rewriting content from other publications, including TechCrunch and Carrier Management, which raises concerns about source independence. The reliance on a single, potentially unverified source diminishes the overall reliability of the information presented.
Plausibility check
Score: 7
Notes:
The claims about authors suing AI companies over the use of pirated books for training models are plausible and have been reported by other reputable outlets. However, the lack of supporting detail from other reputable sources in this article raises concerns. The report lacks specific factual anchors, such as names, institutions, and dates, which makes it difficult to independently verify the claims. The tone and language used are consistent with the region and topic, but the lack of detailed information reduces the overall plausibility score.
Overall assessment
Verdict (FAIL, OPEN, PASS): FAIL
Confidence (LOW, MEDIUM, HIGH): MEDIUM
Summary:
The article presents a narrative about authors suing AI companies over the use of pirated books for training models. While the claims are plausible and have been reported by other reputable outlets, the article’s reliance on a single, potentially unverified source, lack of independent verification, and recycled content raise significant concerns about its credibility. The absence of independently verifiable quotes and supporting details further diminishes the article’s reliability. Given these issues, the article fails to meet the necessary standards for publication.
