The New York Times has filed a lawsuit against Perplexity AI, alleging unauthorised copying of its articles to train and operate generative AI products, a case that highlights broader debates between publishers and tech firms over copyright and responsible AI development.
The New York Times has sued Perplexity AI in the U.S. District Court for the Southern District of New York, alleging the start‑up copied, distributed and displayed millions of Times articles without permission to train and operate its generative AI products and to power its “answer engine”. The complaint, filed on Friday, says Perplexity’s systems reproduced paywalled and other proprietary material and that the company’s outputs sometimes fabricated information, so‑called “hallucinations”, which were then falsely attributed to the newspaper alongside its trademarks. [1][2][3]
According to the original report, the Times is seeking damages, injunctive relief and other equitable remedies intended to halt Perplexity’s alleged unauthorised use of its content and to remedy any ongoing harm to its business and reputation. The lawsuit follows a cease‑and‑desist letter the paper sent more than a year earlier. [1][2]
The Times emphasised its stance on responsible AI use in a statement: “While we believe in the ethical and responsible use and development of AI, we firmly object to Perplexity’s unlicensed use of our content to develop and promote their products,” NYT spokesperson Graham James said. The paper’s action is part of a broader wave of litigation by publishers and content owners contesting how generative AI systems ingest and re‑present copyrighted material. [1][2][3]
Perplexity, valued at about $20 billion, has denied that it builds foundation models by scraping content and has said its approach is to index publicly available web pages and provide factual citations. The company faces a string of similar suits from publishers and platforms including the Chicago Tribune, Encyclopedia Britannica, News Corp’s Dow Jones and the New York Post, and Reddit, among others. Those plaintiffs allege that Perplexity’s retrieval‑augmented generation systems reproduce their journalism and reference material verbatim or without a licence. [1][2][4][5][6][7]
Industry legal filings and recent coverage show the dispute sits at the centre of rising tensions between news organisations, which depend on web traffic and subscription revenue, and AI companies seeking to offer concise, sourced answers by drawing on third‑party content. Plaintiffs argue that such services divert visits and ad or subscription income, while some AI firms counter that their products improve discoverability and cite sources. [4][5][6]
Perplexity’s head of communications, Jesse Dwyer, has publicly dismissed publisher lawsuits as a failed tactic repeatedly deployed against emerging technologies, and the company maintains it is operating within legal bounds. The litigation will test unresolved questions about whether and when indexing or summarising web content for AI responses amounts to infringement, and whether attribution and citation practices affect those legal assessments. [1][2][6]
As the case proceeds in federal court, it will join other high‑profile suits that could shape commercial practice and regulation in the AI and publishing sectors, and may influence how companies balance access to information with intellectual property rights and publisher revenue models. Observers say rulings in these matters are likely to set precedents on scraping, training data and the responsibilities of AI answer engines. [2][3][4][5]
📌 Reference Map:
- [1] (SAMAA) – Paragraph 1, Paragraph 2, Paragraph 3, Paragraph 6
- [2] (Reuters) – Paragraph 1, Paragraph 2, Paragraph 3, Paragraph 4, Paragraph 6, Paragraph 7
- [3] (The Guardian) – Paragraph 1, Paragraph 3, Paragraph 7
- [4] (Reuters) – Paragraph 4, Paragraph 5, Paragraph 7
- [5] (CNBC) – Paragraph 4, Paragraph 5, Paragraph 7
- [6] (TechCrunch) – Paragraph 4, Paragraph 6
- [7] (Britannica) – Paragraph 4, Paragraph 5
Source: Noah Wire Services
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score: 10
Notes: The narrative is current, with the lawsuit filed on December 5, 2025. ([reuters.com](https://www.reuters.com/legal/litigation/new-york-times-sues-perplexity-ai-infringing-copyright-works-2025-12-05/?utm_source=openai))
Quotes check
Score: 10
Notes: Direct quotes from NYT spokesperson Graham James and Perplexity’s head of communications Jesse Dwyer are consistent across reputable sources, indicating originality. ([reuters.com](https://www.reuters.com/legal/litigation/new-york-times-sues-perplexity-ai-infringing-copyright-works-2025-12-05/?utm_source=openai))
Source reliability
Score: 10
Notes: The narrative originates from reputable organisations, including Reuters and The Guardian, enhancing its credibility. ([reuters.com](https://www.reuters.com/legal/litigation/new-york-times-sues-perplexity-ai-infringing-copyright-works-2025-12-05/?utm_source=openai))
Plausibility check
Score: 10
Notes: The claims are plausible, with multiple reputable sources reporting on the lawsuit and similar legal actions against Perplexity AI. ([reuters.com](https://www.reuters.com/legal/litigation/new-york-times-sues-perplexity-ai-infringing-copyright-works-2025-12-05/?utm_source=openai))
Overall assessment
Verdict (FAIL, OPEN, PASS): PASS
Confidence (LOW, MEDIUM, HIGH): HIGH
Summary: The narrative is fresh, with consistent and original quotes from reliable sources, and the claims are plausible and well-supported by multiple reputable organisations.
