A federal judge in California has allowed authors to broaden their copyright challenge against Databricks to include newer models such as DBRX, amid claims of unauthorised training on protected works, a significant procedural win in AI copyright litigation.

A federal judge in California has allowed authors in a copyright suit against Databricks and its AI subsidiary Mosaic ML to broaden their case to include newer DBRX models, after they said the company had used protected books without permission to train the systems. According to the court’s June 25, 2025 order in In re Mosaic LLM Litigation, the plaintiffs were also permitted to update the catalogue of works they say were copied. The ruling marked a procedural win for the copyright holders, even though the case had already been running for more than a year.

The dispute began with claims that Mosaic ML had trained its MPT large language models on datasets containing the authors’ works, with Databricks accused of being responsible as the parent company. After DBRX was released, the plaintiffs asked to add a direct infringement claim tied to that model as well. The court accepted that the request came late, but said the case was still in active discovery and the delay, by itself, did not justify shutting the amendment out.

Databricks argued the authors were acting in bad faith, but the judge found no clear sign of strategic delay or dishonesty. The court also rejected the company’s claim that the change would unfairly reshape the litigation, noting that discovery was already touching on DBRX and that the new allegations did not appear to require a wholly separate case theory. On the question of whether the new claims were too thin to survive, the court said that issue was better addressed after amendment rather than used to block it at the outset.

The litigation did not end there. Bloomberg Law reported in August 2025 that the DBRX claims were later dismissed because the allegations were too vague, while the original MPT-related claims remained alive. That later ruling echoed a wider pattern in AI copyright disputes: courts are increasingly being asked to decide whether plaintiffs have said enough, with enough model-specific detail, to get past early motions to dismiss. Similar fights have also been playing out in cases involving Meta and Nvidia, while Anthropic’s massive settlement with authors underscored the scale of financial exposure these disputes can create.

Source: Noah Wire Services

Noah Fact Check Pro

The draft above was created using the information available at the time the story first
emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed
below. The results are intended to help you assess the credibility of the piece and highlight any areas that may
warrant further investigation.

Freshness check

Score:
8

Notes:
The article references a June 25, 2025 court order, and the latest related development it reports is from August 2025, suggesting the content is relatively fresh. However, the article was published more than seven days before the date of this check, so the freshness score is reduced.

Quotes check

Score:
7

Notes:
The article includes direct quotes attributed to the court’s June 25, 2025 order. However, the earliest known usage of these quotes is the article itself, and no external sources could be found confirming them, so independent verification is challenging and the score is reduced.

Source reliability

Score:
6

Notes:
The article is published on a legal blog by attorney Evan Brown. While the author is a legal professional, the blog is not a major news organisation, which may affect the perceived reliability. Additionally, the article appears to be summarising a court order, which may limit its originality.

Plausibility check

Score:
8

Notes:
The article discusses a court order allowing authors to expand a copyright case against Databricks’ new AI models. This aligns with known legal actions in the AI industry, making the claims plausible. However, the article’s reliance on a single source without independent verification raises some concerns.

Overall assessment

Verdict (FAIL, OPEN, PASS): FAIL

Confidence (LOW, MEDIUM, HIGH): MEDIUM

Summary:
The article provides information about a court order allowing authors to expand a copyright case against Databricks’ new AI models. However, the reliance on a single source without independent verification, the inability to confirm the originality of the quotes, and the article’s age (over 7 days old) raise significant concerns. Given these issues, the content does not meet the necessary standards for publication.

© 2026 AlphaRaaS. All Rights Reserved.