
A Canadian blogger’s post about AI and copyright was automatically rewritten by an AI news bot, which then judged its own rewrite unfit for publication, faulting the credibility of the source material. The incident raises questions about automation, authorship, and the reliability of AI in editorial decisions.

An AI-driven news site has rewritten a Canadian blogger’s post about copyright and artificial intelligence, then marked the result as unfit for publication, a twist that has sharpened the debate over scraping, authorship and the reliability of machine-made editorial judgements.

The episode began when Hugh Stephens, writing on his personal blog, noticed a WordPress alert inviting him to approve what looked like a comment. Instead, it was a link to a story on London News, a site he says is generated by Noah News Service and run by HBM Advisory. The article, published the same day as his own post, covered the same underlying dispute between CanLII and Caseway, and Stephens said it appeared to mirror the structure and logic of his piece even though the wording had been changed. The site later identified the story as being “inspired by” his original post.

That distinction matters. Copyright law protects expression, not raw facts, and the U.S. Copyright Office has said works with sufficient human creativity can still qualify for protection even when AI is involved, while fully machine-generated material cannot. But Stephens argues that what happened here was less creative inspiration than automated rewriting of a copyrighted article, raising the familiar question of whether an AI system was fed a copied version of the original work before producing its own version. News organisations have been pressing similar concerns in legal disputes, including lawsuits alleging that AI companies used publishers’ content without permission.

What made the case more striking was the bot’s own assessment of the rewritten article. According to Stephens, London News graded the story on freshness, quotes, source reliability and plausibility, but still concluded that it should fail overall on credibility. The bot criticised the blog format, said the absence of direct links undermined transparency and suggested the piece did not meet standards for editorial indemnity. Stephens said he could live with some of the criticism, but found it odd that a machine was acting as both copier and judge.

His response also exposed a broader tension in online publishing. The National Post reported that, in a survey commissioned by News Media Canada, more than seven in 10 Canadians supported federal action to stop AI companies from taking and repackaging news content without permission or compensation. At the same time, researchers and media analysts have repeatedly warned that AI-generated text can be convincing while still containing errors, bias or unsupported claims, which is why fact-checking and cross-referencing remain essential. Detection tools exist, but Axios has reported that they are increasingly unreliable as synthetic content improves.

Stephens also noted that the site’s filtering seemed selective. Of the AI-related stories he reviewed on London News, some were approved, others failed, and one was marked conditional. He suggested that the system’s human oversight may have played a role in softening material that would have irritated commercial interests, though he acknowledged that he could not prove that. The broader point, for him, is that AI can be useful for surfacing themes and testing credibility, but it remains a poor substitute for judgment, context and accountability.

For Stephens, the irony is hard to miss: a bot that may have lifted his work then dismissed it as unreliable. For the wider publishing world, the episode lands in familiar territory. The industry is already fighting over who gets to train on whose content, how much human input makes AI-assisted work legally and ethically defensible, and whether machine scoring systems are themselves trustworthy enough to decide what counts as credible journalism.

Source Reference Map

Inspired by headline at: [1]

Sources by paragraph:

Source: Noah Wire Services

Noah Fact Check Pro

The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.

Freshness check

Score: 8

Notes:
The article was published on April 27, 2026, and discusses a recent incident involving AI rewriting a blog post. The earliest known publication date of similar content is April 20, 2026, when the London News article was published. The narrative appears original, with no evidence of recycling or republishing across low-quality sites. The content is based on a personal blog post, which typically warrants a high freshness score. No discrepancies in figures, dates, or quotes were found. The article includes updated data and does not recycle older material. Overall, the freshness score is high.

Quotes check

Score: 7

Notes:
The article includes direct quotes from the AI bot’s analysis of the blog post. These quotes appear to be original to this article, with no matches found in earlier material. However, the quotes cannot be independently verified, as they originate from the AI bot’s internal assessment. The lack of external verification sources raises concerns about the authenticity of the quotes. Therefore, the score is moderate.

Source reliability

Score: 6

Notes:
The narrative originates from a personal blog, which is not a major news organisation. While the author, Hugh Stephens, has expertise in international copyright issues, the blog’s content is not subject to the same scrutiny as mainstream media. The article references reputable sources but lacks direct links to these sources, raising concerns about transparency and verifiability. The source’s reliability is moderate due to these factors.

Plausibility check

Score: 8

Notes:
The claims made in the article are plausible and align with known issues regarding AI-generated content and copyright concerns. The narrative is consistent with industry trends and is covered by other reputable outlets. The report includes specific factual anchors, such as dates, institutions, and events. The language and tone are consistent with the region and topic. Overall, the plausibility score is high.

Overall assessment

Verdict (FAIL, OPEN, PASS): PASS

Confidence (LOW, MEDIUM, HIGH): MEDIUM

Summary:
The article presents a plausible and original narrative about an AI bot rewriting a blog post and assessing its credibility. While the freshness and plausibility scores are high, concerns about the verifiability of quotes and the reliability of the source lead to a medium confidence level. The lack of independent verification sources and the use of direct quotes from the AI bot’s internal assessment are notable concerns.
