
The US Supreme Court’s refusal to hear Thaler v. Perlmutter confirms that works created solely by artificial intelligence cannot qualify for copyright, emphasising the importance of human involvement in creative works and reshaping security and IP governance.

The Supreme Court’s refusal to take up Thaler v. Perlmutter sends a clear message: under current US law, works created entirely by artificial intelligence do not qualify for copyright protection without meaningful human authorship. The D.C. Circuit had already upheld the Copyright Office’s refusal to register an image generated solely by AI, and the high court’s decision not to intervene on 2 March 2026 leaves that ruling intact.

That matters well beyond the courtroom. The Copyright Office has been examining AI and copyright since early 2023, gathering more than 10,000 public comments after launching its inquiry and then publishing a two-part report series, including a January 2025 section focused on the copyrightability of generative AI outputs. Its position, reinforced by the courts, is that copyright still turns on human creativity, not on the machine that assembled the final work.

The practical distinction is between AI as a tool and AI as the effective creator. If a person uses generative systems to support a work and then applies substantial editorial judgment, rewrites the material or combines outputs into a distinctly human-curated expression, copyright may still attach to the finished product. But a simple prompt followed by direct publication is far less likely to meet the standard, because the law continues to require authorship by a human being.

For security leaders, the issue is no longer just legal theory. Companies are increasingly using AI to draft text, create images and produce other assets that they may later want to license, protect or enforce. If those materials are generated with too little human involvement, they may be harder to defend in a dispute, and a rival or infringer could potentially challenge ownership by pointing to the AI-heavy creation process. That makes AI use a matter of intellectual property governance as much as innovation.

The result is an expanded role for chief information security officers. Rather than standing outside the creative process, security teams may need visibility into how content is produced, whether prompts, edits and approvals are being documented, and whether so-called shadow AI is exposing the company to legal and operational risk. In that sense, the latest court ruling strengthens the argument that AI oversight belongs not only in legal and product teams, but in the broader security and risk function as well.


Source: Noah Wire Services

Noah Fact Check Pro

The draft above was created using the information available at the time the story first
emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed
below. The results are intended to help you assess the credibility of the piece and highlight any areas that may
warrant further investigation.

Freshness check

Score:
8

Notes:
The article references the Supreme Court’s denial of certiorari in Thaler v. Perlmutter on 2 March 2026, which is recent. However, the article was published on 21 April 2026, indicating a delay of over a month. This delay is significant in the fast-evolving field of AI and copyright law, potentially affecting the relevance and accuracy of the information presented. ([mayerbrown.com](https://www.mayerbrown.com/en/insights/publications/2026/03/supreme-court-denies-review-in-ai-authorship-case?utm_source=openai))

Quotes check

Score:
7

Notes:
The article includes direct quotes from Dr. Stephen Thaler, such as his statement that he neither prompted the AI system nor made any further edits or alterations to the final AI-generated image. While these quotes are attributed to Dr. Thaler, they cannot be independently verified through the provided sources. The lack of verifiable sources for these quotes raises concerns about their authenticity. ([mayerbrown.com](https://www.mayerbrown.com/en/insights/publications/2026/03/supreme-court-denies-review-in-ai-authorship-case?utm_source=openai))

Source reliability

Score:
6

Notes:
The article is published on Klogix Security’s blog, a company specialising in cyber risk consulting. While the company is reputable within its niche, it is not a major news organisation. This raises concerns about the independence and potential bias of the source. Additionally, the article heavily relies on its own analysis and does not provide links to primary sources or external references, which diminishes its credibility.

Plausibility check

Score:
7

Notes:
The article discusses the Supreme Court’s denial of certiorari in Thaler v. Perlmutter, a real and recent case. However, the article’s analysis and conclusions are based on the author’s interpretation and are not corroborated by independent sources. The lack of supporting evidence from other reputable outlets makes the claims less reliable.

Overall assessment

Verdict (FAIL, OPEN, PASS): FAIL

Confidence (LOW, MEDIUM, HIGH): MEDIUM

Summary:
The article presents information on the Supreme Court’s denial of certiorari in Thaler v. Perlmutter, but it is published over a month after the event, contains unverifiable quotes, relies on a potentially biased source, and lacks independent verification. These factors significantly undermine its credibility and reliability.



© 2026 Engage365. All Rights Reserved.