New York has become the first US state to implement laws requiring transparency in AI advertising and protection of deceased performers’ likenesses, prompting a federal challenge from President Trump amid a rising policy battle over AI governance.

New York has moved to the front of U.S. state efforts to police generative artificial intelligence, passing twin laws that for the first time force advertisers to disclose when “synthetic AI performers” are used in commercials and bar the use of deceased performers’ likenesses without estate consent. According to the original report, Governor Kathy Hochul signed the measures at SAG-AFTRA’s New York headquarters on December 11, 2025. [1]

Legislation S.8420-A/A.8887-B requires advertisers to “conspicuously disclose” when synthetic AI performers appear, while S.8391/A.8882 prohibits using a dead performer’s likeness without estate consent. “By signing these bills today, we are enacting common-sense laws that will ensure we are fully transparent when using images generated by artificial intelligence,” Hochul said in a statement. The bills were backed by SAG-AFTRA and framed as protections for performers in New York’s large film and television industry. [1]

The union hailed the measures as the product of bargaining and advocacy that began with its 2023 strike settlement with studios. SAG-AFTRA’s national leadership described the protections as “the direct result of artists, lawmakers and advocates coming together to confront the very real and immediate risks posed by unchecked A.I. use,” reflecting the union’s sustained effort, also visible in earlier California legislation and in federal proposals such as the bipartisan “No Fakes Act”, to secure consent and compensation around digital replicas. Industry data and past settlements show the issue has become central to labour negotiations in entertainment. [1]

The state action arrives on the same day President Donald Trump signed an executive order directing the Justice Department to challenge state AI laws on federal pre-emption grounds and creating an AI Litigation Task Force to identify laws the administration views as conflicting with national policy. “My administration must act with the Congress to ensure that there is a minimally burdensome national standard, not 50 discordant State ones,” Trump wrote in the order. Legal experts told reporters the tactic will face significant courtroom tests because the Constitution gives states broad authority to legislate where federal law is silent. [1][3]

The order also expressly threatens to withhold federal funding from states whose AI rules the administration deems “onerous,” pointing to the Broadband Equity Access and Deployment (BEAD) programme as leverage. The administration empowered cabinet officials to review state laws and link compliance to eligibility for multibillion-dollar broadband funds; Reuters reported that the Commerce Secretary would be authorised to evaluate state AI rules and could cut access to a $42 billion broadband fund for non‑compliant states. That threat sets up a high‑stakes fiscal lever in what is rapidly becoming a federal–state regulatory showdown. [1][4]

Tech executives and legal observers are divided. Researchers warned that removing an individual’s likeness from models is “extremely difficult” once training pipelines exist, a limitation cited by one AI startup co‑founder as the reason firms such as Adobe choose licensed datasets. “Even so, deepfake content will continue to circulate on social media and other lightly regulated channels,” the researcher said, noting the law’s greatest immediate bite will be on large advertisers and creative firms that will become “far more cautious about using generative AI.” At the same time, corporate leaders including Anthropic’s chief executive have argued against blunt federal moratoria on state action, saying a decade‑long ban on state regulation would be “too blunt” and could create dangerous gaps without a substitute national transparency standard. Dozens of state attorneys general have likewise urged Congress not to block state rules, warning of “disastrous consequences” if states lack tools to protect residents. [1][5][6]

New York’s new requirements come amid a patchwork of state experiments: other states have pursued disclosure and safety regimes ranging from algorithmic pricing transparency to criminal restrictions on certain AI uses, and California has enacted broad transparency and safety mandates due to take effect in 2026. The result is an accelerating policy contest between state legislatures seeking to protect consumers, workers and civil‑rights outcomes, and a federal administration pushing for a single, less burdensome national standard: a contest that will likely be litigated and negotiated in Congress and the courts in the months ahead. [2][6][4]

📌 Reference Map:

  • [1] (Decrypt) – Paragraphs 1–6
  • [2] (The New York Times) – Paragraph 7
  • [3] (The Washington Post) – Paragraph 4
  • [4] (Reuters) – Paragraphs 5, 7
  • [5] (Reuters) – Paragraph 6
  • [6] (Reuters) – Paragraphs 6, 7

Source: Noah Wire Services

Noah Fact Check Pro

The draft above was created using the information available at the time the story first
emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed
below. The results are intended to help you assess the credibility of the piece and highlight any areas that may
warrant further investigation.

Freshness check

Score: 10

Notes:
The narrative is fresh, with the legislation signed on December 11, 2025, and no prior reports found.

Quotes check

Score: 10

Notes:
No direct quotes were identified in the provided text, indicating potential originality.

Source reliability

Score: 7

Notes:
The narrative originates from Decrypt, a reputable outlet, but the lack of direct quotes and reliance on secondary sources may affect reliability.

Plausibility check

Score: 9

Notes:
The claims align with recent legislative actions in New York and federal responses, though some details are unverified.

Overall assessment

Verdict (FAIL, OPEN, PASS): OPEN

Confidence (LOW, MEDIUM, HIGH): MEDIUM

Summary:
The narrative presents fresh information about New York’s AI advertising disclosure legislation and its federal implications. However, the absence of direct quotes and reliance on secondary sources raise concerns about the report’s reliability. Further verification is needed to confirm the accuracy of specific claims.

© 2025 Engage365. All Rights Reserved.