Jennifer Rothman, a University of Pennsylvania law professor and leading scholar of the right of publicity, warns that current legislative efforts to regulate AI-generated digital replicas risk creating lasting, transferable property rights over individuals’ voice and likeness, potentially infringing on personal liberty and privacy.

There were so many things Jennifer Rothman wanted to ask me before our conversation began. The Nicholas F. Gallicchio Professor of Law at the University of Pennsylvania Carey Law School, Rothman is an international authority on the right of publicity and a scholar of “the ways intellectual property law is employed to turn people into a form of property.” Her concern is simple and stark: as deepfakes, voice clones and digital replicas proliferate, legislative responses risk locking people into perpetual, transferable forms of legal ownership rather than protecting them from exploitation. [1]

Rothman’s alarm is not abstract. She points to a viral April 2023 incident in which a song called “Heart on My Sleeve” used AI-generated vocals that mimicked Drake and the Weeknd, spurring music-industry litigation and a political response that produced competing bills in Congress. In testimony before a House subcommittee in February 2024, she warned that draft legislative efforts “allow another person, or most likely a company, to own or control another person’s name, voice, and likeness forever and in any context.” Permitting such ownership “in perpetuity,” she argued, risks violating fundamental liberty interests. [1][7]

That tension sits at the heart of contemporary policy debates. In July 2024 senators introduced the NO FAKES Act, a bipartisan attempt to create a federal right protecting individuals’ voice and visual likeness from unauthorised digital replicas. Proponents frame it as a necessary national standard; critics, including Rothman, argue it could instead institutionalise a market in digital replicas by creating transferable rights and long-term licences that enable uses the depicted person never specifically authorised. The bill’s sponsors pitched it as protecting creators and the public; Rothman says it still fails two critical tests: meaningful, person-specific authorisation and protection against public deception. [6][1]

The legal landscape is already crowded at state and federal levels. Since 2024 a string of state measures has targeted non-consensual explicit deepfakes and deceptive AI media: Pennsylvania’s Senate passed a prohibition on sexually explicit deepfakes; New Jersey enacted criminal penalties for producing or sharing deceptive AI media; and Minnesota proposed bans on “nudification” tools. At the federal level, the Take It Down Act, signed into law in May 2025, criminalises publishing intimate images without consent, including AI-generated deepfakes, and requires platforms to remove such content within 48 hours of a victim’s notice. This mosaic of statutes reflects intense policy activity but also raises questions about preemption, conflicts between state regimes and any future federal standard, and potential First Amendment trade-offs. [4][5][3][2]

Rothman stresses that litigation already fills some gaps; recent New York suits by voice actors under right-of-publicity and privacy laws indicate state law can remedy certain harms. But she warns that law alone is only part of the answer: practical frictions such as takedown logistics, whack-a-mole reposting across platforms, and resource disparities between public figures and private individuals mean many victims will struggle to obtain timely relief. The problem is both legal and infrastructural: removing an offending clip from one site rarely stops it from appearing on others, and victims without high commercial value lack the economic incentive or means to pursue lengthy litigation. [1][7]

Beyond statutory and common-law doctrine, Rothman highlights how copyright doctrine and private contracts can “propertize” people in ways that complicate remedy. Photographs, audio recordings and motion pictures are already copyrightable, and copyright holders can make and authorise derivative works. As AI systems train on and synthesise those captured attributes, copyright and contract terms (terms of service that grant platforms broad licences, for instance, or studio agreements assigning digital-replica rights) can entrench third-party control over an individual’s likeness and voice. Rothman warns of a coming “identity thicket” in which overlapping rights, some remaining with the person and some alienable, create legal and market chaos. [1]

The practical stakes are illustrated by a new generation of tools. OpenAI’s Sora 2, released with features allowing users to upload likenesses and generate videos, prompted debate over whether an “end-to-end” control promise is enforceable in practice. Rothman recounts instances of unauthorised recreations, an apparent Jenna Ortega replication as a TV character among them, and reports that students have been able to upload teachers’ images to produce mocking videos. Platform operators argue they are merely enabling creativity rather than hosting infringing content, but Rothman notes litigation is already probing whether AI companies can be treated as speakers or held liable for negligent system design that permits defamatory or non-consensual uses of identity. [1]

A particularly fraught question is transferability: should rights over a living person’s name, likeness or voice be forever alienable by contract? Rothman argues they should not. She compares such transfers to impermissible bargains: “We don’t let people sell their votes. We don’t let people sell themselves into slavery.” She urges that living persons retain non-transferable control over how their identities are used. Allowing perpetual or transferable licences to digital replicas, she warns, would entrench market power in studios, managers or platforms and could deprive future performers of opportunities if legacy replicas are reused in place of new human labour. [1]

At the same time, Rothman recognises competing values. Robust publicity and privacy regimes risk chilling legitimate speech, political commentary and documentation of wrongdoing if platforms or litigants use identity claims to demand removal of authentic material. She cautions lawmakers to balance protections against exploitation with breathing room for parody, critique and public-interest reporting, an equilibrium that is difficult to achieve in statutory drafting and even harder for platforms applying rules across millions of pieces of content. [1]

Rothman concludes with a narrowly pragmatic prescription: “do no harm.” Any federal statute should prioritise two core protections: clear, demonstrable authorisation by the person depicted for the specific use, and safeguards against deceiving the public. It should not create a broad federal property right that expands the market for digital replicas. She urges a mix of legal restraint, litigation-tested doctrines, platform cooperation on detection and takedown, and technological tools for authentication, while acknowledging the arms race inherent in detection technologies and the uncertainty over what the public will accept in AI-generated cultural goods. [1]

The question Rothman frames is existential as well as technical: who will control a person’s image and voice in an era where replication is cheap and distribution is instant? The policy choices made now, whether through state experiments, a federal NO FAKES-style law, or incremental case law, will shape the terrain in which identity, commerce and democracy intersect. For Rothman the test is moral and constitutional as much as economic: laws must protect persons’ liberty to control their own identity and prevent the formation of durable private property regimes over what it means to be human. [1][6][7]

Reference Map:

  • [1] (The Penn Gazette) – Paragraphs 1–3, 5–10
  • [2] (AP News) – Paragraph 4
  • [3] (AP News) – Paragraph 4
  • [4] (AP News) – Paragraph 4
  • [5] (AP News) – Paragraph 4
  • [6] (Senators’ press release on NO FAKES Act) – Paragraphs 3, 10
  • [7] (U.S. House testimony) – Paragraphs 2, 5

Source: Noah Wire Services

Noah Fact Check Pro

The draft above was created using the information available at the time the story first
emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed
below. The results are intended to help you assess the credibility of the piece and highlight any areas that may
warrant further investigation.

Freshness check

Score:
8

Notes:
The narrative presents recent developments, including Jennifer Rothman’s testimony in February 2024 and the introduction of the NO FAKES Act in July 2024. The article was published on December 24, 2025, indicating timely reporting. However, the discussion of the “Heart on My Sleeve” incident from April 2023 suggests some recycled content. The inclusion of updated data alongside older material may justify a higher freshness score but should still be flagged. ([congress.gov](https://www.congress.gov/118/meeting/house/116778/witnesses/HHRG-118-JU03-Wstate-RothmanJ-20240202.pdf?utm_source=openai))

Quotes check

Score:
9

Notes:
Direct quotes from Jennifer Rothman are consistent with her known statements in her February 2024 testimony. No significant variations or earlier uses of these quotes were found, indicating originality. ([congress.gov](https://www.congress.gov/118/meeting/house/116778/witnesses/HHRG-118-JU03-Wstate-RothmanJ-20240202.pdf?utm_source=openai))

Source reliability

Score:
7

Notes:
The narrative originates from The Penn Gazette, a publication associated with the University of Pennsylvania. While it is a reputable institution, the publication’s specific editorial standards and independence are not widely known, introducing some uncertainty.

Plausibility check

Score:
8

Notes:
The claims about Jennifer Rothman’s concerns regarding AI-generated content and the legislative responses align with known events and her public positions. The article’s tone and language are consistent with the subject matter and region, suggesting authenticity. However, the lack of supporting details from other reputable outlets on some claims slightly reduces the score. ([congress.gov](https://www.congress.gov/118/meeting/house/116778/witnesses/HHRG-118-JU03-Wstate-RothmanJ-20240202.pdf?utm_source=openai))

Overall assessment

Verdict (FAIL, OPEN, PASS): PASS

Confidence (LOW, MEDIUM, HIGH): MEDIUM

Summary:
The narrative provides timely and original reporting on Jennifer Rothman’s perspectives regarding AI-generated content and related legislation. While the source’s reliability is somewhat uncertain, the content’s plausibility and the originality of the quotes support the overall assessment.
