Consumers, creators and policymakers are pushing AI makers to build stronger legal guardrails for name, image and likeness use, because when a tool lets you clone a face or voice by default, things get messy fast. Here’s what developers, talent and lawmakers are doing, and why an opt‑in approach matters.
Essential Takeaways
- Backlash was immediate: OpenAI’s Sora 2 drew rapid criticism for defaulting to allow use of real people’s likenesses, prompting policy changes and pledges to support federal rules.
- Federal fix is coming: The NO FAKES Act, reintroduced as a bipartisan bill, would create a national right of publicity for voice and visual likeness, reducing the patchwork of state laws.
- Practical guardrails: Prompt filtering, consent systems, context analysis and opt‑in defaults reduce misuse and help defend developers from secondary liability.
- Who needs to act: Developers should loop in IP and tech counsel early; performers, estates and creators should seek advice to protect their likenesses and monetise responsibly.
- Risk signals: Public figures, talent agencies and unions have voiced concrete harms (reputational, commercial and privacy) that policy and product design must address.
Why Sora 2 became the test case for likeness rights
When OpenAI launched Sora 2, the visual and voice‑replication features looked slick, and then celebrities and estates started spotting unauthorised recreations of their faces and voices. AP News detailed swift alarm among public figures, and talent agencies like Creative Artists Agency called the rollout risky for creators’ rights. The sensory jolt of seeing a convincing fake of someone you know in a short clip made the issue visceral, not abstract. That public outrage pushed OpenAI to backtrack from an opt‑out model to opt‑in controls, which is exactly the kind of product pivot lawyers recommend before regulators weigh in.
What the NO FAKES Act would change (and why it matters)
Legislators reintroduced the NO FAKES Act as a bipartisan solution to this problem, aiming to set a federal baseline for likeness protections and potentially pre‑empt some state laws. The Senate and House sponsors argue the bill balances innovation with creator control by recognising a federal right of publicity for voice and visual likeness. For developers, that means a single, nationwide standard could replace a confusing patchwork; for talent, it could give clearer avenues to block unauthorised digital replicas and to monetise authorised ones. The bill’s progress is worth watching because it will shape what “responsible defaults” actually look like in code.
Product fixes that actually reduce misuse (and are lawyer‑friendly)
There are clear technical and policy levers teams can flip today. Prompt filtering flags requests that target identifiable people; consent gates prevent use without explicit permission; context analysis separates newsworthy or educational uses from commercial ads; and opt‑in defaults put control in people’s hands. Industry lawyers tell developers these measures not only protect individuals but also create a stronger defence against secondary liability if someone abuses a tool. In short: build the safety net before the headline storm hits.
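To make those levers concrete, here is a minimal, hypothetical Python sketch of how an opt‑in consent gate, a crude prompt filter and a coarse context check might fit together. Every name in it (ConsentRegistry, KNOWN_PUBLIC_FIGURES, review_request) is an illustrative assumption rather than any vendor’s real API, and a production filter would rely on a maintained entity database and far more robust matching.

```python
from dataclasses import dataclass, field

# Illustrative only; a real system would query a maintained entity database.
KNOWN_PUBLIC_FIGURES = {"example actor", "example musician"}


@dataclass
class ConsentRegistry:
    """Opt-in store: a likeness may be used only if its owner has granted consent."""
    consented: set = field(default_factory=set)

    def grant(self, person: str) -> None:
        self.consented.add(person.lower())

    def has_consent(self, person: str) -> bool:
        return person.lower() in self.consented


def identify_people(prompt: str) -> list[str]:
    """Crude prompt filter: flag any known identifiable person named in the prompt."""
    lowered = prompt.lower()
    return [name for name in KNOWN_PUBLIC_FIGURES if name in lowered]


def review_request(prompt: str, registry: ConsentRegistry, commercial: bool) -> str:
    """Return 'allow', 'block' or 'escalate' for a generation request."""
    people = identify_people(prompt)
    if not people:
        return "allow"       # no identifiable person targeted
    if all(registry.has_consent(p) for p in people):
        return "allow"       # opt-in consent on file for everyone named
    if commercial:
        return "block"       # unconsented commercial use of a likeness
    return "escalate"        # possibly newsworthy/educational -> human review


if __name__ == "__main__":
    registry = ConsentRegistry()
    registry.grant("Example Musician")  # explicit opt-in
    print(review_request("ad featuring example actor", registry, commercial=True))     # block
    print(review_request("ad featuring example musician", registry, commercial=True))  # allow
```

The ordering is the point: identify who a request targets, check opt‑in consent first, and only then reason about context, with ambiguous cases escalated to a human rather than silently allowed.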
Industry reaction: creators, agencies and countries aren’t waiting
Hollywood unions and agencies have been loud: SAG‑AFTRA and major agencies warned of mass misappropriation without guardrails. Meanwhile, international pushes complicated the picture, with reports of creators abroad raising alarms about racist or harmful AI clones and countries like Japan pushing back on some OpenAI moves. That global mix means compliance teams must think across jurisdictions, not just US states. For creators, the takeaway is simple: monitor where your likeness is used, opt out or license proactively, and get counsel who understands both IP and reputational risk.
How to choose the right approach for your product or portfolio
If you’re a developer shipping a generative tool, start with legal input during design sprints: pick opt‑in as the safer default, layer in prompt and content filters, and provide granular controls for IP owners. If you’re a creator, catalogue what’s unique about your brand (voice, mannerisms, signature looks) and consult an IP lawyer about contracts and potential statutory remedies. For both camps, transparency is key: clear labelling of synthetic content and straightforward takedown or licensing pathways cut down on harm and build trust.
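As a rough sketch of what “opt‑in by default, granular controls and clear labelling” could look like in code, the hypothetical Python below models per‑rights‑holder permissions and stamps every output as synthetic. The field names, takedown contact and label format are assumptions for illustration, not a standard or any provider’s actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class LikenessControls:
    """Granular permissions a rights holder can set for their own likeness."""
    allow_voice: bool = False        # opt-in: everything stays off until explicitly granted
    allow_visual: bool = False
    allow_commercial: bool = False
    licence_terms_url: str = ""      # where a licence, if any, is documented


@dataclass
class ProductPolicy:
    """Product-level defaults plus per-person overrides."""
    opt_in_default: bool = True      # the safer default discussed above
    controls: dict = field(default_factory=dict)   # rights-holder id -> LikenessControls


def label_output(clip_id: str, model: str) -> dict:
    """Attach a plain 'synthetic content' label to generated media metadata."""
    return {
        "clip_id": clip_id,
        "synthetic": True,
        "generator": model,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "takedown_contact": "rights@example.test",   # hypothetical takedown pathway
    }


if __name__ == "__main__":
    policy = ProductPolicy()
    policy.controls["creator-123"] = LikenessControls(allow_visual=True)  # explicit grant only
    print(label_output("clip-001", "example-video-model"))
```

Keeping the permissions object small and explicit makes it easy to audit what a given rights holder actually agreed to, and the metadata label gives downstream platforms something concrete to surface to viewers.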
It’s a small change in settings that can make every generated clip safer and more respectful.
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score: 10
Notes: The narrative is current, with the latest developments in AI likeness rights and legal guardrails being reported in October 2025. ([theguardian.com](https://www.theguardian.com/technology/2025/oct/21/bryan-cranston-sora-2-openai?utm_source=openai))
Quotes check
Score: 10
Notes: Direct quotes from Bryan Cranston and other stakeholders are unique to this report, with no earlier matches found. ([theguardian.com](https://www.theguardian.com/technology/2025/oct/21/bryan-cranston-sora-2-openai?utm_source=openai))
Source reliability
Score: 10
Notes: The narrative originates from Bloomberg Law, a reputable organisation known for its legal reporting.
Plausibility check
Score: 10
Notes: The claims about OpenAI’s Sora 2 and the NO FAKES Act are consistent with other reputable sources, including The Guardian and Investing.com. ([theguardian.com](https://www.theguardian.com/technology/2025/oct/21/bryan-cranston-sora-2-openai?utm_source=openai))
Overall assessment
Verdict (FAIL, OPEN, PASS): PASS
Confidence (LOW, MEDIUM, HIGH): HIGH
Summary: The narrative is current, originates from a reputable source, and presents unique quotes and consistent claims, with no signs of recycled content or disinformation.

