Sir Michael Caine and Matthew McConaughey have licensed their voices for commercial AI use, sparking ongoing debates over consent, digital legacy, and risks such as AI fraud. While these partnerships aim to expand storytelling and provide control, concerns about unapproved voice cloning and malicious misuse highlight the need for stringent safeguards.
Sir Michael Caine has agreed to make his voice available for commercial use through the AI audio firm ElevenLabs, a move that puts one of Britain’s most instantly recognisable narrators into a library of purchasable vocal personalities. According to reporting by TechRadar and AP News, the arrangement means Caine’s voice can be used via ElevenLabs’ ElevenReader app and through the company’s new Iconic Marketplace.
The deal comes alongside a similar arrangement with Matthew McConaughey, who is both a user of and an investor in the company; ElevenLabs says McConaughey will also use its tools to produce Spanish-language audio versions of his newsletter. Industry coverage explains that the partnerships are being showcased as a way to expand access to storytelling tools while offering performers a means of control and remuneration when their voices are used in synthetic form.
ElevenLabs has positioned its Iconic Marketplace as a permissioned, commercial alternative to unauthorised voice cloning, pitching the platform as one where rights holders retain ownership, can negotiate compensation and approve each use. But that framework is complicated by the presence on the platform of voices linked to people who cannot personally consent. Reporting has noted that the catalogue includes vocal recreations associated with historical figures and deceased artists, some authorised by estates or third parties.
That fact has prompted ethical debate about digital legacy and the authority to speak for the dead. Critics argue that decision-making by estates or intermediaries does not resolve questions about what it means to reproduce a voice whose owner is no longer alive to accept or refuse particular uses. Proponents counter that licensing provides a transparent, compensatory route that is preferable to the clandestine cloning that has already proliferated online.
Beyond questions of consent, security experts warn of concrete harms as synthetic speech becomes harder to distinguish from the real thing. TechRadar's reporting highlighted growing concern about "vishing" (voice phishing), fraud conducted using AI-generated voices, and security firms have warned of rising losses associated with such scams. AP News has also chronicled earlier misuses of voice synthesis technology, which prompted ElevenLabs to adopt stricter safeguards after a high-profile abuse.
ElevenLabs says it has introduced measures intended to prevent unauthorised cloning of celebrities and to limit misuse. According to AP News, the company changed its policies and technical controls following incidents in which its tools were misapplied; industry observers nevertheless say that no set of protections can eliminate the risk entirely as the technology spreads.
The conversation over licensed celebrity voices therefore sits at the intersection of commerce, culture and security. For artists such as Caine and McConaughey there is clear commercial and expressive value in extending their vocal reach; for audiences and regulators there is a rising need to weigh the benefits of accessible, tailored audio against an erosion of trust in what we hear. How that balance is struck may shape whether synthetic celebrity voices become a new creative resource or another source of social friction.
Source: Noah Wire Services
Noah Fact Check Pro
The draft above was created using the information available at the time the story first
emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed
below. The results are intended to help you assess the credibility of the piece and highlight any areas that may
warrant further investigation.
Freshness check
Score: 7
Notes:
The article was published 5 days ago, which is recent. However, similar reports have appeared over the past few months, the earliest known publication dating from 4 months ago. ([theguardian.com](https://www.theguardian.com/culture/2025/nov/11/matthew-mcconaughey-michael-caine-ai-voice?utm_source=openai)) This suggests some recycled content. The article references multiple sources, including TechRadar and AP News, indicating a mix of original reporting and aggregation; the presence of older material alongside updated data raises concerns about freshness.
Quotes check
Score: 6
Notes:
The article includes direct quotes from Michael Caine and Matthew McConaughey. However, these quotes appear in earlier reports from 4 months ago, indicating potential reuse, and no online matches were found for some of them, making independent verification challenging. The lack of independently verifiable quotes raises concerns about authenticity.
Source reliability
Score: 6
Notes:
The article originates from TechRadar, a reputable source. However, it aggregates content from multiple outlets, including AP News and The Guardian, and that reliance on aggregated material raises concerns about originality and potential bias.
Plausibility check
Score: 8
Notes:
The claims that Michael Caine and Matthew McConaughey have licensed their voices to ElevenLabs are plausible and align with previous reports. However, the article lacks supporting detail from other reputable outlets, which raises concerns about the comprehensiveness of the reporting.
Overall assessment
Verdict (FAIL, OPEN, PASS): FAIL
Confidence (LOW, MEDIUM, HIGH): MEDIUM
Summary:
The article raises several concerns: potential recycling of content, quotes that cannot be independently verified, and heavy reliance on aggregated sources, all of which undermine its credibility. That dependence on aggregated material also calls into question the independence of the verification sources, and the absence of supporting detail from other reputable outlets leaves the comprehensiveness of the reporting in doubt.

