
Japan’s Justice Ministry is moving to recognise voices as protectable under existing law against unauthorised use by generative AI, as industry advocates push for stronger safeguards amid rising deepfake concerns.

Japan’s Justice Ministry has moved a step closer to recognising voices as something that can be protected under existing law when they are copied or manipulated by generative artificial intelligence. An expert panel has agreed that voices should be treated in the same legal family as portrait rights and publicity rights, and it aims to draw up guidance this summer on what counts as unlawful use and how damages might be assessed. According to reports from The Japan Times and Nippon.com, the panel is working within current civil law rather than drafting an entirely new category of protection.

At its first meeting on Friday, the panel discussed how to respond to the unauthorised use of voices, but no final rules were set. State Minister of Justice Hidehiro Mitani said at the opening of the session that the harm caused by unauthorised voice use can be serious, while also warning that it would be too onerous to expect voice actors and others to fight every case in court as AI develops so quickly. The ministry is now expected to continue examining how existing rights can be applied to synthetic voices.

The issue has become more urgent as deepfake audio and celebrity voice cloning spread. Japan’s Justice Ministry had already announced a study panel to look at civil liability for the unauthorised use of likenesses and voices, including how tort law should be interpreted in cases involving AI-generated content. That review is also expected to focus on the standards for illegal conduct and the calculation of damages, reflecting concern that current legal tools may not be enough on their own.

The debate has been sharpened by efforts within the voice-acting industry to push back against unauthorised AI use. In late 2024, several voice actors launched the No More Mudan Seisei AI group to defend performers from the use of their voices without consent. Voice actor Yuki Kaji has also been building his own answer to the technology through Soyogi Fractal, a voice-AI project he developed after saying he was worried about copyright issues around voices. Kaji announced a new company, FRACTAL, on April 9, 2026, with voice AI and voice actor management as its two main businesses.


Source: Noah Wire Services

Noah Fact Check Pro

The draft above was created using the information available at the time the story first
emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed
below. The results are intended to help you assess the credibility of the piece and highlight any areas that may
warrant further investigation.

Freshness check

Score: 8

Notes:
The article reports on a recent development from April 24, 2026, concerning Japan’s Justice Ministry’s expert panel addressing the unauthorized use of celebrity voices by generative AI. ([japantimes.co.jp](https://www.japantimes.co.jp/news/2026/04/25/japan/japan-celebrity-voices-ai/?utm_source=openai)) This is corroborated by other sources, such as Nippon.com, which also reported on the same date. ([nippon.com](https://www.nippon.com/en/news/yjj2026042400870/?utm_source=openai)) The information appears current and original, with no evidence of prior publication or recycling from low-quality sites. However, the reliance on a single source for the primary information raises concerns about the independence of the reporting.

Quotes check

Score: 7

Notes:
The article includes direct quotes from State Minister of Justice Hidehiro Mitani, stating that the harm caused by unauthorized voice use can be serious and that it would be too onerous for voice actors to fight every case in court as AI develops rapidly. ([japantimes.co.jp](https://www.japantimes.co.jp/news/2026/04/25/japan/japan-celebrity-voices-ai/?utm_source=openai)) A search for these quotes reveals no earlier usage, suggesting they are original. However, the absence of independent verification for these quotes is a concern, as they cannot be cross-checked against other reputable sources.

Source reliability

Score: 6

Notes:
The article originates from Anime News Network, a niche publication focusing on anime and related topics. While it is reputable within its niche, its reach and influence are limited compared to major news organizations. The reliance on a single, specialized source for the primary information raises concerns about the independence and comprehensiveness of the reporting.

Plausibility check

Score: 8

Notes:
The claims made in the article align with known developments in Japan’s legal approach to AI and intellectual property. The Justice Ministry’s initiative to protect celebrity voices from unauthorized AI use is consistent with previous discussions and legislative efforts in Japan. ([japantimes.co.jp](https://www.japantimes.co.jp/news/2026/04/25/japan/japan-celebrity-voices-ai/?utm_source=openai)) However, the article lacks specific details about the guidelines being developed, such as the criteria for unlawful use and the assessment of damages, which are crucial for evaluating the plausibility and potential impact of the proposed measures.

Overall assessment

Verdict (FAIL, OPEN, PASS): FAIL

Confidence (LOW, MEDIUM, HIGH): MEDIUM

Summary:
The article presents current information on Japan’s Justice Ministry’s initiative to protect celebrity voices from unauthorized AI use. However, it relies heavily on a single, niche source with limited reach, and the direct quotes it includes cannot be independently verified. These concerns about the reliability and comprehensiveness of the reporting lead to an overall assessment of FAIL with MEDIUM confidence.
