UNICEF calls for urgent legal and technological measures to combat the surge in AI-generated sexual imagery of children, highlighting gaps in current laws and the need for international cooperation.
The United Nations children’s agency has urged governments to make the creation, possession and distribution of AI-generated sexual images of children a criminal offence, saying the scale of the problem demands immediate legal and technological responses. According to UNICEF, the practice of using artificial intelligence to fabricate sexualised images of minors has surged, prompting calls to broaden legal definitions of child sexual abuse material to cover synthetic content. (Paragraph 1 sources: UNICEF press release).
UNICEF cited research across 11 countries in which at least 1.2 million children reported having their images manipulated into sexually explicit deepfakes over the past year, a figure the agency used to underline the extent of victimisation and the cross-border nature of the harm. The organisation warned that existing statutes in many jurisdictions do not expressly cover AI-generated material, leaving a gap predators can exploit. (Paragraph 2 sources: UNICEF press release).
The agency singled out so-called “nudification” techniques, where software strips or alters clothing in photographs to produce fabricated nude or sexualised images, and issued a stark appeal to policymakers and platform operators. “The harm from deepfake abuse is real and urgent. Children cannot wait for the law to catch up,” UNICEF said in a statement. (Paragraph 3 sources: UNICEF press release).
London has moved ahead of other capitals with new legislation that explicitly criminalises the use of AI tools to produce child sexual abuse images, making the United Kingdom the first country to enact such measures. The law criminalises creating, possessing or distributing AI systems or manuals designed to generate abusive imagery and carries prison terms for offenders, a change ministers framed as closing legal loopholes. (Paragraph 4 sources: UK government announcement, The Guardian).
Regulators and safety organisations are also being given wider powers to scrutinise AI models. The government has authorised designated bodies to test systems for their capacity to produce abusive imagery, a change welcomed by groups such as the Internet Watch Foundation, which said enhanced testing and legal clarity are essential as AI-generated imagery grows more extreme. (Paragraph 5 sources: The Guardian, IWF).
Beyond criminal law, UNICEF urged developers to adopt safety-by-design practices and digital companies to invest in detection technologies and stronger moderation to curb the circulation of abusive material. International co-operation has become part of the response: the UK and US have pledged to work together on capabilities to detect and limit AI-generated child sexual abuse images and have called on other countries to join the effort. Industry, non-governmental bodies and governments are being positioned as complementary actors in a strategy that blends legislation, technical defences and cross-border collaboration. (Paragraph 6 sources: UNICEF press release, UK–US joint pledge, UK government announcement).
Source: Noah Wire Services
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score: 8
Notes: The UNICEF press release dated 4 February 2026 is the earliest known publication of this specific narrative. ([unicef.org](https://www.unicef.org/press-releases/deepfake-abuse-is-abuse)) However, similar concerns about AI-generated child sexual abuse material have been reported since October 2023, indicating that the issue has been ongoing for some time. ([theguardian.com](https://www.theguardian.com/technology/2023/oct/25/ai-created-child-sexual-abuse-images-threaten-overwhelm-internet))
Quotes check
Score: 7
Notes: The direct quotes from UNICEF’s press release are consistent with the original source. ([unicef.org](https://www.unicef.org/press-releases/deepfake-abuse-is-abuse)) However, the phrase ‘Deepfake abuse is abuse’ has been used in previous reports, raising questions about the originality of this specific wording.
Source reliability
Score: 9
Notes: The primary source is UNICEF, a reputable international organisation. The secondary sources include The Guardian, a major news outlet, and UK government announcements, both of which are generally reliable. ([theguardian.com](https://www.theguardian.com/technology/2025/apr/23/ai-images-of-child-sexual-abuse-getting-significantly-more-realistic-says-watchdog))
Plausibility check
Score: 8
Notes: The claims about the rise of AI-generated child sexual abuse material are plausible and align with reports from other reputable sources. ([theguardian.com](https://www.theguardian.com/technology/2025/apr/23/ai-images-of-child-sexual-abuse-getting-significantly-more-realistic-says-watchdog)) However, the specific figure of 1.2 million children affected over the past year requires independent verification.
Overall assessment
Verdict: PASS
Confidence: MEDIUM
Summary: The narrative is based on a recent UNICEF press release and corroborated by reputable sources. However, the originality of certain phrases and the need for independent verification of specific figures introduce some uncertainty. ([unicef.org](https://www.unicef.org/press-releases/deepfake-abuse-is-abuse))

