Britain will introduce mandatory requirements for technology companies to detect and block unsolicited sexual images, including AI-generated deepfakes, as part of new legal measures to combat online abuse and protect vulnerable users.
Britain has moved to force technology companies to take proactive steps to detect and prevent the unsolicited sharing of sexual images, a policy escalation officials say is needed to curb online abuse amplified by artificial intelligence. According to Benzinga, which cited Reuters, the new requirement comes into effect on January 8, 2026 and applies to major social platforms, dating apps and sites hosting adult content. [1][7]
The change formalises cyberflashing as a “priority offence” under the Online Safety Act, meaning platforms must not wait for user complaints before acting but instead implement measures to stop such material reaching users. The UK government said the move is designed to protect women and girls after survey data showed one in three teenage girls has received unsolicited sexual images. Officials have warned companies that failure to comply could lead to fines or even blocking of services in the UK. According to the government announcement, penalties could reach 10% of global revenue. [2][7]
Technology Secretary Liz Kendall told ministers the law requires firms to “detect and block” the content rather than merely respond to reports, emphasising the obligation to make online spaces safer. Ofcom, the UK’s media regulator, will consult on the technical standards platforms must adopt and will have the authority to enforce compliance. The regulator has already published codes of practice under the Online Safety Act to guide how companies should meet these duties. [1][2][6]
The move comes against a backdrop of rapidly evolving risks from AI, including a rise in sexually explicit deepfakes. European and national regulators have pressed platforms for explanations about intimate AI-generated images, with France and the European Commission scrutinising alleged breaches linked to new chatbot modes. UK ministers have publicly urged platforms such as X to address surges in intimate deepfakes. Industry observers say the technological challenge of spotting synthetic imagery at scale without overblocking legitimate content will test the practicality of the new duty. [1][5][7]
Prosecutors and police meanwhile have been adapting to the legal landscape. The Crown Prosecution Service issued guidance widening the scope for charging people who send or threaten to share intimate images without consent, and courts have already recorded convictions under the new legal framework. One early case saw a convicted offender jailed after sending unsolicited explicit images to adults and a minor via messaging services. That prosecution illustrated how existing criminal sanctions, including prison terms and possible inclusion on the sex offenders register, sit alongside the platforms’ new regulatory obligations. [3][4]
Critics caution that enforcement will require both robust technological solutions and clearer standards to avoid disproportionate impacts on lawful expression. Industry data and expert commentary highlight a tension between automated detection tools and false positives, while legal scholars note the practical burdens on smaller services. The government and Ofcom have signalled they will consult widely on the detailed technical measures expected of firms. [6][2]
The combined effect of criminal law, CPS guidance and the Online Safety Act’s regulator-led duties marks a significant shift in how the UK expects technology firms to manage non-consensual intimate imagery: platforms now face both legal exposure for permitting such images to circulate and regulatory obligations to prevent them from appearing in the first place, with substantial fines or service restrictions as potential consequences. According to the official announcement and media reporting, the intent is to make online environments safer for those most at risk while forcing companies to confront AI-driven harms. [2][6][1]
## Reference Map:
- [1] (Benzinga/Reuters) – Paragraph 1, Paragraph 3, Paragraph 4, Paragraph 7
- [2] (GOV.UK) – Paragraph 2, Paragraph 3, Paragraph 6, Paragraph 7
- [3] (AP News) – Paragraph 5
- [4] (CPS) – Paragraph 5
- [5] (AP News overview of Online Safety Act) – Paragraph 4
- [6] (The Guardian) – Paragraph 3, Paragraph 6
- [7] (Technology.org) – Paragraph 1, Paragraph 2, Paragraph 4
Source: Noah Wire Services
Noah Fact Check Pro
The draft above was created using the information available at the time the story first
emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed
below. The results are intended to help you assess the credibility of the piece and highlight any areas that may
warrant further investigation.
Freshness check
Score:
8
Notes:
The narrative is current, with the new law taking effect on January 8, 2026. The earliest known publication date of similar content is September 29, 2025, when the UK government announced plans to make cyberflashing a priority offence under the Online Safety Act. ([gov.uk](https://www.gov.uk/government/news/tech-firms-to-prevent-unwanted-nudes-under-tougher-laws-to-protect-women-and-girls-online?utm_source=openai)) The report cites Reuters, indicating a reputable source, and includes updated data, such as the one in three teenage girls statistic, which justifies a higher freshness score. However, the report appears to recycle some earlier material, as it references previous announcements, suggesting a mix of new and recycled content. The report was published on January 8, 2026, which is within the past 7 days. Therefore, the freshness score is 8.
Quotes check
Score:
7
Notes:
The report includes direct quotes from Technology Secretary Liz Kendall and Elymae Cedeno, VP of Trust and Safety at Bumble. The earliest known usage of these quotes is from the UK government’s announcement on September 29, 2025. ([gov.uk](https://www.gov.uk/government/news/tech-firms-to-prevent-unwanted-nudes-under-tougher-laws-to-protect-women-and-girls-online?utm_source=openai)) The wording of the quotes appears consistent with earlier material, suggesting potential reuse. However, no online matches were found for the exact phrasing used in the report, indicating potential originality or exclusivity. Therefore, the quotes check score is 7.
Source reliability
Score:
6
Notes:
The narrative originates from Benzinga, a financial news outlet. While Benzinga is generally considered reputable, it is not as widely recognised as major outlets like the Financial Times or Reuters. The report cites Reuters, a highly reputable organisation, which strengthens the reliability of the information. However, the reliance on a single source for the main narrative introduces some uncertainty. Therefore, the source reliability score is 6.
Plausibility check
Score:
8
Notes:
The narrative aligns with recent developments, including the UK’s move to make cyberflashing a priority offence under the Online Safety Act. The report includes specific details, such as the one in three teenage girls statistic, which adds credibility. The language and tone are consistent with official communications on the topic. There are no significant inconsistencies or red flags, suggesting the narrative is plausible. Therefore, the plausibility check score is 8.
Overall assessment
Verdict (FAIL, OPEN, PASS): OPEN
Confidence (LOW, MEDIUM, HIGH): MEDIUM
Summary:
The narrative is timely and plausible, with some reliance on recycled content and a single source, which introduces moderate uncertainty. The quotes may have been reused, but their exact phrasing suggests potential originality. The source is reputable but not as widely recognised as major outlets. Therefore, the overall assessment is OPEN with medium confidence.

