Ofcom launches a formal probe into X following accusations that its AI chatbot Grok was used to produce and disseminate illegal sexualised images, including depictions of children, prompting calls for tougher regulation and international scrutiny.
The UK’s communications regulator has opened a formal investigation into Elon Musk’s social media platform X after allegations that its AI chatbot Grok was used to generate and share sexualised and non‑consensual images, including material that may amount to child sexual abuse. Ofcom said it will assess whether X breached the Online Safety Act 2023, with potential sanctions ranging from fines to a ban if breaches are found. According to The Guardian and the Associated Press, ministers and regulators have described the content as “vile” and potentially illegal, fuelling urgent scrutiny of the platform’s safety controls.
The controversy has spilled into parliament. Oxford East MP Anneliese Dodds raised the issue during questions, citing concerns about an “organised campaign of intimidation against female staff at Ofcom” and urging condemnation of the images’ circulation. The Oxford Mail reported Dodds as saying: “I agree with the Secretary of State. The production of these disgusting images amount not to freedom of speech but to freedom to abuse, harass and commit crime.” Ministers have echoed that tone: Technology Secretary Liz Kendall characterised the content as “vile” and insisted no one should live in fear of having their image sexually manipulated by technology.
Government officials and regulators have stressed the gravity of claims that some generated images included sexualised depictions of children. The Guardian and AP report that descriptions aired in parliament referenced alleged criminal imagery of children as young as 11, and that such material would plainly fall within existing criminal offences and the Online Safety Act’s remit. Ofcom’s investigation is explicitly tasked with determining whether X’s systems and moderation meet the statutory duties to protect users from illegal and harmful content.
X has responded with product changes, restricting Grok’s image‑creation features to paying subscribers on the platform, a move that critics say merely monetises abuse rather than preventing it. The Associated Press and TechRadar note that the feature reportedly remains accessible via Grok’s separate app and website for some free users, and rival UK AI firms have publicly argued that no current image generator can be rendered wholly misuse‑proof without far stronger safeguards. Industry figures describe the subscription restriction as insufficient while legal and regulatory processes proceed.
The fallout has been international. Malaysia and Indonesia temporarily blocked Grok amid concerns about its misuse to produce explicit, non‑consensual images; those governments cited violations of privacy and human dignity in their decisions. Domestically, several politicians and public figures have publicly quit X in protest, arguing they will no longer drive traffic to a site “that actively enables sexual exploitation of women and children.” The global response underscores how quickly trust in new generative tools can collapse when safety mechanisms are seen to fail.
The episode has sharpened calls for tougher regulation of AI image tools and clearer enforcement of existing laws. British AI firms and safety advocates are urging radical transparency and stricter access controls; some commentators believe the UK should use the Online Safety Act and forthcoming legislative measures to set a global standard. Reporting in Windows Central and TechRadar indicates that ministers are considering rapid enforcement and legal measures to criminalise non‑consensual intimate image generation where necessary, while also warning platforms they cannot “self‑regulate” their way out of responsibility for harms.
Source Reference Map
Story idea inspired by: [1]
Sources by paragraph:
Source: Noah Wire Services
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score:
9
Notes:
The narrative is current, with the earliest known publication date of similar content being 12 January 2026. The report is based on a press release from Ofcom, which typically warrants a high freshness score. No discrepancies in figures, dates, or quotes were found. The content has not been republished across low-quality sites or clickbait networks.
Quotes check
Score:
10
Notes:
The direct quotes from Oxford East MP Anneliese Dodds and Technology Secretary Liz Kendall are unique to this report, with no identical matches found in earlier material. This suggests potentially original or exclusive content.
Source reliability
Score:
8
Notes:
The narrative originates from the Oxford Mail, a regional newspaper. While it is a reputable source, its regional focus may limit its reach compared to national outlets.
Plausibility check
Score:
9
Notes:
The claims about Ofcom’s investigation into X over AI-generated sexual deepfakes are corroborated by multiple reputable sources, including The Guardian and the Associated Press. The narrative includes specific details such as the involvement of Anneliese Dodds and Liz Kendall, which align with other reports.
Overall assessment
Verdict (FAIL, OPEN, PASS): PASS
Confidence (LOW, MEDIUM, HIGH): HIGH
Summary:
The narrative is current, based on a recent press release from Ofcom, and includes unique quotes from Anneliese Dodds and Liz Kendall. The claims are corroborated by multiple reputable sources, and the content is accessible without paywalls. The narrative is a factual news report, not an opinion piece or other distinctive content type.