
Ofcom has opened a formal probe into Elon Musk’s platform X and its AI tool Grok following reports of sexualised deepfake images, prompting a fierce debate over platform responsibility and regulatory compliance under the UK’s new online safety legislation.

Ofcom has opened a formal investigation into Elon Musk’s AI chatbot Grok and X, the platform on which it operates, after reports that the tool was being used to generate sexualised deepfake images, a development described by the reporting outlet as “deeply concerning”. The same report quoted Prime Minister Keir Starmer calling the images “disgusting” and “unlawful”, and saying X must “get a grip” on the application, while Downing Street indicated it was prepared to consider leaving the platform if adequate action was not taken. [1]

The UK regulator said it will examine whether X failed to meet its legal duties under the Online Safety Act by allowing the creation and dissemination of non-consensual intimate images and sexualised images of children, which could amount to intimate image abuse, pornography or child sexual abuse material. Ofcom’s public statement made clear the investigation will assess the platform’s compliance and the steps X took after an initial request for information. [3]

Under the Online Safety Act, platforms carrying potentially harmful content face strict obligations to protect users, including age verification measures such as facial checks or payment-card verification, and duties to remove illegal material. Government guidance explains the legislation gives Ofcom powers to enforce compliance, including the ability to impose significant fines or to require measures that could lead to a de facto blocking of a service in the UK. [6][7]

X has said it removes illegal content, suspends accounts and works with law enforcement where necessary, and the company restricted Grok’s image-generation feature to paying subscribers in the wake of the backlash. That step was widely criticised by victims’ groups, politicians and campaigners as insufficient and as an attempt to limit scrutiny rather than address the underlying harms. According to reporting, the move to monetise access drew condemnation as an “affront to victims”. [4][5]

Campaigners and some ministers have urged rapid action, arguing that any delay compounds harm to victims whose images are generated and shared without consent. Industry observers said the case highlights broader tensions between fast-developing generative AI tools and existing regulatory frameworks, which were not designed for large-scale, automated image fabrication. [5]

Ofcom has the authority to impose fines of up to 10 percent of a company’s worldwide revenue for breaches of the Online Safety Act and, if necessary, to require internet service providers to block access to an offending service in the UK. The regulator said it would act if it found X had failed to comply with its obligations. [3][6]

The investigation places X and Grok at the centre of a high-profile test of the UK’s new online-safety regime, and will be watched closely by governments, civil-society groups and technology firms as regulators attempt to bind rapidly evolving AI capabilities to existing legal protections for privacy and children online. [3][6][5]

📌 Reference Map:

  • [1] (Al Jazeera) – Paragraph 1
  • [3] (Ofcom) – Paragraphs 2, 6, 7
  • [4] (The Guardian) – Paragraph 4
  • [5] (The Guardian) – Paragraphs 4, 5, 7
  • [6] (UK government – Online Safety Act) – Paragraphs 3, 6, 7
  • [7] (UK Parliament) – Paragraph 3

Source: Noah Wire Services

Noah Fact Check Pro

The draft above was created using the information available at the time the story first
emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed
below. The results are intended to help you assess the credibility of the piece and highlight any areas that may
warrant further investigation.

Freshness check

Score:
10

Notes:
The narrative is current, with the Al Jazeera report dated 12 January 2026, and no evidence of prior publication of similar content.

Quotes check

Score:
10

Notes:
Direct quotes from Prime Minister Keir Starmer and Ofcom are unique to this report, with no earlier matches found online.

Source reliability

Score:
9

Notes:
Al Jazeera is a reputable news outlet. The report cites official statements from Ofcom and the UK government, enhancing credibility.

Plausibility check

Score:
10

Notes:
The claims align with recent global concerns over AI-generated deepfakes. Ofcom’s investigation and government responses are consistent with known actions.

Overall assessment

Verdict (FAIL, OPEN, PASS): PASS

Confidence (LOW, MEDIUM, HIGH): HIGH

Summary:
The narrative is fresh, original, and corroborated by reputable sources. No significant issues were identified, and the content is freely accessible rather than paywalled or recycled from another content type.



© 2026 Engage365. All Rights Reserved.