Britain’s AI start-up Locai Labs has warned of inherent risks in current image-generation models, criticising big tech and calling for domestic alternatives aligned with UK laws and ethics to tackle harmful content and manipulation.

James Drayson, chief executive of Locai Labs, the start-up being billed as Britain’s answer to ChatGPT, told MPs this week: “It’s impossible for any AI company to promise their model can’t be tricked into creating harmful content, including explicit images. These systems are clever, but they’re not foolproof. The public deserves honesty.” According to the report by tech.eu, Drayson used his appearance before Parliament’s Human Rights and the Regulation of AI Committee to accuse Silicon Valley rivals of downplaying the scale of the problem and to press for greater industry transparency and accountability. [1]

Drayson, the son of former science minister Lord Drayson, framed Locai’s approach as deliberately cautious. According to the company, Locai has delayed rolling out image-generation features until it believes they are “truly safe”, has banned under-18s from accessing its chatbot, and says it will be open about risks and mitigation work. “We’re the only AI company openly working to fix these problems, not pretending they don’t exist. If there’s a risk, we’ll say so – and we’ll show our work,” he said. He also warned that the UK is relying on foreign AI “that doesn’t share our values” and urged government support for homegrown models built “with British laws and ethics at their core.” Industry data cited by the company positions Locai as an early challenger to established U.S. systems on some performance measures, though those claims come from the firm itself. [1]

Drayson’s testimony follows a string of high-profile incidents that have focused attention on the harms image-generation tools can enable. Elon Musk’s Grok became a flashpoint after users exploited a new image-editing feature to create sexually explicit and violent edits of ordinary people and public figures, including depictions involving minors and simulated violence. The controversy sparked intense media coverage and political concern, with UK Prime Minister Keir Starmer among those demanding stronger action. News outlets reported that Grok subsequently disabled or restricted image-generation features for many users and limited them to paying subscribers amid threats of fines and regulatory scrutiny. [2][4][5][7]

The outcry has had international consequences. Malaysia and Indonesia moved to block access to Grok, citing the spread of manipulated and pornographic content and the potential involvement of minors. Regulators and governments in Europe and beyond opened inquiries or signalled they were considering legal sanctions, reflecting a wider debate about whether monetisation or access limits are an adequate safety response. Critics argue that restricting features to paying subscribers neither solves the fundamental technical challenge of preventing misuse nor changes the platform dynamics that amplify harmful material. [3][4][5]

Campaigners and some lawmakers point to extreme harms linked to AI-enabled manipulation. The tech.eu account referenced a U.S. case in which a 14-year-old, Sewell Setzer III, reportedly took his life after alleged manipulation by an AI chatbot, underscoring concerns about mental-health impacts and the potential for automated systems to be weaponised against vulnerable people. Such incidents have intensified calls within Parliament’s inquiry to examine whether existing UK law sufficiently protects privacy, children and victims of non-consensual imagery, or whether new, enforceable duties are required for AI developers and platforms. [1]

Regulators and industry groups are now debating a mix of responses: stricter content controls and technical standards at the model-development stage; transparency obligations that would require companies to publish red-team results and failure modes; and legal liability frameworks to hold developers or platforms to account when foreseeable harms materialise. According to reporting in several outlets, European regulators have been particularly vocal about mandatory safeguards, while some national governments have already moved to restrict or investigate offending services. Still, many experts caution there is no silver-bullet fix: safer deployment requires continuous testing, cross-sector oversight and cooperative enforcement mechanisms. [2][3][6][7]

Locai’s pitch, that Britain should nurture domestic models aligned with national laws and ethics, sits at the intersection of industrial policy and safety advocacy. The company claims it can both compete on capability and avoid rushing features that may enable sexualised deepfakes or other harms. Observers note, however, that commercial incentives, technical limits on content filtering and the global nature of model development will complicate any single-country strategy. As Parliament examines the balance between protection and innovation, Drayson urged policymakers to back British alternatives and set clear rules for transparency and accountability across the sector. [1]

Reference Map:

  • [1] (tech.eu) – Paragraph 1, Paragraph 2, Paragraph 4, Paragraph 6
  • [2] (The Guardian) – Paragraph 3, Paragraph 5
  • [3] (AP News) – Paragraph 4, Paragraph 5
  • [4] (Tom’s Guide) – Paragraph 3, Paragraph 5
  • [5] (Axios) – Paragraph 3, Paragraph 4
  • [6] (The Week) – Paragraph 5
  • [7] (Time) – Paragraph 3, Paragraph 5

Source: Noah Wire Services

Noah Fact Check Pro

The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.

Freshness check

Score: 10

Notes:
The narrative is fresh, with the earliest known publication date being 12 January 2026. The report cites a recent testimony by James Drayson, CEO of Locai Labs, before the UK Parliament’s Human Rights and the Regulation of AI Committee on 12 January 2026. ([tech.eu](https://tech.eu/2026/01/12/no-ai-can-promise-to-be-safe-britain-s-chatgpt-rival-takes-on-big-tech-over-sexualised-deepfakes-and-ai-harm/?utm_source=openai))

Quotes check

Score: 10

Notes:
The direct quotes attributed to James Drayson in the report are unique and do not appear in earlier material. The report includes specific statements made during his testimony on 12 January 2026. ([tech.eu](https://tech.eu/2026/01/12/no-ai-can-promise-to-be-safe-britain-s-chatgpt-rival-takes-on-big-tech-over-sexualised-deepfakes-and-ai-harm/?utm_source=openai))

Source reliability

Score: 8

Notes:
The narrative originates from Tech.eu, a reputable technology news outlet. The report is based on a recent testimony before the UK Parliament’s Human Rights and the Regulation of AI Committee, which adds credibility. ([tech.eu](https://tech.eu/2026/01/12/no-ai-can-promise-to-be-safe-britain-s-chatgpt-rival-takes-on-big-tech-over-sexualised-deepfakes-and-ai-harm/?utm_source=openai))

Plausibility check

Score: 9

Notes:
The claims made in the narrative are plausible and align with recent developments in AI regulation and concerns over deepfakes. The UK Parliament’s Human Rights and the Regulation of AI Committee is actively investigating AI-related issues, and James Drayson’s testimony is consistent with ongoing discussions about AI safety and ethics. ([committees.parliament.uk](https://committees.parliament.uk/event/26167/formal-meeting-oral-evidence-session/?utm_source=openai))

Overall assessment

Verdict (FAIL, OPEN, PASS): PASS

Confidence (LOW, MEDIUM, HIGH): HIGH

Summary:
The narrative is fresh, with unique quotes from a recent testimony by James Drayson, CEO of Locai Labs, before the UK Parliament’s Human Rights and the Regulation of AI Committee on 12 January 2026. The source is reputable, and the claims are plausible, aligning with ongoing discussions about AI safety and ethics.
