The UK government has signalled ambitious plans to update online safety legislation before the summer, focusing on tighter controls on AI chatbots, minimum age limits for social media, and faster regulatory responses to safeguard children and public trust amid rapid technological change.
The Technology Secretary, Liz Kendall, told BBC Breakfast that ministers will bring forward proposals on online safety before the summer as they seek faster, more flexible ways to update the law to keep pace with rapid advances in AI and social media. She warned that “the technology is developing much, much more quickly” than legislation and urged a swifter approach than was achieved with the Online Safety Act 2023. According to The Guardian, Kendall has also warned that the regulator Ofcom risks losing public confidence if enforcement and implementation lag behind technological change. [2],[3]
Kendall signalled the government is considering measures including minimum age limits for social media and tighter controls on virtual private networks to prevent children from bypassing age verification on pornography sites. She said, “We will definitely come forward with our proposals before the summer. We want to get the legislation right, whatever we decide to do in the end.” The move forms part of a broader push to accelerate policy-making so protections can be revised more frequently, drawing on the Finance Bill process as a model for more rapid parliamentary scrutiny. [2]
Officials have also said they intend to extend the duties in the Online Safety Act to cover one-to-one interactions with AI chatbots after incidents in which such systems were used to create sexualised deepfakes and other harmful content. Kendall described AI-generated intimate images as “weapons of abuse” when briefing MPs in January, and the government has proposed making the creation of non-consensual intimate images a criminal offence. Industry observers say the changes are meant to close loopholes that allowed some chatbot providers to avoid obligations that apply to platforms with user-to-user sharing. [4]
The Prime Minister, Sir Keir Starmer, reiterated the political priority of the agenda, saying: “Britain will be a leader not a follower when it comes to online safety.” He has framed the measures as part of protecting children and supporting parents, and ministers have warned that firms failing to comply could face tough sanctions. According to reporting in February 2026, the government has signalled that persistent breaches could attract penalties of up to 10% of global turnover or result in a ban from operating in the UK. [3]
The campaign to regulate chatbots followed high-profile controversies around services such as Grok, which critics allege enabled users to generate sexualised deepfakes. The ensuing pressure prompted an announcement that AI chatbot providers will be held responsible for preventing the generation of illegal content, a step reported internationally and picked up by outlets including The Times of India and The Guardian. Ministers have suggested additional platform measures, from restricting infinite scrolling to considering raising the minimum age for social media use. [6],[5]
Kendall said the government would act quickly where material presented an immediate risk, noting action has already been taken to block children’s access to content promoting self-harm and suicide. She added: “And I am concerned about these AI chatbots. Some are already covered by the Act if they have user-to-user sharing or live search. But when it’s just that one-on-one with AI chatbots, I’m really concerned, as is the Prime Minister, about the impact that is having on children and young people. And I would say, we’re taking steps so that any illegal content shared by AI chatbots, for anyone – adults too – will be stopped.” The policy thrust has been welcomed by ministers and campaigners but will require rapid regulatory work and clearer enforcement plans from Ofcom to sustain public trust. [7],[2]
Source: Noah Wire Services
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We've since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score: 8
Notes:
The article references recent statements by Technology Secretary Liz Kendall and Prime Minister Sir Keir Starmer, indicating timely reporting. However, the earliest known publication date of similar content is 20 November 2025, which is over two months prior. This suggests that while the article is current, the core information may have been previously reported, potentially affecting its originality.
Quotes check
Score: 7
Notes:
Direct quotes from Liz Kendall and Sir Keir Starmer are included. However, these quotes are also present in earlier reports from November 2025, raising concerns about the originality of the content. The lack of new, independently verifiable quotes diminishes the article’s freshness.
Source reliability
Score: 6
Notes:
The article is sourced from The Irish News, a regional publication. While it cites reputable outlets such as The Guardian and The Independent, the reliance on a single, lesser-known source for the main narrative raises questions about the independence and reliability of the reporting.
Plausibility check
Score: 8
Notes:
The claims about the government’s plans to update online safety laws and address AI chatbot concerns are plausible and align with known policy discussions. However, the lack of new, independently verifiable information or developments since the November 2025 reports suggests that the article may not offer substantial new insights.
Overall assessment
Verdict (FAIL, OPEN, PASS): FAIL
Confidence (LOW, MEDIUM, HIGH): MEDIUM
Summary:
The article presents information that has been previously reported, with direct quotes from Liz Kendall and Sir Keir Starmer also found in earlier publications from November 2025. The reliance on a single, lesser-known source for the main narrative and the lack of new, independently verifiable information diminish the article’s originality and reliability. Given these concerns, the content does not meet the necessary standards for publication under our editorial guidelines.
