Social media platforms endangered the public by promoting dangerous misinformation after the 2024 Southport murders, according to British lawmakers who claim current online safety regulations contain “major holes.”
The House of Commons science and technology select committee has recommended new multimillion-pound fines for digital platforms that fail to outline strategies for combating harmful content spread through their recommendation systems.
In their report following a seven-month inquiry, MPs warned that advances in generative artificial intelligence, which can create convincing fake videos, could make future misinformation crises “even more dangerous” than the violent protests that erupted last August. Those disturbances followed the tragic killing of three children by an attacker who was falsely identified online as an asylum seeker who had arrived by small boat.
“It’s clear that the Online Safety Act just isn’t up to scratch,” said committee chair Chi Onwurah. “The government needs to go further to tackle the pervasive spread of misinformation that causes harm but doesn’t cross the line into illegality. Social media companies are not just neutral platforms but actively curate what you see online, and they must be held accountable.”
The committee’s investigation focused on the role of major platforms including X (formerly Twitter), Facebook, and TikTok following the murders of Bebe King, 6, Elsie Dot Stancombe, 7, and Alice da Silva Aguiar, 9, on July 29, 2024.
The report revealed a disturbing timeline of how the misinformation spread. Just over two hours after emergency services were first called, a post on X falsely claimed the suspect was a “Muslim immigrant.” Within five hours, a fabricated name, “Ali al-Shakati,” was circulating widely on the platform. Within 24 hours, these two posts had garnered more than 5 million views. The actual perpetrator was Axel Rudakubana, a British citizen born in Cardiff.
The investigation found that the misinformation spread rapidly across multiple platforms. A post on X calling for violence against asylum hostels received over 300,000 views, while TikTok’s algorithm suggested searches for “Ali al-Shakati arrested in Southport” under its “others searched for” function. By the end of the day following the attack, social media posts containing the false name had accumulated 27 million impressions, and violence had erupted outside a mosque in Southport.
The committee has called for substantial penalties of at least £18 million ($23 million) for platforms that fail to address significant harms promoted by their recommendation systems, even when the content itself is not illegal.
“The act fails to keep UK citizens safe from a core and pervasive online harm,” the report concluded.
The MPs urged the government to require social media platforms to “identify and algorithmically deprioritize factchecked misleading content, or content that cites unreliable sources, where it has the potential to cause significant harm.” However, they emphasized that “it is vital that these measures do not censor legal free expression.”
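To make the mechanism concrete, the sketch below shows one purely hypothetical way a recommender could “deprioritize” flagged content at the ranking step. Nothing here reflects any platform’s actual system: the names (Item, rank_feed) and the penalty values are invented for illustration, under the assumption that factcheck and source-reliability flags arrive from an external feed.

```python
from dataclasses import dataclass

# Hypothetical penalty factors; a real system would tune these empirically.
FACTCHECKED_PENALTY = 0.1        # flagged as misleading by factcheckers
UNRELIABLE_SOURCE_PENALTY = 0.5  # cites a source rated unreliable

@dataclass
class Item:
    item_id: str
    engagement_score: float       # base score from the recommender model
    factchecked_misleading: bool  # flag from an external factchecking feed
    unreliable_source: bool       # flag from a source-reliability rating

def adjusted_score(item: Item) -> float:
    """Down-weight, rather than remove, flagged content."""
    score = item.engagement_score
    if item.factchecked_misleading:
        score *= FACTCHECKED_PENALTY
    if item.unreliable_source:
        score *= UNRELIABLE_SOURCE_PENALTY
    return score

def rank_feed(items: list[Item]) -> list[Item]:
    # Flagged posts stay available; they are simply ranked lower,
    # so the recommender stops amplifying them.
    return sorted(items, key=adjusted_score, reverse=True)
```

The design point, echoing the committee’s caveat on free expression, is that deprioritization lowers a ranking score rather than deleting the post: flagged content remains accessible, but the recommendation system no longer promotes it.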
The committee also recommended extending regulatory powers to tackle social media advertising systems that allow “the monetization of harmful and misleading content,” with penalties scaled according to severity and the proceeds allocated to support victims of online harms.
Notably, neither misinformation nor disinformation is currently classified as a harm that companies must address under the Online Safety Act, which was enacted less than two years ago. State-sponsored disinformation can, however, constitute an offense of foreign interference.
Ofcom, the UK’s communications regulator, said that while it holds platforms accountable for illegal content, it is for the government and parliament to decide how far the law should require platforms to tackle legal but harmful content. A spokesperson added that the regulator is “proposing stronger protections including asking platforms to do more on recommender systems and to have clear protocols for responding to surges in illegal content during crises.”
TikTok said that its community guidelines prohibit inaccurate, misleading, or false content that may cause significant harm, and that it works with factcheckers to keep unverified content off its “For You” feed. X and Meta (Facebook’s parent company) were approached for comment but had not responded at the time of publication.
The Department for Science, Innovation and Technology has also been contacted for comment regarding the committee’s findings and recommendations.