
Foreign-operated content farms are using AI to spread fabricated images and videos that misrepresent UK political figures, raising alarms over election integrity as platforms struggle to curb the surge of synthetic misinformation.

Overseas networks of cheaply run “content farms” are deploying artificial intelligence to manufacture and amplify false political material about UK figures on social media, raising fresh concerns ahead of elections in the devolved nations this spring. According to a BBC Wales investigation, multiple Facebook pages operated from Vietnam were publishing AI-assisted images and videos that portrayed British politicians in fabricated scenarios, prompting Meta to remove several of the accounts after being alerted. Researchers tracking similar operations point to a broader pattern of foreign-run sites and channels exploiting AI to pose as domestic news outlets; according to The Guardian, earlier investigations found fake UK-facing sites and networks that used manufactured journalism to spread misleading narratives well beyond British audiences. [2],[3]

Experts who examined the material describe the operations as profit-driven “content farms” that churn out attention-grabbing posts designed to go viral rather than to inform. Martin Innes, director of Cardiff University’s Crime and Security Research Institute, told the BBC the pages were optimised to attract clicks and could be monetised under platform programmes, while often recycling near-identical content across dozens of closely linked pages. Research into AI-generated output has repeatedly flagged that automation lowers costs and technical barriers, enabling large volumes of low-quality or deceptive content to be produced quickly. The Guardian documented a surge of anonymous channels using AI to push false stories about UK politicians, accumulating vast audiences and views in 2025. [3],[2]

The BBC’s investigation uncovered a mix of manipulations, from AI-generated still images to synthetic video clips that placed politicians in compromising or politically loaded situations. Some items were labelled as obvious satire by their publishers, yet others imitated news branding and lacked clear disclaimers, blurring lines for audiences. Academic studies suggest that while deepfakes are not uniformly more convincing than other forms of misinformation, the increasing ease of production and the sheer volume now circulating raise detection and moderation challenges that outstrip current safeguards. Phys.org summarised research indicating deepfakes had deception rates comparable to other fake media, underscoring that format alone does not determine harm. [7],[3]

Politicians targeted by the content described its personal and political toll. Labour MP Alex Davies-Jones told the BBC: “I don’t think you’ll find a politician who hasn’t had this done to them… to say it out loud makes me feel quite sad.” Other politicians recounted explicit synthetic images and fabricated statements being widely shared, with some noting that less tech-savvy voters could be misled. Calls for regulatory and technical responses are resonating across parties: Welsh politicians and peers warned that personalised deepfakes pose threats both to individuals and to democratic discourse, while opposition figures said the misuse of AI risks confusing voters about genuine policy positions. The Guardian has reported lawmakers urging stronger engagement with platforms to confront foreign and domestic disinformation campaigns. [3],[6]

Regulatory and platform responses are emerging but face limits. The BBC reported that Facebook applied labels where third-party fact-checkers had debunked content, and removed certain pages after being contacted; Meta reiterated its policies against inauthentic accounts. At the same time, investigators found near-identical posts remained available without warnings, and new pages reappeared frequently, illustrating the cat-and-mouse nature of enforcement. Industry and government efforts include the Electoral Commission developing tools to detect and track synthetic media ahead of the Welsh and Scottish parliamentary votes; officials say such capabilities will improve post-hoc identification and reporting but may not prevent dissemination in real time. The UK government’s Department for Science, Innovation and Technology has warned platforms that they must tackle illegal fraudulent content under the Online Safety Act or face enforcement. [3],[4]

Analysts caution that the problem is not limited to one platform or format. Studies and fact-checking reviews show AI-driven disinformation campaigns spread through websites, video channels and social pages, often leveraging the perceived credibility of UK-branded outlets or the reputations of established social accounts. A 2023 probe by The Guardian exposed dozens of AI-generated news sites publishing high volumes of repetitive articles, while later reporting highlighted hundreds of anonymous YouTube channels that monetised politically charged fabrications. Those patterns demonstrate both the scalability of the model and the difficulty platforms face in policing cross-border networks. [5],[2]

There is debate about how damaging such campaigns have been to electoral outcomes, but experts emphasise the cumulative effect on public trust. The Alan Turing Institute previously reported no clear evidence that AI-enabled deepfakes decisively altered the outcome of the 2024 UK general election, yet observers warn that falling production costs and improved generative tools increase the chance that misinformation could influence perceptions in more localised or closely contested contests. Politicians and campaigners have urged a combination of platform action, tighter regulation of AI tools, and public education so voters can better recognise manipulated content. The Guardian has quoted MPs saying the UK is “constantly suffering from disinformation campaigns from both state and non-state actors” and needs more robust engagement with social media firms. [7],[6]

As the May devolved elections approach, the phenomenon exposed by the BBC illustrates how foreign-operated, AI-assisted networks can masquerade as domestic media and amplify falsehoods at scale. Platform removals and nascent detection software represent initial responses, but specialists urge sustained cross-sector measures (technical, legal and educational) to limit the reach and incentive structures that make these content farms viable. The experience of UK politicians who have been targeted underscores the human dimension of what might otherwise be framed as a purely technological problem. [3],[2]

Source Reference Map

Inspired by headline at: [1]


Source: Noah Wire Services

Noah Fact Check Pro

The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.

Freshness check

Score: 5

Notes:
The article references multiple sources from 2024 and 2025, indicating that the narrative has been reported previously. The earliest known publication date of similar content is from September 2024. The article includes updated data but recycles older material, which raises concerns about freshness.


© 2026 AlphaRaaS. All Rights Reserved.