Arooj Shah, leader of Oldham Council, has publicly condemned racist and malicious AI-generated deepfake videos aimed at her, highlighting a growing trend of harassment of politicians and other public figures through sophisticated digital manipulation.

Arooj Shah, the leader of Oldham Council, has condemned a series of deeply offensive, “racist and malicious” AI-generated deepfake videos targeting her. The manipulated footage, which circulated within a public social media group, depicts Coun Shah speaking about council finances in a fabricated and exaggerated East Asian accent, alongside other videos depicting political figures in lewd or sexualised scenarios. Shah described the videos as “bigoted” and said they are designed to dehumanise her, emphasising that such hateful and false portrayals are “completely unacceptable” and should have no place in public or community spaces.

Expressing her shock and horror at the incident, Shah stressed that no one, whether in public life or not, should be subjected to such “pathetic tactics.” She vowed that this intimidation will not deter her from serving her community, highlighting the personal and broader societal damage inflicted by these videos. The initial posts were traced to a local Facebook group associated with far-right sympathies, though the political group Advance UK later disavowed any official connection to the page or content, strongly condemning the videos as contrary to their values.

This incident is part of a wider and troubling trend involving the misuse of AI-generated media to harass and manipulate public figures. Several politicians across the UK have been targeted by deepfake content recently. Conservative MP George Freeman reported to police an AI video that falsely suggested he was defecting to another party. Female politicians have been disproportionately victimised, with an investigation revealing that several prominent figures, including Labour’s Angela Rayner, Conservative MPs Penny Mordaunt and Priti Patel, and others, have been subjected to non-consensual deepfake pornography. These fabricated intimate images have circulated online for years, eliciting significant concern and police involvement. While the UK’s Online Safety Act, enacted in January, criminalises the sharing of such imagery without consent, the law currently does not ban the creation of deepfake pornography, fuelling ongoing debates about further legislative measures to combat this form of abuse.

The scale and sophistication of deepfake technology have drawn growing attention, with experts warning that manipulated videos and images are becoming increasingly difficult for the public to detect. Common signs of deepfakes include unnatural mouth movements, irregular lighting, and voice synchronisation errors, as outlined in recent expert guidance. Beyond individual harassment, there is also a wider political dimension, as seen in deepfake advertisements targeting UK Prime Minister Rishi Sunak, which falsely portrayed his financial dealings and were disseminated widely, raising fears about AI’s role in election interference.

Social media platforms have responded with varying policies; for instance, Facebook banned deepfake videos designed to mislead ahead of the US elections, though it sometimes allows content deemed newsworthy. The ongoing challenges of regulating AI-manipulated media underscore the persistent risks these technologies pose to public discourse, individual dignity, and democratic processes alike.

In condemning the attacks against her, Arooj Shah has joined a growing number of public figures calling for stronger protections and accountability around the use of AI-generated content, particularly as it pertains to misinformation, racism, harassment, and sexual exploitation. The interplay between technological innovation and ethical governance remains urgent as AI tools become more accessible and their misuse more harmful.

📌 Reference Map:

  • [1] Manchester Evening News – Paragraphs 1, 2, 3, 4, 5
  • [2] The Guardian – Paragraphs 6, 7
  • [3] The Guardian – Paragraph 8
  • [4] The Guardian – Paragraph 9
  • [5] The Guardian – Paragraph 10
  • [6] The Guardian – Paragraph 11

Source: Noah Wire Services

Noah Fact Check Pro

The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.

Freshness check

Score:
8

Notes:
The narrative appears to be original, with no evidence of prior publication. The Manchester Evening News article was published on November 24, 2025, and is not found in earlier sources. The report includes recent data, such as the enactment of the UK’s Online Safety Act in January 2025, which criminalises the sharing of non-consensual deepfake imagery. ([theguardian.com](https://www.theguardian.com/australia-news/article/2024/jun/01/creating-or-sharing-deepfake-porn-without-consent-to-be-under-proposed-new-australian-laws?utm_source=openai)) The narrative is not based on a press release, as no such source is identified. No discrepancies in figures, dates, or quotes were found. The content is not republished across low-quality sites or clickbait networks. No similar content appeared more than 7 days earlier. The inclusion of updated data alongside older material may justify a higher freshness score, but the recycling of older material should be flagged.

Quotes check

Score:
9

Notes:
Direct quotes from Arooj Shah and other individuals are unique to this report, with no identical matches found in earlier material. This suggests potentially original or exclusive content. No variations in quote wording were noted.

Source reliability

Score:
8

Notes:
The narrative originates from the Manchester Evening News, a reputable UK news outlet. However, the website was inaccessible during the fact-checking process, preventing direct verification. The report references other reputable sources, such as The Guardian, to support its claims. No unverifiable entities are mentioned.

Plausibility check

Score:
8

Notes:
The claims about AI-generated deepfake videos targeting public figures are plausible and align with known incidents, such as the misuse of AI in creating non-consensual deepfake pornography involving female politicians. ([theguardian.com](https://www.theguardian.com/australia-news/article/2024/jun/01/creating-or-sharing-deepfake-porn-without-consent-to-be-under-proposed-new-australian-laws?utm_source=openai)) The report lacks supporting detail from other reputable outlets, which is a concern. The language and tone are consistent with UK English and the topic. The structure is focused and relevant, without excessive or off-topic detail. The tone is formal and appropriate for a news report.

Overall assessment

Verdict (FAIL, OPEN, PASS): OPEN

Confidence (LOW, MEDIUM, HIGH): MEDIUM

Summary:
The narrative appears original and plausible, with direct quotes unique to this report. However, the Manchester Evening News website was inaccessible during the fact-checking process, preventing direct verification. The report references other reputable sources, such as The Guardian, to support its claims. The inclusion of updated data alongside older material may justify a higher freshness score but should be flagged. The lack of supporting detail from other reputable outlets is a concern. Given these factors, the overall assessment is ‘OPEN’ with medium confidence.

