An investigation has revealed that AI-generated deepfake videos impersonating recognised medical experts are being used to promote unverified treatments and products online, raising public health concerns amid persistent challenges in detection and moderation.
AI-generated deepfake videos of recognised doctors and academics are being used on social media to push unproven health claims and steer viewers towards supplement vendors, an investigation has found. According to the original report by Full Fact, fabricated footage has been manipulated to show experts endorsing remedies for conditions such as menopausal symptoms. [1][2]
Professor David Taylor‑Robinson of the University of Liverpool was among those impersonated; footage of a real conference appearance was altered to make him appear to discuss a supposed symptom nicknamed “thermometer leg” and to recommend a natural probiotic. Professor Taylor‑Robinson told Full Fact: “One of my friends said his wife had seen it and was almost taken in by it, until their daughter said it’s obviously been faked.” The doctored video amassed hundreds of thousands of views before removal. [1][2]
The clips typically conclude by urging viewers to buy products from a US supplements company called Wellness Nest. Text attached to the altered footage claimed the probiotic “features ten science-backed plant extracts, including turmeric, black cohosh, DIM, moringa, specifically chosen to tackle menopausal symptoms”, adding: “Women I work with often report deeper sleep, fewer hot flushes, and brighter mornings within weeks.” Wellness Nest told Full Fact the content was “100% unaffiliated” with its business. [1]
Platforms have struggled to detect and moderate this new wave of fraud. Full Fact reported that TikTok initially said the videos did not breach its policies; only after multiple reports from the university, Professor Taylor‑Robinson and his family did TikTok acknowledge a moderation error, restrict the videos’ visibility and later remove the posts and the account, apologising for the mistake. Other outlets investigating similar cases reached the same conclusion about uneven enforcement. [1][2][4]
Observers warn this is part of a wider pattern that poses public‑health risks. The Australian Medical Association has urged clearer, enforceable regulation of health advertising online after high‑profile clinicians were exploited in deepfakes, while news organisations and experts have highlighted cases where such videos target vulnerable people or those with chronic conditions, risking harm and financial loss. Industry commentary also emphasises the potential for deepfakes to erode trust in legitimate medical advice. [3][4][5][6]
Investigations indicate the problem is cross‑platform and international: similar impersonations of clinicians and academics have been identified promoting unverified products across TikTok and other social networks, and some promoted items were not even listed on the sellers’ official sites. Journalistic and medical bodies call for improved detection tools, faster takedowns, stronger advertising rules, and public education so viewers can better judge online health information. [2][5][7]
For now, experts advise caution when encountering medical endorsements online: check whether the expert has publicly linked to the claim, look for corroboration from reputable medical bodies, and report suspected deepfakes to the platform and to the named individual’s institution. Industry data and commentary show these steps, combined with regulatory action, are central to limiting the spread and impact of health‑related deepfakes. [4][6][3]
## Reference Map
- [1] (AOL / Full Fact) – Paragraph 1, Paragraph 2, Paragraph 3, Paragraph 4
- [2] (The Guardian) – Paragraph 1, Paragraph 2, Paragraph 4, Paragraph 6
- [3] (Australian Medical Association) – Paragraph 5, Paragraph 7
- [4] (CBS News) – Paragraph 4, Paragraph 7
- [5] (ABC News) – Paragraph 5, Paragraph 6
- [6] (Forbes) – Paragraph 5, Paragraph 7
- [7] (Misbar) – Paragraph 6
Source: Noah Wire Services
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score:
8
Notes:
The narrative is recent, with the article published on December 5, 2025. Similar incidents involving AI-generated deepfake videos of doctors promoting unverified health claims were reported earlier in 2025, such as the CBS News report on August 14, 2025, detailing deepfake videos impersonating Dr. Joel Bervell to promote false medical advice and treatments. ([cbsnews.com](https://www.cbsnews.com/news/deepfake-videos-impersonating-real-doctors-push-false-medical-advice-treatments/?utm_source=openai)) Additionally, a report from the Australian Medical Association on December 10, 2024, highlighted deepfake videos targeting health professionals to promote dubious products. ([abc.net.au](https://www.abc.net.au/news/health/2024-12-10/diabetes-supplements-deepfake-ads-targeting-health-professionals/104665824?utm_source=openai)) While the specific details in the article are new, the broader issue of AI-generated deepfake videos in the medical field has been previously reported. The article includes updated data alongside recycled older material; the new details support a relatively high freshness score, but the recycled elements should still be flagged.
Quotes check
Score:
9
Notes:
The article includes direct quotes from Professor David Taylor-Robinson and Wellness Nest, which appear to be original and not found in earlier reports. No identical quotes were found in earlier material, suggesting the content is potentially original or exclusive.
Source reliability
Score:
7
Notes:
The narrative originates from AOL, a reputable organisation. However, the article references multiple sources, including Full Fact, The Guardian, and CBS News, which are also reputable. The inclusion of multiple sources strengthens the reliability of the information presented.
Plausibility check
Score:
8
Notes:
The claims made in the narrative are plausible and align with known issues regarding AI-generated deepfake videos in the medical field. The article provides specific examples, such as the manipulation of Professor David Taylor-Robinson’s image to promote unverified health claims, consistent with previous reports on similar incidents. Additional detail from other reputable outlets would further corroborate the claims. The tone and language are consistent with typical reporting on such issues, and there are no signs of excessive or off-topic detail unrelated to the claims.
Overall assessment
Verdict (FAIL, OPEN, PASS): PASS
Confidence (LOW, MEDIUM, HIGH): HIGH
Summary:
The narrative presents a recent and plausible account of AI-generated deepfake videos of doctors promoting unverified health claims. While similar incidents have been reported earlier in 2025, the specific details and quotes in this article appear original and exclusive. The sources cited are reputable, and the claims made are consistent with known issues in the field. The lack of supporting detail from other reputable outlets is noted but does not significantly undermine the overall credibility of the narrative.
