Model and actress Savannah Adwoa Mensah revealed her online image was misused in AI-driven scams, highlighting the rising danger of synthetic identity abuse across Ghanaian social media and the need for enhanced legal and media literacy responses.
Model and actress Savannah Adwoa Mensah says she first realised how vulnerable her online identity had become when she spotted a flawless, unfamiliar image of herself used to sell a herbal skincare product on Facebook. She publicly warned followers: “If you see an ad of me promoting this product, it’s not me. It’s an AI-generated image used without my consent.” According to local reporting, her experience is far from isolated as synthetic likenesses proliferate across Ghanaian social media.
The misuse stretches beyond images. Journalists and broadcasters report cloned voices and fabricated endorsements being deployed to market dubious medical remedies and commercial products, sometimes without any identifiable company behind them. Industry analysts and international reporting have linked such scams to organised fraud rings that exploit generative technology to scale deception rapidly.
High-profile incidents elsewhere in the region underline how quickly false material can spread. Broadcasters in South Africa were impersonated in realistic videos that promoted investment scams and drew hundreds of thousands of views before platforms intervened, demonstrating the speed and reach of synthetic media when coupled with social networks.
Data firms and verification specialists have sounded the alarm about a sharp uptick in deepfake-enabled fraud. A recent industry analysis found a multi-fold rise in cases linked to synthetic identities in late 2024, warning that these techniques have moved from niche experiments to tools that cause measurable financial and reputational harm.
Ghana’s existing laws offer routes for redress but have yet to be tested thoroughly against the novel mechanics of AI impersonation. Legal practitioners note that unauthorised use of a person’s image or voice may engage data-protection provisions and constitutional privacy guarantees, but they caution that courts have limited precedent for assigning liability in cases where synthetic media are generated and distributed by opaque actors.
Enforcement faces practical hurdles. Investigators and prosecutors contend with scant forensic capacity to trace the provenance of synthetic content, challenges in preserving admissible digital evidence and jurisdictional obstacles when campaigns originate overseas. Observers say those gaps make quick takedowns and prosecutions difficult, even when the harms are clear.
Alongside legal responses, media literacy advocates emphasise prevention. Trainers and communications scholars have urged the public to develop verification habits ahead of elections and other high-stakes moments, offering practical checks to distinguish manipulated media and reduce the likelihood of viral amplification.
Security analysts warn the political implications are acute: AI-crafted audio or video can be tailored to sway voters, smear opponents or trigger financial consequences, particularly around election cycles. Commentators advise a mix of platform responsibility, stronger verification systems and public awareness campaigns to shore up trust in digital information flows.
For those targeted, the consequences are immediate and personal. Senior journalist Maame Esi Nyamekye Thompson responded online to a counterfeit diabetes advert bearing her likeness: “This is still ongoing. I never did this advert lol.” Her reaction captures the indignity and confusion victims face as they try to disentangle their reputations from synthetic falsehoods while regulators, platforms and civil society scramble to catch up.
Source: Noah Wire Services
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We've since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score: 8
Notes:
The article was published on April 5, 2026, and reports on recent incidents involving AI-generated images of Ghanaian public figures. Similar cases have been reported in the past, such as the use of AI-generated images of Taylor Swift in 2024 ([citizen.digital](https://www.citizen.digital/business/explicit-ai-generated-taylor-swift-images-spread-quickly-on-social-media-n335549?utm_source=openai)). However, the specific incidents involving Savannah Adwoa Mensah and Maame Esi Nyamekye Thompson appear to be recent and not previously reported, indicating a high level of freshness.
Quotes check
Score: 7
Notes:
Direct quotes from Savannah Adwoa Mensah and Maame Esi Nyamekye Thompson are included. While these quotes are compelling, they cannot be independently verified through the available sources, which raises concerns about their authenticity.
Source reliability
Score: 5
Notes:
The article originates from CediRates, a website that appears to be a niche publication. The lack of information about the publication’s editorial standards and independence raises concerns about the reliability of the source. Additionally, the article relies heavily on quotes from individuals without independent verification, which further diminishes its reliability.
Plausibility check
Score: 6
Notes:
The incidents described are plausible, given the increasing use of AI-generated images without consent ([oecd.ai](https://oecd.ai/en/incidents/2023-07-25-07ed?utm_source=openai)). However, the unverified quotes and the reliance on a single source for these incidents raise questions about the overall credibility of the claims.
Overall assessment
Verdict (FAIL, OPEN, PASS): FAIL
Confidence (LOW, MEDIUM, HIGH): MEDIUM
Summary:
The article reports on recent incidents involving AI-generated images of Ghanaian public figures. While the incidents are plausible and the content is recent, the lack of independent verification for the quotes and the reliance on a single, niche source raise significant concerns about the credibility and reliability of the information presented. Given these issues, the content does not meet the necessary standards for publication under our editorial guidelines.