Shoppers, businesses and public services are waking up to a new fraud wave as AI deepfakes become cheaper, faster and more convincing. Anti-fraud professionals are warning consumers and organisations worldwide to sharpen their defences, because what looks and sounds real can now be fake, and the stakes are financial trust and privacy.
- Widespread surge: 77% of anti-fraud professionals say deepfake social engineering has accelerated in the past two years.
- Preparedness gap: Fewer than one in 10 fraud experts feel well prepared to tackle AI-powered scams, leaving many organisations exposed.
- Real-time defence wins: Combining identity signals with AI analytics reduces false positives and speeds decisions, making fraud controls feel smarter and less intrusive.
- Sector impact: Banks, insurers and public programmes are already using machine learning and network analytics to spot fraud rings and cut investigation times.
- Practical tip: Treat unfamiliar voice or video requests as suspicious, verify via a second channel, and enable real-time behavioural checks where possible.
Why anti-fraud teams say deepfakes are changing the game now
Deepfakes no longer belong only to viral prank videos; they’re being weaponised to impersonate executives, customers and citizens in scams that feel shockingly real. That sensory shock, a voice you recognise or a face moving like someone you trust, is exactly what attackers are exploiting, and anti-fraud professionals are noticing the scale and speed of the shift.
Surveyed members of the Association of Certified Fraud Examiners reported a big uptick in AI-driven social engineering, and most expect it to grow further. The emotional impact is immediate: victims feel betrayed and institutions lose credibility fast, so awareness and simple verification habits matter more than ever.
How organisations are fighting back with AI, and why that sounds ironic
It’s ironic that AI powers both the attack and the defence, but that’s exactly what’s happening. Banks and national identity providers are feeding identity signals into real-time machine learning systems to spot odd behaviour, not just suspicious content. The result is tangible: fewer false alarms and quicker, calmer decisions.
In Norway, a national digital ID provider linked identity signals to an AI fraud-scoring engine and moved from reacting to anticipating fraud. In the UAE and South Korea, real-time monitoring and network analytics have exposed hidden fraud rings and sped up investigations, showing that AI can scale protections as attackers scale attacks.
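For technically minded readers, the core idea is simpler than it sounds: identity signals become numeric features, a model turns them into a risk score, and the score drives a real-time decision. Below is a minimal Python sketch of that loop; the signal names, weights and thresholds are illustrative assumptions, not any bank’s or identity provider’s actual model.

```python
import math

# Hypothetical identity signals for one login or payment event.
# Field names are illustrative, not any vendor's real schema.
event = {
    "device_is_new": 1,         # 1 if this device has never been seen for the user
    "geo_velocity_kmh": 900.0,  # implied travel speed since the last verified login
    "failed_mfa_last_24h": 2,   # recent failed multi-factor attempts
    "txn_amount_zscore": 3.1,   # how unusual the amount is vs. the user's history
}

# Toy weights standing in for a trained model's coefficients.
WEIGHTS = {
    "device_is_new": 1.2,
    "geo_velocity_kmh": 0.004,
    "failed_mfa_last_24h": 0.8,
    "txn_amount_zscore": 0.9,
}
BIAS = -4.0

def fraud_score(signals: dict) -> float:
    """Logistic score in [0, 1]: higher means more likely fraudulent."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in signals.items())
    return 1.0 / (1.0 + math.exp(-z))

score = fraud_score(event)
# Route in real time: approve, step up (extra verification), or block.
if score > 0.9:
    decision = "block"
elif score > 0.5:
    decision = "step-up"  # e.g. ask for a liveness check or second channel
else:
    decision = "approve"

print(f"score={score:.2f} decision={decision}")
```

The middle “step-up” band is what cuts false positives in practice: borderline events trigger extra verification rather than an outright block, so legitimate customers are inconvenienced briefly instead of rejected.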
What consumers should do today to avoid falling for a deepfake
If a call, video or message asks for money, passwords or transfers, pause. Verify identity through an independent channel: call a known number, log into the official app, or check with a colleague in person. Trust your instincts: if a message feels off or urgent in a way that pressures you, treat it as suspicious.
Also enable multi-factor authentication, keep apps and devices updated, and be cautious about sharing recent photos or voice samples online. Those bits of personal data feed the very models scammers use to build convincing fakes.
Why smaller organisations and public services are especially vulnerable
Smaller teams often lack dedicated fraud units and tend to rely on manual checks, which are slow and inconsistent. That makes them ripe targets for scaled social engineering attacks where speed and believability matter. Public programmes with tight budgets face the same problem, yet smart automation can halve investigation times and free limited staff to focus on complex cases.
Investing in behaviour-based analytics and network detection is a practical, affordable step many organisations are already taking. It’s less about perfection and more about raising the baseline of detection and verification.
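As a rough illustration of what network detection means here, the sketch below links accounts that share a device and surfaces larger clusters as candidate fraud rings. The identifiers and the size threshold are made up for the example; production systems draw on far richer signals (IP addresses, payment instruments, contact details) and purpose-built graph stores rather than in-memory dictionaries.

```python
from collections import defaultdict

# Illustrative account -> device observations; real systems ingest
# these from login telemetry. All identifiers here are made up.
observations = [
    ("acct_01", "device_A"), ("acct_02", "device_A"),  # shared device
    ("acct_02", "device_B"), ("acct_03", "device_B"),  # chained sharing
    ("acct_04", "device_C"),                           # isolated account
]

# Build an undirected graph linking accounts that share a device.
device_to_accts = defaultdict(set)
for acct, device in observations:
    device_to_accts[device].add(acct)

neighbours = defaultdict(set)
for accts in device_to_accts.values():
    for a in accts:
        neighbours[a] |= accts - {a}

def connected_component(start: str) -> set:
    """Depth-first search to collect every account linked to `start`."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(neighbours[node] - seen)
    return seen

# Components above a threshold are candidate fraud rings worth
# a human investigator's attention.
visited = set()
for acct in sorted(neighbours):
    if acct not in visited:
        ring = connected_component(acct)
        visited |= ring
        if len(ring) >= 3:
            print("possible ring:", sorted(ring))
```

Running this prints the three chained accounts as one cluster while leaving the isolated account alone, which is exactly the triage effect that lets small teams spend scarce investigator time on the cases that matter.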
What to look for when choosing fraud-fighting tools and vendors
Look for solutions that combine identity signals, real-time decisioning and explainable AI so you can see why a transaction was flagged. Preference should go to systems that reduce false positives while surfacing real threats, with options to integrate into existing workflows.
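Explainability can be as simple as reporting which signals contributed most to a flag. The sketch below mirrors the earlier scoring example: it ranks per-signal contributions so an analyst can see at a glance why a transaction fired. Again, the weights and signal names are assumptions for illustration, not a real vendor’s feature set.

```python
# A hedged sketch of "explainable" flagging: alongside the score, return
# the per-signal contributions so an analyst can see why it fired.
WEIGHTS = {"device_is_new": 1.2, "geo_velocity_kmh": 0.004, "txn_amount_zscore": 0.9}

def explain(signals: dict, top_n: int = 2) -> list[tuple[str, float]]:
    """Rank signals by their contribution to the raw score."""
    contributions = {k: WEIGHTS[k] * v for k, v in signals.items()}
    return sorted(contributions.items(), key=lambda kv: -kv[1])[:top_n]

flagged = {"device_is_new": 1, "geo_velocity_kmh": 900.0, "txn_amount_zscore": 3.1}
for name, contrib in explain(flagged):
    print(f"{name}: +{contrib:.2f}")
# -> geo_velocity_kmh: +3.60
#    txn_amount_zscore: +2.79
```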
Also prioritise vendors that emphasise training and public education. Technology helps, but human awareness, from call-centre staff to frontline public servants, closes many of the gaps attackers try to exploit.
Where the threat goes next and how to stay ahead
Expect deepfakes to get cheaper and more personalised, and attackers to mix social media data with voice and video cloning. That means verification habits must evolve too: second-channel checks, biometric liveness tests and continuous behaviour monitoring will become standard.
But there’s optimism: as more organisations share signals and best practice, detection improves. That collaboration, plus smarter AI defences, can blunt scammers’ edge. It won’t be quick, but it’s already working in places where identity data and analytics are combined.
Ready to make fraud prevention part of daily life? Check your security settings, verify unusual requests via another channel, and explore current fraud-detection options to find one that suits your organisation or household.
Noah Fact Check Pro
The draft above was created using the information available at the time the story first
emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed
below. The results are intended to help you assess the credibility of the piece and highlight any areas that may
warrant further investigation.
Freshness check
Score:
8
Notes:
The narrative was published on November 19, 2025, and presents recent survey findings from the Association of Certified Fraud Examiners (ACFE) regarding the rise of AI-driven deepfake fraud. Similar reports have emerged in the past year, such as Sumsub’s research showing a 10-fold increase in deepfake incidents from 2022 to 2023. ([sumsub.com](https://sumsub.com/newsroom/sumsub-research-global-deepfake-incidents-surge-tenfold-from-2022-to-2023/?utm_source=openai)) However, the specific survey data cited in this narrative appears to be original and not recycled from previous publications.
Quotes check
Score:
9
Notes:
The narrative includes direct quotes from John Gill, President of the ACFE, and Stu Bradley, Senior Vice President of Risk, Fraud and Compliance Solutions at SAS. These quotes are not found in earlier publications, indicating they are original to this report.
Source reliability
Score:
7
Notes:
The narrative originates from IT Brief Asia, a technology news outlet. While it is a specialised publication, it is not as widely recognised as major international news organisations. The ACFE is a reputable organisation, lending credibility to the survey data presented.
Plausibility check
Score:
8
Notes:
The claims about the rise of AI-driven deepfake fraud are consistent with recent trends reported by other sources. For instance, Sumsub’s research indicates a significant increase in deepfake incidents globally. ([sumsub.com](https://sumsub.com/newsroom/sumsub-research-global-deepfake-incidents-surge-tenfold-from-2022-to-2023/?utm_source=openai)) The narrative provides practical tips for consumers to avoid falling victim to deepfake scams, which are reasonable and align with current best practices in cybersecurity.
Overall assessment
Verdict (FAIL, OPEN, PASS): PASS
Confidence (LOW, MEDIUM, HIGH): HIGH
Summary:
The narrative presents original and timely information about the rise of AI-driven deepfake fraud, supported by recent survey data from the ACFE. The quotes included are unique to this report, and the claims made are consistent with other reputable sources. While the source is a specialised publication, the information provided is credible and aligns with current trends in cybersecurity.