AI technologies have transformed cyber threats in 2025, arming criminals with sophisticated tools while exposing new vulnerabilities and prompting urgent calls for regulation and innovative defence strategies.
In 2025, the cybersecurity landscape has been profoundly reshaped by the rapid adoption and exploitation of artificial intelligence (AI) technologies. According to a detailed report by IT Brew, cybercriminals and state-backed actors have leveraged generative AI and other advanced tools to conduct sophisticated attacks on IT infrastructures, causing widespread disruption and raising critical questions about the effectiveness of current defence measures. The arms race between cyber attackers and defenders increasingly revolves around who can deploy the latest AI-driven capabilities more swiftly and efficiently.
The origins of this challenge trace back to the mainstream adoption of AI, accelerated notably by OpenAI’s release of ChatGPT in 2022. The surge in AI development has enabled hackers to automate and scale attacks such as phishing and deepfake impersonation. Experts such as Andre Piazza of BforeAI have highlighted how attackers use AI to extract intelligence from digital profiles and websites, clone legitimate sites, including their full IT infrastructure, and deploy AI-generated phishing emails to hijack user credentials efficiently. Deepfake technology has also emerged as a critical threat vector, as demonstrated by Jumio’s Reinhard Hochrieser: with off-the-shelf AI tools, criminals can create convincing fake videos or voice calls to impersonate individuals, facilitating fraud with minimal technical effort. Hochrieser recounted producing a deepfake video from a single Instagram photo in just two minutes, underscoring how accessible the technology has become for malicious use.
These AI-driven tactics are not limited to isolated incidents but permeate multiple sectors. Retailers, for example, have faced a surge in AI-generated fraud attempts, particularly during peak shopping seasons when attackers inundate companies with deepfake calls and fake deals on social media. According to Axios, nearly a third of fraud attempts against large retail firms now involve AI-generated content, with some retailers receiving over a thousand such calls daily. These social engineering attacks often cause significant financial losses per incident, and both employees and consumers are urged to verify online offers directly through official channels.
Moreover, cybersecurity researchers have raised alarms about malicious large language models (LLMs) purpose-built for illicit activities. A TechRadar investigation revealed that criminal groups are deploying unrestricted AI tools such as WormGPT 4 and KawaiiGPT to automate malware production, craft sophisticated phishing campaigns, and generate ransom notes at scale. These tools significantly lower the skill barrier, enabling even less technically adept actors to launch damaging attacks autonomously. The proliferation of such malicious LLMs on platforms like Telegram underscores the accelerating threat landscape and the urgent need for improved regulation and response strategies.
Adding a further layer of complexity, Microsoft researchers have uncovered an alarming side-channel vulnerability in popular AI chatbots, dubbed “Whisper Leak”. The flaw allows eavesdroppers to infer the topic of encrypted conversations by analysing metadata patterns, such as data packet sizes and timing, without decrypting the messages themselves. Although developers like Microsoft and OpenAI have implemented some mitigations, not all providers have responded adequately. Because the issue is architectural, it risks exposing sensitive discussions, particularly over unsecured networks, prompting security experts to recommend cautious use of AI chatbots for confidential topics and to advocate VPNs and other protections to minimise exposure.
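To make the mechanism concrete, the sketch below shows in miniature how traffic metadata alone can betray what an encrypted session is about. It is a toy illustration on synthetic data, not the researchers’ actual method: the packet sizes, timings, and two-topic setup are all invented here, and a real attack would fingerprint the token-by-token bursts of streaming chatbot responses.

```python
# Toy sketch: topic fingerprinting from encrypted-traffic metadata alone,
# in the spirit of the "Whisper Leak" finding. All data below is synthetic.
import random
import statistics

def features(packets):
    """Summarise a packet trace of (size_bytes, inter_arrival_s) pairs."""
    sizes = [s for s, _ in packets]
    gaps = [t for _, t in packets]
    return (
        statistics.mean(sizes),
        statistics.pstdev(sizes),
        statistics.mean(gaps),
        float(len(packets)),
    )

def synthetic_trace(topic, n=80):
    """Fake trace; assumption: 'sensitive' prompts yield longer responses."""
    base = 140 if topic == "sensitive" else 90
    return [(random.gauss(base, 25), abs(random.gauss(0.05, 0.02)))
            for _ in range(n)]

# Train a nearest-centroid classifier on metadata -- no decryption involved.
random.seed(0)
centroids = {}
for topic in ("sensitive", "benign"):
    vecs = [features(synthetic_trace(topic)) for _ in range(50)]
    centroids[topic] = tuple(statistics.mean(col) for col in zip(*vecs))

def classify(trace):
    f = features(trace)
    return min(centroids,
               key=lambda t: sum((a - b) ** 2 for a, b in zip(f, centroids[t])))

print(classify(synthetic_trace("sensitive")))  # typically prints "sensitive"
```

The defence follows directly from the feature set: padding responses to uniform sizes or batching tokens blurs exactly the statistics such a classifier relies on, which is broadly the kind of mitigation providers have reportedly deployed.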
From a broader law enforcement perspective, Europol’s 2025 European Serious Organised Crime Threat Assessment paints a concerning picture of how AI is transforming organised crime. The agency warns that AI enables criminal enterprises to operate globally with unprecedented efficiency, generating multilingual scam messages, impersonating victims, and even producing AI-generated child abuse material. The report also foreshadows the potential emergence of autonomous, AI-controlled criminal networks that could execute complex crimes without direct human involvement. Current major threats include cyberattacks, drug and arms trafficking, migrant smuggling, and environmental crimes, all increasingly facilitated by AI.
State-backed actors are also investing heavily in AI for cyber warfare and disinformation campaigns. A Microsoft report indicates a dramatic increase in AI-driven cyberattacks by countries such as Russia, China, Iran, and North Korea, with the United States as the primary target. In July 2025 alone, over 200 incidents of AI-generated fake content were detected, a sharp rise from previous years. Tactics include crafting convincing phishing emails, cloning government officials via deepfakes, and employing automated hacking techniques. Despite denials from some states, such as Iran, security experts warn of the pressing need to modernise cybersecurity infrastructures to counter these sophisticated threats effectively.
On the defensive front, AI also offers real upsides for cybersecurity operations. Elyse Gunn, tCISO at Nasuni, notes that generative AI can offload lower-level help desk tasks, freeing human experts to concentrate on higher-value, proactive threat analysis and mitigation. Similarly, Andre Piazza points to the adoption of agentic AI in security operations centres, which not only speeds up existing tasks but also introduces capabilities such as predictive AI to anticipate attacks before they materialise.
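As a rough illustration of that predictive side, the hypothetical sketch below scores login events against a per-user baseline so unusual activity can be flagged for triage before an intrusion escalates. The event schema, the hour-of-day baseline, and the three-hour threshold are all invented for this example; production SOC tooling draws on far richer signals and models.

```python
# Hypothetical sketch of predictive alerting: score logins against each
# user's historical hours so a SOC can triage anomalies early.
from collections import defaultdict
from datetime import datetime

baseline_hours = defaultdict(list)  # user -> hours of past logins

def observe(user, ts):
    baseline_hours[user].append(ts.hour)

def anomaly_score(user, ts):
    """0.0 = routine for this user, 1.0 = entirely out of pattern."""
    hours = baseline_hours[user]
    if not hours:
        return 1.0  # unseen user: maximally suspicious
    def gap(a, b):  # circular distance between hours of day
        d = abs(a - b) % 24
        return min(d, 24 - d)
    return sum(gap(h, ts.hour) > 3 for h in hours) / len(hours)

for h in (9, 10, 9, 11, 10):          # alice normally logs in mid-morning
    observe("alice", datetime(2025, 11, 3, h))
print(anomaly_score("alice", datetime(2025, 11, 8, 3)))   # 1.0: flag for review
print(anomaly_score("alice", datetime(2025, 11, 8, 10)))  # 0.0: routine
```

Even this crude baseline shows the division of labour Gunn describes: the machine scores the routine cases, and human analysts spend their time on the anomalies.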
Caution remains warranted, however. HackerOne CEO Kara Sprague notes that attackers currently hold an advantage because they can deploy AI tools without the constraints faced by legitimate organisations, such as legal oversight or maintenance responsibilities. This agility lets cybercriminals adopt and operationalise AI-driven attack methods faster, complicating defence efforts.
In summary, while AI unquestionably escalates the scale and sophistication of cyber threats, it also equips defenders with advanced tools to counter these evolving risks. The balance between these forces will likely define cybersecurity in the coming years, making it imperative for organisations, governments, and security professionals to invest in AI-based defences and maintain vigilance against increasingly automated and AI-enhanced criminal tactics.
📌 Reference Map:
- [1] IT Brew – Paragraphs 1, 2, 3, 6, 7, 9, 10, 11
- [2] Axios – Paragraph 4
- [3] TechRadar – Paragraph 5
- [4] LiveScience – Paragraph 6
- [5] Reuters/Europol – Paragraph 7
- [6] AP News/Microsoft – Paragraph 8
Source: Noah Wire Services
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score:
8
Notes:
The narrative presents recent developments in AI-driven cybersecurity threats, with references to reports from November 2025. The earliest known publication of similar content dates to January 2025, indicating a high freshness score, though some material may be recycled from earlier reports. For instance, AI-driven social engineering was identified as a top cyber threat for 2026 in an ISACA survey published on 20 October 2025. ([infosecurity-magazine.com](https://www.infosecurity-magazine.com/news/ai-social-engineering-top-cyber/?utm_source=openai)) The report also references a Microsoft study from last month detailing the use of AI by state actors in cyberattacks. ([apnews.com](https://apnews.com/article/ad678e5192dd747834edf4de03ac84ee?utm_source=openai)) While the inclusion of updated data justifies a higher freshness score, the recycling of earlier material warrants a slight flag. The narrative also draws on a PwC press release dated 14 November 2025 discussing AI as a top cybersecurity investment priority, recent primary material that supports the freshness score. ([pwc.com](https://www.pwc.com/th/en/press-room/press-release/2025/press-release-14-11-25-en.html?utm_source=openai))
Quotes check
Score:
9
Notes:
The narrative includes direct quotes from experts such as Andre Piazza from BforeAI and Reinhard Hochrieser from Jumio. A search for the earliest known usage of these quotes indicates that they are original to this report, suggesting exclusivity. However, without access to the full context of the original statements, it’s challenging to verify the accuracy and authenticity of these quotes.
Source reliability
Score:
7
Notes:
The narrative references reputable organisations such as PwC, Microsoft, and Europol, lending credibility to the information presented. However, the PwC press release, while informative, is promotional material and may present a biased perspective.
Plausibility check
Score:
8
Notes:
The claims made in the narrative align with recent trends in AI-driven cyber threats, as reported by multiple reputable sources. For example, a Microsoft report from last month details the use of AI by state actors in cyberattacks. ([apnews.com](https://apnews.com/article/ad678e5192dd747834edf4de03ac84ee?utm_source=openai)) However, the narrative’s tone is unusually dramatic and does not resemble typical corporate or official language, warranting further scrutiny.
Overall assessment
Verdict (FAIL, OPEN, PASS): OPEN
Confidence (LOW, MEDIUM, HIGH): MEDIUM
Summary:
The narrative presents timely and relevant information on AI-driven cybersecurity threats, supported by references to recent reports from reputable organisations. While the PwC press release adds freshness, it may also introduce bias. The apparent originality of the quotes suggests exclusivity, but without full context their accuracy cannot be fully verified. The dramatic tone warrants further scrutiny. Overall, the report provides valuable insights but should be approached with a critical eye.

