Immersive warns that over-reliance on automated threat detection may expose organisations to heightened cyber risk, with AI enabling increasingly sophisticated attack methods and social engineering by 2026.

Cyber security experts at Immersive warn that artificial intelligence is already reshaping how adversaries hunt, extort and deceive, and that by 2026 organisations that over-rely on automation will face heightened risk, particularly across critical infrastructure. According to the original report, the company’s specialists set out a series of predictions that echo broader industry findings showing rising use of AI by state and criminal actors and a surge in synthetic media and automated attack tooling. [1][2][3]

Dave Spencer, Director of Technical Product Management at Immersive, framed the core tension: organisations are racing to automate threat hunting, but automation cannot replace human judgement. As Spencer told IT Brief, “As conversations about automating threat hunting intensify, it’s clear that technology alone won’t define resilience. Signature-based detection still has its place, but attack methodologies evolve too quickly for static indicators to keep up. The best teams hunt for behavior and intent, not alerts. While AI may excel at spotting patterns, human judgment will remain the deciding factor.” He warned that this tension is most acute in operational environments where misguided automation could imperil safety-critical systems. [1]

The promise of automated threat hunting is real: academic and industry work demonstrates practical gains from AI-driven hypothesis generation and pattern recognition, but it also introduces a new dependency. Research prototypes such as APThreatHunter show that automated planning can reduce human bias and surface risks from large telemetry sets, yet Immersive and other observers stress that humans must validate machine-generated hypotheses and retain control of high-stakes decisions. That balance underpins resilience. [5][1]
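
To make that division of labour concrete, here is a minimal, hypothetical sketch of a behaviour-based hunting loop in which machine-generated hypotheses are queued for analyst review rather than acted on automatically. It is not Immersive’s tooling or APThreatHunter’s planner; every name, event field and threshold below is an illustrative assumption.

```python
# Toy behaviour-based hunting loop (illustrative only; all names,
# event fields and thresholds are hypothetical assumptions).
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    description: str      # human-readable summary of the suspected behaviour
    anomaly_score: float  # model- or heuristic-assigned suspicion score
    evidence: list = field(default_factory=list)

def generate_hypotheses(telemetry: list) -> list:
    """Stand-in for an AI pattern-recognition pass over raw telemetry.

    Flags a behavioural signal (an office app spawning a shell) rather
    than matching a static signature, echoing the point above about
    hunting for behaviour and intent, not alerts.
    """
    found = []
    for event in telemetry:
        if event["parent"] in {"winword.exe", "excel.exe"} and event["child"] == "cmd.exe":
            found.append(Hypothesis(
                description=f"{event['host']}: {event['parent']} spawned {event['child']}",
                anomaly_score=0.9,
                evidence=[event],
            ))
    return found

def triage(hypotheses: list, threshold: float = 0.8) -> list:
    """Only high-scoring hypotheses reach a human; none trigger action alone."""
    return [h for h in hypotheses if h.anomaly_score >= threshold]

telemetry = [
    {"host": "host-42", "parent": "winword.exe", "child": "cmd.exe"},
    {"host": "host-07", "parent": "explorer.exe", "child": "notepad.exe"},
]
for h in triage(generate_hypotheses(telemetry)):
    print(f"[FOR ANALYST REVIEW] {h.description} (score={h.anomaly_score})")
```

The design point is the final loop: the machine proposes and a person disposes, which is the validation step both the research and Immersive insist on.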

Immersive expects the convergence of information technology and operational technology to accelerate, producing “smarter” control systems while legacy assets persist, a combination that widens the attack surface. The U.S. Department of Homeland Security has emphasised similar risks: integrating AI and IoT can improve efficiency and monitoring, but it also expands opportunities to manipulate smart-city infrastructure and industrial control systems, with potentially physical consequences. Industry data and recent incidents suggest attackers are already probing these blended environments for timing and context to maximise disruption. [1][6][7]

Regulatory and standards responses are likely to follow. Immersive predicts stronger requirements specific to OT and critical national infrastructure, shaped by frameworks such as ISA/IEC 62443 and NIST SP 800-82; European authorities are already signalling an uptick in coordinated countermeasures. Europol’s 2025 threat assessment warns that AI is accelerating organised crime across the EU and that closer cooperation, staffing, and funding for law enforcement will be necessary to counter increasingly precise, AI-enabled threats. [1][3]

On monetisation, Immersive’s analysts foresee extortion models evolving as datasets used to train AI gain commercial value. Ben McCarthy, Lead Cyber Security Engineer at Immersive, argued criminals may pivot from “name and shame” leaks toward selling data to buyers seeking fresh training material, while also leveraging LLM-assisted malware that adapts in real time. This prediction aligns with reports that state and criminal actors are weaponising AI to automate campaigns and refine phishing and impersonation techniques, and with the emergence of aggressive ransomware groups that combine data exfiltration with high-value demands. [1][2][4]

Human targets are expected to face more industrialised, AI-enhanced social engineering. John Blythe, Director of Cyber Psychology at Immersive, warned that “By 2026, AI-weaponized deception will define the threat landscape. Attackers will use AI to scale hyper-realistic social engineering, deepfakes, and phishing. Organisations that rely solely on technology, processes, and policies as their primary solution will fail.” He pointed to a worrying gap between the perceived maturity of training programmes and measurable resilience, arguing that routine exercising and behavioural hardening must replace mere awareness. Europol and other agencies similarly highlight synthetic media and voice cloning as multiplying vectors for fraud, blackmail and deception. [1][3]

Taken together, the assessments point to a layered policy and operational response: invest in AI-assisted detection and anomaly monitoring, but retain human oversight and rigorous change management for OT; accelerate workforce upskilling and continuous, scenario-based exercising; and update standards and incident-response playbooks to account for data-as-commodity extortion and adaptive, LLM-assisted malware. The company’s guidance echoes DHS analysis that the attack surface will grow as AI and IoT proliferate, and academic work showing automated hunting can be a force-multiplier if paired with rigorous human validation and controls. [1][6][5]
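
As a sketch of what human oversight and rigorous change management for OT might mean in practice, the snippet below gates automated response actions behind operator approval for safety-critical assets. It is a hypothetical illustration, not a real OT product API; the asset names, function and approval mechanism are all assumptions.

```python
# Hypothetical human-in-the-loop gate for automated OT response actions.
# Asset names and the approval mechanism are illustrative assumptions.
from typing import Optional

SAFETY_CRITICAL = {"plc-boiler-1", "rtu-substation-3"}  # assets automation may never touch alone

def dispatch(action: str, asset: str, approved_by: Optional[str] = None) -> str:
    """Execute low-risk actions directly; queue safety-critical ones for sign-off."""
    if asset in SAFETY_CRITICAL and approved_by is None:
        return f"QUEUED: '{action}' on {asset} awaits operator sign-off"
    suffix = f" (approved by {approved_by})" if approved_by else ""
    return f"EXECUTED: '{action}' on {asset}{suffix}"

print(dispatch("isolate-segment", "workstation-12"))    # low risk: runs immediately
print(dispatch("shutdown-process", "plc-boiler-1"))     # safety-critical: queued for a human
print(dispatch("shutdown-process", "plc-boiler-1", approved_by="shift-engineer"))
```

The pattern reflects Spencer’s point: automation can recommend at machine speed, but in operational environments a person remains the deciding factor for anything that could imperil safety-critical systems.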

📌 Reference Map:

  • [1] (IT Brief) – Paragraphs 1–8
  • [2] (AP) – Paragraphs 1, 6
  • [3] (Europol / AP summary) – Paragraphs 1, 5, 7
  • [4] (Wikipedia – Royal / BlackSuit) – Paragraph 6
  • [5] (arXiv APThreatHunter) – Paragraphs 3, 8
  • [6] (U.S. DHS report) – Paragraphs 4, 8
  • [7] (Human Security) – Paragraphs 4, 7

Source: Noah Wire Services

Noah Fact Check Pro

The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.

Freshness check

Score: 8

Notes:
The narrative was published on 12 December 2025, making it current. However, similar themes have been discussed in recent months, such as the ISACA report from October 2025 highlighting AI-driven cyber threats as a major concern for 2026. ([isaca.org](https://www.isaca.org/about-us/newsroom/press-releases/2025/ai-driven-cyber-threats-are-the-biggest-concern-for-professionals-finds-new-isaca-research?utm_source=openai)) Additionally, an article from October 2025 discusses AI’s role in reshaping cybersecurity. ([meed.com](https://www.meed.com/ai-reshapes-the-future-of-cybersecurity?utm_source=openai)) Despite these parallels, the specific predictions and insights provided by Immersive in this report appear to be original.

Quotes check

Score: 9

Notes:
The direct quotes from Immersive’s experts are unique to this report. No identical quotes were found in earlier material, indicating original content.

Source reliability

Score: 7

Notes:
The narrative originates from IT Brief UK, a technology news outlet for CIOs and IT decision-makers. While it is a specialised publication, it is not as widely recognised as major outlets like the BBC or Reuters. The report cites experts from Immersive, a cybersecurity company, which adds credibility. However, the lack of external verification of Immersive’s claims and the absence of corroboration from other reputable sources slightly diminish the overall reliability.

Plausibility check

Score: 8

Notes:
The claims made in the narrative align with current trends in cybersecurity, particularly regarding the integration of AI in threat detection and the challenges of automating critical infrastructure security. However, the specific predictions about the evolution of cyber extortion models and AI-driven deception by 2026 are speculative and lack direct evidence. The absence of supporting details from other reputable outlets and the reliance on a single company’s projections raise questions about the plausibility of these specific claims.

Overall assessment

Verdict (FAIL, OPEN, PASS): OPEN

Confidence (LOW, MEDIUM, HIGH): MEDIUM

Summary:
The narrative presents current and plausible concerns about AI’s impact on cybersecurity, with original insights from Immersive’s experts. However, the lack of corroboration from other reputable sources and the speculative nature of some claims about future developments reduce the overall confidence in the report’s accuracy.
