Refuge reports a 62% increase in referrals where perpetrators exploit AI and connected devices to control women, amid warnings that technology-facilitated violence is proliferating globally. Experts are calling for tighter regulation and industry accountability to protect victims from emerging digital threats.
Domestic abuse services are reporting a sharp rise in cases where perpetrators use artificial intelligence and everyday connected devices to exert control over women, with Refuge saying its specialist caseload grew markedly over 2025. The charity recorded a 62% rise in its most complex technology-enabled abuse referrals, reaching 829 women, and a 24% increase in referrals of women under 30. According to Refuge, offenders are exploiting wearables, smart-home systems and AI tools to stalk, impersonate and otherwise manipulate victims. (Sources: Refuge, Refuge background).
Frontline workers describe a widening toolkit of digital tactics. Wearable gadgets such as smartwatches, Oura rings and Fitbits have been used to monitor survivors’ locations through synced cloud accounts, while smart heating and lighting have been weaponised to unsettle and intimidate them. Refuge’s technology team warns that devices designed without safeguards can be readily co-opted into patterns of coercive control. (Sources: Refuge, Refuge background).
Survivors’ accounts underline the practical and psychological toll. One woman, identified by Refuge as Mina, said she left a smartwatch behind when fleeing and was subsequently tracked via linked cloud accounts to emergency accommodation. “It was deeply shocking and frightening. I felt suddenly exposed and unsafe, knowing that my location was being tracked without my consent. It created a constant sense of paranoia; I couldn’t relax, sleep properly, or feel settled anywhere because I knew my movements weren’t private,” she said. Police returned the device, but Mina reported further breaches, including a private investigator locating her at a refuge; she said police told her no crime had been committed because she had “not come to any harm”. (Sources: Refuge, Refuge background).
Refuge also warns that AI is amplifying older forms of abuse. The charity’s tech team highlights examples where videos are manipulated to make survivors appear intoxicated or erratic, footage that can then be used to discredit them with social services or others. The organisation has also seen AI-generated documents and spoofed communications sent to survivors to coerce them into acting or to lure them to particular locations. Emma Pickering, head of Refuge’s tech-facilitated abuse team, said: “Time and again, we see what happens when devices go to market without proper consideration of how they might be used to harm women and girls. It is currently far too easy for perpetrators to access and weaponise smart accessories, and our frontline teams are seeing the devastating consequences of this abuse.” (Sources: Refuge, event overview).
International bodies warn that these trends are not confined to the UK. The United Nations Office at Geneva and UN Women have highlighted a global surge in digital violence against women, noting that AI-driven deepfakes and anonymous online tools are increasingly used to produce non-consensual imagery and to harass and coerce female targets. According to these UN bodies, a very large proportion of deepfakes online are non-consensual and overwhelmingly target women, underscoring the scale of the problem. (Sources: UN Office at Geneva, Refuge).
Refuge has expanded its response capacity: its Technology-Facilitated Abuse and Economic Empowerment Team, established in 2017, provides specialist support and bespoke safety planning to survivors, as well as advice to agencies and tech developers. The charity runs accredited training for professionals covering topics from online misogyny and the so-called manosphere to the specific risks posed by AI companions, and recently hosted an online session titled “Power and Control in the Age of AI” to share emerging insights. Meanwhile, industry initiatives have begun to surface, with some technology firms and non-profits announcing partnerships to better protect victims from digitally enabled abuse. (Sources: Refuge background, Refuge training, industry initiative).
Refuge is pressing for stronger regulatory and law-enforcement responses, calling for investment in digital investigation teams and for the tech industry to be held to account for design and safety failures. “It is unacceptable for the safety and wellbeing of women and girls to be treated as an afterthought once a technology has been developed and distributed. Their safety must be a foundational principle shaping both the design of wearable technology and the regulatory frameworks that surround it,” Pickering added. The UK government said tackling violence against women and girls, including when facilitated by technology, is a priority and pointed to its VAWG strategy and work with Ofcom on online harms. (Sources: Refuge, Refuge event, government statement as reported).
Source Reference Map
Inspired by headline at: [1] https://www.theguardian.com/society/2026/jan/30/abusers-using-ai-and-digital-tech-to-attack-and-control-women-charity-warns
Source: Noah Wire Services
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score: 8
Notes: The article from The Guardian, dated 30 January 2026, reports on Refuge’s recent findings regarding the rise in technology-facilitated abuse. ([theguardian.com](https://www.theguardian.com/society/2026/jan/30/abusers-using-ai-and-digital-tech-to-attack-and-control-women-charity-warns)) Similar data from Refuge indicates a 62% increase in referrals to its Technology-Facilitated Abuse and Economic Empowerment Team in the first nine months of 2025 compared with the same period in 2024. ([refuge.org.uk](https://refuge.org.uk/news/refuge-reveals-rise-in-tech-abuse-including-spycam-surveillance-ahead-of-uns-16-days-of-activism/)) This suggests the information is current rather than recycled, although the overlap in data points raises questions about the novelty of the report’s findings.
Quotes check
Score: 7
Notes: The article includes direct quotes from Emma Pickering, head of Refuge’s tech-facilitated abuse team. ([theguardian.com](https://www.theguardian.com/society/2026/jan/30/abusers-using-ai-and-digital-tech-to-attack-and-control-women-charity-warns)) While these quotes are attributed to her, they are not independently verifiable through other sources. The lack of corroboration from external parties or publications makes it difficult to fully authenticate the statements.
Source reliability
Score: 9
Notes: The Guardian is a reputable major news organisation, lending credibility to the article. However, the reliance on a single source, Refuge, for the data and quotes introduces potential bias. The absence of independent verification from other organisations or experts in the field is a concern.
Plausibility check
Score: 8
Notes: The claims about the rise in technology-facilitated abuse are plausible and align with existing concerns about digital safety. However, the lack of independent verification and the reliance on a single source for the data and quotes reduce the overall credibility of the claims.
Overall assessment
Verdict (FAIL, OPEN, PASS): FAIL
Confidence (LOW, MEDIUM, HIGH): MEDIUM
Summary: While the article presents plausible claims about the rise in technology-facilitated abuse, the heavy reliance on a single source, Refuge, for both data and quotes without independent verification raises significant concerns about the accuracy and objectivity of the information. The lack of corroboration from other reputable sources diminishes the overall credibility of the report.
