A prominent Spanish women’s rights advocate, targeted by AI-generated nude images, is urging tougher online rules that would link accounts to real identities, amid rising digital abuse and shifting European regulation.
A Spanish women’s rights campaigner who became the target of AI-manufactured nude images has urged tougher online rules, pressing authorities to end the near-anonymity she says enables repeated digital assaults. According to recent reporting, she wants platforms required to link accounts to identifiable individuals so perpetrators cannot act with impunity. [2],[4]
Madrid is preparing a suite of measures that go beyond fines and takedown orders, including a proposed ban on under-16s using social media and the prospect of criminal liability for executives who fail to remove illegal or hateful material. The moves form part of a wider European turn towards stricter controls on large U.S. tech firms. [5],[2]
The activist, who combines legal training with a high public profile, said the scale of online abuse had forced governments to act only after the problem became impossible to ignore. “Social media isn’t new – and the violence is brutal, systematic, 24/7,” she told Reuters. She also described going to the police only to be told her case did not amount to a crime: “What hit me hardest wasn’t the deepfake, it was going to the police and being told it wasn’t even a crime.” Similar episodes, including investigations into AI-generated images of minors and prosecutions of young offenders, have exposed gaps in the law and its enforcement. [3],[6]
She rejected a blanket age-based cut-off for social media as insufficient, describing proposals to bar children as “paternalistic” and arguing that protections must extend to all users. Campaign groups and surveys have documented widespread harm: one study found that large numbers of young people in Spain have been subjected to AI-generated sexual imagery without consent. [4],[2]
While defending the right to use pseudonyms online, she said platforms should be required to hold verifiable identity information behind profiles: “Call yourself ‘PeppaPig88’ if you want – fine. But there has to be a real identity behind that account,” she said. As an alternative to small fines, she proposed stronger market sanctions against platforms that repeatedly fail to curb abuse, up to exclusion from major markets. Recent criminal probes into major social networks over alleged failures to prevent sexualised deepfakes of children have added urgency to such proposals. [7],[5]
Advocates and policymakers point to a string of domestic cases that illustrate how AI tools have expanded avenues for harassment, from sharing manipulated images of teenagers to orchestrated campaigns of hate. Save the Children and other organisations have called for clearer statutes, compulsory digital education and stronger enforcement to protect minors and adults alike. [4],[3]
The campaigner insists regulation and free speech can coexist, saying public safety requires accountability online in the same way it does offline: “It’s impossible to think that a man on the street could shout that they’ll rape you and nothing happens, but that’s what we’re seeing online,” she said. Policymakers now face the task of balancing civil liberties with new forms of harm driven by AI-enabled tools and platforms’ global reach. [6],[5]
Source Reference Map
Inspired by headline at: [1]
Sources by paragraph:
Source: Noah Wire Services
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score: 7
Notes:
The article was published on February 27, 2026, and references events up to that date. However, the narrative closely mirrors previous reports from mid-2025, particularly concerning AI-generated deepfake images in Spain. For instance, a July 2025 report highlighted similar issues involving AI-manipulated images of minors. ([theguardian.com](https://www.theguardian.com/world/2025/jul/27/spanish-teenager-investigated-ai-generated-nude-videos?utm_source=openai)) This overlap raises concerns about the originality and freshness of the content. The article also cites sources from 2023 and 2024, which may not reflect the most current developments. Given these factors, the freshness score is reduced.
Quotes check
Score: 6
Notes:
The article includes direct quotes from Carla Galeote, a Spanish women’s rights activist. However, these quotes cannot be independently verified through the provided sources. Without access to the original interview or statement, their authenticity remains uncertain, which diminishes the credibility of the quoted material.
Source reliability
Score: 5
Notes:
The article is sourced from KFGO, a regional radio station in Fargo, North Dakota. While KFGO may provide local news coverage, it is not a major international news organisation. The reliance on a single, less widely recognised source raises concerns about the reliability and comprehensiveness of the information presented.
Plausibility check
Score: 7
Notes:
The article discusses AI-generated deepfake images in Spain, a topic that has been covered in earlier reporting. However, the specific details and quotes attributed to Carla Galeote cannot be independently verified, which raises questions about the accuracy of the claims and leaves the plausibility of the narrative uncertain.
Overall assessment
Verdict (FAIL, OPEN, PASS): FAIL
Confidence (LOW, MEDIUM, HIGH): MEDIUM
Summary:
The article presents a narrative that closely mirrors previous reports from mid-2025 regarding AI-generated deepfake images in Spain. The quotes attributed to Carla Galeote cannot be independently verified, and the reliance on a single, less widely recognised source raises concerns about the reliability and comprehensiveness of the information. Given these issues, the content does not meet the necessary standards for publication.