Police and school authorities are investigating reports that explicit AI‑generated images were circulated among pupils at the Royal School Armagh, highlighting growing concern over emerging AI tools and the adequacy of online safeguarding.

Police in Northern Ireland have opened an inquiry after reports that explicit AI‑generated images were shared among pupils at the Royal School Armagh in County Armagh, prompting engagement between officers, school leaders and parents. According to reporting by The Irish News, the incident has been referred to the appropriate authorities and an investigation is under way. [1][2][5]

Graham Montgomery, headmaster of the Royal School Armagh, said that “a matter involving some of our pupils was brought to our attention and referred to the appropriate authorities”. He added that the school has “robust policies and procedures in place and where concerns are raised, we seek and follow advice from educational and other statutory authorities and take all appropriate action as advised”. “We will continue to do that and the safety and well‑being of all our pupils remains our highest priority,” he said. [1]

A Police Service of Northern Ireland (PSNI) spokesperson told reporters: “Police have received a report that AI‑generated explicit images had been shared amongst pupils at a County Armagh school. An investigation is under way and local officers are also engaging with the appropriate school authorities and the parents/guardians of the pupils affected.” The statement indicates that a criminal inquiry is proceeding alongside safeguarding work with families. [1][2]

The case comes amid wider alarm over social media tools that can generate sexualised imagery. Politicians in Northern Ireland, London and Dublin have criticised the social media platform X after reports that its AI chatbot, Grok, was being used to create sexualised images of women and children. According to coverage in The Irish News, UK Justice Secretary David Lammy has moved to bring forward legislation making it illegal to generate sexual deepfake images without consent. Industry and policy debate has intensified as lawmakers seek to close gaps in existing offences. [1][6]

Local organisations and schools have reported similar harms. Tír‑na‑nOg GAA club in Portadown warned parents after a young person was targeted by blackmailers who superimposed the youngster’s face onto AI‑generated explicit images and threatened to distribute them unless money was paid. Regional reporting also notes that other grammar schools in County Armagh have recently dealt with the sharing of sexualised AI deepfake images among pupils, underlining the technology’s reach and the risk of exploitation. [3][4]

At a regulatory level, the Irish Attorney General is examining whether existing laws adequately criminalise non‑consensual AI‑generated intimate images and child sexual abuse material (CSAM), according to The Irish Times. The Safeguarding Board for Northern Ireland has published guidance clarifying that AI‑generated CSAM is illegal regardless of how photorealistic it is, setting out organisational duties to report such material and highlighting motives such as blackmail and financial gain. These developments point to a patchwork of legal and statutory responses being mobilised on both sides of the Irish border. [6][7]

School leaders and police said they are following statutory advice and safeguarding protocols as the case proceeds, with families being supported while investigators establish the facts. The incident has focused attention on the intersection of emerging AI tools, platform responsibility and the need for clearer protections for children online. [1][2][4][7]

📌 Reference Map:

  • [1] (The Irish News) – Paragraph 1, Paragraph 2, Paragraph 3, Paragraph 4, Paragraph 7
  • [2] (The Irish News summary) – Paragraph 1, Paragraph 3, Paragraph 7
  • [3] (ArmaghI / Tír‑na‑nOg report) – Paragraph 5
  • [4] (ArmaghI / Royal School Armagh follow‑up) – Paragraph 5, Paragraph 7
  • [5] (Irish Examiner) – Paragraph 1
  • [6] (The Irish Times) – Paragraph 4, Paragraph 6
  • [7] (Safeguarding Board for Northern Ireland guidance) – Paragraph 6, Paragraph 7

Source: Noah Wire Services

Noah Fact Check Pro

The draft above was created using the information available at the time the story first
emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed
below. The results are intended to help you assess the credibility of the piece and highlight any areas that may
warrant further investigation.

Freshness check

Score:
8

Notes:
The article was published on January 19, 2026, and reports on a recent incident at the Royal School Armagh. Similar incidents involving AI-generated explicit images in schools have been reported in Northern Ireland, such as the case involving Tír-na-nOg GAA in Portadown on January 7, 2026. ([irishnews.com](https://www.irishnews.com/news/northern-ireland/gaa-club-issues-warning-after-young-person-targeted-in-ai-generated-image-blackmail-HDTF6J3PPJDNZE7NAAINGK2SOQ/?utm_source=openai)) However, this specific incident appears to be newly reported. The article includes direct quotes from the headmaster and a PSNI spokesperson, which are consistent with other reports. The narrative does not appear to be recycled from other sources.

Quotes check

Score:
7

Notes:
The article includes direct quotes from Graham Montgomery, headmaster of the Royal School Armagh, and a PSNI spokesperson. These quotes are consistent with those found in other reputable sources reporting on the same incident. However, the earliest known usage of these quotes cannot be determined from the available information, which raises some concern about how original the content is.

Source reliability

Score:
8

Notes:
The article originates from The Irish News, a reputable news organisation in Northern Ireland. The publication is known for its coverage of local news and has a history of reporting on similar incidents. However, the article does not provide direct links to the original sources of the quotes, which makes it difficult to independently verify the information.

Plausibility check

Score:
7

Notes:
The incident described in the article is plausible, given the increasing concerns over AI-generated explicit images in schools. Similar cases have been reported in Northern Ireland, such as the Tír-na-nOg GAA incident in Portadown. However, the article lacks specific details about the incident, such as the number of pupils involved and the exact nature of the images shared, which makes it difficult to fully assess the situation.

Overall assessment

Verdict (FAIL, OPEN, PASS): PASS

Confidence (LOW, MEDIUM, HIGH): MEDIUM

Summary:
The article reports on a recent incident at the Royal School Armagh involving AI-generated explicit images shared among pupils. While the content is plausible and the source is reputable, the lack of independently verifiable quotes and specific details about the incident raises some concerns. The article does not appear to be recycled from other sources, and the content type is appropriate for a news report. Given these factors, the overall assessment is a PASS with MEDIUM confidence.


