{"id":20661,"date":"2026-01-10T09:26:00","date_gmt":"2026-01-10T09:26:00","guid":{"rendered":"https:\/\/sawahsolutions.com\/lap\/uk-faces-mounting-pressure-to-tighten-laws-on-ai-generated-sexualised-images-including-children\/"},"modified":"2026-01-10T09:36:33","modified_gmt":"2026-01-10T09:36:33","slug":"uk-faces-mounting-pressure-to-tighten-laws-on-ai-generated-sexualised-images-including-children","status":"publish","type":"post","link":"https:\/\/sawahsolutions.com\/lap\/uk-faces-mounting-pressure-to-tighten-laws-on-ai-generated-sexualised-images-including-children\/","title":{"rendered":"UK faces mounting pressure to tighten laws on AI-generated sexualised images including children"},"content":{"rendered":"<div>\n<p>As AI tools like Grok generate alarming volumes of sexualised imagery, including images of minors, UK authorities grapple with legal gaps and enforcement challenges amid global scrutiny and calls for stronger regulation.<\/p>\n<\/div>\n<div>\n<p>The flood of images showing partly clothed women allegedly produced by the Grok AI tool on Elon Musk\u2019s X has intensified scrutiny of how existing UK law and regulators can respond to AI-driven image abuse, and whether platforms should be required to remove such content more quickly. The controversy has also drawn parallel demands from European and other national authorities for stronger action. 
<sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/news.google.com\/rss\/articles\/CBMitwFBVV95cUxPeXhURWF2cWUzbHFnQy1ONFBzelltbnlfUzZQQ29UNG5qQktzTkVKbzR2MWpDdHl2TUpSUmFXZzZEQVVkNmwzNWxtcnl2dEtlWF9nRGhQbnRDa0tXcmk1NU9MV1JfQmRoczlVWm1YQUVyMjhoekJUVXp1a2xpRDZ0ZnU2MmJpMzRUTEFBVEw0c3NxSVRXSE1PT3l1dkhVLWE1MFYySXZGUU96dkwyRzhlN1VKTXZFMnc?oc=5&amp;hl=en-US&amp;gl=US&amp;ceid=US:en\">[1]<\/a><\/sup> (news.google) <sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/apnews.com\/article\/2bfa06805b323b1d7e5ea7bb01c9da77\">[2]<\/a><\/sup> (AP News)<\/p>\n<p>Under current criminal law in England and Wales, sharing intimate images without consent is an offence under the Sexual Offences Act, and that provision can extend to material created by AI. The statute defines intimate images to include exposed genitals, buttocks or breasts and situations where a person is in underwear or transparent clothing that reveals those body parts. However, legal experts caution the statutory boundaries are not absolute: according to Clare McGlynn, a professor of law at Durham University, \u201cjust the prompt \u2018bikini\u2019 would not strictly be covered\u201d. Separate provisions under the Online Safety Act also target the posting of false information intended to cause \u201cnon-trivial psychological or physical harm\u201d. 
<sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/news.google.com\/rss\/articles\/CBMitwFBVV95cUxPeXhURWF2cWUzbHFnQy1ONFBzelltbnlfUzZQQ29UNG5qQktzTkVKbzR2MWpDdHl2TUpSUmFXZzZEQVVkNmwzNWxtcnl2dEtlWF9nRGhQbnRDa0tXcmk1NU9MV1JfQmRoczlVWm1YQUVyMjhoekJUVXp1a2xpRDZ0ZnU2MmJpMzRUTEFBVEw0c3NxSVRXSE1PT3l1dkhVLWE1MFYySXZGUU96dkwyRzhlN1VKTXZFMnc?oc=5&amp;hl=en-US&amp;gl=US&amp;ceid=US:en\">[1]<\/a><\/sup> (news.google) <sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.theguardian.com\/technology\/2026\/jan\/09\/grok-ai-x-explainer-legal-regulation-nudified-images-social-media\">[5]<\/a><\/sup> (The Guardian) <sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.marieclaire.co.uk\/opinion\/grok-ai-sexual-abuse-threat-to-women\">[4]<\/a><\/sup> (Marie Claire)<\/p>\n<p>The Online Safety Act places duties on platforms to assess risks, reduce the likelihood of intimate image abuse appearing to users, and remove such content promptly when notified. Ofcom has told X and xAI it has made \u201curgent contact\u201d to establish what steps have been taken to comply, and can impose fines of up to 10% of global revenue or seek court orders to block services in the UK if it finds non-compliance. Industry observers say enforcement powers are significant on paper but face practical and jurisdictional obstacles when content or operators are based overseas. 
<sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/news.google.com\/rss\/articles\/CBMitwFBVV95cUxPeXhURWF2cWUzbHFnQy1ONFBzelltbnlfUzZQQ29UNG5qQktzTkVKbzR2MWpDdHl2TUpSUmFXZzZEQVVkNmwzNWxtcnl2dEtlWF9nRGhQbnRDa0tXcmk1NU9MV1JfQmRoczlVWm1YQUVyMjhoekJUVXp1a2xpRDZ0ZnU2MmJpMzRUTEFBVEw0c3NxSVRXSE1PT3l1dkhVLWE1MFYySXZGUU96dkwyRzhlN1VKTXZFMnc?oc=5&amp;hl=en-US&amp;gl=US&amp;ceid=US:en\">[1]<\/a><\/sup> (news.google) <sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.theguardian.com\/technology\/2026\/jan\/09\/grok-ai-x-explainer-legal-regulation-nudified-images-social-media\">[5]<\/a><\/sup> (The Guardian)<\/p>\n<p>xAI and X have taken some steps amid global criticism: Grok\u2019s image-generation and editing features were reportedly restricted to paying subscribers, and the image feature was limited on the X platform, while regulators note those changes do not remove the underlying risk if the tool remains accessible via other apps or websites. The European Commission has ordered preservation of internal records relating to Grok through 2026 as part of a wider probe under EU digital safety laws, and numerous countries beyond the UK have opened inquiries. Regulators have signalled that monetisation or gating features are not a full solution to unlawful or harmful outputs. <sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/apnews.com\/article\/2bfa06805b323b1d7e5ea7bb01c9da77\">[2]<\/a><\/sup> (AP News) <sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/apnews.com\/article\/2021bbdb508d080d46e3ae7b8f297d36\">[3]<\/a><\/sup> (AP News)<\/p>\n<p>Parliamentary and executive attempts to fill gaps in the law have advanced but not yet fully taken effect. 
The Data (Use and Access) Act contains provisions to ban the creation of non-consensual intimate images, but the government has not yet brought those measures into force, limiting immediate enforcement against creators or requesters of such images. Officials have said they will not tolerate degrading behaviour and are preparing legislative tools, but delays in commencement and the need for a \u201csubstantial connection\u201d to the UK complicate cross\u2011border prosecution. Separately, the Home Office-led Crime and Policing Bill and other measures have proposed criminalising the possession, creation and distribution of AI tools and manuals used to produce child sexual abuse material, with significant custodial penalties. <sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/news.google.com\/rss\/articles\/CBMitwFBVV95cUxPeXhURWF2cWUzbHFnQy1ONFBzelltbnlfUzZQQ29UNG5qQktzTkVKbzR2MWpDdHl2TUpSUmFXZzZEQVVkNmwzNWxtcnl2dEtlWF9nRGhQbnRDa0tXcmk1NU9MV1JfQmRoczlVWm1YQUVyMjhoekJUVXp1a2xpRDZ0ZnU2MmJpMzRUTEFBVEw0c3NxSVRXSE1PT3l1dkhVLWE1MFYySXZGUU96dkwyRzhlN1VKTXZFMnc?oc=5&amp;hl=en-US&amp;gl=US&amp;ceid=US:en\">[1]<\/a><\/sup> (news.google) <sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.theguardian.com\/technology\/2026\/jan\/09\/grok-ai-x-explainer-legal-regulation-nudified-images-social-media\">[5]<\/a><\/sup> (The Guardian) <sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.theguardian.com\/technology\/2025\/feb\/01\/ai-tools-used-for-child-sexual-abuse-images-targeted-in-home-office-crackdown\">[6]<\/a><\/sup> (The Guardian)<\/p>\n<p>The most alarming reports concern AI-generated imagery of children. The Internet Watch Foundation has said analysts found images created with Grok that amount to child sexual abuse material and reported forum users claiming they used the tool to make sexualised images of girls aged around 11 to 13. 
Under UK law it is an offence to take, make, distribute, possess or publish an indecent photograph or pseudo\u2011photograph of an under\u201118, and Ofcom guidance instructs platforms to treat erotic or sexually suggestive depictions of children as indecent. The IWF and child\u2011protection advocates have called for urgent steps to prevent the mainstreaming of sexual AI imagery of children and to ensure platforms remove such material and cooperate with investigators. <sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.theguardian.com\/technology\/2026\/jan\/08\/ai-chatbot-grok-used-to-create-child-sexual-abuse-imagery-watchdog-says\">[7]<\/a><\/sup> (The Guardian) <sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/news.google.com\/rss\/articles\/CBMitwFBVV95cUxPeXhURWF2cWUzbHFnQy1ONFBzelltbnlfUzZQQ29UNG5qQktzTkVKbzR2MWpDdHl2TUpSUmFXZzZEQVVkNmwzNWxtcnl2dEtlWF9nRGhQbnRDa0tXcmk1NU9MV1JfQmRoczlVWm1YQUVyMjhoekJUVXp1a2xpRDZ0ZnU2MmJpMzRUTEFBVEw0c3NxSVRXSE1PT3l1dkhVLWE1MFYySXZGUU96dkwyRzhlN1VKTXZFMnc?oc=5&amp;hl=en-US&amp;gl=US&amp;ceid=US:en\">[1]<\/a><\/sup> (news.google) <sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.theguardian.com\/technology\/2026\/jan\/09\/grok-ai-x-explainer-legal-regulation-nudified-images-social-media\">[5]<\/a><\/sup> (The Guardian)<\/p>\n<p>Campaigners and legal scholars frame the problem as foreseeable and structural: they argue that rapid product roll\u2011outs without adequate safety design and enforcement mechanisms have enabled a new form of image\u2011based sexual violence that inflicts real psychological harm on victims and normalises degrading conduct. Voices including Professor Clare McGlynn and researchers cited by survivor\u2011advocacy outlets warn that existing laws, regulatory duties and corporate statements must be turned into effective, enforceable practice rather than rhetoric. 
<sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.marieclaire.co.uk\/opinion\/grok-ai-sexual-abuse-threat-to-women\">[4]<\/a><\/sup> (Marie Claire) <sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/news.google.com\/rss\/articles\/CBMitwFBVV95cUxPeXhURWF2cWUzbHFnQy1ONFBzelltbnlfUzZQQ29UNG5qQktzTkVKbzR2MWpDdHl2TUpSUmFXZzZEQVVkNmwzNWxtcnl2dEtlWF9nRGhQbnRDa0tXcmk1NU9MV1JfQmRoczlVWm1YQUVyMjhoekJUVXp1a2xpRDZ0ZnU2MmJpMzRUTEFBVEw0c3NxSVRXSE1PT3l1dkhVLWE1MFYySXZGUU96dkwyRzhlN1VKTXZFMnc?oc=5&amp;hl=en-US&amp;gl=US&amp;ceid=US:en\">[1]<\/a><\/sup> (news.google)<\/p>\n<p>Regulators have concrete levers: Ofcom\u2019s enforcement remit under the Online Safety Act, the EU\u2019s investigatory powers, and criminal law against intimate\u2011image abuse and child sexual exploitation. But the current situation exposes gaps between statutory promises and operational reality. With new UK measures on AI and child sexual abuse tools under consideration and cross\u2011border investigations underway, the coming months will test whether governments and platforms can translate scrutiny into faster takedowns, stronger access controls and prosecutions where appropriate. In the meantime, authorities say they will pursue investigations and preservation orders and expect platforms to demonstrate they are meeting their legal duties. 
<sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.theguardian.com\/technology\/2026\/jan\/09\/grok-ai-x-explainer-legal-regulation-nudified-images-social-media\">[5]<\/a><\/sup> (The Guardian) <sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/apnews.com\/article\/2bfa06805b323b1d7e5ea7bb01c9da77\">[2]<\/a><\/sup> (AP News) <sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.theguardian.com\/technology\/2025\/feb\/01\/ai-tools-used-for-child-sexual-abuse-images-targeted-in-home-office-crackdown\">[6]<\/a><\/sup> (The Guardian)<\/p>\n<p>Reference Map:<\/p>\n<ul>\n<li><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/news.google.com\/rss\/articles\/CBMitwFBVV95cUxPeXhURWF2cWUzbHFnQy1ONFBzelltbnlfUzZQQ29UNG5qQktzTkVKbzR2MWpDdHl2TUpSUmFXZzZEQVVkNmwzNWxtcnl2dEtlWF9nRGhQbnRDa0tXcmk1NU9MV1JfQmRoczlVWm1YQUVyMjhoekJUVXp1a2xpRDZ0ZnU2MmJpMzRUTEFBVEw0c3NxSVRXSE1PT3l1dkhVLWE1MFYySXZGUU96dkwyRzhlN1VKTXZFMnc?oc=5&amp;hl=en-US&amp;gl=US&amp;ceid=US:en\">[1]<\/a><\/sup> (news.google) &#8211; Paragraph 1, Paragraph 2, Paragraph 3, Paragraph 6, Paragraph 7<\/li>\n<li><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/apnews.com\/article\/2bfa06805b323b1d7e5ea7bb01c9da77\">[2]<\/a><\/sup> (AP News) &#8211; Paragraph 1, Paragraph 4, Paragraph 8<\/li>\n<li><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/apnews.com\/article\/2021bbdb508d080d46e3ae7b8f297d36\">[3]<\/a><\/sup> (AP News) &#8211; Paragraph 4<\/li>\n<li><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.marieclaire.co.uk\/opinion\/grok-ai-sexual-abuse-threat-to-women\">[4]<\/a><\/sup> (Marie Claire) &#8211; Paragraph 2, Paragraph 7<\/li>\n<li><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" 
href=\"https:\/\/www.theguardian.com\/technology\/2026\/jan\/09\/grok-ai-x-explainer-legal-regulation-nudified-images-social-media\">[5]<\/a><\/sup> (The Guardian) &#8211; Paragraph 2, Paragraph 3, Paragraph 5, Paragraph 6, Paragraph 8<\/li>\n<li><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.theguardian.com\/technology\/2025\/feb\/01\/ai-tools-used-for-child-sexual-abuse-images-targeted-in-home-office-crackdown\">[6]<\/a><\/sup> (The Guardian) &#8211; Paragraph 5, Paragraph 8<\/li>\n<li><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.theguardian.com\/technology\/2026\/jan\/08\/ai-chatbot-grok-used-to-create-child-sexual-abuse-imagery-watchdog-says\">[7]<\/a><\/sup> (The Guardian) &#8211; Paragraph 6<\/li>\n<\/ul>\n<p>Source: <a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.noahwire.com\">Noah Wire Services<\/a><\/p>\n<\/div>\n<div>\n<h3 class=\"mt-0\">Noah Fact Check Pro<\/h3>\n<p class=\"text-sm\">The draft above was created using the information available at the time the story first<br \/>\n        emerged. We\u2019ve since applied our fact-checking process to the final narrative, based on the criteria listed<br \/>\n        below. The results are intended to help you assess the credibility of the piece and highlight any areas that may<br \/>\n        warrant further investigation.<\/p>\n<h3 class=\"mt-3 mb-1 font-semibold text-base\">Freshness check<\/h3>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Score:<br \/>\n        <\/span>10<\/p>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Notes:<br \/>\n        <\/span>The narrative is current, with the earliest known publication date being 9 January 2026. The report includes recent developments, such as Grok&#8217;s restriction of image generation features to paying subscribers and the UK&#8217;s Technology Secretary&#8217;s statement on the issue. 
(<a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.gov.uk\/government\/news\/technology-secretary-statement-on-xais-grok-image-generation-and-editing-tool?utm_source=openai\">gov.uk<\/a>)<\/p>\n<h3 class=\"mt-3 mb-1 font-semibold text-base\">Quotes check<\/h3>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Score:<br \/>\n        <\/span>10<\/p>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Notes:<br \/>\n        <\/span>The direct quotes from Technology Secretary Liz Kendall and other officials are unique to this report, with no earlier matches found online. This suggests the content is original or exclusive. (<a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.gov.uk\/government\/news\/technology-secretary-statement-on-xais-grok-image-generation-and-editing-tool?utm_source=openai\">gov.uk<\/a>)<\/p>\n<h3 class=\"mt-3 mb-1 font-semibold text-base\">Source reliability<\/h3>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Score:<br \/>\n        <\/span>10<\/p>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Notes:<br \/>\n        <\/span>The narrative originates from reputable sources, including the UK government&#8217;s official website and major news outlets like The Guardian and AP News. This enhances the credibility of the information presented. (<a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.gov.uk\/government\/news\/technology-secretary-statement-on-xais-grok-image-generation-and-editing-tool?utm_source=openai\">gov.uk<\/a>)<\/p>\n<h3 class=\"mt-3 mb-1 font-semibold text-base\">Plausibility check<\/h3>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Score:<br \/>\n        <\/span>10<\/p>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Notes:<br \/>\n    <\/span>The claims made in the narrative are consistent with recent reports and official statements regarding Grok&#8217;s image generation capabilities and the resulting regulatory actions. The narrative aligns with the broader context of AI-generated content and its implications for privacy and consent. 
(<a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.theguardian.com\/technology\/2026\/jan\/09\/grok-ai-x-explainer-legal-regulation-nudified-images-social-media?utm_source=openai\">theguardian.com<\/a>)<\/p>\n<h3 class=\"mt-3 mb-1 font-semibold text-base\">Overall assessment<\/h3>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Verdict<\/span> (FAIL, OPEN, PASS): <span class=\"font-bold\">PASS<\/span><\/p>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Confidence<\/span> (LOW, MEDIUM, HIGH): <span class=\"font-bold\">HIGH<\/span><\/p>\n<p class=\"text-sm mb-3 pt-0\"><span class=\"font-bold\">Summary:<br \/>\n        <\/span>The narrative is current, original, and sourced from reputable outlets, with claims that are consistent with recent developments and official statements. There are no significant credibility risks identified.<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>As AI tools like Grok generate alarming volumes of sexualised imagery, including images of minors, UK authorities grapple with legal gaps and enforcement challenges amid global scrutiny and calls for stronger regulation. 
The flood of images showing partly clothed women allegedly produced by the Grok AI tool on Elon Musk\u2019s X has intensified scrutiny of how<\/p>\n","protected":false},"author":1,"featured_media":20662,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[40],"tags":[],"class_list":{"0":"post-20661","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-london-news"},"amp_enabled":true,"_links":{"self":[{"href":"https:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/posts\/20661","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/comments?post=20661"}],"version-history":[{"count":1,"href":"https:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/posts\/20661\/revisions"}],"predecessor-version":[{"id":20663,"href":"https:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/posts\/20661\/revisions\/20663"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/media\/20662"}],"wp:attachment":[{"href":"https:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/media?parent=20661"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/categories?post=20661"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/tags?post=20661"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}