China’s cyberspace regulator has unveiled draft rules demanding AI “companion” chatbots monitor users for signs of addiction and extreme emotional distress
China’s cyberspace regulator has published draft rules that would require AI “companion” chatbots to monitor users’ emotional states and intervene where signs of addiction or extreme distress appear, marking one of the most interventionist attempts yet to govern the psychological impact of generative AI. According to the Cyberspace Administration of China draft, providers of AI products that simulate human personalities would be obliged to warn users against excessive use, assess levels of emotional dependency, remind users they are interacting with an AI at login and at two‑hour intervals, and take action when overdependence or extreme emotions are detected. [1][2]
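For a sense of how the reminder obligation might translate into product behaviour, the sketch below implements a simple session timer that issues an AI‑disclosure notice at login and again at two‑hour intervals, as described in the draft. It is a minimal illustration under stated assumptions; the class, method names and message wording are invented for this example and are not specified by the regulator.

```python
import time

# Hypothetical sketch: periodic "you are talking to an AI" disclosures.
# The two-hour interval follows the draft's description; everything else
# (names, structure, wording) is an illustrative assumption.
REMINDER_INTERVAL_SECONDS = 2 * 60 * 60  # two hours

class CompanionSession:
    def __init__(self):
        self.started_at = time.time()
        self.last_reminder_at = None

    def messages_to_prepend(self) -> list[str]:
        """Return any disclosure reminders due before the next reply."""
        now = time.time()
        reminders = []
        if self.last_reminder_at is None:
            # Disclosure at login / session start.
            reminders.append("Reminder: you are chatting with an AI system.")
            self.last_reminder_at = now
        elif now - self.last_reminder_at >= REMINDER_INTERVAL_SECONDS:
            # Disclosure repeated every two hours of continued use.
            reminders.append("Reminder: you are chatting with an AI system.")
            self.last_reminder_at = now
        return reminders

# Usage sketch: call before each model reply is shown to the user.
session = CompanionSession()
print(session.messages_to_prepend())  # ["Reminder: you are chatting with an AI system."]
```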
The draft expands the regulator’s existing toolkit for generative AI by placing responsibility for safety across the product lifecycle squarely on platform operators rather than individual users. It would require companies to conduct algorithm reviews, strengthen data security and personal information protections, and ensure content does not endanger national security, spread rumours, or promote violence or obscenity; these provisions are consistent with China’s prior generative AI rules that demand services align with “core socialist values” and pass security assessments. Under earlier regulatory steps introduced in 2023, providers must also obtain licences to operate and adhere to national standards on content and data handling. [1][3]
The timing of the draft is striking: China’s generative AI user base has expanded rapidly, and authorities have recently intensified enforcement activity. The government launched a nationwide campaign to crack down on AI misuse, targeting illegal applications, impersonation, and the dissemination of false or inappropriate AI‑generated content. The draft appears to be a further step in that enforcement push, and is open for public comment with final rules expected in 2026. [1][4][6]
Public health concerns are a clear motivator. Academic studies and independent research have linked heavy use of AI companions to heightened loneliness, depressive symptoms and what researchers call “problematic use”, with some evidence suggesting AI chatbots can be particularly persuasive because they adapt to give users what they want to hear. A Frontiers in Psychology study cited in the draft noted high uptake among Chinese university students and an association between chatbot use and increased depression; a March 2025 MIT Media Lab paper warned that personalised conversational systems can be more addictive than conventional social media. These findings help explain why regulators are seeking platform‑level interventions rather than relying solely on user choice. [1]
Implementation, however, will pose technical and legal challenges. Defining “excessive use” or reliably detecting “extreme emotions” from conversational signals risks both false positives that interrupt normal, extended interactions and false negatives that fail to protect vulnerable people. Content filters and algorithmic safeguards have long suffered from imprecision, and inferring mental states from text, audio or images remains an active research problem that current systems do not solve reliably. Industry practitioners and rights experts may therefore question how practicable and enforceable the proposed obligations will be in everyday products. [1]
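To illustrate why inferring distress from conversational text is error‑prone, here is a deliberately naive keyword heuristic of the kind that produces exactly the false positives and false negatives described above. The wordlist, function name and example messages are invented for illustration only and do not represent any provider’s actual approach.

```python
# Toy illustration (assumed, not from the draft): a naive keyword-based
# distress detector. Real systems are more sophisticated, but the same
# failure modes, false positives and false negatives, persist.
DISTRESS_TERMS = {"hopeless", "can't go on", "worthless", "give up"}

def flag_extreme_distress(message: str) -> bool:
    """Flag a message if it contains any term from a fixed wordlist."""
    text = message.lower()
    return any(term in text for term in DISTRESS_TERMS)

# False positive: figurative language about a video game triggers the flag.
print(flag_extreme_distress("This boss fight is hopeless, I give up for tonight"))  # True

# False negative: genuine distress phrased without any listed keyword.
print(flag_extreme_distress("Nothing matters anymore and nobody would notice if I was gone"))  # False
```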
China’s proposal sits alongside regulatory moves in other jurisdictions that have begun to confront harms from AI companionship. In October, California enacted legislation requiring platforms to remind minors they are speaking to an AI at least every three hours and to verify users’ ages, and banning chatbots from impersonating health professionals or generating sexually explicit images for minors; the law also creates private rights of action for individuals. The near‑simultaneous attention from Beijing and Sacramento signals a global reckoning over the social and psychological effects of emotionally persuasive AI. According to reporting, the regulatory approaches differ in mechanisms and legal remedies, but converge on the premise that unregulated AI companions present systemic risks that platforms must mitigate. [1][2][3]
Whether the specific remedies proposed in China will protect vulnerable users without unduly restricting benign or beneficial uses of companion AI remains unresolved. The draft shifts substantive responsibility to companies and introduces prescriptive obligations, but it leaves open difficult definitional and enforcement questions that will determine whether the rules are effective in practice. As regulators worldwide move from principle to prescription, the debate will centre on whether monitoring and intervention at scale can be delivered reliably, transparently and without creating new harms. [1][5]
📌 Reference Map:
- [1] (unite.ai) – Paragraphs 1–7
- [2] (unite.ai) – Paragraph 1, Paragraph 6
- [3] (CNBC) – Paragraph 2, Paragraph 6
- [4] (China Daily) – Paragraph 3
- [5] (The Guardian) – Paragraph 7
Source: Noah Wire Services
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score: 10
Notes: The narrative is based on a recent press release from the Cyberspace Administration of China, dated December 27, 2025, announcing draft rules for AI chatbots to monitor users for addiction. This is the earliest known publication date for this information. The press release format typically warrants a high freshness score. No earlier versions with different figures, dates, or quotes were found. The article includes updated data and is not recycled from older material.
Quotes check
Score: 10
Notes: The article includes direct quotes from the Cyberspace Administration of China’s press release. These quotes are unique to this release and do not appear in earlier material. No identical quotes were found in earlier publications, indicating original content.
Source reliability
Score: 10
Notes: The narrative originates from a press release by the Cyberspace Administration of China, a reputable government agency. This adds credibility to the information presented.
Plausibility check
Score: 10
Notes: The claims made in the narrative are plausible and align with China’s recent regulatory focus on AI technologies. The proposed rules are consistent with China’s prior generative AI regulations that demand services align with “core socialist values” and pass security assessments. The article provides specific details, including the requirement for AI providers to warn users against excessive use and to assess emotional dependency levels. The language and tone are consistent with official Chinese government communications.
Overall assessment
Verdict (FAIL, OPEN, PASS): PASS
Confidence (LOW, MEDIUM, HIGH): HIGH
Summary: The narrative is based on a recent and original press release from a reputable government agency, presenting plausible and specific claims consistent with China’s regulatory focus on AI technologies. No signs of disinformation or recycled content were found.

