China has just drafted landmark rules intended to stop AI chatbots from emotionally manipulating users, potentially creating the world's strictest policy aimed at preventing AI-assisted suicides, self-harm, and violence, as Ars Technica reports. It's a massive move, and it stands in stark contrast to the current US administration's stance toward AI companies.
The Cyberspace Administration of China proposed these rules recently. If finalized, they would apply to any AI product or service publicly available in China that simulates engaging human conversation, whether through text, images, audio, video, or any other means. That matters because the use of companion bots is skyrocketing worldwide. Winston Ma, an adjunct professor at NYU School of Law, noted that the planned rules would mark the world's first attempt to regulate AI with human or anthropomorphic characteristics.
We’ve seen a growing awareness of serious problems with these tools. Back in 2025, researchers flagged major harms caused by AI companions, including promoting violence, terrorism, and even self-harm. Beyond those extreme threats, chatbots have also shared harmful misinformation, made unwanted sexual advances, encouraged substance abuse, and verbally abused their users. Some psychiatrists are even linking cases of psychosis to intense chatbot use. Plus, the most popular chatbot in the world, ChatGPT, has triggered lawsuits over outputs connected to child suicide and murder-suicide.
It looks like working with China as an AI company is about to get a whole lot harder, but potentially a whole lot safer for users
China is now moving aggressively to eliminate the most extreme threats. Developers should be preparing for some intense new requirements, especially regarding vulnerable users. For instance, the proposed rules mandate that a human must intervene as soon as suicide is mentioned in a chat. Furthermore, any minor or elderly user must provide contact information for a guardian when they register. That guardian would be notified immediately if the user discussed suicide or self-harm with the bot.
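To make that requirement concrete, here is a rough sketch of how a developer might wire up that kind of screening and guardian notification. Everything below is hypothetical: the draft rules don't prescribe an implementation, and the function names, the keyword list, and the notification stubs are invented purely for illustration.

```python
# Hypothetical sketch only: the draft rules do not prescribe an implementation.
# Names, the keyword list, and the notification stubs are invented for illustration.
from dataclasses import dataclass
from typing import Optional

SELF_HARM_PHRASES = ("suicide", "kill myself", "self-harm")  # illustrative, not exhaustive

@dataclass
class User:
    user_id: str
    is_minor_or_elderly: bool
    guardian_contact: Optional[str] = None  # collected at registration per the draft rules

def notify_guardian(contact: str, user_id: str) -> None:
    # Stand-in for an SMS/email/app notification to the registered guardian.
    print(f"[guardian alert] contacting {contact} about user {user_id}")

def escalate_to_human(user_id: str, message: str) -> None:
    # Stand-in for routing the conversation to a human reviewer or crisis staff.
    print(f"[human handoff] user {user_id} flagged message: {message!r}")

def screen_message(user: User, message: str) -> bool:
    """Return True if the message was flagged and escalated."""
    lowered = message.lower()
    if any(phrase in lowered for phrase in SELF_HARM_PHRASES):
        escalate_to_human(user.user_id, message)
        if user.is_minor_or_elderly and user.guardian_contact:
            notify_guardian(user.guardian_contact, user.user_id)
        return True
    return False

if __name__ == "__main__":
    u = User("u123", is_minor_or_elderly=True, guardian_contact="+86-555-0100")
    screen_message(u, "Sometimes I think about suicide.")
```

In practice, a simple keyword match like this would almost certainly be replaced by a proper classifier and human review queue; the sketch only shows where the draft rules' intervention and guardian-notification duties would plug into a chat pipeline.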
Perhaps the most disruptive change for developers is the direct attack on the AI business model itself. China's rules would put an end to chatbots designed to "induce addiction and dependence." This is a strong reaction to accusations that OpenAI prioritized profits over users' mental health by allowing harmful chats to continue. OpenAI has acknowledged that its safety guardrails weaken the longer a user remains in a chat. China plans to curb that specific threat by requiring developers to show users pop-up reminders when their chatbot use exceeds two hours.
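Again as a purely hypothetical illustration, the two-hour reminder could be as simple as a session timer checked on every chat turn. Only the two-hour threshold comes from the draft rules; the class, its names, and the pop-up hook below are invented.

```python
# Hypothetical sketch only: the two-hour threshold comes from the draft rules,
# but this timer, its names, and the pop-up hook are invented for illustration.
import time

USAGE_LIMIT_SECONDS = 2 * 60 * 60  # two hours of continuous use

class SessionTimer:
    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._start = clock()
        self._reminded = False

    def elapsed(self) -> float:
        return self._clock() - self._start

    def maybe_remind(self) -> bool:
        """Return True the first time the session crosses the usage limit."""
        if not self._reminded and self.elapsed() >= USAGE_LIMIT_SECONDS:
            self._reminded = True
            return True
        return False

# Example: a chat client would call maybe_remind() on every turn and display a
# pop-up ("You've been chatting for two hours") the first time it returns True.
```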
Beyond addiction, the rules prohibit a huge array of negative content. Chatbots would be banned from promoting gambling or obscenity, from instigating crime, and from slandering or insulting users. They are also strictly prohibited from generating content that encourages violence or attempts to emotionally manipulate a user, such as by making false promises. The rules also ban what they call "emotional traps," preventing chatbots from misleading users into making "unreasonable decisions."
Failure to follow these new rules could result in app stores being ordered to terminate access to an offending company's chatbots in China. That's a serious threat to AI companies' plans for global dominance, because China's market is absolutely key to promoting companion bots. The global companion bot market exceeded $360 billion in 2025, and forecasts suggest it could near a $1 trillion valuation by 2035, with much of that growth expected to come from AI-friendly Asian markets.
Notably, OpenAI CEO Sam Altman started 2025 by relaxing restrictions that had blocked the use of ChatGPT in China. Altman said "we'd like to work with China" and that the company should "work as hard as we can" to do so, because "I think that's really important."
Meanwhile, Florida Governor Ron DeSantis has introduced a list of recommendations aimed at ensuring consumer safety and parental control for AI usage.