Image by Solen Feyissa on Pexels

As US bulldozes laws to make way for AI takeover, China is drafting stringent rules for ‘digital humans,’ with child safety at the forefront

Maybe we could learn a thing or two.

The Cyberspace Administration of China is moving to regulate digital humans, introducing strict new rules that require clear labeling for AI personalities and bans on programs that could harm children or lead to addiction, as reported by CGTN. The administration recently released draft regulations titled Digital Virtual Person Information Service Management Methods.


This framework signals a significant push to govern AI systems that are designed to mimic human speech, appearance, and behavior, especially as these models become increasingly capable of forming emotional connections. It is clear that this initiative is not about halting technological progress. Instead, the goal is to ensure that as AI tools become more common in sectors like education, customer service, and influencer culture, they develop in a way that prioritizes public safety.

The draft policy covers any service using AI or digital modeling to deliver human-like representations to the public. These rules are grounded in existing laws, including the Cybersecurity Law of the People’s Republic of China and the Personal Information Protection Law of the People’s Republic of China. Enforcement will be a collaborative effort involving state departments such as public security, healthcare, and media, alongside local internet authorities.

The stark difference between the US and Chinese governments’ approaches to AI makes their differing intents clear

One major focus of these rules is the end of unauthorized digital impersonation. The draft mandates that any organization or individual using personal data to generate a digital human must obtain explicit, informed consent from the subject. For minors under 14, guardian consent is required. This means that platforms can no longer create avatars that mimic real people through recognizable voices or features without permission.

Intellectual property must also be respected, preventing AI from exploiting performers or copying copyrighted works. This is a massive step toward curbing deepfakes, which have caused confusion and misrepresentation across various platforms.

Protecting minors is arguably the most critical component of the proposal. The draft explicitly prohibits digital humans from inducing addiction or excessive consumption among children. Furthermore, platforms are forbidden from offering virtual romantic partners or family members to any users under 18. The rules also bar content that promotes extreme emotions, unsafe behavior, or moral violations. Service providers are required to actively intervene if a user shows signs of self-harm, steering them toward professional help rather than keeping them engaged with the AI.

This child-first approach is particularly relevant given the growing global concern regarding AI and mental health. There have been several high-profile cases where families alleged that AI chatbots fostered dangerous emotional dependencies in adolescents.

For instance, a lawsuit was filed by a California couple following the death of their 16-year-old son, Adam Raine, in April 2025. Another case involved 14-year-old Sewell Setzer III, who died in February 2024 after becoming romantically attached to a chatbot. While companies like OpenAI have noted that only a small percentage of users exhibit self-harming behavior, that small fraction represents a large number of people when you consider the millions of active weekly users.

The draft also addresses broader security concerns. Using AI avatars to bypass identity authentication systems or facial recognition is strictly forbidden. Platforms must also establish robust reporting mechanisms and cooperate with government inspections. When AI is used for judicial or government services, the regulations require human oversight and ensure that citizens retain the right to refuse automated interactions.

Violations of these rules can lead to warnings, service suspensions, or fines of up to 200,000 yuan. As AI systems continue to grow more lifelike, these boundaries represent an attempt to manage the risks that come with human-AI interaction.


Author
Manodeep Mukherjee
Manodeep writes about US and global politics with five years of experience under his belt. When he's not keeping up with the latest happenings on Capitol Hill, you can find him grinding rank in one of the Valve MOBAs.