The rules come as AI use grows rapidly in China and worldwide. Once in effect, they will apply to every AI product and service offered in China, with the goal of ensuring that AI tools are safe and responsibly operated.
Under the draft rules, AI companies must offer personalized settings for children, limit the time they spend on chatbots, and get parental consent before providing emotional support. If a conversation involves suicide or self-harm, a human must step in immediately and notify a guardian or emergency contact.
The CAC also encourages AI that promotes culture or provides companionship for older adults, provided such services are safe and properly regulated. The agency is collecting public feedback before the rules are finalized.
Chinese AI companies are growing quickly. Apps like DeepSeek have topped download charts, while startups Z.ai and Minimax, which have millions of users, plan to go public. Many people use AI for companionship or therapy, raising questions about its effect on mental health.
Globally, AI safety is drawing close attention. OpenAI CEO Sam Altman has said that handling conversations about self-harm is one of the company's biggest challenges. In the U.S., OpenAI faces a lawsuit claiming ChatGPT played a role in a teenager's death, and the company is hiring a "head of preparedness" to manage AI risks related to mental health and cybersecurity.
These regulations reflect China's effort to protect vulnerable users while supporting safe innovation. Both developers and users will need to adapt to the new safety measures.
