
China proposes 'world’s toughest rules' targeting AI that promotes suicide and violence.
Growing frustration among users has centred around the safety measures and content restrictions recently introduced by AI platforms, especially following several deeply troubling incidents, including the tragic suicide of a teenager.
Multiple AI chatbot platforms, including Character.AI, have been involved in similar heartbreaking cases.
Settlement terms in those cases have yet to be disclosed, but court filings reveal that companies have resolved similar legal claims brought by parents in Colorado, New York, and Texas over alleged harm to minors stemming from chatbot interactions.

OpenAI previously announced it would implement stricter guidelines for ChatGPT aimed at preventing such tragedies after the world's most popular chatbot became the subject of multiple lawsuits tied to outputs allegedly linked to child suicide and murder-suicide incidents.
Now, in a dramatic move, China has unveiled legislation aimed at preventing AI chatbots from emotionally manipulating users.
Speaking to CNBC, Winston Ma, adjunct professor at NYU School of Law, said the 'planned rules would mark the world’s first attempt to regulate AI with human or anthropomorphic characteristics' to tackle AI-enabled suicide, self-harm and violence.
On Saturday, China’s Cyberspace Administration released the proposed regulations, which would apply to any AI services operating in China that use text, images, audio, video, or 'other means' to mimic natural human interaction.
Chatbots would be strictly banned from creating content that encourages suicide, self-harm, violent acts, obscenity, gambling, or criminal activity, from slandering or insulting users, and from using any form of psychological manipulation.

Under the proposed framework, human intervention would be mandatory the moment suicide is mentioned in a conversation. Minors and elderly users registering for chatbot services would need to provide guardian contacts, who would be notified right away if suicide or self-harm becomes a topic of discussion.
The rules also target 'emotional traps': chatbots would be barred from misleading users into making 'unreasonable decisions.'
In contrast to ChatGPT, which reportedly allowed harmful interactions to persist, the Chinese rules would require intrusive pop-up alerts whenever a user's session exceeds two hours.
Any AI company violating these regulations could have its chatbots pulled from Chinese app stores entirely, a potential setback for AI giants hoping to dominate China's market, according to Business Research Insights.
If you or someone you know needs mental health assistance right now, call National Suicide Prevention Helpline on 1-800-273-TALK (8255). The Helpline is a free, confidential crisis hotline that is available to everyone 24 hours a day, seven days a week.