
Published 11:06 10 Feb 2026 GMT

China proposes 'world’s toughest rules' targeting AI that promotes suicide and violence

Chatbots could be removed from app stores if they don't comply

Rebekah Jordan

Growing frustration among users has centred on the safety measures and content restrictions recently introduced by AI platforms, restrictions that followed several deeply troubling incidents, including the tragic suicide of a teenager.

Multiple AI chatbot platforms, including Character.AI, have been involved in similar heartbreaking cases.

The terms of any settlements have yet to be disclosed, but court filings reveal that companies have resolved similar legal claims brought by parents in Colorado, New York and Texas over alleged harm to minors stemming from chatbot interactions.

China has unveiled legislation to prevent AI chatbots from emotionally manipulating users (SOPA Images/Contributor/Getty)

OpenAI previously announced stricter guidelines for ChatGPT aimed at preventing such tragedies, after the world's most popular chatbot became the subject of multiple lawsuits tied to outputs allegedly linked to child suicide and murder-suicide incidents.

Now, in a dramatic move, China has unveiled legislation aimed at preventing AI chatbots from emotionally manipulating users.

Speaking to CNBC, Winston Ma, adjunct professor at NYU School of Law, said the 'planned rules would mark the world’s first attempt to regulate AI with human or anthropomorphic characteristics' to tackle AI-enabled suicide, self-harm and violence.

On Saturday, China’s Cyberspace Administration released the proposed regulations, which would apply to any AI service operating in China that uses text, images, audio, video, or 'other means' to mimic natural human interaction.

Chatbots would be strictly banned from creating content that encourages suicide, self-harm, violence, obscenity, gambling or criminal activity, as well as from slandering or insulting users or using any form of psychological manipulation.

Human intervention would be mandatory the moment suicide is mentioned in a conversation (Witthaya Prasongsin/Getty)

Under the proposed framework, human intervention would be mandatory the moment suicide is mentioned in a conversation. Minors and elderly users registering for chatbot services would need to provide guardian contacts, who would be notified right away if suicide or self-harm becomes a topic of discussion.

The rules also target 'emotional traps', with chatbots restricted from misleading users into making 'unreasonable decisions.'

In contrast to ChatGPT, which reportedly allowed harmful interactions to persist, the Chinese rules would trigger intrusive pop-up alerts whenever usage exceeds two hours.

Any AI company violating these regulations could see app stores forced to pull its chatbots from the Chinese market entirely, which could be bad news for certain AI giants hoping to dominate China's market, according to Business Research Insights.

If you or someone you know needs mental health assistance right now, call the National Suicide Prevention Helpline on 1-800-273-TALK (8255). The Helpline is a free, confidential crisis hotline available to everyone 24 hours a day, seven days a week.

Featured Image Credit: fotograzia / Getty
Topics: AI, China
