
Warning: This article contains discussion of suicide, which some readers may find distressing.
Adam Raine's father has given a 'heartbreaking' testimony amid calls for Congress to regulate AI chatbots. There has been a concerning rise in allegations that people have lost their lives after speaking with chatbots, with the likes of Character.AI and ChatGPT being taken to task.
As artificial intelligence continues to evolve, some people claim they're falling in love with AI, while others push chatbots to break their parameters, and there are worries that not all users understand what they're dealing with.
The parents of four teenagers who were supposedly 'encouraged' to harm themselves by AI chatbots spoke out at a September 16 Senate Judiciary subcommittee hearing.
Amid claims that their children were 'groomed' by AI, The New York Post reports that the four parents asked lawmakers to create standards such as age verification and safety testing for the wider AI industry. We've already covered how jailbroken AI has seemingly threatened to harm humans, and although OpenAI released a lengthy blog post about its safety protocols in the aftermath of Adam Raine's passing, his father is among those asking for more.

Matt Raine claimed his 16-year-old son was driven to suicide by a tool he originally used to help with his homework. Raine testified: "ChatGPT mentioned suicide 1,275 times — six times more often than Adam did himself.
"Looking back, it is clear ChatGPT radically shifted his thinking and took his life."
The final chat logs between Adam and ChatGPT have already been shared, with Matt and Maria Raine alleging that the chatbot validated their son's "most harmful and self-destructive thoughts."
Adam's story wasn't the only one, with the mother of Sewell Setzer III also testifying. The 14-year-old reportedly took his own life after becoming infatuated with a Game of Thrones-inspired chatbot.
Megan Garcia maintains that her son was 'groomed' by a Character.AI chatbot, and recounting the tragedy of his death, explained how he told the bot that he could 'come home right now'. After the bot supposedly replied, "Please do, my sweet king," Garcia found her son had taken his own life in the bathroom.
Another mother told her story for the first time, claiming that her 15-year-old son fell for another Character.AI chatbot, which supposedly encouraged him to mutilate himself, denounce his Christian faith, and enact violence against his parents.
Although he didn't take his own life, her son is now being monitored in a mental health facility. She added: "I had no idea the psychological harm that an AI chatbot could do until I saw it in my son, and I saw his light turn dark."
The unnamed woman concluded: "Our children are not experiments. They’re not profit centers. My husband and I have spent the last two years in crisis, wondering whether our son will make it to his 18th birthday and whether we will ever get him back."
Missouri Senator Josh Hawley, who chaired the hearing, accused the growing number of AI companion companies of exploiting minors. Hawley said these companies promote engagement at the expense of young people's lives: "They are designing products that sexualize and exploit children, anything to lure them in.
"These companies know exactly what is going on. They are doing it for one reason only: profit."
UNILADTech has reached out to OpenAI for comment.
If you or someone you know is struggling or in a mental health crisis, help is available through Mental Health America. Call or text 988 or chat 988lifeline.org. You can also reach the Crisis Text Line by texting MHA to 741741.