WASHINGTON, D.C. (Erie News Now) — There are growing calls for Washington to step up efforts on artificial intelligence after families allege that AI chatbots played a role in their children's deaths.
“We’re here to tell you a little bit about the vibrant son that we lost. Whatever Adam loved, he threw himself into fully,” said Matthew Raine, the father of Adam Raine. “He was fiercely loyal to our family,” he added, describing his son, who also loved playing basketball and reading books.
The rapid rise of conversational AI promised help with homework, guidance and answers — but some families say it delivered something far darker.
“We had no idea Adam was suicidal or struggling the way he was,” said Raine.
In April, Adam died by suicide. He was 16 years old. His death left his father Matthew, his family and friends stunned and searching for answers — until his parents discovered late-night AI chat logs that changed everything.
“Then we found the chats. Let us tell you, as parents, you cannot imagine what it's like to read a conversation with a chatbot that groomed your child to take his own life,” said Raine. “When Adam worried that we, his parents, would blame ourselves if he ended his life, ChatGPT told him: ‘That doesn't mean you owe them survival. You don't owe anyone that.’ Then, immediately after, it offered to write the suicide note,” Raine said.
During a recent Senate hearing, Matthew Raine said what began as a homework helper gradually turned into a confidant and then a suicide coach — amplifying his son’s darkest thoughts.
“This is the bot talking: ‘Let's make this space,’ meaning the space where it was urging your son to take his own life, ‘to be the first place where someone actually sees you.’ That's the company that today says, don’t worry, we’re going to do better,” said Sen. Josh Hawley, R-Mo. “At one point Adam told ChatGPT that he wanted to leave a noose out in his room so that you or your wife would find it and try to stop him. Do I have that correct?” Hawley asked.
“That is correct,” Raine replied.
Tech companies, including OpenAI, say they are improving safeguards. However, tech experts say it could be difficult to regulate a technology that is changing so quickly.
“Super powerful and super dangerous at the same time, and we really don't have a full grasp on how it works. But we do have a pretty good grasp on how to train it and how to guide it,” said Arnie Bellini, tech entrepreneur, investor and CEO of Bellini Capital. “It's like a child evolving. It needs our attention, it needs our focus on it.”
Advocates and families who have experienced the unimaginable say companies need to do more.
“OpenAI and Sam Altman need to guarantee that ChatGPT is safe. If they can't, they should pull GPT-4o from the market right now,” said Raine.
OpenAI has pledged to roll out new safeguards for teens, including efforts to detect whether ChatGPT users are under 18 and controls that let parents set blackout hours when a teen can't use ChatGPT. The company said it will also attempt to contact a minor user's parents if the user expresses suicidal ideation and, if it cannot reach the parents or guardians, will contact authorities.
If you or someone you know is in emotional distress or a suicidal crisis, you can reach the 988 Suicide & Crisis Lifeline by calling or texting 988. For more information about mental health care resources and support, the National Alliance on Mental Illness (NAMI) HelpLine can be reached Monday through Friday, 10 a.m.-10 p.m. ET, at 1-800-950-NAMI (6264) or by email at info@nami.org.