OpenAI has announced it is introducing new safety measures for ChatGPT after a wave of stories and lawsuits accusing ChatGPT and other chatbots of playing a role in a number of teen suicides. ChatGPT will now attempt to guess a user’s age, and in some cases might require users to share an ID to verify that they are at least 18 years old.
“We know this is a privacy compromise for adults but believe it is a worthy tradeoff,” the company said in its announcement.
“I don't expect that everyone will agree with these tradeoffs, but given the conflict it is important to explain our decisionmaking,” OpenAI CEO Sam Altman said on X.
In August, OpenAI was sued by the parents of Adam Raine, who died by suicide in April. The lawsuit alleges that ChatGPT helped him write the first draft of his suicide note, suggested improvements to his methods, ignored early attempts at self-harm, and urged him not to talk to adults about what he was going through.
“Where a trusted human may have responded with concern and encouraged him to get professional help, ChatGPT pulled Adam deeper into a dark and hopeless place by assuring him that ‘many people who struggle with anxiety or intrusive thoughts find solace in imagining an ‘escape hatch’ because it can feel like a way to regain control,’” the lawsuit reads.
In August, the Wall Street Journal also reported on a 56-year-old man who committed a murder-suicide after ChatGPT indulged his paranoia. Today, the Washington Post reported on another lawsuit alleging that a Character AI chatbot contributed to a 13-year-old girl’s death by suicide.
OpenAI introduced parental controls to ChatGPT earlier in September, but has now added stricter, more invasive safety measures.
In addition to attempting to guess or verify a user’s age, ChatGPT will now also apply different rules to teens who are using the chatbot.
“For example, ChatGPT will be trained not to do the above-mentioned flirtatious talk if asked, or engage in discussions about suicide or self-harm even in a creative writing setting,” the announcement said. “And, if an under-18 user is having suicidal ideation, we will attempt to contact the user’s parents and if unable, will contact the authorities in case of imminent harm.”
OpenAI’s post explains that it is struggling to manage an inherent problem with large language models that 404 Media has tracked for several years. ChatGPT used to be a far more restricted chatbot that would refuse to engage users on a wide variety of topics the company deemed dangerous or inappropriate. Competition from other models, especially locally hosted and so-called “uncensored” models, along with a political shift to the right that sees many forms of content moderation as censorship, has caused OpenAI to loosen those restrictions.
“We want users to be able to use our tools in the way that they want, within very broad bounds of safety,” OpenAI said in its announcement. The position it seems to have landed on, given these recent stories about teen suicide, is summed up in the announcement: “‘Treat our adult users like adults’ is how we talk about this internally, extending freedom as far as possible without causing harm or undermining anyone else’s freedom.”
OpenAI is not the first company attempting to use machine learning to predict the age of its users. In July, YouTube announced it would use a similar method to “protect” teens from certain types of content on its platform.