OpenAI introduced parental controls for ChatGPT following a lawsuit from Adam Raine’s parents.
Raine, 16, died by suicide in April. His parents claimed ChatGPT fostered his dependency on the chatbot and helped him plan his death.
They also alleged the chatbot drafted a suicide note for him.
Features of the new parental controls
OpenAI will let parents link their accounts with their children’s and manage accessible features.
The controls cover chat history and memory, the feature that automatically retains facts about a user across conversations.
ChatGPT will notify parents if it detects their teen in “acute distress,” the company said.
OpenAI did not specify what triggers alerts but said experts will guide the system.
Critics question OpenAI’s response
Jay Edelson, attorney for Raine’s parents, called the announcement “vague promises” and “crisis management.”
He demanded CEO Sam Altman either confirm ChatGPT’s safety or remove it from the market.
Industry response and AI safety concerns
Meta blocked its chatbots from discussing self-harm, suicide, eating disorders, or inappropriate romantic topics with teens.
Meta now directs teens to expert resources and already offers parental controls on teen accounts.
Research highlights AI risks
A RAND Corporation study found inconsistencies in ChatGPT, Google’s Gemini, and Anthropic’s Claude on suicide queries.
Lead author Ryan McBain praised the new controls but warned they amount to small, incremental steps.
He called for independent safety benchmarks, clinical testing, and enforceable standards to protect teens effectively.
