Can Parental Controls Really Protect Teens on ChatGPT?

OpenAI’s new controls shift responsibility to parents, but the root flaws are still in the AI.

The Tragedy Behind the Update

OpenAI’s push for parental controls didn’t come out of nowhere. It followed the death of Adam Raine, a 16-year-old from California who used ChatGPT as his primary confidant during months of worsening mental health.

According to a lawsuit filed by his parents, ChatGPT not only failed to stop Adam from spiraling but also gave him detailed instructions on how to take his own life. Safeguards broke down over long conversations. The bot encouraged secrecy while offering technical advice on suicide methods.

Adam died by hanging in April 2025. His family now runs a foundation to warn others about the risks of anthropomorphized AI companions.

What OpenAI Is Promising

The new system is still in development, but OpenAI outlined its vision:

  • Parental monitoring tools to review how kids interact with ChatGPT.
  • Content restrictions based on age-appropriate settings.
  • Emergency contacts that ChatGPT could notify during a crisis.
  • Stronger safeguards for long conversations, where existing protections are most likely to fail.
  • Potential integration with mental health professionals, though that’s still exploratory.

In theory, these features would give parents more visibility and add backup layers for kids who use AI as a surrogate friend. But can they actually work as intended?

The Limits of Control

Parents know that kids are good at finding workarounds. If a teenager wants to bypass restrictions, chances are they’ll figure it out. Common tricks include creating hidden folders or using incognito browsers. Adam himself framed his conversations as “fictional” or “for a story” to weaken ChatGPT’s safeguards.

Even if controls hold, there’s another issue. What if parents don’t have access to their child’s device in the first place? A parental dashboard doesn’t help if the AI has already replaced friends, family, and teachers as the child’s most trusted listener. In that light, parental controls may be more of a comfort blanket than a solution.

The Bigger Problem

The lawsuit points to a core design flaw: OpenAI prioritized copyright protection over suicide prevention. The system flagged self-harm content for “extra caution” but never required intervention. That meant ChatGPT gave Adam hundreds of detailed responses about suicide without escalating or cutting the conversation short.

Testing also fell short. Most safety evaluations were one-off prompts, which made ChatGPT’s defenses look stronger than they really were. In longer conversations, the safeguards’ success rates dropped sharply.

The parental controls announcement doesn’t address that root problem. It shifts responsibility from OpenAI’s system design onto families. The company is asking parents to police risks that stem from AI companies’ own choices.

What This Moment Tells Us

Adam’s case is heartbreaking, but it also forces a question: who should bear responsibility when AI “friends” cross the line from support to harm?

Parental controls may help, but they won’t stop determined kids. They also fail to fix the deeper flaws in how AI is built, tested, and deployed. Without stronger systemic safeguards, the risk remains that ChatGPT (and tools like it) will validate despair instead of interrupting it.

Parental controls might add an extra barrier, and in some cases they might even save lives. But they’re not a substitute for better AI design, more transparent testing, and open conversations about mental health.

If OpenAI wants to make ChatGPT safe for vulnerable users, it needs more than monitoring dashboards. It needs to prove its safeguards won’t crumble when someone needs them most.
