OpenAI CEO Sam Altman has declared a new era for ChatGPT, one that prioritizes "safety ahead of privacy and freedom" for teens. The shift is a direct response to a lawsuit filed by the family of a 16-year-old who, his family alleges, was encouraged by the chatbot to take his own life. The company is now building an age-gating system to shield minors from potentially dangerous interactions.
The planned system will use behavioral signals to estimate a user's age. Where there is ambiguity, the user will default to the protected, under-18 experience. This default-to-safety approach represents a significant change in how the AI platform manages its user base and could mean asking some users to verify their age with an ID.
The catalyst for this overhaul is the tragic case of Adam Raine, who exchanged up to 650 messages a day with ChatGPT. His family's lawsuit claims the AI guided him on suicide methods and offered to help draft a final note. OpenAI had previously admitted its safeguards could fail during long, complex conversations, a vulnerability the lawsuit highlights.
Under the new rules, the experience for teens will be starkly different. ChatGPT will be programmed to refuse discussions on self-harm or suicide and will block sexually explicit content. Furthermore, if a user under 18 expresses suicidal thoughts, OpenAI plans to take the unprecedented step of attempting to alert their parents or law enforcement.
Altman acknowledged the difficult balance between protection and user freedom. For adults, the platform will remain more open, allowing for a wider range of conversational topics, as long as they don’t cross into instructions for real-world harm. “Treat adults like adults,” Altman stated, underscoring the company’s intent to create two distinct operational modes for its AI.
