What to know
- OpenAI has updated its rules to better protect teens, focusing on safety, clear communication, and responses that match their age.
- For teens aged 13–17, ChatGPT now has stronger protections, limiting harmful content and risky activities.
- ChatGPT will now automatically apply teen safety limits if an account appears to belong to a minor, while adults can confirm their age to unlock full access.
- OpenAI also released new AI literacy resources for teens and parents, plus expanded parental controls across ChatGPT and Sora for content, quiet hours, and safety alerts.
OpenAI has updated the rules that guide how ChatGPT works, adding special guidelines for teens aged 13–17 that focus on safety, real-world support, age-appropriate treatment, and clearer limits.

ChatGPT now uses extra safety checks for teens when they ask about self-harm, explicit content, risky activities, body image, or hiding unsafe behavior. In serious cases, it encourages reaching out to real-world support like counselors, helplines, or emergency services.
OpenAI is introducing a system that estimates whether a user might be under 18 based on how they chat. If it’s unsure, it applies teen protections by default, and adults can later confirm their age to get full access.
ChatGPT developer OpenAI announces new teen safety features, including an age-prediction system and ID age verification in some countries. https://t.co/Jvbo9CiW0L
— NBC News (@NBCNews) September 16, 2025
Teen safety features are also being expanded to group chats, the ChatGPT Atlas browser, and the Sora app. Parents can use existing controls to manage content, set quiet hours, and receive safety alerts in rare, serious situations.
Along with this, OpenAI shared two new AI learning guides: one for families that explains how AI works and how to use it responsibly, and another for parents with tips, conversation starters, and ways to help teens think critically and set healthy boundaries.