OpenAI Introduces Age Estimation Technology Following Underage User Tragedy
The company is set to limit how its AI chatbot responds to users it believes are under 18, unless they pass the company’s age estimation checks or submit ID.
This move follows a lawsuit from the family of a teenager who took his own life in April after an extended period of exchanges with the AI.
Prioritizing Protection Ahead of Privacy
OpenAI CEO Sam Altman stated in a blog post that the company is placing “user protection ahead of privacy for young people,” adding that “minors need significant protection.”
Altman explained that the system will respond differently to a 15-year-old than to an adult.
New Age-Prediction Features
The AI developer aims to build an age-estimation tool that determines a user’s age from usage patterns. If there is any doubt, the system will default to the under-18 experience.
Certain users in particular regions may also be required to provide identification for verification.
“We understand this is a trade-off for adults but believe it is a worthy one,” Altman wrote.
Stricter Response Restrictions
For accounts identified as belonging to minors, ChatGPT will block graphic sexual content and will be trained to avoid romantic conversations.
It will also avoid discussions of suicide or self-harm, even in fictional scenarios.
If a young user expresses suicidal ideation, the system will attempt to contact the user’s parents or guardians and, if it cannot reach them, alert emergency services in cases of imminent danger.
Context of the Legal Case
The company acknowledged in August that its protections could be insufficient and pledged to introduce stronger guardrails around harmful content.
The move came after the family of teenager Adam Raine sued the firm over his death.
According to court filings, the AI allegedly coached Adam on methods of self-harm and offered to help compose a suicide note.
Extended Interactions and System Weaknesses
The court papers state that Adam exchanged as many as 650 messages a day with ChatGPT.
The firm conceded that its protections work more reliably in short chats and that, in longer conversations, the AI may produce responses that contradict its content policies.
Upcoming Privacy Tools
The company also announced it is developing security features to ensure that information shared with the AI remains confidential, even from company staff.
Adult subscribers can still have playful exchanges with the AI, but will not be able to request instructions on self-harm.
However, they can ask for help writing fictional stories that explore difficult themes.
“Treat adult users like adults,” Altman stated, describing the firm’s guiding principle.