OpenAI CEO Sam Altman has announced that, starting this December, ChatGPT will allow adult users to engage with “erotic,” sexually explicit content.
As long as users verify their age, they’ll be able to have erotic or romantic conversations. OpenAI has also hinted that it will allow developers to create “mature” ChatGPT apps once a new age verification system is in place. The company frames this as part of its “treat adult users like adults” principle.
This news marks a departure from OpenAI’s reluctance to introduce mature content to ChatGPT, especially given growing concerns about how AI can affect our mental health. I have two problems with this new policy.
Are there protections for vulnerable adults?
One, Altman hasn’t said anything about how OpenAI will protect adult users who are prone to delusional behavior.
There’s been a lot of talk about protecting minors from inappropriate content online, and that’s a real problem worth addressing. Yet little to no attention is paid to how vulnerable adults will interact with erotic AI.
Loneliness, social anxiety, and disconnection drive the demand for AI companionship. Those prone to emotional instability, compulsive behavior, or delusional thinking are at risk of developing an unhealthy attachment to AI. There have already been cases of users believing the bot is sentient. Researchers warn of “feedback loops” between AI and mental illness, where a model reinforces toxic behavior by constantly agreeing with the user.
OpenAI says GPT-5 is better at reducing sycophancy, but model upgrades don’t resolve deep psychological issues. Emotional dependency doesn’t happen overnight; it builds over long conversations. Erotica intensifies that intimacy, blurring the line between a simple chatbot and a lover. For some people, that’s too much to handle.
Altman’s confidence masks ambition
The second issue I have is that Altman sounds arrogantly confident that his company has fixed ChatGPT’s negative effects on mental health. He says OpenAI has “mitigated the serious mental health issues” and that it’s now safe to relax its guardrails.
The changes OpenAI has made (new models, prediction systems, an expert council) haven’t been around long enough to show whether they actually work. The Expert Council on Well-Being is a start: eight researchers and experts who study how technology affects mental health. But it includes no suicide prevention specialists, the very experts who have asked OpenAI to add guardrails for users with suicidal thoughts.
Beyond that, there’s a motive behind all of this. ChatGPT already has over 800 million weekly active users, and OpenAI is trying to push that number to a billion. Erotica is a category with the potential to draw in new users. Altman can deny it all he wants, but this is clearly an attempt to boost engagement. The company needs an enormous user base to justify its massive infrastructure investments.
A safety framework OpenAI must adopt
- Graduated intimacy tiers: Don’t let users jump straight into explicit content. Have phases that gradually increase intimacy.
- In-app emotional check-ins: Interrupt when signs of dependency or obsession emerge.
- Emergency mode: If a user shows signs of self-harm or distress, immediately end the erotic content and activate crisis protocols.
- Easy exits: Users should be able to delete erotic chats or revert to regular mode at any time.
- Longitudinal studies on mental health impact: Keep track of any psychological toll erotic content has on users over long periods of use.
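To make this concrete, here’s a minimal sketch of how the first three safeguards could fit together in a per-message router. Everything in it is hypothetical: the tier names, the `SessionState` fields, the `assess_risk` signal, and the thresholds are stand-ins for systems OpenAI would have to build and validate, not anything the company has announced.

```python
from dataclasses import dataclass, field
from enum import IntEnum


class IntimacyTier(IntEnum):
    """Graduated tiers: users move up only after sustained, low-risk use."""
    STANDARD = 0
    ROMANTIC = 1
    EXPLICIT = 2


@dataclass
class SessionState:
    age_verified: bool
    tier: IntimacyTier = IntimacyTier.STANDARD
    flags: list[str] = field(default_factory=list)  # accumulated warning signs


def assess_risk(message: str) -> list[str]:
    """Hypothetical classifier output: returns flags such as 'self_harm' or
    'dependency'. A real system would use a trained model plus human review,
    not keyword matching."""
    signals = {
        "self_harm": ["hurt myself", "end it all"],
        "dependency": ["you're the only one", "can't live without you"],
    }
    text = message.lower()
    return [flag for flag, phrases in signals.items()
            if any(phrase in text for phrase in phrases)]


def route_message(state: SessionState, message: str) -> str:
    """Decide how to handle one message under the proposed framework."""
    state.flags.extend(assess_risk(message))

    # Emergency mode: end erotic content and hand off to crisis protocols.
    if "self_harm" in state.flags:
        state.tier = IntimacyTier.STANDARD
        return "crisis_protocol"

    # In-app emotional check-in: interrupt when dependency signals accumulate.
    if state.flags.count("dependency") >= 3:
        return "emotional_check_in"

    # Graduated tiers: explicit content requires age verification.
    if state.tier == IntimacyTier.EXPLICIT and not state.age_verified:
        state.tier = IntimacyTier.STANDARD

    return f"respond_at_tier_{state.tier.name.lower()}"
```

Easy exits and longitudinal studies would sit outside a per-message router like this, but the rest maps directly onto the list above.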
Allowing erotic content in ChatGPT could be a fun idea, and I agree that a system should be in place to block minors from accessing it. The bigger issue is the risk that adults struggling with their mental health will form destructive attachments to AI. If OpenAI can’t (or won’t) introduce protections that are actually enforced, it should pause the rollout.