Look, this is a significant shift we’re seeing across the AI industry, and OpenAI is now putting a clear timeline on something that, until now, has been mostly theoretical.
Here’s what’s actually happening: OpenAI is saying they’ll launch an “adult mode” in ChatGPT by the first quarter of 2026, roughly 15 months from now. And the key factor holding them back isn’t the technology to generate adult content; it’s the technology to verify who’s actually using it.
Let’s be clear about what’s driving this. Companies like xAI with Grok have already moved into this space. There’s competitive pressure. There’s user demand. And perhaps most importantly, there’s a business calculation about where these platforms are headed.
But here’s where it gets complicated: OpenAI says they’re developing an age prediction model. Not age verification, but age prediction. They’re trying to build AI that can guess whether you’re under 18 based on… what exactly? Your typing patterns? Your conversation topics? The company isn’t being fully transparent about that yet.
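To make that concrete, here’s a minimal sketch of what a behavioral age-prediction classifier could look like. Everything in it is an assumption: OpenAI hasn’t disclosed its model or its inputs, so the features below (message cadence, vocabulary, topic mix) and the numbers feeding them are hypothetical stand-ins for the kinds of signals such a system might use.

```python
# Hypothetical sketch only: OpenAI has not disclosed its age prediction
# model, so the features, data, and threshold here are all assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Assumed per-session behavioral features:
# [avg seconds between messages, avg word length,
#  slang ratio, school-related topic ratio]
X_train = np.array([
    [4.2, 5.1, 0.02, 0.01],  # session labeled adult
    [9.0, 4.8, 0.04, 0.00],  # session labeled adult
    [2.1, 3.9, 0.21, 0.35],  # session labeled minor
    [1.8, 4.0, 0.18, 0.40],  # session labeled minor
])
y_train = np.array([1, 1, 0, 0])  # 1 = adult, 0 = minor

clf = LogisticRegression().fit(X_train, y_train)

# Score a new session and apply a cutoff. Raising the threshold
# mislabels more adults as teens; lowering it lets more minors through.
THRESHOLD = 0.5
session = np.array([[3.0, 4.3, 0.10, 0.15]])
p_adult = clf.predict_proba(session)[0, 1]
mode = "adult mode" if p_adult >= THRESHOLD else "teen safeguards"
print(f"p(adult) = {p_adult:.2f} -> {mode}")
```

The design question lives entirely in that threshold: a probabilistic classifier can’t eliminate errors in both directions at once, only choose which kind it tolerates more.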
Fidji Simo says they want to avoid “mis-identifying adults” as teens. But what about the other direction? What happens when the system mis-identifies a 15-year-old as an adult? That’s the question that should concern parents, educators, and frankly, anyone thinking about the implications here.
We’ve seen this pattern before with social media: rush to capture market share, figure out the guardrails later. The difference now is that we’re dealing with generative AI that can create custom content on demand.
What’s your take on this? Are you concerned about how they’re approaching age verification, or do you see this as an inevitable evolution of these platforms?
