ChatGPT lost me at 'erotica' — here’s the chatbot I use now for productivity

The other day my 11-year-old son said, "Hey mom, check out the cool soccer videos I made with Sora 2." Before I could look at the AI-generated videos he was so proud of, I had to know how he had access to Sora 2 without a code. "I'm using your account," he said.
I wasn't angry that he was using my account, but the surprise gave me pause. OpenAI's CEO Sam Altman announced on X that ChatGPT will now allow "erotica for verified adults" just weeks after announcing Parental Controls. This doesn't sit well with me. Anyone with a child old enough to feed themselves (I realize this leaves new dad Sam Altman out) knows first-hand that kids can get into anything. Anything.
The juxtaposition is jarring: OpenAI rolls out parental controls, then announces adult content access weeks later. It's not that these two features can't coexist; plenty of platforms manage age-gated content responsibly. But the speed and timing suggest a company moving fast and thinking later, which is exactly the wrong approach when your product is increasingly embedded in family life.
My son using my account isn't unusual. ChatGPT has become a homework helper, creative partner and source of entertainment for millions of kids. And yes, kids under 13 shouldn't be on ChatGPT or sharing accounts, but that's what kids do: they figure out passwords, they use family iPads, they ask Siri to open mom's ChatGPT app. The idea that "verified adults" creates any meaningful barrier is, to put it mildly, optimistic.
The pattern that bothers me
I live and breathe all things AI for a living, so I know this doesn't end with a single policy decision. It's a starting point in a pattern I've been watching with growing concern.
When ChatGPT first launched, it felt revolutionary, almost safe. Three years ago, we weren't even thinking about guardrails; everyday users were still figuring out what was possible. And because it offered the best-known AI around, OpenAI positioned itself as the responsible AI company.
A few years later, that narrative feels strained.
The erotica announcement is just the latest example. There's also the rushed release of Sora 2 without clear content policies; the integration of Sora video generation (impressive technology, yes, but also a deepfake engine) without serious verification systems; and the pivot to a for-profit structure that prioritizes growth over the original mission.
Each decision individually seems defensible. Together, they paint a picture of a company that's stopped asking "should we?" and started asking "how fast can we ship this?"
What's wild is that OpenAI isn't the only one with questionable guardrails. Grok, anyone? How about Meta? All of these systems are flawed, and none of them are built with the recognition that kids will find ways around barriers, which is exactly why multiple layers of protection matter.
The transparency problem
Beyond content policies, there's the question of trust. OpenAI has been vague about web scraping, contradictory about user conversations, and opaque about partnerships. When The New York Times sued over copyright infringement, OpenAI's defense amounted to "AI needs data to learn." Fair enough — but whose data, obtained how, with what consent?
As someone who writes content professionally, this matters to me. As a parent whose children will grow up using ChatGPT, it matters even more. What happens to the essays my son feeds into ChatGPT for feedback? The creative stories my daughter generates? OpenAI says it doesn't train on user data by default, but the privacy policy leaves room for interpretation, and the settings are Byzantine.
What I'm looking for instead
So what does a responsible AI assistant look like in 2025?
For me, it comes down to three things:
1. Clear content policies designed for shared devices. If your product markets itself to families, act like it. That means assuming kids will access adult accounts. It means building friction into sensitive features — not just "Are you 18?" checkboxes, but actual barriers that require parent involvement.
2. Genuine transparency regarding training data and privacy. I want to know what my conversations are used for. I want simple controls to opt out. And I want the company to be honest about the copyrighted material in its training sets, rather than hiding behind "the internet is our dataset" vagueness.
3. A mission that hasn't been compromised by growth targets. I understand companies need to make money. But when the company that promised to "ensure [artificial general intelligence] benefits all of humanity" starts offering erotica generation as a premium feature, it's fair to question whether that mission still guides decisions.
Where I landed
I've switched my family to Claude, Anthropic's AI assistant. It's not perfect, but the differences matter: Anthropic puts safety first.
The company uses a Constitutional AI approach that bakes safety into the model itself, not just as post-training filters. It's more transparent about training data: it doesn't scrape the open web indiscriminately, and it's been clearer about sourcing. Its privacy policy is straightforward: conversations aren't used for training unless you explicitly opt in.
Most importantly, Anthropic doesn't seem to be in a race to the bottom on content policies. Claude won't generate erotica or deepfakes, or help you write malware, because those capabilities weren't built in.
There's a difference between "we can but we block it" and "we chose not to build it that way."
For writing, research and helping my kids with homework, Claude does everything ChatGPT can do, and often does it better, with less hedging and more thoughtful responses.
Claude's Artifacts feature (which lets you build interactive code, documents, and visualizations) has been a game-changer for my son's science projects. The point isn't that everyone needs to switch. The point is that we should demand better from the companies building these tools.
What OpenAI can do
Here's the thing: I don't want OpenAI to fail. ChatGPT introduced millions of people to AI, including my family. That's valuable. But leadership means making hard choices, not just shipping features. Here's what could restore my trust:
- Real family accounts. Not parental controls bolted onto adult accounts, but genuinely separate environments where kids can use AI safely. Think Netflix profiles, not content filters.
- A moratorium on sensitive features until better safeguards exist. Erotica generation, voice cloning, video synthesis — these are powerful tools with obvious abuse potential. Slow down. Get the safety infrastructure right first.
- Actual transparency about training data. Publish detailed information about data sources, licensing, and consent. Let creators opt out proactively, not just reactively when they sue.
- A return to mission. Remember when OpenAI's charter said profit would never override safety? Hold to that. Even if it means growing slower than Anthropic or Google.
Bottom line
AI is here to stay, and parents like me have a choice to make.
The night my son showed me his Sora videos, I realized we're in a new phase of parenting. It's no longer just about monitoring screen time and checking browser history; now it's also about deciding which AI tools our kids use, what they learn from them and what values those tools embody.
Just as I've taught my kids to evaluate websites for credibility and social media for authenticity, I'm now teaching them to evaluate AI tools for alignment with our values.
OpenAI's recent decisions have shown me their values don't align with mine — not as a parent, not as a creator, not as someone who believes AI should be built thoughtfully rather than recklessly.
So we're done with ChatGPT in our house. I still have to use and test it for work, but at home I'm acting on the same protective instinct that makes a mom switch pediatricians when something feels off.
For families still using AI tools, I encourage you to ask questions. Read the policies. Check what your kids can access. And remember that "verified adults only" has never stopped a determined 11-year-old.
The AI revolution is here, and it's not going away. But we get to choose which companies we trust to build it.