Anthropic just released a 'Civilian' version of its 'Mythos' AI that's too dangerous for the public

Dario Amodei, Anthropic CEO
(Image credit: Getty Images)

Today, Anthropic officially released Claude Opus 4.7, the most powerful AI model available to the general public. On paper, it promises to be a beast: a notable leap in advanced software engineering, substantially improved vision for analysis tasks and a new "self-verification" mode that lets it audit its own work before reporting back to the user.

But there is a shadow hanging over this launch. For the first time in the history of frontier AI, a company has admitted to purposely making a model dumber in order to protect the world from it. Let me explain.

Opus 4.7 is the 'civilian-safe' version of the Mythos model

claude logo

(Image credit: Claude/Anthropic)

To truly get why the release of Opus 4.7 is such a milestone, you first have to understand the implications of Anthropic's Claude Mythos Preview. I'm mentioning it alongside today's launch mainly because Mythos remains the company's most powerful model. However, access to it is strictly limited to cyber defenders and critical infrastructure partners. While Opus 4.7 is a "notable improvement" over previous versions, it is fundamentally the secondary tier.


In the release notes for Opus 4.7, Anthropic dropped a bombshell: during training, the team experimented with efforts to "differentially reduce" the model's cyber-offensive capabilities.

For you and me, that means the company intentionally nerfed the model’s ability to be used as a digital weapon.

Project Glasswing and the first real-world test

Claude voice mode

(Image credit: Anthropic)

Opus 4.7 serves as the first live guinea pig for Project Glasswing, the security initiative Anthropic unveiled last week. This framework introduces automated safeguards that detect and block prohibited or high-risk cybersecurity requests in real time.

For the average developer, this means a more helpful assistant. For the security community, it means a gatekeeper.

If you are a professional researcher, you can no longer access the model's advanced cyber capabilities anonymously. You must now apply for Anthropic's new Cyber Verification Program. That move effectively puts "Frontier AI" behind a background check.

Opus 4.7 upgrades

Claude on a computer screen

(Image credit: Shutterstock)

Even with its wings clipped in cybersecurity, Opus 4.7 is promised to be a massive upgrade for professional workflows. If you aren't trying to hack a mainframe, here is what you’re getting:

  • Autonomous engineering: This new model makes it easier than ever to hand off your hardest coding work. Anthropic promises that tasks which previously required "close supervision" can now be delegated with confidence.
  • Self-verification: Opus 4.7 no longer just "guesses." It devises ways to verify its own outputs, running internal logical checks before reporting back. This is huge for hallucination reduction and fact-checking.
  • High-resolution vision: While image generation is still not part of Claude's feature set, the model can now see images in significantly greater resolution. This upgrade could be useful for parsing complex technical diagrams, UI/UX mockups and even professional slides for your next presentation.
  • Creative "taste": Anthropic claims the model is more "tasteful" when generating professional documents, producing higher-quality interfaces and docs that feel less "AI-generated" and more human-refined. This is something I'm still eager to play around with, as "taste" is widely considered one of the hardest human qualities for AI to replicate.

The takeaway

Claude Opus 4.7 is a "safe" powerhouse, and pricing stays the same as Opus 4.6: $5 per million input tokens and $25 per million output tokens. At that unchanged price, it promises a massive 3x increase in production task completion and near-perfect vision accuracy (98.5%).
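If you want a feel for what those per-million-token rates mean for your own workloads, here's a minimal back-of-the-envelope sketch. It assumes simple linear per-token billing at the listed rates and ignores anything like caching discounts, which real API bills may include:

```python
# Rough cost estimate for Opus 4.7 at the article's listed rates.
# Assumes straight per-token billing; real invoices may differ.

INPUT_RATE = 5.00    # USD per 1M input tokens
OUTPUT_RATE = 25.00  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one request."""
    return (input_tokens / 1_000_000) * INPUT_RATE + \
           (output_tokens / 1_000_000) * OUTPUT_RATE

# Example: a long coding session with 200k tokens in, 50k tokens out
print(f"${estimate_cost(200_000, 50_000):.2f}")  # → $2.25
```

In other words, even a hefty agentic coding session costs a couple of dollars at these rates, which is part of why the "same price as its predecessor" detail matters.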

Still, I'm cautiously optimistic. The real story here is that this is the "civilian" version of Anthropic's secret Mythos model, purposefully limited in its hacking abilities to pilot a new era of gated, identity-verified AI. That's a genuine turning point, and I'll be watching (and reporting) closely.

Have you tried it yet? Let me know in the comments what you think.



Amanda Caswell
AI Editor

