
Anthropic just dropped Claude Opus 4.7, their newest flagship AI model. Worth a closer look if you’re paying for any of these tools.
Coding scores jumped from 80.8% to 87.6% on SWE-bench Verified, the standard test for fixing real GitHub bugs. Image resolution more than tripled, so it actually reads dense screenshots and scanned docs. A new memory system holds context across long sessions. Same price as 4.6: $5 per million input tokens and $25 per million output tokens.
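To make that pricing concrete, here's a quick sketch of per-request cost at the published rates. The token counts in the example are illustrative, not from Anthropic:

```python
# Cost sketch at Opus 4.7's published rates:
# $5 per million input tokens, $25 per million output tokens.

INPUT_RATE = 5.00 / 1_000_000    # USD per input token
OUTPUT_RATE = 25.00 / 1_000_000  # USD per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single API call."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 10k-token prompt with a 2k-token reply
# costs $0.05 in, $0.05 out — $0.10 total.
print(request_cost(10_000, 2_000))  # 0.1
```

At these rates a long agentic coding session that churns through a few million tokens lands in the tens of dollars, which is why the unchanged price is worth noting alongside the benchmark gains.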
It beats OpenAI’s GPT-5.4 and Google’s Gemini 3.1 Pro on the benchmarks that matter for real work, including coding, computer use, and financial analysis. Cleanest choice on the market today for production use.
Less talked about is the model they didn’t ship. Anthropic has a model called Mythos Preview that they say is more capable than Opus 4.7. Mythos isn’t for sale. It sits with ~40 partner companies through Project Glasswing, including JPMorgan Chase, Apple, AWS, Microsoft, Google, Cisco, CrowdStrike, Nvidia, and Broadcom. Opus 4.7 was trained with some capabilities deliberately reduced before shipping.
New commercial pattern in enterprise AI. The most powerful model goes to a handful of NDA partners. Public version ships with guardrails. Customers used to get the best model the vendor had. Now they get the best one the vendor will sell them.
For business buyers, the “best available model” label depends entirely on which tier of access your contract permits. The vendor’s most capable model and the one you can buy might be very different things now.
anthropic.com/news/claude-op…
