Anthropic introduces next gen models Claude Opus 4 and Sonnet 4
After whirlwind week of announcements from Google and OpenAI, Anthropic has its own news to share.
On Thursday, Anthropic announced Claude Opus 4 and Claude Sonnet 4, its next generation of models, with an emphasis on coding, reasoning, and agentic capabilities. According to Rakuten, which got early access to the model, Claude Opus 4 ran “independently for seven hours with sustained performance.”
Claude Opus is Anthropic’s largest version of the model family with more power for longer, complex tasks, whereas Sonnet is generally speedier and more efficient. Claude Opus 4 is a step up from its previous version, Opus 3, and Sonnet 4 replaces Sonnet 3.7.
Mashable Light Speed
Anthropic says Claude Opus 4 and Sonnet 4 outperform rivals like OpenAI’s o3 and Gemini 2.5 Pro on key benchmarks for agentic coding tasks like SWE-bench and Terminal-bench. It’s worth noting however, that self-reported benchmarks aren’t considered the best markers of performance since these evaluations don’t always translate to real-world use cases, plus AI labs aren’t into the whole transparency thing these days, which AI researchers and policy makers increasingly call for. “AI benchmarks need to be subjected to the same demands concerning transparency, fairness, and explainability, as algorithmic systems and AI models writ large,” said the European Commission’s Joint Research Center.

Opus 4 and Sonnet 4 outperform rivals in SWE-bench, but take benchmark performance with a grain of salt.
Credit: Anthropic
Alongside the launch of Opus 4 and Sonnet 4, Anthropic also introduced new features. That includes web search while Claude is in extended thinking mode, and summaries of Claude’s reasoning log “instead of Claude’s raw thought process.” This is described in the blog post as being more helpful to users, but also “protecting [its] competitive advantage,” i.e. not revealing the ingredients of its secret sauce. Anthropic also announced improved memory and tool use in parallel with other operations, general availability of its agentic coding tool Claude Code, and additional tools for the Claude API.
In the safety and alignment realm, Anthropic said both models are “65 percent less likely to engage in reward hacking than Claude Sonnet 3.7.” Reward hacking is a slightly terrifying phenomenon where models can essentially cheat and lie to earn a reward (successfully perform a task).
One of the best indicators we have in evaluating a model’s performance is users’ own experience with it, although even more subjective than benchmarks. But we’ll soon find out how Claude Opus 4 and Sonnet 4 chalk up to competitors in that regard.
Topics
Artificial Intelligence