Benchmarks won: 1 vs 1
| | Anthropic Claude Opus 4.6 | OpenAI GPT-5.4 |
|---|---|---|
| **Overview** | | |
| Company | Anthropic | OpenAI |
| Release date | Feb 5, 2026 | Mar 5, 2026 |
| Model type | — | — |
| Open source | No | No |
| **Specifications** | | |
| Parameters | — | — |
| Context window | — | — |
| **Benchmarks** | | |
| GPQA Diamond (science reasoning) | 91.3% | 92.8% |
| SWE-Bench Verified (software engineering) | 80.8% | — |
| MMMU (multimodal understanding) | — | — |
| **Timeline** | | |
| Release gap | Claude Opus 4.6 shipped 28 days before GPT-5.4 | |
Claude Opus 4.6 and GPT-5.4 are closely matched on the benchmarks both vendors publish. Claude Opus 4.6 shipped 28 days before GPT-5.4, so head-to-head comparisons should account for the intervening month of progress.
Published specifications for these two models are limited — see each model page for the latest details.
On GPQA Diamond, GPT-5.4 scores 92.8%, 1.5 points above Claude Opus 4.6 at 91.3%.
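For readers who want to verify these figures, here is a minimal Python sketch that recomputes the release gap and the GPQA Diamond margin from the values in the table above; the variable names and hardcoded numbers are illustrative, not drawn from any official API.

```python
from datetime import date

# Release dates from the Overview section of the comparison table.
claude_opus_4_6_release = date(2026, 2, 5)
gpt_5_4_release = date(2026, 3, 5)

# GPQA Diamond scores (percent) from the Benchmarks section.
gpqa_scores = {"Claude Opus 4.6": 91.3, "GPT-5.4": 92.8}

# Release gap in days: Feb 5 to Mar 5, 2026.
gap_days = (gpt_5_4_release - claude_opus_4_6_release).days
print(f"Release gap: {gap_days} days")  # 28

# Point margin on GPQA Diamond: 92.8 - 91.3.
margin = gpqa_scores["GPT-5.4"] - gpqa_scores["Claude Opus 4.6"]
print(f"GPQA Diamond margin: {margin:.1f} points")  # 1.5
```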