Benchmarks won: 0 (Claude Opus 4.1) vs 2 (GPT-5.2)
| | Anthropic Claude Opus 4.1 | OpenAI GPT-5.2 |
|---|---|---|
| Overview | | |
| Company | Anthropic | OpenAI |
| Release date | Aug 5, 2025 | Dec 11, 2025 |
| Model type | — | — |
| Open source | No | No |
| Specifications | | |
| Parameters | — | — |
| Context window | — | — |
| Benchmarks | | |
| Science reasoning (GPQA Diamond) | 80.9% | 92.4% |
| Software engineering (SWE-Bench Verified) | 74.5% | 80.0% |
| Multimodal understanding (MMMU) | — | — |
| Timeline | | |
| Release gap | Claude Opus 4.1 shipped 128 days before GPT-5.2 | |
GPT-5.2 leads Claude Opus 4.1 on both benchmarks with reported scores, GPQA Diamond and SWE-Bench Verified; neither model has a published MMMU score. Claude Opus 4.1 shipped 128 days before GPT-5.2, so benchmark comparisons should account for the intervening progress.
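The 128-day gap follows directly from the two release dates in the table. A minimal sketch of the calculation (dates taken from the table above):

```python
from datetime import date

# Release dates from the comparison table
claude_opus_4_1 = date(2025, 8, 5)   # Claude Opus 4.1
gpt_5_2 = date(2025, 12, 11)         # GPT-5.2

# Subtracting dates yields a timedelta; .days gives the gap
gap = (gpt_5_2 - claude_opus_4_1).days
print(f"Release gap: {gap} days")  # Release gap: 128 days
```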
Published specifications for these two models are limited; see each model page for the latest details.
On GPQA Diamond, GPT-5.2 scores 92.4%, 11.5 points above Claude Opus 4.1 at 80.9%. On SWE-Bench Verified, GPT-5.2 scores 80%, 5.5 points above Claude Opus 4.1 at 74.5%.
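The point margins are simple differences of the reported scores. A short sketch reproducing them, with the scores hard-coded from the benchmark table:

```python
# Reported scores from the benchmark table (percentage points)
scores = {
    "GPQA Diamond":       {"Claude Opus 4.1": 80.9, "GPT-5.2": 92.4},
    "SWE-Bench Verified": {"Claude Opus 4.1": 74.5, "GPT-5.2": 80.0},
}

for benchmark, results in scores.items():
    margin = results["GPT-5.2"] - results["Claude Opus 4.1"]
    print(f"{benchmark}: GPT-5.2 leads by {margin:.1f} points")
# GPQA Diamond: GPT-5.2 leads by 11.5 points
# SWE-Bench Verified: GPT-5.2 leads by 5.5 points
```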