Benchmarks won: 0 (Claude Haiku 4.5) vs 3 (GPT-5.1)
| | Anthropic Claude Haiku 4.5 | OpenAI GPT-5.1 |
|---|---|---|
| Overview | | |
| Company | Anthropic | OpenAI |
| Release date | Oct 15, 2025 | Nov 12, 2025 |
| Model type | — | — |
| Open source | No | No |
| Specifications | | |
| Parameters | — | — |
| Context window | — | — |
| Benchmarks | | |
| Science reasoning (GPQA Diamond) | 73% | 88.1% |
| Software engineering (SWE-Bench Verified) | 73.3% | 76.3% |
| Multimodal understanding (MMMU) | — | 76% |
| Timeline | | |
| Release gap | Claude Haiku 4.5 shipped 28 days before GPT-5.1 | |
GPT-5.1 leads Claude Haiku 4.5 on the 3 tracked benchmarks (GPQA Diamond, SWE-Bench Verified, MMMU), though on MMMU only GPT-5.1 has a reported score. Claude Haiku 4.5 shipped 28 days before GPT-5.1, so benchmark comparisons should account for the intervening progress.
Published specifications for these two models are limited — see each model page for the latest details.
On GPQA Diamond, GPT-5.1 scores 88.1%, 15.1 points above Claude Haiku 4.5 at 73%. On SWE-Bench Verified, GPT-5.1 scores 76.3%, 3 points above Claude Haiku 4.5 at 73.3%.
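To make these deltas easy to verify, here is a minimal Python sketch that recomputes the release gap and the head-to-head point differences from the figures in the table above (variable names are illustrative, not from any official source):

```python
from datetime import date

# Release dates as listed in the comparison table.
claude_haiku_4_5_release = date(2025, 10, 15)
gpt_5_1_release = date(2025, 11, 12)

# Gap between releases in days (expected: 28).
gap_days = (gpt_5_1_release - claude_haiku_4_5_release).days
print(f"Release gap: {gap_days} days")

# Benchmark scores (percent) where both models have a reported result.
scores = {
    "GPQA Diamond": {"Claude Haiku 4.5": 73.0, "GPT-5.1": 88.1},
    "SWE-Bench Verified": {"Claude Haiku 4.5": 73.3, "GPT-5.1": 76.3},
}

for benchmark, s in scores.items():
    delta = round(s["GPT-5.1"] - s["Claude Haiku 4.5"], 1)
    print(f"{benchmark}: GPT-5.1 leads by {delta} points")
```

MMMU is omitted from the score dictionary because the table lists no Claude Haiku 4.5 result for it, so no point difference can be computed.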