Benchmarks won: 1 (Claude Opus 4.5) vs 2 (Gemini 3.0 Pro)
| | Anthropic Claude Opus 4.5 | Google Gemini 3.0 Pro |
|---|---|---|
| Overview | | |
| Company | Anthropic | Google |
| Release date | Nov 24 2025 | Nov 18 2025 |
| Model type | — | — |
| Open source | No | No |
| Specifications | | |
| Parameters | — | — |
| Context window | — | — |
| Benchmarks | | |
| Science reasoning (GPQA Diamond) | 87% | 91.9% |
| Software engineering (SWE-Bench Verified) | 80.9% | 76.2% |
| Multimodal understanding (MMMU) | — | 81% |
| Timeline | | |
| Release gap | Gemini 3.0 Pro shipped 6 days before Claude Opus 4.5 | |
Gemini 3.0 Pro leads Claude Opus 4.5 on 2 of the 3 tracked benchmarks: GPQA Diamond outright, and MMMU, where Claude Opus 4.5 has no published score. Claude Opus 4.5 leads on SWE-Bench Verified. Gemini 3.0 Pro shipped 6 days before Claude Opus 4.5, so benchmark comparisons should account for the intervening progress.
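The 6-day gap follows directly from the release dates in the table. A minimal Python sketch, with illustrative variable names that are not taken from either model page:

```python
from datetime import date

# Release dates as listed in the table above.
gemini_3_pro_release = date(2025, 11, 18)
claude_opus_4_5_release = date(2025, 11, 24)

gap_days = (claude_opus_4_5_release - gemini_3_pro_release).days
print(f"Gemini 3.0 Pro shipped {gap_days} days before Claude Opus 4.5")  # 6 days
```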
Published specifications for these two models are limited — see each model page for the latest details.
On GPQA Diamond, Gemini 3.0 Pro scores 91.9%, 4.9 points above Claude Opus 4.5 at 87%. On SWE-Bench Verified, Claude Opus 4.5 scores 80.9%, 4.7 points above Gemini 3.0 Pro at 76.2%.
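These point differences, and the 1-vs-2 win tally at the top of the page, can be reproduced from the three benchmark rows. The sketch below is illustrative Python, not part of either vendor's tooling; the score table is hand-copied from above and `None` marks a score that isn't published:

```python
# Benchmark scores (percent) from the table above; None = no published score.
scores = {
    "GPQA Diamond":       {"Claude Opus 4.5": 87.0, "Gemini 3.0 Pro": 91.9},
    "SWE-Bench Verified": {"Claude Opus 4.5": 80.9, "Gemini 3.0 Pro": 76.2},
    "MMMU":               {"Claude Opus 4.5": None, "Gemini 3.0 Pro": 81.0},
}

wins = {"Claude Opus 4.5": 0, "Gemini 3.0 Pro": 0}
for bench, by_model in scores.items():
    reported = {m: s for m, s in by_model.items() if s is not None}
    leader = max(reported, key=reported.get)
    wins[leader] += 1
    if len(reported) == 2:
        a, b = reported.values()
        print(f"{bench}: {leader} leads by {abs(a - b):.1f} points")
    else:
        print(f"{bench}: only {leader} has a published score")

print(wins)  # {'Claude Opus 4.5': 1, 'Gemini 3.0 Pro': 2}
```

Running it prints the 4.9-point GPQA Diamond and 4.7-point SWE-Bench Verified gaps quoted above, and counts MMMU as a win for Gemini 3.0 Pro only because Claude Opus 4.5 has no published score there.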