Benchmarks won: 0 (Claude Opus 4) vs 3 (Gemini 3.0 Pro)
| | Anthropic Claude Opus 4 | Google Gemini 3.0 Pro |
|---|---|---|
| Overview | | |
| Company | Anthropic | Google |
| Release date | May 22, 2025 | Nov 18, 2025 |
| Model type | — | — |
| Open source | No | No |
| Specifications | | |
| Parameters | — | — |
| Context window | — | — |
| Benchmarks | | |
| Science reasoning (GPQA Diamond) | 79.6% | 91.9% |
| Software engineering (SWE-Bench Verified) | 72.5% | 76.2% |
| Multimodal understanding (MMMU) | — | 81% |
| Timeline | | |
| Release gap | Claude Opus 4 shipped 180 days before Gemini 3.0 Pro | |
Gemini 3.0 Pro leads Claude Opus 4 on all 3 tracked benchmarks (GPQA Diamond, SWE-Bench Verified, MMMU), though Claude Opus 4 has no reported MMMU score to compare against. Claude Opus 4 shipped 180 days before Gemini 3.0 Pro, so benchmark comparisons should account for roughly six months of intervening progress.
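The 180-day release gap follows directly from the two release dates in the table. A minimal sketch of the date arithmetic in Python (variable names are illustrative):

```python
from datetime import date

# Release dates taken from the comparison table above.
claude_opus_4_release = date(2025, 5, 22)   # Claude Opus 4
gemini_3_pro_release = date(2025, 11, 18)   # Gemini 3.0 Pro

# Subtracting two dates yields a timedelta; .days gives the gap in whole days.
gap_days = (gemini_3_pro_release - claude_opus_4_release).days
print(f"Release gap: {gap_days} days")  # Release gap: 180 days
```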
Published specifications for these two models are limited; see each model page for the latest details.
On GPQA Diamond, Gemini 3.0 Pro scores 91.9%, 12.3 points above Claude Opus 4 at 79.6%. On SWE-Bench Verified, Gemini 3.0 Pro scores 76.2%, 3.7 points above Claude Opus 4 at 72.5%.
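The point margins quoted above are plain score differences. A hypothetical sketch that recomputes them from the table's scores (the dict layout is illustrative, not from the source):

```python
# Benchmark scores copied from the table above; structure is illustrative.
scores = {
    "GPQA Diamond": {"Claude Opus 4": 79.6, "Gemini 3.0 Pro": 91.9},
    "SWE-Bench Verified": {"Claude Opus 4": 72.5, "Gemini 3.0 Pro": 76.2},
}

for benchmark, by_model in scores.items():
    # Margin in percentage points, rounded to one decimal as in the summary.
    delta = round(by_model["Gemini 3.0 Pro"] - by_model["Claude Opus 4"], 1)
    print(f"{benchmark}: Gemini 3.0 Pro leads by {delta} points")
# GPQA Diamond: Gemini 3.0 Pro leads by 12.3 points
# SWE-Bench Verified: Gemini 3.0 Pro leads by 3.7 points
```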