2 vs 0 benchmarks won
| | Anthropic Claude Opus 4.1 | xAI Grok Code Fast 1 |
|---|---|---|
| Overview | | |
| Company | Anthropic | xAI |
| Release date | Aug 5, 2025 | Aug 28, 2025 |
| Model type | — | — |
| Open source | No | No |
| Specifications | | |
| Parameters | — | — |
| Context window | — | — |
| Benchmarks | | |
| Science reasoning (GPQA Diamond) | 80.9% | — |
| Software engineering (SWE-Bench Verified) | 74.5% | — |
| Multimodal understanding (MMMU) | — | — |
| Timeline | | |
| Release gap | Claude Opus 4.1 shipped 23 days before Grok Code Fast 1 | |
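The release gap reported in the table can be checked with a quick date calculation (dates taken from the table above):

```python
from datetime import date

# Release dates from the comparison table
claude_opus_41_release = date(2025, 8, 5)    # Anthropic Claude Opus 4.1
grok_code_fast_1_release = date(2025, 8, 28)  # xAI Grok Code Fast 1

gap_days = (grok_code_fast_1_release - claude_opus_41_release).days
print(f"Claude Opus 4.1 shipped {gap_days} days before Grok Code Fast 1")
# → Claude Opus 4.1 shipped 23 days before Grok Code Fast 1
```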
Claude Opus 4.1 has published scores on 2 of the 3 tracked benchmarks (GPQA Diamond: 80.9%, SWE-Bench Verified: 74.5%), while Grok Code Fast 1 has published none; neither model reports an MMMU score. Claude Opus 4.1 also shipped 23 days before Grok Code Fast 1, so any comparison should account for progress made in the intervening period.
Published specifications for these two models are limited; see each model page for the latest details.
Direct head-to-head benchmark comparisons are unavailable because Grok Code Fast 1 has not published scores on GPQA Diamond, SWE-Bench Verified, or MMMU.