Side-by-side comparison of DeepSeek V3 (DeepSeek) and Claude 3 Opus (Anthropic) — benchmarks, pricing, and capabilities.
| Metric | DeepSeek V3 (DeepSeek) | Claude 3 Opus (Anthropic) |
|---|---|---|
| Category | LLMs | LLMs |
| **Specifications** | | |
| Context Window (tokens) | 128K | 200K |
| **Pricing (per 1M tokens)** | | |
| Input Cost | $0.27 | $15.00 |
| Output Cost | $1.10 | $75.00 |
| **Performance** | | |
| Overall Score | 90.0 | 90.5 |
| ARC-Challenge | — | 90.7 |
| BigBench Hard | 85.5 | 84.5 |
| Chatbot Arena Elo | — | 1178.0 |
| DROP | 85.0 | 85.5 |
| GSM8K | 93.0 | 95.0 |
| HumanEval | 89.2 | 84.9 |
| MATH | 75.8 | 60.1 |
| MMLU | 87.1 | 86.8 |
| TruthfulQA | 68.0 | 69.0 |
| WinoGrande | — | 84.0 |
| **Community** | | |
| User Rating | ★ 4.5 | ★ 4.6 |
| Reviews | 620 | 720 |
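
To make the pricing rows concrete, here is a minimal sketch of the per-request cost arithmetic. The `PRICES` table mirrors the per-1M-token figures above; the token counts in the example (10,000 input, 2,000 output) are hypothetical, chosen only to illustrate the calculation.

```python
# Rough cost comparison using the per-1M-token prices from the table above.
# The token counts below are hypothetical and only illustrate the math.

PRICES = {
    "DeepSeek V3":   {"input": 0.27,  "output": 1.10},   # USD per 1M tokens
    "Claude 3 Opus": {"input": 15.00, "output": 75.00},  # USD per 1M tokens
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request: (tokens / 1e6) * price per 1M tokens."""
    p = PRICES[model]
    return (input_tokens / 1e6) * p["input"] + (output_tokens / 1e6) * p["output"]

# Example: 10,000 input tokens and 2,000 output tokens per request.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 10_000, 2_000):.4f} per request")
```

At these hypothetical volumes the sketch prints about $0.0049 per request for DeepSeek V3 versus $0.3000 for Claude 3 Opus, roughly a 60x difference driven entirely by the per-token prices in the table.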