Side-by-side comparison of Claude 3.5 Sonnet (Anthropic) and DeepSeek V3 (DeepSeek): benchmarks, pricing, and capabilities. A worked pricing example follows the table.
| Metric | Claude 3.5 Sonnet (Anthropic) | DeepSeek V3 (DeepSeek) |
|---|---|---|
| Category | LLMs | LLMs |
| **Specifications** | | |
| Context Window | 200K tokens | 128K tokens |
| **Pricing (per 1M tokens, USD)** | | |
| Input Cost | $3.00 | $0.27 |
| Output Cost | $15.00 | $1.10 |
| **Performance (benchmark scores)** | | |
| Overall Score | 91.2 | 90.0 |
| ARC-Challenge | 96.2 | — |
| BigBench Hard | 86.0 | 85.5 |
| Chatbot Arena Elo | 1104.0 | — |
| DROP | 86.5 | 85.0 |
| GSM8K | 78.3 | 93.0 |
| HumanEval | 90.2 | 89.2 |
| MATH | 75.2 | 75.8 |
| MMLU | 88.1 | 87.1 |
| TruthfulQA | 71.0 | 68.0 |
| WinoGrande | 82.0 | — |
| **Community** | | |
| User Rating | ★ 4.6 | ★ 4.5 |
| Reviews | 890 | 620 |
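To put the per-1M-token rates in concrete terms, here is a minimal sketch that estimates the dollar cost of a single request for each model. The prices come from the table above; the function name, dictionary, and example token counts are illustrative, not part of either provider's API.

```python
# Estimate per-request cost from the per-1M-token prices in the table above.
# The price table is transcribed from this page; all names are illustrative.

PRICES_PER_1M = {
    # model: (input $/1M tokens, output $/1M tokens)
    "Claude 3.5 Sonnet": (3.00, 15.00),
    "DeepSeek V3": (0.27, 1.10),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed per-1M-token rates."""
    in_rate, out_rate = PRICES_PER_1M[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Example: a 10K-token prompt that produces a 1K-token completion.
for model in PRICES_PER_1M:
    print(f"{model}: ${request_cost(model, 10_000, 1_000):.4f}")
# Claude 3.5 Sonnet: $0.0450  (10K x $3.00/1M + 1K x $15.00/1M)
# DeepSeek V3:       $0.0038  (10K x $0.27/1M + 1K x $1.10/1M)
```

At these list prices, DeepSeek V3 is roughly 11x cheaper on input tokens and roughly 13.6x cheaper on output tokens, while Claude 3.5 Sonnet offers the larger context window (200K vs. 128K tokens).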