Side-by-side comparison of DeepSeek V3 (DeepSeek) and Claude 3.5 Sonnet (Anthropic) — benchmarks, pricing, and capabilities.
| | DeepSeek V3 (DeepSeek) | Claude 3.5 Sonnet (Anthropic) |
|---|---|---|
| Category | LLMs | LLMs |
| **Specifications** | | |
| Context Window | 128K tokens | 200K tokens |
| **Pricing (per 1M tokens)** | | |
| Input Cost | $0.27 | $3.00 |
| Output Cost | $1.10 | $15.00 |
| **Performance** | | |
| Overall Score | 90.0 | 91.2 |
| ARC-Challenge | — | 96.2 |
| BigBench Hard | 85.5 | 86.0 |
| Chatbot Arena ELO | — | 1104.0 |
| DROP | 85.0 | 86.5 |
| GSM8K | 93.0 | 78.3 |
| HumanEval | 89.2 | 90.2 |
| MATH | 75.8 | 75.2 |
| MMLU | 87.1 | 88.1 |
| TruthfulQA | 68.0 | 71.0 |
| WinoGrande | — | 82.0 |
| **Community** | | |
| User Rating | ★ 4.5 | ★ 4.6 |
| Reviews | 620 | 890 |
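
To make the pricing rows concrete, the sketch below estimates per-request and batch cost from the listed per-1M-token prices. The workload numbers (2,000 input tokens, 500 output tokens, 10,000 requests) are hypothetical assumptions chosen for illustration, not figures from the comparison above.

```python
# Rough cost comparison using the per-1M-token list prices from the table above.
# Workload sizes below are hypothetical assumptions for illustration only.

PRICES = {
    # model: (input $/1M tokens, output $/1M tokens)
    "DeepSeek V3": (0.27, 1.10),
    "Claude 3.5 Sonnet": (3.00, 15.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for a single request at the listed prices."""
    in_price, out_price = PRICES[model]
    return (input_tokens / 1_000_000) * in_price + (output_tokens / 1_000_000) * out_price

# Hypothetical workload: 10,000 requests, each 2,000 input tokens and 500 output tokens.
for model in PRICES:
    per_request = request_cost(model, 2_000, 500)
    print(f"{model}: ${per_request:.5f} per request, ${per_request * 10_000:,.2f} per 10k requests")
```

At these list prices the gap compounds quickly at scale: the same hypothetical 10k-request batch works out to roughly $10.90 on DeepSeek V3 versus $135.00 on Claude 3.5 Sonnet, before any provider-specific discounts or caching.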