DeepSeek-V3.2 is a large language model designed to combine high computational efficiency with strong reasoning and agentic tool-use performance. It introduces DeepSeek Sparse Attention (DSA), a fine-grained sparse attention mechanism that reduces training and inference cost while preserving quality in long-context scenarios. A scalable reinforcement learning post-training framework further improves reasoning, with reported performance in the GPT-5 class, and the model has demonstrated gold-medal results on the 2025 IMO and IOI. V3.2 also uses a large-scale agentic task synthesis pipeline to better carry reasoning into tool-use settings, improving instruction compliance and generalization in interactive environments.
Users can control the model's reasoning behaviour with the reasoning `enabled` boolean. Learn more in our docs.
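As a minimal sketch of how the reasoning toggle might be set, here is a request body in the common OpenAI-compatible chat-completions shape. The model slug and the exact placement of the `reasoning` field are assumptions for illustration; consult the docs for the authoritative schema.

```python
import json

def build_request(prompt: str, reasoning_enabled: bool) -> dict:
    """Build a chat-completions request body with the reasoning toggle set.

    The `reasoning` object and the model slug below are illustrative
    assumptions, not confirmed API details.
    """
    return {
        "model": "deepseek/deepseek-v3.2",  # hypothetical model slug
        "messages": [{"role": "user", "content": prompt}],
        "reasoning": {"enabled": reasoning_enabled},
    }

payload = build_request("Prove that sqrt(2) is irrational.", reasoning_enabled=True)
print(json.dumps(payload, indent=2))
```

Setting `"enabled": False` would request a direct answer without an internal reasoning pass.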
| Usage | Pricing (USD) |
|---|---|
| Prompt | $0.00000028 / token |
| Completion | $0.0000004 / token |
| Request | FREE |
| Image | FREE |
| Web Search | FREE |
| Internal Reasoning | FREE |
| Input Cache Read | FREE |
| Input Cache Write | FREE |
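Since only prompt and completion tokens are billed, estimating the cost of a call is a simple per-token calculation. A small sketch using the prices from the table above (the helper name is our own):

```python
# Per-token prices taken from the pricing table above (USD).
PROMPT_PRICE = 0.00000028
COMPLETION_PRICE = 0.0000004

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the USD cost of a single request; all other usage is free."""
    return prompt_tokens * PROMPT_PRICE + completion_tokens * COMPLETION_PRICE

# e.g. a long-context call: 100k prompt tokens, 20k completion tokens
print(f"${estimate_cost(100_000, 20_000):.4f}")  # prints "$0.0360"
```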