
DeepSeek: DeepSeek V3.2 Exp

Server-rendered model summary page for indexing/share previews. Use the interactive explorer for full filtering and comparison.

Match confidence: Unmatched
Source type: model_only
Context window
Not available
Arena overall rank
Not available
Input price
Not available
Output price
Not available

Identifiers & provenance

Primary ID
deepseek/deepseek-v3.2-exp
OpenRouter ID
deepseek/deepseek-v3.2-exp
Canonical slug
deepseek/deepseek-v3.2-exp

Source semantics

  • Arena rank is a human-preference leaderboard signal, not a universal truth metric.
  • OpenRouter usage/popularity reflects adoption/traffic, not benchmark quality.
  • Pricing fields may differ by provider and can include extra modes beyond prompt/completion.

Read more on Methodology & data sources.

Description

DeepSeek-V3.2-Exp is an experimental large language model released by DeepSeek as an intermediate step between V3.1 and future architectures. It introduces DeepSeek Sparse Attention (DSA), a fine-grained sparse attention mechanism designed to improve training and inference efficiency in long-context scenarios while maintaining output quality. Users can control the reasoning behaviour with the `reasoning` `enabled` boolean. [Learn more in our docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#enable-reasoning-with-default-config).

The model was trained under conditions aligned with V3.1-Terminus to enable direct comparison. Benchmarking shows performance roughly on par with V3.1 across reasoning, coding, and agentic tool-use tasks, with minor tradeoffs and gains depending on the domain. This release focuses on validating architectural optimizations for extended context lengths rather than advancing raw task accuracy, making it primarily a research-oriented model for exploring efficient transformer designs.
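As a minimal sketch of the reasoning toggle described above, the request body below sets the `reasoning` `enabled` boolean on a chat-completions call for this model. The field names follow the OpenRouter docs linked in the description; the prompt text is illustrative, and the actual HTTP call (commented out) would additionally require an API key.

```python
import json

# Build an OpenRouter chat-completions payload for deepseek/deepseek-v3.2-exp
# with the reasoning trace explicitly enabled via `reasoning.enabled`.
payload = {
    "model": "deepseek/deepseek-v3.2-exp",
    "messages": [
        {"role": "user", "content": "Summarize DeepSeek Sparse Attention."}
    ],
    # Set to False to turn the model's reasoning behaviour off.
    "reasoning": {"enabled": True},
}

# Sending it would look roughly like this (requires an OpenRouter API key):
#   import requests
#   requests.post(
#       "https://openrouter.ai/api/v1/chat/completions",
#       headers={"Authorization": f"Bearer {api_key}"},
#       json=payload,
#   )

print(json.dumps(payload, indent=2))
```

Omitting the `reasoning` object entirely leaves the model's default behaviour in place; the boolean only matters when you need to force reasoning on or off.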

Raw fields snapshot

{
  "id": "deepseek/deepseek-v3.2-exp",
  "canonical_slug": "deepseek/deepseek-v3.2-exp",
  "name": "DeepSeek: DeepSeek V3.2 Exp",
  "display_name": "DeepSeek: DeepSeek V3.2 Exp",
  "provider": "deepseek",
  "description": "DeepSeek-V3.2-Exp is an experimental large language model released by DeepSeek as an intermediate step between V3.1 and future architectures. It introduces DeepSeek Sparse Attention (DSA), a fine-grained sparse attention mechanism designed to improve training and inference efficiency in long-context scenarios while maintaining output quality. Users can control the reasoning behaviour with the `reasoning` `enabled` boolean. [Learn more in our docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#enable-reasoning-with-default-config)\n\nThe model was trained under conditions aligned with V3.1-Terminus to enable direct comparison. Benchmarking shows performance roughly on par with V3.1 across reasoning, coding, and agentic tool-use tasks, with minor tradeoffs and gains depending on the domain. This release focuses on validating architectural optimizations for extended context lengths rather than advancing raw task accuracy, making it primarily a research-oriented model for exploring efficient transformer designs.",
  "context_length": null,
  "source_type": "model_only",
  "best_rank": null,
  "pricing": {
    "prompt": null,
    "completion": null
  },
  "pricing_summary": {},
  "capabilities": {
    "modalities": [
      "text"
    ],
    "context_length": null,
    "architecture": {
      "modality": "text->text",
      "input_modalities": [
        "text"
      ],
      "output_modalities": [
        "text"
      ],
      "tokenizer": "DeepSeek",
      "instruct_type": "deepseek-v3.1"
    }
  },
  "__detail_source": "model_snapshot",
  "__raw_snapshot": {
    "model": {
      "id": "deepseek/deepseek-v3.2-exp",
      "slug": "deepseek/deepseek-v3.2-exp",
      "display_name": "DeepSeek: DeepSeek V3.2 Exp",
      "provider": "deepseek",
      "description": "DeepSeek-V3.2-Exp is an experimental large language model released by DeepSeek as an intermediate step between V3.1 and future architectures. It introduces DeepSeek Sparse Attention (DSA), a fine-grained sparse attention mechanism designed to improve training and inference efficiency in long-context scenarios while maintaining output quality. Users can control the reasoning behaviour with the `reasoning` `enabled` boolean. [Learn more in our docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#enable-reasoning-with-default-config)\n\nThe model was trained under conditions aligned with V3.1-Terminus to enable direct comparison. Benchmarking shows performance roughly on par with V3.1 across reasoning, coding, and agentic tool-use tasks, with minor tradeoffs and gains depending on the domain. This release focuses on validating architectural optimizations for extended context lengths rather than advancing raw task accuracy, making it primarily a research-oriented model for exploring efficient transformer designs.",
      "context_length": null,
      "modalities": [
        "text"
      ],
      "tags": [],
      "source_type": "model_only",
      "updated_at": "2026-03-01T02:42:37.600751+00:00",
      "source": "model_only"
    },
    "overall_score": null,
    "best_rank": null,
    "ranks_by_category": {},
    "scores_by_category": {},
    "pricing_summary": {},
    "capabilities": {
      "modalities": [
        "text"
      ],
      "context_length": null,
      "architecture": {
        "modality": "text->text",
        "input_modalities": [
          "text"
        ],
        "output_modalities": [
          "text"
        ],
        "tokenizer": "DeepSeek",
        "instruct_type": "deepseek-v3.1"
      }
    }
  }
}