
Qwen: Qwen3 Coder Next

Server-rendered model summary page for indexing/share previews. Use the interactive explorer for full filtering and comparison.

Match confidence: Unmatched
Source type: openrouter_only
Context window
262.1K
Arena overall rank
—
Input price
$0.12 / 1M
Output price
$0.75 / 1M

Identifiers & provenance

Primary ID
qwen/qwen3-coder-next
OpenRouter ID
qwen/qwen3-coder-next
Canonical slug
qwen/qwen3-coder-next-2025-02-03

Source semantics

  • Arena rank is a human-preference leaderboard signal, not a universal truth metric.
  • OpenRouter usage/popularity reflects adoption/traffic, not benchmark quality.
  • Pricing fields may differ by provider and can include extra modes beyond prompt/completion.

Read more on Methodology & data sources.
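The per-token price strings in the raw snapshot convert to the per-million (PPM) dollar figures shown above by multiplying by 1,000,000. A minimal sketch of that conversion, using this model's own pricing values:

```python
# Convert OpenRouter-style per-token price strings to $ per 1M tokens (PPM).
pricing = {
    "prompt": "0.00000012",
    "completion": "0.00000075",
    "input_cache_read": "0.00000006",
}

ppm = {field: round(float(per_token) * 1_000_000, 2)
       for field, per_token in pricing.items()}

print(ppm)  # {'prompt': 0.12, 'completion': 0.75, 'input_cache_read': 0.06}
```

This matches the `PPM` block in the snapshot below; note that providers may expose additional pricing modes (e.g. cache reads) beyond plain prompt/completion.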

Description

Qwen3-Coder-Next is an open-weight causal language model optimized for coding agents and local development workflows. It uses a sparse MoE design with 80B total parameters and only 3B activated per token, delivering performance comparable to models with 10 to 20x higher active compute, which makes it well suited for cost-sensitive, always-on agent deployment.

The model is trained with a strong agentic focus and performs reliably on long-horizon coding tasks, complex tool usage, and recovery from execution failures. With a native 256k context window, it integrates cleanly into real-world CLI and IDE environments and adapts well to common agent scaffolds used by modern coding tools. The model operates exclusively in non-thinking mode and does not emit <think> blocks, simplifying integration for production coding agents.
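Since `tools` and `tool_choice` appear in the model's supported parameters and it runs in non-thinking mode, a typical integration is a plain OpenAI-style chat-completions request. A minimal sketch of building such a payload; the `run_shell` tool is a hypothetical example, while the parameter names and default values come from the snapshot:

```python
# Sketch: build an OpenAI-compatible chat request payload for this model.
# Only the parameter names and defaults come from the model snapshot;
# the tool definition itself is illustrative.
payload = {
    "model": "qwen/qwen3-coder-next",
    "messages": [
        {"role": "user", "content": "Write a function that reverses a string."}
    ],
    "temperature": 1,     # default_parameters from the snapshot
    "top_p": 0.95,        # default_parameters from the snapshot
    "max_tokens": 65536,  # top_provider max_completion_tokens
    "tools": [{
        "type": "function",
        "function": {
            "name": "run_shell",  # hypothetical tool name
            "description": "Execute a shell command and return stdout.",
            "parameters": {
                "type": "object",
                "properties": {"command": {"type": "string"}},
                "required": ["command"],
            },
        },
    }],
    "tool_choice": "auto",
}
```

Because the model never emits <think> blocks, the returned message content can be consumed directly without stripping reasoning markup.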

Raw fields snapshot

{
  "id": "qwen/qwen3-coder-next",
  "name": "Qwen: Qwen3 Coder Next",
  "description": "Qwen3-Coder-Next is an open-weight causal language model optimized for coding agents and local development workflows. It uses a sparse MoE design with 80B total parameters and only 3B activated per token, delivering performance comparable to models with 10 to 20x higher active compute, which makes it well suited for cost-sensitive, always-on agent deployment.\n\nThe model is trained with a strong agentic focus and performs reliably on long-horizon coding tasks, complex tool usage, and recovery from execution failures. With a native 256k context window, it integrates cleanly into real-world CLI and IDE environments and adapts well to common agent scaffolds used by modern coding tools. The model operates exclusively in non-thinking mode and does not emit <think> blocks, simplifying integration for production coding agents.",
  "created": 1770164101,
  "canonical_slug": "qwen/qwen3-coder-next-2025-02-03",
  "hugging_face_id": "Qwen/Qwen3-Coder-Next",
  "source_type": "openrouter_only",
  "context_length": 262144,
  "max_completion_tokens": 65536,
  "is_moderated": false,
  "architecture": {
    "modality": "text->text",
    "input_modalities": [
      "text"
    ],
    "output_modalities": [
      "text"
    ],
    "tokenizer": "Qwen",
    "instruct_type": null
  },
  "input_modalities": [
    "text"
  ],
  "output_modalities": [
    "text"
  ],
  "modality": "text->text",
  "tokenizer": "Qwen",
  "instruct_type": null,
  "supported_parameters": [
    "frequency_penalty",
    "logit_bias",
    "max_tokens",
    "min_p",
    "presence_penalty",
    "repetition_penalty",
    "response_format",
    "seed",
    "stop",
    "structured_outputs",
    "temperature",
    "tool_choice",
    "tools",
    "top_k",
    "top_p"
  ],
  "default_parameters": {
    "temperature": 1,
    "top_p": 0.95,
    "frequency_penalty": null
  },
  "per_request_limits": null,
  "top_provider": {
    "context_length": 262144,
    "max_completion_tokens": 65536,
    "is_moderated": false
  },
  "pricing": {
    "prompt": "0.00000012",
    "completion": "0.00000075",
    "input_cache_read": "0.00000006"
  },
  "PPM": {
    "prompt": 0.12,
    "completion": 0.75,
    "input_cache_read": 0.06
  },
  "openrouter_raw": {
    "id": "qwen/qwen3-coder-next",
    "canonical_slug": "qwen/qwen3-coder-next-2025-02-03",
    "hugging_face_id": "Qwen/Qwen3-Coder-Next",
    "name": "Qwen: Qwen3 Coder Next",
    "created": 1770164101,
    "description": "Qwen3-Coder-Next is an open-weight causal language model optimized for coding agents and local development workflows. It uses a sparse MoE design with 80B total parameters and only 3B activated per token, delivering performance comparable to models with 10 to 20x higher active compute, which makes it well suited for cost-sensitive, always-on agent deployment.\n\nThe model is trained with a strong agentic focus and performs reliably on long-horizon coding tasks, complex tool usage, and recovery from execution failures. With a native 256k context window, it integrates cleanly into real-world CLI and IDE environments and adapts well to common agent scaffolds used by modern coding tools. The model operates exclusively in non-thinking mode and does not emit <think> blocks, simplifying integration for production coding agents.",
    "context_length": 262144,
    "architecture": {
      "modality": "text->text",
      "input_modalities": [
        "text"
      ],
      "output_modalities": [
        "text"
      ],
      "tokenizer": "Qwen",
      "instruct_type": null
    },
    "pricing": {
      "prompt": "0.00000012",
      "completion": "0.00000075",
      "input_cache_read": "0.00000006"
    },
    "top_provider": {
      "context_length": 262144,
      "max_completion_tokens": 65536,
      "is_moderated": false
    },
    "per_request_limits": null,
    "supported_parameters": [
      "frequency_penalty",
      "logit_bias",
      "max_tokens",
      "min_p",
      "presence_penalty",
      "repetition_penalty",
      "response_format",
      "seed",
      "stop",
      "structured_outputs",
      "temperature",
      "tool_choice",
      "tools",
      "top_k",
      "top_p"
    ],
    "default_parameters": {
      "temperature": 1,
      "top_p": 0.95,
      "frequency_penalty": null
    },
    "expiration_date": null
  }
}
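As a worked example of how the snapshot's pricing fields compose, here is a rough per-request cost estimate in which cached input tokens bill at the cheaper `input_cache_read` rate. The token counts are made up for illustration; only the rates come from the `PPM` block:

```python
# Estimate one request's dollar cost from the snapshot's PPM pricing.
# Token counts below are illustrative, not from the page data.
PROMPT_PPM = 0.12      # $ per 1M uncached input tokens
COMPLETION_PPM = 0.75  # $ per 1M output tokens
CACHE_READ_PPM = 0.06  # $ per 1M cached input tokens

def estimate_cost(input_tokens, cached_tokens, output_tokens):
    """Dollar cost for one request; cached tokens bill at the cache-read rate."""
    uncached = input_tokens - cached_tokens
    return (uncached * PROMPT_PPM
            + cached_tokens * CACHE_READ_PPM
            + output_tokens * COMPLETION_PPM) / 1_000_000

# e.g. a 200k-token agent context, 150k of it served from cache, 4k output
cost = estimate_cost(200_000, 150_000, 4_000)
print(f"${cost:.4f}")  # $0.0180
```

At these rates a near-full 262K context costs only a few cents per turn, which is consistent with the description's framing of the model as suited to always-on agent deployment.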