
Qwen: Qwen3 Coder 30B A3B Instruct

Server-rendered model summary page for indexing/share previews. Use the interactive explorer for full filtering and comparison.

Match confidence: Unmatched
Source type: openrouter_only
Context window: 160K
Arena overall rank: —
Input price: $0.07 / 1M
Output price: $0.27 / 1M

Identifiers & provenance

Primary ID: qwen/qwen3-coder-30b-a3b-instruct
OpenRouter ID: qwen/qwen3-coder-30b-a3b-instruct
Canonical slug: qwen/qwen3-coder-30b-a3b-instruct

Source semantics

  • Arena rank is a human-preference leaderboard signal, not a universal truth metric.
  • OpenRouter usage/popularity reflects adoption/traffic, not benchmark quality.
  • Pricing fields may differ by provider and can include extra modes beyond prompt/completion.

Read more on Methodology & data sources.
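As a minimal sketch of how the per-token pricing strings relate to the per-million (PPM) rates shown on this page, the helpers below convert an OpenRouter-style price string into a per-1M rate and estimate the cost of a single request. The function names (`to_ppm`, `estimate_cost`) and the token counts are illustrative assumptions; only the pricing values come from the snapshot below.

```python
# Hypothetical helpers: convert per-token USD price strings
# (e.g. "0.00000007") into per-1M-token rates and estimate request cost.

def to_ppm(per_token: str) -> float:
    """USD per token -> USD per 1M tokens."""
    return float(per_token) * 1_000_000

def estimate_cost(pricing: dict, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimated USD cost of one request at the listed prompt/completion rates."""
    return (float(pricing["prompt"]) * prompt_tokens
            + float(pricing["completion"]) * completion_tokens)

pricing = {"prompt": "0.00000007", "completion": "0.00000027"}
print(round(to_ppm(pricing["prompt"]), 2))                    # prompt rate per 1M tokens
print(round(estimate_cost(pricing, 10_000, 2_000), 6))        # cost of a 10K-in / 2K-out request
```

Note that because pricing can vary by provider, the actual charge for a routed request may differ from this listing.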

Description

Qwen3-Coder-30B-A3B-Instruct is a 30.5B parameter Mixture-of-Experts (MoE) model with 128 experts (8 active per forward pass), designed for advanced code generation, repository-scale understanding, and agentic tool use. Built on the Qwen3 architecture, it supports a native context length of 256K tokens (extendable to 1M with Yarn) and performs strongly in tasks involving function calls, browser use, and structured code completion. This model is optimized for instruction-following without “thinking mode”, and integrates well with OpenAI-compatible tool-use formats.
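Since the description highlights OpenAI-compatible tool use, here is a sketch of what a chat-completions request body with a tool definition might look like for this model. The `run_tests` tool schema and the user message are hypothetical; only the model ID comes from this page, and the exact endpoint and authentication are left out.

```python
import json

# Sketch of an OpenAI-compatible tool-use request body. The tool
# ("run_tests") is a made-up example schema, not part of the model listing.
payload = {
    "model": "qwen/qwen3-coder-30b-a3b-instruct",
    "messages": [
        {"role": "user", "content": "Fix the failing test in utils.py"}
    ],
    "tools": [{
        "type": "function",
        "function": {
            "name": "run_tests",
            "description": "Run the project's test suite and return the output.",
            "parameters": {
                "type": "object",
                "properties": {"path": {"type": "string"}},
                "required": ["path"],
            },
        },
    }],
    "tool_choice": "auto",   # let the model decide whether to call the tool
}
print(json.dumps(payload)[:60])
```

`tools` and `tool_choice` both appear in the model's `supported_parameters` list below, so a request shaped like this should be accepted as-is.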

Raw fields snapshot

{
  "id": "qwen/qwen3-coder-30b-a3b-instruct",
  "name": "Qwen: Qwen3 Coder 30B A3B Instruct",
  "description": "Qwen3-Coder-30B-A3B-Instruct is a 30.5B parameter Mixture-of-Experts (MoE) model with 128 experts (8 active per forward pass), designed for advanced code generation, repository-scale understanding, and agentic tool use. Built on the Qwen3 architecture, it supports a native context length of 256K tokens (extendable to 1M with Yarn) and performs strongly in tasks involving function calls, browser use, and structured code completion.\n\nThis model is optimized for instruction-following without “thinking mode”, and integrates well with OpenAI-compatible tool-use formats. ",
  "created": 1753972379,
  "canonical_slug": "qwen/qwen3-coder-30b-a3b-instruct",
  "hugging_face_id": "Qwen/Qwen3-Coder-30B-A3B-Instruct",
  "source_type": "openrouter_only",
  "context_length": 160000,
  "max_completion_tokens": 32768,
  "is_moderated": false,
  "architecture": {
    "modality": "text->text",
    "input_modalities": [
      "text"
    ],
    "output_modalities": [
      "text"
    ],
    "tokenizer": "Qwen3",
    "instruct_type": null
  },
  "input_modalities": [
    "text"
  ],
  "output_modalities": [
    "text"
  ],
  "modality": "text->text",
  "tokenizer": "Qwen3",
  "instruct_type": null,
  "supported_parameters": [
    "frequency_penalty",
    "max_tokens",
    "presence_penalty",
    "repetition_penalty",
    "response_format",
    "seed",
    "stop",
    "structured_outputs",
    "temperature",
    "tool_choice",
    "tools",
    "top_k",
    "top_p"
  ],
  "default_parameters": {},
  "per_request_limits": null,
  "top_provider": {
    "context_length": 160000,
    "max_completion_tokens": 32768,
    "is_moderated": false
  },
  "pricing": {
    "prompt": "0.00000007",
    "completion": "0.00000027"
  },
  "PPM": {
    "prompt": 0.07,
    "completion": 0.27
  },
  "openrouter_raw": {
    "id": "qwen/qwen3-coder-30b-a3b-instruct",
    "canonical_slug": "qwen/qwen3-coder-30b-a3b-instruct",
    "hugging_face_id": "Qwen/Qwen3-Coder-30B-A3B-Instruct",
    "name": "Qwen: Qwen3 Coder 30B A3B Instruct",
    "created": 1753972379,
    "description": "Qwen3-Coder-30B-A3B-Instruct is a 30.5B parameter Mixture-of-Experts (MoE) model with 128 experts (8 active per forward pass), designed for advanced code generation, repository-scale understanding, and agentic tool use. Built on the Qwen3 architecture, it supports a native context length of 256K tokens (extendable to 1M with Yarn) and performs strongly in tasks involving function calls, browser use, and structured code completion.\n\nThis model is optimized for instruction-following without “thinking mode”, and integrates well with OpenAI-compatible tool-use formats. ",
    "context_length": 160000,
    "architecture": {
      "modality": "text->text",
      "input_modalities": [
        "text"
      ],
      "output_modalities": [
        "text"
      ],
      "tokenizer": "Qwen3",
      "instruct_type": null
    },
    "pricing": {
      "prompt": "0.00000007",
      "completion": "0.00000027"
    },
    "top_provider": {
      "context_length": 160000,
      "max_completion_tokens": 32768,
      "is_moderated": false
    },
    "per_request_limits": null,
    "supported_parameters": [
      "frequency_penalty",
      "max_tokens",
      "presence_penalty",
      "repetition_penalty",
      "response_format",
      "seed",
      "stop",
      "structured_outputs",
      "temperature",
      "tool_choice",
      "tools",
      "top_k",
      "top_p"
    ],
    "default_parameters": {},
    "expiration_date": null
  }
}