Qwen: Qwen3 30B A3B Thinking 2507
Server-rendered model summary page for indexing/share previews. Use the interactive explorer for full filtering and comparison.
Identifiers & provenance
- Primary ID: qwen/qwen3-30b-a3b-thinking-2507
- OpenRouter ID: qwen/qwen3-30b-a3b-thinking-2507
- Canonical slug: qwen/qwen3-30b-a3b-thinking-2507
Source semantics
- Arena rank is a human-preference leaderboard signal, not a universal truth metric.
- OpenRouter usage/popularity reflects adoption/traffic, not benchmark quality.
- Pricing fields may differ by provider and can include extra modes beyond prompt/completion.
Read more on Methodology & data sources.
Description
Qwen3-30B-A3B-Thinking-2507 is a 30B-parameter Mixture-of-Experts reasoning model (roughly 3B parameters active per token, the “A3B” in the name) optimized for complex tasks requiring extended multi-step thinking. The model is designed specifically for “thinking mode,” where internal reasoning traces are separated from the final answer.

Compared to earlier Qwen3-30B releases, this version improves performance across logical reasoning, mathematics, science, coding, and multilingual benchmarks. It also demonstrates stronger instruction following, tool use, and alignment with human preferences. With higher reasoning efficiency and extended output budgets, it is best suited for advanced research, competitive problem solving, and agentic applications requiring structured long-context reasoning.
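For reference, a minimal sketch of calling this model in thinking mode through OpenRouter's OpenAI-compatible chat completions endpoint. The endpoint URL, the model ID, and the `include_reasoning`/`max_tokens`/`temperature` parameters match the snapshot below; the prompt, sampling values, and the `reasoning` field on the returned message are illustrative assumptions:

```python
# Minimal sketch: a "thinking mode" request via OpenRouter's
# OpenAI-compatible chat completions endpoint. Assumes the standard
# `requests` library and an OPENROUTER_API_KEY environment variable.
import os
import requests

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "qwen/qwen3-30b-a3b-thinking-2507",
        "messages": [
            {"role": "user", "content": "Prove that the sum of two odd integers is even."}
        ],
        # `include_reasoning` appears in supported_parameters below; it asks
        # the gateway to return the trace alongside the final answer.
        "include_reasoning": True,
        "max_tokens": 2048,       # illustrative values, not defaults
        "temperature": 0.6,
    },
    timeout=120,
)
resp.raise_for_status()
message = resp.json()["choices"][0]["message"]
print("answer:", message["content"])
# The separated reasoning trace; the exact field name follows OpenRouter's
# convention and may vary by provider (assumption).
print("reasoning:", message.get("reasoning"))
```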
Raw fields snapshot
{
"id": "qwen/qwen3-30b-a3b-thinking-2507",
"name": "Qwen: Qwen3 30B A3B Thinking 2507",
"description": "Qwen3-30B-A3B-Thinking-2507 is a 30B parameter Mixture-of-Experts reasoning model optimized for complex tasks requiring extended multi-step thinking. The model is designed specifically for “thinking mode,” where internal reasoning traces are separated from final answers.\n\nCompared to earlier Qwen3-30B releases, this version improves performance across logical reasoning, mathematics, science, coding, and multilingual benchmarks. It also demonstrates stronger instruction following, tool use, and alignment with human preferences. With higher reasoning efficiency and extended output budgets, it is best suited for advanced research, competitive problem solving, and agentic applications requiring structured long-context reasoning.",
"created": 1756399192,
"canonical_slug": "qwen/qwen3-30b-a3b-thinking-2507",
"hugging_face_id": "Qwen/Qwen3-30B-A3B-Thinking-2507",
"source_type": "openrouter_only",
"context_length": 32768,
"max_completion_tokens": null,
"is_moderated": false,
"architecture": {
"modality": "text->text",
"input_modalities": [
"text"
],
"output_modalities": [
"text"
],
"tokenizer": "Qwen3",
"instruct_type": null
},
"input_modalities": [
"text"
],
"output_modalities": [
"text"
],
"modality": "text->text",
"tokenizer": "Qwen3",
"instruct_type": null,
"supported_parameters": [
"frequency_penalty",
"include_reasoning",
"max_tokens",
"presence_penalty",
"reasoning",
"repetition_penalty",
"response_format",
"seed",
"structured_outputs",
"temperature",
"tool_choice",
"tools",
"top_k",
"top_p"
],
"default_parameters": {},
"per_request_limits": null,
"top_provider": {
"context_length": 32768,
"max_completion_tokens": null,
"is_moderated": false
},
"pricing": {
"prompt": "0.000000051",
"completion": "0.00000034"
},
"PPM": {
"prompt": 0.051,
"completion": 0.34
},
"openrouter_raw": {
"id": "qwen/qwen3-30b-a3b-thinking-2507",
"canonical_slug": "qwen/qwen3-30b-a3b-thinking-2507",
"hugging_face_id": "Qwen/Qwen3-30B-A3B-Thinking-2507",
"name": "Qwen: Qwen3 30B A3B Thinking 2507",
"created": 1756399192,
"description": "Qwen3-30B-A3B-Thinking-2507 is a 30B parameter Mixture-of-Experts reasoning model optimized for complex tasks requiring extended multi-step thinking. The model is designed specifically for “thinking mode,” where internal reasoning traces are separated from final answers.\n\nCompared to earlier Qwen3-30B releases, this version improves performance across logical reasoning, mathematics, science, coding, and multilingual benchmarks. It also demonstrates stronger instruction following, tool use, and alignment with human preferences. With higher reasoning efficiency and extended output budgets, it is best suited for advanced research, competitive problem solving, and agentic applications requiring structured long-context reasoning.",
"context_length": 32768,
"architecture": {
"modality": "text->text",
"input_modalities": [
"text"
],
"output_modalities": [
"text"
],
"tokenizer": "Qwen3",
"instruct_type": null
},
"pricing": {
"prompt": "0.000000051",
"completion": "0.00000034"
},
"top_provider": {
"context_length": 32768,
"max_completion_tokens": null,
"is_moderated": false
},
"per_request_limits": null,
"supported_parameters": [
"frequency_penalty",
"include_reasoning",
"max_tokens",
"presence_penalty",
"reasoning",
"repetition_penalty",
"response_format",
"seed",
"structured_outputs",
"temperature",
"tool_choice",
"tools",
"top_k",
"top_p"
],
"default_parameters": {},
"expiration_date": null
}
}
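On the pricing fields in the snapshot: `pricing` is USD per token (stringified), and `PPM` is the same rate expressed in USD per million tokens. A small sketch of the conversion and of costing a hypothetical request (the token counts are made up):

```python
# The snapshot's `pricing` block, USD per token as strings.
pricing = {"prompt": "0.000000051", "completion": "0.00000034"}

# Convert USD/token to USD per million tokens -- this reproduces the PPM block.
ppm = {k: round(float(v) * 1_000_000, 6) for k, v in pricing.items()}
print(ppm)  # {'prompt': 0.051, 'completion': 0.34}

# Cost of a hypothetical request: 20,000 prompt + 4,000 completion tokens.
prompt_tokens, completion_tokens = 20_000, 4_000
cost = (prompt_tokens * float(pricing["prompt"])
        + completion_tokens * float(pricing["completion"]))
print(f"${cost:.6f}")  # $0.002380
```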
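The `supported_parameters` list also includes `response_format` and `structured_outputs`. A sketch of requesting schema-constrained JSON, assuming OpenRouter accepts the OpenAI-style `json_schema` response format; the schema, prompt, and field names are hypothetical:

```python
# Sketch of constraining output with `response_format`, one of the
# supported_parameters in the snapshot. The json_schema shape follows
# OpenRouter's OpenAI-style structured outputs (assumption).
import os
import requests

payload = {
    "model": "qwen/qwen3-30b-a3b-thinking-2507",
    "messages": [{"role": "user", "content": "Extract: 'Ada Lovelace, born 1815.'"}],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "person",            # hypothetical schema name
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "birth_year": {"type": "integer"},
                },
                "required": ["name", "birth_year"],
                "additionalProperties": False,
            },
        },
    },
}
resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json=payload,
    timeout=120,
)
resp.raise_for_status()
# The message content should be a JSON string matching the schema.
print(resp.json()["choices"][0]["message"]["content"])
```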