Qwen: Qwen3 VL 235B A22B Thinking
Server-rendered model summary page for indexing/share previews. Use the interactive explorer for full filtering and comparison.
Identifiers & provenance
- Primary ID: qwen/qwen3-vl-235b-a22b-thinking
- OpenRouter ID: qwen/qwen3-vl-235b-a22b-thinking
- Canonical slug: qwen/qwen3-vl-235b-a22b-thinking
Source semantics
- Arena rank is a human-preference leaderboard signal, not a universal truth metric.
- OpenRouter usage/popularity reflects adoption/traffic, not benchmark quality.
- Pricing fields may differ by provider and can include extra modes beyond prompt/completion.
Read more on Methodology & data sources.
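Since pricing fields arrive as per-token decimal strings (per OpenRouter's convention) while the snapshot's PPM block holds per-million-token numbers, a minimal sketch of the conversion between the two (the helper name `to_ppm` is ours, not part of any API):

```python
from decimal import Decimal

def to_ppm(pricing: dict) -> dict:
    """Convert per-token USD price strings to USD per million tokens."""
    return {k: float(Decimal(v) * 1_000_000) for k, v in pricing.items()}

# For this free listing every field is "0", so every PPM value is 0.0.
pricing = {"prompt": "0", "completion": "0", "request": "0"}
print(to_ppm(pricing))
```

Using `Decimal` avoids the rounding drift that multiplying tiny float prices by a million would introduce.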
Description
Qwen3-VL-235B-A22B Thinking is a multimodal model that unifies strong text generation with visual understanding across images and video. The Thinking model is optimized for multimodal reasoning in STEM and math. The series emphasizes robust perception (recognition of diverse real-world and synthetic categories), spatial understanding (2D/3D grounding), and long-form visual comprehension, with competitive results on public multimodal benchmarks for both perception and reasoning.

Beyond analysis, Qwen3-VL supports agentic interaction and tool use: it can follow complex instructions over multi-image, multi-turn dialogues; align text to video timelines for precise temporal queries; and operate GUI elements for automation tasks. The models also enable visual coding workflows, turning sketches or mockups into code and assisting with UI debugging, while maintaining strong text-only performance comparable to the flagship Qwen3 language models. This makes Qwen3-VL suitable for production scenarios spanning document AI, multilingual OCR, software/UI assistance, spatial/embodied tasks, and research on vision-language agents.
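The text+image->text modality implies the usual multi-part chat message shape. A minimal sketch of assembling such a request payload for this model, using the OpenAI-style schema OpenRouter exposes (the image URL is a placeholder, and nothing is sent over the network here):

```python
import json

def build_request(prompt: str, image_url: str) -> dict:
    """Assemble a chat-completions payload with one text part and one image part."""
    return {
        "model": "qwen/qwen3-vl-235b-a22b-thinking",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
        # Sampling values mirror the listing's defaults (temperature 0.8, top_p 0.95).
        "temperature": 0.8,
        "top_p": 0.95,
        "max_tokens": 32768,  # matches the listed max_completion_tokens
    }

payload = build_request("Describe this diagram.", "https://example.com/diagram.png")
print(json.dumps(payload, indent=2))
```

POSTing this payload to OpenRouter's chat-completions endpoint with an API key is all that remains; the structure above is the model-specific part.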
Raw fields snapshot
{
  "id": "qwen/qwen3-vl-235b-a22b-thinking",
  "name": "Qwen: Qwen3 VL 235B A22B Thinking",
  "description": "Qwen3-VL-235B-A22B Thinking is a multimodal model that unifies strong text generation with visual understanding across images and video. The Thinking model is optimized for multimodal reasoning in STEM and math. The series emphasizes robust perception (recognition of diverse real-world and synthetic categories), spatial understanding (2D/3D grounding), and long-form visual comprehension, with competitive results on public multimodal benchmarks for both perception and reasoning.\n\nBeyond analysis, Qwen3-VL supports agentic interaction and tool use: it can follow complex instructions over multi-image, multi-turn dialogues; align text to video timelines for precise temporal queries; and operate GUI elements for automation tasks. The models also enable visual coding workflows, turning sketches or mockups into code and assisting with UI debugging, while maintaining strong text-only performance comparable to the flagship Qwen3 language models. This makes Qwen3-VL suitable for production scenarios spanning document AI, multilingual OCR, software/UI assistance, spatial/embodied tasks, and research on vision-language agents.",
  "created": 1758668690,
  "canonical_slug": "qwen/qwen3-vl-235b-a22b-thinking",
  "hugging_face_id": "Qwen/Qwen3-VL-235B-A22B-Thinking",
  "source_type": "openrouter_only",
  "context_length": 131072,
  "max_completion_tokens": 32768,
  "is_moderated": false,
  "architecture": {
    "modality": "text+image->text",
    "input_modalities": ["text", "image"],
    "output_modalities": ["text"],
    "tokenizer": "Qwen3",
    "instruct_type": null
  },
  "input_modalities": ["text", "image"],
  "output_modalities": ["text"],
  "modality": "text+image->text",
  "tokenizer": "Qwen3",
  "instruct_type": null,
  "supported_parameters": [
    "frequency_penalty",
    "include_reasoning",
    "max_tokens",
    "presence_penalty",
    "reasoning",
    "repetition_penalty",
    "response_format",
    "seed",
    "stop",
    "structured_outputs",
    "temperature",
    "tool_choice",
    "tools",
    "top_k",
    "top_p"
  ],
  "default_parameters": {
    "temperature": 0.8,
    "top_p": 0.95,
    "frequency_penalty": null
  },
  "per_request_limits": null,
  "top_provider": {
    "context_length": 131072,
    "max_completion_tokens": 32768,
    "is_moderated": false
  },
  "pricing": {
    "prompt": "0",
    "completion": "0",
    "request": "0",
    "image": "0",
    "web_search": "0",
    "internal_reasoning": "0"
  },
  "PPM": {
    "prompt": 0,
    "completion": 0,
    "request": 0,
    "image": 0,
    "web_search": 0,
    "internal_reasoning": 0
  },
  "openrouter_raw": {
    "id": "qwen/qwen3-vl-235b-a22b-thinking",
    "canonical_slug": "qwen/qwen3-vl-235b-a22b-thinking",
    "hugging_face_id": "Qwen/Qwen3-VL-235B-A22B-Thinking",
    "name": "Qwen: Qwen3 VL 235B A22B Thinking",
    "created": 1758668690,
    "description": "Qwen3-VL-235B-A22B Thinking is a multimodal model that unifies strong text generation with visual understanding across images and video. The Thinking model is optimized for multimodal reasoning in STEM and math. The series emphasizes robust perception (recognition of diverse real-world and synthetic categories), spatial understanding (2D/3D grounding), and long-form visual comprehension, with competitive results on public multimodal benchmarks for both perception and reasoning.\n\nBeyond analysis, Qwen3-VL supports agentic interaction and tool use: it can follow complex instructions over multi-image, multi-turn dialogues; align text to video timelines for precise temporal queries; and operate GUI elements for automation tasks. The models also enable visual coding workflows, turning sketches or mockups into code and assisting with UI debugging, while maintaining strong text-only performance comparable to the flagship Qwen3 language models. This makes Qwen3-VL suitable for production scenarios spanning document AI, multilingual OCR, software/UI assistance, spatial/embodied tasks, and research on vision-language agents.",
    "context_length": 131072,
    "architecture": {
      "modality": "text+image->text",
      "input_modalities": ["text", "image"],
      "output_modalities": ["text"],
      "tokenizer": "Qwen3",
      "instruct_type": null
    },
    "pricing": {
      "prompt": "0",
      "completion": "0",
      "request": "0",
      "image": "0",
      "web_search": "0",
      "internal_reasoning": "0"
    },
    "top_provider": {
      "context_length": 131072,
      "max_completion_tokens": 32768,
      "is_moderated": false
    },
    "per_request_limits": null,
    "supported_parameters": [
      "frequency_penalty",
      "include_reasoning",
      "max_tokens",
      "presence_penalty",
      "reasoning",
      "repetition_penalty",
      "response_format",
      "seed",
      "stop",
      "structured_outputs",
      "temperature",
      "tool_choice",
      "tools",
      "top_k",
      "top_p"
    ],
    "default_parameters": {
      "temperature": 0.8,
      "top_p": 0.95,
      "frequency_penalty": null
    },
    "expiration_date": null
  }
}
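The snapshot's limits and supported_parameters list can be used to validate a request before sending it. A short sketch under those fields (the function name `check_request` and its rules are illustrative, not part of any SDK):

```python
snapshot = {
    "context_length": 131072,
    "max_completion_tokens": 32768,
    "supported_parameters": [
        "frequency_penalty", "include_reasoning", "max_tokens",
        "presence_penalty", "reasoning", "repetition_penalty",
        "response_format", "seed", "stop", "structured_outputs",
        "temperature", "tool_choice", "tools", "top_k", "top_p",
    ],
}

def check_request(params: dict, prompt_tokens: int, snap: dict) -> list:
    """Return a list of problems; an empty list means the request fits the listing."""
    problems = [f"unsupported parameter: {k}"
                for k in params if k not in snap["supported_parameters"]]
    # Completion budget is capped both by remaining context and the hard output limit.
    budget = min(snap["context_length"] - prompt_tokens,
                 snap["max_completion_tokens"])
    if params.get("max_tokens", 0) > budget:
        problems.append("max_tokens exceeds the available completion budget")
    return problems

# A supported, in-budget request produces no problems.
print(check_request({"temperature": 0.8, "top_p": 0.95}, 1000, snapshot))
```

Here `min_p`, for example, would be flagged as unsupported, and a `max_tokens` above 32768 would be rejected even with an empty prompt.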