Qwen: Qwen3 VL 8B Thinking

Server-rendered model summary page for indexing/share previews. Use the interactive explorer for full filtering and comparison.

Match confidence: Unmatched
Source type: openrouter_only
Context window
131.1K
Arena overall rank
Input price
$0.117 / 1M
Output price
$1.365 / 1M

Identifiers & provenance

Primary ID
qwen/qwen3-vl-8b-thinking
OpenRouter ID
qwen/qwen3-vl-8b-thinking
Canonical slug
qwen/qwen3-vl-8b-thinking

Source semantics

  • Arena rank is a human-preference leaderboard signal, not a universal truth metric.
  • OpenRouter usage/popularity reflects adoption/traffic, not benchmark quality.
  • Pricing fields may differ by provider and can include extra modes beyond prompt/completion.

Read more on Methodology & data sources.
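OpenRouter reports prices as per-token decimal strings (see the `pricing` object in the snapshot below), while this page displays dollars per million tokens (the `PPM` fields). A minimal sketch of that conversion, using `Decimal` to avoid float noise on the tiny per-token values:

```python
from decimal import Decimal

def per_million(per_token: str) -> Decimal:
    """Convert an OpenRouter per-token price string to USD per 1M tokens."""
    return Decimal(per_token) * 1_000_000

# Per-token strings taken from the raw snapshot below
prompt_ppm = per_million("0.000000117")      # -> 0.117 USD / 1M prompt tokens
completion_ppm = per_million("0.000001365")  # -> 1.365 USD / 1M completion tokens
```

Note that a display that rounds to three decimal places on a *per-1K* basis would show these prices as $0.000, which is why per-million is the more readable unit here.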

Description

Qwen3-VL-8B-Thinking is the reasoning-optimized variant of the Qwen3-VL-8B multimodal model, designed for advanced visual and textual reasoning across complex scenes, documents, and temporal sequences. It integrates enhanced multimodal alignment and long-context processing (native 256K, expandable to 1M tokens) for tasks such as scientific visual analysis, causal inference, and mathematical reasoning over image or video inputs.

Compared to the Instruct edition, the Thinking version introduces deeper visual-language fusion and deliberate reasoning pathways that improve performance on long-chain logic tasks, STEM problem-solving, and multi-step video understanding. It achieves stronger temporal grounding via Interleaved-MRoPE and timestamp-aware embeddings, while maintaining robust OCR, multilingual comprehension, and text generation on par with large text-only LLMs.

Raw fields snapshot

{
  "id": "qwen/qwen3-vl-8b-thinking",
  "name": "Qwen: Qwen3 VL 8B Thinking",
  "description": "Qwen3-VL-8B-Thinking is the reasoning-optimized variant of the Qwen3-VL-8B multimodal model, designed for advanced visual and textual reasoning across complex scenes, documents, and temporal sequences. It integrates enhanced multimodal alignment and long-context processing (native 256K, expandable to 1M tokens) for tasks such as scientific visual analysis, causal inference, and mathematical reasoning over image or video inputs.\n\nCompared to the Instruct edition, the Thinking version introduces deeper visual-language fusion and deliberate reasoning pathways that improve performance on long-chain logic tasks, STEM problem-solving, and multi-step video understanding. It achieves stronger temporal grounding via Interleaved-MRoPE and timestamp-aware embeddings, while maintaining robust OCR, multilingual comprehension, and text generation on par with large text-only LLMs.",
  "created": 1760463746,
  "canonical_slug": "qwen/qwen3-vl-8b-thinking",
  "hugging_face_id": "Qwen/Qwen3-VL-8B-Thinking",
  "source_type": "openrouter_only",
  "context_length": 131072,
  "max_completion_tokens": 32768,
  "is_moderated": false,
  "architecture": {
    "modality": "text+image->text",
    "input_modalities": [
      "image",
      "text"
    ],
    "output_modalities": [
      "text"
    ],
    "tokenizer": "Qwen3",
    "instruct_type": null
  },
  "input_modalities": [
    "image",
    "text"
  ],
  "output_modalities": [
    "text"
  ],
  "modality": "text+image->text",
  "tokenizer": "Qwen3",
  "instruct_type": null,
  "supported_parameters": [
    "include_reasoning",
    "max_tokens",
    "presence_penalty",
    "reasoning",
    "response_format",
    "seed",
    "structured_outputs",
    "temperature",
    "tool_choice",
    "tools",
    "top_p"
  ],
  "default_parameters": {
    "temperature": 1,
    "top_p": 0.95
  },
  "per_request_limits": null,
  "top_provider": {
    "context_length": 131072,
    "max_completion_tokens": 32768,
    "is_moderated": false
  },
  "pricing": {
    "prompt": "0.000000117",
    "completion": "0.000001365"
  },
  "PPM": {
    "prompt": 0.117,
    "completion": 1.365
  },
  "openrouter_raw": {
    "id": "qwen/qwen3-vl-8b-thinking",
    "canonical_slug": "qwen/qwen3-vl-8b-thinking",
    "hugging_face_id": "Qwen/Qwen3-VL-8B-Thinking",
    "name": "Qwen: Qwen3 VL 8B Thinking",
    "created": 1760463746,
    "description": "Qwen3-VL-8B-Thinking is the reasoning-optimized variant of the Qwen3-VL-8B multimodal model, designed for advanced visual and textual reasoning across complex scenes, documents, and temporal sequences. It integrates enhanced multimodal alignment and long-context processing (native 256K, expandable to 1M tokens) for tasks such as scientific visual analysis, causal inference, and mathematical reasoning over image or video inputs.\n\nCompared to the Instruct edition, the Thinking version introduces deeper visual-language fusion and deliberate reasoning pathways that improve performance on long-chain logic tasks, STEM problem-solving, and multi-step video understanding. It achieves stronger temporal grounding via Interleaved-MRoPE and timestamp-aware embeddings, while maintaining robust OCR, multilingual comprehension, and text generation on par with large text-only LLMs.",
    "context_length": 131072,
    "architecture": {
      "modality": "text+image->text",
      "input_modalities": [
        "image",
        "text"
      ],
      "output_modalities": [
        "text"
      ],
      "tokenizer": "Qwen3",
      "instruct_type": null
    },
    "pricing": {
      "prompt": "0.000000117",
      "completion": "0.000001365"
    },
    "top_provider": {
      "context_length": 131072,
      "max_completion_tokens": 32768,
      "is_moderated": false
    },
    "per_request_limits": null,
    "supported_parameters": [
      "include_reasoning",
      "max_tokens",
      "presence_penalty",
      "reasoning",
      "response_format",
      "seed",
      "structured_outputs",
      "temperature",
      "tool_choice",
      "tools",
      "top_p"
    ],
    "default_parameters": {
      "temperature": 1,
      "top_p": 0.95
    },
    "expiration_date": null
  }
}
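The summary fields at the top of this page can be derived mechanically from the snapshot. A sketch of that derivation, assuming the snapshot is available as a JSON string (the trimmed literal and the helper name `human_context` are illustrative, not part of the site's actual code):

```python
import json

# Trimmed copy of the raw fields snapshot above
snapshot = json.loads("""{
  "context_length": 131072,
  "pricing": {"prompt": "0.000000117", "completion": "0.000001365"}
}""")

def human_context(tokens: int) -> str:
    """Render a context length the way the header does, e.g. 131072 -> '131.1K'."""
    return f"{tokens / 1000:.1f}K"

# Per-token price strings -> dollars per million tokens (the PPM fields)
ppm = {k: float(v) * 1_000_000 for k, v in snapshot["pricing"].items()}

context_display = human_context(snapshot["context_length"])  # '131.1K'
```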