
Qwen: Qwen3 VL 8B Instruct

Server-rendered model summary page for indexing/share previews. Use the interactive explorer for full filtering and comparison.

Match confidence: Unmatched
Source type: openrouter_only
Context window
131.1K
Arena overall rank
Input price
$0.08 / 1M
Output price
$0.50 / 1M

Identifiers & provenance

Primary ID
qwen/qwen3-vl-8b-instruct
OpenRouter ID
qwen/qwen3-vl-8b-instruct
Canonical slug
qwen/qwen3-vl-8b-instruct

Source semantics

  • Arena rank is a human-preference leaderboard signal, not a universal truth metric.
  • OpenRouter usage/popularity reflects adoption/traffic, not benchmark quality.
  • Pricing fields may differ by provider and can include extra modes beyond prompt/completion.

Read more on Methodology & data sources.

Description

Qwen3-VL-8B-Instruct is a multimodal vision-language model from the Qwen3-VL series, built for high-fidelity understanding and reasoning across text, images, and video. It features improved multimodal fusion with Interleaved-MRoPE for long-horizon temporal reasoning, DeepStack for fine-grained visual-text alignment, and text-timestamp alignment for precise event localization.

The model supports a native 256K-token context window, extensible to 1M tokens, and handles both static and dynamic media inputs for tasks like document parsing, visual question answering, spatial reasoning, and GUI control. It achieves text understanding comparable to leading LLMs while expanding OCR coverage to 32 languages and enhancing robustness under varied visual conditions.
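Since the model accepts text-plus-image input (per the `modality` field in the snapshot below), a minimal sketch of an OpenAI-compatible chat-completions request body for it might look like the following. The image URL is a placeholder, and the sampling values simply mirror the snapshot's `default_parameters`; this is an illustration, not a definitive client.

```python
import json

# Sketch of a chat-completions payload for this model. The image URL is a
# hypothetical placeholder; temperature/top_p mirror the default_parameters
# from the raw snapshot (0.7 and 0.8).
payload = {
    "model": "qwen/qwen3-vl-8b-instruct",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/chart.png"},
                },
            ],
        }
    ],
    "temperature": 0.7,
    "top_p": 0.8,
    # Must stay within the provider's max_completion_tokens (32768).
    "max_tokens": 512,
}

body = json.dumps(payload)
```

In practice the serialized body would be POSTed to the provider's chat-completions endpoint with an API key in the `Authorization` header; only the payload shape is sketched here.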

Raw fields snapshot

{
  "id": "qwen/qwen3-vl-8b-instruct",
  "name": "Qwen: Qwen3 VL 8B Instruct",
  "description": "Qwen3-VL-8B-Instruct is a multimodal vision-language model from the Qwen3-VL series, built for high-fidelity understanding and reasoning across text, images, and video. It features improved multimodal fusion with Interleaved-MRoPE for long-horizon temporal reasoning, DeepStack for fine-grained visual-text alignment, and text-timestamp alignment for precise event localization.\n\nThe model supports a native 256K-token context window, extensible to 1M tokens, and handles both static and dynamic media inputs for tasks like document parsing, visual question answering, spatial reasoning, and GUI control. It achieves text understanding comparable to leading LLMs while expanding OCR coverage to 32 languages and enhancing robustness under varied visual conditions.",
  "created": 1760463308,
  "canonical_slug": "qwen/qwen3-vl-8b-instruct",
  "hugging_face_id": "Qwen/Qwen3-VL-8B-Instruct",
  "source_type": "openrouter_only",
  "context_length": 131072,
  "max_completion_tokens": 32768,
  "is_moderated": false,
  "architecture": {
    "modality": "text+image->text",
    "input_modalities": [
      "image",
      "text"
    ],
    "output_modalities": [
      "text"
    ],
    "tokenizer": "Qwen3",
    "instruct_type": null
  },
  "input_modalities": [
    "image",
    "text"
  ],
  "output_modalities": [
    "text"
  ],
  "modality": "text+image->text",
  "tokenizer": "Qwen3",
  "instruct_type": null,
  "supported_parameters": [
    "frequency_penalty",
    "logit_bias",
    "max_tokens",
    "min_p",
    "presence_penalty",
    "repetition_penalty",
    "response_format",
    "seed",
    "stop",
    "structured_outputs",
    "temperature",
    "tool_choice",
    "tools",
    "top_k",
    "top_p"
  ],
  "default_parameters": {
    "temperature": 0.7,
    "top_p": 0.8,
    "frequency_penalty": null
  },
  "per_request_limits": null,
  "top_provider": {
    "context_length": 131072,
    "max_completion_tokens": 32768,
    "is_moderated": false
  },
  "pricing": {
    "prompt": "0.00000008",
    "completion": "0.0000005"
  },
  "PPM": {
    "prompt": 0.08,
    "completion": 0.5
  },
  "openrouter_raw": {
    "id": "qwen/qwen3-vl-8b-instruct",
    "canonical_slug": "qwen/qwen3-vl-8b-instruct",
    "hugging_face_id": "Qwen/Qwen3-VL-8B-Instruct",
    "name": "Qwen: Qwen3 VL 8B Instruct",
    "created": 1760463308,
    "description": "Qwen3-VL-8B-Instruct is a multimodal vision-language model from the Qwen3-VL series, built for high-fidelity understanding and reasoning across text, images, and video. It features improved multimodal fusion with Interleaved-MRoPE for long-horizon temporal reasoning, DeepStack for fine-grained visual-text alignment, and text-timestamp alignment for precise event localization.\n\nThe model supports a native 256K-token context window, extensible to 1M tokens, and handles both static and dynamic media inputs for tasks like document parsing, visual question answering, spatial reasoning, and GUI control. It achieves text understanding comparable to leading LLMs while expanding OCR coverage to 32 languages and enhancing robustness under varied visual conditions.",
    "context_length": 131072,
    "architecture": {
      "modality": "text+image->text",
      "input_modalities": [
        "image",
        "text"
      ],
      "output_modalities": [
        "text"
      ],
      "tokenizer": "Qwen3",
      "instruct_type": null
    },
    "pricing": {
      "prompt": "0.00000008",
      "completion": "0.0000005"
    },
    "top_provider": {
      "context_length": 131072,
      "max_completion_tokens": 32768,
      "is_moderated": false
    },
    "per_request_limits": null,
    "supported_parameters": [
      "frequency_penalty",
      "logit_bias",
      "max_tokens",
      "min_p",
      "presence_penalty",
      "repetition_penalty",
      "response_format",
      "seed",
      "stop",
      "structured_outputs",
      "temperature",
      "tool_choice",
      "tools",
      "top_k",
      "top_p"
    ],
    "default_parameters": {
      "temperature": 0.7,
      "top_p": 0.8,
      "frequency_penalty": null
    },
    "expiration_date": null
  }
}
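The `PPM` block in the snapshot is just the per-token `pricing` strings scaled to dollars per million tokens. A quick check of that conversion, using `Decimal` to avoid floating-point noise (field names taken directly from the snapshot):

```python
from decimal import Decimal

# Per-token pricing strings, as they appear in the "pricing" field above.
pricing = {"prompt": "0.00000008", "completion": "0.0000005"}

# Scale to dollars per 1M tokens; this reproduces the "PPM" field
# (prompt: 0.08, completion: 0.5).
ppm = {key: Decimal(value) * 1_000_000 for key, value in pricing.items()}
```

Parsing the strings with `Decimal` rather than `float` keeps the scaled values exact, which matters when comparing tiny per-token rates across providers.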