Xiaomi: MiMo-V2-Flash
Server-rendered model summary page for indexing/share previews. Use the interactive explorer for full filtering and comparison.
Identifiers & provenance
- Primary ID
- xiaomi/mimo-v2-flash
- OpenRouter ID
- xiaomi/mimo-v2-flash
- Canonical slug
- xiaomi/mimo-v2-flash-20251210
Source semantics
- Arena rank is a human-preference leaderboard signal, not a universal truth metric.
- OpenRouter usage/popularity reflects adoption/traffic, not benchmark quality.
- Pricing fields may differ by provider and can include extra modes beyond prompt/completion.
Read more on Methodology & data sources.
Description
MiMo-V2-Flash is an open-source foundation language model developed by Xiaomi. It is a Mixture-of-Experts model with 309B total parameters and 15B active parameters, adopting a hybrid attention architecture. MiMo-V2-Flash supports a hybrid-thinking toggle and a 256K context window, and excels at reasoning, coding, and agent scenarios. On SWE-bench Verified and SWE-bench Multilingual it ranks #1 among open-source models globally, delivering performance comparable to Claude Sonnet 4.5 at only about 3.5% of the cost. Users can control the reasoning behaviour with the `reasoning.enabled` boolean. [Learn more in our docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#enable-reasoning-with-default-config).
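The reasoning toggle mentioned above can be exercised through OpenRouter's chat completions endpoint. A minimal sketch, assuming the `reasoning.enabled` field documented in the linked OpenRouter docs; the prompt text and the `send` helper are illustrative, not part of this page's data:

```python
import json
import os
import urllib.request


def build_request(prompt: str, enable_reasoning: bool) -> dict:
    """Build a chat completions payload for MiMo-V2-Flash."""
    return {
        "model": "xiaomi/mimo-v2-flash",
        "messages": [{"role": "user", "content": prompt}],
        # Hybrid-thinking toggle: True requests reasoning tokens, False skips them.
        "reasoning": {"enabled": enable_reasoning},
    }


def send(payload: dict) -> bytes:
    """POST the payload to OpenRouter. Requires OPENROUTER_API_KEY; not run here."""
    req = urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()


payload = build_request("Summarize SWE-bench Verified in one sentence.", True)
```

Flipping `enable_reasoning` to `False` keeps the same request shape but asks the model to answer without emitting reasoning tokens.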
Raw fields snapshot
{
  "id": "xiaomi/mimo-v2-flash",
  "canonical_slug": "xiaomi/mimo-v2-flash-20251210",
  "name": "Xiaomi: MiMo-V2-Flash",
  "display_name": "Xiaomi: MiMo-V2-Flash",
  "provider": "xiaomi",
  "description": "MiMo-V2-Flash is an open-source foundation language model developed by Xiaomi. It is a Mixture-of-Experts model with 309B total parameters and 15B active parameters, adopting hybrid attention architecture. MiMo-V2-Flash supports a hybrid-thinking toggle and a 256K context window, and excels at reasoning, coding, and agent scenarios. On SWE-bench Verified and SWE-bench Multilingual, MiMo-V2-Flash ranks as the top #1 open-source model globally, delivering performance comparable to Claude Sonnet 4.5 while costing only about 3.5% as much.\n\nUsers can control the reasoning behaviour with the `reasoning` `enabled` boolean. [Learn more in our docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#enable-reasoning-with-default-config).",
  "context_length": null,
  "source_type": "model_only",
  "best_rank": null,
  "pricing": {
    "prompt": null,
    "completion": null
  },
  "pricing_summary": {},
  "capabilities": {
    "modalities": [
      "text"
    ],
    "context_length": null,
    "architecture": {
      "modality": "text->text",
      "input_modalities": [
        "text"
      ],
      "output_modalities": [
        "text"
      ],
      "tokenizer": "Other",
      "instruct_type": null
    }
  },
  "__detail_source": "model_snapshot",
  "__raw_snapshot": {
    "model": {
      "id": "xiaomi/mimo-v2-flash",
      "slug": "xiaomi/mimo-v2-flash-20251210",
      "display_name": "Xiaomi: MiMo-V2-Flash",
      "provider": "xiaomi",
      "description": "MiMo-V2-Flash is an open-source foundation language model developed by Xiaomi. It is a Mixture-of-Experts model with 309B total parameters and 15B active parameters, adopting hybrid attention architecture. MiMo-V2-Flash supports a hybrid-thinking toggle and a 256K context window, and excels at reasoning, coding, and agent scenarios. On SWE-bench Verified and SWE-bench Multilingual, MiMo-V2-Flash ranks as the top #1 open-source model globally, delivering performance comparable to Claude Sonnet 4.5 while costing only about 3.5% as much.\n\nUsers can control the reasoning behaviour with the `reasoning` `enabled` boolean. [Learn more in our docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#enable-reasoning-with-default-config).",
      "context_length": null,
      "modalities": [
        "text"
      ],
      "tags": [],
      "source_type": "model_only",
      "updated_at": "2026-03-01T02:42:39.931293+00:00",
      "source": "model_only"
    },
    "overall_score": null,
    "best_rank": null,
    "ranks_by_category": {},
    "scores_by_category": {},
    "pricing_summary": {},
    "capabilities": {
      "modalities": [
        "text"
      ],
      "context_length": null,
      "architecture": {
        "modality": "text->text",
        "input_modalities": [
          "text"
        ],
        "output_modalities": [
          "text"
        ],
        "tokenizer": "Other",
        "instruct_type": null
      }
    }
  }
}
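The snapshot above can be consumed programmatically. A minimal sketch, assuming the field layout shown on this page; the `snapshot` literal here is a trimmed, hypothetical subset of the full raw fields, and nullable fields (`context_length`, `best_rank`, pricing) need explicit defaults:

```python
import json

# Trimmed, illustrative subset of the raw fields snapshot above.
snapshot = json.loads("""
{
  "id": "xiaomi/mimo-v2-flash",
  "context_length": null,
  "best_rank": null,
  "capabilities": {
    "modalities": ["text"],
    "architecture": {"modality": "text->text", "tokenizer": "Other"}
  }
}
""")

# JSON null becomes Python None, so guard every nullable field.
context = snapshot.get("context_length") or "unknown"
arch = snapshot.get("capabilities", {}).get("architecture", {})
modality = arch.get("modality")

print(snapshot["id"], context, modality)
```

The same guards apply to `pricing.prompt` and `pricing.completion`, which are also `null` in this snapshot because no provider pricing was recorded.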