
MiniMax: MiniMax M2.5

Server-rendered model summary page for indexing/share previews. Use the interactive explorer for full filtering and comparison.

Match confidence: Unmatched
Source type: model_only
Context window: 0
Arena overall rank: —
Input price: $0.000 / 1M
Output price: $0.000 / 1M

Identifiers & provenance

Primary ID: minimax/minimax-m2.5
OpenRouter ID: minimax/minimax-m2.5
Canonical slug: minimax/minimax-m2.5-20260211
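The canonical slug above appears to append a snapshot date to the base model ID. A minimal sketch, assuming a `vendor/model-YYYYMMDD` convention (the convention itself is an inference from this one slug, not a documented rule), separating the base slug from the date:

```python
import re

def split_slug(slug):
    """Split 'vendor/model-YYYYMMDD' into (base, date); date may be absent."""
    m = re.fullmatch(r"(.+?)-(\d{8})", slug)
    return (m.group(1), m.group(2)) if m else (slug, None)

print(split_slug("minimax/minimax-m2.5-20260211"))
# ('minimax/minimax-m2.5', '20260211')
```

A slug without a trailing 8-digit run falls through to `(slug, None)`, so undated IDs pass through unchanged.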

Source semantics

  • Arena rank is a human-preference leaderboard signal, not a universal truth metric.
  • OpenRouter usage/popularity reflects adoption/traffic, not benchmark quality.
  • Pricing fields may differ by provider and can include extra modes beyond prompt/completion.

Read more on Methodology & data sources.
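To illustrate the last caveat: a provider pricing record may carry modes beyond prompt/completion, and values may arrive as strings or nulls. The mode names other than `prompt` and `completion` below are hypothetical examples, and this is only a sketch of null-tolerant normalization, not the site's actual pipeline:

```python
def normalize_pricing(raw):
    """Normalize a provider pricing dict to per-token floats.

    Drops null modes; ignores modes outside the known set. The extra mode
    names here are illustrative assumptions, not a documented schema.
    """
    known = ("prompt", "completion", "input_cache_read", "internal_reasoning")
    out = {}
    for mode in known:
        value = raw.get(mode)
        if value is not None:
            out[mode] = float(value)
    return out

print(normalize_pricing({"prompt": "0.0000003", "completion": None,
                         "input_cache_read": "0.00000015"}))
# {'prompt': 3e-07, 'input_cache_read': 1.5e-07}
```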

Description

MiniMax-M2.5 is a SOTA large language model designed for real-world productivity. Trained in a diverse range of complex real-world digital working environments, M2.5 builds on the coding expertise of M2.1 to extend into general office work, reaching fluency in generating and operating Word, Excel, and PowerPoint files, switching context between diverse software environments, and working across different agent and human teams. Scoring 80.2% on SWE-Bench Verified, 51.3% on Multi-SWE-Bench, and 76.3% on BrowseComp, M2.5 is also more token-efficient than previous generations, having been trained to optimize its actions and output through planning.

Raw fields snapshot

{
  "id": "minimax/minimax-m2.5",
  "canonical_slug": "minimax/minimax-m2.5-20260211",
  "name": "MiniMax: MiniMax M2.5",
  "display_name": "MiniMax: MiniMax M2.5",
  "provider": "minimax",
  "description": "MiniMax-M2.5 is a SOTA large language model designed for real-world productivity. Trained in a diverse range of complex real-world digital working environments, M2.5 builds upon the coding expertise of M2.1 to extend into general office work, reaching fluency in generating and operating Word, Excel, and Powerpoint files, context switching between diverse software environments, and working across different agent and human teams. Scoring 80.2% on SWE-Bench Verified, 51.3% on Multi-SWE-Bench, and 76.3% on BrowseComp, M2.5 is also more token efficient than previous generations, having been trained to optimize its actions and output through planning.",
  "context_length": null,
  "source_type": "model_only",
  "best_rank": null,
  "pricing": {
    "prompt": null,
    "completion": null
  },
  "pricing_summary": {},
  "capabilities": {
    "modalities": [
      "text"
    ],
    "context_length": null,
    "architecture": {
      "modality": "text->text",
      "input_modalities": [
        "text"
      ],
      "output_modalities": [
        "text"
      ],
      "tokenizer": "Other",
      "instruct_type": null
    }
  },
  "__detail_source": "model_snapshot",
  "__raw_snapshot": {
    "model": {
      "id": "minimax/minimax-m2.5",
      "slug": "minimax/minimax-m2.5-20260211",
      "display_name": "MiniMax: MiniMax M2.5",
      "provider": "minimax",
      "description": "MiniMax-M2.5 is a SOTA large language model designed for real-world productivity. Trained in a diverse range of complex real-world digital working environments, M2.5 builds upon the coding expertise of M2.1 to extend into general office work, reaching fluency in generating and operating Word, Excel, and Powerpoint files, context switching between diverse software environments, and working across different agent and human teams. Scoring 80.2% on SWE-Bench Verified, 51.3% on Multi-SWE-Bench, and 76.3% on BrowseComp, M2.5 is also more token efficient than previous generations, having been trained to optimize its actions and output through planning.",
      "context_length": null,
      "modalities": [
        "text"
      ],
      "tags": [],
      "source_type": "model_only",
      "updated_at": "2026-03-01T02:42:37.446680+00:00",
      "source": "model_only"
    },
    "overall_score": null,
    "best_rank": null,
    "ranks_by_category": {},
    "scores_by_category": {},
    "pricing_summary": {},
    "capabilities": {
      "modalities": [
        "text"
      ],
      "context_length": null,
      "architecture": {
        "modality": "text->text",
        "input_modalities": [
          "text"
        ],
        "output_modalities": [
          "text"
        ],
        "tokenizer": "Other",
        "instruct_type": null
      }
    }
  }
}
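The snapshot above can be consumed programmatically. In it, `context_length`, `best_rank`, and both pricing modes are null, which explains the placeholder values in the stats block at the top. A minimal sketch of null-safe rendering, using only a hand-copied subset of the fields shown (the formatting helpers are illustrative, not the page's actual code):

```python
# A hand-copied subset of the raw fields snapshot above.
snapshot = {
    "id": "minimax/minimax-m2.5",
    "name": "MiniMax: MiniMax M2.5",
    "context_length": None,
    "best_rank": None,
    "pricing": {"prompt": None, "completion": None},
}

def fmt_price(per_token):
    """Render a per-token USD price as $/1M tokens; a dash when unpriced."""
    if per_token is None:
        return "—"
    return f"${per_token * 1_000_000:,.3f} / 1M"

def fmt_context(n):
    """A null context_length means 'unreported', not zero tokens."""
    return f"{n:,}" if n else "—"

print(fmt_context(snapshot["context_length"]))   # —
print(fmt_price(snapshot["pricing"]["prompt"]))  # —
print(fmt_price(2e-7))                           # $0.200 / 1M
```

Treating null as "unreported" rather than zero avoids the misleading `$0.000 / 1M` and `0`-token displays that a naive renderer produces.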