
Mistral: Mistral Medium 3.1

Server-rendered model summary page for indexing/share previews. Use the interactive explorer for full filtering and comparison.

Match confidence: Unmatched
Source type: openrouter_only
Context window
131.1K
Arena overall rank
Input price
$0.40 / 1M
Output price
$2.00 / 1M

Identifiers & provenance

Primary ID
mistralai/mistral-medium-3.1
OpenRouter ID
mistralai/mistral-medium-3.1
Canonical slug
mistralai/mistral-medium-3.1

Source semantics

  • Arena rank is a human-preference leaderboard signal, not a universal truth metric.
  • OpenRouter usage/popularity reflects adoption/traffic, not benchmark quality.
  • Pricing fields may differ by provider and can include extra modes beyond prompt/completion.

Read more on Methodology & data sources.
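The pricing fields in the raw snapshot below are per-token USD strings; the per-million (PPM) figures shown on this page are derived from them. A minimal sketch of that conversion in Python (the helper name is illustrative, not part of any API; the field names match the snapshot):

```python
# Convert OpenRouter-style per-token USD price strings to USD per 1M tokens.
def price_per_million(per_token: str) -> float:
    """e.g. "0.0000004" USD/token -> 0.4 USD per 1M tokens."""
    # Round to absorb float parsing noise in the tiny per-token values.
    return round(float(per_token) * 1_000_000, 6)

pricing = {"prompt": "0.0000004", "completion": "0.000002"}
ppm = {k: price_per_million(v) for k, v in pricing.items()}
print(ppm)  # {'prompt': 0.4, 'completion': 2.0}
```

These derived values match the `PPM` object in the snapshot.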

Description

Mistral Medium 3.1 is an updated version of Mistral Medium 3, which is a high-performance enterprise-grade language model designed to deliver frontier-level capabilities at significantly reduced operational cost. It balances state-of-the-art reasoning and multimodal performance with 8× lower cost compared to traditional large models, making it suitable for scalable deployments across professional and industrial use cases.

The model excels in domains such as coding, STEM reasoning, and enterprise adaptation. It supports hybrid, on-prem, and in-VPC deployments and is optimized for integration into custom workflows. Mistral Medium 3.1 offers competitive accuracy relative to larger models like Claude Sonnet 3.5/3.7, Llama 4 Maverick, and Command R+, while maintaining broad compatibility across cloud environments.

Raw fields snapshot

{
  "id": "mistralai/mistral-medium-3.1",
  "name": "Mistral: Mistral Medium 3.1",
  "description": "Mistral Medium 3.1 is an updated version of Mistral Medium 3, which is a high-performance enterprise-grade language model designed to deliver frontier-level capabilities at significantly reduced operational cost. It balances state-of-the-art reasoning and multimodal performance with 8× lower cost compared to traditional large models, making it suitable for scalable deployments across professional and industrial use cases.\n\nThe model excels in domains such as coding, STEM reasoning, and enterprise adaptation. It supports hybrid, on-prem, and in-VPC deployments and is optimized for integration into custom workflows. Mistral Medium 3.1 offers competitive accuracy relative to larger models like Claude Sonnet 3.5/3.7, Llama 4 Maverick, and Command R+, while maintaining broad compatibility across cloud environments.",
  "created": 1755095639,
  "canonical_slug": "mistralai/mistral-medium-3.1",
  "hugging_face_id": "",
  "source_type": "openrouter_only",
  "context_length": 131072,
  "max_completion_tokens": null,
  "is_moderated": false,
  "architecture": {
    "modality": "text+image->text",
    "input_modalities": [
      "text",
      "image"
    ],
    "output_modalities": [
      "text"
    ],
    "tokenizer": "Mistral",
    "instruct_type": null
  },
  "input_modalities": [
    "text",
    "image"
  ],
  "output_modalities": [
    "text"
  ],
  "modality": "text+image->text",
  "tokenizer": "Mistral",
  "instruct_type": null,
  "supported_parameters": [
    "frequency_penalty",
    "max_tokens",
    "presence_penalty",
    "response_format",
    "seed",
    "stop",
    "structured_outputs",
    "temperature",
    "tool_choice",
    "tools",
    "top_p"
  ],
  "default_parameters": {
    "temperature": 0.3
  },
  "per_request_limits": null,
  "top_provider": {
    "context_length": 131072,
    "max_completion_tokens": null,
    "is_moderated": false
  },
  "pricing": {
    "prompt": "0.0000004",
    "completion": "0.000002"
  },
  "PPM": {
    "prompt": 0.4,
    "completion": 2
  },
  "openrouter_raw": {
    "id": "mistralai/mistral-medium-3.1",
    "canonical_slug": "mistralai/mistral-medium-3.1",
    "hugging_face_id": "",
    "name": "Mistral: Mistral Medium 3.1",
    "created": 1755095639,
    "description": "Mistral Medium 3.1 is an updated version of Mistral Medium 3, which is a high-performance enterprise-grade language model designed to deliver frontier-level capabilities at significantly reduced operational cost. It balances state-of-the-art reasoning and multimodal performance with 8× lower cost compared to traditional large models, making it suitable for scalable deployments across professional and industrial use cases.\n\nThe model excels in domains such as coding, STEM reasoning, and enterprise adaptation. It supports hybrid, on-prem, and in-VPC deployments and is optimized for integration into custom workflows. Mistral Medium 3.1 offers competitive accuracy relative to larger models like Claude Sonnet 3.5/3.7, Llama 4 Maverick, and Command R+, while maintaining broad compatibility across cloud environments.",
    "context_length": 131072,
    "architecture": {
      "modality": "text+image->text",
      "input_modalities": [
        "text",
        "image"
      ],
      "output_modalities": [
        "text"
      ],
      "tokenizer": "Mistral",
      "instruct_type": null
    },
    "pricing": {
      "prompt": "0.0000004",
      "completion": "0.000002"
    },
    "top_provider": {
      "context_length": 131072,
      "max_completion_tokens": null,
      "is_moderated": false
    },
    "per_request_limits": null,
    "supported_parameters": [
      "frequency_penalty",
      "max_tokens",
      "presence_penalty",
      "response_format",
      "seed",
      "stop",
      "structured_outputs",
      "temperature",
      "tool_choice",
      "tools",
      "top_p"
    ],
    "default_parameters": {
      "temperature": 0.3
    },
    "expiration_date": null
  }
}
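Given the `PPM` figures and the 131,072-token `context_length` from the snapshot, a rough per-request cost estimate follows directly. A sketch assuming token counts are already known and that the context window bounds prompt plus completion tokens jointly (the function and variable names are illustrative, not part of any API):

```python
CONTEXT_LENGTH = 131_072                    # "context_length" from the snapshot
PPM = {"prompt": 0.4, "completion": 2.0}    # USD per 1M tokens, from "PPM"

def estimate_cost_usd(prompt_tokens: int, completion_tokens: int) -> float:
    """Rough USD cost for one request at the listed rates."""
    if prompt_tokens + completion_tokens > CONTEXT_LENGTH:
        raise ValueError("request exceeds the 131,072-token context window")
    return (prompt_tokens * PPM["prompt"]
            + completion_tokens * PPM["completion"]) / 1_000_000

# Example: 10K prompt tokens plus 1K completion tokens.
print(f"${estimate_cost_usd(10_000, 1_000):.4f}")  # $0.0060
```

Note that `max_completion_tokens` is `null` in the snapshot, so the context window is the only documented length limit here; actual provider limits may differ.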