Mistral: Mistral Small 3.2 24B
Server-rendered model summary page for indexing/share previews. Use the interactive explorer for full filtering and comparison.
Match confidence: Unmatched
Source type: openrouter_only
Context window
131.1K
Arena overall rank
—
Input price
$0.06 / 1M
Output price
$0.18 / 1M
Identifiers & provenance
- Primary ID: mistralai/mistral-small-3.2-24b-instruct
- OpenRouter ID: mistralai/mistral-small-3.2-24b-instruct
- Canonical slug: mistralai/mistral-small-3.2-24b-instruct-2506
Source semantics
- Arena rank is a human-preference leaderboard signal, not a universal truth metric.
- OpenRouter usage/popularity reflects adoption/traffic, not benchmark quality.
- Pricing fields may differ by provider and can include extra modes beyond prompt/completion; the per-token to per-million conversion is sketched below.
Read more on Methodology & data sources.
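The pricing relationship in the snapshot below is a straight unit conversion: the pricing block quotes USD per token as strings, and the PPM block is the same values scaled to USD per million tokens. A minimal Python sketch of that conversion, using figures copied from the snapshot (the per_million helper is illustrative, not part of the explorer):

from decimal import Decimal

def per_million(per_token_usd: str) -> float:
    # Scale an OpenRouter-style per-token USD price string to USD per 1M tokens.
    return float(Decimal(per_token_usd) * 1_000_000)

# Values copied from the raw fields snapshot below.
pricing = {"prompt": "0.00000006", "completion": "0.00000018", "input_cache_read": "0.00000003"}
ppm = {key: per_million(value) for key, value in pricing.items()}
print(ppm)  # {'prompt': 0.06, 'completion': 0.18, 'input_cache_read': 0.03}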
Description
Mistral-Small-3.2-24B-Instruct-2506 is an updated 24B parameter model from Mistral optimized for instruction following, repetition reduction, and improved function calling. Compared to the 3.1 release, version 3.2 significantly improves accuracy on WildBench and Arena Hard, reduces infinite generations, and delivers gains in tool use and structured output tasks. It supports image and text inputs with structured outputs, function/tool calling, and strong performance across coding (HumanEval+, MBPP), STEM (MMLU, MATH, GPQA), and vision benchmarks (ChartQA, DocVQA).
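As a rough illustration of how the tool-calling support described above is typically exercised, the sketch below sends a chat completions request through OpenRouter's OpenAI-compatible endpoint using this page's primary ID as the model name. The get_weather tool, the prompt, and the OPENROUTER_API_KEY variable are assumptions made for the example; only the model ID and the 0.3 default temperature come from this page.

import os
import requests

payload = {
    "model": "mistralai/mistral-small-3.2-24b-instruct",
    "messages": [{"role": "user", "content": "What's the weather in Paris right now?"}],
    # Hypothetical tool definition, included only to show the function-calling shape.
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    "temperature": 0.3,  # matches default_parameters in the snapshot below
}

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"])  # either text or a tool_calls entry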
Raw fields snapshot
{
"id": "mistralai/mistral-small-3.2-24b-instruct",
"name": "Mistral: Mistral Small 3.2 24B",
"description": "Mistral-Small-3.2-24B-Instruct-2506 is an updated 24B parameter model from Mistral optimized for instruction following, repetition reduction, and improved function calling. Compared to the 3.1 release, version 3.2 significantly improves accuracy on WildBench and Arena Hard, reduces infinite generations, and delivers gains in tool use and structured output tasks.\n\nIt supports image and text inputs with structured outputs, function/tool calling, and strong performance across coding (HumanEval+, MBPP), STEM (MMLU, MATH, GPQA), and vision benchmarks (ChartQA, DocVQA).",
"created": 1750443016,
"canonical_slug": "mistralai/mistral-small-3.2-24b-instruct-2506",
"hugging_face_id": "mistralai/Mistral-Small-3.2-24B-Instruct-2506",
"source_type": "openrouter_only",
"context_length": 131072,
"max_completion_tokens": 131072,
"is_moderated": false,
"architecture": {
"modality": "text+image->text",
"input_modalities": [
"image",
"text"
],
"output_modalities": [
"text"
],
"tokenizer": "Mistral",
"instruct_type": null
},
"input_modalities": [
"image",
"text"
],
"output_modalities": [
"text"
],
"modality": "text+image->text",
"tokenizer": "Mistral",
"instruct_type": null,
"supported_parameters": [
"frequency_penalty",
"logit_bias",
"max_tokens",
"min_p",
"presence_penalty",
"repetition_penalty",
"response_format",
"seed",
"stop",
"structured_outputs",
"temperature",
"tool_choice",
"tools",
"top_k",
"top_p"
],
"default_parameters": {
"temperature": 0.3
},
"per_request_limits": null,
"top_provider": {
"context_length": 131072,
"max_completion_tokens": 131072,
"is_moderated": false
},
"pricing": {
"prompt": "0.00000006",
"completion": "0.00000018",
"input_cache_read": "0.00000003"
},
"PPM": {
"prompt": 0.06,
"completion": 0.18,
"input_cache_read": 0.03
},
"openrouter_raw": {
"id": "mistralai/mistral-small-3.2-24b-instruct",
"canonical_slug": "mistralai/mistral-small-3.2-24b-instruct-2506",
"hugging_face_id": "mistralai/Mistral-Small-3.2-24B-Instruct-2506",
"name": "Mistral: Mistral Small 3.2 24B",
"created": 1750443016,
"description": "Mistral-Small-3.2-24B-Instruct-2506 is an updated 24B parameter model from Mistral optimized for instruction following, repetition reduction, and improved function calling. Compared to the 3.1 release, version 3.2 significantly improves accuracy on WildBench and Arena Hard, reduces infinite generations, and delivers gains in tool use and structured output tasks.\n\nIt supports image and text inputs with structured outputs, function/tool calling, and strong performance across coding (HumanEval+, MBPP), STEM (MMLU, MATH, GPQA), and vision benchmarks (ChartQA, DocVQA).",
"context_length": 131072,
"architecture": {
"modality": "text+image->text",
"input_modalities": [
"image",
"text"
],
"output_modalities": [
"text"
],
"tokenizer": "Mistral",
"instruct_type": null
},
"pricing": {
"prompt": "0.00000006",
"completion": "0.00000018",
"input_cache_read": "0.00000003"
},
"top_provider": {
"context_length": 131072,
"max_completion_tokens": 131072,
"is_moderated": false
},
"per_request_limits": null,
"supported_parameters": [
"frequency_penalty",
"logit_bias",
"max_tokens",
"min_p",
"presence_penalty",
"repetition_penalty",
"response_format",
"seed",
"stop",
"structured_outputs",
"temperature",
"tool_choice",
"tools",
"top_k",
"top_p"
],
"default_parameters": {
"temperature": 0.3
},
"expiration_date": null
}
}
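A client consuming this snapshot might use supported_parameters and default_parameters to sanitize outgoing requests: start from the model's defaults, apply caller overrides, and drop anything the model does not advertise. A small sketch under that assumption (build_request is a hypothetical helper, not an explorer or OpenRouter API):

SUPPORTED = {
    "frequency_penalty", "logit_bias", "max_tokens", "min_p", "presence_penalty",
    "repetition_penalty", "response_format", "seed", "stop", "structured_outputs",
    "temperature", "tool_choice", "tools", "top_k", "top_p",
}
DEFAULTS = {"temperature": 0.3}  # default_parameters from the snapshot

def build_request(messages, **overrides):
    # Merge snapshot defaults with caller overrides, keeping only supported parameters.
    params = {**DEFAULTS, **overrides}
    dropped = sorted(k for k in params if k not in SUPPORTED)
    params = {k: v for k, v in params.items() if k in SUPPORTED}
    if dropped:
        print(f"dropping unsupported parameters: {dropped}")
    return {"model": "mistralai/mistral-small-3.2-24b-instruct", "messages": messages, **params}

req = build_request([{"role": "user", "content": "Summarize this page."}], top_p=0.9, mirostat=2)
# mirostat is not in supported_parameters, so it is dropped; temperature stays at the 0.3 default.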