OpenAI: GPT-5.1 Chat
Server-rendered model summary page for indexing/share previews. Use the interactive explorer for full filtering and comparison.
Identifiers & provenance
- Primary ID: openai/gpt-5.1-chat
- OpenRouter ID: openai/gpt-5.1-chat
- Canonical slug: openai/gpt-5.1-chat-20251113
Source semantics
- Arena rank is a human-preference leaderboard signal, not a universal truth metric.
- OpenRouter usage/popularity reflects adoption/traffic, not benchmark quality.
- Pricing fields may differ by provider and can include extra modes beyond prompt/completion.
Read more on Methodology & data sources.
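To illustrate the pricing semantics above, here is a minimal sketch of how per-token USD price strings (as in the raw snapshot's `pricing` block) map to per-million-token (PPM) rates. The `to_ppm` helper and the 1,000,000 conversion factor are assumptions inferred from the snapshot's `PPM` block, not an official API; note the snapshot applies the same factor uniformly, so per-request modes like `web_search` come out as a per-million-requests figure.

```python
from decimal import Decimal

def to_ppm(pricing: dict) -> dict:
    """Convert per-token USD price strings to per-million-token rates.

    Assumed helper: PPM = price * 1_000_000, matching the snapshot's PPM block.
    Decimal avoids float parsing error on the small per-token strings.
    """
    return {field: float(Decimal(price) * 1_000_000) for field, price in pricing.items()}

# Values taken from this model's pricing snapshot.
pricing = {
    "prompt": "0.00000125",
    "completion": "0.00001",
    "web_search": "0.01",
    "input_cache_read": "0.000000125",
}
print(to_ppm(pricing))
```

Under these assumptions, `prompt` converts to 1.25 and `completion` to 10, matching the `PPM` block in the snapshot below.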
Description
GPT-5.1 Chat (also known as Instant) is the fast, lightweight member of the 5.1 family, optimized for low-latency chat while retaining strong general intelligence. It uses adaptive reasoning to selectively “think” on harder queries, improving accuracy on math, coding, and multi-step tasks without slowing down typical conversations. The model is warmer and more conversational by default, with better instruction following and more stable short-form reasoning. GPT-5.1 Chat is designed for high-throughput, interactive workloads where responsiveness and consistency matter more than deep deliberation.
Raw fields snapshot
{
  "id": "openai/gpt-5.1-chat",
  "name": "OpenAI: GPT-5.1 Chat",
  "description": "GPT-5.1 Chat (AKA Instant is the fast, lightweight member of the 5.1 family, optimized for low-latency chat while retaining strong general intelligence. It uses adaptive reasoning to selectively “think” on harder queries, improving accuracy on math, coding, and multi-step tasks without slowing down typical conversations. The model is warmer and more conversational by default, with better instruction following and more stable short-form reasoning. GPT-5.1 Chat is designed for high-throughput, interactive workloads where responsiveness and consistency matter more than deep deliberation.\n",
  "created": 1763060302,
  "canonical_slug": "openai/gpt-5.1-chat-20251113",
  "hugging_face_id": "",
  "source_type": "openrouter_only",
  "context_length": 128000,
  "max_completion_tokens": 16384,
  "is_moderated": true,
  "architecture": {
    "modality": "text+image+file->text",
    "input_modalities": ["file", "image", "text"],
    "output_modalities": ["text"],
    "tokenizer": "GPT",
    "instruct_type": null
  },
  "input_modalities": ["file", "image", "text"],
  "output_modalities": ["text"],
  "modality": "text+image+file->text",
  "tokenizer": "GPT",
  "instruct_type": null,
  "supported_parameters": ["max_tokens", "response_format", "seed", "structured_outputs", "tool_choice", "tools"],
  "default_parameters": {
    "temperature": null,
    "top_p": null,
    "frequency_penalty": null
  },
  "per_request_limits": null,
  "top_provider": {
    "context_length": 128000,
    "max_completion_tokens": 16384,
    "is_moderated": true
  },
  "pricing": {
    "prompt": "0.00000125",
    "completion": "0.00001",
    "web_search": "0.01",
    "input_cache_read": "0.000000125"
  },
  "PPM": {
    "prompt": 1.25,
    "completion": 10,
    "web_search": 10000,
    "input_cache_read": 0.125
  },
  "openrouter_raw": {
    "id": "openai/gpt-5.1-chat",
    "canonical_slug": "openai/gpt-5.1-chat-20251113",
    "hugging_face_id": "",
    "name": "OpenAI: GPT-5.1 Chat",
    "created": 1763060302,
    "description": "GPT-5.1 Chat (AKA Instant is the fast, lightweight member of the 5.1 family, optimized for low-latency chat while retaining strong general intelligence. It uses adaptive reasoning to selectively “think” on harder queries, improving accuracy on math, coding, and multi-step tasks without slowing down typical conversations. The model is warmer and more conversational by default, with better instruction following and more stable short-form reasoning. GPT-5.1 Chat is designed for high-throughput, interactive workloads where responsiveness and consistency matter more than deep deliberation.\n",
    "context_length": 128000,
    "architecture": {
      "modality": "text+image+file->text",
      "input_modalities": ["file", "image", "text"],
      "output_modalities": ["text"],
      "tokenizer": "GPT",
      "instruct_type": null
    },
    "pricing": {
      "prompt": "0.00000125",
      "completion": "0.00001",
      "web_search": "0.01",
      "input_cache_read": "0.000000125"
    },
    "top_provider": {
      "context_length": 128000,
      "max_completion_tokens": 16384,
      "is_moderated": true
    },
    "per_request_limits": null,
    "supported_parameters": ["max_tokens", "response_format", "seed", "structured_outputs", "tool_choice", "tools"],
    "default_parameters": {
      "temperature": null,
      "top_p": null,
      "frequency_penalty": null
    },
    "expiration_date": null
  }
}
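The `supported_parameters` field in the snapshot above can be used to sanitize request options before calling the model. A minimal sketch, assuming a client that filters options against that list before sending; the `build_payload` helper is illustrative and not part of any official SDK:

```python
# Values copied from this model's supported_parameters field.
SUPPORTED_PARAMETERS = {
    "max_tokens", "response_format", "seed",
    "structured_outputs", "tool_choice", "tools",
}

def build_payload(model: str, messages: list, **options) -> dict:
    """Build a chat request payload, dropping options the model does not advertise."""
    payload = {"model": model, "messages": messages}
    for key, value in options.items():
        if key in SUPPORTED_PARAMETERS:
            payload[key] = value
    return payload

payload = build_payload(
    "openai/gpt-5.1-chat",
    [{"role": "user", "content": "Hello"}],
    max_tokens=256,
    temperature=0.7,  # not in supported_parameters, so it is silently dropped
)
```

Dropping unsupported options client-side (rather than sending them and relying on provider behavior) keeps requests portable across models whose `supported_parameters` lists differ.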