
OpenAI: GPT-5.2 Chat

This is a server-rendered model summary page intended for indexing and share previews. Use the interactive explorer for full filtering and comparison.

Match confidence: Unmatched
Source type: model_only
Context window
N/A
Arena overall rank
N/A
Input price
N/A
Output price
N/A

Identifiers & provenance

Primary ID
openai/gpt-5.2-chat
OpenRouter ID
openai/gpt-5.2-chat
Canonical slug
openai/gpt-5.2-chat-20251211

Source semantics

  • Arena rank is a human-preference leaderboard signal, not a universal truth metric.
  • OpenRouter usage/popularity reflects adoption/traffic, not benchmark quality.
  • Pricing fields may differ by provider and can include extra modes beyond prompt/completion.

Read more on Methodology & data sources.
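Because this model is unmatched, its context window, rank, and pricing fields are null in the snapshot rather than zero. A minimal sketch of how a renderer might distinguish the two cases when formatting these cards (the helper names are illustrative, not part of the explorer's actual code):

```python
def fmt_context(context_length):
    """Render a context window, with a placeholder when the value is unknown."""
    return f"{context_length:,} tokens" if context_length is not None else "N/A"

def fmt_price(per_token):
    """Render a per-token USD price as $/1M tokens, or a placeholder when unpriced."""
    return f"${per_token * 1_000_000:,.2f} / 1M" if per_token is not None else "N/A"

# Values mirror the raw fields snapshot below: everything is null (None).
snapshot = {"context_length": None, "pricing": {"prompt": None, "completion": None}}
print(fmt_context(snapshot["context_length"]))   # prints "N/A", not "0"
print(fmt_price(snapshot["pricing"]["prompt"]))  # prints "N/A", not "$0.000 / 1M"
```

Rendering `N/A` avoids implying that an unpriced, unranked model costs $0 or has a zero-token context window.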

Description

GPT-5.2 Chat (AKA Instant) is the fast, lightweight member of the 5.2 family, optimized for low-latency chat while retaining strong general intelligence. It uses adaptive reasoning to selectively “think” on harder queries, improving accuracy on math, coding, and multi-step tasks without slowing down typical conversations. The model is warmer and more conversational by default, with better instruction following and more stable short-form reasoning. GPT-5.2 Chat is designed for high-throughput, interactive workloads where responsiveness and consistency matter more than deep deliberation.

Raw fields snapshot

{
  "id": "openai/gpt-5.2-chat",
  "canonical_slug": "openai/gpt-5.2-chat-20251211",
  "name": "OpenAI: GPT-5.2 Chat",
  "display_name": "OpenAI: GPT-5.2 Chat",
  "provider": "openai",
  "description": "GPT-5.2 Chat (AKA Instant) is the fast, lightweight member of the 5.2 family, optimized for low-latency chat while retaining strong general intelligence. It uses adaptive reasoning to selectively “think” on harder queries, improving accuracy on math, coding, and multi-step tasks without slowing down typical conversations. The model is warmer and more conversational by default, with better instruction following and more stable short-form reasoning. GPT-5.2 Chat is designed for high-throughput, interactive workloads where responsiveness and consistency matter more than deep deliberation.",
  "context_length": null,
  "source_type": "model_only",
  "best_rank": null,
  "pricing": {
    "prompt": null,
    "completion": null
  },
  "pricing_summary": {},
  "capabilities": {
    "modalities": [
      "text",
      "image",
      "file"
    ],
    "context_length": null,
    "architecture": {
      "modality": "text+image+file->text",
      "input_modalities": [
        "file",
        "image",
        "text"
      ],
      "output_modalities": [
        "text"
      ],
      "tokenizer": "GPT",
      "instruct_type": null
    }
  },
  "__detail_source": "model_snapshot",
  "__raw_snapshot": {
    "model": {
      "id": "openai/gpt-5.2-chat",
      "slug": "openai/gpt-5.2-chat-20251211",
      "display_name": "OpenAI: GPT-5.2 Chat",
      "provider": "openai",
      "description": "GPT-5.2 Chat (AKA Instant) is the fast, lightweight member of the 5.2 family, optimized for low-latency chat while retaining strong general intelligence. It uses adaptive reasoning to selectively “think” on harder queries, improving accuracy on math, coding, and multi-step tasks without slowing down typical conversations. The model is warmer and more conversational by default, with better instruction following and more stable short-form reasoning. GPT-5.2 Chat is designed for high-throughput, interactive workloads where responsiveness and consistency matter more than deep deliberation.",
      "context_length": null,
      "modalities": [
        "text",
        "image",
        "file"
      ],
      "tags": [],
      "source_type": "model_only",
      "updated_at": "2026-03-01T02:42:36.596655+00:00",
      "source": "model_only"
    },
    "overall_score": null,
    "best_rank": null,
    "ranks_by_category": {},
    "scores_by_category": {},
    "pricing_summary": {},
    "capabilities": {
      "modalities": [
        "text",
        "image",
        "file"
      ],
      "context_length": null,
      "architecture": {
        "modality": "text+image+file->text",
        "input_modalities": [
          "file",
          "image",
          "text"
        ],
        "output_modalities": [
          "text"
        ],
        "tokenizer": "GPT",
        "instruct_type": null
      }
    }
  }
}
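When consuming this snapshot programmatically, JSON `null` values arrive as `None` and should be treated as "unknown" rather than zero. A short sketch, using a trimmed inline subset of the JSON above for illustration:

```python
import json

# Trimmed subset of the raw fields snapshot above; in practice you would
# load the full snapshot from the page or an API response.
snapshot_text = '''{
  "id": "openai/gpt-5.2-chat",
  "source_type": "model_only",
  "best_rank": null,
  "pricing": {"prompt": null, "completion": null},
  "capabilities": {"architecture": {"input_modalities": ["file", "image", "text"]}}
}'''

raw = json.loads(snapshot_text)

# JSON null parses to Python None: unranked and unpriced, not rank 0 or $0.
assert raw["source_type"] == "model_only"
assert raw["best_rank"] is None
assert raw["pricing"]["prompt"] is None
print(raw["capabilities"]["architecture"]["input_modalities"])  # ['file', 'image', 'text']
```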