OpenAI: GPT-5.3-Codex

Server-rendered model summary page for indexing/share previews. Use the interactive explorer for full filtering and comparison.

Match confidence: Unmatched
Source type: openrouter_only
Context window
400K
Arena overall rank
—
Input price
$1.75 / 1M
Output price
$14.00 / 1M
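The per-million rates above combine with token counts to give a per-request cost. A minimal sketch, using the rates from the pricing snapshot below (the token counts in the example are illustrative, and cached prompt tokens are assumed to bill at the `input_cache_read` rate):

```python
# USD per 1M tokens, taken from the PPM block in the raw fields snapshot.
PPM = {"prompt": 1.75, "completion": 14.0, "input_cache_read": 0.175}

def request_cost(prompt_tokens: int, completion_tokens: int,
                 cached_tokens: int = 0) -> float:
    """Estimate USD cost for one request.

    Cached prompt tokens are billed at the cheaper cache-read rate;
    the rest of the prompt bills at the full prompt rate.
    """
    fresh = prompt_tokens - cached_tokens
    return (fresh * PPM["prompt"]
            + cached_tokens * PPM["input_cache_read"]
            + completion_tokens * PPM["completion"]) / 1_000_000

# Example: 100K prompt tokens (half served from cache) plus 2K of output.
cost = request_cost(100_000, 2_000, cached_tokens=50_000)
print(round(cost, 5))  # → 0.12425
```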

Identifiers & provenance

Primary ID
openai/gpt-5.3-codex
OpenRouter ID
openai/gpt-5.3-codex
Canonical slug
openai/gpt-5.3-codex-20260224

Source semantics

  • Arena rank is a human-preference leaderboard signal, not a universal truth metric.
  • OpenRouter usage/popularity reflects adoption/traffic, not benchmark quality.
  • Pricing fields may differ by provider and can include extra modes beyond prompt/completion.

Read more on Methodology & data sources.
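The pricing caveat above is visible in the snapshot below: OpenRouter reports per-token USD strings (`"prompt": "0.00000175"`), while this page's PPM block reports per-million values (`1.75`). A small sketch of that conversion, using `Decimal` to avoid float parsing drift (note the snapshot applies the same ×1M scaling even to `web_search`, which yields the large `10000` figure):

```python
from decimal import Decimal

def per_million(per_token: str) -> float:
    """Convert a per-token USD price string to USD per 1M tokens."""
    return float(Decimal(per_token) * 1_000_000)

pricing = {
    "prompt": "0.00000175",
    "completion": "0.000014",
    "input_cache_read": "0.000000175",
}
ppm = {k: per_million(v) for k, v in pricing.items()}
print(ppm)  # → {'prompt': 1.75, 'completion': 14.0, 'input_cache_read': 0.175}
```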

Description

GPT-5.3-Codex is OpenAI’s most advanced agentic coding model, combining the frontier software engineering performance of GPT-5.2-Codex with the broader reasoning and professional knowledge capabilities of GPT-5.2. It achieves state-of-the-art results on SWE-Bench Pro and strong performance on Terminal-Bench 2.0 and OSWorld-Verified, reflecting improved multi-language coding, terminal proficiency, and real-world computer-use skills. The model is optimized for long-running, tool-using workflows and supports interactive steering during execution, making it suitable for complex development tasks, debugging, deployment, and iterative product work.

Beyond coding, GPT-5.3-Codex performs strongly on structured knowledge-work benchmarks such as GDPval, supporting tasks like document drafting, spreadsheet analysis, slide creation, and operational research across domains. It is trained with enhanced cybersecurity awareness, including vulnerability identification capabilities, and deployed with additional safeguards for high-risk use cases. Compared to prior Codex models, it is more token-efficient and approximately 25% faster, targeting professional end-to-end workflows that span reasoning, execution, and computer interaction.

Raw fields snapshot

{
  "id": "openai/gpt-5.3-codex",
  "name": "OpenAI: GPT-5.3-Codex",
  "description": "GPT-5.3-Codex is OpenAI’s most advanced agentic coding model, combining the frontier software engineering performance of GPT-5.2-Codex with the broader reasoning and professional knowledge capabilities of GPT-5.2. It achieves state-of-the-art results on SWE-Bench Pro and strong performance on Terminal-Bench 2.0 and OSWorld-Verified, reflecting improved multi-language coding, terminal proficiency, and real-world computer-use skills. The model is optimized for long-running, tool-using workflows and supports interactive steering during execution, making it suitable for complex development tasks, debugging, deployment, and iterative product work.\n\nBeyond coding, GPT-5.3-Codex performs strongly on structured knowledge-work benchmarks such as GDPval, supporting tasks like document drafting, spreadsheet analysis, slide creation, and operational research across domains. It is trained with enhanced cybersecurity awareness, including vulnerability identification capabilities, and deployed with additional safeguards for high-risk use cases. Compared to prior Codex models, it is more token-efficient and approximately 25% faster, targeting professional end-to-end workflows that span reasoning, execution, and computer interaction.",
  "created": 1771959164,
  "canonical_slug": "openai/gpt-5.3-codex-20260224",
  "hugging_face_id": "",
  "source_type": "openrouter_only",
  "context_length": 400000,
  "max_completion_tokens": 128000,
  "is_moderated": true,
  "architecture": {
    "modality": "text+image->text",
    "input_modalities": [
      "text",
      "image"
    ],
    "output_modalities": [
      "text"
    ],
    "tokenizer": "GPT",
    "instruct_type": null
  },
  "input_modalities": [
    "text",
    "image"
  ],
  "output_modalities": [
    "text"
  ],
  "modality": "text+image->text",
  "tokenizer": "GPT",
  "instruct_type": null,
  "supported_parameters": [
    "include_reasoning",
    "max_tokens",
    "reasoning",
    "response_format",
    "seed",
    "structured_outputs",
    "tool_choice",
    "tools"
  ],
  "default_parameters": {
    "temperature": null,
    "top_p": null,
    "frequency_penalty": null
  },
  "per_request_limits": null,
  "top_provider": {
    "context_length": 400000,
    "max_completion_tokens": 128000,
    "is_moderated": true
  },
  "pricing": {
    "prompt": "0.00000175",
    "completion": "0.000014",
    "web_search": "0.01",
    "input_cache_read": "0.000000175"
  },
  "PPM": {
    "prompt": 1.75,
    "completion": 14,
    "web_search": 10000,
    "input_cache_read": 0.175
  },
  "openrouter_raw": {
    "id": "openai/gpt-5.3-codex",
    "canonical_slug": "openai/gpt-5.3-codex-20260224",
    "hugging_face_id": "",
    "name": "OpenAI: GPT-5.3-Codex",
    "created": 1771959164,
    "description": "GPT-5.3-Codex is OpenAI’s most advanced agentic coding model, combining the frontier software engineering performance of GPT-5.2-Codex with the broader reasoning and professional knowledge capabilities of GPT-5.2. It achieves state-of-the-art results on SWE-Bench Pro and strong performance on Terminal-Bench 2.0 and OSWorld-Verified, reflecting improved multi-language coding, terminal proficiency, and real-world computer-use skills. The model is optimized for long-running, tool-using workflows and supports interactive steering during execution, making it suitable for complex development tasks, debugging, deployment, and iterative product work.\n\nBeyond coding, GPT-5.3-Codex performs strongly on structured knowledge-work benchmarks such as GDPval, supporting tasks like document drafting, spreadsheet analysis, slide creation, and operational research across domains. It is trained with enhanced cybersecurity awareness, including vulnerability identification capabilities, and deployed with additional safeguards for high-risk use cases. Compared to prior Codex models, it is more token-efficient and approximately 25% faster, targeting professional end-to-end workflows that span reasoning, execution, and computer interaction.",
    "context_length": 400000,
    "architecture": {
      "modality": "text+image->text",
      "input_modalities": [
        "text",
        "image"
      ],
      "output_modalities": [
        "text"
      ],
      "tokenizer": "GPT",
      "instruct_type": null
    },
    "pricing": {
      "prompt": "0.00000175",
      "completion": "0.000014",
      "web_search": "0.01",
      "input_cache_read": "0.000000175"
    },
    "top_provider": {
      "context_length": 400000,
      "max_completion_tokens": 128000,
      "is_moderated": true
    },
    "per_request_limits": null,
    "supported_parameters": [
      "include_reasoning",
      "max_tokens",
      "reasoning",
      "response_format",
      "seed",
      "structured_outputs",
      "tool_choice",
      "tools"
    ],
    "default_parameters": {
      "temperature": null,
      "top_p": null,
      "frequency_penalty": null
    },
    "expiration_date": null
  }
}
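The `id` and `supported_parameters` fields in the snapshot map directly onto a request body for OpenRouter's chat-completions endpoint (`POST https://openrouter.ai/api/v1/chat/completions` with Bearer auth). A minimal payload sketch, assuming the endpoint and field shapes described in OpenRouter's API docs; the prompt text and the `"effort"` value are illustrative, and the request itself is not sent here:

```python
import json

# Build a request body using only parameters the snapshot lists as supported.
payload = {
    "model": "openai/gpt-5.3-codex",
    "messages": [
        {"role": "user", "content": "Refactor this function to be iterative."}
    ],
    # Must stay within the provider's max_completion_tokens (128000).
    "max_tokens": 4096,
    # "reasoning" appears in supported_parameters; the effort shape follows
    # OpenRouter's documented reasoning config (assumed here).
    "reasoning": {"effort": "medium"},
    # "response_format" is also listed as supported.
    "response_format": {"type": "json_object"},
}
body = json.dumps(payload)
print(body[:40])
```

Sending `body` with an `Authorization: Bearer <key>` header to the endpoint above would complete the call; that step is omitted so the sketch stays self-contained.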