
Z.ai: GLM 4.5

Server-rendered model summary page for indexing/share previews. Use the interactive explorer for full filtering and comparison.

Match confidence: Unmatched
Source type: model_only
Context window: 0
Arena overall rank: n/a
Input price: $0.000 / 1M
Output price: $0.000 / 1M

Identifiers & provenance

Primary ID
z-ai/glm-4.5
OpenRouter ID
z-ai/glm-4.5
Canonical slug
z-ai/glm-4.5

Source semantics

  • Arena rank is a human-preference leaderboard signal, not a universal truth metric.
  • OpenRouter usage/popularity reflects adoption/traffic, not benchmark quality.
  • Pricing fields may differ by provider and can include extra modes beyond prompt/completion.

Read more on Methodology & data sources.
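The caveats above matter when consuming a snapshot like the one below: this page renders null pricing as "$0.000 / 1M" and a null context window as 0, which is easy to misread as free pricing or a zero-length context. A minimal sketch (field names taken from this page's raw snapshot; the placeholder strings are this example's own choice) that surfaces missing values instead of rendering them as zeros:

```python
# Normalize nullable snapshot fields so missing data is not displayed as 0.
snapshot = {
    "id": "z-ai/glm-4.5",
    "context_length": None,  # null in the raw snapshot
    "pricing": {"prompt": None, "completion": None},
    "best_rank": None,
}

def fmt_price(per_token):
    """Render a per-token price as $/1M tokens, or a placeholder when null."""
    if per_token is None:
        return "n/a"
    return f"${float(per_token) * 1e6:.3f} / 1M"

def summarize(model):
    """Flatten a snapshot into display fields, keeping nulls visible."""
    ctx = model["context_length"]
    rank = model["best_rank"]
    return {
        "id": model["id"],
        "context_window": ctx if ctx is not None else "n/a",
        "input_price": fmt_price(model["pricing"].get("prompt")),
        "output_price": fmt_price(model["pricing"].get("completion")),
        "arena_rank": rank if rank is not None else "unranked",
    }

print(summarize(snapshot))
```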

Description

GLM-4.5 is our latest flagship foundation model, purpose-built for agent-based applications. It leverages a Mixture-of-Experts (MoE) architecture and supports a context length of up to 128k tokens. GLM-4.5 delivers significantly enhanced capabilities in reasoning, code generation, and agent alignment. It supports a hybrid inference mode with two options: a "thinking mode" designed for complex reasoning and tool use, and a "non-thinking mode" optimized for instant responses. Users can control the reasoning behaviour with the `enabled` boolean of the `reasoning` parameter. [Learn more in our docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#enable-reasoning-with-default-config)
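The reasoning toggle described above maps onto the OpenRouter chat-completions request body. A minimal sketch of building such a payload (the model ID is from this page; the `reasoning.enabled` field follows the OpenRouter docs linked above; no request is actually sent):

```python
import json

def build_request(prompt: str, thinking: bool) -> dict:
    """Build an OpenRouter chat-completions payload that toggles GLM-4.5's
    hybrid inference mode via the `reasoning.enabled` boolean."""
    return {
        "model": "z-ai/glm-4.5",
        "messages": [{"role": "user", "content": prompt}],
        # True  -> "thinking mode" (complex reasoning and tool use)
        # False -> "non-thinking mode" (instant responses)
        "reasoning": {"enabled": thinking},
    }

body = build_request("Plan a multi-step refactor.", thinking=True)
print(json.dumps(body, indent=2))
```

The payload would be POSTed to the chat-completions endpoint with an API key; only the request shape is shown here.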

Raw fields snapshot

{
  "id": "z-ai/glm-4.5",
  "canonical_slug": "z-ai/glm-4.5",
  "name": "Z.ai: GLM 4.5",
  "display_name": "Z.ai: GLM 4.5",
  "provider": "z-ai",
  "description": "GLM-4.5 is our latest flagship foundation model, purpose-built for agent-based applications. It leverages a Mixture-of-Experts (MoE) architecture and supports a context length of up to 128k tokens. GLM-4.5 delivers significantly enhanced capabilities in reasoning, code generation, and agent alignment. It supports a hybrid inference mode with two options, a \"thinking mode\" designed for complex reasoning and tool use, and a \"non-thinking mode\" optimized for instant responses. Users can control the reasoning behaviour with the `reasoning` `enabled` boolean. [Learn more in our docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#enable-reasoning-with-default-config)",
  "context_length": null,
  "source_type": "model_only",
  "best_rank": null,
  "pricing": {
    "prompt": null,
    "completion": null
  },
  "pricing_summary": {},
  "capabilities": {
    "modalities": [
      "text"
    ],
    "context_length": null,
    "architecture": {
      "modality": "text->text",
      "input_modalities": [
        "text"
      ],
      "output_modalities": [
        "text"
      ],
      "tokenizer": "Other",
      "instruct_type": null
    }
  },
  "__detail_source": "model_snapshot",
  "__raw_snapshot": {
    "model": {
      "id": "z-ai/glm-4.5",
      "slug": "z-ai/glm-4.5",
      "display_name": "Z.ai: GLM 4.5",
      "provider": "z-ai",
      "description": "GLM-4.5 is our latest flagship foundation model, purpose-built for agent-based applications. It leverages a Mixture-of-Experts (MoE) architecture and supports a context length of up to 128k tokens. GLM-4.5 delivers significantly enhanced capabilities in reasoning, code generation, and agent alignment. It supports a hybrid inference mode with two options, a \"thinking mode\" designed for complex reasoning and tool use, and a \"non-thinking mode\" optimized for instant responses. Users can control the reasoning behaviour with the `reasoning` `enabled` boolean. [Learn more in our docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#enable-reasoning-with-default-config)",
      "context_length": null,
      "modalities": [
        "text"
      ],
      "tags": [],
      "source_type": "model_only",
      "updated_at": "2026-03-01T02:42:37.891754+00:00",
      "source": "model_only"
    },
    "overall_score": null,
    "best_rank": null,
    "ranks_by_category": {},
    "scores_by_category": {},
    "pricing_summary": {},
    "capabilities": {
      "modalities": [
        "text"
      ],
      "context_length": null,
      "architecture": {
        "modality": "text->text",
        "input_modalities": [
          "text"
        ],
        "output_modalities": [
          "text"
        ],
        "tokenizer": "Other",
        "instruct_type": null
      }
    }
  }
}