MoonshotAI: Kimi K2 0711
Server-rendered model summary page for indexing/share previews. Use the interactive explorer for full filtering and comparison.
Identifiers & provenance
- Primary ID: moonshotai/kimi-k2
- OpenRouter ID: moonshotai/kimi-k2
- Canonical slug: moonshotai/kimi-k2
Source semantics
- Arena rank is a human-preference leaderboard signal, not a universal measure of model quality.
- OpenRouter usage/popularity reflects adoption and traffic, not benchmark quality.
- Pricing fields may differ by provider and can include extra pricing modes beyond prompt/completion.
Read more on Methodology & data sources.
Description
Kimi K2 Instruct is a large-scale Mixture-of-Experts (MoE) language model developed by Moonshot AI, featuring 1 trillion total parameters with 32 billion active per forward pass. It is optimized for agentic capabilities, including advanced tool use, reasoning, and code synthesis. Kimi K2 excels across a broad range of benchmarks, particularly in coding (LiveCodeBench, SWE-bench), reasoning (ZebraLogic, GPQA), and tool-use (Tau2, AceBench) tasks. It supports long-context inference up to 128K tokens and is designed with a novel training stack that includes the MuonClip optimizer for stable large-scale MoE training.
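The model is addressed by its primary ID shown above. As a minimal sketch (assuming OpenRouter's public chat-completions endpoint; the API key and prompt are placeholders), a request for this model can be built like so:

```python
import json
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(api_key: str, user_message: str) -> urllib.request.Request:
    """Build (but do not send) a chat-completions request for Kimi K2."""
    payload = {
        "model": "moonshotai/kimi-k2",  # primary ID from this page
        "messages": [{"role": "user", "content": user_message}],
    }
    return urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Placeholder key; urllib.request.urlopen(req) would send the request,
# which is omitted here to keep the sketch offline.
req = build_request("sk-or-...", "Summarize MoE routing in two sentences.")
```

The sketch only constructs the request object, so it runs without network access or a real key.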
Raw fields snapshot
{
  "id": "moonshotai/kimi-k2",
  "canonical_slug": "moonshotai/kimi-k2",
  "name": "MoonshotAI: Kimi K2 0711",
  "display_name": "MoonshotAI: Kimi K2 0711",
  "provider": "moonshotai",
  "description": "Kimi K2 Instruct is a large-scale Mixture-of-Experts (MoE) language model developed by Moonshot AI, featuring 1 trillion total parameters with 32 billion active per forward pass. It is optimized for agentic capabilities, including advanced tool use, reasoning, and code synthesis. Kimi K2 excels across a broad range of benchmarks, particularly in coding (LiveCodeBench, SWE-bench), reasoning (ZebraLogic, GPQA), and tool-use (Tau2, AceBench) tasks. It supports long-context inference up to 128K tokens and is designed with a novel training stack that includes the MuonClip optimizer for stable large-scale MoE training.",
  "context_length": null,
  "source_type": "model_only",
  "best_rank": null,
  "pricing": {
    "prompt": null,
    "completion": null
  },
  "pricing_summary": {},
  "capabilities": {
    "modalities": [
      "text"
    ],
    "context_length": null,
    "architecture": {
      "modality": "text->text",
      "input_modalities": [
        "text"
      ],
      "output_modalities": [
        "text"
      ],
      "tokenizer": "Other",
      "instruct_type": null
    }
  },
  "__detail_source": "model_snapshot",
  "__raw_snapshot": {
    "model": {
      "id": "moonshotai/kimi-k2",
      "slug": "moonshotai/kimi-k2",
      "display_name": "MoonshotAI: Kimi K2 0711",
      "provider": "moonshotai",
      "description": "Kimi K2 Instruct is a large-scale Mixture-of-Experts (MoE) language model developed by Moonshot AI, featuring 1 trillion total parameters with 32 billion active per forward pass. It is optimized for agentic capabilities, including advanced tool use, reasoning, and code synthesis. Kimi K2 excels across a broad range of benchmarks, particularly in coding (LiveCodeBench, SWE-bench), reasoning (ZebraLogic, GPQA), and tool-use (Tau2, AceBench) tasks. It supports long-context inference up to 128K tokens and is designed with a novel training stack that includes the MuonClip optimizer for stable large-scale MoE training.",
      "context_length": null,
      "modalities": [
        "text"
      ],
      "tags": [],
      "source_type": "model_only",
      "updated_at": "2026-03-01T02:42:40.158382+00:00",
      "source": "model_only"
    },
    "overall_score": null,
    "best_rank": null,
    "ranks_by_category": {},
    "scores_by_category": {},
    "pricing_summary": {},
    "capabilities": {
      "modalities": [
        "text"
      ],
      "context_length": null,
      "architecture": {
        "modality": "text->text",
        "input_modalities": [
          "text"
        ],
        "output_modalities": [
          "text"
        ],
        "tokenizer": "Other",
        "instruct_type": null
      }
    }
  }
}