Nous: Hermes 4 70B
Server-rendered model summary page for indexing/share previews. Use the interactive explorer for full filtering and comparison.
Identifiers & provenance
- Primary ID: `nousresearch/hermes-4-70b`
- OpenRouter ID: `nousresearch/hermes-4-70b`
- Canonical slug: `nousresearch/hermes-4-70b`
Source semantics
- Arena rank is a human-preference leaderboard signal, not a universal truth metric.
- OpenRouter usage/popularity reflects adoption/traffic, not benchmark quality.
- Pricing fields may differ by provider and can include extra modes beyond prompt/completion.
Read more on Methodology & data sources.
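To make the pricing semantics concrete, here is a minimal sketch of estimating request cost from the per-million-token (PPM) prices shown in the snapshot below ($0.13/M prompt, $0.40/M completion). As noted above, actual provider pricing may differ and can include extra modes beyond prompt/completion; the token counts here are illustrative.

```python
# Per-million-token prices for Hermes 4 70B, as listed on this page.
# Providers may charge differently; treat these as the snapshot values only.
PROMPT_PPM = 0.13      # USD per million prompt tokens
COMPLETION_PPM = 0.40  # USD per million completion tokens

def estimate_cost_usd(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate USD cost of one request from token counts and PPM prices."""
    return (prompt_tokens * PROMPT_PPM
            + completion_tokens * COMPLETION_PPM) / 1_000_000

# e.g. a 10k-token prompt with a 2k-token completion:
print(f"${estimate_cost_usd(10_000, 2_000):.4f}")  # → $0.0021
```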
Description
Hermes 4 70B is a hybrid reasoning model from Nous Research, built on Meta-Llama-3.1-70B. It introduces the same hybrid mode as the larger 405B release, allowing the model to either respond directly or generate explicit <think>...</think> reasoning traces before answering. Users can control the reasoning behaviour with the `reasoning` `enabled` boolean. [Learn more in our docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#enable-reasoning-with-default-config)

This 70B variant is trained with the expanded post-training corpus (~60B tokens) emphasizing verified reasoning data, leading to improvements in mathematics, coding, STEM, logic, and structured outputs while maintaining general assistant performance. It supports JSON mode, schema adherence, function calling, and tool use, and is designed for greater steerability with reduced refusal rates.
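The reasoning toggle described above can be sketched as a chat-completions payload. The payload shape follows the linked OpenRouter reasoning-tokens docs; the example prompt, the commented-out HTTP call, and the API key are placeholders, not part of this page's data.

```python
import json

def build_payload(prompt: str, reasoning_enabled: bool) -> dict:
    """Build an OpenRouter chat-completions payload for Hermes 4 70B."""
    return {
        "model": "nousresearch/hermes-4-70b",
        "messages": [{"role": "user", "content": prompt}],
        # The `reasoning.enabled` boolean switches between direct answers
        # and explicit <think>...</think> reasoning traces.
        "reasoning": {"enabled": reasoning_enabled},
    }

payload = build_payload("What is 17 * 24?", reasoning_enabled=True)
print(json.dumps(payload, indent=2))

# To actually send it (requires an OpenRouter API key):
# import urllib.request
# req = urllib.request.Request(
#     "https://openrouter.ai/api/v1/chat/completions",
#     data=json.dumps(payload).encode(),
#     headers={"Authorization": "Bearer <OPENROUTER_API_KEY>",
#              "Content-Type": "application/json"},
# )
```

Setting `"enabled": False` requests direct answers without a reasoning trace.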
Raw fields snapshot
```json
{
  "id": "nousresearch/hermes-4-70b",
  "name": "Nous: Hermes 4 70B",
  "description": "Hermes 4 70B is a hybrid reasoning model from Nous Research, built on Meta-Llama-3.1-70B. It introduces the same hybrid mode as the larger 405B release, allowing the model to either respond directly or generate explicit <think>...</think> reasoning traces before answering. Users can control the reasoning behaviour with the `reasoning` `enabled` boolean. [Learn more in our docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#enable-reasoning-with-default-config)\n\nThis 70B variant is trained with the expanded post-training corpus (~60B tokens) emphasizing verified reasoning data, leading to improvements in mathematics, coding, STEM, logic, and structured outputs while maintaining general assistant performance. It supports JSON mode, schema adherence, function calling, and tool use, and is designed for greater steerability with reduced refusal rates.",
  "created": 1756236182,
  "canonical_slug": "nousresearch/hermes-4-70b",
  "hugging_face_id": "NousResearch/Hermes-4-70B",
  "source_type": "openrouter_only",
  "context_length": 131072,
  "max_completion_tokens": null,
  "is_moderated": false,
  "architecture": {
    "modality": "text->text",
    "input_modalities": ["text"],
    "output_modalities": ["text"],
    "tokenizer": "Llama3",
    "instruct_type": null
  },
  "input_modalities": ["text"],
  "output_modalities": ["text"],
  "modality": "text->text",
  "tokenizer": "Llama3",
  "instruct_type": null,
  "supported_parameters": [
    "frequency_penalty",
    "include_reasoning",
    "max_tokens",
    "presence_penalty",
    "reasoning",
    "repetition_penalty",
    "response_format",
    "temperature",
    "top_k",
    "top_p"
  ],
  "default_parameters": {},
  "per_request_limits": null,
  "top_provider": {
    "context_length": 131072,
    "max_completion_tokens": null,
    "is_moderated": false
  },
  "pricing": {
    "prompt": "0.00000013",
    "completion": "0.0000004"
  },
  "PPM": {
    "prompt": 0.13,
    "completion": 0.4
  },
  "openrouter_raw": {
    "id": "nousresearch/hermes-4-70b",
    "canonical_slug": "nousresearch/hermes-4-70b",
    "hugging_face_id": "NousResearch/Hermes-4-70B",
    "name": "Nous: Hermes 4 70B",
    "created": 1756236182,
    "description": "Hermes 4 70B is a hybrid reasoning model from Nous Research, built on Meta-Llama-3.1-70B. It introduces the same hybrid mode as the larger 405B release, allowing the model to either respond directly or generate explicit <think>...</think> reasoning traces before answering. Users can control the reasoning behaviour with the `reasoning` `enabled` boolean. [Learn more in our docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#enable-reasoning-with-default-config)\n\nThis 70B variant is trained with the expanded post-training corpus (~60B tokens) emphasizing verified reasoning data, leading to improvements in mathematics, coding, STEM, logic, and structured outputs while maintaining general assistant performance. It supports JSON mode, schema adherence, function calling, and tool use, and is designed for greater steerability with reduced refusal rates.",
    "context_length": 131072,
    "architecture": {
      "modality": "text->text",
      "input_modalities": ["text"],
      "output_modalities": ["text"],
      "tokenizer": "Llama3",
      "instruct_type": null
    },
    "pricing": {
      "prompt": "0.00000013",
      "completion": "0.0000004"
    },
    "top_provider": {
      "context_length": 131072,
      "max_completion_tokens": null,
      "is_moderated": false
    },
    "per_request_limits": null,
    "supported_parameters": [
      "frequency_penalty",
      "include_reasoning",
      "max_tokens",
      "presence_penalty",
      "reasoning",
      "repetition_penalty",
      "response_format",
      "temperature",
      "top_k",
      "top_p"
    ],
    "default_parameters": {},
    "expiration_date": null
  }
}
```