Venice: Uncensored (free)
Server-rendered model summary page for indexing/share previews. Use the interactive explorer for full filtering and comparison.
Match confidence: Unmatched
Source type: openrouter_only
Context window: 32.8K
Arena overall rank: —
Input price: $0.000 / 1M
Output price: $0.000 / 1M
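The compact "32.8K" context figure is a rounded rendering of the raw `context_length` of 32768 tokens. A minimal sketch of that rounding, assuming a one-decimal K/M suffix convention (the helper name is illustrative, not part of the page's code):

```python
def format_context(tokens: int) -> str:
    """Render a token count as a compact label, e.g. 32768 -> '32.8K'."""
    if tokens >= 1_000_000:
        return f"{tokens / 1_000_000:.1f}M"
    if tokens >= 1_000:
        return f"{tokens / 1_000:.1f}K"
    return str(tokens)

print(format_context(32768))  # -> 32.8K
```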
Identifiers & provenance
- Primary ID: cognitivecomputations/dolphin-mistral-24b-venice-edition:free
- OpenRouter ID: cognitivecomputations/dolphin-mistral-24b-venice-edition:free
- Canonical slug: venice/uncensored
Source semantics
- Arena rank is a human-preference leaderboard signal, not a universal truth metric.
- OpenRouter usage/popularity reflects adoption/traffic, not benchmark quality.
- Pricing fields may differ by provider and can include extra modes beyond prompt/completion.
Read more on Methodology & data sources.
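The per-million prices shown on this page are derived from OpenRouter's raw pricing strings, which express USD per single token. A minimal sketch of that conversion, assuming per-token USD strings as input (the `to_ppm` helper is illustrative):

```python
def to_ppm(pricing: dict) -> dict:
    """Convert per-token USD price strings into price-per-million-token floats."""
    return {
        k: float(v) * 1_000_000
        for k, v in pricing.items()
        if k in ("prompt", "completion")
    }

# For this free-tier entry both prices are "0", so PPM is zero as well.
print(to_ppm({"prompt": "0", "completion": "0"}))  # -> {'prompt': 0.0, 'completion': 0.0}
```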
Description
Venice Uncensored Dolphin Mistral 24B Venice Edition is a fine-tuned variant of Mistral-Small-24B-Instruct-2501, developed by dphn.ai in collaboration with Venice.ai. This model is designed as an “uncensored” instruct-tuned LLM, preserving user control over alignment, system prompts, and behavior. Intended for advanced and unrestricted use cases, Venice Uncensored emphasizes steerability and transparent behavior, removing default safety and alignment layers typically found in mainstream assistant models.
Raw fields snapshot
{
  "id": "cognitivecomputations/dolphin-mistral-24b-venice-edition:free",
  "name": "Venice: Uncensored (free)",
  "description": "Venice Uncensored Dolphin Mistral 24B Venice Edition is a fine-tuned variant of Mistral-Small-24B-Instruct-2501, developed by dphn.ai in collaboration with Venice.ai. This model is designed as an “uncensored” instruct-tuned LLM, preserving user control over alignment, system prompts, and behavior. Intended for advanced and unrestricted use cases, Venice Uncensored emphasizes steerability and transparent behavior, removing default safety and alignment layers typically found in mainstream assistant models.",
  "created": 1752094966,
  "canonical_slug": "venice/uncensored",
  "hugging_face_id": "cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition",
  "source_type": "openrouter_only",
  "context_length": 32768,
  "max_completion_tokens": null,
  "is_moderated": false,
  "architecture": {
    "modality": "text->text",
    "input_modalities": ["text"],
    "output_modalities": ["text"],
    "tokenizer": "Other",
    "instruct_type": null
  },
  "input_modalities": ["text"],
  "output_modalities": ["text"],
  "modality": "text->text",
  "tokenizer": "Other",
  "instruct_type": null,
  "supported_parameters": [
    "frequency_penalty",
    "max_tokens",
    "presence_penalty",
    "response_format",
    "stop",
    "structured_outputs",
    "temperature",
    "top_k",
    "top_p"
  ],
  "default_parameters": {},
  "per_request_limits": null,
  "top_provider": {
    "context_length": 32768,
    "max_completion_tokens": null,
    "is_moderated": false
  },
  "pricing": {
    "prompt": "0",
    "completion": "0"
  },
  "PPM": {
    "prompt": 0,
    "completion": 0
  },
  "openrouter_raw": {
    "id": "cognitivecomputations/dolphin-mistral-24b-venice-edition:free",
    "canonical_slug": "venice/uncensored",
    "hugging_face_id": "cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition",
    "name": "Venice: Uncensored (free)",
    "created": 1752094966,
    "description": "Venice Uncensored Dolphin Mistral 24B Venice Edition is a fine-tuned variant of Mistral-Small-24B-Instruct-2501, developed by dphn.ai in collaboration with Venice.ai. This model is designed as an “uncensored” instruct-tuned LLM, preserving user control over alignment, system prompts, and behavior. Intended for advanced and unrestricted use cases, Venice Uncensored emphasizes steerability and transparent behavior, removing default safety and alignment layers typically found in mainstream assistant models.",
    "context_length": 32768,
    "architecture": {
      "modality": "text->text",
      "input_modalities": ["text"],
      "output_modalities": ["text"],
      "tokenizer": "Other",
      "instruct_type": null
    },
    "pricing": {
      "prompt": "0",
      "completion": "0"
    },
    "top_provider": {
      "context_length": 32768,
      "max_completion_tokens": null,
      "is_moderated": false
    },
    "per_request_limits": null,
    "supported_parameters": [
      "frequency_penalty",
      "max_tokens",
      "presence_penalty",
      "response_format",
      "stop",
      "structured_outputs",
      "temperature",
      "top_k",
      "top_p"
    ],
    "default_parameters": {},
    "expiration_date": null
  }
}