Google: Gemma 3n 2B (free)
Server-rendered model summary page for indexing/share previews. Use the interactive explorer for full filtering and comparison.
Match confidence: Unmatched
Source type: openrouter_only
Context window: 8.2K tokens
Arena overall rank: —
Input price: $0.000 / 1M tokens
Output price: $0.000 / 1M tokens
Identifiers & provenance
- Primary ID: google/gemma-3n-e2b-it:free
- OpenRouter ID: google/gemma-3n-e2b-it:free
- Canonical slug: google/gemma-3n-e2b-it
Source semantics
- Arena rank is a human-preference leaderboard signal, not a universal truth metric.
- OpenRouter usage/popularity reflects adoption/traffic, not benchmark quality.
- Pricing fields may differ by provider and can include extra modes beyond prompt/completion.
Read more on Methodology & data sources.
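The pricing semantics above can be made concrete with a small sketch: OpenRouter reports prices as USD-per-token strings, and the per-1M figures shown on this page are derived from them. The helper name below is hypothetical; only the field names come from the snapshot.

```python
# Hypothetical helper: convert OpenRouter's per-token pricing strings
# into the per-1M-token figures displayed on summary pages like this one.
def price_per_million(per_token: str) -> float:
    """OpenRouter prices are USD-per-token strings ("0", "0.0000002", ...)."""
    return float(per_token) * 1_000_000

# "pricing" block taken verbatim from the raw snapshot below.
pricing = {"prompt": "0", "completion": "0"}
ppm = {k: price_per_million(v) for k, v in pricing.items()}
print(ppm)  # {'prompt': 0.0, 'completion': 0.0}
```

Note that some providers expose extra pricing modes (e.g. per-image or per-request) beyond prompt/completion; those would appear as additional keys and need the same per-unit care.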
Description
Gemma 3n E2B IT is a multimodal, instruction-tuned model developed by Google DeepMind, designed to operate efficiently at an effective parameter size of 2B while leveraging a 6B architecture. Based on the MatFormer architecture, it supports nested submodels and modular composition via the Mix-and-Match framework. Gemma 3n models are optimized for low-resource deployment, offering 32K context length and strong multilingual and reasoning performance across common benchmarks. This variant is trained on a diverse corpus including code, math, web, and multimodal data. (Note: despite the upstream description's 32K/multimodal claims, this free OpenRouter endpoint reports a text-only modality, an 8,192-token context window, and a 2,048-token completion cap.)
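To make the listing actionable, here is a minimal sketch of a request body for this model, using only the knobs listed under "supported_parameters" in the snapshot below. The endpoint and field names follow OpenRouter's public chat-completions API, but verify them against current docs; the prompt content and parameter values are illustrative.

```python
import json

# Request payload restricted to this model's supported_parameters.
payload = {
    "model": "google/gemma-3n-e2b-it:free",
    "messages": [
        {"role": "user", "content": "Summarize the MatFormer idea in one sentence."}
    ],
    "max_tokens": 256,   # must stay within max_completion_tokens (2048)
    "temperature": 0.7,
    "top_p": 0.9,
    "seed": 42,          # supported: makes sampling reproducible
    "stop": ["\n\n"],
}
body = json.dumps(payload)
# POST this body to https://openrouter.ai/api/v1/chat/completions
# with an "Authorization: Bearer <OPENROUTER_API_KEY>" header.
print(body)
```

Parameters not in the supported list (e.g. tool calling or logprobs) would be rejected or silently ignored by this endpoint, so it is safest to build the payload from the advertised set.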
Raw fields snapshot
{
  "id": "google/gemma-3n-e2b-it:free",
  "name": "Google: Gemma 3n 2B (free)",
  "description": "Gemma 3n E2B IT is a multimodal, instruction-tuned model developed by Google DeepMind, designed to operate efficiently at an effective parameter size of 2B while leveraging a 6B architecture. Based on the MatFormer architecture, it supports nested submodels and modular composition via the Mix-and-Match framework. Gemma 3n models are optimized for low-resource deployment, offering 32K context length and strong multilingual and reasoning performance across common benchmarks. This variant is trained on a diverse corpus including code, math, web, and multimodal data.",
  "created": 1752074904,
  "canonical_slug": "google/gemma-3n-e2b-it",
  "hugging_face_id": "google/gemma-3n-E2B-it",
  "source_type": "openrouter_only",
  "context_length": 8192,
  "max_completion_tokens": 2048,
  "is_moderated": false,
  "architecture": {
    "modality": "text->text",
    "input_modalities": ["text"],
    "output_modalities": ["text"],
    "tokenizer": "Other",
    "instruct_type": null
  },
  "input_modalities": ["text"],
  "output_modalities": ["text"],
  "modality": "text->text",
  "tokenizer": "Other",
  "instruct_type": null,
  "supported_parameters": [
    "frequency_penalty",
    "max_tokens",
    "presence_penalty",
    "response_format",
    "seed",
    "stop",
    "temperature",
    "top_p"
  ],
  "default_parameters": {},
  "per_request_limits": null,
  "top_provider": {
    "context_length": 8192,
    "max_completion_tokens": 2048,
    "is_moderated": false
  },
  "pricing": {
    "prompt": "0",
    "completion": "0"
  },
  "PPM": {
    "prompt": 0,
    "completion": 0
  },
  "openrouter_raw": {
    "id": "google/gemma-3n-e2b-it:free",
    "canonical_slug": "google/gemma-3n-e2b-it",
    "hugging_face_id": "google/gemma-3n-E2B-it",
    "name": "Google: Gemma 3n 2B (free)",
    "created": 1752074904,
    "description": "Gemma 3n E2B IT is a multimodal, instruction-tuned model developed by Google DeepMind, designed to operate efficiently at an effective parameter size of 2B while leveraging a 6B architecture. Based on the MatFormer architecture, it supports nested submodels and modular composition via the Mix-and-Match framework. Gemma 3n models are optimized for low-resource deployment, offering 32K context length and strong multilingual and reasoning performance across common benchmarks. This variant is trained on a diverse corpus including code, math, web, and multimodal data.",
    "context_length": 8192,
    "architecture": {
      "modality": "text->text",
      "input_modalities": ["text"],
      "output_modalities": ["text"],
      "tokenizer": "Other",
      "instruct_type": null
    },
    "pricing": {
      "prompt": "0",
      "completion": "0"
    },
    "top_provider": {
      "context_length": 8192,
      "max_completion_tokens": 2048,
      "is_moderated": false
    },
    "per_request_limits": null,
    "supported_parameters": [
      "frequency_penalty",
      "max_tokens",
      "presence_penalty",
      "response_format",
      "seed",
      "stop",
      "temperature",
      "top_p"
    ],
    "default_parameters": {},
    "expiration_date": null
  }
}
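When consuming a snapshot like the one above, the effective limits should come from the top_provider block where present, since per-provider limits can differ from the model-level fields. The fallback logic below is an assumption; the field names are taken from the snapshot.

```python
import json

# A trimmed stand-in for the raw snapshot above (same field names).
snapshot = json.loads("""{
  "context_length": 8192,
  "max_completion_tokens": 2048,
  "top_provider": {"context_length": 8192, "max_completion_tokens": 2048}
}""")

# Prefer top_provider limits, falling back to the model-level fields.
provider = snapshot.get("top_provider") or {}
ctx = provider.get("context_length", snapshot["context_length"])
max_out = provider.get("max_completion_tokens", snapshot.get("max_completion_tokens"))
print(ctx, max_out)  # 8192 2048
```

For this model the two agree, but free-tier endpoints often cap the window below the base model's advertised context, which is exactly the 8,192 vs 32K discrepancy visible on this page.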