Google: Gemma 3n 4B (free)

Server-rendered model summary page for indexing/share previews. Use the interactive explorer for full filtering and comparison.

Match confidence: Unmatched
Source type: openrouter_only
Context window: 8.2K
Arena overall rank: —
Input price: $0.000 / 1M
Output price: $0.000 / 1M

Identifiers & provenance

Primary ID
google/gemma-3n-e4b-it:free
OpenRouter ID
google/gemma-3n-e4b-it:free
Canonical slug
google/gemma-3n-e4b-it

Source semantics

  • Arena rank is a human-preference leaderboard signal, not a universal truth metric.
  • OpenRouter usage/popularity reflects adoption/traffic, not benchmark quality.
  • Pricing fields may differ by provider and can include extra modes beyond prompt/completion.

Read more on Methodology & data sources.
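The per-million (PPM) pricing fields above combine with token counts in the obvious way. A minimal sketch (the nonzero prices in the second call are hypothetical; this free endpoint's PPM fields are both 0):

```python
def request_cost(prompt_tokens: int, completion_tokens: int,
                 ppm_prompt: float, ppm_completion: float) -> float:
    """Cost in USD, given per-million-token (PPM) prompt/completion prices."""
    return (prompt_tokens / 1_000_000) * ppm_prompt \
         + (completion_tokens / 1_000_000) * ppm_completion

# This endpoint: PPM prompt = 0, PPM completion = 0, so any request is free.
free_cost = request_cost(8_192, 2_048, 0, 0)      # → 0.0

# Hypothetical paid endpoint at $0.50 / 1M prompt and $1.00 / 1M completion:
paid_cost = request_cost(1_000_000, 500_000, 0.5, 1.0)  # → 1.0
```

Note that some providers also bill extra modes (e.g. cached or reasoning tokens) beyond the prompt/completion pair modeled here.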

Description

Gemma 3n E4B-it is optimized for efficient execution on mobile and low-resource devices, such as phones, laptops, and tablets. It supports multimodal inputs—including text, visual data, and audio—enabling diverse tasks such as text generation, speech recognition, translation, and image analysis. Leveraging innovations like Per-Layer Embedding (PLE) caching and the MatFormer architecture, Gemma 3n dynamically manages memory usage and computational load by selectively activating model parameters, significantly reducing runtime resource requirements.

This model supports a wide linguistic range (trained in over 140 languages) and features a flexible 32K token context window. Gemma 3n can selectively load parameters, optimizing memory and computational efficiency based on the task or device capabilities, making it well-suited for privacy-focused, offline-capable applications and on-device AI solutions. [Read more in the blog post](https://developers.googleblog.com/en/introducing-gemma-3n/)

Raw fields snapshot

{
  "id": "google/gemma-3n-e4b-it:free",
  "name": "Google: Gemma 3n 4B (free)",
  "description": "Gemma 3n E4B-it is optimized for efficient execution on mobile and low-resource devices, such as phones, laptops, and tablets. It supports multimodal inputs—including text, visual data, and audio—enabling diverse tasks such as text generation, speech recognition, translation, and image analysis. Leveraging innovations like Per-Layer Embedding (PLE) caching and the MatFormer architecture, Gemma 3n dynamically manages memory usage and computational load by selectively activating model parameters, significantly reducing runtime resource requirements.\n\nThis model supports a wide linguistic range (trained in over 140 languages) and features a flexible 32K token context window. Gemma 3n can selectively load parameters, optimizing memory and computational efficiency based on the task or device capabilities, making it well-suited for privacy-focused, offline-capable applications and on-device AI solutions. [Read more in the blog post](https://developers.googleblog.com/en/introducing-gemma-3n/)",
  "created": 1747776824,
  "canonical_slug": "google/gemma-3n-e4b-it",
  "hugging_face_id": "google/gemma-3n-E4B-it",
  "source_type": "openrouter_only",
  "context_length": 8192,
  "max_completion_tokens": 2048,
  "is_moderated": false,
  "architecture": {
    "modality": "text->text",
    "input_modalities": [
      "text"
    ],
    "output_modalities": [
      "text"
    ],
    "tokenizer": "Other",
    "instruct_type": null
  },
  "input_modalities": [
    "text"
  ],
  "output_modalities": [
    "text"
  ],
  "modality": "text->text",
  "tokenizer": "Other",
  "instruct_type": null,
  "supported_parameters": [
    "frequency_penalty",
    "max_tokens",
    "presence_penalty",
    "response_format",
    "seed",
    "stop",
    "temperature",
    "top_p"
  ],
  "default_parameters": {},
  "per_request_limits": null,
  "top_provider": {
    "context_length": 8192,
    "max_completion_tokens": 2048,
    "is_moderated": false
  },
  "pricing": {
    "prompt": "0",
    "completion": "0"
  },
  "PPM": {
    "prompt": 0,
    "completion": 0
  },
  "openrouter_raw": {
    "id": "google/gemma-3n-e4b-it:free",
    "canonical_slug": "google/gemma-3n-e4b-it",
    "hugging_face_id": "google/gemma-3n-E4B-it",
    "name": "Google: Gemma 3n 4B (free)",
    "created": 1747776824,
    "description": "Gemma 3n E4B-it is optimized for efficient execution on mobile and low-resource devices, such as phones, laptops, and tablets. It supports multimodal inputs—including text, visual data, and audio—enabling diverse tasks such as text generation, speech recognition, translation, and image analysis. Leveraging innovations like Per-Layer Embedding (PLE) caching and the MatFormer architecture, Gemma 3n dynamically manages memory usage and computational load by selectively activating model parameters, significantly reducing runtime resource requirements.\n\nThis model supports a wide linguistic range (trained in over 140 languages) and features a flexible 32K token context window. Gemma 3n can selectively load parameters, optimizing memory and computational efficiency based on the task or device capabilities, making it well-suited for privacy-focused, offline-capable applications and on-device AI solutions. [Read more in the blog post](https://developers.googleblog.com/en/introducing-gemma-3n/)",
    "context_length": 8192,
    "architecture": {
      "modality": "text->text",
      "input_modalities": [
        "text"
      ],
      "output_modalities": [
        "text"
      ],
      "tokenizer": "Other",
      "instruct_type": null
    },
    "pricing": {
      "prompt": "0",
      "completion": "0"
    },
    "top_provider": {
      "context_length": 8192,
      "max_completion_tokens": 2048,
      "is_moderated": false
    },
    "per_request_limits": null,
    "supported_parameters": [
      "frequency_penalty",
      "max_tokens",
      "presence_penalty",
      "response_format",
      "seed",
      "stop",
      "temperature",
      "top_p"
    ],
    "default_parameters": {},
    "expiration_date": null
  }
}
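The `supported_parameters` list in the snapshot is the set of sampling options this endpoint accepts. A minimal sketch of building a request payload against that list (the endpoint URL and header shape in the note below follow OpenRouter's public chat-completions API; the option values here are illustrative):

```python
# Sampling options this endpoint accepts, copied from supported_parameters above.
SUPPORTED = {"frequency_penalty", "max_tokens", "presence_penalty",
             "response_format", "seed", "stop", "temperature", "top_p"}

def build_payload(messages: list[dict], **options) -> dict:
    """Build a chat-completions payload, rejecting options the endpoint
    does not list in supported_parameters."""
    unsupported = set(options) - SUPPORTED
    if unsupported:
        raise ValueError(f"unsupported parameters: {sorted(unsupported)}")
    return {"model": "google/gemma-3n-e4b-it:free",
            "messages": messages,
            **options}

payload = build_payload([{"role": "user", "content": "Hello"}],
                        temperature=0.7, max_tokens=256)
```

The resulting dict would be POSTed as JSON to `https://openrouter.ai/api/v1/chat/completions` with an `Authorization: Bearer <API key>` header; note that `max_tokens` is capped at 2048 by the top provider's `max_completion_tokens`.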