Gemma · Gemma License
Google's 9B model, trained with knowledge distillation from a larger teacher model. Competitive with Llama 3.1 8B on most benchmarks; the shorter maximum context (8K) is its main limitation versus Llama.
Gemma 2 9B Instruct (9.24B parameters) requires 6.6 GB of VRAM at the recommended quality level (Q5_K_M). At the efficient level (Q4_K_M) it fits in 5.6 GB, making it compatible with the NVIDIA GeForce RTX 3060 12GB. On an NVIDIA GeForce RTX 4090, expect approximately 80 tok/s at Q8_0. For the best experience, the Starter AI Desktop build ($582) is recommended.
— OwnRig methodology, data updated 2026-03-01
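As a quick sanity check on figures like these, weight memory scales with the parameter count and the average bits per weight of the quantization. Below is a minimal sketch, assuming rough bits-per-weight averages for each GGUF quantization; these averages are an assumption, not part of the OwnRig data, and real files differ because individual tensors use mixed precisions.

```python
# Ballpark weight size: parameters * bits-per-weight / 8.
# The bits-per-weight values are rough averages for common GGUF quantizations
# (an assumption, not OwnRig data); actual files vary per release.
PARAMS = 9.24e9  # Gemma 2 9B Instruct

APPROX_BPW = {"Q8_0": 8.5, "Q5_K_M": 5.7, "Q4_K_M": 4.8, "Q3_K_M": 3.9}

for quant, bpw in APPROX_BPW.items():
    gb = PARAMS * bpw / 8 / 1e9  # bits -> bytes -> GB
    print(f"{quant}: ~{gb:.1f} GB of weights")
```

These ballpark figures land in the same range as the measured VRAM column in the table below, which is the data to rely on.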
| Quality | Quantization | VRAM | File Size |
|---|---|---|---|
| full | Q8_0 | 10.2 GB | 9.2 GB |
| recommended | Q5_K_M | 6.6 GB | 5.5 GB |
| efficient | Q4_K_M | 5.6 GB | 4.6 GB |
| compressed | Q3_K_M | 4.6 GB | 3.6 GB |
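To turn the table into a choice, one approach is to pick the highest-quality quantization whose VRAM figure fits the card with room left for the KV cache and runtime buffers. A minimal sketch, using the VRAM figures above and an assumed 2 GB headroom margin (the margin is a simplification of ours, not part of the OwnRig methodology):

```python
# VRAM figures (GB) copied from the quantization table above, best quality first.
QUANT_VRAM_GB = [
    ("Q8_0", 10.2),
    ("Q5_K_M", 6.6),
    ("Q4_K_M", 5.6),
    ("Q3_K_M", 4.6),
]
HEADROOM_GB = 2.0  # assumed margin for KV cache and runtime buffers


def best_quant(gpu_vram_gb: float) -> str | None:
    """Return the highest-quality quantization that fits the card with headroom."""
    for quant, vram in QUANT_VRAM_GB:
        if vram + HEADROOM_GB <= gpu_vram_gb:
            return quant
    return None


print(best_quant(12.0))  # e.g. a 12 GB card -> "Q5_K_M"
print(best_quant(24.0))  # e.g. a 24 GB card -> "Q8_0"
```

The per-device table further down reflects the actual recommended pairings; the helper above is only a first-pass filter.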
KV cache VRAM overhead at Q5_K_M quality; longer context windows require more memory.
| Context | KV Cache | Total VRAM |
|---|---|---|
| 2K | 102 MB | 6.7 GB |
| 4K | 307 MB | 6.9 GB |
| 8K | 614 MB | 7.2 GB |
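For context sizes between these rows, the KV cache cost can be treated as roughly linear in the number of tokens. The sketch below extrapolates from the page's own 8K figure rather than assuming a particular cache format; the per-token cost and the 6.6 GB base are taken from the tables above, and since 8K is the model's maximum context this is only useful for intermediate sizes.

```python
# KV cache grows roughly linearly with context length. Rather than assume a
# cache format, this scales the page's own figure (614 MB at 8192 tokens).
# A from-first-principles estimate would instead use
# 2 * layers * kv_heads * head_dim * tokens * bytes_per_element.
MB_PER_TOKEN = 614 / 8192   # ~0.075 MB per token under the page's methodology
WEIGHTS_VRAM_GB = 6.6       # Q5_K_M figure from the quantization table


def total_vram_gb(context_tokens: int) -> float:
    """Weights plus an extrapolated KV cache, in GB."""
    return WEIGHTS_VRAM_GB + MB_PER_TOKEN * context_tokens / 1024


for ctx in (2048, 4096, 6144, 8192):
    print(f"{ctx:5d} tokens: ~{total_vram_gb(ctx):.1f} GB total")
```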
Performance data for Gemma 2 9B Instruct across different hardware.
| Device | Quantization | Speed | Rating | Fits in VRAM |
|---|---|---|---|---|
| NVIDIA GeForce RTX 3060 12GB | Q5_K_M | 30 tok/s | Good | ✓ |
| NVIDIA GeForce RTX 4090 | Q8_0 | 80 tok/s | Excellent | ✓ |
| NVIDIA GeForce RTX 4060 8GB | Q4_K_M | 28 tok/s | Good | ✓ |
| NVIDIA GeForce RTX 4070 Ti 12GB | Q5_K_M | 48 tok/s | Excellent | ✓ |
| NVIDIA GeForce RTX 3080 10GB | Q4_K_M | 45 tok/s | Excellent | ✓ |
| Apple M3 Pro (18GB Unified) | Q4_K_M | 13 tok/s | Acceptable | ✓ |
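As a concrete way to reproduce numbers like these locally, the snippet below loads a Q5_K_M GGUF with llama-cpp-python, offloads all layers to the GPU, and uses the full 8K context. The runtime choice and the local filename are assumptions; any GGUF-capable runtime (llama.cpp, Ollama, LM Studio) works the same way in principle.

```python
from llama_cpp import Llama  # pip install llama-cpp-python (built with GPU support)

llm = Llama(
    model_path="gemma-2-9b-it-Q5_K_M.gguf",  # placeholder filename for a local download
    n_ctx=8192,       # Gemma 2's maximum context
    n_gpu_layers=-1,  # offload every layer to the GPU
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain KV cache memory in two sentences."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

Measured tok/s will also depend on the backend build, drivers, and power limits, not just the GPU model in the table above.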
Complete PC builds that can run Gemma 2 9B Instruct.
Data confidence: verified. Last updated: 2026-03-01.