Gemma 3 12B · Gemma license
Google's efficient 12B model with strong chat and coding support.
Gemma 3 12B (12.2B parameters) requires 10.5 GB of VRAM at the recommended quality (Q6_K). At efficient quality (Q4_K_M) it fits in 7 GB of VRAM, making it compatible with the NVIDIA GeForce RTX 3060 12GB. On an NVIDIA GeForce RTX 4090, expect approximately 75 tok/s at Q5_K_M. For the best experience, the Starter AI Desktop ($582) is recommended.
— OwnRig methodology, data updated 2026-03-15
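As a concrete starting point, here is a minimal llama-cpp-python sketch for running the Q4_K_M build on a 12 GB card. The GGUF filename is a hypothetical local path, and the 8K context is one reasonable choice, not a requirement.

```python
# Minimal local-inference sketch using llama-cpp-python (pip install llama-cpp-python).
# The model filename below is an assumption -- use whatever Q4_K_M GGUF you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="gemma-3-12b-it-Q4_K_M.gguf",  # hypothetical local path
    n_gpu_layers=-1,  # offload all layers to the GPU (~7 GB at Q4_K_M)
    n_ctx=8192,       # 8K context; see the KV cache table for the VRAM cost
)

out = llm("Explain quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```

Setting `n_gpu_layers=-1` offloads every layer; on cards with less headroom, offload fewer layers or drop to Q3_K_M (5.7 GB).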
VRAM and file-size requirements for Gemma 3 12B at each quantization level.
| Quality | Quantization | VRAM (weights) | File Size |
|---|---|---|---|
| full | Q8_0 | 13.5 GB | 12 GB |
| recommended | Q6_K | 10.5 GB | 9.2 GB |
| recommended | Q5_K_M | 8.8 GB | 7.5 GB |
| efficient | Q4_K_M | 7 GB | 6 GB |
| compressed | Q3_K_M | 5.7 GB | 4.8 GB |
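The file sizes above follow from bits-per-weight arithmetic. A rough sketch, using commonly cited average bits-per-weight for each k-quant; these averages are approximations, and the table's figures are themselves estimates, so expect disagreement in the 10–15% range.

```python
# Back-of-envelope GGUF size estimate: params * bits-per-weight / 8.
# Bits-per-weight values are approximate averages for each quant mix,
# including scales and metadata -- not exact per-model figures.
PARAMS = 12.2e9  # Gemma 3 12B

BITS_PER_WEIGHT = {
    "Q8_0": 8.5,
    "Q6_K": 6.56,
    "Q5_K_M": 5.67,
    "Q4_K_M": 4.85,
    "Q3_K_M": 3.91,
}

for quant, bpw in BITS_PER_WEIGHT.items():
    gib = PARAMS * bpw / 8 / 1024**3
    print(f"{quant}: ~{gib:.1f} GiB on disk")
```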
KV cache VRAM overhead at Q6_K quality; longer contexts need more memory. An estimator sketch derived from these figures follows the table.
| Context | KV Cache | Total VRAM |
|---|---|---|
| 2K | 205 MB | 10.7 GB |
| 4K | 410 MB | 10.9 GB |
| 8K | 819 MB | 11.3 GB |
| 16K | 1.5 GB | 12 GB |
| 32K | 3.1 GB | 13.6 GB |
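The table scales linearly at roughly 100 KB of KV cache per token. A small estimator under that assumption; the per-token figure is derived from the table itself, and the true cost depends on Gemma 3's interleaved local/global attention layers and the KV cache precision in use.

```python
# KV cache estimator derived from the table above: ~100 KB per token of context
# (205 MB / 2048 tokens, from the 2K row). An estimate, not a measured constant.
KV_KB_PER_TOKEN = 100
BASE_WEIGHTS_GB = 10.5  # Q6_K weights from the quantization table

def total_vram_gb(context_tokens: int) -> float:
    kv_gb = context_tokens * KV_KB_PER_TOKEN / 1024**2
    return BASE_WEIGHTS_GB + kv_gb

for ctx in (2048, 4096, 8192, 16384, 32768):
    print(f"{ctx // 1024}K context -> ~{total_vram_gb(ctx):.1f} GB total VRAM")
```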
Performance data for Gemma 3 12B across different hardware; a bandwidth-based sanity check follows the table.
| Device | Quantization | Speed | Rating | Fits in VRAM |
|---|---|---|---|---|
| NVIDIA GeForce RTX 3060 12GB | Q4_K_M | 32 tok/s | Good | ✓ |
| NVIDIA GeForce RTX 4060 Ti 16GB | Q5_K_M | 42 tok/s | Good | ✓ |
| NVIDIA GeForce RTX 4090 | Q5_K_M | 75 tok/s | Excellent | ✓ |
| NVIDIA GeForce RTX 5080 | Q5_K_M | 72 tok/s | Excellent | ✓ |
| NVIDIA GeForce RTX 4060 8GB | Q3_K_M | 18 tok/s | Marginal | ✓ |
| NVIDIA GeForce RTX 4070 Ti 12GB | Q4_K_M | 32 tok/s | Good | ✓ |
| NVIDIA GeForce RTX 3080 10GB | Q3_K_M | 28 tok/s | Acceptable | ✓ |
| Apple M3 Pro (18GB Unified) | Q3_K_M | 5 tok/s | Marginal | ✓ |
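Decode speed on these GPUs is largely memory-bandwidth bound: each generated token streams the full weight file through the memory bus. A rough sanity check, assuming bandwidth figures from public spec sheets and an efficiency factor fitted to the table above, not measured independently.

```python
# Rough decode-speed ceiling: memory bandwidth / weight-file size.
# Bandwidths are from public spec sheets; the 0.55 efficiency factor
# is fitted to the table above, not a measured constant.
GPUS_GBPS = {
    "RTX 4090": 1008,
    "RTX 3060 12GB": 360,
}
FILE_GB = {"Q5_K_M": 7.5, "Q4_K_M": 6.0}
EFFICIENCY = 0.55  # real-world fraction of peak bandwidth

for gpu, bw in GPUS_GBPS.items():
    for quant, size in FILE_GB.items():
        print(f"{gpu} @ {quant}: ~{bw / size * EFFICIENCY:.0f} tok/s")
```

This reproduces the table within a few tok/s (4090 at Q5_K_M: ~74 vs 75; 3060 at Q4_K_M: ~33 vs 32), which is why faster-memory cards dominate the rankings.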
Gemma 3 12B is commonly used with Cursor, Continue, Aider, Open WebUI, and LM Studio. For an AI coding workflow, pair it with an embedding model such as nomic-embed-text for local RAG, as sketched below.
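A minimal local-RAG sketch against the Ollama HTTP API; it assumes a stock Ollama server on localhost with the `gemma3:12b` and `nomic-embed-text` models already pulled, and the sample documents are placeholders.

```python
# Minimal local RAG against a stock Ollama server (http://localhost:11434).
# Assumes `ollama pull gemma3:12b` and `ollama pull nomic-embed-text` were run.
import requests

OLLAMA = "http://localhost:11434"

def embed(text: str) -> list[float]:
    r = requests.post(f"{OLLAMA}/api/embeddings",
                      json={"model": "nomic-embed-text", "prompt": text})
    return r.json()["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

# Placeholder corpus; in practice, chunks of your codebase or docs.
docs = ["Q4_K_M fits Gemma 3 12B in about 7 GB of VRAM.",
        "KV cache grows linearly with context length."]
doc_vecs = [embed(d) for d in docs]

question = "How much VRAM does Q4_K_M need?"
q_vec = embed(question)
best = max(zip(docs, doc_vecs), key=lambda dv: cosine(q_vec, dv[1]))[0]

r = requests.post(f"{OLLAMA}/api/generate",
                  json={"model": "gemma3:12b", "stream": False,
                        "prompt": f"Context: {best}\n\nQuestion: {question}"})
print(r.json()["response"])
```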
Complete PC builds that can run Gemma 3 12B.
Data confidence: estimated. Last updated: 2026-03-15.