Phi · MIT
14B model with strong reasoning and coding performance. Fits comfortably on 16GB GPUs at Q4 and excels at structured output tasks. MIT license makes it attractive for commercial use.
Phi-3 Medium 14B Instruct requires 9.7 GB of VRAM at recommended quality (Q5_K_M) and 8.2 GB at efficient quality (Q4_K_M). To run on the 8 GB NVIDIA GeForce RTX 4060, it must drop to compressed quality (Q3_K_M, 6.7 GB). On an NVIDIA GeForce RTX 4090, expect approximately 55 tok/s at Q8_0. For the best experience, the Starter AI Desktop ($582) is recommended.
— OwnRig methodology, data updated 2026-03-01
VRAM and file size for each quantization level of Phi-3 Medium 14B Instruct.
| Quality | Quantization | VRAM | File Size |
|---|---|---|---|
| full | Q8_0 | 15.2 GB | 14 GB |
| recommended | Q5_K_M | 9.7 GB | 8.4 GB |
| efficient | Q4_K_M | 8.2 GB | 7 GB |
| compressed | Q3_K_M | 6.7 GB | 5.5 GB |
KV cache VRAM at Q5_K_M quality; longer contexts require more memory.
| Context | KV Cache | Total VRAM |
|---|---|---|
| 2K | 205 MB | 9.9 GB |
| 4K | 410 MB | 10.1 GB |
| 8K | 819 MB | 10.5 GB |
| 16K | 1.5 GB | 11.2 GB |
| 32K | 3.1 GB | 12.8 GB |
| 64K | 6.1 GB | 15.8 GB |
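The KV cache scales linearly with context length: key and value tensors are stored for every layer, KV head, and token. A rough estimate, assuming Phi-3 Medium's published architecture (40 layers, grouped-query attention with 10 KV heads, head dim 128) and an 8-bit KV cache — assumptions on my part, since the table does not state them — lands close to the figures above:

```python
def kv_cache_bytes(n_tokens, n_layers=40, n_kv_heads=10, head_dim=128, bytes_per_val=1):
    """Estimate KV cache size: keys and values (hence the factor of 2),
    one vector of head_dim per KV head per layer per token."""
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_val * n_tokens

print(f"{kv_cache_bytes(2048) / 1e6:.0f} MB")  # ~210 MB, close to the table's 205 MB at 2K
```

Because the formula is linear in `n_tokens`, doubling the context doubles the cache, which matches the 2K → 4K → 8K progression in the table.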
Performance data for Phi-3 Medium 14B Instruct across different hardware.
| Device | Quantization | Speed | Rating | Fits in VRAM |
|---|---|---|---|---|
| NVIDIA GeForce RTX 4060 Ti 16GB | Q5_K_M | 28 tok/s | Good | ✓ |
| NVIDIA GeForce RTX 4090 | Q8_0 | 55 tok/s | Excellent | ✓ |
| NVIDIA GeForce RTX 4060 8GB | Q3_K_M | 20 tok/s | Marginal | ✓ |
| NVIDIA GeForce RTX 4070 Ti 12GB | Q3_K_M | 35 tok/s | Acceptable | ✓ |
| NVIDIA GeForce RTX 3080 10GB | Q3_K_M | 32 tok/s | Acceptable | ✓ |
| Apple M3 Pro (18GB Unified) | Q3_K_M | 6 tok/s | Marginal | ✓ |
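Whether a configuration "fits in VRAM" comes down to weights plus KV cache plus a little runtime overhead versus the card's memory. A sketch combining the two tables above (the 0.5 GB overhead allowance is an assumption; the KV figures used are the Q5_K_M ones):

```python
def fits_in_vram(weights_gb, kv_cache_gb, gpu_vram_gb, overhead_gb=0.5):
    """True if model weights + KV cache + an assumed runtime overhead
    (CUDA context, activations) fit within the GPU's VRAM."""
    return weights_gb + kv_cache_gb + overhead_gb <= gpu_vram_gb

# Q5_K_M weights (9.7 GB) with a 4K context (0.41 GB KV) on a 16 GB card:
print(fits_in_vram(9.7, 0.41, 16))   # True  (matches the RTX 4060 Ti 16GB row)
# Q8_0 weights (15.2 GB) with the same context on that 16 GB card:
print(fits_in_vram(15.2, 0.41, 16))  # False (Q8_0 needs the 24 GB RTX 4090)
```

This also explains the context-length trade-off: on a card that barely fits the weights, every step up in context eats directly into the remaining margin.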
Complete PC builds that can run Phi-3 Medium 14B Instruct.
Data confidence: verified. Last updated: 2026-03-01.