Mistral · Apache 2.0
Mistral's efficient 24B model with strong chat, coding, and reasoning.
Mistral Small 24B Instruct requires 20.5 GB of VRAM at recommended quality (Q6_K). At efficient quality (Q4_K_M) it needs 14 GB of VRAM, which fits on the 16 GB NVIDIA GeForce RTX 4070 Ti Super. On an NVIDIA GeForce RTX 5090, expect approximately 55 tok/s at Q5_K_M. For the best experience, the AMD AI Powerhouse ($1,818) is recommended.
— OwnRig methodology, data updated 2026-03-15
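If you run the model through llama.cpp's Python bindings (llama-cpp-python), loading it fully on the GPU looks like the sketch below. The GGUF filename and the prompt are placeholders, and `n_gpu_layers=-1` assumes the quantization you chose actually fits in VRAM per the table that follows.

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="mistral-small-24b-instruct-q4_k_m.gguf",  # placeholder filename
    n_gpu_layers=-1,  # offload every layer to the GPU; needs the quant to fit in VRAM
    n_ctx=8192,       # context window; see the KV cache table for its memory cost
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```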
VRAM and file size for Mistral Small 24B Instruct at each quantization level.
| Quality | Quantization | VRAM Required | File Size |
|---|---|---|---|
| full | Q8_0 | 26.5 GB | 24 GB |
| recommended | Q6_K | 20.5 GB | 18 GB |
| recommended | Q5_K_M | 17.2 GB | 15 GB |
| efficient | Q4_K_M | 14.0 GB | 12.0 GB |
| compressed | Q3_K_M | 11.2 GB | 9.5 GB |
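The file sizes above track each quantization's nominal bits per weight (24B parameters × bits ÷ 8), and the VRAM column adds loading overhead on top. A rough estimator, assuming a flat ~2 GB of runtime overhead (our assumption, not OwnRig's exact methodology):

```python
# Back-of-the-envelope sizing for a 24B-parameter GGUF model.
# Bits-per-weight values are nominal; real K-quant files mix precisions.
PARAMS = 24e9
NOMINAL_BITS = {"Q8_0": 8.0, "Q6_K": 6.0, "Q5_K_M": 5.0, "Q4_K_M": 4.0, "Q3_K_M": 3.17}
RUNTIME_OVERHEAD_GB = 2.0  # buffers and scratch space (assumption)

for quant, bits in NOMINAL_BITS.items():
    file_gb = PARAMS * bits / 8 / 1e9
    vram_gb = file_gb + RUNTIME_OVERHEAD_GB
    print(f"{quant}: ~{file_gb:.1f} GB file, ~{vram_gb:.1f} GB VRAM")
```

This lands within about half a gigabyte of the table; the true overhead varies with the runtime and context length.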
KV cache VRAM on top of the Q6_K model weights (20.5 GB). Longer context windows require more memory.
| Context | KV Cache | Total VRAM |
|---|---|---|
| 2K | 410 MB | 20.9 GB |
| 4K | 819 MB | 21.3 GB |
| 8K | 1.5 GB | 22 GB |
| 16K | 3.1 GB | 23.6 GB |
| 32K | 6.1 GB | 26.6 GB (exceeds 24 GB) |
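For intuition, the KV cache grows linearly with context: every token stores one key and one value vector per layer per KV head. The sketch below implements the standard formula with placeholder architecture constants (layer count, KV-head count, head dimension, and cache precision are all assumptions, so it illustrates the scaling rather than reproducing OwnRig's exact figures):

```python
def kv_cache_bytes(n_tokens: int, n_layers: int, n_kv_heads: int,
                   head_dim: int, bytes_per_elem: float) -> float:
    # Factor of 2 covers the key tensor plus the value tensor.
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem * n_tokens

# Placeholder constants for a GQA transformer with an fp16 cache (assumptions).
for ctx in (2048, 4096, 8192, 16384, 32768):
    gb = kv_cache_bytes(ctx, n_layers=40, n_kv_heads=8,
                        head_dim=128, bytes_per_elem=2) / 1e9
    print(f"{ctx:>6} tokens: ~{gb:.2f} GB")
```

Doubling the context doubles the cache, which is why the table's rows roughly double at each step.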
Performance data for Mistral Small 24B Instruct across different hardware.
| Device | Quantization | Speed | Rating | Fits in VRAM |
|---|---|---|---|---|
| NVIDIA GeForce RTX 4090 | Q5_K_M | 32 tok/s | Good | ✓ |
| NVIDIA GeForce RTX 4070 Ti Super | Q3_K_M | 18 tok/s | Acceptable | ✓ |
| Apple M4 Max (64GB Unified) | Q5_K_M | 22 tok/s | Good | ✓ |
| NVIDIA GeForce RTX 5090 | Q5_K_M | 55 tok/s | Excellent | ✓ |
| NVIDIA GeForce RTX 4060 8GB | Q3_K_M | — | Not Viable | ✗ (needs offload) |
| NVIDIA GeForce RTX 4070 Ti 12GB | Q3_K_M | — | Not Viable | ✗ (needs offload) |
| NVIDIA GeForce RTX 3080 10GB | Q2_K | — | Not Viable | ✗ (needs offload) |
| Apple M3 Pro (18GB Unified) | Q3_K_M | — | Not Viable | ✗ (needs offload) |
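The pass/fail pattern above reduces to a budget check: model VRAM at the chosen quantization plus the KV cache for your context must fit within the card's memory, otherwise layers spill to system RAM and throughput collapses. A minimal sketch (quant figures taken from the quantization table; the 90% usable-VRAM factor is our assumption, leaving headroom for the OS and display):

```python
# Quantization -> model VRAM in GB, from the table above.
QUANT_VRAM_GB = {"Q8_0": 26.5, "Q6_K": 20.5, "Q5_K_M": 17.2,
                 "Q4_K_M": 14.0, "Q3_K_M": 11.2}

def fits(gpu_vram_gb: float, quant: str, kv_cache_gb: float,
         usable_fraction: float = 0.9) -> bool:
    """True if model weights plus KV cache fit without CPU offload."""
    return QUANT_VRAM_GB[quant] + kv_cache_gb <= gpu_vram_gb * usable_fraction

print(fits(24.0, "Q5_K_M", kv_cache_gb=1.5))  # RTX 4090, 8K context -> True
print(fits(12.0, "Q3_K_M", kv_cache_gb=0.8))  # 12 GB card -> False (needs offload)
```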
Mistral Small 24B Instruct is commonly used with Cursor, Continue, Aider, Open WebUI, and LM Studio. For an AI coding workflow, pair it with an embedding model such as nomic-embed-text for local RAG.
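As a concrete sketch of that pairing, the snippet below runs a minimal retrieval loop against a local Ollama server. It assumes Ollama is running on the default port with nomic-embed-text and a Mistral Small tag already pulled; the tag name `mistral-small` and the document snippets are placeholders.

```python
# Minimal local RAG sketch against Ollama (assumes `ollama serve` is running and
# nomic-embed-text plus a Mistral Small tag have been pulled; tag names vary).
import requests

OLLAMA = "http://localhost:11434"

def embed(text: str) -> list[float]:
    r = requests.post(f"{OLLAMA}/api/embeddings",
                      json={"model": "nomic-embed-text", "prompt": text})
    return r.json()["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / ((sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5))

# Placeholder corpus; in practice these come from your codebase or docs.
docs = [
    "The deploy script lives in scripts/deploy.sh and requires DEPLOY_ENV.",
    "Unit tests run with pytest; integration tests need a local Postgres.",
]
index = [(doc, embed(doc)) for doc in docs]

question = "How do I run the tests?"
q_vec = embed(question)
best_doc = max(index, key=lambda pair: cosine(q_vec, pair[1]))[0]

r = requests.post(f"{OLLAMA}/api/generate", json={
    "model": "mistral-small",  # placeholder tag
    "prompt": f"Context:\n{best_doc}\n\nQuestion: {question}\nAnswer:",
    "stream": False,
})
print(r.json()["response"])
```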
Data confidence: estimated. Last updated: 2026-03-15.