Mistral Small 24B Instruct on NVIDIA GeForce RTX 3080 10GB
NVIDIA GeForce RTX 3080 10GB cannot run Mistral Small 24B Instruct at any quantization level. The 10 GB of VRAM is insufficient.
Model Size: 24B
Device VRAM: 10 GB
Memory Bandwidth: 760 GB/s
Quantizations Tested: 1
Performance by Quantization
Each row shows Mistral Small 24B Instruct performance at a different quality level on NVIDIA GeForce RTX 3080 10GB.
| Quantization | Speed | TTFT | Fits in VRAM | Rating | Confidence |
|---|---|---|---|---|---|
| Q2_K | — | — | ✗ Offload | Not Viable | estimated |
Notes
Q2_K: At 24B parameters, the model exceeds 10 GB of VRAM even at Q2 quantization, so it does not fit without offloading.
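The fit verdict above can be sanity-checked with a back-of-the-envelope VRAM estimate: quantized weight size is parameters × bits-per-weight, plus an allowance for KV cache and runtime buffers. A minimal sketch; the ~3.0 bits-per-weight figure for Q2_K and the 1.5 GB overhead allowance are assumptions, not measurements from this page.

```python
# Rough VRAM estimate for a 24B model at Q2_K.
# Assumptions (not from this page): Q2_K k-quants average ~3.0 bits
# per weight; KV cache + runtime buffers need ~1.5 GB.
def vram_needed_gb(params_billion, bits_per_weight, overhead_gb=1.5):
    """Quantized weight size plus a flat allowance for cache/buffers."""
    weights_gb = params_billion * bits_per_weight / 8
    return weights_gb + overhead_gb

needed = vram_needed_gb(24, 3.0)
print(f"{needed:.1f} GB needed vs 10 GB available -> fits: {needed <= 10}")
# → 10.5 GB needed vs 10 GB available -> fits: False
```

Under these assumptions the weights alone come to 9.0 GB, leaving less than the cache and buffers require, which matches the "✗ Offload" verdict in the table.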
About Mistral Small 24B Instruct
Mistral Small 24B Instruct is Mistral's efficient 24B-parameter model with strong chat, coding, and reasoning capabilities.
View all Mistral Small 24B Instruct hardware options →
About NVIDIA GeForce RTX 3080 10GB
The NVIDIA GeForce RTX 3080 10GB has 10 GB of VRAM with 760 GB/s of memory bandwidth. Street price: $399.
See all models NVIDIA GeForce RTX 3080 10GB can run →
Source: Performance estimates based on model size and device bandwidth (2026-03-15)
Data last updated: 2026-03-15
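The source note above says speeds are estimated from model size and device bandwidth. A common first-order model for that: single-stream token generation is memory-bound, so tokens/s ≈ memory bandwidth ÷ bytes read per token (roughly the quantized weight size). A sketch of that methodology, not the site's exact formula; the 3.0 bits-per-weight figure for Q2_K is again an assumption.

```python
# First-order decode-speed estimate for a memory-bound workload:
# every generated token streams the full quantized weights from VRAM,
# so tokens/s ~= bandwidth / model size. (The 3.0 bpw for Q2_K is an
# assumed figure; real throughput is lower due to overheads.)
def est_tokens_per_sec(params_billion, bits_per_weight, bandwidth_gb_s):
    model_gb = params_billion * bits_per_weight / 8  # quantized weights
    return bandwidth_gb_s / model_gb

# Hypothetical upper bound if the model fit entirely in VRAM:
print(f"{est_tokens_per_sec(24, 3.0, 760):.0f} tok/s ceiling")
```

Since this model does not fit in 10 GB, the estimate is purely hypothetical here; with CPU offload, effective bandwidth drops to system-RAM/PCIe speeds and throughput falls far below this ceiling.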