Mistral Small 24B Instruct on NVIDIA GeForce RTX 4090
NVIDIA GeForce RTX 4090 handles Mistral Small 24B Instruct well, reaching an estimated 32 tok/s at Q5_K_M quantization. A solid choice for this model.
Model Size: 24B
Device VRAM: 24 GB
Bandwidth: 1008 GB/s
Quantizations Tested: 1
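The stats above imply a comfortable VRAM fit. A minimal sketch of the arithmetic, assuming Q5_K_M averages roughly 5.5 bits per weight (llama.cpp's K-quants vary slightly by tensor) and an assumed runtime overhead figure:

```python
# Rough VRAM-fit check for a quantized model.
PARAMS = 24e9          # Mistral Small 24B parameter count
BITS_PER_WEIGHT = 5.5  # assumed average for Q5_K_M
VRAM_GB = 24           # RTX 4090 VRAM

weights_gb = PARAMS * BITS_PER_WEIGHT / 8 / 1e9  # bytes -> GB
overhead_gb = 3.0      # assumed KV cache + activations at moderate context
fits = weights_gb + overhead_gb <= VRAM_GB

print(f"weights: {weights_gb:.1f} GB, fits: {fits}")
```

With these assumptions the weights come to about 16.5 GB, leaving several GB of headroom for context.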
Performance by Quantization
Each row shows Mistral Small 24B Instruct performance at a different quality level on NVIDIA GeForce RTX 4090 (TTFT = time to first token).
| Quantization | Speed | TTFT | Fits in VRAM | Rating | Confidence |
|---|---|---|---|---|---|
| Q5_K_M | 32 tok/s | 280 ms | ✓ Yes | Good | estimated |
Notes
Q5_K_M
A 24B model at Q5_K_M fits within the 4090's 24 GB of VRAM. Good-quality general-purpose model.
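The 32 tok/s figure is consistent with a memory-bound decode: each generated token streams the full weight file from VRAM, so throughput is bounded by bandwidth divided by model size. A back-of-envelope sketch, assuming a ~16.5 GB Q5_K_M weight size (a hypothetical figure derived from ~5.5 bits per weight):

```python
# Decode-speed ceiling for a memory-bound model:
# tok/s upper bound = memory bandwidth / bytes read per token.
BANDWIDTH_GBPS = 1008   # RTX 4090 memory bandwidth
MODEL_GB = 16.5         # assumed Q5_K_M weight size for a 24B model

theoretical = BANDWIDTH_GBPS / MODEL_GB   # ~61 tok/s ceiling
measured = 32                             # estimated figure from the table
efficiency = measured / theoretical       # fraction of peak achieved

print(f"ceiling: {theoretical:.0f} tok/s, efficiency: {efficiency:.0%}")
```

The measured estimate lands at roughly half the theoretical ceiling, which is a typical real-world efficiency once kernel overhead and attention compute are accounted for.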
About Mistral Small 24B Instruct
Mistral Small 24B Instruct is Mistral's efficient 24B model with strong chat, coding, and reasoning capabilities.
About NVIDIA GeForce RTX 4090
NVIDIA GeForce RTX 4090 has 24 GB at 1008 GB/s. Street price: $1,799.
Source: Community benchmarks and estimated performance (2026-03-01)
Data last updated: 2026-03-15
