Mixtral 8x7B Instruct on NVIDIA GeForce RTX 4090
NVIDIA GeForce RTX 4090 handles Mixtral 8x7B Instruct well, reaching an estimated 35 tok/s at the Q3_K_M quantization. A solid choice for this model.
| Model Size | Device VRAM | Bandwidth | Quantizations Tested |
|---|---|---|---|
| 46.7B | 24 GB | 1008 GB/s | 1 |
Performance by Quantization
Each row shows Mixtral 8x7B Instruct performance at a different quality level on NVIDIA GeForce RTX 4090.
| Quantization | Speed | TTFT (time to first token) | Fits in VRAM | Rating | Confidence |
|---|---|---|---|---|---|
| Q3_K_M | 35 tok/s | 300ms | ✓ Yes | Good | estimated |
Notes
Q3_K_M
The Q3_K_M weights (~21 GB) fit in the 4090's 24 GB of VRAM with roughly 3 GB of headroom for the KV cache and runtime buffers. MoE sparsity keeps generation fast despite the aggressive quantization. For Q4-level quality, you need 36 GB+ of memory (e.g., an Apple M4 Max).
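As a sanity check on the headroom math above, here is a minimal Python sketch. The effective bits-per-weight for Q3_K_M is an assumption chosen to match the ~21 GB figure, not a published spec:

```python
# Back-of-envelope VRAM fit check for a quantized model.
TOTAL_PARAMS = 46.7e9     # Mixtral 8x7B Instruct, total parameters
BITS_PER_WEIGHT = 3.6     # assumed effective rate for Q3_K_M (not a spec)
VRAM_GB = 24.0            # NVIDIA GeForce RTX 4090

weights_gb = TOTAL_PARAMS * BITS_PER_WEIGHT / 8 / 1e9   # ~21 GB
headroom_gb = VRAM_GB - weights_gb                      # KV cache + buffers live here

print(f"weights ~{weights_gb:.1f} GB, headroom ~{headroom_gb:.1f} GB on {VRAM_GB:.0f} GB")
```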
About Mixtral 8x7B Instruct
Mixtral 8x7B Instruct (46.7B) is a multi-purpose model suited to chat, coding, and reasoning. It uses a Mixture of Experts (MoE) architecture: 46.7B total parameters, but only ~12.9B active per token, giving an excellent quality-to-speed ratio. Despite the large total parameter count, inference speed is closer to that of a 13B model; the full weight set must still fit in memory, though, so VRAM requirements stay high.
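Since single-stream decoding is largely memory-bandwidth-bound, the MoE advantage can be illustrated with a rough roofline estimate: each generated token must stream roughly the active parameters' bytes from VRAM. This sketch gives an upper bound only, reusing the assumed bits-per-weight from the sketch above; measured speeds land well below it:

```python
# Roofline-style ceiling on decode speed: single-stream generation is
# largely memory-bandwidth-bound, so tok/s <= bandwidth / bytes read per token.
BANDWIDTH_GBS = 1008.0    # RTX 4090 memory bandwidth, GB/s
ACTIVE_PARAMS = 12.9e9    # parameters active per token (MoE routing)
BITS_PER_WEIGHT = 3.6     # same assumed Q3_K_M rate as above

bytes_per_token = ACTIVE_PARAMS * BITS_PER_WEIGHT / 8
ceiling_tok_s = BANDWIDTH_GBS * 1e9 / bytes_per_token
print(f"theoretical ceiling ~{ceiling_tok_s:.0f} tok/s")
# Real throughput (e.g. the 35 tok/s above) falls well short of this bound
# due to attention, expert routing, and kernel launch overheads.
```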
About NVIDIA GeForce RTX 4090
NVIDIA GeForce RTX 4090 has 24 GB of VRAM at 1008 GB/s of memory bandwidth. Street price: $1,799.
Source: MoE model benchmarks (2026-01-15)
Data last updated: 2026-03-01
