
QwQ 32B Preview on NVIDIA GeForce RTX 4090

NVIDIA GeForce RTX 4090 handles QwQ 32B Preview well, reaching 24 tok/s at Q4_K_M. A solid choice for this model.

Model Size

32.5B

Device VRAM

24 GB

Bandwidth

1008 GB/s

Quantizations Tested

1
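The headline speed can be sanity-checked from the bandwidth figure. Decode on a single GPU is typically memory-bandwidth-bound: each generated token streams all model weights through the GPU roughly once, so tok/s is capped near bandwidth divided by weight size. A minimal sketch, using the 1008 GB/s bandwidth and the 18.4 GB Q4_K_M file size from the notes (the efficiency figure is derived arithmetic, not a measurement methodology):

```python
# Rough upper bound on decode speed for a memory-bandwidth-bound LLM:
# every generated token reads all model weights once, so
# tok/s <= bandwidth / weight bytes. Real throughput lands below this
# ceiling (kernel overhead, KV-cache reads, scheduling).

BANDWIDTH_GB_S = 1008.0  # RTX 4090 memory bandwidth
WEIGHTS_GB = 18.4        # QwQ 32B Preview at Q4_K_M (file size from the notes)

ceiling_tok_s = BANDWIDTH_GB_S / WEIGHTS_GB
print(f"theoretical ceiling: {ceiling_tok_s:.1f} tok/s")

# The measured 24 tok/s works out to ~44% of that ceiling.
measured_tok_s = 24.0
print(f"efficiency vs ceiling: {measured_tok_s / ceiling_tok_s:.0%}")
```

The gap between the ~55 tok/s ceiling and the measured 24 tok/s is normal for consumer-GPU inference runtimes at this model size.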

Performance by Quantization

Each row shows QwQ 32B Preview performance at a different quality level on NVIDIA GeForce RTX 4090.

Quantization | Speed    | TTFT   | Fits in VRAM | Rating | Confidence
Q4_K_M       | 24 tok/s | 380 ms | ✓ Yes        | Good   | estimated

Notes

Q4_K_M

Q4_K_M at 18.4 GB fits on the 4090 with 5.6 GB of headroom. Reasoning responses are long (chain-of-thought), so tok/s matters. Good, but not fast.
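The fit claim above is simple arithmetic: weights must load into VRAM with enough left over for the KV cache and activations. A minimal sketch, assuming the 18.4 GB file size approximates the weights' VRAM footprint (the 1 GB minimum-headroom threshold is a rule of thumb, not a figure from this page):

```python
# Quick VRAM fit check: weights must fit with room left for the
# KV cache and activations. Sizes in GB; 18.4 GB is the Q4_K_M
# figure from the notes above.

VRAM_GB = 24.0
WEIGHTS_GB = 18.4

headroom_gb = VRAM_GB - WEIGHTS_GB
print(f"headroom: {headroom_gb:.1f} GB")  # left for KV cache + overhead

# Rule of thumb (assumption, not a measured figure): keep at least
# ~1 GB free for KV cache at moderate context lengths.
print("fits" if headroom_gb >= 1.0 else "does not fit")
```

Long chain-of-thought generations eat into that 5.6 GB via KV-cache growth, which is another reason headroom matters for this particular model.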

About QwQ 32B Preview

QwQ 32B Preview (32.5B) is a reasoning and coding model aimed at AI-assisted building. It is reasoning-focused and uses chain-of-thought natively. Builders pair it with a coding model for complex architecture decisions. It approaches o1-mini on reasoning benchmarks. Same size as Qwen 2.5 Coder 32B, so the two compete for VRAM: run one at a time, not concurrently.
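The "one at a time" advice also reduces to arithmetic. A minimal sketch, assuming Qwen 2.5 Coder 32B quantizes to roughly the same ~18.4 GB at Q4_K_M as QwQ 32B Preview (an assumption based on the same-size note, not a measured figure):

```python
# Two 32B models at Q4_K_M cannot share a 24 GB card: each needs
# ~18.4 GB for weights alone. The coder-model size is assumed equal
# to QwQ's, per the same-parameter-count note above.

VRAM_GB = 24.0
qwq_gb = 18.4
coder_gb = 18.4  # assumption: same-size model, same quant

combined_gb = qwq_gb + coder_gb
verdict = "fits" if combined_gb <= VRAM_GB else "swap models, do not co-load"
print(f"combined: {combined_gb:.1f} GB -> {verdict}")
```

In practice this means unloading one model and loading the other when switching between reasoning and coding tasks.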

View all QwQ 32B Preview hardware options →

About NVIDIA GeForce RTX 4090

NVIDIA GeForce RTX 4090 has 24 GB of VRAM at 1008 GB/s. Street price: $1,799.

See all models NVIDIA GeForce RTX 4090 can run →


Source: 32B model performance extrapolation (2026-01-15)

Data last updated: 2026-03-01