OwnRig

Llama 3.3 70B Instruct on NVIDIA GeForce RTX 5090

The NVIDIA GeForce RTX 5090 runs Llama 3.3 70B Instruct at roughly 12 tok/s at Q4_K_M. Note that the ~43 GB of Q4_K_M weights exceed the card's 32 GB of VRAM, so part of the model must be offloaded to system RAM; the 12 tok/s figure reflects that split. Still a workable choice for this model.

Model Size

70.6B

Device VRAM

32 GB

Bandwidth

1792 GB/s

Quantizations Tested

1

Performance by Quantization

Each row shows Llama 3.3 70B Instruct performance at a different quality level on NVIDIA GeForce RTX 5090.

Quantization | Speed | TTFT | Fits in VRAM | Rating | Confidence
Q4_K_M | 12 tok/s | 1200 ms | ✗ No (partial offload) | Good | estimated
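The "Fits in VRAM" column can be sanity-checked from the model size and the quantization's effective bits per weight. A minimal sketch, assuming ~4.85 bits/weight for Q4_K_M (real GGUF files vary slightly by architecture, and KV cache adds a few GB on top):

```python
# Rough check: does a 70.6B-parameter model at Q4_K_M fit in 32 GB?
# BITS_PER_WEIGHT is an approximation for Q4_K_M, not an exact GGUF size.

PARAMS_B = 70.6          # billions of parameters (from the page)
BITS_PER_WEIGHT = 4.85   # approximate effective bits/weight for Q4_K_M
VRAM_GB = 32             # RTX 5090

weights_gb = PARAMS_B * BITS_PER_WEIGHT / 8   # → ~42.8 GB of weights alone
print(f"~{weights_gb:.1f} GB of weights")
print("fits in VRAM" if weights_gb < VRAM_GB else "needs offloading")
```

Weights alone land around 42.8 GB, well over 32 GB, which is why the table marks this configuration as requiring partial offload.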

Notes

Q4_K_M

70B at Q4_K_M weighs in around 42–43 GB, which exceeds the 5090's 32 GB of VRAM, so some layers must be offloaded to system RAM. Still usable at ~12 tok/s with a sensible GPU/CPU layer split.
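The speed figure can also be cross-checked against memory bandwidth: during decode, each generated token reads every resident weight once, so throughput is bounded by bandwidth divided by model size. A minimal sketch using the page's numbers (the ~42.8 GB model size is an estimate):

```python
# Bandwidth-bound ceiling on decode speed: tok/s <= bandwidth / model bytes.
# Uses the page's bandwidth figure and an estimated Q4_K_M model size.

BANDWIDTH_GBS = 1792     # RTX 5090 memory bandwidth, GB/s (from the page)
MODEL_GB = 42.8          # ~70.6B params at Q4_K_M (estimate)

full_gpu_toks = BANDWIDTH_GBS / MODEL_GB   # ceiling if everything fit in VRAM
print(f"all-in-VRAM ceiling: ~{full_gpu_toks:.0f} tok/s")
```

The ceiling comes out near 42 tok/s; the observed 12 tok/s sits well below it because the offloaded layers are served from much slower system RAM over PCIe.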

About Llama 3.3 70B Instruct

Llama 3.3 70B Instruct (70.6B) is a general-purpose model for chat, coding, and reasoning. It is the flagship Llama 3.3 model, with best-in-class general and coding performance.

View all Llama 3.3 70B Instruct hardware options →

About NVIDIA GeForce RTX 5090

The NVIDIA GeForce RTX 5090 has 32 GB of VRAM with 1792 GB/s of memory bandwidth. Street price: $2,199.

See all models NVIDIA GeForce RTX 5090 can run →


Source: Community benchmarks and estimated performance (2026-03-01)

Data last updated: 2026-03-15