OwnRig

Llama 3.1 70B Instruct on NVIDIA GeForce RTX 5090

The NVIDIA GeForce RTX 5090 runs Llama 3.1 70B Instruct at an estimated 14 tok/s with Q4_K_M quantization. A solid choice for this model.

Model Size: 70.6B
Device VRAM: 32 GB
Bandwidth: 1792 GB/s
Quantizations Tested: 1

Performance by Quantization

Each row shows Llama 3.1 70B Instruct performance at a different quality level on NVIDIA GeForce RTX 5090.

Quantization | Speed | TTFT | Fits in VRAM | Rating | Confidence
Q4_K_M | 14 tok/s | 1100 ms | Partial (offload needed) | Good | estimated
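The "Fits in VRAM" column can be sanity-checked with simple arithmetic. A minimal sketch, assuming Q4_K_M averages roughly 4.8 bits per weight (llama.cpp K-quants mix block formats, so the true average varies slightly):

```python
def quantized_size_gb(params_b: float, bits_per_weight: float) -> float:
    """Approximate weight footprint in GB: billions of params * bits / 8."""
    return params_b * bits_per_weight / 8

# Llama 3.1 70B Instruct at an assumed ~4.8 bits/weight for Q4_K_M.
weights_gb = quantized_size_gb(70.6, 4.8)   # ~42.4 GB, weights only
vram_gb = 32.0                              # RTX 5090 VRAM

if weights_gb < vram_gb:
    print(f"{weights_gb:.1f} GB of weights fit in {vram_gb:.0f} GB of VRAM")
else:
    print(f"{weights_gb:.1f} GB of weights exceed {vram_gb:.0f} GB of VRAM; "
          f"~{weights_gb - vram_gb:.0f} GB must be offloaded to system RAM")
```

KV cache and activations add a few more GB on top of the weights, so the gap in practice is somewhat larger than this estimate.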

Notes

Q4_K_M

A 70B model at Q4_K_M is roughly 42 GB of weights, so it does not fit entirely in 32 GB of VRAM. With part of the model offloaded to system RAM, the 5090's 1792 GB/s bandwidth still keeps it usable.
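The speed figure can be cross-checked with a back-of-the-envelope roofline: single-stream decode reads roughly every weight once per token, so tokens/s is capped near bandwidth divided by model size. A hedged sketch, again assuming ~4.8 bits/weight for Q4_K_M:

```python
def decode_tps_ceiling(bandwidth_gb_s: float, model_gb: float) -> float:
    """Memory-bandwidth upper bound on single-stream decode speed."""
    return bandwidth_gb_s / model_gb

model_gb = 70.6 * 4.8 / 8                  # ~42.4 GB at an assumed 4.8 bpw
ceiling = decode_tps_ceiling(1792, model_gb)
print(f"roofline ceiling: {ceiling:.0f} tok/s")
# ~42 tok/s if every token streamed the full model from VRAM; the quoted
# 14 tok/s sits well below that, consistent with part of the model being
# served from slower system RAM.
```

Real-world decode rarely reaches the roofline even for fully VRAM-resident models, but a 3x gap is the signature of partial offloading rather than kernel overhead.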

About Llama 3.1 70B Instruct

Llama 3.1 70B Instruct (70.6B parameters) is a multi-purpose model for chat, coding, and reasoning. It is a frontier-class open model that approaches GPT-4 quality on many benchmarks. It requires significant VRAM (48GB+ recommended for usable quantizations) and is excellent for serious local deployment.

View all Llama 3.1 70B Instruct hardware options →

About NVIDIA GeForce RTX 5090

The NVIDIA GeForce RTX 5090 has 32 GB of VRAM with 1792 GB/s of memory bandwidth. Street price: $2,199.

See all models NVIDIA GeForce RTX 5090 can run →


Source: Community benchmarks and estimated performance (2026-03-01)

Data last updated: 2026-03-01