
Llama 3.3 70B Instruct on NVIDIA GeForce RTX 4090

The NVIDIA GeForce RTX 4090 cannot fully load Llama 3.3 70B Instruct at any quantization level: its 24 GB of VRAM is insufficient, so running the model requires offloading layers to system RAM.
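The VRAM shortfall can be checked with back-of-envelope arithmetic: weight size ≈ parameters × bits-per-weight ÷ 8. A minimal sketch, using approximate GGUF bits-per-weight figures (the exact values vary slightly by quantization scheme and are assumptions here, not measurements):

```python
# Rough GGUF weight-size estimate: params * bits_per_weight / 8.
# Ignores KV cache and runtime overhead, which only make the fit worse.
PARAMS = 70.6e9   # Llama 3.3 70B Instruct parameter count
VRAM_GB = 24      # RTX 4090

def weight_gb(bits_per_weight):
    """Approximate on-disk/in-memory weight size in GB."""
    return PARAMS * bits_per_weight / 8 / 1e9

# bits-per-weight values below are approximate, illustrative assumptions
for name, bpw in [("Q8_0", 8.5), ("Q4_K_M", 4.8), ("Q3_K_M", 3.9)]:
    gb = weight_gb(bpw)
    verdict = "fits" if gb < VRAM_GB else "exceeds 24 GB"
    print(f"{name}: ~{gb:.0f} GB -> {verdict}")
```

Even the smallest common quantization shown comes out above 24 GB before accounting for KV cache, which is why every configuration needs offloading.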

Model Size: 70.6B
Device VRAM: 24 GB
Bandwidth: 1008 GB/s
Quantizations Tested: 1

Performance by Quantization

Each row shows Llama 3.3 70B Instruct performance at a different quality level on NVIDIA GeForce RTX 4090.

Quantization | Speed   | TTFT    | Fits in VRAM | Rating   | Confidence
Q3_K_M       | 6 tok/s | 2200 ms | ✗ Offload    | Marginal | estimated

Notes

Q3_K_M

Q3_K_M exceeds 24 GB of VRAM and requires CPU offloading. Usable, but slow.
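Why offloading is slow: decode speed is roughly memory-bandwidth-bound, and the layers that spill into system RAM are read at host-memory speed rather than the GPU's 1008 GB/s each token. A minimal sketch of that estimate; the usable-VRAM and host-bandwidth figures are illustrative assumptions, not measurements:

```python
# Back-of-envelope decode speed for a bandwidth-bound model with offload:
# every token requires reading all weights once, split between fast VRAM
# and much slower host memory.
WEIGHTS_GB = 34.4   # ~Q3_K_M weights for 70.6B params (approximate)
USABLE_VRAM = 22.0  # VRAM left after KV cache/overhead (assumed)
GPU_BW = 1008       # GB/s, RTX 4090 memory bandwidth
HOST_BW = 60        # GB/s, dual-channel DDR5 system RAM (assumed)

on_gpu = min(WEIGHTS_GB, USABLE_VRAM)
offloaded = WEIGHTS_GB - on_gpu
seconds_per_token = on_gpu / GPU_BW + offloaded / HOST_BW
print(f"~{1 / seconds_per_token:.0f} tok/s")  # → ~4 tok/s with these assumptions
```

The offloaded fraction dominates the per-token time despite being a minority of the weights, which is consistent with the single-digit tok/s figure in the table above.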

About Llama 3.3 70B Instruct

Llama 3.3 70B Instruct (70.6B parameters) is a multi-purpose model for chat, coding, and reasoning. It is the flagship Llama 3.3 model, with best-in-class general and coding performance.


About NVIDIA GeForce RTX 4090

The NVIDIA GeForce RTX 4090 has 24 GB of VRAM with 1008 GB/s of memory bandwidth. Street price: $1,799.


Source: Community benchmarks and estimated performance (2026-03-01)

Data last updated: 2026-03-15