OwnRig

Mixtral 8x7B Instruct on NVIDIA GeForce RTX 3080 10GB

The NVIDIA GeForce RTX 3080 10GB cannot run Mixtral 8x7B Instruct at any quantization level: even the smallest quantized weight file exceeds its 10 GB of VRAM.

Model Size: 46.7B
Device VRAM: 10 GB
Bandwidth: 760 GB/s
Quantizations Tested: 1

Performance by Quantization

Each row shows Mixtral 8x7B Instruct performance at a different quality level on NVIDIA GeForce RTX 3080 10GB.

| Quantization | Speed | TTFT | Fits in VRAM | Rating | Confidence |
|--------------|-------|------|--------------|--------|------------|
| Q2_K | n/a | n/a | ✗ Offload | Not Viable | estimated |

Notes

Q2_K

Mixtral is a Mixture of Experts model with 46.7B total parameters. At Q2_K the weights alone are about 16.4 GB, which exceeds the card's 10 GB of VRAM, so the model does not fit even at the lowest quantization.
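The size check above is simple arithmetic: total parameters times effective bits per weight. A minimal sketch, assuming an effective rate of ~2.81 bits/weight for Q2_K (inferred from the 16.4 GB figure on this page, not an official number):

```python
# Rough quantized-model size estimate. The 2.81 bits/weight effective rate
# for Q2_K is an assumption back-derived from the 16.4 GB figure above.
def quant_size_gb(total_params_billion: float, bits_per_weight: float) -> float:
    """GB needed for the quantized weights alone (no KV cache, no overhead)."""
    return total_params_billion * 1e9 * bits_per_weight / 8 / 1e9

q2k_gb = quant_size_gb(46.7, 2.81)
print(f"Q2_K weights: {q2k_gb:.1f} GB vs 10 GB VRAM -> fits: {q2k_gb <= 10}")
# -> Q2_K weights: 16.4 GB vs 10 GB VRAM -> fits: False
```

Note this counts only the weights; KV cache and runtime overhead push the real requirement higher still.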

About Mixtral 8x7B Instruct

Mixtral 8x7B Instruct (46.7B) is a multi-purpose chat, coding, and reasoning model. As a Mixture of Experts model, it has 46.7B total parameters but only ~12.9B active per token, giving an excellent quality-to-speed ratio: despite the large total parameter count, inference speed is closer to that of a 13B model. The full weight set must still be resident in memory, however, so it needs far more VRAM than a 13B dense model.
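The "speed of a 13B model" claim follows from decode being memory-bandwidth-bound: each generated token streams only the active weights from VRAM. A rough ceiling estimate, assuming a Q4-class quant at ~4.5 bits/weight and the 760 GB/s bandwidth from this page (illustrative numbers, not benchmarks):

```python
# Bandwidth-bound decode ceiling: tokens/s <= bandwidth / bytes of active weights.
# 4.5 bits/weight is an assumed Q4-class effective rate, purely illustrative.
def max_tokens_per_sec(bandwidth_gb_s: float, active_params_billion: float,
                       bits_per_weight: float) -> float:
    bytes_per_token = active_params_billion * 1e9 * bits_per_weight / 8
    return bandwidth_gb_s * 1e9 / bytes_per_token

moe   = max_tokens_per_sec(760, 12.9, 4.5)   # ~12.9B params active per token
dense = max_tokens_per_sec(760, 46.7, 4.5)   # if all 46.7B were active
print(f"MoE ceiling: {moe:.0f} tok/s, dense-equivalent: {dense:.0f} tok/s")
# -> MoE ceiling: 105 tok/s, dense-equivalent: 29 tok/s
```

The ratio between the two ceilings is exactly total/active params (46.7/12.9 ≈ 3.6x), which is why the MoE decodes like a much smaller dense model while paying full price in VRAM.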

View all Mixtral 8x7B Instruct hardware options →

About NVIDIA GeForce RTX 3080 10GB

The NVIDIA GeForce RTX 3080 10GB has 10 GB of VRAM with 760 GB/s of memory bandwidth. Street price: $399.

See all models NVIDIA GeForce RTX 3080 10GB can run →

Source: Performance estimates based on model size and device bandwidth (2026-03-15)

Data last updated: 2026-03-01