OwnRig

DeepSeek Coder V2 Lite 16B on NVIDIA GeForce RTX 3060 12GB

The NVIDIA GeForce RTX 3060 12GB handles DeepSeek Coder V2 Lite 16B well, reaching roughly 40 tok/s at Q4_K_M. A solid choice for this model.

Model Size: 15.7B
Device VRAM: 12 GB
Bandwidth: 360 GB/s
Quantizations Tested: 1

Performance by Quantization

Each row shows DeepSeek Coder V2 Lite 16B performance at a different quality level on NVIDIA GeForce RTX 3060 12GB.

Quantization | Speed    | TTFT   | Fits in VRAM | Rating | Confidence
Q4_K_M       | 40 tok/s | 200 ms | ✓ Yes        | Good   | estimated
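Decode speed on a single GPU is largely memory-bandwidth-bound: an upper bound on tokens per second is the bandwidth divided by the bytes of weights streamed per token. A minimal sketch of that roofline, assuming Q4_K_M averages about 4.8 bits per weight and that only the ~2.4B active MoE parameters are read per token (both figures are illustrative assumptions, not measurements):

```python
def decode_tok_s_upper_bound(active_params_b, bits_per_weight, bandwidth_gb_s):
    """Bandwidth roofline: tok/s <= bandwidth / bytes of weights read per token."""
    bytes_per_token_gb = active_params_b * bits_per_weight / 8  # GB streamed per token
    return bandwidth_gb_s / bytes_per_token_gb

# RTX 3060: 360 GB/s; DeepSeek Coder V2 Lite: ~2.4B active params per token
bound = decode_tok_s_upper_bound(2.4, 4.8, 360)
```

The measured 40 tok/s sits well below this theoretical ceiling, as expected: kernel efficiency, expert routing, KV-cache traffic, and attention compute all eat into the raw bandwidth figure. The point of the bound is direction, not precision — shrinking active parameters (MoE) or bits per weight (quantization) raises it.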

Notes

Q4_K_M

MoE architecture — only ~2.4B active params per token. Speed is closer to a 3B model's despite 15.7B total. Fits at Q4 (9.1 GB) with headroom.
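Note that while only ~2.4B parameters are active per token, all 15.7B must still sit in VRAM, so the fit check uses the total count. A rough size estimate, assuming Q4_K_M averages about 4.65 bits per weight (an assumed figure chosen because it reproduces the ~9.1 GB quoted above; real GGUF files vary slightly):

```python
def quantized_weight_size_gb(total_params_b, bits_per_weight):
    # All expert weights must reside in VRAM, even though only ~2.4B are active per token.
    return total_params_b * bits_per_weight / 8

size = quantized_weight_size_gb(15.7, 4.65)  # ≈ 9.1 GB at the assumed bpw
headroom = 12 - size                          # leftover VRAM for KV cache and buffers
```

The ~2.9 GB of headroom is what makes longer contexts practical here: the KV cache and compute buffers grow with context length and have to fit in whatever the weights leave free.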

About DeepSeek Coder V2 Lite 16B

DeepSeek Coder V2 Lite 16B (15.7B parameters) is a coding-focused model. MoE architecture: 15.7B total, ~2.4B active per token. Excellent code generation and completion, with extremely fast inference despite the total parameter count. One of the best coding models for its effective size.


About NVIDIA GeForce RTX 3060 12GB

The NVIDIA GeForce RTX 3060 12GB has 12 GB of VRAM with 360 GB/s of memory bandwidth. Street price: $269.


Source: MoE model performance reports (2026-01-15)

Data last updated: 2026-03-01