
LLaVA 1.6 13B on NVIDIA GeForce RTX 4090

The NVIDIA GeForce RTX 4090 handles LLaVA 1.6 13B well, reaching an estimated 30 tok/s at Q5_K_M quantization. A solid choice for this model.

Model Size: 13B
Device VRAM: 24 GB
Bandwidth: 1008 GB/s
Quantizations Tested: 1

Performance by Quantization

Each row shows LLaVA 1.6 13B performance at a different quality level on NVIDIA GeForce RTX 4090.

Quantization | Speed    | TTFT   | Fits in VRAM | Rating | Confidence
Q5_K_M       | 30 tok/s | 350 ms | ✓ Yes        | Good   | estimated
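The "Fits in VRAM" and speed figures above can be sanity-checked from first principles: weights plus KV cache and runtime overhead must fit in VRAM, and decode speed on a memory-bound GPU is capped by how fast the weights can be streamed each token. A minimal sketch, using the page's figures (9.1 GB Q5_K_M weights, 24 GB VRAM, 1008 GB/s); the KV-cache and overhead allowances are illustrative guesses, not measured values:

```python
# Rough feasibility check for a quantized model on a given GPU.
# kv_cache_gb and overhead_gb are assumed allowances, not measurements.

def fits_in_vram(weights_gb, vram_gb, kv_cache_gb=1.5, overhead_gb=1.0):
    """True if weights + KV cache + runtime overhead fit in VRAM."""
    return weights_gb + kv_cache_gb + overhead_gb <= vram_gb

def decode_ceiling_tok_s(weights_gb, bandwidth_gb_s):
    """Upper bound on decode speed: each token streams all weights once."""
    return bandwidth_gb_s / weights_gb

print(fits_in_vram(9.1, 24.0))                    # True
print(round(decode_ceiling_tok_s(9.1, 1008.0)))   # 111
```

The theoretical ceiling (~111 tok/s) sits well above the estimated 30 tok/s; the gap is expected, since the vision pipeline, kernel efficiency, and framework overhead all eat into the bandwidth-only bound.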

Notes

Q5_K_M

Q5_K_M weighs roughly 9.1 GB, leaving ample headroom in 24 GB of VRAM. Good for image analysis tasks; expect extra first-token latency from vision encoding.
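The vision-encoding latency mentioned above adds directly to time-to-first-token, since the image must be encoded before any text is generated. A hypothetical decomposition: the component timings below are chosen so the total matches the table's 350 ms TTFT estimate, but the split itself is an assumption, not a measurement from this page.

```python
# Illustrative TTFT decomposition for a multimodal model.
# Component timings are assumed for illustration only.

def ttft_ms(vision_encode_ms, prompt_tokens, prefill_tok_s):
    """TTFT ≈ vision encoding time + text prompt prefill time."""
    prefill_ms = prompt_tokens / prefill_tok_s * 1000
    return vision_encode_ms + prefill_ms

# e.g. 200 ms vision encode + a 300-token prompt at 2000 tok/s prefill
print(round(ttft_ms(200, 300, 2000)))  # 350
```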

About LLaVA 1.6 13B

LLaVA 1.6 13B is a multi-purpose chat and reasoning model. It is multimodal, processing images and text together: built on Vicuna 13B with a vision encoder, it can analyze screenshots, diagrams, and photos. Useful for builders who need to process visual content in their workflows.

View all LLaVA 1.6 13B hardware options →

About NVIDIA GeForce RTX 4090

The NVIDIA GeForce RTX 4090 pairs 24 GB of GDDR6X VRAM with 1008 GB/s of memory bandwidth. Street price: $1,799.

See all models NVIDIA GeForce RTX 4090 can run →


Source: Multimodal model reports (2026-01-15)

Data last updated: 2026-03-01