OwnRig

all-MiniLM-L6-v2 on NVIDIA GeForce RTX 4090

The NVIDIA GeForce RTX 4090 runs all-MiniLM-L6-v2 at FP16 with excellent performance. This is a strong pairing.

Model Size: 23M
Device VRAM: 24 GB
Bandwidth: 1008 GB/s
Quantizations Tested: 1

Performance by Quantization

Each row shows all-MiniLM-L6-v2 performance at a different quality level on NVIDIA GeForce RTX 4090.

| Quantization | Speed | TTFT | Fits in VRAM | Rating    | Confidence |
|--------------|-------|------|--------------|-----------|------------|
| FP16         |       | 3 ms | ✓ Yes        | Excellent | estimated  |

Notes

FP16

Ultra-lightweight. Use when every MB of VRAM counts.

Can run alongside even the largest 32B models with zero meaningful VRAM impact.
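The "zero meaningful VRAM impact" claim follows from simple arithmetic, sketched below (weights only; activation and runtime overhead are excluded from this rough estimate):

```python
# Back-of-envelope FP16 footprint: 2 bytes per parameter, so a
# 23M-parameter model's weights occupy roughly 46 MB -- a rounding
# error against the RTX 4090's 24 GB of VRAM.

def fp16_weight_mb(n_params: float) -> float:
    """Approximate FP16 weight footprint in megabytes (2 bytes/param)."""
    return n_params * 2 / 1e6

print(fp16_weight_mb(23e6))          # → 46.0 MB
print(fp16_weight_mb(23e6) / 24e3)   # fraction of 24 GB (24,000 MB): ~0.002
```

Even next to a 32B model quantized to 4 bits (~16 GB of weights), those ~46 MB leave the headroom essentially untouched.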

About all-MiniLM-L6-v2

all-MiniLM-L6-v2 (23M) is an embedding model aimed at AI builders. At 23M parameters it is ultra-lightweight and the fastest option for local embeddings. Quality is lower than nomic-embed, but the VRAM cost is practically zero. It is ideal when running alongside large coding models and every MB of VRAM matters: the tiny model for builders who need RAG without the VRAM cost.
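The RAG use case above boils down to nearest-neighbor search over embedding vectors. A minimal sketch, using toy stand-in vectors rather than real model output (in practice the 384-dimensional embeddings would come from all-MiniLM-L6-v2, e.g. via the sentence-transformers library's `SentenceTransformer("all-MiniLM-L6-v2").encode(texts)`):

```python
import numpy as np

def top_k(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int = 1) -> np.ndarray:
    """Return indices of the k rows of doc_vecs most cosine-similar to query_vec."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q                    # cosine similarity per document
    return np.argsort(-scores)[:k]   # highest-scoring documents first

# Toy 384-dim vectors standing in for real all-MiniLM-L6-v2 embeddings.
rng = np.random.default_rng(0)
docs = rng.normal(size=(100, 384))
query = docs[42] + 0.01 * rng.normal(size=384)  # near-duplicate of doc 42

print(top_k(query, docs))  # → [42]
```

Retrieval quality depends entirely on the embeddings; the search itself is a few lines of linear algebra, which is why the model's VRAM footprint is the only real cost.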


About NVIDIA GeForce RTX 4090

NVIDIA GeForce RTX 4090 has 24 GB at 1008 GB/s. Street price: $1,799.



Source: Model card (2026-01-15)

Data last updated: 2026-03-01