OwnRig

all-MiniLM-L6-v2 on NVIDIA GeForce RTX 4060 8GB

NVIDIA GeForce RTX 4060 8GB runs all-MiniLM-L6-v2 excellently, with an estimated 8500 tok/s at FP16. This is a strong pairing.

Model Size

23M

Device VRAM

8 GB

Bandwidth

272 GB/s

Quantizations Tested

1

Performance by Quantization

Each row shows all-MiniLM-L6-v2 performance at a different quality level on NVIDIA GeForce RTX 4060 8GB.

Quantization | Speed      | TTFT | Fits in VRAM | Rating    | Confidence
FP16         | 8500 tok/s | 5 ms | ✓ Yes        | Excellent | estimated
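The table's figures are estimates derived from model size and device bandwidth (per the source note below). A minimal sketch of the common bandwidth-bound upper bound, under the assumption that each sequential token requires streaming the full weights from VRAM once; the page's exact formula is not shown, and batched embedding throughput can land well above this single-stream bound:

```python
def bandwidth_bound_tok_s(bandwidth_gb_s: float, weights_gb: float) -> float:
    """Upper bound on sequential token throughput when every token
    must stream the full weight set from VRAM once (assumed model)."""
    return bandwidth_gb_s / weights_gb

# RTX 4060: 272 GB/s; all-MiniLM-L6-v2 FP16 weights: 23M params * 2 B ≈ 0.046 GB
single_stream = bandwidth_bound_tok_s(272, 0.046)
print(int(single_stream))
```

Batching amortizes the weight reads across many tokens, which is how an embedding model's effective tok/s can exceed this single-stream figure.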

Notes

FP16

Tiny 23M param embedding model. Negligible VRAM (~0.25GB). Can run concurrently with any other model.
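The ~0.25 GB figure follows from the parameter count. A rough sketch, assuming FP16 weights (2 bytes per parameter) plus a flat overhead term for activations and framework buffers (the overhead value is an assumption for illustration):

```python
def estimate_vram_gb(params_millions: float, bytes_per_param: float = 2.0,
                     overhead_gb: float = 0.2) -> float:
    """Rough VRAM estimate: raw weights at the given precision plus an
    assumed flat overhead for activations and framework buffers."""
    weights_gb = params_millions * 1e6 * bytes_per_param / 1e9
    return weights_gb + overhead_gb

# all-MiniLM-L6-v2: 23M params at FP16 -> ~0.05 GB weights + overhead
print(round(estimate_vram_gb(23), 2))  # → 0.25
```

At this size the weights themselves are under 50 MB, which is why the model fits alongside essentially any other workload on an 8 GB card.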

About all-MiniLM-L6-v2

all-MiniLM-L6-v2 (23M) is an embedding model aimed at AI builders. Ultra-lightweight at 23M params, it is the fastest option for local embeddings. Quality is lower than nomic-embed, but it is practically free in VRAM, making it ideal when running alongside large coding models and every MB of VRAM matters. The tiny model for builders who need RAG without the VRAM cost.
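The RAG use case above boils down to nearest-neighbor search over embedding vectors. A minimal sketch of that retrieval step using hypothetical pre-computed vectors (pure Python, no model download; all-MiniLM-L6-v2 actually emits 384-dimensional vectors, shortened here to 3 for readability):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical document embeddings (real vectors would come from the model)
docs = {
    "gpu guide": [0.9, 0.1, 0.0],
    "recipe": [0.0, 0.2, 0.9],
}
query = [0.8, 0.2, 0.1]  # hypothetical embedding of the user's question

best = max(docs, key=lambda name: cosine(query, docs[name]))
print(best)  # → gpu guide
```

In a real pipeline the vectors would be produced by encoding each document and query with the model; the retrieval logic itself stays this simple.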

View all all-MiniLM-L6-v2 hardware options →

About NVIDIA GeForce RTX 4060 8GB

NVIDIA GeForce RTX 4060 8GB has 8 GB of VRAM with 272 GB/s of memory bandwidth. Street price: $289.

See all models NVIDIA GeForce RTX 4060 8GB can run →

Source: Performance estimates based on model size and device bandwidth (2026-03-15)

Data last updated: 2026-03-01