OwnRig

all-MiniLM-L6-v2 on NVIDIA GeForce RTX 3080 10GB

The NVIDIA GeForce RTX 3080 10GB runs all-MiniLM-L6-v2 excellently, reaching an estimated 5000 tok/s at FP16. This is a strong pairing.

Model Size

23M

Device VRAM

10 GB

Bandwidth

760 GB/s

Quantizations Tested

1

Performance by Quantization

Each row shows all-MiniLM-L6-v2 performance at a different quality level on NVIDIA GeForce RTX 3080 10GB.

| Quantization | Speed | TTFT | Fits in VRAM | Rating | Confidence |
| --- | --- | --- | --- | --- | --- |
| FP16 | 5000 tok/s | 8 ms | ✓ Yes | Excellent | estimated |
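The footer notes these figures are estimates derived from model size and device bandwidth. One common rough model for such estimates (an assumption here, not the site's published formula) treats inference as bandwidth-bound, which can be sketched in a few lines:

```python
# Bandwidth-bound throughput ceiling: assume each token requires streaming
# the full FP16 weight set from VRAM once. A rough upper bound only; the
# site's exact estimation formula is not published.
PARAMS = 23e6          # all-MiniLM-L6-v2 parameter count
FP16_BYTES = 2         # bytes per parameter at FP16
BANDWIDTH = 760e9      # RTX 3080 10GB memory bandwidth, bytes/s

ceiling = BANDWIDTH / (PARAMS * FP16_BYTES)   # ~16,500 tok/s
print(f"ceiling ≈ {ceiling:,.0f} tok/s")
```

The quoted 5000 tok/s sits comfortably under this ceiling, which is consistent with it being a conservative estimate.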

Notes

FP16

23M params. Practically free in VRAM. Ultra-fast embeddings.
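"Practically free" is easy to verify with back-of-envelope arithmetic on the numbers above:

```python
# FP16 footprint check (weights only; activations and framework overhead
# add a little more, so treat this as a lower bound).
PARAMS = 23e6        # all-MiniLM-L6-v2 parameter count
FP16_BYTES = 2       # bytes per parameter at FP16
VRAM = 10e9          # RTX 3080 10GB

weights_mb = PARAMS * FP16_BYTES / 1e6        # 46 MB of weights
share = PARAMS * FP16_BYTES / VRAM * 100      # ~0.46% of the card's VRAM

print(f"{weights_mb:.0f} MB, {share:.2f}% of VRAM")
```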

About all-MiniLM-L6-v2

all-MiniLM-L6-v2 (23M) is an ultra-lightweight embedding model for AI builders. It is the fastest option for local embeddings: lower quality than nomic-embed, but practically free in VRAM. Ideal when running alongside large coding models and every MB of VRAM matters. The tiny model for builders who need RAG without the VRAM cost.
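For context, RAG retrieval over these embeddings reduces to nearest-neighbor search by cosine similarity. A minimal sketch (the 4-dim vectors below are placeholders; the real model emits 384-dim vectors):

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product over the product of vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Toy document embeddings (placeholders, not real model output).
docs = {
    "gpu": [0.9, 0.1, 0.0, 0.1],
    "recipe": [0.0, 0.8, 0.6, 0.0],
}
query = [0.8, 0.2, 0.1, 0.1]

# Retrieve the document whose embedding is most aligned with the query.
best = max(docs, key=lambda k: cosine(query, docs[k]))
print(best)  # → gpu
```

In practice the embeddings would come from the model itself (e.g. via a library such as sentence-transformers) and the search would use a vector index rather than a linear scan, but the similarity step is the same.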

View all all-MiniLM-L6-v2 hardware options →

About NVIDIA GeForce RTX 3080 10GB

The NVIDIA GeForce RTX 3080 10GB has 10 GB of VRAM with 760 GB/s of memory bandwidth. Street price: $399.

See all models NVIDIA GeForce RTX 3080 10GB can run →

Source: Performance estimates based on model size and device bandwidth (2026-03-15)

Data last updated: 2026-03-01