all-MiniLM-L6-v2 on Apple M3 Pro (18GB Unified)
Apple M3 Pro (18GB Unified) handles all-MiniLM-L6-v2 well, running at roughly 1200 tok/s in FP16. A solid choice for this model.
Model Size: 23M
Device VRAM: 18 GB
Bandwidth: 150 GB/s
Quantizations Tested: 1
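The stats above make the memory math trivial to check. A minimal sketch, using the parameter count and unified-memory size from this page (2 bytes per parameter is the standard FP16 assumption):

```python
PARAMS = 23e6          # all-MiniLM-L6-v2 parameter count (from this page)
BYTES_PER_PARAM = 2.0  # FP16 = 2 bytes per weight
VRAM_GB = 18.0         # M3 Pro unified memory

weights_mb = PARAMS * BYTES_PER_PARAM / 1e6
share = weights_mb * 1e6 / (VRAM_GB * 1e9) * 100
print(f"FP16 weights: {weights_mb:.0f} MB ({share:.2f}% of unified memory)")
```

Weights alone come to about 46 MB, a fraction of a percent of the 18 GB pool; activations and runtime overhead add a little on top, but the model is effectively free.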
Performance by Quantization
Each row shows all-MiniLM-L6-v2 performance at a different quality level on Apple M3 Pro (18GB Unified).
| Quantization | Speed | TTFT | Fits in VRAM | Rating | Confidence |
|---|---|---|---|---|---|
| FP16 | 1200 tok/s | 25ms | ✓ Yes | Good | estimated |
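To put the 1200 tok/s figure in practical terms, here is a rough batch-embedding time estimate. The corpus size and average chunk length are hypothetical placeholders, not numbers from this page:

```python
TOK_PER_S = 1200        # reported FP16 throughput on M3 Pro (from the table)
DOCS = 100_000          # hypothetical corpus size (assumption)
TOKENS_PER_DOC = 200    # hypothetical average chunk length (assumption)

total_tokens = DOCS * TOKENS_PER_DOC
seconds = total_tokens / TOK_PER_S
print(f"{total_tokens:,} tokens -> {seconds / 3600:.1f} h to embed")
```

At this rate a 20M-token corpus takes a few hours, so one-off indexing jobs are feasible overnight, and incremental updates are near-instant.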
Notes
FP16: Lightweight embeddings. Good for concurrent use with coding models.
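The concurrent-use claim is easy to budget-check. A minimal sketch; the coding-model and OS figures below are illustrative assumptions, not measurements from this page:

```python
UNIFIED_GB = 18.0   # M3 Pro unified memory (from this page)
CODER_GB = 4.5      # e.g. a 7B coding model at 4-bit (assumption)
SYSTEM_GB = 4.0     # macOS + apps headroom (rough assumption)
MINILM_GB = 0.046   # 23M params at FP16, ~46 MB

free_gb = UNIFIED_GB - CODER_GB - SYSTEM_GB
print(f"Headroom after coder + OS: {free_gb:.1f} GB; "
      f"MiniLM needs only {MINILM_GB * 1000:.0f} MB")
```

Even with generous allowances for a coding model and the OS, the embedding model's footprint barely registers, which is the point of running it alongside larger models.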
About all-MiniLM-L6-v2
all-MiniLM-L6-v2 is an ultra-lightweight embedding model at 23M parameters and the fastest option for local embeddings. Quality is lower than nomic-embed, but it is practically free in VRAM. Ideal when running alongside large coding models and every MB of VRAM matters: the tiny model for builders who need RAG without the VRAM cost.
About Apple M3 Pro (18GB Unified)
Apple M3 Pro (18GB Unified) has 18 GB at 150 GB/s. Available in MacBook Pro 14", MacBook Pro 16".
Source: MLX performance estimates (2026-03-15)
Data last updated: 2026-03-01