OwnRig

Llama 3.1 8B Instruct on Apple M3 Pro (18GB Unified)

Apple M3 Pro (18GB Unified) runs Llama 3.1 8B Instruct at about 15 tok/s with Q4_K_M quantization. Performance is acceptable; consider a higher-end GPU for faster generation.

Model Size

8.03B

Device VRAM

18 GB

Bandwidth

150 GB/s

Quantizations Tested

1

Performance by Quantization

Each row shows Llama 3.1 8B Instruct performance at a different quality level on Apple M3 Pro (18GB Unified).

Quantization | Speed    | TTFT   | Fits in VRAM | Rating     | Confidence
Q4_K_M       | 15 tok/s | 400 ms | ✓ Yes        | Acceptable | estimated

Notes

Q4_K_M

Memory bandwidth (150 GB/s) is the bottleneck during token generation. Usable, but slower than the M4 Pro.
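Token generation is memory-bound: each decoded token streams the full set of quantized weights through memory, so bandwidth divided by model size gives a rough speed ceiling. A minimal sketch of that estimate, assuming ~4.8 effective bits per weight for Q4_K_M and ~70% of peak bandwidth achieved in practice (both figures are assumptions, not measurements from this page):

```python
# Back-of-envelope decode-speed ceiling for a memory-bound LLM.
# Assumptions (not from the measured data above):
#   - Q4_K_M averages ~4.8 effective bits per weight
#   - real workloads reach ~70% of peak memory bandwidth
PARAMS_B = 8.03          # Llama 3.1 8B Instruct parameters (billions)
BITS_PER_WEIGHT = 4.8    # assumed effective bits/weight at Q4_K_M
BANDWIDTH_GBS = 150.0    # M3 Pro unified memory bandwidth (GB/s)
EFFICIENCY = 0.7         # assumed achievable fraction of peak bandwidth

model_gb = PARAMS_B * BITS_PER_WEIGHT / 8          # weights read per token
ceiling_toks = BANDWIDTH_GBS * EFFICIENCY / model_gb

print(f"model ≈ {model_gb:.1f} GB, decode ceiling ≈ {ceiling_toks:.0f} tok/s")
```

Under these assumptions the ceiling comes out around 20 tok/s, which is consistent with the measured 15 tok/s once compute overhead and KV-cache traffic are accounted for.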

About Llama 3.1 8B Instruct

Llama 3.1 8B Instruct (8.03B parameters) is a general-purpose model for chat, coding, and reasoning. Best-in-class at the 8B scale, with strong general capabilities and excellent coding support. The go-to small model for local inference: fast, accurate, and well-supported across all inference engines.

View all Llama 3.1 8B Instruct hardware options →

About Apple M3 Pro (18GB Unified)

Apple M3 Pro (18GB Unified) has 18 GB of unified memory at 150 GB/s bandwidth. Available in the MacBook Pro 14" and MacBook Pro 16".

See all models Apple M3 Pro (18GB Unified) can run →

Source: MLX performance estimates (2026-03-15)

Data last updated: 2026-03-01