Whisper Large V3 on NVIDIA GeForce RTX 4090
The NVIDIA GeForce RTX 4090 runs Whisper Large V3 at FP16 with excellent performance, making this a strong pairing.
Model Size: 1.55B parameters
Device VRAM: 24 GB
Memory Bandwidth: 1008 GB/s
Quantizations Tested: 1
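The FP16 fit is easy to sanity-check from the parameter count alone. A rough sketch (weights only; it ignores activations, beam-search state, and framework overhead):

```python
# Rough FP16 weight-memory estimate for Whisper Large V3 on a 24 GB card.
params = 1.55e9          # parameter count from the spec above
bytes_per_param = 2      # FP16 stores each weight in 2 bytes

weights_gb = params * bytes_per_param / 1e9
print(f"Weights: ~{weights_gb:.1f} GB of 24 GB VRAM")  # ~3.1 GB
```

This matches the ~3.1 GB figure quoted in the notes and explains why the model leaves essentially all of the 24 GB free for other workloads.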
Performance by Quantization
Each row shows Whisper Large V3 performance at a different quality level on NVIDIA GeForce RTX 4090.
| Quantization | Speed | TTFT | Fits in VRAM | Rating | Confidence |
|---|---|---|---|---|---|
| FP16 | — | — | ✓ Yes | Excellent | benchmarked |
Notes
FP16
~30x faster than real time on the 4090, so transcription is effectively instant.
At ~3.1 GB, it runs alongside any coding model without meaningful VRAM impact.
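The ~30x real-time figure converts directly into wall-clock transcription time. A minimal sketch (the 30x multiplier comes from the benchmark above; the one-hour audio length is illustrative):

```python
# Convert a real-time-factor speedup into expected wall-clock time.
def transcription_seconds(audio_seconds: float, realtime_factor: float = 30.0) -> float:
    """Seconds of wall-clock time to transcribe, given speed relative to real time."""
    return audio_seconds / realtime_factor

# A one-hour meeting at ~30x real time:
print(f"{transcription_seconds(3600):.0f} s")  # 120 s, i.e. about two minutes
```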
About Whisper Large V3
Whisper Large V3 (1.55B) is OpenAI's best open speech-to-text model. It supports 99 languages and reaches near-human accuracy for English. Its VRAM requirements are low enough to run on any GPU, making it useful for builders who need voice-to-code or meeting transcription.
About NVIDIA GeForce RTX 4090
The NVIDIA GeForce RTX 4090 has 24 GB of VRAM at 1008 GB/s memory bandwidth. Street price: $1,799.
Source: whisper.cpp benchmarks (2026-01-15)
Data last updated: 2026-03-01
