OwnRig

AI Image & Video Generation

creative

Local image and video generation with FLUX.1, SDXL, and Stable Diffusion 3. Run ComfyUI workflows with LoRAs, ControlNet, and upscaling — no cloud credits, no content filters, no rate limits. VRAM is the bottleneck: FLUX.1 at full quality needs 24 GB, but Q4 quantization brings it down to 8 GB GPUs.
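The VRAM figures in this workflow can be sanity-checked from model size: FLUX.1's transformer has roughly 12B parameters, and a weight format's effective bits per weight fixes its footprint. A minimal sketch (the 12B parameter count and the GGUF bits-per-weight values are assumptions drawn from the model card and the GGUF format, not from this page):

```python
# Rough VRAM footprint of FLUX.1's ~12B-parameter transformer under
# common weight formats. Effective bits per weight include GGUF block
# scales (Q8_0: 8.5 bits, Q4_0: 4.5 bits) -- assumed values.
PARAMS = 12e9

BITS_PER_WEIGHT = {
    "FP16": 16.0,  # full quality
    "Q8_0": 8.5,   # near-lossless GGUF quant
    "Q4_0": 4.5,   # aggressive GGUF quant
}

def footprint_gb(fmt: str, params: float = PARAMS) -> float:
    """Weight storage in decimal GB (excludes activations and text encoders)."""
    return params * BITS_PER_WEIGHT[fmt] / 8 / 1e9

for fmt in BITS_PER_WEIGHT:
    print(f"{fmt}: {footprint_gb(fmt):.2f} GB")
# FP16 -> 24.00 GB, Q8_0 -> 12.75 GB (the ~13 GB used below),
# Q4_0 -> 6.75 GB (fits on 8 GB cards)
```

The Q8_0 result lines up with the 13 GB FLUX.1 Dev entry in the VRAM breakdown, and Q4 comfortably clears an 8 GB card, leaving headroom for activations.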

ComfyUI, Fooocus, Automatic1111, InvokeAI, diffusers

Concurrent VRAM

13 GB

Peak VRAM

13 GB

Min Bandwidth

400 GB/s

Models Required

3

VRAM Breakdown

How the 13 GB concurrent VRAM is used.

Switched (Loaded As Needed)

These share VRAM with the largest concurrent model — only one runs at a time.

FLUX.1 Dev (primary image generation)
13 GB

Q8_0

Stable Diffusion XL 1.0 (image generation with LoRAs)
6.5 GB

FP16

Stable Diffusion 3 Medium (fast image generation)
5 GB

FP16
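Because switched models load one at a time, the concurrent VRAM requirement is driven by the largest of them, not their sum. A minimal sketch of that accounting, using the model list from the breakdown above:

```python
# Switched models share VRAM: only one is resident at a time, so the
# concurrent requirement is the max of the group, not the sum.
switched_gb = {
    "FLUX.1 Dev (Q8_0)": 13.0,
    "Stable Diffusion XL 1.0 (FP16)": 6.5,
    "Stable Diffusion 3 Medium (FP16)": 5.0,
}

concurrent = max(switched_gb.values())  # 13.0 GB -- matches "Concurrent VRAM"
naive_sum = sum(switched_gb.values())   # 24.5 GB if all were resident at once
print(f"concurrent: {concurrent} GB, naive sum: {naive_sum} GB")
```

This is why swapping between FLUX.1 and SDXL costs load time rather than extra VRAM.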

Local vs API Costs

Typical Monthly API Cost

$60/mo

Break-Even Point

15 months

Annual Savings After Break-Even

~$576/yr

Based on ~1,000 images/month via Midjourney ($30/mo) or the DALL-E 3 API ($0.04/image = $40/mo). Local generation is unlimited once the hardware is purchased; electricity adds roughly $10/mo at 4 hr/day of GPU use. Figures assume the Mid-Range Workstation at ~$1,400. Break-even assumes moderate usage; heavy users (5,000+ images/mo) break even in 3-4 months.
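The break-even arithmetic can be parameterized by usage. A minimal sketch using the per-image API price, hardware cost, and electricity estimate from the comparison above (the `images_per_month` values in the usage lines are illustrative, and your own break-even will depend on which API you'd otherwise pay for):

```python
import math

PRICE_PER_IMAGE = 0.04  # DALL-E 3 API price, per the comparison above
HARDWARE = 1400         # Mid-Range Workstation, ~$1,400
ELECTRICITY = 10        # $/mo at ~4 hr/day GPU usage

def monthly_api_cost(images_per_month: int) -> float:
    """What the same volume would cost via a per-image API."""
    return images_per_month * PRICE_PER_IMAGE

def break_even_months(images_per_month: int) -> int:
    """Months until the hardware cost is offset by net monthly savings."""
    net_savings = monthly_api_cost(images_per_month) - ELECTRICITY
    return math.ceil(HARDWARE / net_savings)

print(monthly_api_cost(1000))   # 40.0 -- the $40/mo quoted above
print(break_even_months(5000))  # heavy usage pays the hardware off far sooner
```

The key takeaway matches the text: break-even is driven almost entirely by volume, since the hardware cost is fixed and the API cost scales per image.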

Recommended Builds

Pre-configured builds that can run the AI Image & Video Generation workflow.

Prefer a Mac? Apple Silicon with unified memory can run this workflow too. See the Mac AI Builder workflow →


Author: Ada. Last updated: 2026-03-14.