OwnRig - AI Models (42 models)

Browse models with VRAM requirements, quantization options, and compatible hardware.
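As a rough sketch of where VRAM figures like the ones below come from: weight memory is approximately parameter count times bytes per weight (set by the quantization width), plus overhead for the KV cache and runtime buffers. A minimal estimator, where the 20% overhead factor is an assumption for illustration, not the site's actual formula:

```python
def estimate_vram_gb(params_billion: float, bits_per_weight: int,
                     overhead: float = 1.2) -> float:
    """Rough VRAM estimate: weights at the given quantization width,
    plus ~20% for KV cache and runtime buffers (rule of thumb only)."""
    weight_gb = params_billion * bits_per_weight / 8  # 1B params at 8 bits ~= 1 GB
    return round(weight_gb * overhead, 1)

# Llama 3.1 8B (8.03B params) at 4-bit quantization
print(estimate_vram_gb(8.03, 4))   # ~4.8 GB, near the 4.9 GB "efficient" figure
# The same model at full 16-bit precision
print(estimate_vram_gb(8.03, 16))  # ~19.3 GB
```

Actual requirements vary with context length and runtime, so treat such estimates as a starting point, not a guarantee.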

all-MiniLM-L6-v2 (MiniLM) - Embeddings, 23M
VRAM: 256 MB (full)
Compatible: AnythingLLM, Open WebUI

nomic-embed-text v1.5 (Nomic) - Embeddings, 137M
VRAM: 410 MB (recommended)
Compatible: Cursor, Continue, AnythingLLM (+1 more)

Llama 3.2 1B Instruct (Llama) - Chat, 1.24B
VRAM: 1.1 GB (recommended), 819 MB (efficient)
Compatible: Ollama, LM Studio

Whisper Large V3 Turbo (Whisper) - Transcription, 810M
VRAM: 1.6 GB (full)

Whisper Large V3 (Whisper) - Transcription, 1.55B
VRAM: 1.5 GB (recommended), 1.3 GB (efficient)

Llama 3.2 3B Instruct (Llama) - Chat, 3.21B
VRAM: 2.8 GB (recommended), 2.1 GB (efficient)
Compatible: Cursor, Continue, Ollama (+2 more)

Phi-4 Mini (Phi) - Chat, 3.82B
VRAM: 3.3 GB (recommended), 2.4 GB (efficient)
Compatible: Cursor, Continue, Ollama (+1 more)

Phi-3 Mini 3.8B Instruct (Phi) - Chat, 3.82B
VRAM: 3 GB (recommended), 2.6 GB (efficient)
Compatible: Continue, LM Studio

Gemma 3 4B (Gemma) - Chat, 4.3B
VRAM: 3.8 GB (recommended), 2.5 GB (efficient)
Compatible: Cursor, Continue, Aider (+2 more)

Stable Diffusion 3 Medium (Stable Diffusion) - Image Gen, 2B
VRAM: 5 GB (full)

Stable Diffusion XL 1.0 (Stable Diffusion) - Image Gen, 6.6B
VRAM: 6.5 GB (full)

Mistral 7B Instruct v0.3 (Mistral) - Chat, 7.24B
VRAM: 5.3 GB (recommended), 4.5 GB (efficient)

DeepSeek R1 Distill Qwen 7B (DeepSeek) - Reasoning, 7.62B
VRAM: 6.6 GB (recommended), 4.4 GB (efficient)

Qwen 2.5 7B Instruct (Qwen) - Chat, 7.62B
VRAM: 5.5 GB (recommended), 4.7 GB (efficient)
Compatible: Continue, LM Studio, Open WebUI

Qwen 2.5 Coder 7B Instruct (Qwen) - Coding, 7.62B
VRAM: 6.6 GB (recommended), 4.4 GB (efficient)
Compatible: Cursor, Continue, Aider (+1 more)

InternLM 2.5 7B Chat (InternLM) - Chat, 7.74B
VRAM: 6.7 GB (recommended), 4.5 GB (efficient)
Compatible: Cursor, Continue, Aider (+2 more)

Llama 3.1 8B Instruct (Llama) - Chat, 8.03B
VRAM: 6.7 GB (recommended), 4.9 GB (efficient)
Compatible: Cursor, Continue, Aider (+2 more)

LLaVA 1.6 13B (LLaVA) - Chat, 13B
VRAM: 9.1 GB (recommended), 7.7 GB (efficient)

Gemma 2 9B Instruct (Gemma) - Chat, 9.24B
VRAM: 6.6 GB (recommended), 5.6 GB (efficient)

Stable Diffusion 3.5 Large (Stable Diffusion) - Image Gen, 8.1B
VRAM: 12.5 GB (full), 9 GB (efficient)

Gemma 3 12B (Gemma) - Chat, 12.2B
VRAM: 10.5 GB (recommended), 7 GB (efficient)
Compatible: Cursor, Continue, Aider (+2 more)

Codestral 22B (Mistral) - Coding, 22.2B
VRAM: 15.1 GB (recommended), 12.7 GB (efficient)
Compatible: Cursor, Continue, Windsurf

Phi-3 Medium 14B Instruct (Phi) - Chat, 14B
VRAM: 9.7 GB (recommended), 8.2 GB (efficient)

Phi-4 14B (Phi) - Reasoning, 14.7B
VRAM: 12.6 GB (recommended), 8.4 GB (efficient)
Compatible: Cursor, Continue, Aider (+2 more)

Qwen 2.5 14B Instruct (Qwen) - Chat, 14.77B
VRAM: 12.7 GB (recommended), 8.5 GB (efficient)
Compatible: Cursor, Continue, Aider (+2 more)

StarCoder 2 15B (StarCoder) - Coding, 15.5B
VRAM: 10.7 GB (recommended), 9 GB (efficient)
Compatible: Continue, LM Studio

DeepSeek Coder V2 Lite 16B (DeepSeek) - Coding, 15.7B
VRAM: 10.9 GB (recommended), 9.1 GB (efficient)
Compatible: Cursor, Continue, Aider (+1 more)

Gemma 2 27B Instruct (Gemma) - Chat, 27.23B
VRAM: 18.5 GB (recommended), 15.5 GB (efficient)

Qwen 2.5 Coder 32B Instruct (Qwen) - Coding, 32.5B
VRAM: 21.9 GB (recommended), 18.4 GB (efficient)
Compatible: Cursor, Continue, Aider (+2 more)

QwQ 32B Preview (Qwen) - Reasoning, 32.5B
VRAM: 21.9 GB (recommended), 18.4 GB (efficient)
Compatible: Open WebUI, LM Studio

Code Llama 34B Instruct (Llama) - Coding, 33.7B
VRAM: 22.7 GB (recommended), 19 GB (efficient)
Compatible: Continue, Aider

FLUX.1 Dev (FLUX) - Image Gen, 12B
VRAM: 13 GB (recommended), 7.2 GB (efficient)

Mistral Small 24B Instruct (Mistral) - Chat, 24B
VRAM: 20.5 GB (recommended), 14 GB (efficient)
Compatible: Cursor, Continue, Aider (+2 more)

Gemma 3 27B (Gemma) - Chat, 27.23B
VRAM: 22.3 GB (recommended), 16.3 GB (efficient)
Compatible: Cursor, Continue, LM Studio (+1 more)

Mixtral 8x7B Instruct (Mixtral) - Chat, 46.7B
VRAM: 31.4 GB (recommended), 26.2 GB (efficient)

DeepSeek R1 Distill Qwen 32B (DeepSeek) - Reasoning, 32.5B
VRAM: 28 GB (recommended), 19 GB (efficient)
Compatible: Cursor, Continue, Aider (+2 more)

Yi 1.5 34B Chat (Yi) - Chat, 34.4B
VRAM: 29.5 GB (recommended), 19.5 GB (efficient)

Command R 35B (Cohere) - Chat, 35B
VRAM: 30 GB (recommended), 20 GB (efficient)

Qwen 2.5 72B Instruct (Qwen) - Chat, 72.7B
VRAM: 40.5 GB (efficient)

Llama 3.1 70B Instruct (Llama) - Chat, 70.6B
VRAM: 47 GB (recommended), 39.5 GB (efficient)
Compatible: Cursor, Open WebUI

Llama 3.3 70B Instruct (Llama) - Chat, 70.6B
VRAM: 61 GB (recommended), 41 GB (efficient)
Compatible: Cursor, Continue, Aider (+2 more)

DeepSeek V3 (DeepSeek) - Chat, 671B
VRAM: 360 GB (full), 180 GB (efficient)
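A practical way to use a catalog like this is to compare each model's "efficient" (quantized) VRAM figure against your GPU's memory. A minimal sketch, using a handful of figures copied from the entries above; the 10% headroom factor for the OS and display is an assumed rule of thumb:

```python
# "Efficient" VRAM figures (GB) taken from the catalog entries above.
models = [
    ("Llama 3.2 3B Instruct", 2.1),
    ("Qwen 2.5 7B Instruct", 4.7),
    ("Llama 3.1 8B Instruct", 4.9),
    ("Gemma 3 12B", 7.0),
    ("Qwen 2.5 Coder 32B Instruct", 18.4),
]

def fits(vram_gb: float, budget_gb: float, headroom: float = 0.9) -> bool:
    """Keep ~10% of the card free for the OS/display and fragmentation."""
    return vram_gb <= budget_gb * headroom

# Models that fit on an 8 GB GPU (7.2 GB usable after headroom)
print([name for name, vram in models if fits(vram, 8.0)])
```

Longer context windows grow the KV cache, so models near the budget limit may still spill to system RAM in practice.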