Inference Providers
Active filters: codeqwen
Qwen/Qwen2.5-Coder-32B-Instruct • Text Generation • 33B • Updated • 232k downloads • 1.96k likes
Qwen/Qwen2.5-Coder-7B-Instruct • Text Generation • 8B • Updated • 447k downloads • 584 likes
Qwen/Qwen2.5-Coder-7B-Instruct-GGUF • Text Generation • 8B • Updated • 42.8k downloads • 152 likes
Qwen/Qwen2.5-Coder-3B-Instruct • Text Generation • 3B • Updated • 47.4k downloads • 88 likes
lmstudio-community/Qwen2.5-Coder-7B-Instruct-MLX-4bit • Text Generation • 1B • Updated • 1.99k downloads • 3 likes
bartowski/Qwen2.5-Coder-3B-Instruct-abliterated-GGUF • Text Generation • 3B • Updated • 2.48k downloads • 9 likes
Solaren/Qwen3-MOE-6x0.6B-3.6B-Writing-On-Fire-Uncensored-Q8_0-GGUF • Text Generation • 2B • Updated • 157 downloads • 6 likes
Qwen/Qwen2.5-Coder-1.5B-Instruct-GGUF • Text Generation • 2B • Updated • 32.3k downloads • 27 likes
bartowski/Qwen2.5-Coder-1.5B-Instruct-GGUF • Text Generation • 2B • Updated • 1.69k downloads • 10 likes
lmstudio-community/Qwen2.5-Coder-1.5B-Instruct-GGUF • Text Generation • 2B • Updated • 614 downloads • 3 likes
Qwen/Qwen2.5-Coder-0.5B-Instruct • Text Generation • 0.5B • Updated • 3.89M downloads • 56 likes
unsloth/Qwen2.5-Coder-1.5B-Instruct-GGUF • 2B • Updated • 979 downloads • 9 likes
unsloth/Qwen2.5-Coder-1.5B-Instruct-128K-GGUF • 2B • Updated • 610 downloads • 9 likes
bartowski/Qwen2.5-Coder-32B-Instruct-abliterated-GGUF • Text Generation • 33B • Updated • 2.7k downloads • 29 likes
mradermacher/Qwen2.5-Microsoft-NextCoder-Instruct-FUSED-CODER-Fast-11B-GGUF • 11B • Updated • 20 downloads • 1 like
mradermacher/Qwen2.5-Microsoft-NextCoder-Instruct-FUSED-CODER-Fast-11B-i1-GGUF • 11B • Updated • 159 downloads • 1 like
DavidAU/Qwen3-53B-A3B-2507-THINKING-TOTAL-RECALL-v2-MASTER-CODER • Text Generation • 53B • Updated • 56 downloads • 10 likes
DavidAU/Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER • Text Generation • 42B • Updated • 2.19k downloads • 29 likes
mradermacher/Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER-i1-GGUF • 42B • Updated • 7.8k downloads • 11 likes
DavidAU/Qwen3-Yoyo-V4-42B-A3B-Thinking-TOTAL-RECALL • Text Generation • 42B • Updated • 6 downloads • 3 likes
DavidAU/Qwen3-42B-A3B-YOYO-V5-TOTAL-RECALL-NEO-imatrix-GGUF • Text Generation • Updated • 596 downloads • 3 likes
study-hjt/CodeQwen1.5-7B-Chat-GPTQ-Int4 • Text Generation • 7B • Updated • 4 downloads
study-hjt/CodeQwen1.5-7B-Chat-GPTQ-Int8 • Text Generation • 7B • Updated • 3 downloads • 1 like
(model name missing) • Text Generation • 8B • Updated • 155k downloads • 130 likes
lmstudio-community/Qwen2.5-Coder-7B-Instruct-GGUF • Text Generation • 8B • Updated • 3.53k downloads • 20 likes
bartowski/Qwen2.5-Coder-7B-Instruct-GGUF • Text Generation • 8B • Updated • 11.2k downloads • 32 likes
(model name missing) • Text Generation • 2B • Updated • 515k downloads • 76 likes
Qwen/Qwen2.5-Coder-1.5B-Instruct • Text Generation • 2B • Updated • 95.8k downloads • 96 likes
mlx-community/Qwen2.5-Coder-7B-Instruct-bf16 • Text Generation • Updated • 64 downloads • 2 likes
mlx-community/Qwen2.5-Coder-7B-Instruct-8bit • Text Generation • Updated • 45 downloads
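The listing above is a snapshot of the Hugging Face Hub model search filtered on "codeqwen". As a rough sketch only, a comparable listing can be pulled programmatically with the huggingface_hub Python client; the query parameters below (search, sort, direction, limit) are illustrative choices, not taken from the page itself, and the exact counts returned will differ from the snapshot.

```python
# Minimal sketch, assuming the huggingface_hub Python client is installed.
# Reproduces a "codeqwen" search sorted by downloads, similar to the listing above.
from huggingface_hub import list_models

for model in list_models(search="codeqwen", sort="downloads", direction=-1, limit=20):
    # Each ModelInfo carries the fields shown per row: repo id, pipeline tag,
    # download count, and like count.
    print(f"{model.id} • {model.pipeline_tag} • {model.downloads} downloads • {model.likes} likes")
```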