100 Coder/Programming - MOE, Reasoning, Regular, Imatrix, Fused.
Models (0.8B to 87B) in regular, "reasoning", "Brainstorm", and MOE (1x to 8x / 128 experts) variants, expanded to produce better, stronger code, faster.
Text Generation • 53B • Updated • 37 • 14 • Note: 128 experts (MOE, mixture of experts); all experts are coders. 256K context; uses Brainstorm 40x to enhance performance. Non-thinking model (thinking can be activated via a system prompt). Links to GGUFs on this page.
DavidAU/Qwen3-Coder-42B-A3B-Instruct-TOTAL-RECALL-MASTER-CODER-M
Text Generation • 42B • Updated • 15 • 4 • Note: 128 experts (MOE, mixture of experts); all experts are coders. 256K context; uses Brainstorm 20x to enhance performance. Non-thinking model (thinking can be activated via a system prompt). Links to GGUFs on this page.
DavidAU/Qwen3-Coder-42B-A3B-Instruct-TOTAL-RECALL-MASTER-CODER-M-512k-ctx
Text Generation • 42B • Updated • 5 • 2 • Note: 128 experts (MOE, mixture of experts); all experts are coders. 512K context; uses Brainstorm 20x to enhance performance. Non-thinking model (thinking can be activated via a system prompt). Links to GGUFs on this page. Special note: even if you do not need the extra context, try this model, as changing the context length changes generation.
DavidAU/Qwen3-Coder-42B-A3B-Instruct-TOTAL-RECALL-MASTER-CODER-M-1million-ctx
Text Generation • 42B • Updated • 136 • 6 • Note: 128 experts (MOE, mixture of experts); all experts are coders. 1 million context; uses Brainstorm 20x to enhance performance. Non-thinking model (thinking can be activated via a system prompt). Links to GGUFs on this page. Special note: even if you do not need the extra context, try this model, as changing the context length changes generation.
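Several of the non-thinking variants above note that thinking can be activated via a system prompt. As a minimal sketch of what that looks like in an OpenAI-style chat payload (the exact activation wording and the `build_messages` helper are assumptions for illustration, not taken from the model cards — check each card for the recommended prompt):

```python
# Sketch: toggling "thinking" on a non-thinking coder model via the
# system prompt. The activation sentence below is hypothetical; the
# individual model cards document the exact wording to use.

def build_messages(user_prompt: str, enable_thinking: bool = False) -> list:
    """Return an OpenAI-style messages list for a llama.cpp server or similar."""
    system = "You are a senior coding assistant."
    if enable_thinking:
        # Hypothetical activation text, not from the model cards.
        system += " Think step by step inside <think>...</think> before answering."
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_prompt},
    ]

msgs = build_messages("Write a binary search in Python.", enable_thinking=True)
print(msgs[0]["content"])
```

The same messages list can then be passed to whatever chat-completion endpoint serves the GGUF (llama.cpp server, LM Studio, etc.).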
DavidAU/Qwen3-53B-A3B-2507-THINKING-TOTAL-RECALL-v2-MASTER-CODER
Text Generation • Updated • 29 • 11 • Note: 128 experts (MOE, mixture of experts). 256K context; uses Brainstorm 40x to enhance performance. Non-thinking model (thinking can be activated via a system prompt). Links to GGUFs on this page.
DavidAU/Qwen3-53B-A3B-2507-TOTAL-RECALL-v2-MASTER-CODER
Text Generation • 53B • Updated • 36 • 15 • Note: 128 experts (MOE, mixture of experts). 256K context; uses Brainstorm 40x to enhance performance. Links to GGUFs on this page. Non-thinking model => STRAIGHT to coding.
DavidAU/Qwen3-42B-A3B-2507-Thinking-TOTAL-RECALL-v2-Medium-MASTER-CODER
Text Generation • 42B • Updated • 19 • 6 • Note: 128 experts (MOE, mixture of experts). 256K context; uses Brainstorm 20x to enhance performance. Links to GGUFs on this page. Enhanced thinking model => smarter thinking, fewer tokens, better code.
DavidAU/Qwen3-42B-A3B-2507-TOTAL-RECALL-v2-Medium-MASTER-CODER
Text Generation • 42B • Updated • 18 • 3 • Note: 128 experts (MOE, mixture of experts). 256K context; uses Brainstorm 20x to enhance performance. Links to GGUFs on this page. Non-thinking model => STRAIGHT to coding.
DavidAU/Qwen3-Esper3-Reasoning-CODER-Instruct-21B-Brainstorm20x
Text Generation • 21B • Updated • 29 • 2 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. Uses the Brainstorm adapter by DavidAU to extend model function/performance.
DavidAU/Qwen3-Esper3-Reasoning-CODER-Instruct-12B-Brainstorm20x
Text Generation • 12B • Updated • 34 • 3 • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others.
DavidAU/Qwen3-Jan-Nano-128k-6B-Brainstorm20x
Text Generation • 6B • Updated • 5 • 5 • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others. This is a general-use AND coder/programming model.
DavidAU/Qwen3-Blitzar-Coder-F1-6B-Brainstorm20x
Text Generation • 6B • Updated • 8 • 3 • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others.
DavidAU/Qwen3-Code-Reasoning-Instruct-6B-Brainstorm20x
Text Generation • 6B • Updated • 37 • 1 • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others. This is a general-use AND coder/programming model.
DavidAU/Qwen3-Zero-Coder-Reasoning-0.8B-NEO-EX-GGUF
Text Generation • 0.8B • Updated • 1.22k • 20 • Note: Uses the NEO Imatrix dataset (by DavidAU) to augment model performance. 40K context. Good for drafts, simple code, or code blocks, including complex ones. Model has full thinking/reasoning too. Model is fused from two coder models.
DavidAU/Qwen3-Zero-Coder-Reasoning-V2-0.8B-NEO-EX-GGUF
Text Generation • 0.8B • Updated • 6.31k • 19 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. 40K context. Good for drafts, simple code, or code blocks, including complex ones. Model has full thinking/reasoning too. Stronger than V1. Model is fused from two coder models.
DavidAU/Qwen3-Zero-Coder-Reasoning-0.8B
Text Generation • 0.8B • Updated • 13 • 3 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. 40K context. Good for drafts, simple code, or code blocks, including complex ones. Model has full thinking/reasoning too. Model is fused from two coder models.
DavidAU/Qwen3-Zero-Coder-Reasoning-V2-0.8B
Text Generation • 0.8B • Updated • 308 • 2 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. 40K context. Good for drafts, simple code, or code blocks, including complex ones. Model has full thinking/reasoning too. Stronger than V1. Model is fused from two coder models.
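Since the models above range from 0.8B to over 100B parameters and ship as GGUF / GGUF Imatrix quants, a quick way to judge which quant fits your hardware is the standard rule of thumb that a quantized file needs roughly parameters × bits-per-weight / 8 bytes (before KV cache and runtime overhead). A small sketch of that arithmetic; the helper name and the ~4.5 bits/weight figure for a Q4_K_M-class quant are illustrative assumptions:

```python
def approx_gguf_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Rough file-size estimate for a GGUF quant: params * bits / 8 bytes.

    Real files add metadata and mixed-precision tensors, so treat this
    as a lower bound rather than an exact figure.
    """
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9  # decimal GB, matching how model sizes are quoted

# A 0.8B model at ~4.5 bits/weight (Q4_K_M-class) is well under 1 GB:
print(round(approx_gguf_size_gb(0.8, 4.5), 2))   # 0.45
# A 42B model at the same quant needs roughly 24 GB:
print(round(approx_gguf_size_gb(42, 4.5), 1))    # 23.6
```

Add headroom for the KV cache, which grows with the context length — relevant here, where variants span 40K to 1M context.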
DavidAU/Openai_gpt-oss-20b-CODER-NEO-CODE-DI-MATRIX-GGUF
Text Generation • 21B • Updated • 880 • 14
DavidAU/Openai_gpt-oss-20b-NEO-GGUF
Text Generation • 21B • Updated • 1.56k • 22
DavidAU/Openai_gpt-oss-120b-NEO-Imatrix-GGUF
Text Generation • 117B • Updated • 545 • 22
DavidAU/OpenAi-GPT-oss-20b-MODERATE-uncensored-NEO-Imatrix-gguf
Text Generation • 21B • Updated • 357 • 9
DavidAU/Qwen3-Jan-v1-256k-ctx-6B-Brainstorm20x
Text Generation • 6B • Updated • 17 • 5
DavidAU/Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER
Text Generation • 42B • Updated • 717 • 37
DavidAU/Qwen3-VL-12B-Instruct-Brainstorm20x-NEO-MAX-GGUF
Image-Text-to-Text • 12B • Updated • 435 • 4
DavidAU/Qwen3-VL-12B-Thinking-Brainstorm20x-NEO-MAX-GGUF
Image-Text-to-Text • 12B • Updated • 539 • 3
DavidAU/Qwen3-VL-42B-A3B-Thinking-Brainstorm20x-GGUF
Image-Text-to-Text • 42B • Updated • 79 • 7
DavidAU/Qwen3-VLTO-TNG-12B-256k-NEO-imatrix-GGUF
Text Generation • 12B • Updated • 474 • 1
DavidAU/Llama3.3-8B-Instruct-Thinking-Claude-4.5-Opus-High-Reasoning
Text Generation • 8B • Updated • 176 • 100
DavidAU/Llama-3.3-8B-Thinking-Gemini-Flash-11000x-128k
Text Generation • 8B • Updated • 13 • 5
DavidAU/Llama-3.3-8B-Instruct-Thinking-Claude-Haiku-4.5-High-Reasoning-1700x
Text Generation • 8B • Updated • 18 • 4
DavidAU/Qwen2.5-7B-Instruct-1M-Thinking-Claude-Gemini-GPT5.2-DISTILL
Text Generation • 8B • Updated • 123 • 6
DavidAU/Mistral-Nemo-2407-Instruct-12B-Deep-Thinking-Claude-Gemini-GPT5.2
Text Generation • 12B • Updated • 10 • 2
DavidAU/GLM-4.7-Flash-NEO-CODE-Imatrix-MAX-GGUF
Text Generation • 30B • Updated • 374 • 7
DavidAU/Qwen3-24B-MOE-6x-4B-Star-Trek-AwayTeam-Instruct
Text Generation • 18B • Updated • 9 • 3
DavidAU/Qwen3-VL-32B-Gemini-Heretic-Uncensored-Thinking
Image-Text-to-Text • 33B • Updated • 260 • 14
DavidAU/Qwen3-30B-A3B-YOYO-V2-Claude-4.6-Opus-High-INSTRUCT
Text Generation • 31B • Updated • 157 • 9
DavidAU/Qwen3-30B-A3B-YOYO-V4-Gemini250-Instruct
Text Generation • 31B • Updated • 17 • 1
DavidAU/Qwen3-30B-A3B-Thinking-2507-GLM-4.7-Flash-High-Reasoning
Text Generation • 31B • Updated • 19 • 1
DavidAU/Qwen3-32B-VL-GLM-4.7-Flash-HI16-Heretic-Uncensored-Thinking
Image-Text-to-Text • 33B • Updated • 139 • 2
DavidAU/Qwen3.5-27B-Gemini3-Pro-High-Reasoning-Compact-Thinking
Image-Text-to-Text • 27B • Updated • 73 • 18
DavidAU/Qwen3.5-27B-Claude-4.6-OS-INSTRUCT
Image-Text-to-Text • 27B • Updated • 250 • 8
DavidAU/Qwen3.5-27B-Claude-4.6-OS-Auto-Variable-Thinking
Image-Text-to-Text • 27B • Updated • 11.7k • 2
DavidAU/Qwen3.5-9B-Claude-4.6-OS-Auto-Variable-THINKING
Image-Text-to-Text • 9B • Updated • 198 • 3
DavidAU/Qwen3.5-9B-Claude-4.6-OS-Auto-Variable-HERETIC-UNCENSORED-THINKING
Image-Text-to-Text • 9B • Updated • 1.95k • 1
DavidAU/Qwen3.5-9B-Claude-4.6-OS-HERETIC-UNCENSORED-INSTRUCT
Image-Text-to-Text • 9B • Updated • 1.74k • 23
DavidAU/Qwen3.5-9B-DeepSeek-3.2-Intense-Auto-Variable-Thinking
Image-Text-to-Text • 9B • Updated • 169 • 3
DavidAU/Qwen3.5-9B-Polaris-HighIQ-INSTRUCT
Image-Text-to-Text • 9B • Updated • 36
DavidAU/Qwen3.5-9B-Polaris-HighIQ-THINKING
Image-Text-to-Text • 9B • Updated • 46 • 3
DavidAU/Qwen3.5-21B-Claude-4.6-Opus-Thinking-EXP
Image-Text-to-Text • 21B • Updated • 27
DavidAU/Qwen3.5-21B-Claude-4.6-Opus-Thinking-EXP2
Image-Text-to-Text • 21B • Updated • 67 • 1
DavidAU/Qwen3.5-27B-Claude-4.6-OS-Auto-Variable-Heretic-Uncensored-Thinking
Image-Text-to-Text • 27B • Updated • 757 • 4
DavidAU/Qwen3.5-4B-Claude-4.6-OS-Auto-Variable-HERETIC-UNCENSORED-THINKING
Image-Text-to-Text • 5B • Updated • 843 • 9
DavidAU/Qwen3.5-4B-Claude-4.6-HighIQ-THINKING
Image-Text-to-Text • 5B • Updated • 246 • 4
DavidAU/Qwen3.5-4B-Gemini-Pro-HighIQ-THINKING
Image-Text-to-Text • 5B • Updated • 210 • 1
DavidAU/Qwen3.5-27B-Deckard-PKD-Heretic-Uncensored-Thinking
Image-Text-to-Text • 27B • Updated • 6.62k • 1
DavidAU/Qwen3.5-13B-GLM-4.7-Flash-Grande-Deep-Thinking
Image-Text-to-Text • 13B • Updated • 59
DavidAU/Qwen3.5-13B-Strict-Instruct
Image-Text-to-Text • 13B • Updated • 162 • 2
DavidAU/Qwen3.5-40B-Claude-4.5-Opus-High-Reasoning-Thinking
Image-Text-to-Text • 40B • Updated • 1.05k • 39
DavidAU/Qwen3.5-13B-GLM-4.7-Flash-DeepSeek-Polaris-Grande-Deep-Thinking
Image-Text-to-Text • 13B • Updated • 396 • 14
DavidAU/Qwen3.5-2B-Claude-4.6-OS-Auto-Variable-HERETIC-UNCENSORED-THINKING
Image-Text-to-Text • 2B • Updated • 579 • 7
DavidAU/gemma-4-E4B-it-The-DECKARD-V2-Strong-HERETIC-UNCENSORED-Thinking
Any-to-Any • Updated • 236 • 1
DavidAU/gemma-4-E4B-it-The-DECKARD-HERETIC-UNCENSORED-Thinking
Any-to-Any • Updated • 274 • 5
DavidAU/gemma-4-31B-it-Mystery-Fine-Tune-HERETIC-UNCENSORED-Thinking
Image-Text-to-Text • 31B • Updated • 7.63k • 35
DavidAU/gemma-4-31B-it-Grand-Horror-X-INTENSE-HERETIC-UNCENSORED-Thinking
Image-Text-to-Text • 31B • Updated • 142 • 4
DavidAU/gemma-4-31B-it-The-DECKARD-HERETIC-UNCENSORED-Thinking
Image-Text-to-Text • 31B • Updated • 3.41k • 12
DavidAU/gemma-4-E4B-it-The-DECKARD-Expresso-Universe-HERETIC-UNCENSORED-Thinking
Any-to-Any • Updated • 226 • 5
DavidAU/gemma-4-E4B-it-The-DECKARD-V3-Expresso-HERETIC-UNCENSORED-Thinking
Any-to-Any • Updated • 103 • 3
DavidAU/gemma-4-E4B-it-The-DECKARD-Expresso-Universe-HERETIC-UNCENSORED-Thinking-GGUF
Any-to-Any • 8B • Updated • 909 • 1
DavidAU/gemma-4-E4B-it-Claude-Opus-4.5-HERETIC-UNCENSORED-Thinking
Any-to-Any • Updated • 1
DavidAU/gemma-4-E4B-it-GLM-4.7-Flash-HERETIC-UNCENSORED-Thinking
Any-to-Any • Updated • 1
DavidAU/gemma-4-19B-A4B-it-The-DECKARD-Thinking
Image-Text-to-Text • 19B • Updated • 2
DavidAU/gemma-4-E4B-it-The-DECKARD-Claude-Opus-Expresso-Universe-HERETIC-UNCENSORED-Thinking
Any-to-Any • Updated • 2 • 2
DavidAU/gemma-4-19B-A4B-it-The-DECKARD-Heretic-Uncensored-Thinking
Image-Text-to-Text • 19B • Updated