---
license: apache-2.0
base_model: Jackrong/Qwen3.5-35B-A3B-Claude-4.6-Opus-Reasoning-Distilled
tags:
- gguf
- quantized
- apex
- moe
- mixture-of-experts
- qwen3.5
- claude-distilled
---

# Qwen3.5-35B-A3B Claude-Distilled APEX GGUF

**APEX (Adaptive Precision for EXpert Models)** quantizations of [Qwen3.5-35B-A3B-Claude-4.6-Opus-Reasoning-Distilled](https://huggingface.co/Jackrong/Qwen3.5-35B-A3B-Claude-4.6-Opus-Reasoning-Distilled).

**Brought to you by the [LocalAI](https://github.com/mudler/LocalAI) team** | [APEX Project](https://github.com/mudler/apex-quant) | [Technical Report](https://github.com/mudler/apex-quant/blob/main/paper/APEX_Technical_Report.pdf)

## Benchmark Results

Benchmarks for these quantizations are coming soon. For reference, APEX benchmarks on the same Qwen3.5-MoE architecture are available at [mudler/Qwen3.5-35B-A3B-APEX-GGUF](https://huggingface.co/mudler/Qwen3.5-35B-A3B-APEX-GGUF).

## Available Files

| File | Profile | Size | Best For |
|------|---------|------|----------|
| Qwen3.5-35B-A3B-Claude-Distilled-APEX-I-Balanced.gguf | I-Balanced | ~24 GB | Best overall quality/size ratio |
| Qwen3.5-35B-A3B-Claude-Distilled-APEX-I-Quality.gguf | I-Quality | ~22 GB | Highest quality with imatrix |
| Qwen3.5-35B-A3B-Claude-Distilled-APEX-Quality.gguf | Quality | ~22 GB | Highest quality standard |
| Qwen3.5-35B-A3B-Claude-Distilled-APEX-Balanced.gguf | Balanced | ~24 GB | General purpose |
| Qwen3.5-35B-A3B-Claude-Distilled-APEX-I-Compact.gguf | I-Compact | ~17 GB | Consumer GPUs, best quality/size |
| Qwen3.5-35B-A3B-Claude-Distilled-APEX-Compact.gguf | Compact | ~17 GB | Consumer GPUs |
| Qwen3.5-35B-A3B-Claude-Distilled-APEX-I-Mini.gguf | I-Mini | ~13 GB | Smallest viable, fastest inference |

## What is APEX?

APEX is a quantization strategy for Mixture-of-Experts (MoE) models. It classifies tensors by role (routed expert, shared expert, attention) and applies a layer-wise precision gradient: edge layers get higher precision, while middle layers get more aggressive compression. The I-variants use diverse imatrix calibration data (chat, code, reasoning, tool-calling, agentic traces, Wikipedia).

See the [APEX project](https://github.com/mudler/apex-quant) for full details, the technical report, and scripts.

## Architecture

- **Model**: Qwen3.5-35B-A3B-Claude-Distilled (Qwen3.5-MoE, distilled from Claude 4.6 Opus reasoning)
- **Layers**: 40
- **Experts**: 256 routed + 1 shared (8 active per token)
- **Total Parameters**: ~35B
- **Active Parameters**: ~3B per token
- **APEX Config**: 5+5 symmetric edge gradient across 40 layers
- **Calibration**: v1.3 diverse dataset (chat, code, reasoning, multilingual, tool-calling, Wikipedia)

## Run with LocalAI

```bash
local-ai run mudler/Qwen3.5-35B-A3B-Claude-Distilled-APEX-GGUF@Qwen3.5-35B-A3B-Claude-Distilled-APEX-I-Balanced.gguf
```

## Credits

APEX is brought to you by the [LocalAI](https://github.com/mudler/LocalAI) team. Developed through human-driven, AI-assisted research. Built on [llama.cpp](https://github.com/ggerganov/llama.cpp).
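
## Run with llama.cpp

Since these are standard GGUF files built on llama.cpp, they can also be served directly with `llama-server`. The following is a minimal sketch, assuming the I-Balanced file has been downloaded to the current directory and a recent llama.cpp build is on your PATH; defaults and flags may differ between builds.

```bash
# Serve the quantized model via llama.cpp's OpenAI-compatible server.
# -ngl 99 offloads all layers to the GPU; lower it if VRAM is limited.
llama-server -m ./Qwen3.5-35B-A3B-Claude-Distilled-APEX-I-Balanced.gguf -ngl 99 -c 8192

# Query it from another shell (llama-server listens on 127.0.0.1:8080 by default).
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello"}]}'
```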