Gemma 4 26B-A4B Claude Opus Distill APEX GGUF

APEX (Adaptive Precision for EXpert Models) quantizations of gemma-4-26B-A4B-it-Claude-Opus-Distill — a Claude Opus reasoning-distilled version of google/gemma-4-26B-A4B-it by TeichAI.

Brought to you by the LocalAI team | APEX Project | Technical Report

Benchmark Results

Benchmarks coming soon. For reference, see the APEX benchmarks on the Qwen3.5-35B-A3B architecture at mudler/Qwen3.5-35B-A3B-APEX-GGUF.

What is APEX?

APEX is a quantization strategy for Mixture-of-Experts (MoE) models. It classifies tensors by role (routed expert, shared expert, attention) and applies a layer-wise precision gradient: edge layers keep higher precision, while middle layers are compressed more aggressively. I-variants use imatrix calibration on a diverse dataset (chat, code, reasoning, tool-calling, agentic traces, and Wikipedia).
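
A rough Python sketch of the classify-then-grade idea follows. This is illustrative only: the tensor-name patterns mirror common llama.cpp GGUF naming, and the quant types chosen (Q6_K, Q5_K, Q3_K) are assumptions, not the exact APEX assignments.

# Illustrative sketch only; name patterns and quant types are assumed.
EDGE = 5        # high-precision layers at each end (the "5+5" gradient)
N_LAYERS = 30   # layer count of this model

def classify(name: str) -> str:
    """Classify a tensor by its role in the MoE block."""
    if "_exps" in name:      # routed-expert FFN tensors
        return "routed_expert"
    if "_shexp" in name:     # shared-expert FFN tensors
        return "shared_expert"
    if ".attn_" in name:     # attention projections
        return "attention"
    return "other"

def quant_type(layer: int, role: str) -> str:
    """Apply the layer-wise precision gradient to a classified tensor."""
    is_edge = layer < EDGE or layer >= N_LAYERS - EDGE
    if role == "routed_expert":
        # Routed experts hold the bulk of the parameters, so the middle
        # of the stack is where the aggressive compression happens.
        return "Q5_K" if is_edge else "Q3_K"
    # Always-active tensors (attention, shared experts) stay higher.
    return "Q6_K" if is_edge else "Q5_K"

print(quant_type(15, classify("blk.15.ffn_down_exps.weight")))  # -> Q3_K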

See the APEX project for full details, the technical report, and scripts.

Architecture

  • Model: gemma-4-26B-A4B-it-Claude-Opus-Distill (same architecture as gemma-4-26B-A4B-it)
  • Layers: 30
  • Experts: 128 routed (8 active per token)
  • Total Parameters: 26B
  • Active Parameters: ~4B per token
  • Vision: Built-in vision encoder (mmproj included)
  • APEX Config: 5+5 symmetric edge gradient across 30 layers (layer map sketched below)
  • Calibration: v1.3 diverse dataset
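
A minimal sketch of the layer map implied by the "5+5 symmetric edge gradient" config, assuming the edges are simply the first and last five layers:

edge, n_layers = 5, 30
edge_layers = [l for l in range(n_layers) if l < edge or l >= n_layers - edge]
middle_layers = [l for l in range(n_layers) if edge <= l < n_layers - edge]
print(edge_layers)         # [0, 1, 2, 3, 4, 25, 26, 27, 28, 29]
print(len(middle_layers))  # 20 middle layers take the aggressive quants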

Run with LocalAI

local-ai run mudler/gemma-4-26B-A4B-it-Claude-Opus-Distill-APEX-GGUF@gemma-4-26B-A4B-Claude-Distill-APEX-I-Balanced.gguf
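
Here, the segment after @ selects which GGUF variant file to download; other variant filenames from this repository can be substituted to run a different precision profile.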

Credits

APEX is brought to you by the LocalAI team. Developed through human-driven, AI-assisted research. Built on llama.cpp.
