# m51Lab-MiniMax-M2.7-REAP-139B-A10B-GGUF
GGUF quantizations of dervig/m51Lab-MiniMax-M2.7-REAP-139B-A10B, the first publicly available REAP-40% pruned variant of MiniMax-M2.7.
## Available quantizations
Sizes are approximate; the model card will refresh as each quant is uploaded to this repo.
| Variant | Approx. size | Target hardware | Notes |
|---|---|---|---|
| Q4_K_M | ~84 GB | 96 GB Apple Silicon (Mac Studio M4 Max) | Recommended sweet spot. Smoke-test verified 5/5. |
| IQ4_XS | ~74 GB | 96 GB Apple Silicon with extra headroom | Smaller than Q4_K_M, marginally lower quality. |
| Q3_K_M | ~66 GB | 64 GB Mac / 2× RTX 3090 | Budget option; expect some reasoning loss. |
| Q6_K | ~114 GB | 128 GB Mac Ultra | High quality, near-baseline. |
| Q8_0 | ~148 GB | 192+ GB systems | Near-lossless. |
| IQ4_NL-MoE | ~80 GB | 96 GB Mac / 2× RTX 3090 | MoE-aware: attn=Q8_0, experts=IQ4_NL, embed/output=Q6_K. Mirrors ubergarm's mainline-compatible recipe. |
## Which should you pick?
- 96 GB Apple Silicon (Mac Studio M4 Max): Q4_K_M (~84 GB) leaves ~12 GB for KV cache at ~16K context.
- 64 GB Mac: Q3_K_M is the only variant that fits. Expect some reasoning-quality degradation.
- 128 GB Mac Ultra / 2× A6000: Q6_K for near-baseline quality.
- 192+ GB system (dual H100 / RTX 6000 Ada): Q8_0 for minimal quality loss.
- Alternative to Q4_K_M on 96 GB: IQ4_NL-MoE keeps attention at Q8_0 and quantizes only the expert FFN tensors. Similar size, often better code/reasoning quality.
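For a quick programmatic version of the guidance above, the memory-to-quant mapping can be sketched as a small helper (`pick_quant` is a hypothetical name for illustration, not part of any shipped tool):

```python
def pick_quant(mem_gb: float) -> str:
    """Map available unified/GPU memory (GB) to the suggested quant,
    following the sizing guidance in this card. Illustration only."""
    if mem_gb >= 192:
        return "Q8_0"    # ~148 GB, near-lossless
    if mem_gb >= 128:
        return "Q6_K"    # ~114 GB, near-baseline quality
    if mem_gb >= 96:
        return "Q4_K_M"  # ~84 GB, leaves ~12 GB for KV cache
    if mem_gb >= 64:
        return "Q3_K_M"  # ~66 GB, budget option
    raise ValueError("at least 64 GB is needed for the smallest quant")

print(pick_quant(96))  # Q4_K_M
```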
## Evaluation
HumanEval pass@1 on Q4_K_M (completed-reasoning subset only): 83.3% (90/108)
For problems where the model completed its <think> reasoning within a 32 K-token generation budget, the Q4_K_M quant solved 90 of 108 correctly.
Strict pass@1 (all 164 problems, cap-outs counted as fails): 54.9 %
56 of 164 problems exhausted the 32 K reasoning budget mid-<think> and are counted as fails under strict academic scoring. Allocate ≥64 K tokens to approach the 83 % ceiling.
Methodology: 2× H100 80 GB, llama.cpp /v1/chat/completions, native <think> enabled, temperature=0.2, top_p=0.95, max_tokens=32000.
Prior methodology note: an earlier evaluation using raw /v1/completions with chat-prose stripping (non-canonical for reasoning models) reported 65.2 %. The numbers above use the canonical chat-completion path.
Smoke test (5 diverse pre-publish prompts): 5/5 PASS (trivial arithmetic, Python Fibonacci, Norwegian response, MoE semantic explanation, JSON tool-call echo).
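The two pass@1 figures above come from the same raw counts under different scoring conventions; the arithmetic is simply:

```python
def pass_at_1(solved: int, completed: int, total: int) -> tuple[float, float]:
    """Completed-subset vs. strict HumanEval pass@1.
    Cap-outs (total - completed) count as failures only in strict mode."""
    return solved / completed, solved / total

subset, strict = pass_at_1(solved=90, completed=108, total=164)
print(f"{subset:.1%} / {strict:.1%}")  # 83.3% / 54.9%
```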
## Memory & context sizing for consumer hardware
### 96 GB Apple Silicon (primary target)
| Variant | File size | ctx 8K | ctx 32K | ctx 60K | ctx 131K |
|---|---|---|---|---|---|
| Q4_K_M | 84 GB | ✅ | ✅ w/ KV q8_0 | ✅ w/ KV q4_0 | requires KV q4_0 |
| IQ4_XS | 74 GB | ✅ | ✅ | ✅ | ✅ w/ KV q8_0 |
| Q3_K_M | 66 GB | ✅ | ✅ | ✅ | ✅ |
| IQ4_NL-MoE | 80 GB | ✅ | ✅ w/ KV q8_0 | ✅ w/ KV q4_0 | requires KV q4_0 |
| Q6_K / Q8_0 | 114 / 148 GB | too large for a 96 GB system | n/a | n/a | n/a |
The native FP16 KV cache costs ~0.25 GB per 1K tokens for this architecture (62 layers × 1024 KV dim × 2 bytes, × 2 for K and V). That is non-trivial at long context: Q4_K_M at ctx=60K needs ~15 GB of KV cache alone.
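The per-token arithmetic above can be written out explicitly. This sketch also takes a bytes-per-element argument so the q8_0/q4_0 sizes in the table below fall out of the same formula (approximate; quantization block overhead is ignored):

```python
def kv_cache_gb(ctx: int, n_layers: int = 62, kv_dim: int = 1024,
                bytes_per_elem: float = 2.0) -> float:
    """Approximate KV-cache size in GB: K and V each store
    n_layers * kv_dim elements per token. bytes_per_elem is 2.0 for
    FP16, ~1.0 for q8_0, ~0.5 for q4_0 (block overhead ignored)."""
    per_token = 2 * n_layers * kv_dim * bytes_per_elem  # x2 for K and V
    return per_token * ctx / 1e9

print(round(kv_cache_gb(60_000), 1))  # 15.2 -> matches the ~15 GB figure above
```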
### KV-cache quantization: essential for long context on 96 GB
llama.cpp supports quantizing the KV cache with near-zero quality loss:
```bash
./llama-server -m MiniMax-M2.7-REAP-139B-A10B-Q4_K_M.gguf \
  -c 65536 -ngl 99 \
  --cache-type-k q8_0 --cache-type-v q8_0
```
| KV type | Size @ ctx=60K | Quality impact |
|---|---|---|
| FP16 (default) | 15 GB | baseline |
| q8_0 | 7.5 GB | essentially lossless (recommended) |
| q4_0 / q4_1 | 3.8 GB | very small degradation; worth it for extreme context |
### Other systems
- 64 GB Mac / 2× RTX 3090: Q3_K_M with q8_0 KV fits at ctx=32K.
- 128 GB Mac Ultra: Q6_K comfortably at ctx=32K, tight at longer context.
- Dual H100 (160 GB) / 192 GB+ systems: Q8_0 near-lossless, full context.
## Known minor imperfection
During the integrity audit, one layer (layer 0) had expert keep-indices that differed from the REAP-retained set in ~86 of 154 positions. The bias-value mismatch is bounded by layer 0's natural bias variance (max |Δ| = 0.75 on values ∈ [8.06, 8.88]), so router behavior is essentially unchanged, as confirmed by the 5/5 smoke test above. All other 61 layers are bit-perfect. Details in the safetensors model card.
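A minimal sketch of the kind of per-layer check described above: count disagreements between two expert keep-index lists and bound the bias delta. The function name and inputs are hypothetical, not the actual audit script:

```python
def audit_layer(keep_a, keep_b, bias_a, bias_b):
    """Count positions where two expert keep-index lists disagree,
    and return the largest absolute difference between two bias vectors."""
    mismatches = sum(1 for x, y in zip(keep_a, keep_b) if x != y)
    max_delta = max(abs(x - y) for x, y in zip(bias_a, bias_b))
    return mismatches, max_delta
```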
## Citation
See the safetensors repo for full citation details. Core references:
- Lasby et al., REAP the Experts (arXiv:2510.13999)
- MiniMax-M2.7 base model (MiniMaxAI)
## License
Inherits the Modified MIT License from MiniMaxAI/MiniMax-M2.7.
Published by m51Lab: open-source LLM contributions from the M51 AI OS group.