Notice
- This quant is deprecated. It was a stopgap while support for the native MXFP4 format was being implemented.
- Please use the native MXFP4 model instead: openai/gpt-oss-20b.
Information
See gpt-oss-20b 6.5bit MLX in action: demonstration video.
The q6.5-bit quant typically achieves 1.128 perplexity in our testing (lower is better), matching q8 quality; a sketch of one way to measure this follows the table.
| Quantization | Perplexity |
|---|---|
| q2 | 41.293 |
| q3 | 1.900 |
| q4 | 1.168 |
| q6 | 1.128 |
| q8 | 1.128 |
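The figures above come from our internal testing; as a rough illustration only, the sketch below shows one common way to compute perplexity for an MLX model with mlx-lm. The evaluation text and the use of the stock mlx-lm Python API are assumptions (this is not the harness used to produce the table), and the API can differ between mlx-lm versions.

```python
# Hedged sketch: computing perplexity for an MLX model with mlx-lm.
# Assumes `pip install mlx-lm`; the eval text is a placeholder and this is
# not the harness used to produce the table above.
import math

import mlx.core as mx
import mlx.nn as nn
from mlx_lm import load

model, tokenizer = load("inferencerlabs/openai-gpt-oss-20b-MLX-6.5bit")

text = "Your held-out evaluation text goes here."  # placeholder corpus
tokens = tokenizer.encode(text)

# Next-token prediction: logits at positions 0..n-2 predict tokens 1..n-1.
inputs = mx.array(tokens[:-1])[None]
targets = mx.array(tokens[1:])[None]

logits = model(inputs)
loss = nn.losses.cross_entropy(logits, targets, reduction="mean")
print(f"perplexity: {math.exp(loss.item()):.3f}")
```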
Usage Notes
- Tested with the Inferencer app
- Memory usage: ~17 GB
- Expect ~100 tokens/s
- Quantized with a modified version of MLX 0.26
- For more details, see the demonstration video or visit openai/gpt-oss-20b; a minimal loading example follows this list.
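For quick verification outside Inferencer, here is a minimal sketch of loading and prompting this quant with the mlx-lm Python API. Because the quant was produced with a modified MLX 0.26, loading with an unmodified mlx-lm is an assumption and may require a matching or patched version; the prompt and token budget are placeholders.

```python
# Hedged sketch: loading and prompting the 6.5-bit quant with mlx-lm.
# Assumes `pip install mlx-lm` and ~17 GB of free unified memory; a stock
# mlx-lm may need a matching/patched MLX version to load this quant.
from mlx_lm import load, generate

model, tokenizer = load("inferencerlabs/openai-gpt-oss-20b-MLX-6.5bit")

response = generate(
    model,
    tokenizer,
    prompt="Summarize the benefits of 6.5-bit quantization.",  # placeholder
    max_tokens=256,
    verbose=True,  # prints generation stats, handy for checking the ~100 tokens/s figure
)
print(response)
```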
Disclaimer
We are not the creator, originator, or owner of any model listed. Each model is created and provided by third parties. Models may not always be accurate or contextually appropriate; you are responsible for verifying information before making important decisions. We are not liable for any damages, losses, or issues arising from their use, including data loss or inaccuracies in AI-generated content.
Model size
- 21B params
Tensor types
- BF16, U32
Model tree for inferencerlabs/openai-gpt-oss-20b-MLX-6.5bit
- Base model: openai/gpt-oss-20b