Qwen3.6-27B

Quality: quantized (mixed quants per tensor, group size: 32, 5.627 bpw)

Most tensors use 4-bit or 8-bit affine quantization with a group size of 32; some important tensors are kept in bf16.
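As a rough illustration of what group-wise affine quantization does, here is a minimal NumPy sketch (not the actual MLX kernel): each group of 32 weights is mapped to integers via a per-group scale and offset, then reconstructed approximately on dequantization.

```python
import numpy as np

def quantize_affine(w, bits=4, group_size=32):
    # Group-wise affine quantization: q = round((w - min) / scale),
    # with one (scale, min) pair stored per group of `group_size` weights.
    w = w.reshape(-1, group_size)
    wmin = w.min(axis=1, keepdims=True)
    wmax = w.max(axis=1, keepdims=True)
    scale = (wmax - wmin) / (2**bits - 1)
    scale = np.where(scale == 0, 1.0, scale)  # avoid division by zero
    q = np.round((w - wmin) / scale).astype(np.uint8)
    return q, scale, wmin

def dequantize_affine(q, scale, wmin):
    # Approximate reconstruction of the original weights.
    return q * scale + wmin

rng = np.random.default_rng(0)
w = rng.standard_normal(64).astype(np.float32)
q, scale, wmin = quantize_affine(w, bits=4, group_size=32)
w_hat = dequantize_affine(q, scale, wmin).reshape(-1)
print("max reconstruction error:", np.abs(w - w_hat).max())
```

The stored bits-per-weight figure (5.627 bpw here) is the average over all tensors, including the per-group scales/offsets and the tensors kept in bf16.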

Recommended settings

  1. Sampling Parameters:
    • The developers suggest using the following sets of sampling parameters depending on the mode and task type:
      • Thinking mode for general tasks:
        temperature=1.0, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0
      • Thinking mode for precise coding tasks (e.g., WebDev):
        temperature=0.6, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=0.0, repetition_penalty=1.0
      • Instruct (or non-thinking) mode for general tasks:
        temperature=0.7, top_p=0.8, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0
      • Instruct (or non-thinking) mode for reasoning tasks:
        temperature=1.0, top_p=1.0, top_k=40, min_p=0.0, presence_penalty=2.0, repetition_penalty=1.0
    • For supported frameworks, you can adjust presence_penalty between 0 and 2 to reduce endless repetition. Note that a higher value may occasionally cause language mixing and a slight drop in model quality.
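The recommended settings above can be collected into a small preset table, e.g. for passing to a sampler or an OpenAI-compatible API request (the preset names are illustrative, the values are those documented above):

```python
# Recommended sampling presets from the model card, keyed by mode/task.
PRESETS = {
    "thinking_general": dict(temperature=1.0, top_p=0.95, top_k=20,
                             min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0),
    "thinking_coding": dict(temperature=0.6, top_p=0.95, top_k=20,
                            min_p=0.0, presence_penalty=0.0, repetition_penalty=1.0),
    "instruct_general": dict(temperature=0.7, top_p=0.8, top_k=20,
                             min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0),
    "instruct_reasoning": dict(temperature=1.0, top_p=1.0, top_k=40,
                               min_p=0.0, presence_penalty=2.0, repetition_penalty=1.0),
}

# Example: pick the preset for precise coding in thinking mode.
params = PRESETS["thinking_coding"]
print(params["temperature"])
```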

Source

This model was converted to MLX format from Qwen/Qwen3.6-27B using mlx-vlm version 0.4.4.

Model details

Model size: 28B params
Tensor types: BF16, U32
Format: MLX (safetensors)
