Qwen3.6-35B-A3B Heretic

Quality: quantized (mixed quantization per tensor, group size 32, 6.439 bits per weight)

Quantization: 5-bit for expert tensors; 8-bit for the head, shared experts, and attention tensors; bf16 for embeddings and some linear attention tensors.
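You can inspect the per-tensor quantization yourself: MLX checkpoints record it in config.json. Below is a small sketch; the exact layout of the quantization section (and any per-layer overrides) can vary between mlx versions.

```python
# Sketch: inspect the quantization settings recorded in this repo's
# config.json. MLX checkpoints store a "quantization" section there;
# mixed-quant models typically add per-layer overrides as well.
import json
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="TheCluster/Qwen3.6-35B-A3B-Heretic-MLX-mixed-6.4bit",
    filename="config.json",
)
with open(path) as f:
    config = json.load(f)

print(config.get("quantization"))  # e.g. {"group_size": 32, "bits": ...}
```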

This is an uncensored version of Qwen/Qwen3.6-35B-A3B, made using Heretic v1.2.0 (mpoa+soma).

Abliteration metrics

| Metric | This model | Original model (unsloth/Qwen3.6-35B-A3B) |
|---|---|---|
| KL divergence | 0.0097 | 0 (by definition) |
| Refusals | 5/100 | 86/100 |
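For context, the KL divergence measures how far this model's next-token distribution drifts from the original's on harmless prompts (lower means behavior is better preserved). Below is a minimal illustrative sketch of how such a score can be computed; it is not Heretic's exact procedure, and the logits arrays are hypothetical stand-ins.

```python
# Illustrative sketch: mean per-token KL(original || modified) from two
# [seq_len, vocab] logits arrays. Not Heretic's actual evaluation code.
import numpy as np

def mean_kl(logits_original: np.ndarray, logits_modified: np.ndarray) -> float:
    def log_softmax(x):
        x = x - x.max(axis=-1, keepdims=True)
        return x - np.log(np.exp(x).sum(axis=-1, keepdims=True))

    logp = log_softmax(logits_original)
    logq = log_softmax(logits_modified)
    p = np.exp(logp)
    # KL(P || Q) = sum_i p_i * (log p_i - log q_i), averaged over positions
    return float((p * (logp - logq)).sum(axis=-1).mean())
```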

Abliteration parameters

| Parameter | Value |
|---|---|
| direction_index | per layer |
| attn.o_proj.max_weights.0 | 0.93 |
| attn.o_proj.max_weights.1 | 1.38 |
| attn.o_proj.max_weights.2 | 1.37 |
| attn.o_proj.max_weights.3 | 1.08 |
| attn.o_proj.max_weight_position | 24.08 |
| attn.o_proj.min_weights.0 | 0.34 |
| attn.o_proj.min_weights.1 | 0.95 |
| attn.o_proj.min_weights.2 | 1.35 |
| attn.o_proj.min_weights.3 | 0.54 |
| attn.o_proj.min_weight_distance | 9.81 |
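For readers unfamiliar with the technique: abliteration removes a learned "refusal direction" from selected weight matrices, and parameters like those above control how strongly that happens per layer. The sketch below shows only the core projection step under those assumptions; the function name and scaling are illustrative, not Heretic's internals.

```python
# Illustrative sketch of directional ablation: remove the component of a
# weight matrix's output along a unit "refusal direction" r, scaled by a
# per-layer weight w (compare the max/min weights in the table above).
# Heretic's actual update rule and direction extraction are more involved.
import numpy as np

def ablate_matrix(W: np.ndarray, r: np.ndarray, w: float) -> np.ndarray:
    """W: [d_out, d_in] weight matrix; r: [d_out] refusal direction."""
    r = r / np.linalg.norm(r)        # normalize to a unit direction
    projection = np.outer(r, r) @ W  # component of W's output along r
    return W - w * projection        # w = 1.0 removes it fully; w > 1 over-removes

# Example with random stand-ins for a single layer:
W = np.random.randn(64, 64)
r = np.random.randn(64)
W_ablated = ablate_matrix(W, r, w=0.93)  # cf. max_weights.0 above
```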

Sampling Parameters:

  • I suggest using the following sets of sampling parameters depending on the mode and task type:
    • Thinking mode for general tasks:
      temperature=1.0, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0
    • Thinking mode for precise coding tasks (e.g., WebDev):
      temperature=0.6, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=0.0, repetition_penalty=1.0
    • Instruct (or non-thinking) mode for general tasks:
      temperature=0.7, top_p=0.8, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0
    • Instruct (or non-thinking) mode for reasoning tasks:
      temperature=1.0, top_p=1.0, top_k=40, min_p=0.0, presence_penalty=2.0, repetition_penalty=1.0
  • For supported frameworks, you can adjust the presence_penalty parameter between 0 and 2 to reduce endless repetition. Higher values may occasionally cause language mixing and slightly degrade model performance. A sketch of applying one of these presets follows below.
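As a concrete example, here is one way to apply the "thinking mode, general tasks" preset with mlx-lm. The make_sampler and make_logits_processors helpers ship with recent mlx-lm releases, but presence_penalty is not exposed there to my knowledge, so set it in your serving framework instead. Treat this as a sketch that may need adjusting for your mlx-lm version.

```python
# Sketch: apply the "thinking mode, general tasks" preset with mlx-lm.
# The sampler/logits_processors kwargs and mlx_lm.sample_utils helpers
# exist in recent mlx-lm releases; presence_penalty is assumed to be
# handled by the serving framework, not set here.
from mlx_lm import load, generate
from mlx_lm.sample_utils import make_logits_processors, make_sampler

model, tokenizer = load("TheCluster/Qwen3.6-35B-A3B-Heretic-MLX-mixed-6.4bit")

sampler = make_sampler(temp=1.0, top_p=0.95, top_k=20, min_p=0.0)
logits_processors = make_logits_processors(repetition_penalty=1.0)

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Briefly explain mixture-of-experts routing."}],
    add_generation_prompt=True,
)
print(
    generate(
        model,
        tokenizer,
        prompt=prompt,
        max_tokens=512,
        sampler=sampler,
        logits_processors=logits_processors,
    )
)
```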

Source

This model was converted to MLX format from tvall43/Qwen3.6-35B-A3B-heretic using mlx-vlm version 0.4.4.
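The exact mlx-vlm invocation isn't recorded here. For readers who want to reproduce a similar conversion, mlx-lm exposes a comparable Python API; the following is an analogous sketch, not the command actually used, and per-tensor mixed quantization needs extra handling beyond it.

```python
# Analogous conversion sketch using mlx-lm's Python API. The published
# model was actually converted with mlx-vlm 0.4.4 (exact invocation not
# shown), and its mixed per-tensor quantization is not reproduced here.
from mlx_lm import convert

convert(
    "tvall43/Qwen3.6-35B-A3B-heretic",       # source repo named above
    mlx_path="Qwen3.6-35B-A3B-Heretic-MLX",  # local output directory
    quantize=True,
    q_group_size=32,  # group size stated in this card
    q_bits=6,         # uniform bits; the real model mixes 5/8-bit tensors
)
```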
