Image-Text-to-Text · Safetensors · MLX · English · Chinese

mlx - DavidAU/Qwen3.5-27B-Deckard-PKD-Heretic-Uncensored-Thinking

Tags: qwen3_5, unsloth, fine-tune, heretic, uncensored, abliterated, multi-stage tuned, all use cases, coder, creative, creative writing, fiction writing, plot generation, sub-plot generation, story generation, scene continue, storytelling, fiction story, science fiction, romance, all genres, story, writing, vivid prose, vivid writing, fiction, roleplaying, bfloat16, conversational, 4-bit precision
🦆 zecanard/Qwen3.5-40B-Claude-4.6-Opus-Deckard-Heretic-Uncensored-Thinking-MLX-4bit-nvfp4
This model was converted to MLX from DavidAU/Qwen3.5-40B-Claude-4.6-Opus-Deckard-Heretic-Uncensored-Thinking using mlx-vlm version 0.4.4.
Please refer to the original model card for more details.
🌟 Quality
Quantized vision-language model with 4.635 bits per weight, converted with:

```shell
mlx_vlm.convert --quantize --q-bits 4 --q-group-size 16 --q-mode nvfp4
```
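The 4.635 figure can be sanity-checked with simple arithmetic. Assuming nvfp4 stores each weight in 4 bits plus one 8-bit floating-point scale per group of 16 weights (my reading of the format, not the converter's documented accounting), the quantized tensors alone cost 4 + 8/16 = 4.5 bits per weight; the remainder comes from tensors kept at higher precision:

```python
# Back-of-the-envelope bits-per-weight for grouped quantization.
# Assumption (not from the model card): each group of `group_size`
# weights stores `bits`-bit values plus one `scale_bits`-bit scale.

def quantized_bpw(bits: int, group_size: int, scale_bits: int) -> float:
    """Effective bits per weight for grouped quantization."""
    return bits + scale_bits / group_size

bpw = quantized_bpw(bits=4, group_size=16, scale_bits=8)
print(bpw)  # 4.5 for the quantized tensors alone
# The reported 4.635 bpw is slightly higher because some tensors
# (e.g. embeddings or norm layers) typically stay in higher precision.
```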
🛠️ Customizations
This quant's chat template is aware of the current date and also enables thinking (if available). You can disable thinking by deleting the following line from the chat template:

```jinja
{%- set enable_thinking = true %}
```
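For context, here is a minimal sketch of how Qwen-style Jinja chat templates commonly consume such a flag (illustrative only; the actual template shipped with this repo may differ):

```jinja
{%- set enable_thinking = true %}
{%- if not enable_thinking %}
{#- With thinking off, Qwen-style templates pre-fill an empty think block -#}
{{- '<think>\n\n</think>\n\n' }}
{%- endif %}
```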
🖥️ Use with mlx
```shell
pip install -U mlx-vlm
mlx_vlm.generate --model zecanard/Qwen3.5-40B-Claude-4.6-Opus-Deckard-Heretic-Uncensored-Thinking-MLX-4bit-nvfp4 --max-tokens 100 --temperature 0 --prompt "Describe this image." --image <path_to_image>
```
- Downloads last month: 504
- Model size: 10B params
- Tensor types: U8, U32, BF16
Model tree for zecanard/Qwen3.5-40B-Claude-4.6-Opus-Deckard-Heretic-Uncensored-Thinking-MLX-4bit-nvfp4
- Base model: Qwen/Qwen3.5-27B
- Finetuned: coder3101/Qwen3.5-27B-heretic