Mixed Precision GGUF layer quantization of Qwen3-Omni-30B-A3B-Instruct by Qwen

Original model: https://huggingface.co/Qwen/Qwen3-Omni-30B-A3B-Instruct

The hybrid quant employs different quantization levels on a per-layer basis to achieve both high performance and small file size at the same time. This particular quant was optimized for high performance across a set of test prompts at approximately IQ4_XS size.

The quants employed are all K quants, to avoid the slow processing of IQ quants on CPUs and older GPUs. For this file the layer quants are as follows:

Q4_K_L : attn_v = Q6_K, attn_o = Q6_K, ffn_d = Q6_K
Q5_K_L : attn_v = Q8_0, attn_o = Q6_K, ffn_d = Q6_K
Q6_K_S : Q6_K for all tensors

LAYER_TYPES='[
   [0 ,"Q6_K_S"],[1 ,"Q5_K_S"],[2 ,"Q3_K_L"],[3 ,"Q3_K_M"],[4 ,"Q3_K_M"],[5 ,"Q3_K_M"],[6 ,"Q3_K_M"],[7 ,"Q3_K_M"],
   [8 ,"Q3_K_M"],[9 ,"Q3_K_M"],[10,"Q3_K_M"],[11,"Q3_K_M"],[12,"Q3_K_M"],[13,"Q3_K_M"],[14,"Q3_K_M"],[15,"Q3_K_M"],
   [16,"Q3_K_L"],[17,"Q3_K_M"],[18,"Q3_K_L"],[19,"Q3_K_M"],[20,"Q3_K_L"],[21,"Q3_K_M"],[22,"Q3_K_L"],[23,"Q3_K_M"],
   [24,"Q3_K_L"],[25,"Q3_K_L"],[26,"Q3_K_L"],[27,"Q3_K_L"],[28,"Q3_K_L"],[29,"Q3_K_L"],[30,"Q3_K_L"],[31,"Q3_K_L"],
   [32,"Q4_K_S"],[33,"Q4_K_S"],[34,"Q4_K_S"],[35,"Q4_K_S"],[36,"Q4_K_S"],[37,"Q4_K_S"],[38,"Q4_K_S"],[39,"Q4_K_S"],
   [40,"Q4_K_S"],[41,"Q4_K_S"],[42,"Q4_K_M"],[43,"Q4_K_L"],[44,"Q5_K_S"],[45,"Q5_K_M"],[46,"Q5_K_L"],[47,"Q6_K_S"]
   ]'
FLAGS="--token-embedding-type Q6_K --output-tensor-type Q6_K --layer-types-high"
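As a quick sanity check of the per-layer map, the quant mix can be tallied with plain shell text processing (this only parses the LAYER_TYPES string above; it does not require llama.cpp):

```shell
# Tally how many of the 48 layers use each quant type in LAYER_TYPES.
LAYER_TYPES='[
   [0 ,"Q6_K_S"],[1 ,"Q5_K_S"],[2 ,"Q3_K_L"],[3 ,"Q3_K_M"],[4 ,"Q3_K_M"],[5 ,"Q3_K_M"],[6 ,"Q3_K_M"],[7 ,"Q3_K_M"],
   [8 ,"Q3_K_M"],[9 ,"Q3_K_M"],[10,"Q3_K_M"],[11,"Q3_K_M"],[12,"Q3_K_M"],[13,"Q3_K_M"],[14,"Q3_K_M"],[15,"Q3_K_M"],
   [16,"Q3_K_L"],[17,"Q3_K_M"],[18,"Q3_K_L"],[19,"Q3_K_M"],[20,"Q3_K_L"],[21,"Q3_K_M"],[22,"Q3_K_L"],[23,"Q3_K_M"],
   [24,"Q3_K_L"],[25,"Q3_K_L"],[26,"Q3_K_L"],[27,"Q3_K_L"],[28,"Q3_K_L"],[29,"Q3_K_L"],[30,"Q3_K_L"],[31,"Q3_K_L"],
   [32,"Q4_K_S"],[33,"Q4_K_S"],[34,"Q4_K_S"],[35,"Q4_K_S"],[36,"Q4_K_S"],[37,"Q4_K_S"],[38,"Q4_K_S"],[39,"Q4_K_S"],
   [40,"Q4_K_S"],[41,"Q4_K_S"],[42,"Q4_K_M"],[43,"Q4_K_L"],[44,"Q5_K_S"],[45,"Q5_K_M"],[46,"Q5_K_L"],[47,"Q6_K_S"]
   ]'
# Extract each quoted quant name, then count occurrences per type.
echo "$LAYER_TYPES" | grep -o '"Q[^"]*"' | sort | uniq -c | sort -rn
```

Most layers sit at Q3_K_M/Q3_K_L, while the first and last few layers are kept at higher precision.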

Comparison:

Quant    Size     PPL   Comment
IQ4_XS   16.6e9   7.3   IQ4_XS with default embedding and output
Q4_K_H   16.9e9   7.4   hybrid quant with Q6_K embedding and Q6_K output

Usage:

Qwen3-Omni-30B-A3B-Instruct is a vision- and audio-capable MoE model. Used together with its multimedia projector layers, it can process image, audio, and text inputs and generate text outputs. The mmproj file is available in this repository.

This MoE model can be run efficiently by offloading expert layers to CPU. Some example configurations for use with a 12 GB VRAM GPU:

# Offload all experts to CPU, maximize context size on GPU : 24tps gen rate on 9900k+4070
OT="-ot exps=CPU -ngl 99"

# Offload only the experts of layers 30 to 47 to CPU for max inference speed with usable context size : 36tps gen rate
OT="-ot blk\.(3[0-9]|4[0-7])\..*exps=CPU -ngl 99"

# Offload the experts of layers 25 to 47 to CPU for a bigger context size with still-high gen speed : 29tps gen rate
OT="-ot blk\.(2[5-9]|3[0-9]|4[0-7])\..*exps=CPU -ngl 99"
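The layer-range pattern passed to -ot is intended to send only the expert tensors of layers 30 to 47 (or 25 to 47) to CPU, leaving attention tensors on GPU. A quick grep check against illustrative llama.cpp-style tensor names (the names here are examples, not taken from this card):

```shell
# Sanity-check which tensor names the layers-30-to-47 pattern matches.
PAT='blk\.(3[0-9]|4[0-7])\..*exps'
for t in blk.29.ffn_gate_exps.weight \
         blk.30.ffn_gate_exps.weight \
         blk.47.ffn_up_exps.weight \
         blk.47.attn_q.weight; do
  if echo "$t" | grep -Eq "$PAT"; then echo "CPU  $t"; else echo "GPU  $t"; fi
done
```

Without the parentheses around the alternation, the pattern would also match non-expert tensors (e.g. attention weights) in those layers, offloading more than intended.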

The minimum llama.cpp version needed to run Qwen3-Omni is b8769.
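Putting the pieces together, a minimal launch sketch (the -c value is a placeholder, not from this card; the model and mmproj filenames match the download table in this repository):

```shell
# Launch llama-server with all experts on CPU and the multimedia projector loaded.
OT="-ot exps=CPU -ngl 99"
llama-server -m Qwen3-Omni-30B-A3B-Instruct.Q4_K_H.gguf \
             --mmproj Qwen3-Omni-30B-A3B-Instruct.mmproj.gguf \
             -c 16384 $OT
```

Swap in one of the other OT strings above to trade context size for generation speed.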

Benchmarks:

A full set of audio and vision benchmarks for the model will eventually be given here: https://huggingface.co/spaces/steampunque/benchlm

Download the files from below:

Link                                      Type     Size/e9 B   Notes
Qwen3-Omni-30B-A3B-Instruct.Q4_K_H.gguf   Q4_K_H   16.9        ~IQ4_XS size
Qwen3-Omni-30B-A3B-Instruct.mmproj.gguf   F16      2.2         multimedia projector

A discussion thread about the hybrid layer quant approach can be found here on the llama.cpp git repository:

https://github.com/ggml-org/llama.cpp/discussions/13040
