Paper: GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers (arXiv:2210.17323)
Quantized version of DeepSeek-R1-Distill-Llama-8B.
This model was obtained by quantizing the weights of DeepSeek-R1-Distill-Llama-8B to the INT4 data type. This optimization reduces the number of bits per parameter from 16 to 4, cutting the disk size and GPU memory requirements by approximately 75%.
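As a rough, illustrative estimate (it ignores embeddings, per-group scales and zero-points, and file-format overhead), the weight footprint scales directly with the number of bits per parameter:

# Back-of-envelope estimate only: ~8B parameters at 16-bit vs. 4-bit weights.
params = 8e9
bf16_gb = params * 16 / 8 / 1e9   # roughly 16 GB of weights at 16 bits/param
int4_gb = params * 4 / 8 / 1e9    # roughly 4 GB of weights at 4 bits/param
print(bf16_gb, int4_gb, 1 - int4_gb / bf16_gb)  # ~75% reduction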
Only the weights of the linear operators within the transformer blocks are quantized, using an asymmetric per-group scheme with a group size of 128. Quantization is performed with the GPTQ algorithm, as implemented in the llm-compressor library.
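For intuition, the sketch below shows what an asymmetric per-group INT4 grid with group size 128 looks like for a single weight matrix. It is a simplified round-to-nearest illustration (the function name and shapes are hypothetical), not the llm-compressor implementation; GPTQ additionally adjusts the remaining weights to compensate for the quantization error introduced at each step.

import torch

def quantize_w4a16_asym(weight: torch.Tensor, group_size: int = 128):
    # Illustrative sketch only. Assumes in_features is a multiple of group_size.
    out_features, in_features = weight.shape
    w = weight.reshape(out_features, in_features // group_size, group_size)
    # Asymmetric scheme: one scale and one zero-point per group of 128 values.
    w_min = w.amin(dim=-1, keepdim=True)
    w_max = w.amax(dim=-1, keepdim=True)
    scale = ((w_max - w_min) / 15.0).clamp(min=1e-8)  # INT4 grid: 0..15
    zero_point = torch.round(-w_min / scale)
    q = torch.clamp(torch.round(w / scale) + zero_point, 0, 15)
    # At inference the 4-bit weights are dequantized and used with 16-bit activations.
    w_dq = ((q - zero_point) * scale).reshape(out_features, in_features)
    return q.to(torch.uint8), scale, zero_point, w_dq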
This model can be deployed efficiently using the vLLM backend, as shown in the example below.
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

number_gpus = 1
model_name = "Saktsant/DeepSeek-R1-Distill-Llama-8B-quantized.w4a16"

tokenizer = AutoTokenizer.from_pretrained(model_name)
sampling_params = SamplingParams(temperature=0.6, max_tokens=256, stop_token_ids=[tokenizer.eos_token_id])
llm = LLM(model=model_name, tensor_parallel_size=number_gpus, trust_remote_code=True)

messages_list = [
    [{"role": "user", "content": "Who are you? Please respond in pirate speak!"}],
]

# Render each conversation with the chat template to obtain prompt token IDs.
prompt_token_ids = [tokenizer.apply_chat_template(messages, add_generation_prompt=True) for messages in messages_list]

outputs = llm.generate(prompt_token_ids=prompt_token_ids, sampling_params=sampling_params)

generated_text = [output.outputs[0].text for output in outputs]
print(generated_text)
vLLM also supports OpenAI-compatible serving. See the documentation for more details.
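As a minimal sketch (assuming vLLM's default host and port and the openai Python client; the api_key value is a placeholder), the server can be started with "vllm serve Saktsant/DeepSeek-R1-Distill-Llama-8B-quantized.w4a16" and queried like this:

# Assumes the server is already running in another shell:
#   vllm serve Saktsant/DeepSeek-R1-Distill-Llama-8B-quantized.w4a16
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="Saktsant/DeepSeek-R1-Distill-Llama-8B-quantized.w4a16",
    messages=[{"role": "user", "content": "Who are you? Please respond in pirate speak!"}],
    temperature=0.6,
    max_tokens=256,
)
print(response.choices[0].message.content)

This model was created by applying one-shot weight quantization to the base model with the llm-compressor library, as shown in the code below.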
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor import oneshot
from datasets import load_dataset
model_stub = "deepseek-ai/DeepSeek-R1-Distill-Llama-8B"
model_name = model_stub.split("/")[-1]
num_samples = 512
max_seq_len = 2048
tokenizer = AutoTokenizer.from_pretrained(model_stub)
model = AutoModelForCausalLM.from_pretrained(
    model_stub,
    device_map="auto",
    torch_dtype="auto",
)
def preprocess_fn(example):
    # Render each calibration conversation with the chat template so the
    # calibration data matches the model's chat format.
    return {
        "text": tokenizer.apply_chat_template(
            example["messages"], add_generation_prompt=False, tokenize=False
        )
    }

ds = load_dataset("neuralmagic/LLM_compression_calibration", split="train")
ds = ds.map(preprocess_fn)
# Quantize the weights of all Linear layers (except the LM head) to INT4
# with an asymmetric per-group scheme, using the GPTQ algorithm.
recipe = [
    GPTQModifier(ignore=["lm_head"], scheme="W4A16_ASYM", targets=["Linear"]),
]
# Apply one-shot post-training quantization using the calibration samples.
oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=max_seq_len,
    num_calibration_samples=num_samples,
)
# Save the quantized model and tokenizer in compressed-tensors format.
save_path = model_name + "-quantized.w4a16"
model.save_pretrained(save_path, save_compressed=True)
tokenizer.save_pretrained(save_path)
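The resulting directory can be loaded back directly, for example with vLLM (a quick sanity check; save_path is the variable defined above):

from vllm import LLM

llm = LLM(model=save_path)  # loads the local quantized checkpoint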
Base model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B