MiniMaxAI/SynLogic-32B
How to use MiniMaxAI/SynLogic-32B with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="MiniMaxAI/SynLogic-32B")
messages = [
{"role": "user", "content": "Who are you?"},
]
pipe(messages)
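In recent transformers versions, chat-style inputs return the full message list, with the assistant reply last. Generation kwargs are forwarded to model.generate(); a minimal sketch with illustrative (not tuned) sampling settings:
out = pipe(messages, max_new_tokens=256, do_sample=True, temperature=0.7)
# The last message in the returned conversation is the model's reply.
print(out[0]["generated_text"][-1]["content"])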
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("MiniMaxAI/SynLogic-32B")
model = AutoModelForCausalLM.from_pretrained("MiniMaxAI/SynLogic-32B")
messages = [
{"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
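At 32B parameters, the model generally will not fit on a single consumer GPU in full precision. A loading sketch using standard from_pretrained options (assumes accelerate is installed; the dtype and device placement are illustrative, not requirements of this model):
# torch_dtype="auto" keeps the checkpoint's native dtype;
# device_map="auto" shards the weights across available devices.
model = AutoModelForCausalLM.from_pretrained(
    "MiniMaxAI/SynLogic-32B",
    torch_dtype="auto",
    device_map="auto",
)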
How to use MiniMaxAI/SynLogic-32B with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "MiniMaxAI/SynLogic-32B"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "MiniMaxAI/SynLogic-32B",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
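The same endpoint can also be called from Python with the official openai client (a sketch; assumes pip install openai, and the API key is a placeholder since vLLM does not require one by default):
from openai import OpenAI

# vLLM exposes an OpenAI-compatible API; the key is a placeholder.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="MiniMaxAI/SynLogic-32B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)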
How to use MiniMaxAI/SynLogic-32B with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "MiniMaxAI/SynLogic-32B" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "MiniMaxAI/SynLogic-32B",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'

# Alternatively, run the SGLang server in Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "MiniMaxAI/SynLogic-32B" \
--host 0.0.0.0 \
--port 30000
# The server can then be called with the same curl request shown above (OpenAI-compatible API on port 30000).
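Since SGLang also speaks the OpenAI protocol, the Python client sketch from the vLLM section works unchanged except for the base URL (same assumptions as above):
from openai import OpenAI

# Only the base URL differs from the vLLM example; SGLang serves on port 30000 here.
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="MiniMaxAI/SynLogic-32B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)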
How to use MiniMaxAI/SynLogic-32B with Docker Model Runner:
docker model run hf.co/MiniMaxAI/SynLogic-32B
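Current versions of the Docker Model Runner CLI also accept a one-shot prompt argument (an assumption about your CLI version; run without it for an interactive chat):
docker model run hf.co/MiniMaxAI/SynLogic-32B "What is the capital of France?"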
SynLogic-32B is a state-of-the-art reasoning model built on Qwen2.5-32B-Base and trained using reinforcement learning on our comprehensive SynLogic dataset. The model excels at logical reasoning tasks and demonstrates strong generalization to mathematical domains.
| Model | BBEH | KOR-Bench | BBH |
|---|---|---|---|
| Qwen2.5-32B-Instruct | 17.5 | 54.7 | 84.5 |
| DeepSeek-R1-Distill-Qwen-32B | 19.2 | 66.6 | 88.3 |
| SynLogic-32B | 25.5 | 62.2 | 85.8 |
Key Achievement: a +6.3-point improvement over DeepSeek-R1-Distill-Qwen-32B on the challenging BBEH benchmark, establishing state-of-the-art performance among open-source logical reasoning models.
Citation:
@misc{liu2025synlogic,
title={SynLogic: Synthesizing Verifiable Reasoning Data at Scale for Learning Logical Reasoning and Beyond},
author={Junteng Liu and Yuanxiang Fan and Zhuo Jiang and Han Ding and Yongyi Hu and Chi Zhang and Yiqi Shi and Shitong Weng and Aili Chen and Shiqi Chen and Yunan Huang and Mozhi Zhang and Pengyu Zhao and Junjie Yan and Junxian He},
year={2025},
eprint={2505.19641},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2505.19641},
}