LittleLamb 0.3B

Powered by CompactifAI


Tiny Model · 50% Compressed · Thinking & Non-Thinking Modes


Model Overview

LittleLamb 0.3B is a general-purpose bilingual model with 290M parameters, placing it in the same size class as 270M models such as gemma3-270m-it and functiongemma-270m-it. It was developed by Multiverse Computing from Qwen3-0.6B, an open-weight, instruction-tuned model with thinking and non-thinking capabilities and multilingual coverage, and compressed at a 50% compression rate using CompactifAI, Multiverse Computing's proprietary technology. The model supports English and Spanish and retains Qwen3's dual thinking/non-thinking modes.


Key Characteristics

| Characteristic | Description |
| --- | --- |
| Base model | Qwen3-0.6B (0.6B params, 0.44B non-embedding; open-weight, Apache 2.0) |
| Parameters | 290M total after CompactifAI compression (50% compression rate from the 0.6B base) |
| Architecture | Decoder-only Transformer (Qwen3 family) |
| Compression | CompactifAI (proprietary) |
| Languages | English and Spanish; inherits broader multilingual tokenizer coverage from Qwen3 |
| Modes | Thinking (enable_thinking=True) and non-thinking (enable_thinking=False) via chat template |

Quick Start

This model can be loaded with the Transformers library. Requires transformers>=4.51.0 for Qwen3 architecture support.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MultiverseComputingCAI/LittleLamb"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

messages = [{"role": "user", "content": "Hello!"}]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)[0]
response = tokenizer.decode(
    output_ids[len(inputs.input_ids[0]) :], skip_special_tokens=True
)
print(response)
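When thinking mode is enabled, the generated text wraps the model's reasoning in <think>...</think> tags before the final answer. A minimal way to split the two, using the </think> token id (151668) documented in the Qwen3-0.6B model card:

# Split reasoning from the final answer (thinking mode only).
# 151668 is the </think> token id in the Qwen3 tokenizer.
gen_ids = output_ids[len(inputs.input_ids[0]) :].tolist()
try:
    split = len(gen_ids) - gen_ids[::-1].index(151668)
except ValueError:
    split = 0  # no </think> found, e.g. in non-thinking mode
thinking = tokenizer.decode(gen_ids[:split], skip_special_tokens=True).strip("\n")
answer = tokenizer.decode(gen_ids[split:], skip_special_tokens=True).strip("\n")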

For OpenAI-compatible serving, use a stack that supports Qwen3 reasoning (e.g. recent vLLM or SGLang with Qwen3 parsers); see the Qwen3-0.6B model card for deployment examples.
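As an illustration, assuming the model is served locally with vLLM (e.g. vllm serve MultiverseComputingCAI/LittleLamb; reasoning-parser flags vary by vLLM version), a standard OpenAI-client call might look like:

# Hypothetical local endpoint; adjust base_url and model name to your deployment.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="MultiverseComputingCAI/LittleLamb",
    messages=[{"role": "user", "content": "¿Cuál es la capital de España?"}],
    temperature=0.7,
    top_p=0.8,
)
print(response.choices[0].message.content)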


What's New in LittleLamb 0.3B

Summary

  • Ultra-compact general-purpose model at 290M parameters, suitable for edge and on-device deployment.
  • Developed based on Qwen3-0.6B with CompactifAI compression (~50% parameter reduction from the base model's 0.6B total).
  • Bilingual focus: English and Spanish for supported use cases.

Dual-Mode Inference (Thinking / Non-Thinking)

LittleLamb 0.3B inherits Qwen3's dual-mode capability, supporting seamless switching between thinking mode (for complex reasoning) and non-thinking mode (for efficient general-purpose dialogue).

The model generates internal reasoning in Qwen3’s thinking format (see the Qwen3 chat template) before producing the final response. Use this for tasks requiring multi-step reasoning, math, or code generation.

Set enable_thinking=False for lower-latency dialogue without explicit chain-of-thought in the template. Follow the sampling parameters recommended in the Qwen3-0.6B model card for each mode.
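For example, reusing the objects from Quick Start, a non-thinking call with the sampling settings listed under Evaluation Methodology below (temperature 0.7, top_p 0.8, top_k 20, min_p 0) might look like:

# Non-thinking mode: the template omits the <think> scaffold entirely.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,
)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
output_ids = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,  # non-thinking settings from the Qwen3-0.6B model card
    top_p=0.8,
    top_k=20,
    min_p=0.0,
)[0]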


Training & Fine-Tuning

Base Model: Qwen3-0.6B

The base model, Qwen3-0.6B, is a causal language model from the Qwen3 family that supports both thinking and non-thinking modes. See the Qwen3 technical report for details.


Architecture

Model Specifications

| Field | Value |
| --- | --- |
| Base model | Qwen/Qwen3-0.6B (0.6B params) |
| Total parameters | 290M (dense) |
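As a quick sanity check, the total can be verified directly from the loaded model (reusing model from Quick Start); it should print roughly 290M:

# Count all parameters of the compressed model.
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.0f}M parameters")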

Evaluation & Benchmarks

Evaluation Methodology

Benchmark scores were obtained with the following setups. Methodology varies by benchmark family.

For LittleLamb 0.3B and Qwen3-0.6B (base), benchmark runs are reported under both thinking and non-thinking chat modes using the sampling settings recommended in the Qwen3-0.6B model card.

MMLU-Pro, GPQA Diamond, HLE (Humanity's Last Exam)

  • Evaluation framework: Nemo-skills
  • Inference library: vLLM 0.18.0
  • Thinking mode (enable_thinking=True, per Qwen3-0.6B instruct): temperature = 0.6, top_p = 0.95, top_k = 20, min_p = 0
  • Non-thinking mode (enable_thinking=False, per Qwen3-0.6B instruct): temperature = 0.7, top_p = 0.8, top_k = 20, min_p = 0
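For reference, these settings map directly onto vLLM sampling parameters; a minimal sketch of the two configurations (not the authors' exact evaluation harness):

# Illustrative vLLM sampling configs matching the settings above.
from vllm import SamplingParams

thinking = SamplingParams(temperature=0.6, top_p=0.95, top_k=20, min_p=0.0)
non_thinking = SamplingParams(temperature=0.7, top_p=0.8, top_k=20, min_p=0.0)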

Quantitative Results (Reported & Planned)

Reported numbers use the methodology described above.

Thinking mode

| Benchmark | gemma3-270m-it | Qwen3-0.6B (think) | LittleLamb-0.3B (think) |
| --- | --- | --- | --- |
| HLE | 4.00 | 5.65 | 6.12 |
| GPQA Diamond | 21.21 | 29.59 | 28.18 |
| MMLU-Pro | 6.23 | 38.27 | 31.21 |

Non-thinking mode

| Benchmark | gemma3-270m-it | Qwen3-0.6B (no think) | LittleLamb-0.3B (no think) |
| --- | --- | --- | --- |
| HLE | 4.00 | 4.54 | 5.37 |
| GPQA Diamond | 21.21 | 27.77 | 24.04 |
| MMLU-Pro | 6.23 | 25.72 | 25.11 |

[Figures: Intelligence (Thinking) and Intelligence (No-Thinking) comparison charts]

Quantitative Results (Inference Performance)

Metrics reported

  • System Output Throughput (higher is better): Mean output tokens per second across all concurrent requests over the benchmarking phase.
  • End-to-End Latency per Query (lower is better): Median end-to-end response time for each query from the time the query is sent.
  • Output Speed per Query (higher is better): Median output tokens per second after the first token is received for each query.
  • Time to first token (TTFT) (lower is better): Median time from sending the query until the first token is received.
  • Estimated Peak Memory Usage (lower is better): KV-cache utilization is monitored during the phase and memory usage is estimated as $\text{model\_weights}_{\text{GB}} + \text{kv\_cache\_usage}_{\text{pct}} \times (\text{nvml\_used}_{\text{GB}} - \text{model\_weights}_{\text{GB}})$ (see the sketch after this list).
  • Model weights (lower is better): On-disk size of the model weights.
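A direct transcription of the memory estimate above, with illustrative (made-up) numbers:

def estimated_peak_memory_gb(model_weights_gb, kv_cache_usage, nvml_used_gb):
    """Weights plus the KV-cache share of the remaining NVML-reported usage.
    kv_cache_usage is a fraction in [0, 1]."""
    return model_weights_gb + kv_cache_usage * (nvml_used_gb - model_weights_gb)

# e.g. 0.6 GB of weights, 40% KV-cache utilization, 8 GB reported by NVML
print(estimated_peak_memory_gb(0.6, 0.40, 8.0))  # 3.56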


Performance evaluation conditions

Our performance evaluation follows the spirit of Artificial Analysis.

  • Inference library: vLLM 0.18.0
  • Monitoring libraries: GuideLLM 0.6.0, nvidia-ml-py 13.590.48
  • Hardware: 1× NVIDIA L4 GPU
  • Conditions: concurrency=16
  • Phase duration: Each phase lasts 3 minutes (excluding ramp-up and cool-down periods).
  • Workload shape: 1,000 input tokens and 1,000 output tokens per query.
  • Streaming: Benchmarking is conducted with streaming enabled.

Summary of improvements: LittleLamb shows a slight performance improvement over the original Qwen model. This is expected: for models this small, VRAM usage is dominated by the KV cache rather than by the model weights, so halving the weights shifts the memory profile only modestly.

[Figure: Inference performance comparison charts]


Languages

  • Primary languages: English and Spanish (supported for product use cases).

Intended Use

Recommended Use Cases

Aligned with Qwen3-0.6B use cases, with the benefit of a smaller footprint suitable for edge and on-device deployment:

  • On-device and edge inference where memory and compute are constrained
  • Reasoning tasks with configurable thinking/non-thinking modes
  • Bilingual applications (English and Spanish)
  • Chatbots and virtual assistants in resource-constrained environments
  • General knowledge, math, and science question answering

Out-of-Scope Uses

  • Harmful, illegal, or deceptive content generation
  • Impersonation of real individuals without consent
  • High-risk decision-making without human oversight
  • Surveillance or tracking of individuals
  • Any use that violates applicable laws or regulations

Safety & Limitations

Known Limitations

  • Model scale: At ~0.3B parameters, this is an ultra-compact model. Several frontier-scale benchmarks (GDPval-AA, Terminal-Bench Hard, AA-LCR, CritPt) produce no discriminative signal at this model size, as the base Qwen3-0.6B itself scores near zero on them.
  • Thinking mode: Performance differs substantially between thinking and non-thinking modes across benchmarks. Users should evaluate both modes for their specific use case.

Recommendations

  • Use human oversight for critical applications
  • Perform task-specific evaluation prior to deployment
  • Test both thinking and non-thinking modes for your use case

Model Information

| Field | Value |
| --- | --- |
| Model name | LittleLamb |
| Based on | Qwen/Qwen3-0.6B |
| Version | 2604 |
| Release date | 28/04/2026 |
| Developed by | Multiverse Computing |
| License | Apache 2.0 |
| Contact | business@multiversecomputing.com |

Citation

If you use this model, please cite the base model and this variant:

@misc{qwen3technicalreport,
  title         = {Qwen3 Technical Report},
  author        = {Qwen Team},
  year          = {2025},
  eprint        = {2505.09388},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CL},
  url           = {https://arxiv.org/abs/2505.09388}
}
@misc{littlelamb,
  title  = {LittleLamb: Compressed Qwen3-0.6B via CompactifAI},
  author = {Multiverse Computing},
  year   = {2026},
  url    = {https://huggingface.co/MultiverseComputingCAI/LittleLamb},
  note   = {Model developed based on Qwen/Qwen3-0.6B using CompactifAI technology}
}

Built by Multiverse Computing · Report an issue · Discord
