gemma-4-E2B-it-Uncensored-MAX
gemma-4-E2B-it-Uncensored-MAX is an uncensored evolution built on top of google/gemma-4-E2B-it. This model applies advanced refusal direction analysis and abliteration-based training strategies to significantly reduce internal refusal behaviors while preserving the reasoning and instruction-following strengths of the original architecture. The result is a powerful E2B parameter language model optimized for detailed responses and improved instruction adherence.
This model is materialized for research and learning purposes only. The model has reduced internal refusal behaviors, and any content generated by it is used at the user’s own risk. The authors and hosting page disclaim any liability for content generated by this model. Users are responsible for ensuring that the model is used in a safe, ethical, and lawful manner.
Key Highlights
- Advanced Refusal Direction Analysis: Uses targeted activation analysis to identify and mitigate refusal directions within the model’s latent space.
- Uncensored MAX Training: Fine-tuned to significantly reduce refusal patterns while maintaining coherent and detailed outputs.
- E2B Parameter Architecture: Built on gemma-4-E2B-it, offering efficient reasoning and lightweight deployment.
- Improved Instruction Adherence: Optimized to follow complex prompts with minimal unnecessary refusals.
- High-Capability Deployment: Suitable for advanced research experimentation and resource-efficient inference setups.
Quick Start with Transformers
pip install transformers==5.5.3
# or
pip install git+https://github.com/huggingface/transformers.git
from transformers import Gemma4ForConditionalGeneration, AutoProcessor
import torch
model = Gemma4ForConditionalGeneration.from_pretrained(
"prithivMLmods/gemma-4-E2B-it-Uncensored-MAX",
torch_dtype="auto",
device_map="auto"
)
processor = AutoProcessor.from_pretrained(
"prithivMLmods/gemma-4-E2B-it-Uncensored-MAX"
)
messages = [
{
"role": "user",
"content": [
{"type": "text", "text": "Explain how transformer models work in simple terms."}
],
}
]
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
inputs = processor(
text=[text],
padding=True,
return_tensors="pt"
).to("cuda")
generated_ids = model.generate(**inputs, max_new_tokens=256)
generated_ids_trimmed = [
out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed,
skip_special_tokens=True,
clean_up_tokenization_spaces=False
)
print(output_text)
Intended Use
- Alignment & Refusal Research: Studying refusal behaviors and activation-level modifications.
- Red-Teaming Experiments: Evaluating robustness across adversarial or edge-case prompts.
- Efficient Local AI Deployment: Running lightweight instruction models on modest hardware.
- Research Prototyping: Experimentation with transformer architectures.
Limitations & Risks
Important Note: This model intentionally reduces built-in refusal mechanisms.
- Sensitive Output Possibility: The model may generate controversial or explicit responses depending on prompts.
- User Responsibility: Outputs should be handled responsibly and within legal and ethical boundaries.
- Compute Requirements: Lower than larger variants, but still benefits from GPU acceleration for optimal performance.
Dataset & Acknowledgements
- Uncensor any LLM with Abliteration – by Maxime Labonne
- harmful_behaviors and harmless_alpaca – by Maxime Labonne
- Remove Refusals with Transformers (a proof-of-concept implementation to remove refusals from an LLM without using TransformerLens) – by Sumandora
- LLM-LAT/harmful-dataset – by LLM Latent Adversarial Training
- Downloads last month
- 146
