How to use Serega6678/prototype_joint_trained with PEFT:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model, then attach the fine-tuned PEFT adapter on top of it.
base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(base_model, "Serega6678/prototype_joint_trained")
```

This model is a fine-tuned version of Serega6678/My_script_50pct_LLM_pretraining on the HuggingFaceH4/ultrachat_200k dataset. It achieves the following results on the evaluation set:

- Loss: 1.0049
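Once the adapter is attached, the wrapped model behaves like any other transformers causal LM. A minimal inference sketch continuing from the snippet above, assuming the base model's tokenizer and an illustrative prompt (neither is specified by this card):

```python
from transformers import AutoTokenizer

# Assumption: the adapter reuses the base model's tokenizer.
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

inputs = tokenizer("Explain parameter-efficient fine-tuning in one sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For deployment without the PEFT wrapper, `model.merge_and_unload()` can fold the adapter weights into the base model, where the adapter type supports merging.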
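The fine-tuning data is public on the Hub. A minimal loading sketch; the `train_sft` split name comes from the ultrachat_200k dataset card, not from this model card:

```python
from datasets import load_dataset

# Assumption: the SFT split of ultrachat_200k was used, per that dataset's card.
dataset = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")
print(dataset[0]["messages"][0])  # each example carries a list of chat turns
```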
Training hyperparameters: more information needed.

Training results:
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 1.014 | 1.0 | 907 | 1.0049 |
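For intuition, a cross-entropy loss in nats converts to perplexity via exp(loss), so the 1.0049 validation loss above corresponds to a perplexity of roughly 2.73:

```python
import math

# Perplexity = exp(cross-entropy loss), using the validation loss from the table.
print(math.exp(1.0049))  # ≈ 2.73
```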
Base model: mistralai/Mistral-7B-v0.1