Instructions to use Edaizi/EvolveR with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
## Libraries

### Transformers

How to use Edaizi/EvolveR with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Edaizi/EvolveR")
```

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("Edaizi/EvolveR", dtype="auto")
```

## Notebooks
- Google Colab
- Kaggle
## Local Apps

### vLLM

How to use Edaizi/EvolveR with vLLM:

Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Edaizi/EvolveR"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Edaizi/EvolveR",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker:

```shell
docker model run hf.co/Edaizi/EvolveR
```
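The curl call above can also be issued from Python. Below is a minimal sketch using only the standard library; the URL, prompt, and sampling parameters simply mirror the curl example, and sending the request assumes a vLLM server is already running:

```python
import json
from urllib import request

def build_completion_request(base_url, model, prompt,
                             max_tokens=512, temperature=0.5):
    """Build an OpenAI-compatible /v1/completions POST request object."""
    payload = {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }
    return request.Request(
        f"{base_url}/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_completion_request("http://localhost:8000",
                               "Edaizi/EvolveR", "Once upon a time,")

# Sending the request requires the vLLM server started above:
# with request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["text"])
```

The same request shape works against the SGLang server below; only the port changes.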
### SGLang

How to use Edaizi/EvolveR with SGLang:

Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Edaizi/EvolveR" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Edaizi/EvolveR",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Edaizi/EvolveR" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Edaizi/EvolveR",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

### Docker Model Runner
How to use Edaizi/EvolveR with Docker Model Runner:

```shell
docker model run hf.co/Edaizi/EvolveR
```
# EvolveR
EvolveR is a framework designed to enable LLM agents to self-improve through a complete, closed-loop experience lifecycle. This repository contains the model weights introduced in the paper EvolveR: Self-Evolving LLM Agents through an Experience-Driven Lifecycle.
## Resources
- Paper: [EvolveR: Self-Evolving LLM Agents through an Experience-Driven Lifecycle](https://arxiv.org/abs/2510.16079)
- Code: https://github.com/Edaizi/EvolveR
## Description
Current Large Language Model (LLM) agents show strong performance in tool use but often lack the capability to systematically learn from their own experiences. EvolveR addresses this by introducing a lifecycle comprising:
- Offline Self-Distillation: Synthesizing interaction trajectories into a structured repository of abstract, reusable strategic principles.
- Online Interaction: The agent interacts with tasks guided by retrieved distilled principles, which steer its decision-making while it accumulates new behavioral trajectories.
This loop employs a policy reinforcement mechanism to iteratively update the agent based on its performance.
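The lifecycle described above can be sketched as a simple loop. All names below are illustrative stand-ins, not the actual EvolveR API; the real pipeline uses an LLM for distillation, retrieval, and the policy update:

```python
# Illustrative sketch of an experience-driven lifecycle (stubbed functions).

def distill(trajectories):
    """Offline self-distillation: compress raw trajectories into
    abstract, reusable strategic principles (stubbed)."""
    return {t["task"]: f"principle for {t['task']}" for t in trajectories}

def retrieve(principles, task):
    """Fetch the distilled principles relevant to the current task."""
    return [p for key, p in principles.items() if key == task]

def interact(task, guidance):
    """Online interaction: act on the task guided by retrieved
    principles, recording a new behavioral trajectory."""
    return {"task": task, "guidance": guidance, "reward": 1.0}

# One iteration of the closed loop:
experience = [{"task": "web_search"}, {"task": "calculator"}]
principles = distill(experience)                           # offline phase
trajectory = interact("web_search",
                      retrieve(principles, "web_search"))  # online phase
experience.append(trajectory)  # feeds the next distillation round
```

In the paper's full loop, the recorded rewards would then drive the policy reinforcement step that updates the agent between iterations.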
## Citation
```bibtex
@article{wu2025evolver,
  title={EvolveR: Self-Evolving LLM Agents through an Experience-Driven Lifecycle},
  author={Wu, Rong and Wang, Xiaoman and Mei, Jianbiao and Cai, Pinlong and Fu, Daocheng and Yang, Cheng and Wen, Licheng and Yang, Xuemeng and Shen, Yufan and Wang, Yuxin and Shi, Botian},
  journal={arXiv preprint arXiv:2510.16079},
  year={2025}
}
```