Instructions to use OpenGVLab/ASMv2 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use OpenGVLab/ASMv2 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="OpenGVLab/ASMv2")

# Load model directly
from transformers import AutoProcessor, AutoModelForCausalLM

processor = AutoProcessor.from_pretrained("OpenGVLab/ASMv2")
model = AutoModelForCausalLM.from_pretrained("OpenGVLab/ASMv2")
```
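Once loaded, generation follows the usual Transformers pattern. The sketch below is illustrative rather than the model's documented API: the prompt is made up, ASMv2 is multimodal (real use typically passes an image to the processor alongside the text), and depending on the checkpoint you may need `trust_remote_code=True` when loading.

```python
# Minimal text-only generation sketch (hypothetical prompt; ASMv2 normally
# takes an image as well, via the processor's `images` argument).
inputs = processor(text="Describe the relations between the objects.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```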
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use OpenGVLab/ASMv2 with vLLM:
Install from pip and serve the model:
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "OpenGVLab/ASMv2"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "OpenGVLab/ASMv2",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```

Use Docker:
```sh
# Deploy with Docker, using vLLM's OpenAI-compatible server image:
docker run --runtime nvidia --gpus all \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HUGGING_FACE_HUB_TOKEN=<secret>" \
    -p 8000:8000 \
    --ipc=host \
    vllm/vllm-openai:latest \
    --model "OpenGVLab/ASMv2"
```
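With either launch method the server speaks the OpenAI API, so it can also be called from the official `openai` Python client. A minimal sketch, assuming `pip install openai` and the default port used above:

```python
from openai import OpenAI

# Point the OpenAI client at the local vLLM server; any non-empty key works.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="OpenGVLab/ASMv2",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```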
- SGLang
How to use OpenGVLab/ASMv2 with SGLang:
Install from pip and serve the model:
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "OpenGVLab/ASMv2" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "OpenGVLab/ASMv2",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```

Use Docker images:
```sh
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "OpenGVLab/ASMv2" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "OpenGVLab/ASMv2",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```
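The same endpoint can also be queried from Python with `requests`. A short sketch, assuming the server started by either command above is listening on port 30000:

```python
import requests

# Query the SGLang server's OpenAI-compatible completions endpoint.
resp = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "OpenGVLab/ASMv2",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
print(resp.json()["choices"][0]["text"])
```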
- Docker Model Runner
How to use OpenGVLab/ASMv2 with Docker Model Runner:
```sh
docker model run hf.co/OpenGVLab/ASMv2
```
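Docker Model Runner also exposes an OpenAI-compatible HTTP API. The port and path below are assumptions based on Docker Desktop defaults (TCP access enabled via `docker desktop enable model-runner --tcp 12434`) and may differ on your installation:

```python
import requests

# Assumed Docker Desktop default: Model Runner on host port 12434 with
# TCP access enabled; the endpoint path may vary by Docker version.
resp = requests.post(
    "http://localhost:12434/engines/v1/chat/completions",
    json={
        "model": "hf.co/OpenGVLab/ASMv2",
        "messages": [{"role": "user", "content": "Once upon a time,"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```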
ASMv2 Model Card
Model details
Model type: ASMv2 is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on multimodal instruction-following data. It integrates the Relation Conversation (ReC) ability while maintaining powerful general capabilities. This model is also endowed with grounding and referring capabilities, exhibiting state-of-the-art performance on region-level tasks, and can be naturally adapted to the Scene Graph Generation task in an open-ended manner.
Model date: ASMv2 was trained in January 2024.
Paper or resources for more information: https://github.com/OpenGVLab/all-seeing
License
ASMv2 is open-sourced under the Apache License 2.0.
Where to send questions or comments about the model: https://github.com/OpenGVLab/all-seeing/issues
Intended use
Primary intended uses: The primary use of ASMv2 is research on large multimodal models and chatbots.
Primary intended users: The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
Training dataset
The pretraining phase employs 5M filtered samples from CC12M, 10M filtered samples from AS-1B, and 15M filtered samples from GRiT.
The instruction-tuning phase employs 4M samples collected from a variety of sources, including image-level datasets.
See the all-seeing repository (https://github.com/OpenGVLab/all-seeing) for more details.
Evaluation dataset
A collection of 20 benchmarks, including 5 academic VQA benchmarks, 7 multimodal benchmarks specifically proposed for instruction-following LMMs, 3 referring expression comprehension benchmarks, 2 region captioning benchmarks, 1 referring question answering benchmark, 1 scene graph generation benchmark, and 1 relation comprehension benchmark.