---
language:
- en
pipeline_tag: image-to-text
inference: false
arxiv: 2304.08485
datasets:
- HuggingFaceH4/llava-instruct-mix-vsft
---
![image/png](/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F6200d0a443eb0913fa2df7cc%2Fq5GXv6Om4Hf2n6IB3e7DQ.png%3C%2Fspan%3E) |
|
# Model Card |
|
HuggingFaceH4/vsft-llava-1.5-7b-hf-trl is a Vision Language Model, created by performing VSFT on the [llava-hf/llava-1.5-7b-hf](https://huggingface.co/llava-hf/llava-1.5-7b-hf) model with 260k image and conversation pairs from the [HuggingFaceH4/llava-instruct-mix-vsft](https://huggingface.co/datasets/HuggingFaceH4/llava-instruct-mix-vsft) dataset. |
|
|
|
Check out our Spaces demo! [![Open in Spaces](https://huggingface.co/datasets/huggingface/badges/resolve/main/open-in-hf-spaces-md-dark.svg)](https://huggingface.co/spaces/HuggingFaceH4/vlm-playground) |

## Model details

**Model type:**
LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data. It is an auto-regressive language model based on the transformer architecture.

**Model date:**
The model was trained on April 11th, 2024.

**Example training script:**
[Train a VLM yourself with our TRL example](https://github.com/huggingface/trl/blob/main/examples/scripts/vsft_llava.py)
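
To get a feel for the training data before launching that script, you can peek at a sample. A minimal sketch, assuming the dataset exposes the chat-style `messages` and `images` columns that the TRL VSFT example expects:

```python
from datasets import load_dataset

# Stream a single example from the training mix rather than downloading all 260k pairs.
dataset = load_dataset("HuggingFaceH4/llava-instruct-mix-vsft", split="train", streaming=True)
example = next(iter(dataset))

print(example["messages"])  # chat-formatted user/assistant turns
print(example["images"])    # the image(s) paired with the conversation
```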

## How to use the model

The model supports multi-image and multi-prompt generation, meaning that you can pass multiple images in your prompt. Make sure to follow the correct prompt template (`USER: xxx\nASSISTANT:`) and to add the token `<image>` at the location where you want to query an image (a multi-image sketch follows the pure `transformers` example below):

### Using `pipeline`:

```python
from transformers import pipeline
from PIL import Image
import requests

model_id = "HuggingFaceH4/vsft-llava-1.5-7b-hf-trl"
pipe = pipeline("image-to-text", model=model_id)
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg"

image = Image.open(requests.get(url, stream=True).raw)
prompt = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: <image>\nWhat does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud\nASSISTANT:"

outputs = pipe(image, prompt=prompt, generate_kwargs={"max_new_tokens": 200})
print(outputs)
>>> [{"generated_text": "\nUSER: What does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud\nASSISTANT: Lava"}]
```

### Using pure `transformers`:

Below is an example script to run generation in `float16` precision on a GPU device:

```python
import requests
from PIL import Image

import torch
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "HuggingFaceH4/vsft-llava-1.5-7b-hf-trl"

prompt = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: <image>\nWhat are these?\nASSISTANT:"
image_file = "http://images.cocodataset.org/val2017/000000039769.jpg"

model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
).to(0)

processor = AutoProcessor.from_pretrained(model_id)

raw_image = Image.open(requests.get(image_file, stream=True).raw)
inputs = processor(text=prompt, images=raw_image, return_tensors="pt").to(0, torch.float16)

output = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(processor.decode(output[0][2:], skip_special_tokens=True))
```
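
The multi-image generation mentioned above works by placing one `<image>` token per image in the prompt and passing the images as a list. A minimal sketch reusing `model` and `processor` from the snippet above (the URLs and question are placeholders; the base model was mostly trained on single-image conversations, so multi-image quality may vary):

```python
import torch
import requests
from PIL import Image

# Two images, two <image> tokens in the same user turn.
urls = [
    "http://images.cocodataset.org/val2017/000000039769.jpg",
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg",
]
images = [Image.open(requests.get(u, stream=True).raw) for u in urls]

prompt = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: <image>\n<image>\nWhat do these two images have in common?\nASSISTANT:"

inputs = processor(text=prompt, images=images, return_tensors="pt").to(0, torch.float16)
output = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(processor.decode(output[0], skip_special_tokens=True))
```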

### Model optimization

#### 4-bit quantization through `bitsandbytes` library

First make sure to install `bitsandbytes` (`pip install bitsandbytes`) and to have access to a CUDA-compatible GPU device. Then simply change the snippet above as follows:

```diff
model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
+   load_in_4bit=True
)
```
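
On recent `transformers` versions, the same effect is expressed more explicitly through a `BitsAndBytesConfig`. A minimal sketch (the `bnb_4bit_compute_dtype` choice is our assumption, picked to match the `float16` snippet above):

```python
import torch
from transformers import BitsAndBytesConfig, LlavaForConditionalGeneration

model_id = "HuggingFaceH4/vsft-llava-1.5-7b-hf-trl"

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,  # assumed compute dtype, matching the fp16 example
)

# Device placement is handled by the quantization backend,
# so no explicit .to(0) call is needed here.
model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    quantization_config=quantization_config,
    low_cpu_mem_usage=True,
)
```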

#### Use Flash-Attention 2 to further speed-up generation

First make sure to install `flash-attn`; refer to the [original repository of Flash Attention](https://github.com/Dao-AILab/flash-attention) for installation instructions. Then simply change the snippet above as follows:

```diff
model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
+   attn_implementation="flash_attention_2"
).to(0)
```
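
Both optimizations can also be combined. A minimal sketch, assuming a CUDA GPU with both `bitsandbytes` and `flash-attn` installed:

```python
import torch
from transformers import BitsAndBytesConfig, LlavaForConditionalGeneration

model_id = "HuggingFaceH4/vsft-llava-1.5-7b-hf-trl"

# 4-bit weights plus Flash-Attention 2 kernels; device placement is handled
# by the quantization backend, so no explicit .to(0) is needed.
model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    attn_implementation="flash_attention_2",
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
)
```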

## License

Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.

## Citation

```
@misc{vonwerra2022trl,
  author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang},
  title = {TRL: Transformer Reinforcement Learning},
  year = {2020},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/huggingface/trl}}
}
```