---
language:
- en
tags:
- mlx
datasets:
- liuhaotian/LLaVA-Instruct-150K
pipeline_tag: image-to-text
inference: false
arxiv: 2304.08485
---
# mlx-community/llava-1.5-7b-8bit
This model was converted to MLX format from [`llava-hf/llava-1.5-7b-hf`](https://huggingface.co/llava-hf/llava-1.5-7b-hf) using mlx-vlm version **0.0.4**.
Refer to the [original model card](https://huggingface.co/llava-hf/llava-1.5-7b-hf) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model mlx-community/llava-1.5-7b-8bit --max-tokens 100 --temp 0.0
```
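For scripted use, mlx-vlm also provides a Python API. The sketch below is a minimal example assuming the `load`/`generate` helpers exposed by recent mlx-vlm releases; exact argument names and order may differ in version 0.0.4, and the image path and prompt are placeholders you should replace with your own inputs.
```python
# Minimal sketch of the mlx-vlm Python API (argument names/order may vary by version).
from mlx_vlm import load, generate

# Load the 8-bit quantized LLaVA weights and processor from the Hub.
model, processor = load("mlx-community/llava-1.5-7b-8bit")

# Placeholder image and a standard LLaVA-1.5 style prompt.
output = generate(
    model,
    processor,
    image="example.jpg",
    prompt="USER: <image>\nDescribe this image. ASSISTANT:",
    max_tokens=100,
    temp=0.0,
)
print(output)
```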