---
tags:
- mlx
---
# mlx-community/Mistral-7B-v0.2-4bit
This model was converted to MLX format from [`alpindale/Mistral-7B-v0.2-hf`](https://huggingface.co/alpindale/Mistral-7B-v0.2-hf) using mlx-lm version **0.4.0**.
Refer to the [original model card](https://huggingface.co/alpindale/Mistral-7B-v0.2-hf) for more details on the model.
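The 4-bit weights were produced with mlx-lm's conversion tooling; a minimal sketch of the command, assuming the `mlx_lm.convert` CLI as shipped around version 0.4.0 (exact flags may differ between releases):

```bash
# Quantize the original Hugging Face weights to 4-bit MLX format
# (sketch; assumes mlx-lm's convert entry point, -q enables quantization)
python -m mlx_lm.convert --hf-path alpindale/Mistral-7B-v0.2-hf -q
```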
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download the quantized weights and tokenizer from the Hugging Face Hub
model, tokenizer = load("mlx-community/Mistral-7B-v0.2-4bit")

# Generate a completion; verbose=True prints the output and generation stats
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
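
The model can also be tried without writing any Python; a sketch, assuming the `mlx_lm.generate` command-line entry point available in this mlx-lm release (flag names may vary by version):

```bash
# Generate text from the quantized model directly from the shell
python -m mlx_lm.generate --model mlx-community/Mistral-7B-v0.2-4bit --prompt "hello"
```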