---
license: cc-by-nc-nd-3.0
tags:
- mlx
---
# voxmenthe/SFR-Iterative-DPO-LLaMA-3-8B-R-unquantized
The model [voxmenthe/SFR-Iterative-DPO-LLaMA-3-8B-R-unquantized](https://huggingface.co/voxmenthe/SFR-Iterative-DPO-LLaMA-3-8B-R-unquantized) was converted to MLX format from [Salesforce/SFR-Iterative-DPO-LLaMA-3-8B-R](https://huggingface.co/Salesforce/SFR-Iterative-DPO-LLaMA-3-8B-R) using mlx-lm version **0.13.0**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download the model and tokenizer from the Hugging Face Hub (cached locally after the first run)
model, tokenizer = load("voxmenthe/SFR-Iterative-DPO-LLaMA-3-8B-R-unquantized")

# Generate a completion; verbose=True streams tokens to stdout as they are produced
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
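Since the base model is an instruction-tuned Llama-3 variant, a bare prompt like `"hello"` will often work better when wrapped in the Llama-3 chat format. The safest route is `tokenizer.apply_chat_template` where the tokenizer provides one; as an illustration, here is a minimal sketch that builds the prompt by hand, assuming the model uses the stock Llama-3 special tokens (an assumption inherited from the base model, not verified against this conversion):

```python
def build_llama3_prompt(user_message: str) -> str:
    """Wrap a single user message in the standard Llama-3 chat format.

    Assumes the stock Llama-3 special tokens; prefer
    tokenizer.apply_chat_template when available.
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

# The resulting string can be passed as the prompt= argument to generate()
prompt = build_llama3_prompt("hello")
```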