---
base_model: NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO
---
# Nous-Hermes-2-Mixtral-8x7B-DPO-HQQ

This model is part of a series of HQQ quantization tests. I make no claims about the performance of this model, and it may well change or be deleted.

This is an extreme example of low-bit quantization.

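Since the card doesn't document the exact HQQ settings used, here is a minimal sketch of how a checkpoint like this could be produced. The `nbits=2` / `group_size=16` config below is an assumption chosen to illustrate an aggressive setting, not the actual recipe used for this upload.

```python
import torch

from hqq.core.quantize import BaseQuantizeConfig
from hqq.engine.hf import HQQModelForCausalLM

# Load the full-precision base model (large download; needs significant memory).
model = HQQModelForCausalLM.from_pretrained(
    "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO",
    torch_dtype=torch.float16,
)

# Hypothetical "extreme" config: 2-bit weights with small quantization groups.
quant_config = BaseQuantizeConfig(nbits=2, group_size=16)
model.quantize_model(quant_config=quant_config)
```

To load the pre-quantized checkpoint from the Hub: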
```python
import torch

from hqq.engine.hf import HQQModelForCausalLM, AutoTokenizer

# Load the tokenizer and the HQQ-quantized model from the Hub.
tokenizer = AutoTokenizer.from_pretrained(
    "macadeliccc/Nous-Hermes-2-Mixtral-8x7B-DPO-HQQ",
    trust_remote_code=True,
)
model = HQQModelForCausalLM.from_pretrained(
    "macadeliccc/Nous-Hermes-2-Mixtral-8x7B-DPO-HQQ",
    torch_dtype=torch.float16,  # half precision for the non-quantized parts
    device_map="auto",          # spread layers across available devices
)
```
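
Once loaded, the model behaves like any other `transformers` causal LM. A minimal generation example, assuming the model loaded as above (the prompt is purely illustrative):

```python
# Tokenize a prompt and generate a short completion.
prompt = "Explain quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```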