---
library_name: transformers
license: gemma
base_model:
- google/gemma-2-9b-it
---

# This model has been xMADified!

This repository contains [`google/gemma-2-9b-it`](https://huggingface.co/google/gemma-2-9b-it) quantized from 16-bit floats to 4-bit integers, using xMAD.ai proprietary technology.

# Why should I use this model?

1. **Accuracy:** This xMADified model is the *best* quantized version of the [`google/gemma-2-9b-it`](https://huggingface.co/google/gemma-2-9b-it) model (only about 8 GB). See _Table 1_ below for model quality benchmarks.

2. **Memory-efficiency:** The full-precision model is around 18.5 GB, while this xMADified model is only around 8 GB, making it feasible to run on a 12 GB GPU (see the rough size sketch after this list).

3. **Fine-tuning**: These models are fine-tunable on the same reduced hardware (a 12 GB GPU) in just three clicks. Watch our product demo [here](https://www.youtube.com/watch?v=S0wX32kT90s&list=TLGGL9fvmJ-d4xsxODEwMjAyNA).
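
As a rough illustration of where the memory savings in point 2 come from, the sketch below estimates weight storage at 16-bit versus 4-bit precision. The parameter count is approximate, and the quoted 8 GB checkpoint size also includes quantization metadata (per-group scales and zero points) and any tensors kept at higher precision, so treat these as ballpark figures only.

```python
# Back-of-the-envelope weight-storage estimate (assumes ~9.2B parameters).
num_params = 9.2e9

fp16_gb = num_params * 2 / 1e9    # 2 bytes per weight   -> ~18.4 GB
int4_gb = num_params * 0.5 / 1e9  # 0.5 bytes per weight -> ~4.6 GB before overhead

print(f"16-bit weights: ~{fp16_gb:.1f} GB")
print(f"4-bit weights:  ~{int4_gb:.1f} GB (plus quantization metadata)")
```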


## Table 1: xMAD vs. Hugging Quants

| Model | MMLU | Arc Challenge | Arc Easy | LAMBADA Standard | LAMBADA OpenAI | PIQA | WinoGrande |
|---|---|---|---|---|---|---|---|
| [xmadai/gemma-2-9b-it-xMADai-INT4](https://huggingface.co/xmadai/gemma-2-9b-it-xMADai-INT4) (this model) | **71.17** | **62.37** | **85.61** | **70.60** | **72.15** | **81.50** | **75.06** |
| [hugging-quants/gemma-2-9b-it-AWQ-INT4](https://huggingface.co/hugging-quants/gemma-2-9b-it-AWQ-INT4) | 71.04 | 61.77 | 85.14 | 69.16 | 70.68 | 80.41 | 75.06 |

# How to Run Model

Loading the checkpoint of this xMADified model requires around 8 GB of VRAM, so it can be run efficiently on a 12 GB GPU.
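
Before loading, you can optionally confirm that enough free GPU memory is available. The snippet below is a minimal sketch using PyTorch's CUDA memory query; the ~8 GB threshold simply mirrors the checkpoint size quoted above.

```python
import torch

# Query free and total memory (in bytes) on the current CUDA device.
free_bytes, total_bytes = torch.cuda.mem_get_info()
free_gb = free_bytes / 1e9

# The 4-bit checkpoint needs roughly 8 GB of VRAM just to load.
if free_gb < 8:
    print(f"Warning: only {free_gb:.1f} GB free; loading may fail or spill to CPU.")
else:
    print(f"{free_gb:.1f} GB free - enough to load the xMADified checkpoint.")
```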

**Package prerequisites**:

1. Run the following commands to install the required packages.
```bash
pip install torch==2.4.0  # If you have CUDA 11.8, run instead: pip install torch==2.4.0 --index-url https://download.pytorch.org/whl/cu118
pip install transformers accelerate optimum
pip install -vvv --no-build-isolation "git+https://github.com/PanQiWei/[email protected]"
```
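
After installation, a quick sanity check such as the one below (a minimal sketch, not part of the original instructions) confirms that the key packages import and that CUDA is visible:

```python
import torch
import transformers
import auto_gptq  # provided by the AutoGPTQ install above

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("transformers:", transformers.__version__)
print("auto_gptq:", getattr(auto_gptq, "__version__", "installed"))
```
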
**Sample Inference Code**
```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_id = "xmadai/gemma-2-9b-it-xMADai-INT4"

# Gemma-2's chat template does not accept a "system" role, so fold any
# system instruction into the user turn.
prompt = [
    {"role": "user", "content": "You are a helpful assistant that responds as a pirate. What's Deep Learning?"},
]

# Load the tokenizer and build the chat-formatted input tensors on the GPU.
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
inputs = tokenizer.apply_chat_template(
    prompt,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,
).to("cuda")

# Load the 4-bit GPTQ-quantized checkpoint and generate a response.
model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    device_map='auto',
    trust_remote_code=True,
)
outputs = model.generate(**inputs, do_sample=True, max_new_tokens=1024)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```
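
Because this is a GPTQ-format checkpoint, recent `transformers` releases (with `optimum` and AutoGPTQ installed, as above) can usually load it directly through `AutoModelForCausalLM` as well. The sketch below is an alternative path under that assumption, not the officially tested flow above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "xmadai/gemma-2-9b-it-xMADai-INT4"

# transformers reads the GPTQ quantization_config from the checkpoint and
# dispatches the quantized layers across the available devices.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.float16,
)

inputs = tokenizer("What's Deep Learning?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```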

# Citation

If you found this model useful, please cite our research paper.
```
@article{zhang2024leanquant,
  title={LeanQuant: Accurate and Scalable Large Language Model Quantization with Loss-error-aware Grid},
  author={Zhang, Tianyi and Shrivastava, Anshumali},
  journal={arXiv preprint arXiv:2407.10032},
  year={2024},
  url={https://arxiv.org/abs/2407.10032},
}
```

# Contact Us
For additional xMADified models, access to fine-tuning, and general questions, please contact us at [email protected] and join our waiting list.