n1ck-guo committed
Commit 9e092ef · verified · 1 Parent(s): 637e969

Update README.md

Files changed (1): README.md (+3 -3)
README.md CHANGED
@@ -6,7 +6,7 @@ datasets:
 
 ## Model Details
 
-This model is an int4 model with group_size 128 with quantized lm-head of [Qwen/Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) generated by [intel/auto-round](https://github.com/intel/auto-round), auto-round is needed to run this model
+This model is an int4 model with group_size 128 of [Qwen/Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) generated by [intel/auto-round](https://github.com/intel/auto-round), auto-round is needed to run this model
 
 ## How To Use
 
@@ -77,7 +77,7 @@ print(response)
 
 ### Evaluate the model
 
-pip3 install lm-eval==0.4.2
+pip3 install lm-eval==0.4.4
 
 ```bash
 git clone https://github.com/intel/auto-round
@@ -86,7 +86,7 @@ python -m auto_round --model "Intel/Qwen2.5-72B-Instruct-int4-inc" --eval --eval
 ```
 
 | Metric | BF16 | INT4 |
-|:--------------:| :----: | :----: |
+|:-------------- | :----: | :----: |
 | Avg | 0.7582 | 0.7567 |
 | mmlu | 0.8336 | 0.8306 |
 | cmmlu | 0.8722 | 0.8638 |
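
For context on the "auto-round is needed to run this model" note in the changed line above: below is a minimal loading sketch, assuming auto-round's transformers integration via `AutoRoundConfig`; the prompt and generation settings are illustrative and not taken from this commit or the model card.

```python
# Minimal sketch, assuming auto-round's transformers integration.
# The import of AutoRoundConfig registers the auto-round quantization
# format so transformers can load the int4 checkpoint.
from auto_round import AutoRoundConfig  # noqa: F401  (import needed for the auto-round format)
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Intel/Qwen2.5-72B-Instruct-int4-inc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative prompt; any chat-formatted input works the same way.
messages = [{"role": "user", "content": "Give me a short introduction to large language models."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
# Strip the prompt tokens and decode only the generated continuation.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```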