luow-amd haoyang-amd committed on
Commit c429a23 · verified · 1 Parent(s): 759308f

Update README.md (#2)

- Update README.md (116a0a1da04f0ffe90ff9c7b4f8a3c6f7cc9e812)

Co-authored-by: haoyanli <[email protected]>

Files changed (1)
  1. README.md +89 -5
README.md CHANGED
@@ -1,5 +1,89 @@
- ---
- license: other
- license_name: deepseek
- license_link: https://github.com/deepseek-ai/DeepSeek-MoE/blob/main/LICENSE-MODEL
- ---
+ ---
+ license: other
+ license_name: deepseek
+ license_link: https://github.com/deepseek-ai/DeepSeek-MoE/blob/main/LICENSE-MODEL
+ ---
+
+ # deepseek-moe-16b-chat-FP8-KV
+ - ## Introduction
+ This model was created by applying [Quark](https://quark.docs.amd.com/latest/index.html) with calibration samples from the Pile dataset.
+ - ## Quantization Strategy
+ - ***Quantized Layers***: All linear layers excluding "lm_head", "*gate"
+ - ***Weight***: FP8 symmetric per-tensor
+ - ***Activation***: FP8 symmetric per-tensor
+ - ***KV Cache***: FP8 symmetric per-tensor
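+
+ The snippet below is a minimal, illustrative sketch of FP8-E4M3 symmetric per-tensor (fake-)quantization in plain PyTorch. It is not Quark's implementation; the constant 448.0 is simply the largest finite value representable in `torch.float8_e4m3fn`.
+ ```python
+ # Illustrative only: FP8-E4M3 symmetric per-tensor fake-quantization (not Quark code).
+ import torch
+
+ FP8_E4M3_MAX = 448.0  # largest finite value of torch.float8_e4m3fn
+
+ def fp8_per_tensor_quant(x: torch.Tensor):
+     """Return the FP8 tensor and its single symmetric scale."""
+     scale = x.abs().max().clamp(min=1e-12) / FP8_E4M3_MAX            # one scale per tensor
+     q = (x / scale).clamp(-FP8_E4M3_MAX, FP8_E4M3_MAX).to(torch.float8_e4m3fn)
+     return q, scale
+
+ def fp8_dequant(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
+     return q.to(torch.float32) * scale
+
+ w = torch.randn(4096, 4096)
+ q, s = fp8_per_tensor_quant(w)
+ print("max abs error:", (w - fp8_dequant(q, s)).abs().max().item())
+ ```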
+ - ## Quick Start
+ 1. [Download and install Quark](https://quark.docs.amd.com/latest/install.html)
+ 2. Run the quantization script in the example folder using the following command line:
+ ```sh
+ # Set MODEL_DIR to a local checkpoint folder or to deepseek-ai/deepseek-moe-16b-chat
+ export MODEL_DIR="[local model checkpoint folder]"
+ # Single GPU
+ python3 quantize_quark.py \
+        --model_dir $MODEL_DIR \
+        --output_dir deepseek-moe-16b-chat-FP8-KV \
+        --quant_scheme w_fp8_a_fp8 \
+        --kv_cache_dtype fp8 \
+        --num_calib_data 128 \
+        --model_export quark_safetensors
+ # If the model is too large for a single GPU, use multiple GPUs instead.
+ python3 quantize_quark.py \
+        --model_dir $MODEL_DIR \
+        --output_dir deepseek-moe-16b-chat-FP8-KV \
+        --quant_scheme w_fp8_a_fp8 \
+        --kv_cache_dtype fp8 \
+        --num_calib_data 128 \
+        --model_export quark_safetensors \
+        --multi_gpu
+ ```
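+
+ After quantization, a quick sanity check (a hypothetical snippet, not part of the Quark workflow; the shard filename below is an assumption and may differ) is to list a few tensor names and dtypes in the exported checkpoint:
+ ```python
+ # Hypothetical sanity check: inspect the exported quark_safetensors checkpoint.
+ from safetensors import safe_open
+
+ ckpt = "deepseek-moe-16b-chat-FP8-KV/model.safetensors"  # assumed filename; large exports may be sharded
+ with safe_open(ckpt, framework="pt") as f:
+     for name in list(f.keys())[:10]:          # first few tensors only
+         t = f.get_tensor(name)
+         print(name, t.dtype, tuple(t.shape))
+ ```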
+ ## Deployment
+ Quark has its own export format, and FP8 models quantized with it can be deployed efficiently through the vLLM backend (vLLM-compatible).
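+
+ A minimal serving sketch with vLLM is shown below; the exact flags (and whether `kv_cache_dtype` must be set explicitly) depend on your vLLM version and hardware, and the model path is only an example of where the exported folder might live.
+ ```python
+ # Illustrative vLLM usage; flags and model path are assumptions, check your vLLM version's docs.
+ from vllm import LLM, SamplingParams
+
+ llm = LLM(
+     model="deepseek-moe-16b-chat-FP8-KV",  # local path to the exported folder (example)
+     kv_cache_dtype="fp8",                  # serve the KV cache in FP8
+     trust_remote_code=True,
+ )
+ outputs = llm.generate(["What is a mixture-of-experts model?"],
+                        SamplingParams(max_tokens=64))
+ print(outputs[0].outputs[0].text)
+ ```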
+ ## Evaluation
+ Quark currently uses perplexity (PPL) as the evaluation metric for measuring accuracy loss before and after quantization. The specific PPL algorithm can be found in quantize_quark.py.
+ The quantization evaluation is conducted in pseudo-quantization mode, which may differ slightly from the accuracy of actual quantized inference. These results are provided for reference only.
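+
+ For reference, a generic wikitext2 perplexity loop of the kind commonly used for such comparisons is sketched below; it is not the exact algorithm in quantize_quark.py, and the 2048-token window length is an assumption.
+ ```python
+ # Generic sliding-window perplexity on wikitext2 (not the exact quantize_quark.py algorithm).
+ import torch
+ from datasets import load_dataset
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "deepseek-ai/deepseek-moe-16b-chat"  # baseline; point at the quantized model to compare
+ tok = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16,
+                                              device_map="auto", trust_remote_code=True)
+
+ text = "\n\n".join(load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"])
+ ids = tok(text, return_tensors="pt").input_ids.to(model.device)
+
+ seqlen, nlls = 2048, []
+ for i in range(0, ids.size(1) - seqlen, seqlen):
+     chunk = ids[:, i:i + seqlen]
+     with torch.no_grad():
+         loss = model(chunk, labels=chunk).loss  # mean NLL over the window
+     nlls.append(loss.float() * seqlen)
+ print("PPL:", torch.exp(torch.stack(nlls).sum() / (len(nlls) * seqlen)).item())
+ ```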
+ #### Evaluation scores
+ <table>
+ <tr>
+   <td><strong>Benchmark</strong>
+   </td>
+   <td><strong>deepseek-moe-16b-chat</strong>
+   </td>
+   <td><strong>deepseek-moe-16b-chat-FP8-KV (this model)</strong>
+   </td>
+ </tr>
+ <tr>
+   <td>Perplexity-wikitext2
+   </td>
+   <td>7.3568
+   </td>
+   <td>7.3929
+   </td>
+ </tr>
+ </table>
+
+ #### License
+ Copyright (c) 2018-2024 Advanced Micro Devices, Inc. All Rights Reserved.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.