justheuristic committed
Commit 2664145 · Parent(s): 08ba7c1
Update README.md
README.md
CHANGED
@@ -14,7 +14,9 @@ In other words, all of the large weight-matrices are frozen in 8-bit, and you on
![img](https://i.imgur.com/n4XXo1x.png)

__Does 8-bit affect model quality?__ Technically yes, but the effect is negligible in practice. [This notebook measures wikitext test perplexity](https://nbviewer.org/urls/huggingface.co/hivemind/gpt-j-6B-8bit/raw/main/check_perplexity.ipynb), and the result is nigh indistinguishable from the original GPT-J. The quantized model even scores slightly better, but the difference is not statistically significant.
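
If you want to run that kind of check yourself, here is a minimal sliding-window perplexity sketch. It is illustrative rather than the notebook's exact code: it assumes `model` and `tokenizer` are already loaded, and the window/stride values are arbitrary defaults.

```python
# Illustrative sliding-window perplexity on wikitext-2 (not the notebook's exact code).
# Assumes `model` is a causal LM (e.g. the 8-bit GPT-J) and `tokenizer` is its tokenizer.
import torch
from datasets import load_dataset

@torch.no_grad()
def wikitext2_perplexity(model, tokenizer, max_length=1024, stride=512):
    text = "\n\n".join(load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"])
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    device = next(model.parameters()).device

    nll_sum, n_scored_total, prev_end = 0.0, 0, 0
    for begin in range(0, input_ids.size(1), stride):
        end = min(begin + max_length, input_ids.size(1))
        window = input_ids[:, begin:end].to(device)
        targets = window.clone()
        n_scored = end - prev_end
        targets[:, :-n_scored] = -100  # score only tokens not already scored in the previous window
        loss = model(window, labels=targets).loss  # mean NLL over the scored tokens
        nll_sum = nll_sum + loss * n_scored
        n_scored_total += n_scored
        prev_end = end
        if end == input_ids.size(1):
            break
    return torch.exp(nll_sum / n_scored_total)
```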

Our code differs from other 8-bit methods in that we use **8-bit only for storage, and all computations are performed in float16 or float32**. As a result, we can take advantage of nonlinear quantization that fits each individual weight distribution. Such nonlinear quantization does not accelerate inference, but it allows for a much smaller quantization error.
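
Conceptually, each frozen linear layer keeps its weights as 8-bit codes plus a small lookup table fitted to that matrix's own value distribution, and de-quantizes to float16 only for the matmul. The class below is a simplified sketch of that idea (hypothetical name, per-matrix quantile codebook); the actual repo relies on bitsandbytes' block-wise dynamic quantization instead.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Frozen8bitLinear(nn.Module):
    """Sketch of '8-bit for storage, float16 for compute' (not the repo's actual class)."""

    def __init__(self, weight: torch.Tensor, bias: torch.Tensor = None):
        super().__init__()
        flat = weight.detach().float().flatten()
        sample = flat[torch.randint(flat.numel(), (2**16,))]  # subsample: exact quantiles of huge matrices are expensive
        # Nonlinear codebook: 256 values placed at the quantiles of THIS weight matrix.
        codebook = torch.quantile(sample, torch.linspace(0, 1, 256))
        codes = torch.bucketize(weight.detach().float(), codebook).clamp_(0, 255)
        self.register_buffer("codes", codes.to(torch.uint8))          # 1 byte per weight
        self.register_buffer("codebook", codebook.to(torch.float16))  # 256 half-precision values
        self.register_buffer("bias", None if bias is None else bias.detach().half())

    def forward(self, x):
        # De-quantize on the fly: storage stays 8-bit, the matmul runs in float16.
        weight = self.codebook[self.codes.long()]
        return F.linear(x.to(weight.dtype), weight, self.bias)
```

Because the codebook is fitted to each matrix, the 256 levels land where that matrix's values actually concentrate, which is what keeps the error small at 8 bits; block-wise quantization refines this further by fitting statistics to small blocks of values rather than the whole matrix.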

__What about performance?__ Both checkpointing and de-quantization add some overhead, but it's surprisingly manageable. Depending on the GPU and batch size, the quantized model is 1-10% slower than the original model, on top of gradient checkpointing (which itself adds about 30% overhead). In short, this is because block-wise quantization from bitsandbytes is really fast on GPU.
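
The overhead is easy to sanity-check on your own hardware: time a training step for the float16 model, then for the 8-bit model with gradient checkpointing turned on. The harness below is a generic sketch; the models and batch are placeholders, and the 1-10% / 30% figures above come from the maintainers' measurements, not from this code.

```python
import time
import torch

def seconds_per_step(model, batch, n_iters=10):
    """Average wall-clock time of one forward+backward pass on `batch` (which must contain labels)."""
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(n_iters):
        loss = model(**batch).loss
        loss.backward()
        model.zero_grad(set_to_none=True)
    torch.cuda.synchronize()
    return (time.perf_counter() - start) / n_iters

# Placeholder usage -- both models need some trainable parameters (e.g. adapters),
# and you should run one warm-up step before timing:
# baseline = seconds_per_step(model_fp16, batch)
# model_8bit.gradient_checkpointing_enable()   # standard transformers switch; adds the ~30% recompute overhead
# quantized = seconds_per_step(model_8bit, batch)
# print(f"total slowdown vs. plain fp16: {quantized / baseline:.2f}x")
```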