Update README.md
README.md CHANGED
@@ -188,7 +188,7 @@ extra_gated_button_content: Submit
 quantized_by: bartowski
 ---
 
-## Exllama v2 Quantizations of Meta-Llama-3-8B-Instruct
+## Exllama v2 Quantizations of Meta-Llama-3-8B-Instruct with <|eot_id|> set to special=False
 
 Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.19">turboderp's ExLlamaV2 v0.0.19</a> for quantization.
 
@@ -213,18 +213,18 @@ Original model: https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct
 
 | Branch | Bits | lm_head bits | VRAM (4k) | VRAM (8K) | VRAM (16k) | VRAM (32k) | Description |
 | ----- | ---- | ------- | ------ | ------ | ------ | ------ | ------------ |
-| [8_0](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-exl2/tree/8_0) | 8.0 | 8.0 | 10.1 GB | 10.5 GB | 11.5 GB | 13.6 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
-| [6_5](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-exl2/tree/6_5) | 6.5 | 8.0 | 8.9 GB | 9.3 GB | 10.3 GB | 12.4 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
-| [5_0](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-exl2/tree/5_0) | 5.0 | 6.0 | 7.7 GB | 8.1 GB | 9.1 GB | 11.2 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
-| [4_25](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-exl2/tree/4_25) | 4.25 | 6.0 | 7.0 GB | 7.4 GB | 8.4 GB | 10.5 GB | GPTQ equivalent bits per weight, slightly higher quality. |
-| [3_5](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-exl2/tree/3_5) | 3.5 | 6.0 | 6.4 GB | 6.8 GB | 7.8 GB | 9.9 GB | Lower quality, only use if you have to. |
+| [8_0](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-special-eot-false-exl2/tree/8_0) | 8.0 | 8.0 | 10.1 GB | 10.5 GB | 11.5 GB | 13.6 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
+| [6_5](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-special-eot-false-exl2/tree/6_5) | 6.5 | 8.0 | 8.9 GB | 9.3 GB | 10.3 GB | 12.4 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
+| [5_0](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-special-eot-false-exl2/tree/5_0) | 5.0 | 6.0 | 7.7 GB | 8.1 GB | 9.1 GB | 11.2 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
+| [4_25](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-special-eot-false-exl2/tree/4_25) | 4.25 | 6.0 | 7.0 GB | 7.4 GB | 8.4 GB | 10.5 GB | GPTQ equivalent bits per weight, slightly higher quality. |
+| [3_5](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-special-eot-false-exl2/tree/3_5) | 3.5 | 6.0 | 6.4 GB | 6.8 GB | 7.8 GB | 9.9 GB | Lower quality, only use if you have to. |
 
 ## Download instructions
 
 With git:
 
 ```shell
-git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-exl2 Meta-Llama-3-8B-Instruct-exl2-6_5
+git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-special-eot-false-exl2 Meta-Llama-3-8B-Instruct-special-eot-false-exl2-6_5
 ```
 
 With huggingface hub (credit to TheBloke for instructions):
@@ -238,13 +238,13 @@ To download a specific branch, use the `--revision` parameter. For example, to d
 Linux:
 
 ```shell
-huggingface-cli download bartowski/Meta-Llama-3-8B-Instruct-exl2 --revision 6_5 --local-dir Meta-Llama-3-8B-Instruct-exl2-6_5 --local-dir-use-symlinks False
+huggingface-cli download bartowski/Meta-Llama-3-8B-Instruct-special-eot-false-exl2 --revision 6_5 --local-dir Meta-Llama-3-8B-Instruct-special-eot-false-exl2-6_5 --local-dir-use-symlinks False
 ```
 
 Windows (which apparently doesn't like _ in folders sometimes?):
 
 ```shell
-huggingface-cli download bartowski/Meta-Llama-3-8B-Instruct-exl2 --revision 6_5 --local-dir Meta-Llama-3-8B-Instruct-exl2-6.5 --local-dir-use-symlinks False
+huggingface-cli download bartowski/Meta-Llama-3-8B-Instruct-special-eot-false-exl2 --revision 6_5 --local-dir Meta-Llama-3-8B-Instruct-special-eot-false-exl2-6.5 --local-dir-use-symlinks False
 ```
 
 Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
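
The substantive change in this commit is re-tagging `<|eot_id|>` as special=False in the quantized repo. As a way to observe the difference, here is a minimal sketch using `transformers`; the repo name comes from the diff above, and it assumes the repo's tokenizer files are accessible (the model is gated, so a login token may be needed). Everything else is illustrative, not part of the README:

```python
# Sketch: inspect how <|eot_id|> is treated after the special=False change.
# Assumes the tokenizer files in the repo above are accessible (gated model).
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained(
    "bartowski/Meta-Llama-3-8B-Instruct-special-eot-false-exl2"
)

eot_id = tok.convert_tokens_to_ids("<|eot_id|>")

# A token marked special=True is dropped by skip_special_tokens=True;
# with special=False, <|eot_id|> should survive decoding.
print(tok.decode([eot_id], skip_special_tokens=True))
```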
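
Once a branch is downloaded, loading it follows the usual ExLlamaV2 pattern. A minimal sketch against the v0.0.19-era Python API, assuming the 6_5 branch was cloned into the folder name used in the git example above; none of this is from the README itself:

```python
# Sketch: load the 6_5 quant and run a short generation with exllamav2.
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "Meta-Llama-3-8B-Instruct-special-eot-false-exl2-6_5"  # local clone from above
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocate cache as the model loads
model.load_autosplit(cache)               # split weights across visible GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7

print(generator.generate_simple("Hello, my name is", settings, 64))
```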