Update README.md
README.md
CHANGED
@@ -24,7 +24,7 @@ C8888 "8" 888 888 " d88888 d88 88b 888

PROUDLY PRESENTS
```
-# SorcererLM-8x22b-
+# SorcererLM-8x22b-exl2-longcal

Quantized using 115 rows of 8192 tokens from the default ExLlamav2-calibration dataset.

@@ -46,6 +46,8 @@ Original model README below.

# SorcererLM-8x22b-bf16

+<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/6569a4ed2419be6072890cf8/L_uGojVkNUsK6QHvWgs9o.mpga"></audio>
+
Oh boy, here we go. Low-rank (`r=16, alpha=32`) 16bit-LoRA on top of [WizardLM-2-8x22B](https://huggingface.co/alpindale/WizardLM-2-8x22B), trained on 2 epochs of (cleaned & deduped) c2-logs. As far as I can tell, this is an upgrade from `WizardLM-2-8x22B` for RP purposes.

Alongside this ready-to-use release I'm also releasing the LoRA itself as well as the earlier `epoch1`-checkpoint of the LoRA.
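
For context on the calibration line in the first hunk: a minimal sketch of how a conversion with 115 rows of 8192 tokens might be invoked via ExLlamaV2's `convert.py`. The paths, the target bitrate, and the exact flag names are assumptions, not taken from this release; check `python convert.py -h` in your ExLlamaV2 checkout.

```python
# Hypothetical ExLlamaV2 conversion call mirroring the calibration settings
# mentioned in the README (115 rows x 8192 tokens, default calibration data).
# Paths, bitrate and flag names are assumptions, not documented in this release.
import subprocess

subprocess.run(
    [
        "python", "convert.py",
        "-i", "/models/SorcererLM-8x22b-bf16",    # hypothetical input model dir
        "-o", "/tmp/exl2-work",                   # hypothetical working dir
        "-cf", "/models/SorcererLM-8x22b-exl2",   # hypothetical compiled output dir
        "-b", "4.0",                              # example bpw, not stated in the README
        "-r", "115",                              # calibration rows
        "-l", "8192",                             # tokens per calibration row
    ],
    check=True,
)
```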
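And for the LoRA hyperparameters mentioned in the second hunk (`r=16, alpha=32`), a minimal sketch of an equivalent PEFT `LoraConfig`. Target modules, dropout, and task type are assumptions; the actual training setup isn't documented here.

```python
# Minimal PEFT LoraConfig sketch matching the stated hyperparameters (r=16, alpha=32).
# Everything besides r and lora_alpha is an assumption, not taken from this release.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,               # low-rank dimension stated in the README
    lora_alpha=32,      # alpha stated in the README
    lora_dropout=0.0,   # assumed
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
)
```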