---
license: apache-2.0
base_model_relation: quantized
quantized_by: Quant-Cartel
base_model: rAIfle/SorcererLM-8x22b-bf16
pipeline_tag: text-generation
tags:
- chat
---
```
  e88 88e                               d8     
 d888 888b  8888 8888  ,"Y88b 888 8e   d88     
C8888 8888D 8888 8888 "8" 888 888 88b d88888   
 Y888 888P  Y888 888P ,ee 888 888 888  888     
  "88 88"    "88 88"  "88 888 888 888  888     
      b                                        
      8b,                                      
 
  e88'Y88                  d8           888    
 d888  'Y  ,"Y88b 888,8,  d88    ,e e,  888    
C8888     "8" 888 888 "  d88888 d88 88b 888    
 Y888  ,d ,ee 888 888     888   888   , 888    
  "88,d88 "88 888 888     888    "YeeP" 888    
                                               
PROUDLY PRESENTS         
```
# SorcererLM-8x22b-exl2-longcal

Quantized using 115 rows of 8192 tokens from the default ExLlamaV2 calibration dataset.

Branches:
- `main` -- `measurement.json`
- `8b8h` -- 8bpw, 8bit lm_head
- `6b6h` -- 6bpw, 6bit lm_head
- `5b6h` -- 5bpw, 6bit lm_head
- `4.5b6h` -- 4.5bpw, 6bit lm_head
- `4b6h` -- 4bpw, 6bit lm_head
- `3b6h` -- 3bpw, 6bit lm_head
- `2.25b6h` -- 2.25bpw, 6bit lm_head
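
Each quant lives on its own branch, so you can pull just the one you want. A minimal sketch using `huggingface_hub` (the branch and local directory below are just examples):

```python
from huggingface_hub import snapshot_download

# Fetch a single quant by passing its branch name as the revision.
snapshot_download(
    repo_id="Quant-Cartel/SorcererLM-8x22b-exl2-longcal",
    revision="4b6h",                    # any branch name from the list above
    local_dir="SorcererLM-8x22b-4b6h",  # example path, adjust to taste
)
```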

Original model link: [rAIfle/SorcererLM-8x22b-bf16](https://huggingface.co/rAIfle/SorcererLM-8x22b-bf16)

Original model README below.

-----

# SorcererLM-8x22b-bf16

<img src="https://files.catbox.moe/1kohx8.png" width="400"/>

<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/6569a4ed2419be6072890cf8/L_uGojVkNUsK6QHvWgs9o.mpga"></audio>

Oh boy, here we go. Low-rank (`r=16, alpha=32`) 16-bit LoRA on top of [WizardLM-2-8x22B](https://huggingface.co/alpindale/WizardLM-2-8x22B), trained for 2 epochs on (cleaned & deduped) c2-logs. As far as I can tell, this is an upgrade over `WizardLM-2-8x22B` for RP purposes.

Alongside this ready-to-use release, I'm also releasing the LoRA itself, as well as its earlier `epoch1` checkpoint.

## Why A LoRA?

The choice was fully intentional. I briefly considered a full fine-tune (FFT), but for this particular use case a LoRA seemed a better fit. `WizardLM-2-8x22B` is smart by itself, but the vocabulary it uses leaves much to be desired when it comes to RP. Training a low-rank LoRA on top of it to teach it some of Claude's writing style remedies that.
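
For reference, the stated hyperparameters map onto a PEFT config roughly like the one below. This is a sketch only: the target modules and dropout are illustrative assumptions, not the values from the actual run (those live in the qlora-pipe configs in the `train` subfolder).

```python
from peft import LoraConfig

# Illustrative equivalent of the r=16, alpha=32 low-rank setup described above.
lora_config = LoraConfig(
    r=16,           # LoRA rank, as stated above
    lora_alpha=32,  # scaling factor, as stated above
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    lora_dropout=0.05,                                        # assumption
    task_type="CAUSAL_LM",
)
```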

## Prompting

- Use the templates in [Quant-Cartel/Recommended-Settings](https://huggingface.co/Quant-Cartel/Recommended-Settings) under the `SorcererLM` folder.
- Alternatively, use Vicuna 1.1 and a sane context template. The model is somewhat sensitive to samplers; I'd recommend Temperature 1, MinP 0.05, and a dash of DRY, but YMMV. Shorter prompts seem to work better, too. See the sketch below for a minimal setup.
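
Putting that together, here's a minimal ExLlamaV2 inference sketch with the recommended samplers. The model path and prompt are illustrative, and DRY is omitted since support for it depends on your backend and version:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Load a downloaded exl2 quant (path is an example).
config = ExLlamaV2Config()
config.model_dir = "SorcererLM-8x22b-4b6h"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split weights across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

# Samplers recommended above; DRY omitted here (backend-dependent).
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 1.0
settings.min_p = 0.05

# Vicuna 1.1-style prompt: "USER: ... ASSISTANT:".
prompt = "USER: Write a short in-character greeting. ASSISTANT:"
print(generator.generate_simple(prompt, settings, 200))
```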

## Quantized Versions

- [iMat GGUFs](https://huggingface.co/Quant-Cartel/SorcererLM-8x22b-iMat-GGUF)
- [longcal exl2s](https://huggingface.co/Quant-Cartel/SorcererLM-8x22b-exl2-longcal)

## Acknowledgments

The main shoutout I want to make is to my [Cartel](https://huggingface.co/Quant-Cartel) bros, [Envoid](https://huggingface.co/Envoid) and particularly [I^2](https://huggingface.co/InferenceIllusionist), for being amazing. I count this as a team effort, so they deserve kudos too if you like this.


## Training

Trained using [qlora-pipe](https://github.com/tdrussell/qlora-pipe). Configs are included in the `train` subfolder.

## Safety

... n/a