Trained on 100k dumped messages from the 'chan' todd proxy. I could not dedupe the dataset, but it has had a serious effect on the llama-7b I used. It calls me "master" a whole bunch more now.

The content isn't SFW, so be aware. Trained in 4-bit for 3 epochs; I think it overfit and really only needed 2.
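
For anyone reproducing something similar, here's a minimal sketch of a 4-bit QLoRA-style config. The rank, alpha, batch size, and learning rate below are placeholders for illustration, not the exact values used for this run:

```python
# Hedged sketch of a 4-bit LoRA training config (3 epochs, per the card).
# r / lora_alpha / learning_rate are placeholder guesses, not the real values.
from peft import LoraConfig
from transformers import TrainingArguments

lora_cfg = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,   # placeholder rank settings
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
train_args = TrainingArguments(
    output_dir="out",
    num_train_epochs=3,                      # the card says 3, and that 2 was probably enough
    per_device_train_batch_size=4,
    learning_rate=2e-4,
    fp16=True,
)
```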

Tested in 4-bit and FP16 on plain HF llama-7b; it may also work on derivative models of the same base.
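
A minimal loading sketch if you want to try it, assuming the adapter files sit in a local `./todd-lora` folder (hypothetical path) and that transformers, peft, and bitsandbytes are installed:

```python
# Load the 4-bit base model and attach the LoRA adapter on top.
# "./todd-lora" is a placeholder path for this adapter's files.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base = "huggyllama/llama-7b"  # plain HF llama-7b, matching what the card was tested on

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base)

model = PeftModel.from_pretrained(model, "./todd-lora")  # frozen base + LoRA weights
```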


The V2 version was trained at a higher rank and a longer context (512), on only unique data, with "as a large language model" (ALLM) statements and "content warning" statements removed.
It is much stronger.
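
A rough sketch of what that V2-style cleanup looks like (exact-match dedupe plus dropping boilerplate phrases); the `messages.jsonl` filename, record format, and phrase list are assumptions for illustration, not the actual pipeline:

```python
# Dedupe messages and drop ones containing refusal/warning boilerplate.
# Assumes one {"text": ...} JSON record per line (hypothetical format).
import json

DROP_PHRASES = ("as a large language model", "as an ai language model", "content warning")

seen, kept = set(), []
with open("messages.jsonl") as f:
    for line in f:
        text = json.loads(line)["text"]
        lowered = text.lower()
        if text in seen or any(p in lowered for p in DROP_PHRASES):
            continue                     # skip duplicates and boilerplate
        seen.add(text)
        kept.append(text)

with open("messages_clean.jsonl", "w") as f:
    for text in kept:
        f.write(json.dumps({"text": text}) + "\n")
```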