Update README.md
README.md
```diff
@@ -3,4 +3,8 @@ effect on the llama7b I used. Calls me master a whole bunch more now.
 
 Content isn't SFW, so be aware. Trained in 4-bit for 3 epochs; I think it overfit and really needed just 2.
 
-Tested in 4-bit and FP16 on plain HF llama-7b, maybe it works on derivative models of the same beaks.
+Tested in 4-bit and FP16 on plain HF llama-7b; maybe it works on derivative models of the same base.
+
+
+V2 was trained at a higher rank and a longer context (512) on only unique data, with ALLMs and "content warning" statements removed.
+It is much stronger.
```
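For reference, a minimal sketch of loading a LoRA like this on top of plain HF llama-7b in 4-bit, as the card describes. The base model id (`huggyllama/llama-7b`) and the adapter path (`./lora-v2`) are assumptions for illustration, not names taken from this repo; substitute your own.

```python
# Hypothetical loading sketch: the base model id and adapter path below are
# assumptions, not names from this repo. Requires transformers, peft, and
# bitsandbytes to be installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

BASE = "huggyllama/llama-7b"   # plain HF llama-7b (assumed id)
ADAPTER = "./lora-v2"          # hypothetical local path to the LoRA adapter

# 4-bit quantized load, matching one of the two setups the card says was tested.
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(
    BASE, quantization_config=bnb, device_map="auto"
)
model = PeftModel.from_pretrained(model, ADAPTER)  # attach the LoRA weights

inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

For the FP16 setup instead of 4-bit, drop `quantization_config` and pass `torch_dtype=torch.float16` to `from_pretrained`.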