mav23 committed
Commit 43ba769 · verified · Parent(s): d567706

Upload folder using huggingface_hub

Files changed (3):
  1. .gitattributes +1 -0
  2. README.md +61 -0
  3. goliath-120b.Q2_K.gguf +3 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+goliath-120b.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED

---
license: llama2
language:
- en
pipeline_tag: conversational
tags:
- merge
---
# Goliath 120B

An auto-regressive causal LM created by merging two finetuned [Llama-2 70B](https://huggingface.co/meta-llama/llama-2-70b-hf) models into one.

Please check out the quantized formats provided by [@TheBloke](https://huggingface.co/TheBloke) and [@Panchovix](https://huggingface.co/Panchovix):

- [GGUF](https://huggingface.co/TheBloke/goliath-120b-GGUF) (llama.cpp)
- [GPTQ](https://huggingface.co/TheBloke/goliath-120b-GPTQ) (KoboldAI, TGW, Aphrodite)
- [AWQ](https://huggingface.co/TheBloke/goliath-120b-AWQ) (TGW, Aphrodite, vLLM)
- [Exllamav2](https://huggingface.co/Panchovix/goliath-120b-exl2) (TGW, KoboldAI)

# Prompting Format

Both Vicuna and Alpaca formats will work, but since the initial and final layers belong primarily to Xwin, I expect Vicuna to work best.
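
As a minimal sketch, a Vicuna-style prompt can be assembled like this (the exact system-prompt wording is an assumption based on the common Vicuna v1.1 template, not something this card specifies):

```python
def vicuna_prompt(user_message: str) -> str:
    """Build a Vicuna-v1.1-style prompt string for a single-turn exchange."""
    system = (
        "A chat between a curious user and an artificial intelligence assistant. "
        "The assistant gives helpful, detailed, and polite answers to the user's questions."
    )
    return f"{system} USER: {user_message} ASSISTANT:"

print(vicuna_prompt("Hello"))
```

The model's reply is then generated as the continuation after `ASSISTANT:`.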

# Merge process

The models used in the merge are [Xwin](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1) and [Euryale](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B).

The layer ranges used are as follows:

```yaml
- model: Xwin
  range: [0, 16]
- model: Euryale
  range: [8, 24]
- model: Xwin
  range: [17, 32]
- model: Euryale
  range: [25, 40]
- model: Xwin
  range: [33, 48]
- model: Euryale
  range: [41, 56]
- model: Xwin
  range: [49, 64]
- model: Euryale
  range: [57, 72]
- model: Xwin
  range: [65, 80]
```
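
As a rough sanity check on the recipe above (treating each range as half-open, i.e. `[start, end)`, which is an assumption about the notation), the slice sizes sum to the merged model's layer count:

```python
# Slices from the merge recipe: (model, start, end), end assumed exclusive.
slices = [
    ("Xwin", 0, 16), ("Euryale", 8, 24), ("Xwin", 17, 32),
    ("Euryale", 25, 40), ("Xwin", 33, 48), ("Euryale", 41, 56),
    ("Xwin", 49, 64), ("Euryale", 57, 72), ("Xwin", 65, 80),
]

# Each slice contributes (end - start) decoder layers to the stacked model.
total_layers = sum(end - start for _, start, end in slices)
print(total_layers)  # 137, versus 80 layers in a single Llama-2 70B
```

The overlapping ranges mean some layer indices from each donor model appear twice in the stack, which is how two 70B models yield a ~120B merge.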

# Screenshots

![image/png](https://cdn-uploads.huggingface.co/production/uploads/635567189c72a7e742f1419c/Cat8_Rimaz6Ni7YhQiiGB.png)

# Benchmarks

Coming soon.

# Acknowledgements

Credit goes to [@chargoddard](https://huggingface.co/chargoddard) for developing [mergekit](https://github.com/cg123/mergekit), the framework used to merge the model.

Special thanks to [@Undi95](https://huggingface.co/Undi95) for helping with the merge ratios.
goliath-120b.Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2a55744c85faaa64656e4aae4022d636c628c628366749250678dc05d125fcd3
+size 43245727360
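
The three lines above are a Git LFS pointer, not the model weights themselves. A minimal sketch for checking a downloaded file against the pointer's `oid` and `size` (the local file path is hypothetical):

```python
import hashlib

# Values copied from the LFS pointer in this commit.
EXPECTED_OID = "2a55744c85faaa64656e4aae4022d636c628c628366749250678dc05d125fcd3"
EXPECTED_SIZE = 43245727360

def verify_lfs_object(path: str, expected_oid: str, expected_size: int) -> bool:
    """Stream-hash a file with SHA-256 and compare against an LFS pointer."""
    h = hashlib.sha256()
    size = 0
    with open(path, "rb") as f:
        # Read in 1 MiB chunks so a 40+ GB file never sits fully in memory.
        while block := f.read(1 << 20):
            h.update(block)
            size += len(block)
    return h.hexdigest() == expected_oid and size == expected_size
```

Usage would be `verify_lfs_object("goliath-120b.Q2_K.gguf", EXPECTED_OID, EXPECTED_SIZE)` after the download completes.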