raffr committed
Commit e011561

Duplicate from localmodels/LLM
.gitattributes ADDED

*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
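Each pattern above routes matching files through Git LFS. As an illustration only (a hypothetical helper, not part of this repo), the glob patterns can be approximated against a filename with Python's `fnmatch`; note that git's own attribute-pattern matching has richer semantics (e.g. `saved_model/**/*`), so this is just a sketch:

```python
from fnmatch import fnmatch

# A subset of the globs from the .gitattributes rules above.
LFS_PATTERNS = ["*.bin", "*.safetensors", "*.pt", "*tfevents*"]

def tracked_by_lfs(filename: str) -> bool:
    """Return True if the filename matches any LFS-tracked glob (approximation)."""
    return any(fnmatch(filename, pat) for pat in LFS_PATTERNS)

print(tracked_by_lfs("vicuna-7b-v1.3.ggmlv3.q2_K.bin"))  # True: matches *.bin
print(tracked_by_lfs("README.md"))                        # False: plain text stays in git
```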
README.md ADDED

---
duplicated_from: localmodels/LLM
---
# Vicuna 7B v1.3 ggml

From LMSYS: https://huggingface.co/lmsys/vicuna-7b-v1.3

---

### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`

Quantized using an older version of llama.cpp; compatible with llama.cpp as of May 19, commit 2d5db48.

### k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`

Quantization methods compatible with the latest llama.cpp as of June 6, commit 2d43387.

---

## Files

| Name | Quant method | Bits | Size | Max RAM required (no GPU offloading) | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| vicuna-7b-v1.3.ggmlv3.q2_K.bin | q2_K | 2 | 2.87 GB | 5.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| vicuna-7b-v1.3.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 3.60 GB | 6.10 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
| vicuna-7b-v1.3.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 3.28 GB | 5.78 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
| vicuna-7b-v1.3.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 2.95 GB | 5.45 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors. |
| vicuna-7b-v1.3.ggmlv3.q4_0.bin | q4_0 | 4 | 3.79 GB | 6.29 GB | Original llama.cpp quant method, 4-bit. |
| vicuna-7b-v1.3.ggmlv3.q4_1.bin | q4_1 | 4 | 4.21 GB | 6.71 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0; however, inference is quicker than with the q5 models. |
| vicuna-7b-v1.3.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 4.08 GB | 6.58 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K. |
| vicuna-7b-v1.3.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 3.83 GB | 6.33 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors. |
| vicuna-7b-v1.3.ggmlv3.q5_0.bin | q5_0 | 5 | 4.63 GB | 7.13 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage, and slower inference. |
| vicuna-7b-v1.3.ggmlv3.q5_1.bin | q5_1 | 5 | 5.06 GB | 7.56 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy and resource usage, and even slower inference. |
| vicuna-7b-v1.3.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 4.78 GB | 7.28 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K. |
| vicuna-7b-v1.3.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 4.65 GB | 7.15 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors. |
| vicuna-7b-v1.3.ggmlv3.q6_K.bin | q6_K | 6 | 5.53 GB | 8.03 GB | New k-quant method. Uses GGML_TYPE_Q8_K, 6-bit quantization, for all tensors. |
| vicuna-7b-v1.3.ggmlv3.q8_0.bin | q8_0 | 8 | 7.16 GB | 9.66 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |

---
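The "Max RAM required" column above tracks the file size plus a roughly constant ~2.5 GB overhead (context/KV cache and scratch buffers). A quick sketch of that rule of thumb; the 2.50 GB constant is inferred from the rows of the table, not an official figure:

```python
# Rule of thumb inferred from the table above: max RAM ≈ file size + ~2.5 GB.
OVERHEAD_GB = 2.50  # assumption derived from the table, not a measured value

def estimated_max_ram_gb(file_size_gb: float) -> float:
    """Estimate peak RAM in GB for fully CPU-resident inference."""
    return round(file_size_gb + OVERHEAD_GB, 2)

# Matches the q2_K and q4_0 rows above:
print(estimated_max_ram_gb(2.87))  # 5.37
print(estimated_max_ram_gb(3.79))  # 6.29
```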
# Vicuna Model Card

## Model Details

Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.

- **Developed by:** [LMSYS](https://lmsys.org/)
- **Model type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license
- **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971)

### Model Sources

- **Repository:** https://github.com/lm-sys/FastChat
- **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/
- **Paper:** https://arxiv.org/abs/2306.05685
- **Demo:** https://chat.lmsys.org/

## Uses

The primary use of Vicuna is research on large language models and chatbots. The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.

## How to Get Started with the Model

- Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights
- APIs (OpenAI API, Hugging Face API): https://github.com/lm-sys/FastChat/tree/main#api
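When running these ggml files outside FastChat (e.g. directly with llama.cpp), prompts should follow the Vicuna v1.1+ conversation template. A minimal sketch, with the system prompt taken from FastChat's `vicuna_v1.1` template; verify the exact wording against the FastChat repo for your version:

```python
# Vicuna v1.1+ conversation template (FastChat "vicuna_v1.1" style).
SYSTEM = ("A chat between a curious user and an artificial intelligence assistant. "
          "The assistant gives helpful, detailed, and polite answers to the user's questions.")

def build_prompt(user_message: str) -> str:
    """Wrap a single user turn in the USER:/ASSISTANT: template."""
    return f"{SYSTEM} USER: {user_message} ASSISTANT:"

print(build_prompt("What is Vicuna?"))
```

Generation should then continue from the trailing `ASSISTANT:` marker.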
## Training Details

Vicuna v1.3 is fine-tuned from LLaMA with supervised instruction fine-tuning. The training data is around 140K conversations collected from ShareGPT.com. See the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf) for more details.

## Evaluation

Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf).

## Differences between versions of Vicuna

See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
vicuna-7b-v1.3.ggmlv3.q2_K.bin ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:ba975e596f1c73076e4299b77575e411f66ab152f5be143e4cbe6590f46a009c
size 2866807424
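Each `.bin` entry in this commit is stored in git as a Git LFS pointer like the one above (spec: https://git-lfs.github.com/spec/v1), not the weights themselves. A minimal sketch (hypothetical helper) that parses a pointer into its fields:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into {'version', 'oid', 'size'} fields."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    fields["size"] = int(fields["size"])  # size is the real file's byte count
    return fields

# The q2_K pointer from above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:ba975e596f1c73076e4299b77575e411f66ab152f5be143e4cbe6590f46a009c
size 2866807424"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # 2866807424 (≈ 2.87 GB, matching the table above)
```

The `oid` field can be checked against a downloaded file with `sha256sum` to verify integrity.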
vicuna-7b-v1.3.ggmlv3.q3_K_L.bin ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:ce5640d949acb7e6ebd6f2532c01876fcb4db127bf4b137c5c0c9c485d17ff81
size 3596821120

vicuna-7b-v1.3.ggmlv3.q3_K_M.bin ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:5fbade1913244de286ab2f678422ac432070aad30600e329f894c35f99c0c75d
size 3282248320

vicuna-7b-v1.3.ggmlv3.q3_K_S.bin ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:7c57ae15cd3cf7af869d7a948c8dd18362d138fd6d4c733a0c2bd8380df76fdb
size 2948014720

vicuna-7b-v1.3.ggmlv3.q4_0.bin ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:23ce5ed290b56a19305178b9ada2c3d96036bd69a6c18304b6158eb6672d6c0f
size 3791725184

vicuna-7b-v1.3.ggmlv3.q4_1.bin ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:5190ac9f52810b34bacf02e2367eb784436db49ba180ee897e63f3a31949e784
size 4212859520

vicuna-7b-v1.3.ggmlv3.q4_K_M.bin ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:345f4ece11bec4e8fe805d45accb9f97e068266836651bb3a5e47265d0d9f744
size 4080714368

vicuna-7b-v1.3.ggmlv3.q4_K_S.bin ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:b84c1da7ed9a4dd8b4a409ac8e6e0fb1fbfe4d08d34a6e8ae896f41b10bb95ea
size 3825517184

vicuna-7b-v1.3.ggmlv3.q5_0.bin ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:9ce6a940599ba11b2b2f2c3f82a5ba4fd1efc9b121860f8b3028860d1b4e64ea
size 4633993856

vicuna-7b-v1.3.ggmlv3.q5_1.bin ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:ca88e8de6251f08aa01979e8c7584bf4902c98c4f52eac146c781bf009e3af05
size 5055128192

vicuna-7b-v1.3.ggmlv3.q5_K_M.bin ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:c350fe44fd902c84da735041066cc0fbb2588647a9f189760e9ceae08f31a7e9
size 4782867072

vicuna-7b-v1.3.ggmlv3.q5_K_S.bin ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:235a84c5904b303033d8871cc3be09d44a1af99b07ced737b674bcd21583b40c
size 4651401856

vicuna-7b-v1.3.ggmlv3.q6_K.bin ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:c65b3d0a90fc10c4684e0c2a150ceae88fd8f831e3462b8067543d3d4fdcbfd3
size 5528904320

vicuna-7b-v1.3.ggmlv3.q8_0.bin ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:937fb83f8d03dba1dad54efe824ae5815c02188c592fa06290e9d3d468471dba
size 7160799872