---
duplicated_from: localmodels/LLM
---
# Vicuna 33B GGML
|
|
|
From LMSYS: https://huggingface.co/lmsys/vicuna-33b-v1.3
|
|
|
---
|
|
|
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
|
|
|
Quantized using an older version of llama.cpp; compatible with llama.cpp as of May 19, commit `2d5db48`.
|
|
|
### k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
|
|
|
These quantization methods are compatible with llama.cpp as of June 6, commit `2d43387`.
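
Both families ship in the GGML v3 ("ggjt") container, which is what the `ggmlv3` in the filenames below refers to. If you want to confirm which container a downloaded file uses before picking a llama.cpp build, here is a minimal Python sketch; it assumes the ggjt header layout used by llama.cpp of this era (a little-endian uint32 magic followed by a uint32 format version):

```python
# Sketch: inspect the GGML container header of a downloaded file.
# Assumption: the ggjt layout used by this-era llama.cpp, i.e. a
# little-endian uint32 magic ('ggjt' == 0x67676A74) followed by a
# uint32 format version (3 for these ggmlv3 files).
import struct

def ggml_header(path: str) -> tuple[int, int]:
    with open(path, "rb") as f:
        magic, version = struct.unpack("<II", f.read(8))
    return magic, version

magic, version = ggml_header("vicuna-33b.ggmlv3.q4_0.bin")
print(hex(magic), version)  # expect 0x67676a74 ('ggjt') and 3
```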
|
|
|
---
|
|
|
## Files
|
| Name | Quant method | Bits | Size | Max RAM required (no GPU offloading) | Use case |
| ---- | ---- | ---- | ---- | ---- | ---- |
| vicuna-33b.ggmlv3.q2_K.bin | q2_K | 2 | 13.71 GB | 16.21 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| vicuna-33b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 17.28 GB | 19.78 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
| vicuna-33b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 15.72 GB | 18.22 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
| vicuna-33b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 14.06 GB | 16.56 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors. |
| vicuna-33b.ggmlv3.q4_0.bin | q4_0 | 4 | 18.30 GB | 20.80 GB | Original llama.cpp quant method, 4-bit. |
| vicuna-33b.ggmlv3.q4_1.bin | q4_1 | 4 | 20.33 GB | 22.83 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0, with quicker inference than the q5 models. |
| vicuna-33b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 19.62 GB | 22.12 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K. |
| vicuna-33b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 18.36 GB | 20.86 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors. |
| vicuna-33b.ggmlv3.q5_0.bin | q5_0 | 5 | 22.37 GB | 24.87 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage, and slower inference. |
| vicuna-33b.ggmlv3.q5_1.bin | q5_1 | 5 | 24.40 GB | 26.90 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy and resource usage, and slower inference. |
| vicuna-33b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 23.05 GB | 25.55 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K. |
| vicuna-33b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 22.40 GB | 24.90 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors. |
| vicuna-33b.ggmlv3.q6_K.bin | q6_K | 6 | 26.69 GB | 29.19 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors (6-bit quantization). |
| vicuna-33b.ggmlv3.q8_0.bin | q8_0 | 8 | 34.56 GB | 37.06 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
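
As a quick local test of any of the files above, here is a minimal sketch using the `llama-cpp-python` bindings. It assumes a GGML-era release of the bindings (roughly 0.1.78 or earlier; later releases only load GGUF files) and uses the standard Vicuna v1.1-style prompt template:

```python
# Sketch: run one of the GGML files above with llama-cpp-python.
# Assumption: a GGML-era release of the bindings (~0.1.78 or earlier).
from llama_cpp import Llama

llm = Llama(
    model_path="vicuna-33b.ggmlv3.q4_K_M.bin",  # any file from the table
    n_ctx=2048,   # LLaMA-1 context length used by Vicuna v1.3
    n_threads=8,  # tune to your CPU
)

# Standard Vicuna v1.1-style prompt template.
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions. USER: What is 4-bit quantization? ASSISTANT:"
)

out = llm(prompt, max_tokens=256, stop=["USER:"])
print(out["choices"][0]["text"])
```

Note that the Max RAM column above is simply the file size plus roughly 2.5 GB of inference overhead; offloading layers to a GPU reduces the CPU RAM needed accordingly.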
|
|
|
---
|
|
|
# Vicuna Model Card
|
|
|
## Model Details
|
|
|
Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.
|
|
|
- **Developed by:** [LMSYS](https://lmsys.org/)
- **Model type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license
- **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971).
|
|
|
### Model Sources
|
|
|
- **Repository:** https://github.com/lm-sys/FastChat
- **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/
- **Paper:** https://arxiv.org/abs/2306.05685
- **Demo:** https://chat.lmsys.org/
|
|
|
## Uses
|
|
|
The primary use of Vicuna is research on large language models and chatbots.

The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.
|
|
|
## How to Get Started with the Model
|
|
|
Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights

APIs (OpenAI API, Hugging Face API): https://github.com/lm-sys/FastChat/tree/main#api
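
The FastChat links above are the supported route. As an alternative illustration, here is a minimal Hugging Face `transformers` sketch for the original full-precision weights (as opposed to the GGML files above); the generation settings are illustrative, and a 33B model needs substantial memory even at float16:

```python
# Sketch: load the unquantized Vicuna v1.3 weights with transformers.
# Assumptions: `transformers` and `accelerate` are installed, and
# enough GPU/CPU memory is available for a 33B model at fp16.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lmsys/vicuna-33b-v1.3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 roughly halves memory vs fp32
    device_map="auto",          # requires `accelerate`
)

# Standard Vicuna v1.1-style prompt template.
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions. USER: Hello! ASSISTANT:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```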
|
|
|
## Training Details
|
|
|
Vicuna v1.3 is fine-tuned from LLaMA with supervised instruction fine-tuning.

The training data is around 140K conversations collected from ShareGPT.com.

See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf).
|
|
|
## Evaluation
|
|
|
Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf).
|
|
|
## Differences between Vicuna versions

See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md).