---
inference: false
model-index:
- name: vicuna-13B-1.1-HF
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 52.73
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TheBloke/vicuna-13B-1.1-HF
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 80.13
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TheBloke/vicuna-13B-1.1-HF
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 51.94
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TheBloke/vicuna-13B-1.1-HF
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 52.08
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TheBloke/vicuna-13B-1.1-HF
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 74.19
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TheBloke/vicuna-13B-1.1-HF
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 8.64
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TheBloke/vicuna-13B-1.1-HF
      name: Open LLM Leaderboard
---

**NOTE: New version available**

Please check out a newer version of the weights [here](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md).

<br>

# Vicuna Model Card

## Model Details

Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.

- **Developed by:** [LMSYS](https://lmsys.org/)
- **Model type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license
- **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971)

### Model Sources

- **Repository:** https://github.com/lm-sys/FastChat
- **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/
- **Paper:** https://arxiv.org/abs/2306.05685
- **Demo:** https://chat.lmsys.org/

## Uses

The primary use of Vicuna is research on large language models and chatbots.
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.

## How to Get Started with the Model

- Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights
- APIs (OpenAI API, Hugging Face API): https://github.com/lm-sys/FastChat/tree/main#api

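The FastChat CLI and API servers handle prompt formatting for you. If you load the weights directly with `transformers`, you have to reproduce the conversation template yourself. The sketch below is a minimal, hypothetical helper assuming the v1.1 `USER:`/`ASSISTANT:` separator style and the commonly cited default system prompt; check `fastchat/conversation.py` for the canonical template before relying on it.

```python
# Sketch of the Vicuna v1.1 conversation format (USER:/ASSISTANT: separators).
# The system prompt below is the commonly cited default; verify it against
# fastchat/conversation.py, as this helper is illustrative, not canonical.
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_prompt(turns):
    """turns: list of (user_msg, assistant_msg) pairs; assistant_msg is None
    for the final turn, which the model is expected to complete."""
    parts = [SYSTEM]
    for user_msg, assistant_msg in turns:
        parts.append(f"USER: {user_msg}")
        if assistant_msg is None:
            parts.append("ASSISTANT:")  # model continues from here
        else:
            parts.append(f"ASSISTANT: {assistant_msg}</s>")
    return " ".join(parts)

prompt = build_prompt([("What is the capital of France?", None)])
```

Pass the resulting string to the tokenizer and `model.generate`; the model completes the text after the trailing `ASSISTANT:` marker.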
## Training Details

Vicuna v1.1 is fine-tuned from LLaMA with supervised instruction fine-tuning.
The training data is around 70K conversations collected from ShareGPT.com.
See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf).

## Evaluation

Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and on the [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard).

## Differences between Vicuna versions

See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md).

## Acknowledgement

Special thanks to [@TheBloke](https://huggingface.co/TheBloke) for hosting an earlier merged version of these weights.

## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__vicuna-13B-1.1-HF).

| Metric                          |Value|
|---------------------------------|----:|
|Avg.                             |53.29|
|AI2 Reasoning Challenge (25-Shot)|52.73|
|HellaSwag (10-Shot)              |80.13|
|MMLU (5-Shot)                    |51.94|
|TruthfulQA (0-shot)              |52.08|
|Winogrande (5-shot)              |74.19|
|GSM8k (5-shot)                   | 8.64|
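As a quick sanity check, the Avg. row is simply the arithmetic mean of the six benchmark scores:

```python
# Mean of the six Open LLM Leaderboard scores reported above.
scores = {
    "AI2 Reasoning Challenge (25-Shot)": 52.73,
    "HellaSwag (10-Shot)": 80.13,
    "MMLU (5-Shot)": 51.94,
    "TruthfulQA (0-shot)": 52.08,
    "Winogrande (5-shot)": 74.19,
    "GSM8k (5-shot)": 8.64,
}
avg = sum(scores.values()) / len(scores)  # ≈ 53.285, displayed as 53.29
```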