---
inference: false
model-index:
  - name: vicuna-13B-1.1-HF
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 52.73
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TheBloke/vicuna-13B-1.1-HF
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 80.13
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TheBloke/vicuna-13B-1.1-HF
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 51.94
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TheBloke/vicuna-13B-1.1-HF
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 52.08
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TheBloke/vicuna-13B-1.1-HF
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 74.19
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TheBloke/vicuna-13B-1.1-HF
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 8.64
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TheBloke/vicuna-13B-1.1-HF
          name: Open LLM Leaderboard
---

NOTE: New version available. Please check out a newer version of the weights here.


Vicuna Model Card

Model Details

Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.

  • Developed by: LMSYS
  • Model type: An auto-regressive language model based on the transformer architecture.
  • License: Non-commercial license
  • Finetuned from model: LLaMA.

Model Sources

  • Repository: https://github.com/lm-sys/FastChat
  • Blog: https://lmsys.org/blog/2023-03-30-vicuna/
  • Paper: https://arxiv.org/abs/2306.05685
  • Demo: https://chat.lmsys.org

Uses

The primary use of Vicuna is research on large language models and chatbots. The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.

How to Get Started with the Model

  • Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights
  • APIs (OpenAI API, Hugging Face API): https://github.com/lm-sys/FastChat/tree/main#api

A minimal loading sketch is shown below.
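The following sketch loads the merged weights with Hugging Face transformers, assuming the repo id TheBloke/vicuna-13B-1.1-HF (the one referenced by the leaderboard entries above) and the Vicuna v1.1 prompt template; the generation settings are illustrative, not prescribed by this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/vicuna-13B-1.1-HF"  # assumption: the merged HF weights this card describes
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # 13B in fp16 needs roughly 26 GB of GPU memory
    device_map="auto",
)

# Vicuna v1.1 uses a plain-text system preamble followed by USER:/ASSISTANT: turns.
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions. USER: What is the capital of France? ASSISTANT:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

Note that v1.1 changed the turn separator from the "###" used by v0 to the EOS token, so the USER:/ASSISTANT: format above applies to this version only; see vicuna_weights_version.md.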

Training Details

Vicuna v1.1 is fine-tuned from LLaMA with supervised instruction fine-tuning. The training data is around 70K conversations collected from ShareGPT.com. See more details in the "Training Details of Vicuna Models" section in the appendix of this paper.
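As a rough illustration of the supervised fine-tuning objective, the sketch below converts one ShareGPT-style conversation into a training example in the v1.1 template, masking the loss on everything except the assistant's replies. It mirrors FastChat's preprocessing in spirit but is a simplified assumption, not its actual code; the "from"/"value" field names follow the ShareGPT dump format.

```python
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_example(conversation, tokenizer):
    """Return (input_ids, labels); labels are -100 wherever no loss is taken."""
    input_ids, labels = [], []

    def append(text, supervised):
        ids = tokenizer(text, add_special_tokens=False)["input_ids"]
        input_ids.extend(ids)
        labels.extend(ids if supervised else [-100] * len(ids))

    append(SYSTEM + " ", supervised=False)
    for turn in conversation:
        if turn["from"] == "human":
            append("USER: " + turn["value"] + " ", supervised=False)
        else:
            append("ASSISTANT: ", supervised=False)
            # Only the assistant's reply (plus EOS) contributes to the loss.
            append(turn["value"] + tokenizer.eos_token, supervised=True)
    return input_ids, labels
```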

Evaluation

Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this paper and leaderboard.
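The Open LLM Leaderboard results reported at the end of this card come from EleutherAI's lm-evaluation-harness. A hedged sketch of reproducing one number through the harness's Python API (v0.4+) follows; the task name and few-shot count are assumptions matched to the 25-shot ARC row below.

```python
import lm_eval

# Evaluate ARC-Challenge, 25-shot, on the merged HF weights.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=TheBloke/vicuna-13B-1.1-HF,dtype=float16",
    tasks=["arc_challenge"],
    num_fewshot=25,
)
print(results["results"]["arc_challenge"])
```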

Differences between Vicuna versions

See vicuna_weights_version.md

Acknowledgement

Special thanks to @TheBloke for hosting this merged version of the weights earlier.

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 53.29 |
| AI2 Reasoning Challenge (25-Shot) | 52.73 |
| HellaSwag (10-Shot)               | 80.13 |
| MMLU (5-Shot)                     | 51.94 |
| TruthfulQA (0-shot)               | 52.08 |
| Winogrande (5-shot)               | 74.19 |
| GSM8k (5-shot)                    |  8.64 |