wonhosong committed
Commit b848b9f · Parent: f254d62

Update README.md

Files changed (1): README.md +4 -4
README.md CHANGED
@@ -36,8 +36,7 @@ pipeline_tag: text-generation
  - [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca)
  - [metaeval/ScienceQA_text_only](https://huggingface.co/datasets/metaeval/ScienceQA_text_only)
  - [GAIR/lima](https://huggingface.co/datasets/GAIR/lima)
-
- > No other data was used except for the dataset mentioned above
+ - No other data was used except for the dataset mentioned above

  ### Prompt Template
  ```
@@ -89,16 +88,17 @@ output_text = tokenizer.decode(output[0], skip_special_tokens=True)
  - We conducted a performance evaluation based on the tasks being evaluated on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
  We evaluated our model on four benchmark datasets, which include `ARC-Challenge`, `HellaSwag`, `MMLU`, and `TruthfulQA`
  We used the [lm-evaluation-harness repository](https://github.com/EleutherAI/lm-evaluation-harness), specifically commit [b281b0921b636bc36ad05c0b0b0763bd6dd43463](https://github.com/EleutherAI/lm-evaluation-harness/tree/b281b0921b636bc36ad05c0b0b0763bd6dd43463)
+ - We used [MT-bench](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge), a set of challenging multi-turn open-ended questions, to evaluate the models

  ### Main Results
  | Model | H4(Avg) | ARC | HellaSwag | MMLU | TruthfulQA | | MT_Bench |
  |--------------------------------------------------------------------|----------|----------|----------|------|----------|-|-------------|
- | **[Llama-2-70b-instruct-v2](https://huggingface.co/upstage/Llama-2-70b-instruct-v2)**(***Ours***, ***Local Reproduction***) | **72.7** | **71.6** | **87.7** | 69.7 | **61.6** | | **7.44063** |
+ | **[Llama-2-70b-instruct-v2](https://huggingface.co/upstage/Llama-2-70b-instruct-v2)**(Ours, Local Reproduction) | **72.7** | **71.6** | **87.7** | 69.7 | **61.6** | | **7.44063** |
  | [Llama-2-70b-instruct](https://huggingface.co/upstage/Llama-2-70b-instruct) (Ours, Open LLM Leaderboard) | 72.3 | 70.9 | 87.5 | **69.8** | 61 | | 7.24375 |
  | [llama-65b-instruct](https://huggingface.co/upstage/llama-65b-instruct) (Ours, Open LLM Leaderboard) | 69.4 | 67.6 | 86.5 | 64.9 | 58.8 | | |
  | Llama-2-70b-hf | 67.3 | 67.3 | 87.3 | **69.8** | 44.9 | | |
  | [llama-30b-instruct-2048](https://huggingface.co/upstage/llama-30b-instruct-2048) (Ours, Open LLM Leaderboard) | 67.0 | 64.9 | 84.9 | 61.9 | 56.3 | | |
- | [llama-30b-instruct](https://huggingface.co/upstage/llama-30b-instruct) (Ours, Open LLM Leaderboard) | 65.2 | 62.5 | 86.2 | 59.4 | 52.8 | | |
+ | [llama-30b-instruct](https://huggingface.co/upstage/llama-30b-instruct) (***Ours***, ***Open LLM Leaderboard***) | 65.2 | 62.5 | 86.2 | 59.4 | 52.8 | | |
  | llama-65b | 64.2 | 63.5 | 86.1 | 63.9 | 43.4 | | |
  | falcon-40b-instruct | 63.4 | 61.6 | 84.3 | 55.4 | 52.5 | | |
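
For context on the evaluation setup the diff describes, here is a minimal sketch of how the four leaderboard benchmarks could be run against the pinned harness commit. It assumes the old harness's `simple_evaluate` entry point and the Open LLM Leaderboard's usual few-shot settings (25-shot ARC-Challenge, 10-shot HellaSwag, 5-shot MMLU, 0-shot TruthfulQA-MC); the task names, shot counts, and model arguments below are assumptions, not part of this commit.

```python
# Sketch only: leaderboard-style runs with lm-evaluation-harness pinned to the
# commit named in the README (assumed install step):
#   pip install git+https://github.com/EleutherAI/lm-evaluation-harness@b281b0921b636bc36ad05c0b0b0763bd6dd43463
from lm_eval import evaluator

# Few-shot settings follow Open LLM Leaderboard conventions of the time;
# they are assumptions here, not recorded in this commit.
BENCHMARKS = [
    ("arc_challenge", 25),                  # ARC-Challenge
    ("hellaswag", 10),                      # HellaSwag
    ("hendrycksTest-abstract_algebra", 5),  # one of the 57 MMLU subtasks;
                                            # the leaderboard averages all 57
    ("truthfulqa_mc", 0),                   # TruthfulQA (multiple choice)
]

for task, shots in BENCHMARKS:
    out = evaluator.simple_evaluate(
        model="hf-causal",
        model_args="pretrained=upstage/Llama-2-70b-instruct-v2",
        tasks=[task],
        num_fewshot=shots,
        batch_size=1,
    )
    print(task, out["results"][task])
```

The MT_Bench column, per the README line this commit adds, comes from the [MT-bench](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) pipeline in FastChat's llm_judge directory rather than from the harness, so its generation and GPT-judging steps are driven by that repository's own scripts.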