Update README.md
README.md CHANGED
@@ -84,7 +84,7 @@ The original LLaMA model was trained for 1 trillion tokens and GPT-J was trained
 
 
 | **Task/Metric** | GPT-J 6B | LLaMA 7B | LLaMA 13B | OpenLLaMA 7B | OpenLLaMA 3B | OpenLLaMA 13B |
-
+| ---------------------- | -------- | -------- | --------- | ------------ | ------------ | ------------- |
 | anli_r1/acc | 0.32 | 0.35 | 0.35 | 0.33 | 0.33 | 0.33 |
 | anli_r2/acc | 0.34 | 0.34 | 0.36 | 0.36 | 0.32 | 0.33 |
 | anli_r3/acc | 0.35 | 0.37 | 0.39 | 0.38 | 0.35 | 0.40 |
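The change adds the delimiter row that GitHub-flavored Markdown requires between a table's header and its body; without it, the table renders as plain text rather than as a table. A minimal sketch of the repaired syntax, using the first two columns of the table above:

| **Task/Metric** | GPT-J 6B |
| --------------- | -------- |
| anli_r1/acc     | 0.32     |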