We prune Llama-3.1-8B-Instruct to 1.4B and fine-tune it with the LLM-Neo method.
## Benchmarks
In this section, we report results for Llama3.1-Neo-1B-100w on standard automatic benchmarks. All evaluations use the [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) library.
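A typical harness invocation for a Hugging Face checkpoint looks like the sketch below. The model path and task list are illustrative placeholders (this README does not specify them), and the exact tasks used for the reported numbers may differ:

```shell
# Sketch: evaluate a checkpoint with lm-evaluation-harness.
# `path/to/Llama3.1-Neo-1B-100w`, the task list, and batch size are
# placeholders, not values stated in this README.
pip install lm-eval

lm_eval --model hf \
    --model_args pretrained=path/to/Llama3.1-Neo-1B-100w,dtype=bfloat16 \
    --tasks hellaswag,arc_easy,winogrande \
    --batch_size 8
```

The `hf` backend loads the model through `transformers`; pass a local directory or a Hub repo id to `pretrained=`.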
### Evaluation results