Update README.md
README.md CHANGED

@@ -6,7 +6,7 @@ license: other
 
 This is a 7b parameter, fine-tuned on 100k synthetic instruction/response pairs generated by gpt-3.5-turbo using my version of self-instruct [airoboros](https://github.com/jondurbin/airoboros)
 
-Context length is 2048.
+Context length for this model is 2048.
 
 Links:
 
@@ -80,7 +80,7 @@ with open("as_conversations.json", "w") as outfile:
 
 ## Evaluation
 
-I used the same questions from (WizardVicunaLM)
+I used the same questions from [WizardVicunaLM](https://github.com/melodysdreamj/WizardVicunaLM):
 
 | instruction | gpt3.5 | wizard-vicuna-13b | vicuna-13b | wizard-7b | airoboros-gpt-3.5-turbo-100k-7b |
 | --- | --- | --- | --- | --- | --- |
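For context, here is a minimal sketch (not part of the diff) of how a model like the one this README describes might be loaded and prompted with the Hugging Face transformers library. The repository id below is an assumption inferred from the model name in the evaluation table, not something stated in the diff.

```python
# Hedged sketch: load and prompt the fine-tuned 7b model described in the README.
# The repo id is an assumption based on the "airoboros-gpt-3.5-turbo-100k-7b"
# column in the evaluation table; substitute the actual repository name.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jondurbin/airoboros-gpt-3.5-turbo-100k-7b"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Write a haiku about open-source language models."
inputs = tokenizer(prompt, return_tensors="pt")

# The README states a 2048-token context length, so keep prompt plus
# generated tokens within that budget.
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```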