Pankaj Mathur committed · Commit 122b007 · 1 Parent(s): 5451a37 · Update README.md
README.md CHANGED
---
language:
library_name: transformers
pipeline_tag: text-generation
---

# orca_mini_7b

An [OpenLLaMa-7B model](https://github.com/openlm-research/open_llama) trained on explain-tuned datasets, created using instructions and inputs from the WizardLM, Alpaca, and Dolly-V2 datasets and applying the dataset construction approaches of the [Orca Research Paper](https://arxiv.org/abs/2306.02707).

# Dataset

We built the explain-tuned [WizardLM dataset (~70K)](https://github.com/nlpxucan/WizardLM), [Alpaca dataset (~52K)](https://crfm.stanford.edu/2023/03/13/alpaca.html), and [Dolly-V2 dataset (~15K)](https://github.com/databrickslabs/dolly) using approaches from the [Orca Research Paper](https://arxiv.org/abs/2306.02707).

We leverage all 15 system instructions provided in the Orca Research Paper to generate custom datasets, in contrast to the vanilla instruction-tuning approaches used by the original datasets.
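As a rough illustration of the dataset-construction idea, each training example pairs one of the Orca-style system instructions with an instruction/input pair from the source datasets. The system instructions shown here are paraphrased examples in the spirit of the Orca Research Paper, and the record schema is an assumption, not taken from this card:

```python
# Hypothetical sketch of how an explain-tuned record might be assembled.
# The instructions below are paraphrased stand-ins; the actual 15 system
# instructions and the exact field names may differ.

SYSTEM_INSTRUCTIONS = [
    "You are an AI assistant. Provide a detailed answer so the user "
    "does not need to search outside to understand the answer.",
    "You are an AI assistant that helps people find information. "
    "Explain your reasoning step by step.",
    "Explain how you used the definitions to come up with the answer.",
]

def build_record(system: str, instruction: str, input_text: str = "") -> dict:
    """Combine a system instruction with a source-dataset example."""
    return {
        "system": system,
        "instruction": instruction,
        "input": input_text,
    }

record = build_record(
    SYSTEM_INSTRUCTIONS[0],
    "Summarize the water cycle in two sentences.",
)
print(record)
```

In this setup the teacher model's detailed response to such a record would become the training target for the student model.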

This helps the student model, aka [wizardlm_alpaca_dolly_orca_open_llama_13b](https://huggingface.co/psmathur/wizardlm_alpaca_dolly_orca_open_llama_13b), learn the ***thought*** process of the teacher model, which is ChatGPT (gpt-3.5-turbo-0301 version).

Please see the example usage below for how the **System** prompt is added before each **instruction**.
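A minimal sketch of the prompt layout described above, with the **System** block prepended before each **instruction**. The exact template markers (`### System:`, `### User:`, etc.) are an assumption and may differ from the format the model was actually trained with:

```python
# Hypothetical prompt builder: places the System prompt before the
# instruction, with an optional Input section. Marker strings are assumed.

def build_prompt(system: str, instruction: str, input_text: str = "") -> str:
    prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n"
    if input_text:
        prompt += f"### Input:\n{input_text}\n\n"
    prompt += "### Response:\n"
    return prompt

prompt = build_prompt(
    "You are an AI assistant that follows instruction extremely well. "
    "Help as much as you can.",
    "Tell me about Orcas.",
)
print(prompt)
```

The resulting string would then be tokenized and passed to the model's `generate` call via the `transformers` library.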

# Training