Tasks: Text Generation
Modalities: Text
Formats: json
Languages: English
Size: 100K - 1M
Tags: instruction-finetuning
License:
Update README.md
README.md CHANGED
@@ -53,7 +53,7 @@ The data in Tapir are mainly in English (BCP-47 en).
 The data fields are as follows:
 
 * `instruction`: describes the task the model should perform.
-* `input`: context or input for the task. Each of the
+* `input`: context or input for the task. Each of the 116K inputs is unique.
 * `output`: the answer taken from the original Tapir Dataset formatted as an IFTTT recipe.
 * `score`: the correlation score obtained via BertForNextSentencePrediction.
 * `text`: the `instruction`, `input` and `output` formatted with the [prompt template](https://github.com/tatsu-lab/stanford_alpaca#data-release) used by the authors of Alpaca for fine-tuning their models.
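For concreteness, here is a minimal sketch of how the `text` field is assembled from the other fields, assuming the with-input variant of the Alpaca prompt template linked above; the example record is hypothetical, not taken from the dataset:

```python
# Sketch: build the `text` field from `instruction`, `input`, and `output`
# using the Alpaca prompt template (see the stanford_alpaca link above).
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n{output}"
)

def build_text(example: dict) -> str:
    """Format one record into the Alpaca prompt layout."""
    return ALPACA_TEMPLATE.format(
        instruction=example["instruction"],
        input=example["input"],
        output=example["output"],
    )

# Hypothetical record, for illustration only.
example = {
    "instruction": "Generate an IFTTT recipe for the request below.",
    "input": "Text me when it starts raining.",
    "output": "IF Weather.rain THEN SMS.send",
}
print(build_text(example))
```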
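The card does not spell out how the `score` field was produced; the following is a plausible sketch using `BertForNextSentencePrediction` from Hugging Face `transformers`, where the checkpoint (`bert-base-uncased`) and the input/output pairing are assumptions:

```python
# Sketch: a next-sentence-prediction "correlation" score between a request
# and its IFTTT recipe. How the dataset authors actually paired the texts
# and which checkpoint they used are assumptions here.
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased")
model.eval()

def nsp_score(first: str, second: str) -> float:
    """Probability (NSP label 0 = 'is next') that `second` follows `first`."""
    encoding = tokenizer(first, second, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**encoding).logits
    return torch.softmax(logits, dim=-1)[0, 0].item()

print(nsp_score("Text me when it starts raining.", "IF Weather.rain THEN SMS.send"))
```

Under this reading, `score` is the softmax probability of the "is next sentence" label, so values closer to 1 would indicate a stronger input/output correlation.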