The Databricks-dolly-15k dataset represents a substantial collection of over 15,000 records, designed to:
- Offer prompt/response pairs across eight different instruction categories, comprising the seven categories from the InstructGPT paper and an added open-ended category.
- Ensure authenticity with restrictions against online sourcing (with the exception of Wikipedia for some categories) and against the use of generative AI in crafting content.

During the dataset's creation, contributors responded to peer questions. A focus was placed on rephrasing the original queries and emphasizing accurate responses. Furthermore, certain data subsets incorporate Wikipedia references, identifiable by bracketed citation numbers like [42].
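If these citation markers are unwanted for training, they are easy to remove during preprocessing. The sketch below (not part of the original pipeline; the helper name is hypothetical) strips bracketed citation numbers with a regular expression:

```python
import re

def strip_citations(text: str) -> str:
    """Remove bracketed Wikipedia-style citation markers such as [42]."""
    return re.sub(r"\[\d+\]", "", text)

sample = "Paris is the capital of France.[42] It hosts the Eiffel Tower.[7]"
print(strip_citations(sample))
# Paris is the capital of France. It hosts the Eiffel Tower.
```

Whether to strip citations depends on the use case: keeping them teaches the model to emit reference markers, which may or may not be desirable.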

#### Finetuning Details:

Our finetuning harnessed the capabilities of [MonsterAPI](https://monsterapi.ai)'s no-code [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm):