base_model: tiiuae/falcon-180B
---

### Finetuning Overview:

**Model Used:** tiiuae/falcon-180B

**Dataset:** databricks/databricks-dolly-15k

#### Dataset Insights:

The databricks-dolly-15k dataset is a collection of over 15,000 records curated through the collective efforts of numerous Databricks professionals. It is designed to:

- Enable the interactive, instruction-following behavior characteristic of ChatGPT-like models.
- Offer prompt/response pairs across eight instruction categories: the seven categories from the InstructGPT paper plus an added open-ended category.
- Ensure authenticity: contributors were restricted from sourcing content online (except Wikipedia, permitted for some categories) and from using generative AI to craft content.

During the dataset's creation, contributors answered questions posed by fellow contributors, with an emphasis on rephrasing the original queries and providing accurate responses. Certain data subsets also incorporate Wikipedia passages, identifiable by bracketed citation numbers like [42]; for optimal results in subsequent applications, users are advised to remove these references (see the sketch below).
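
As one way to do that, here is a minimal sketch in plain Python (the sample sentence is illustrative, not taken from the dataset):

```python
import re

def strip_citations(text: str) -> str:
    """Remove bracketed Wikipedia citation markers such as [42]."""
    return re.sub(r"\[\d+\]", "", text)

print(strip_citations("Falcon models were trained on web-scale data.[42]"))
# -> "Falcon models were trained on web-scale data."
```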
#### Finetuning Details:

Our finetuning harnessed the capabilities of [MonsterAPI](https://monsterapi.ai)'s no-code [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm):

- **Duration:** The session spanned 41.7 hours.
- **Cost:** The entire process cost `$184.314`.
- **Hardware Utilized:** 2x A100 80GB GPUs.
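
For reference, simple division of the reported figures gives roughly `$4.42` per hour for the two-GPU session, i.e. about `$2.21` per A100-hour.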
#### Hyperparameters & Additional Details:

- **Model Path:** tiiuae/falcon-180B
- **Learning Rate:** 0.0002
- **Epochs:** 1
- **Data Split:** Training 90% / Validation 10%
- **Gradient Accumulation Steps:** 1
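
These values are set through MonsterAPI's no-code interface; as a rough, hypothetical equivalent for anyone reproducing the run locally, they map onto Hugging Face `datasets`/`transformers` settings like the following (batch size, seed, and output path are assumptions, not reported by this card):

```python
from datasets import load_dataset
from transformers import TrainingArguments

# Data Split: Training 90% / Validation 10%
dolly = load_dataset("databricks/databricks-dolly-15k", split="train")
split = dolly.train_test_split(test_size=0.1, seed=42)  # seed is an assumption

args = TrainingArguments(
    output_dir="falcon-180B-dolly",    # hypothetical output path
    learning_rate=2e-4,                # Learning Rate: 0.0002
    num_train_epochs=1,                # Epochs: 1
    gradient_accumulation_steps=1,     # Gradient Accumulation Steps: 1
    per_device_train_batch_size=1,     # assumption: not reported by the card
)
```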
---

### Prompt Used:

```
### INSTRUCTION:
[instruction]

### RESPONSE:
[response]
```
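
For illustration, a small helper (hypothetical, not part of the training pipeline above) that renders an instruction into this template:

```python
def build_prompt(instruction: str) -> str:
    """Render an instruction into the INSTRUCTION/RESPONSE template above."""
    return f"### INSTRUCTION:\n{instruction}\n\n### RESPONSE:\n"

print(build_prompt("Summarize the databricks-dolly-15k dataset in one sentence."))
```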
#### Loss Metrics:

Training loss:

![training loss](train-loss.png "Training loss")

---

license: apache-2.0