Update README.md
README.md
@@ -211,7 +211,7 @@ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
 ```
 
 # Notes:
-- For small datasets with narrow content which the model has already done well on our domain, and doesn't want the model to forget the knowledge => Just need to focus on q,v.
+- For small datasets with narrow content where the model already performs well on our domain and we don't want it to forget that knowledge => it is enough to adapt only the q and v projections, following the LoRA paper.
 - Fine-tuned LoRA with rank = 8 and alpha = 16, epoch = 1, linear (optim)
 - DoRA
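The note's choice of rank = 8 and alpha = 16 on the q/v projections can be sketched with plain NumPy. This is a minimal illustration of the LoRA update rule (the shapes and variable names here are hypothetical, not the repo's actual code): the base weight `W` stays frozen while a low-rank delta `B @ A`, scaled by `alpha / r`, is added on top.

```python
import numpy as np

# Hypothetical dimensions for illustration; r and alpha match the notes above.
d_out, d_in, r, alpha = 64, 64, 8, 16

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))     # frozen base weight (e.g. a q or v projection)
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection, initialized small
B = np.zeros((d_out, r))                   # trainable up-projection, zero-init => delta starts at 0

def lora_forward(x):
    # Base path plus low-rank adapter path, scaled by alpha / r (here 16 / 8 = 2.0).
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# Because B starts at zero, the adapted model initially matches the frozen base model.
assert np.allclose(lora_forward(x), W @ x)
```

The zero initialization of `B` is why fine-tuning starts from exactly the pretrained behavior, which is what makes restricting the adapters to q and v a low-risk way to avoid forgetting.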