Vijayendra
committed on
Update README.md
README.md CHANGED
@@ -27,7 +27,7 @@ LoRA Fine-Tuning: Low-Rank Adaptation (LoRA) enhances parameter efficiency, allo
 
 4-Bit Quantization: Memory usage is significantly reduced, making the model deployable on resource-constrained systems.
 
-
+Gradient Checkpointing: Further optimizations for handling long sequences and reducing GPU memory usage.
 
 The model is trained using the SFTTrainer library from the trl package, with parameters optimized for accuracy and resource efficiency.
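For context, here is a minimal sketch of how the pieces named in this README change (4-bit quantization, LoRA, gradient checkpointing, and trl's SFTTrainer) typically fit together. This is not the repository's actual training script: the base model name, dataset, and hyperparameters below are illustrative assumptions, and the SFTConfig usage assumes a recent version of trl.

```python
# Sketch only: model name, dataset, and hyperparameters are placeholders,
# not values taken from this repository.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from trl import SFTConfig, SFTTrainer

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder base model

# 4-bit quantization: load weights in NF4 to cut GPU memory usage.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# LoRA: train small low-rank adapter matrices instead of the full weights.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

# Gradient checkpointing trades recomputation for activation memory,
# which helps when training on long sequences.
training_args = SFTConfig(
    output_dir="./sft-out",
    gradient_checkpointing=True,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
)

# Example instruction-tuning dataset; the repo's data may differ.
dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")

trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    peft_config=peft_config,
)
trainer.train()
```

Because the quantized base weights stay frozen, only the LoRA adapters receive gradients, which is what keeps both memory usage and the number of trainable parameters small.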