Vijayendra committed · Commit 570407e · verified · 1 Parent(s): 58d8272

Update README.md

Files changed (1): README.md +1 -1
README.md CHANGED
@@ -27,7 +27,7 @@ LoRA Fine-Tuning: Low-Rank Adaptation (LoRA) enhances parameter efficiency, allo
 
 4-Bit Quantization: Memory usage is significantly reduced, making the model deployable on resource-constrained systems.
 
-Cyclic Attention and Gradient Checkpointing: Further optimizations for handling long sequences and reducing GPU memory usage.
+Gradient Checkpointing: Further optimizations for handling long sequences and reducing GPU memory usage.
 
 The model is trained using the SFTTrainer library from the trl package, with parameters optimized for accuracy and resource efficiency.
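
For context, a minimal sketch of the training setup the README describes: 4-bit quantization via bitsandbytes, a LoRA adapter via peft, gradient checkpointing, and trl's SFTTrainer. The base model name, dataset, and hyperparameters below are illustrative placeholders rather than this repo's actual values, and exact argument names vary across trl versions.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from trl import SFTConfig, SFTTrainer

# Placeholder base model and dataset, not taken from this repo.
model_name = "meta-llama/Llama-2-7b-hf"
dataset = load_dataset("imdb", split="train")

# 4-bit quantization (NF4) to cut GPU memory usage.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
# Gradient checkpointing: recompute activations in the backward pass
# instead of storing them, trading compute for memory on long sequences.
model.gradient_checkpointing_enable()

# LoRA: train small low-rank adapter matrices instead of full weights.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    args=SFTConfig(output_dir="sft-out", per_device_train_batch_size=1),
)
trainer.train()
```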