PocketDoc committed
Commit 16f9e28 · unverified · 1 Parent(s): b9083a7

Update README.md to reflect current gradient checkpointing support


Previously, the README stated that gradient checkpointing was incompatible with the current implementation of 4-bit LoRA; this is no longer the case. I have replaced the warning with a link to the Hugging Face documentation on gradient checkpointing.

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -387,7 +387,7 @@ train_on_inputs: false
 # don't use this, leads to wonky training (according to someone on the internet)
 group_by_length: false
 
-# does not work with current implementation of 4-bit LoRA
+# Whether to use gradient checkpointing https://huggingface.co/docs/transformers/v4.18.0/en/performance#gradient-checkpointing
 gradient_checkpointing: false
 
 # stop training after this many evaluation losses have increased in a row
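
For context, the Transformers documentation linked in the new comment describes gradient checkpointing as recomputing activations during the backward pass to reduce memory use at the cost of extra compute. Below is a minimal sketch of how that option is typically enabled through the Transformers API; the model name and output directory are illustrative placeholders, not taken from this repository's config.

```python
# Minimal sketch (assumed setup, not this repo's training code) of enabling
# gradient checkpointing with Hugging Face Transformers.
from transformers import AutoModelForCausalLM, TrainingArguments

# Placeholder model; the model used by this project may differ.
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Option 1: enable checkpointing directly on the model.
model.gradient_checkpointing_enable()

# Option 2: let the Trainer enable it via TrainingArguments.
training_args = TrainingArguments(
    output_dir="checkpoints",      # placeholder output directory
    gradient_checkpointing=True,   # recompute activations to save memory
)
```

In the YAML config shown in the diff, the equivalent change is simply setting `gradient_checkpointing: true`.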