Angainor Development committed
Commit 193c73b · unverified · 1 parent: 6abfd87

Fix training over existing lora


When training with LoRA starting from existing LoRA weights, the current code produces a model with 0 trainable parameters, so training cannot work.
Passing the "is_trainable" param allows the loaded PEFT model to be trained and fixes the bug.
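The mechanism behind the bug can be sketched with a toy stand-in (this is a hypothetical simplification, not the real peft API): by default, loading a pretrained adapter freezes its weights for inference, so unless the loader is told the model should remain trainable, every parameter ends up with `requires_grad=False` and the trainable-parameter count is 0.

```python
# Hypothetical minimal model of peft's loading behavior, for illustration
# only: FakeParam and FakePeftModel are made-up names, not real peft classes.

class FakeParam:
    """Stand-in for a weight tensor with a trainable flag."""
    def __init__(self, numel):
        self.numel = numel
        self.requires_grad = True


class FakePeftModel:
    """Mimics the default of freezing loaded adapter weights."""
    def __init__(self, params):
        self.params = params

    @classmethod
    def from_pretrained(cls, params, is_trainable=False):
        model = cls(params)
        if not is_trainable:
            # Inference mode (the default): freeze every loaded parameter.
            for p in model.params:
                p.requires_grad = False
        return model

    def num_trainable(self):
        return sum(p.numel for p in self.params if p.requires_grad)


# Loading without is_trainable=True: nothing left for the optimizer.
frozen = FakePeftModel.from_pretrained([FakeParam(1024), FakeParam(2048)])
print(frozen.num_trainable())  # → 0

# Loading with is_trainable=True: adapter weights stay trainable.
trainable = FakePeftModel.from_pretrained(
    [FakeParam(1024), FakeParam(2048)], is_trainable=True
)
print(trainable.num_trainable())  # → 3072
```

The one-line fix below applies the same idea to the real call site: it threads `is_trainable=True` into `PeftModel.from_pretrained` so resumed LoRA training starts with a nonzero set of trainable parameters.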

Files changed (1)
  1. src/axolotl/utils/models.py +1 -0
src/axolotl/utils/models.py CHANGED

@@ -402,6 +402,7 @@ def load_lora(model, cfg):
     model = PeftModel.from_pretrained(
         model,
         cfg.lora_model_dir,
+        is_trainable=True,
         device_map=cfg.device_map,
         # torch_dtype=torch.float16,
     )