Angainor Development
committed on
Fix training over existing lora
When training with LoRA and starting from existing LoRA weights, the current code produces a model with 0 trainable params, so training can't work.
Adding the "is_trainable" param allows the loaded PEFT model to be trained and fixes the bug.
src/axolotl/utils/models.py
CHANGED
@@ -402,6 +402,7 @@ def load_lora(model, cfg):
     model = PeftModel.from_pretrained(
         model,
         cfg.lora_model_dir,
+        is_trainable=True,
         device_map=cfg.device_map,
         # torch_dtype=torch.float16,
     )
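For context, a minimal sketch of the behavior this fix addresses. The model name and adapter path below are placeholders (not from this repo), and count_trainable is a small helper written for illustration:

# A minimal sketch, assuming transformers and peft are installed.
# "base-model" and "path/to/lora" are placeholder names, not from this repo.
from transformers import AutoModelForCausalLM
from peft import PeftModel

def count_trainable(model):
    # Count parameters that will receive gradients during training.
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

base = AutoModelForCausalLM.from_pretrained("base-model")

# Default: the adapter is loaded in inference mode, so all weights are frozen.
frozen = PeftModel.from_pretrained(base, "path/to/lora")
print(count_trainable(frozen))  # 0 -- the bug this commit fixes

# With is_trainable=True the LoRA weights keep requires_grad=True,
# so training can resume from the existing adapter.
base = AutoModelForCausalLM.from_pretrained("base-model")
resumable = PeftModel.from_pretrained(base, "path/to/lora", is_trainable=True)
print(count_trainable(resumable))  # > 0

PeftModel.from_pretrained defaults to is_trainable=False, which loads the adapter for inference and freezes its weights; passing is_trainable=True keeps the LoRA weights trainable so a fine-tuning run can continue from the saved adapter.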