ensure enable_input_require_grads is called on the model before getting the peft model (#345) 176b888 unverified winglian committed on Aug 6, 2023
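A hypothetical mock of the ordering this commit enforces. The real APIs are transformers' `PreTrainedModel.enable_input_require_grads()` and peft's `get_peft_model()`; the `MockBaseModel` class and `mock_get_peft_model` function below are stand-ins for illustration only, not axolotl's actual code. With gradient checkpointing and a frozen base model, input embeddings must require grads before PEFT wraps the model, or backprop has no path through the checkpointed activations.

```python
# Stand-in for a transformers model; the real method has the same name.
class MockBaseModel:
    def __init__(self):
        self.inputs_require_grads = False

    def enable_input_require_grads(self):
        # mirrors transformers' PreTrainedModel.enable_input_require_grads()
        self.inputs_require_grads = True


def mock_get_peft_model(model):
    # stand-in for peft.get_peft_model(); checks the precondition the
    # commit guarantees by reordering the calls
    if not model.inputs_require_grads:
        raise RuntimeError("call enable_input_require_grads() before wrapping")
    return model


model = MockBaseModel()
model.enable_input_require_grads()  # must happen first (the fix in #345)
peft_model = mock_get_peft_model(model)
```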
scope flash-attn+qlora fix correctly, scope to llama, add comment 78b9efb tmm1 committed on Aug 3, 2023
ensure flash-attn fixes happen in both adapter/lora modes, and use torch_dtype 248bf90 tmm1 committed on Aug 2, 2023
add peft install back since it doesn't get installed by setup.py (#331) db2a358 unverified winglian committed on Jul 31, 2023
don't use llama if trust_remote_code is set, since that needs the AutoModel path 66afb76 winglian committed on Jul 8, 2023
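A hypothetical sketch of the dispatch logic commit 66afb76 describes: when `trust_remote_code` is set, the checkpoint may ship its own modeling code, so loading must go through the generic AutoModel path rather than the llama-specific class. The class name strings are the real transformers ones, but the `select_model_class` selector itself is illustrative, not axolotl's actual code.

```python
def select_model_class(model_type: str, trust_remote_code: bool) -> str:
    """Pick which transformers entry point a loader would use (illustrative)."""
    if model_type == "llama" and not trust_remote_code:
        # safe to use the specialized class (and its attention patches)
        return "LlamaForCausalLM"
    # custom remote code needs AutoModelForCausalLM.from_pretrained(...,
    # trust_remote_code=True) so the repo's own modeling classes are used
    return "AutoModelForCausalLM"
```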
Merge pull request #187 from OpenAccess-AI-Collective/strip-peft-device-map 93dacba unverified winglian committed on Jun 12, 2023
Merge pull request #177 from NanoCode012/fix/landmark-patch 8002ffb unverified winglian committed on Jun 12, 2023
Merge pull request #182 from OpenAccess-AI-Collective/fix-llama-ref 0124825 unverified winglian committed on Jun 10, 2023
fix for local variable 'LlamaForCausalLM' referenced before assignment 14163c1 winglian committed on Jun 10, 2023
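Commit 14163c1 fixes a classic Python pitfall: a name bound only inside a conditional branch (here, a `LlamaForCausalLM` import presumably guarded by a patch condition), then referenced unconditionally. A minimal generic reproduction of that bug class, using a stand-in binding rather than axolotl's actual loading code:

```python
def broken(use_patched: bool):
    if use_patched:
        LlamaForCausalLM = object  # stand-in; only bound on this branch
    # raises UnboundLocalError when use_patched is False, because the
    # assignment above makes the name local to the whole function
    return LlamaForCausalLM


def fixed(use_patched: bool):
    LlamaForCausalLM = None  # bind a default before the branch
    if use_patched:
        LlamaForCausalLM = object
    return LlamaForCausalLM
```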
new prompters, misc fixes for missing output dir when using FSDP, and changing max seq len 4ac9e25 winglian committed on Jun 6, 2023
Merge pull request #124 from OpenAccess-AI-Collective/xformers-fix 2d0ba3b unverified winglian committed on May 31, 2023
copy xformers attn from ooba since we removed dep on alpaca_lora_4bit 6cb2310 winglian committed on May 31, 2023