Commit History
8a1572a  Unsloth optims for Llama (#1609)  (winglian)
8b9c15b  feat: exclude mamba blocks for jamba (#1578)  (Nanobit)
68601ec  make sure everything stays in the same dtype when using dpo + FSDP (#1559)  (winglian)
6319da1  Unsloth gradient checkpointing offload (#1528)  (winglian)
132eb74  DBRX Model Support (#1462)  (winglian)
2fa65b9  ignore issues with calculating # params when printing (#1493)  (winglian)
0b10377  reduce verbosity of the special tokens (#1472)  (winglian)
6086be8  qwen2_moe support w multipack (#1455)  (winglian)
05b398a  fix some of the edge cases for Jamba (#1452)  (winglian)
02af082  Jamba (#1451)  (winglian)
4155e99  fix layer_replication arg to peft (#1446)  (winglian)
25afd35  support layer replication for peft and fix rslora integration (#1445)  (winglian)
ff939d8  fix(dataset): normalize tokenizer config and change hash from tokenizer class to tokenizer path (#1298)  (Nanobit)
2a1589f  strip out hacky qlora-fsdp workarounds now that qlora-fsdp fixes are upstreamed (#1428)  (winglian)
b1e3e1b  fix(config): passing gradient_checkpoint_kwargs (#1412)  (Nanobit)
8df7b88  beta support for multipack with gemmoe (#1402)  (winglian)
7659c00  support for rslora (#1387) [skip ci]  (winglian)
9b6ee83  FSDP + QLoRA (#1378)  (winglian)
0cfdb2c  support for DoRA w/ PEFT (#1363)  (winglian)
6b3b271  fix for protected model_ namespace w pydantic (#1345)  (winglian)
cc3cebf  Pydantic 2.x cfg (#1239)  (winglian)
5894f0e  make mlflow optional (#1317)  (winglian)
fac2d98  Add MPS support (#1264)
5698943  simplify handling for newer multipack patches so they can be added in a single place (#1270)  (winglian)
73f1bda  Fix bug preventing model_kwargs being injected (#1262)  (Zac Brannelly)
8c2e05a  relora: magnitude pruning of the optimizer (#1245)  (winglian)
00568c1  support for true batches with multipack (#1230)  (winglian)
c67fb71  Peft deepspeed resume (#1227)  (winglian)
25e037f  Support for additional_special_tokens (#1221) [skip ci]
8608d80  Fix typo (#1231) [skip ci]  (xhedit)
4cb7900  Peft LoftQ (#1222)  (winglian)
8da1633  Revert "run PR e2e docker CI tests in Modal" (#1220) [skip ci]  (winglian)
36d053f  run PR e2e docker CI tests in Modal (#1217) [skip ci]  (winglian)
e923e62  more checks and fixes for deepspeed and fsdp (#1208) [skip ci]  (winglian)
98b4762  Feat/chatml add system message (#1117)
08719b9  fix(log): improve warning to clarify that lora_modules_to_save expects a list (#1197)  (Nanobit)
54d2ac1  Mixtral fixes 20240124 (#1192) [skip ci]  (winglian)
814aee6  Phi2 multipack (#1173)  (winglian)
e799e08  Falcon embeddings (#1149) [skip docker]  (winglian)
eaaeefc  jupyter lab fixes (#1139) [skip ci]  (winglian)
f5a828a  Qwen2 (#1166)  (winglian)
fccb542  make sure the model config loader respects the model_revision too (#1160) [skip ci]  (winglian)
2ce5c0d  Deprecate max packed sequence len (#1141)  (winglian)
6910e6a  Multipack simplify for Mixtral (#1142)  (winglian)
8487b97  Add `layers_to_transform` for `lora_config` (#1118)  (xzuyn)
da97285  keep gate in fp32 for 16 bit loras (#1105)  (winglian)