Commit History

6319da1  Unsloth gradient checkpointing offload (#1528)  (winglian)
132eb74  DBRX Model Support (#1462)  (winglian)
ff01c45  add field to sft dataset pydantic for completion support (#1497)  (winglian)
9430b6e  Remove `validate_quantized_dora` (#1485)  (xzuyn)
bda48f0  fix: reduce sample_packing warning (#1484)  (Nanobit)
bf4cd67  feat: validate sample packing requires flash_attention (#1465)  (Nanobit)
05b0b7e  add support for cohere chat template (#1478)  (winglian)
5aa5097  Pretrain multipack v2 (#1470)  (winglian)
586bd8d  fix pretraining_ on odd datasets (#1463)  (monsoon-nlp)
0ddfb24  LISA (#1469)  (winglian, tmm1)
02af082  Jamba (#1451)  (winglian)
25afd35  support layer replication for peft and fix rslora integration (#1445)  (winglian)
601b77b  make sure to capture non-null defaults from config validation (#1415)  (winglian)
ff939d8  fix(dataset): normalize tokenizer config and change hash from tokenizer class to tokenizer path (#1298)  (Nanobit)
dd449c5  support galore once upstreamed into transformers (#1409)  (winglian)
40a88e8  Feat: Add sharegpt multirole (#1137)  (Nanobit)
43bdc5d  Add a config not to shuffle merged dataset (#1394) [skip ci]  (seungduk, winglian)
2ea70eb  ORPO (#1419)  (winglian)
0976781  Update ChatTemplate enum to include alpaca and gemma (#1396)  (chiragjn)
4326520  chore: lint (#1389)  (winglian)
0bc114d  Fix pydantic configuration for the max_memory input (#1385) [skip ci]  (dandm1, winglian)
7659c00  support for rslora (#1387) [skip ci]  (winglian)
3fd8093  validation for fsdp and deepspeed (#1388) [skip ci]  (winglian)
0cfdb2c  support for DoRA w/ PEFT (#1363)  (winglian)
decb66e  lora+ support (#1352)  (winglian)
b5b4492  Fix validation for early stopping (#1358)  (chiragjn)
6b3b271  fix for protected model_ namespace w pydantic (#1345)  (winglian)
3a5a2d2  Fix `use_mlflow` to be bool instead of str (#1344)  (chiragjn)
0f985e1  more fixes 20240228 (#1342) [skip ci]  (winglian)
c1a7b3d  add gemma instruct chat template (#1341)  (winglian)
3f69571  more pydantic fixes (#1338)  (winglian)
1e3d530  Support user-defined prompt processing strategies for dpo (#1248)  (nopperl, winglian)
1648279  add lion-pytorch optimizer (#1299) [skip ci]  (Maxime, winglian)
269c543  hotfix to exclude_unset from pydantic config when converting back to a dict (#1334)  (winglian)
e7eed20  hotfix for missing outputs params (#1333)  (winglian)
cf00231  hotfix for lora rank (#1332)  (winglian)
7de912e  hotfix for capabilities loading (#1331)  (winglian)
cc3cebf  Pydantic 2.x cfg (#1239)  (winglian)