Commit History
- 919727b  Refactor landmark attention patch (Nanobit)
- 958da70  fix formatting (winglian)
- a808bf9  Fix missing cfg. (Angainor Development, unverified)
- 0124825  Merge pull request #182 from OpenAccess-AI-Collective/fix-llama-ref (winglian, unverified)
- 0c6f928  address PR feedback (winglian)
- eea2731  add streaming dataset support for pretraining datasets (winglian)
- ab5cd28  more gpt-neox long ctx fixes (winglian)
- 1a82082  fix bettertransformers save, force it to skip after saving correctly in callback (winglian)
- 1210dc8  more tweaks to do pre-training with bettertransformers (winglian)
- 488a67d  experimental expansion of ctx len (winglian)
- 71a43f8  add validation/warning for bettertransformers and torch version (winglian)
- 1edc30c  add support for optimum bettertransformers (winglian)
- 14163c1  fix for local variable 'LlamaForCausalLM' referenced before assignment (winglian)
- 79e2a6f  Merge branch 'main' into patch-1 (Angainor Development, unverified)
- a03a7d7  add support to extend context with xpos rope (winglian)
- 7f09106  fix for max sequence len across different model types (winglian)
- aefb2fc  Fix backward compat for peft (Nanobit)
- 813cfa4  WIP: Rely on cfg.inference (Angainor Development, unverified)
- 2a801b0  Fix grad checkpoint and outputs param (Nanobit)
- e44c9e0  Fix patching via import instead of hijacking (Nanobit)
- 55b8542  Feat: Add landmark attention (Nanobit)
- f4df266  Disable Wandb (Bruno Cabral)
- 2ef4634  Refactor out unmodified save_steps and eval_steps (Nanobit)
- 2cfe9e9  Set to use cfg.seed or 42 for backward compat (Nanobit)
- bfd27ba  Fix failing test (Nanobit)
- babf0fd  Validate falcon with fsdp (Nanobit)
- df9528f  Fix future deprecate prepare_model_for_int8_training (Nanobit)
- 193c73b  Fix training over existing lora (Angainor Development, unverified)
- 59bb219  fix camel ai, add guanaco/oasst mapping for sharegpt (winglian)
- 4ac9e25  new prompters, misc fixes for output dir missing using fsdp, and changing max seq len (winglian)
- 3c71c8d  Update doc for grad_accu and add validation tests for batch size (Nanobit)
- 5a631b3  fix batch size calculation (winglian)
- 9b8585d  fix packing so that concatenated sequences reset the attention (winglian)
- 2d0ba3b  Merge pull request #124 from OpenAccess-AI-Collective/xformers-fix (winglian, unverified)
- c7021e1  Merge pull request #120 from OpenAccess-AI-Collective/model-from-path (winglian, unverified)
- c56818b  don't worry about dupes (winglian)
- e3c494c  remove unused import and update readme (winglian)
- ad0ea6a  black formatting (winglian)
- 6cb2310  copy xformers attn from ooba since we removed dep on alpaca_lora_4bit (winglian)
- 3aad5f3  add support for gradient accumulation steps (winglian)
- 39a208c  fix up tokenizer config, isort fix (winglian)
- 2520ecd  split up llama model loading so config can be loaded from base config and models can be loaded from a path (winglian)
- 594e72b  Fix incorrect rebase (Nanobit)
- 25eeeeb  Fix sharegpt prompt (Nanobit)
- cfcc549  fix relative path for fixtures (winglian)
- a1f9850  Fix security issue or ignore false positives (Nanobit)