lewtun/zephyr-7b-dpo-qlora-fix
Tags: PEFT, TensorBoard, Safetensors, mistral, alignment-handbook, Generated from Trainer, trl, dpo, 4-bit precision, bitsandbytes
Dataset: HuggingFaceH4/ultrafeedback_binarized
License: apache-2.0
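The repository stores a PEFT (QLoRA) adapter rather than full model weights, so it is meant to be loaded on top of its base model. The following is a minimal loading sketch, not taken from the repository itself: it assumes the base model recorded in adapter_config.json resolves automatically via peft, and the NF4 settings are illustrative choices consistent with the 4-bit precision and bitsandbytes tags.

```python
# Minimal sketch of loading this adapter in 4-bit; assumes transformers, peft,
# and bitsandbytes are installed with GPU support. The NF4 quant type and
# bfloat16 compute dtype are assumptions, not read from the repo.
import torch
from transformers import AutoTokenizer, BitsAndBytesConfig
from peft import AutoPeftModelForCausalLM

repo_id = "lewtun/zephyr-7b-dpo-qlora-fix"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Loads the base model referenced by adapter_config.json and attaches the
# LoRA adapter stored in adapter_model.safetensors.
model = AutoPeftModelForCausalLM.from_pretrained(
    repo_id,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(repo_id)
```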
Files and versions (zephyr-7b-dpo-qlora-fix, 1 contributor, 6 commits)
Latest commit: 17e3ec2 (verified, 12 months ago) by lewtun (HF staff): "Training in progress, step 400"
File                        Size        Last commit message              Last updated
runs                        -           Training in progress, step 400   12 months ago
.gitattributes              1.52 kB     initial commit                   12 months ago
adapter_config.json         657 Bytes   Training in progress, step 100   12 months ago
adapter_model.safetensors   671 MB      Training in progress, step 400   12 months ago (LFS)
special_tokens_map.json     551 Bytes   Training in progress, step 100   12 months ago
tokenizer.json              1.8 MB      Training in progress, step 100   12 months ago
tokenizer_config.json       1.39 kB     Training in progress, step 100   12 months ago
training_args.bin           4.86 kB     Training in progress, step 100   12 months ago (LFS, pickle)

training_args.bin is a pickle file; the Hub's scanner detected 8 pickle imports: torch.device, transformers.trainer_utils.HubStrategy, transformers.trainer_utils.IntervalStrategy, transformers.trainer_utils.SchedulerType, accelerate.state.PartialState, alignment.configs.DPOConfig, accelerate.utils.dataclasses.DistributedType, transformers.training_args.OptimizerNames.
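Because training_args.bin is a pickled config object rather than a tensor checkpoint, inspecting it requires unpickling. The sketch below is an assumption about how one could look at it locally; it only works if the libraries defining the classes listed above (transformers, accelerate, and the alignment-handbook package providing alignment.configs.DPOConfig) are installed, and pickles should only be loaded from sources you trust.

```python
# Hedged sketch: download and unpickle training_args.bin to inspect the
# DPO training configuration. Requires transformers, accelerate, and the
# alignment-handbook package to be importable.
import torch
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="lewtun/zephyr-7b-dpo-qlora-fix",
    filename="training_args.bin",
)

# weights_only=False is needed because the file contains arbitrary pickled
# Python objects, not just tensors.
training_args = torch.load(path, weights_only=False)
print(type(training_args))  # expected to be alignment.configs.DPOConfig
print(training_args)
```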