# qwen2.5-0.5b-expo-L2EXPO-0.01
This model is a fine-tuned version of [hZzy/qwen2.5-0.5b-sft-news-IFT](https://huggingface.co/hZzy/qwen2.5-0.5b-sft-news-IFT) on the hZzy/train_pairwise dataset. It achieves the following results on the evaluation set:
- Loss: 0.4106
- Logps: -103.3770
- Logits: -1.4557
- Objective: 0.4142
- Dpo Loss: 0.6907
- Regularize: 0.4142
- Ranking Simple: 0.5345
- Ranking Idealized: 0.8584
- Ranking Idealized Expo: 0.5207
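
For context, the "Dpo Loss" metric presumably tracks the standard DPO objective (shown below in its usual sigmoid formulation; the exact variant used for this run is not documented in the card):

$$\mathcal{L}_{\mathrm{DPO}} = -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\!\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]$$

A value near $\log 2 \approx 0.693$ corresponds to near-zero reward margins between chosen and rejected completions.

The checkpoint can be loaded like any Transformers causal LM (a minimal sketch, assuming the repository follows the standard layout of its Qwen2.5 base; the prompt is illustrative only):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hZzy/qwen2.5-0.5b-expo-L2EXPO-0.01"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Illustrative prompt; the model was preference-tuned on hZzy/train_pairwise.
prompt = "Write a one-sentence news summary about renewable energy."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```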
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 1e-07
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 12
- total_train_batch_size: 192
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
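
The hyperparameters above map onto `transformers.TrainingArguments` roughly as follows (a hypothetical reconstruction; the actual training script, trainer class, and precision settings are not included in this card):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="qwen2.5-0.5b-expo-L2EXPO-0.01",
    learning_rate=1e-7,
    per_device_train_batch_size=4,   # 4 per device * 4 GPUs * 12 accumulation steps = 192 total
    per_device_eval_batch_size=4,    # 4 per device * 4 GPUs = 16 total
    gradient_accumulation_steps=12,
    num_train_epochs=2,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
)
```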
### Training results
| Training Loss | Epoch | Step | Validation Loss | Logps | Logits | Objective | Dpo Loss | Regularize | Ranking Simple | Ranking Idealized | Ranking Idealized Expo |
|:-------------:|:------:|:----:|:---------------:|:---------:|:-------:|:---------:|:--------:|:----------:|:--------------:|:-----------------:|:----------------------:|
| 0.4151 | 0.1889 | 50 | 0.4153 | -97.3580 | -1.3067 | 0.4159 | 0.6927 | 0.4159 | 0.5180 | 0.8584 | 0.5207 |
| 0.3989 | 0.3778 | 100 | 0.4140 | -97.6116 | -1.3266 | 0.4149 | 0.6921 | 0.4149 | 0.5207 | 0.8584 | 0.5207 |
| 0.4205 | 0.5668 | 150 | 0.4131 | -97.5505 | -1.3609 | 0.4142 | 0.6916 | 0.4142 | 0.5283 | 0.8584 | 0.5207 |
| 0.4006 | 0.7557 | 200 | 0.4119 | -99.2361 | -1.3908 | 0.4139 | 0.6912 | 0.4139 | 0.5297 | 0.8584 | 0.5207 |
| 0.4158 | 0.9446 | 250 | 0.4114 | -100.9730 | -1.4099 | 0.4140 | 0.6912 | 0.4140 | 0.5318 | 0.8584 | 0.5207 |
| 0.41 | 1.1335 | 300 | 0.4107 | -101.8556 | -1.4325 | 0.4136 | 0.6908 | 0.4136 | 0.5338 | 0.8584 | 0.5207 |
| 0.4037 | 1.3224 | 350 | 0.4106 | -102.7009 | -1.4417 | 0.4139 | 0.6908 | 0.4139 | 0.5331 | 0.8584 | 0.5207 |
| 0.4169 | 1.5113 | 400 | 0.4107 | -103.3040 | -1.4516 | 0.4141 | 0.6908 | 0.4141 | 0.5345 | 0.8584 | 0.5207 |
| 0.4049 | 1.7003 | 450 | 0.4106 | -103.2897 | -1.4550 | 0.4141 | 0.6907 | 0.4141 | 0.5338 | 0.8584 | 0.5207 |
| 0.3916 | 1.8892 | 500 | 0.4106 | -103.3759 | -1.4556 | 0.4142 | 0.6907 | 0.4142 | 0.5345 | 0.8584 | 0.5207 |
### Framework versions
- Transformers 4.42.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1