---
license: mit
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
base_model: microsoft/phi-2
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: output_dir
results: []
---
# phi-2-dpo

This model is a DPO (Direct Preference Optimization) fine-tuned version of [phi-2-instruction](https://huggingface.co/huseyinatahaninan/phi-2-instruction) (itself derived from [microsoft/phi-2](https://huggingface.co/microsoft/phi-2)), trained on the HuggingFaceH4/ultrafeedback_binarized dataset.
## Model description

This repository provides a PEFT adapter for [microsoft/phi-2](https://huggingface.co/microsoft/phi-2), preference-aligned on top of the instruction-tuned phi-2-instruction checkpoint.
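As a usage sketch (not part of the original card): since this repository is a PEFT adapter, it can be loaded with PEFT's `AutoPeftModelForCausalLM`. The repo id below is taken from this card's leaderboard entry, and `trust_remote_code=True` is assumed to be needed for phi-2 on the pinned Transformers 4.36.x.

```python
# Minimal inference sketch; repo id taken from this card's leaderboard entry.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "huseyinatahaninan/phi-2-dpo",
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,  # phi-2 uses a custom code path on Transformers 4.36.x
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)

# phi-2's documented "Instruct:/Output:" prompt format
prompt = "Instruct: Explain direct preference optimization in one sentence.\nOutput:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```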
## Intended uses & limitations
More information needed
## Training and evaluation data

The model was trained on [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized), a binarized version of the UltraFeedback dataset in which each prompt is paired with a preferred (chosen) and a dispreferred (rejected) response.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of how they might map onto a trainer configuration follows the list):
- learning_rate: 5e-07
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
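Below is a hedged sketch of how these settings might map onto a [trl](https://github.com/huggingface/trl) `DPOTrainer` run. The use of trl itself, the `beta` value, the LoRA configuration, and bf16 precision are assumptions, as the training script is not included on this card; only the numeric hyperparameters come from the list above.

```python
# Hedged reconstruction of the training run from the hyperparameters above.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model = AutoModelForCausalLM.from_pretrained(
    "huseyinatahaninan/phi-2-instruction", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)

# ultrafeedback_binarized stores chosen/rejected as message lists; DPOTrainer
# expects plain prompt/chosen/rejected strings, so a flattening step
# (omitted here) is needed first.
dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

args = TrainingArguments(
    output_dir="output_dir",
    learning_rate=5e-7,
    per_device_train_batch_size=4,   # x 8 GPUs = total_train_batch_size 32
    per_device_eval_batch_size=8,    # x 8 GPUs = total_eval_batch_size 64
    num_train_epochs=3,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    bf16=True,                       # assumption; precision is not recorded on the card
    # Adam betas (0.9, 0.999) and epsilon 1e-8 are the Transformers AdamW defaults.
)

trainer = DPOTrainer(
    model,
    ref_model=None,          # with a PEFT adapter, trl uses the frozen base as reference
    args=args,
    beta=0.1,                # assumed DPO beta
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32),  # assumed LoRA config
)
trainer.train()
```

With 4 examples per device across 8 GPUs this reproduces the listed total train batch size of 32 (and 8 × 8 = 64 for eval); the Adam betas and epsilon match the AdamW defaults, so they need no explicit setting.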
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.2.0+cu121
- Datasets 2.14.6
- Tokenizers 0.15.1
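Assuming a pip-based setup, the pinned versions can be installed directly (trl, used only in the training sketch above, is not version-pinned on this card):

```bash
pip install peft==0.7.1 transformers==4.36.2 torch==2.2.0 datasets==2.14.6 tokenizers==0.15.1
```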
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_huseyinatahaninan__phi-2-dpo).
| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 62.33 |
| AI2 Reasoning Challenge (25-shot) | 63.05 |
| HellaSwag (10-shot)               | 76.36 |
| MMLU (5-shot)                     | 58.46 |
| TruthfulQA (0-shot)               | 45.35 |
| Winogrande (5-shot)               | 74.03 |
| GSM8k (5-shot)                    | 56.71 |