zhezi12138 committed
Commit 574b181 · verified · 1 Parent(s): aad0ac5

Create README.md

Files changed (1)
  1. README.md +10 -0
README.md ADDED
@@ -0,0 +1,10 @@
+ ---
+ license: mit
+ datasets:
+ - PKU-Alignment/BeaverTails
+ language:
+ - en
+ base_model:
+ - PKU-Alignment/alpaca-7b-reproduced
+ ---
+ This model reproduces the results on the Safe-RLHF dataset from the paper "The crucial role of samplers in online direct preference optimization". It is iteration 3 of the DPO-mixp algorithm, trained from https://huggingface.co/zhezi12138/alpaca-7b-iter-2-mixp.