---
license: mit
datasets:
- PKU-Alignment/BeaverTails
language:
- en
base_model:
- PKU-Alignment/alpaca-7b-reproduced
---
This model reproduces the results on the Safe-RLHF dataset from the paper "The crucial role of samplers in online direct preference optimization". It is iteration 3 of the DPO-mixp algorithm, trained from https://huggingface.co/zhezi12138/alpaca-7b-iter-2-mixp.
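A minimal usage sketch with the standard `transformers` API. The repo id and the conversation template below are assumptions: the id is guessed from the naming of the iteration-2 predecessor, and the template follows the PKU-Alignment alpaca-7b-reproduced family; verify both against the actual model pages before use.

```python
REPO_ID = "zhezi12138/alpaca-7b-iter-3-mixp"  # assumed repo id, not confirmed


def build_prompt(user_message: str) -> str:
    # Conversation template assumed from the PKU-Alignment
    # alpaca-7b-reproduced base model; verify on its model card.
    return f"BEGINNING OF CONVERSATION: USER: {user_message} ASSISTANT:"


def generate(user_message: str, max_new_tokens: int = 128) -> str:
    # Lazy import keeps the sketch light; requires `pip install transformers`.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(REPO_ID)
    model = AutoModelForCausalLM.from_pretrained(REPO_ID)
    inputs = tokenizer(build_prompt(user_message), return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```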