---
license: llama3
library_name: transformers
pipeline_tag: text-generation
tags:
- llama
- conversational
base_model: RLHFlow/LLaMA3-iterative-DPO-final
---
# LLaMA3-iterative-DPO-final-GGUF
This is a quantized (GGUF) version of [RLHFlow/LLaMA3-iterative-DPO-final](https://huggingface.co/RLHFlow/LLaMA3-iterative-DPO-final), created using llama.cpp.

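The GGUF files in this repo can be run with llama.cpp or its Python bindings. Below is a minimal sketch using `llama-cpp-python`; the file name, quantization level, and the `n_ctx`/`n_gpu_layers` values are placeholders, so substitute the actual `.gguf` file you download from this repo.

```python
# Minimal sketch: chat with a GGUF quant via llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="LLaMA3-iterative-DPO-final.Q4_K_M.gguf",  # hypothetical filename; use your downloaded file
    n_ctx=4096,        # context window (placeholder value)
    n_gpu_layers=-1,   # offload all layers to GPU if llama.cpp was built with GPU support
)

# The chat template stored in the GGUF metadata is applied automatically.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me three tips for nicer handwriting."}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```

The same file can also be run directly with the llama.cpp command-line tools.
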
# Model Description
We release **LLaMA3-iterative-DPO-final**, an unofficial checkpoint that is a state-of-the-art instruct model for its size class.
On three widely used instruct-model benchmarks, **Alpaca-Eval-V2**, **MT-Bench**, and **Chat-Arena-Hard**, it outperforms all models of comparable size (e.g., LLaMA-3-8B-it), most larger open-source models (e.g., Mixtral-8x7B-it),
and strong proprietary models (e.g., GPT-3.5-turbo-0613). The model is trained on open-source datasets without any additional human or GPT-4 labeling.

Even better, we provide a [detailed recipe](https://github.com/RLHFlow/Online-RLHF) to reproduce the model. Enjoy!

## Model Releases
See the [collection](https://huggingface.co/collections/RLHFlow/online-rlhf-663ae95fade1a39663dab218) for the training set, reward/preference model, and SFT model.

- [SFT model](https://huggingface.co/RLHFlow/LLaMA3-SFT)
- [Reward model](https://huggingface.co/sfairXC/FsfairX-LLaMA3-RM-v0.1)

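The reward model can be used on its own to rank candidate responses, which is how preference pairs are built in the online recipe. A rough sketch follows, assuming the checkpoint loads as a standard sequence-classification head whose single logit is the reward; the loading class and scoring convention here are assumptions, so consult the reward model's own card for its canonical usage.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumption: the RM exposes a one-logit sequence-classification head (higher = better).
rm_name = "sfairXC/FsfairX-LLaMA3-RM-v0.1"
device = "cuda" if torch.cuda.is_available() else "cpu"

rm_tokenizer = AutoTokenizer.from_pretrained(rm_name)
rm = AutoModelForSequenceClassification.from_pretrained(
    rm_name, torch_dtype=torch.bfloat16
).to(device).eval()

chat = [
    {"role": "user", "content": "How can I improve my handwriting?"},
    {"role": "assistant", "content": "Practice slowly on lined paper and keep your letter sizes consistent."},
]
inputs = rm_tokenizer.apply_chat_template(chat, return_tensors="pt").to(device)

with torch.no_grad():
    score = rm(inputs).logits[0].item()  # scalar reward for this (prompt, response) pair
print(score)
```
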
## Dataset
- [Preference data mix](https://huggingface.co/datasets/hendrydong/preference_700K)
- [Prompt collection for RLHF training](https://huggingface.co/datasets/RLHFlow/prompt-collection-v0.1)

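Both datasets above can be pulled directly with the `datasets` library for inspection. The split name and the printed fields below are assumptions; check each dataset card for the actual schema.

```python
from datasets import load_dataset

# Preference pairs used for reward modeling / DPO (split name assumed to be "train").
prefs = load_dataset("hendrydong/preference_700K", split="train")

# Prompts used to generate on-policy responses during the online RLHF iterations.
prompts = load_dataset("RLHFlow/prompt-collection-v0.1", split="train")

print(prefs)        # number of rows and column names
print(prompts[0])   # first record; inspect the schema before building a training pipeline
```
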
## Training methods
We have developed a simple and efficient online RLHF recipe for LLM instruct training. Our recipe is DPO-based and is therefore much cheaper and simpler to train and tune than PPO-based approaches.
Unlike the widely used offline DPO, the online component of our approach effectively mitigates distribution shifts during policy optimization.
For a detailed exposition, please refer to our accompanying technical report.

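The card links the full recipe above rather than including training code. Purely as an illustration of the objective used in each round, here is a minimal PyTorch sketch of the standard DPO loss; the online outer loop (regenerating preference pairs from the current policy with the reward model) is only described in comments, and the tensor shapes and `beta` value are assumptions rather than the authors' settings.

```python
import torch
import torch.nn.functional as F

# In the online/iterative recipe, each round roughly does:
#   1. sample several responses per prompt from the *current* policy,
#   2. score them with the reward model and keep (best, worst) as (chosen, rejected),
#   3. run a DPO update with the loss below, then repeat from step 1.

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO loss over a batch of (chosen, rejected) response pairs.

    Each argument is a tensor of summed per-token log-probabilities, shape (batch,).
    beta=0.1 is a common default, not necessarily the authors' setting.
    """
    policy_margin = policy_chosen_logps - policy_rejected_logps
    ref_margin = ref_chosen_logps - ref_rejected_logps
    # Push the policy's preference margin above the frozen reference model's margin.
    return -F.logsigmoid(beta * (policy_margin - ref_margin)).mean()

# Toy check with random values standing in for log-probabilities (illustrative only).
batch = 4
loss = dpo_loss(torch.randn(batch), torch.randn(batch),
                torch.randn(batch), torch.randn(batch))
print(loss.item())
```

For the actual multi-round data generation and training scripts, see the [detailed recipe](https://github.com/RLHFlow/Online-RLHF) linked above.
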

## Chat Benchmarks

| **Model** | **Size** | **Method** | **LC Alpaca-Eval-V2** | **MT-Bench** | **Chat-Arena-Hard** |
|-------------------------|----------|-------------------|-----------------------|--------------|---------------------|
| **Small Open-Sourced Models** | | | | | |
| Gemma-7B-it | 7B | SFT | 10.4 | 6.38 | 7.5 |
| Zephyr-7B-beta | 7B | Vanilla DPO | 13.1 | 7.34 | - |
| Mistral-7B-v0.2-it | 7B | SFT | 17.1 | 7.51 | 12.6 |
| Open-Chat-0106 | 7B | SFT | 15.6 | 7.8 | - |
| Starling-7B-beta | 7B | PPO | 25.8 | 8.12 | 23.0 |
| LLaMA-3-8B-it | 8B | RS+DPO+PPO | 22.9 | 8.16 | 20.6 |
| **Ours** | | | | | |
| Ours (SFT baseline) | 8B | SFT | 10.2 | 7.69 | 5.6 |
| Ours (DPO baseline) | 8B | Vanilla DPO | 22.5 | 8.17 | 22.4 |
| Ours (Online RLHF) | 8B | Iterative DPO | **37.2** | **8.46** | **29.1** |
| **Large Open-Sourced Models** | | | | | |
| Vicuna-33b-v1.3 | 33B | SFT | 17.6 | 7.12 | 8.6 |
| Yi-34B-Chat | 34B | SFT | 27.2 | - | 23.1 |
| Mixtral-8x7B-it | 45B* | SFT | 23.7 | 8.30 | 23.4 |
| Tulu-2-DPO-70B | 70B | Vanilla DPO | 21.2 | 7.89 | 15.0 |
| LLaMA-3-70B-it | 70B | RS+DPO+PPO | 34.4 | 8.95 | 41.1 |
| Mixtral-8x22B-it | 141B* | SFT | 30.9 | 8.66 | 36.4 |
| **Proprietary Models** | | | | | |
| GPT-3.5-turbo-1106 | - | - | 19.3 | 8.35 | 18.9 |
| GPT-3.5-turbo-0613 | - | - | 22.7 | 8.39 | 24.8 |
| GPT-4-0613 | - | - | 30.2 | 9.18 | 37.9 |
| Claude-3-Opus | - | - | 40.5 | 9.00 | 60.4 |
| GPT-4 Turbo (04/09) | - | - | 55.0 | - | 82.6 |


## Academic Benchmarks

| **Model** | **Size** | **Method** | **GSM-8K** | **MMLU** | **HumanEval** | **TruthfulQA** | **ARC** | **MBPP** |
|----------------------------|----------|-----------------|------------|----------|---------------|----------------|---------|----------|
| LLaMA-3-8B-it | 8B | RS+DPO+PPO | 79.6 | 66.0 | 61.6 | 43.9 | 59.5 | 61.1 |
| Ours (SFT baseline) | 8B | SFT | 74.2 | 64.7 | 65.2 | 53.4 | 61.4 | 62.3 |
| Ours (DPO baseline) | 8B | Vanilla DPO | 79.8 | 64.5 | 63.4 | 61.8 | 65.2 | 60.3 |
| Ours (Iterative RLHF) | 8B | Iterative DPO | 80.7 | 65.3 | 64.6 | 60.4 | 64.3 | 60.8 |

## Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"

# Load the full-precision checkpoint in bfloat16 so the 8B model fits on a single GPU.
model = AutoModelForCausalLM.from_pretrained(
    "RLHFlow/LLaMA3-iterative-DPO-final", torch_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained("RLHFlow/LLaMA3-iterative-DPO-final")

messages = [
    {"role": "user", "content": "I'm trying to teach myself to have nicer handwriting. Can you help?"},
]

# Apply the Llama-3 chat template and append the assistant header so the model
# generates a reply rather than continuing the user turn.
model_inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

model_inputs = model_inputs.to(device)
model.to(device)

output_tokens = model.generate(model_inputs, max_new_tokens=1024, do_sample=True)
model_outputs = tokenizer.batch_decode(output_tokens)
print(model_outputs[0])
```

## Limitations
RLHFlow/LLaMA3-iterative-DPO-final is an unofficial checkpoint developed to illustrate the power of online iterative RLHF and is intended for research purposes. While safety and ethical considerations are integral to our alignment process,
there remains the possibility that the model could generate offensive or unethical content, particularly under adversarial conditions.
We are committed to continuously improving our models to minimize such risks and encourage responsible usage.