---
license: apache-2.0
---
# Instruction Pre-Training: Language Models are Supervised Multitask Learners
This repo contains the **context-based instruction synthesizer** used in our paper **Instruction Pre-Training: Language Models are Supervised Multitask Learners**.

We explore supervised multitask pre-training by proposing ***Instruction Pre-Training***, a framework that scalably augments massive raw corpora with instruction-response pairs to pre-train language models. The instruction-response pairs are generated by an efficient instruction synthesizer built on open-source models. In our experiments, we synthesize 200M instruction-response pairs covering 40+ task categories to verify the effectiveness of *Instruction Pre-Training*. ***Instruction Pre-Training* outperforms *Vanilla Pre-training* in both general pre-training from scratch and domain-adaptive continual pre-training.** In pre-training from scratch, *Instruction Pre-Training* not only improves pre-trained base models but also benefits more from further instruction tuning. In continual pre-training, it enables Llama3-8B to be comparable to or even outperform Llama3-70B.

<p align='center'>
  <img src="./hf_intro.png" width="400">
</p>
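To make this concrete, here is a minimal sketch (an illustration, not the paper's exact pre-training template) of how a raw text and its synthesized pairs could be assembled into a single instruction-augmented pre-training example, reusing the `<CON>`/`<QUE>`/`<ANS>`/`</END>` tags from the synthesizer's output format shown below:

```python
def build_pretraining_example(context, qa_pairs):
    """Illustrative only: concatenate a raw text with its synthesized
    instruction-response pairs into one tagged pre-training example."""
    parts = [f'<CON> {context} </CON>']
    for pair in qa_pairs:
        parts.append(f'<QUE> {pair["Q"]} <ANS> {pair["A"]} </END>')
    return '\n'.join(parts)
```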

## Synthesize Instruction-Response Pairs from Any Raw Corpora
We conduct multitask fine-tuning on a language model to develop an instruction synthesizer capable of generating instruction-response pairs from any raw text.

<p align='center'>
  <img src="./hf_synthesizer.png" width="700">
</p>

An example script that prompts the synthesizer to generate instruction-response pairs from a given raw text:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("instruction-pretrain/instruction-synthesizer")
tokenizer = AutoTokenizer.from_pretrained("instruction-pretrain/instruction-synthesizer")

# Put your raw text here:
context = '''Free Fishing Weekend in NYS Slated
This weekend (June 28th-29th) New Yorkers may fish for free without a license in any of the state's 7,500 lakes and ponds or 50,000 miles of rivers and streams. In addition, there are a number of free events and fishing clinics taking place across the state to encourage New Yorkers to enjoy the great outdoors. For more information, visit'''

def parse_pred(pred):
    """Extract the list of instruction-response pairs from the prediction"""
    QA_str_list = pred.split('</END>')
    if not pred.endswith('</END>'):
        # Drop the trailing fragment of an unfinished pair
        QA_str_list = QA_str_list[:-1]

    QA_list = []
    raw_questions = []
    for QA_str in QA_str_list:
        try:
            assert len(QA_str.split('<ANS>')) == 2, f'invalid QA string: {QA_str}'
            Q_str, A_str = QA_str.split('<ANS>')
            Q_str, A_str = Q_str.strip(), A_str.strip()
            assert Q_str.startswith('<QUE>'), f'invalid question string: {Q_str} in QA_str: {QA_str}'
            assert len(A_str) > 0, f'invalid answer string in QA_str: {QA_str}'
            Q_str = Q_str.replace('<QUE>', '').strip()
            assert Q_str.lower() not in raw_questions, f'duplicate question: {Q_str}'
            QA_list.append({'Q': Q_str, 'A': A_str})
            raw_questions.append(Q_str.lower())
        except AssertionError:
            # Skip malformed or duplicate pairs
            pass

    return QA_list

def get_instruction_response_pairs(context):
    '''Prompt the synthesizer to generate instruction-response pairs based on the given context'''
    prompt = f'<s> <CON> {context} </CON>\n\n'
    inputs = tokenizer(prompt, add_special_tokens=False, return_tensors="pt").input_ids.to(model.device)
    outputs = model.generate(input_ids=inputs, max_new_tokens=400)[0]

    # Decode only the newly generated tokens (everything after the prompt)
    pred_start = int(inputs.shape[-1])
    pred = tokenizer.decode(outputs[pred_start:], skip_special_tokens=True)
    return parse_pred(pred)

# Get the list of generated instruction-response pairs
instruction_response_pairs = get_instruction_response_pairs(context)

# Print out the results
print(f'# Context:\n{context}\n')
for index, pair in enumerate(instruction_response_pairs):
    print(f'## Instruction {index + 1}:\n{pair["Q"]}\n## Response {index + 1}:\n{pair["A"]}\n')
```
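The synthesizer emits pairs in a tagged format, with each pair rendered as `<QUE> {instruction} <ANS> {response} </END>`, which is exactly what `parse_pred` splits apart. As a quick sanity check, you can run the parser on a hand-written prediction string (illustrative only, not actual model output):

```python
# Illustrative only: a hand-crafted prediction in the expected tagged format.
fake_pred = ('<QUE> When is the free fishing weekend? <ANS> June 28th-29th </END>'
             '<QUE> Is a license required that weekend? <ANS> No </END>')
print(parse_pred(fake_pred))
# [{'Q': 'When is the free fishing weekend?', 'A': 'June 28th-29th'},
#  {'Q': 'Is a license required that weekend?', 'A': 'No'}]
```

The same `get_instruction_response_pairs` function can be looped over every document in a corpus to build an instruction-augmented dataset at scale.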