Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


# InstructLM-1.3B - GGUF
- Model creator: https://huggingface.co/instruction-pretrain/
- Original model: https://huggingface.co/instruction-pretrain/InstructLM-1.3B/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [InstructLM-1.3B.Q2_K.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.Q2_K.gguf) | Q2_K | 0.49GB |
| [InstructLM-1.3B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.IQ3_XS.gguf) | IQ3_XS | 0.54GB |
| [InstructLM-1.3B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.IQ3_S.gguf) | IQ3_S | 0.57GB |
| [InstructLM-1.3B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.Q3_K_S.gguf) | Q3_K_S | 0.56GB |
| [InstructLM-1.3B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.IQ3_M.gguf) | IQ3_M | 0.58GB |
| [InstructLM-1.3B.Q3_K.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.Q3_K.gguf) | Q3_K | 0.62GB |
| [InstructLM-1.3B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.Q3_K_M.gguf) | Q3_K_M | 0.62GB |
| [InstructLM-1.3B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.Q3_K_L.gguf) | Q3_K_L | 0.67GB |
| [InstructLM-1.3B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.IQ4_XS.gguf) | IQ4_XS | 0.69GB |
| [InstructLM-1.3B.Q4_0.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.Q4_0.gguf) | Q4_0 | 0.72GB |
| [InstructLM-1.3B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.IQ4_NL.gguf) | IQ4_NL | 0.73GB |
| [InstructLM-1.3B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.Q4_K_S.gguf) | Q4_K_S | 0.73GB |
| [InstructLM-1.3B.Q4_K.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.Q4_K.gguf) | Q4_K | 0.77GB |
| [InstructLM-1.3B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.Q4_K_M.gguf) | Q4_K_M | 0.77GB |
| [InstructLM-1.3B.Q4_1.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.Q4_1.gguf) | Q4_1 | 0.8GB |
| [InstructLM-1.3B.Q5_0.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.Q5_0.gguf) | Q5_0 | 0.87GB |
| [InstructLM-1.3B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.Q5_K_S.gguf) | Q5_K_S | 0.87GB |
| [InstructLM-1.3B.Q5_K.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.Q5_K.gguf) | Q5_K | 0.89GB |
| [InstructLM-1.3B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.Q5_K_M.gguf) | Q5_K_M | 0.89GB |
| [InstructLM-1.3B.Q5_1.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.Q5_1.gguf) | Q5_1 | 0.95GB |
| [InstructLM-1.3B.Q6_K.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.Q6_K.gguf) | Q6_K | 1.03GB |
| [InstructLM-1.3B.Q8_0.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.Q8_0.gguf) | Q8_0 | 1.33GB |
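
The GGUF files above are meant for llama.cpp or a compatible runtime. Below is a minimal sketch of downloading and running one of the quants; it assumes `huggingface-cli` is installed and llama.cpp is already built, and the Q4_K_M file, prompt, and token count are only illustrative choices.

```bash
# Download a single quant from this repo (Q4_K_M shown as an example)
huggingface-cli download RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf \
  InstructLM-1.3B.Q4_K_M.gguf --local-dir .

# Run it with llama.cpp (the binary is called llama-cli in recent builds, main in older ones)
./llama-cli -m InstructLM-1.3B.Q4_K_M.gguf \
  -p "Explain what instruction pre-training is in one paragraph." \
  -n 128
```

As a rule of thumb, the lower-bit quants (Q2_K, Q3_K_*) trade output quality for memory; Q4_K_M and above are generally safer defaults for a model this small.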




Original model description:
---
license: apache-2.0
datasets:
- tiiuae/falcon-refinedweb
- instruction-pretrain/ft-instruction-synthesizer-collection
language:
- en
---
# Instruction Pre-Training: Language Models are Supervised Multitask Learners
This repo contains the **general models pre-trained from scratch** in our paper [Instruction Pre-Training: Language Models are Supervised Multitask Learners](https://huggingface.co/papers/2406.14491).

We explore supervised multitask pre-training by proposing ***Instruction Pre-Training***, a framework that scalably augments massive raw corpora with instruction-response pairs to pre-train language models. The instruction-response pairs are generated by an efficient instruction synthesizer built on open-source models. In our experiments, we synthesize 200M instruction-response pairs covering 40+ task categories to verify the effectiveness of *Instruction Pre-Training*. *Instruction Pre-Training* outperforms *Vanilla Pre-Training* in both general pre-training from scratch and domain-adaptive continual pre-training. **In pre-training from scratch, *Instruction Pre-Training* not only improves pre-trained base models but also benefits more from further instruction tuning.** In continual pre-training, *Instruction Pre-Training* enables Llama3-8B to be comparable to or even outperform Llama3-70B.

<p align='center'>
    <img src="/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F66711d2ee12fa6cc5f5dfc89%2FvRdsFIVQptbNaGiZ18Lih.png%26quot%3B%3C%2Fspan%3E width="400">
</p>

## Resources
**🤗 We share our data and models with example usages; feel free to open any issues or discussions! 🤗**

- Context-Based Instruction Synthesizer: [instruction-synthesizer](https://huggingface.co/instruction-pretrain/instruction-synthesizer)
- Fine-Tuning Data for the Synthesizer: [ft-instruction-synthesizer-collection](https://huggingface.co/datasets/instruction-pretrain/ft-instruction-synthesizer-collection)
- General Models Pre-Trained from Scratch:
  - [InstructLM-500M](https://huggingface.co/instruction-pretrain/InstructLM-500M)
  - [InstructLM-1.3B](https://huggingface.co/instruction-pretrain/InstructLM-1.3B)
- Domain-Specific Models Pre-Trained from Llama3-8B:
  - [Finance-Llama3-8B](https://huggingface.co/instruction-pretrain/finance-Llama3-8B)
  - [Biomedicine-Llama3-8B](https://huggingface.co/instruction-pretrain/medicine-Llama3-8B)
- General Instruction-Augmented Corpora: [general-instruction-augmented-corpora](https://huggingface.co/datasets/instruction-pretrain/general-instruction-augmented-corpora)
- Domain-Specific Instruction-Augmented Corpora (no finance data to avoid ethical issues): [medicine-instruction-augmented-corpora](https://huggingface.co/datasets/instruction-pretrain/medicine-instruction-augmented-corpora)

## General Pre-Training From Scratch
We augment the [RefinedWeb corpus](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) with instruction-response pairs generated by our [context-based instruction synthesizer](https://huggingface.co/instruction-pretrain/instruction-synthesizer) to pre-train general language models from scratch.

To evaluate our general base model with the [lm-evaluation-harness framework](https://github.com/EleutherAI/lm-evaluation-harness):

1. Set up the dependencies:
```bash
git clone https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .
```

2. Evaluate:
```bash
MODEL=instruction-pretrain/InstructLM-1.3B
add_bos_token=True # this flag is needed because lm-eval-harness sets add_bos_token to False by default, but our models require it to be True

accelerate launch -m lm_eval --model hf \
    --model_args pretrained=${MODEL},add_bos_token=${add_bos_token},dtype=float16  \
    --gen_kwargs do_sample=False \
    --tasks piqa,hellaswag,winogrande \
    --batch_size auto \
    --num_fewshot 0

accelerate launch -m lm_eval --model hf \
    --model_args pretrained=${MODEL},add_bos_token=${add_bos_token},dtype=float16 \
    --gen_kwargs do_sample=False \
    --tasks social_iqa,ai2_arc,openbookqa,boolq,mmlu \
    --batch_size auto \
    --num_fewshot 5
```

## Citation
If you find our work helpful, please cite us:

Instruction Pre-Training
```bibtex
@article{cheng2024instruction,
  title={Instruction Pre-Training: Language Models are Supervised Multitask Learners},
  author={Cheng, Daixuan and Gu, Yuxian and Huang, Shaohan and Bi, Junyu and Huang, Minlie and Wei, Furu},
  journal={arXiv preprint arXiv:2406.14491},
  year={2024}
}
```

[AdaptLLM](https://huggingface.co/papers/2309.09530)
```bibtex
@inproceedings{
cheng2024adapting,
title={Adapting Large Language Models via Reading Comprehension},
author={Daixuan Cheng and Shaohan Huang and Furu Wei},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=y886UXPEZ0}
}
```