Triangle104 committed · verified
Commit cbeccb7 · Parent(s): b873a47

Update README.md

Files changed (1): README.md (+172, -0)
README.md CHANGED
@@ -15,6 +15,178 @@ tags:
This model was converted to GGUF format from [`allenai/OLMo-2-1124-7B`](https://huggingface.co/allenai/OLMo-2-1124-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/allenai/OLMo-2-1124-7B) for more details on the model.

---
## Model details

We introduce OLMo 2, a new family of 7B and 13B models featuring a 9-point increase in MMLU, among other evaluation improvements, compared to the original OLMo 7B model. These gains come from training on the OLMo-mix-1124 and Dolmino-mix-1124 datasets and a staged training approach.

OLMo is a series of Open Language Models designed to enable the science of language models. These models are trained on the Dolma dataset. We are releasing all code, checkpoints, logs (coming soon), and associated training details.
### Installation

OLMo 2 will be supported in the next version of Transformers, and you need to install it from the main branch using:

```bash
pip install --upgrade git+https://github.com/huggingface/transformers.git
```
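As a quick, optional sanity check (not part of the original card), you can confirm that the installed Transformers build recognizes the OLMo 2 architecture by resolving just the model config:

```python
# Optional sanity check: verify the installed Transformers build can resolve
# the OLMo 2 config. This downloads only the config file, not the weights.
import transformers
from transformers import AutoConfig

print("transformers version:", transformers.__version__)
cfg = AutoConfig.from_pretrained("allenai/OLMo-2-1124-7B")
print("model_type:", cfg.model_type)  # should report an OLMo-2 model type if support is present
```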
### Inference

You can use OLMo with the standard HuggingFace transformers library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-1124-7B")
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-2-1124-7B")
message = ["Language modeling is "]
inputs = tokenizer(message, return_tensors='pt', return_token_type_ids=False)

# Optional: move inputs and model to CUDA
# inputs = {k: v.to('cuda') for k, v in inputs.items()}
# olmo = olmo.to('cuda')

response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
# >> 'Language modeling is a key component of any text-based application, but its effectiveness...'
```
For faster performance, you can quantize the model using the following method:

```python
import torch  # needed for the dtype argument below

olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-1124-7B",
                                            torch_dtype=torch.float16,
                                            load_in_8bit=True)  # requires bitsandbytes
```

The quantized model is more sensitive to data types and CUDA operations. To avoid potential issues, it's recommended to pass the inputs directly to CUDA using:

```python
inputs.input_ids.to('cuda')
```
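Putting those two notes together, a minimal quantized-generation sketch might look like the following; it assumes a CUDA-capable GPU and the bitsandbytes package, and on newer Transformers releases the 8-bit option may need to be passed via a `BitsAndBytesConfig` instead:

```python
# Sketch: 8-bit generation with inputs explicitly moved to CUDA.
# Assumes a CUDA-capable GPU and that bitsandbytes is installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/OLMo-2-1124-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
olmo = AutoModelForCausalLM.from_pretrained(model_id,
                                            torch_dtype=torch.float16,
                                            load_in_8bit=True)

inputs = tokenizer(["Language modeling is "], return_tensors="pt", return_token_type_ids=False)
inputs = {k: v.to("cuda") for k, v in inputs.items()}  # keep inputs on the model's device

response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
```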
We have released checkpoints for these models. For pretraining, the naming convention is `stepXXX-tokensYYYB`. For checkpoints with ingredients of the soup, the naming convention is `stage2-ingredientN-stepXXX-tokensYYYB`.

To load a specific model revision with HuggingFace, simply add the argument `revision`:

```python
olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-1124-7B", revision="step1000-tokens5B")
```

Or, you can access all the revisions for the models via the following code snippet:

```python
from huggingface_hub import list_repo_refs

out = list_repo_refs("allenai/OLMo-2-1124-7B")
branches = [b.name for b in out.branches]
```
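Purely as an illustration (not from the original card), the branch names returned above can be filtered against those naming conventions, for example to list pretraining checkpoints in step order:

```python
# Sketch: group revision branches by the documented naming conventions.
# The regexes below are an assumption based on the stepXXX-tokensYYYB /
# stage2-ingredientN-stepXXX-tokensYYYB patterns described above.
import re

step_re = re.compile(r"^step(\d+)-tokens(\d+)B$")
stage2_re = re.compile(r"^stage2-ingredient(\d+)-step(\d+)-tokens(\d+)B$")

pretraining = sorted(
    (int(m.group(1)), name)
    for name in branches
    if (m := step_re.match(name))
)
stage2 = [name for name in branches if stage2_re.match(name)]

print("pretraining checkpoints:", [name for _, name in pretraining][:5])
print("stage-2 ingredient checkpoints:", stage2[:5])
```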
### Fine-tuning

Model fine-tuning can be done from the final checkpoint (the main revision of this model) or many intermediate checkpoints. Two recipes for tuning are available.

Fine-tune with the OLMo repository:

```bash
torchrun --nproc_per_node=8 scripts/train.py {path_to_train_config} \
    --data.paths=[{path_to_data}/input_ids.npy] \
    --data.label_mask_paths=[{path_to_data}/label_mask.npy] \
    --load_path={path_to_checkpoint} \
    --reset_trainer_state
```

For more documentation, see the [GitHub readme](https://github.com/allenai/OLMo).

Further fine-tuning support is being developed in AI2's [Open Instruct](https://github.com/allenai/open-instruct) repository; details can be found there.
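The command above points at token IDs and a label mask stored as `.npy` arrays. Purely as a rough sketch (the dtype, flat concatenated layout, and mask semantics are assumptions here, not the documented OLMo trainer format; check the OLMo repository before relying on this), such files could be produced along these lines:

```python
# Sketch: write tokenized fine-tuning data as .npy arrays for the OLMo trainer.
# The uint32 dtype, concatenated layout, and mask semantics are assumptions;
# consult https://github.com/allenai/OLMo for the authoritative data format.
import numpy as np
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-2-1124-7B")

examples = ["Language modeling is fun.", "OLMo 2 is an open language model."]
ids, mask = [], []
for text in examples:
    toks = tokenizer(text)["input_ids"]
    ids.extend(toks)
    mask.extend([True] * len(toks))  # True = token contributes to the loss

np.save("input_ids.npy", np.asarray(ids, dtype=np.uint32))
np.save("label_mask.npy", np.asarray(mask, dtype=bool))
```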
### Model Description

- Developed by: Allen Institute for AI (Ai2)
- Model type: a Transformer-style autoregressive language model
- Language(s) (NLP): English
- License: The code and model are released under Apache 2.0.
- Contact: Technical inquiries: [email protected]; press: [email protected]
- Date cutoff: Dec. 2023
### Model Sources

- Project Page: https://allenai.org/olmo
- Repositories:
  - Core repo (training, inference, fine-tuning etc.): https://github.com/allenai/OLMo
  - Evaluation code: https://github.com/allenai/OLMo-Eval
  - Further fine-tuning code: https://github.com/allenai/open-instruct
- Paper: Coming soon
### Pretraining

|  | OLMo 2 7B | OLMo 2 13B |
|---|---|---|
| Pretraining Stage 1 (OLMo-Mix-1124) | 4 trillion tokens (1 epoch) | 5 trillion tokens (1.2 epochs) |
| Pretraining Stage 2 (Dolmino-Mix-1124) | 50B tokens (3 runs), merged | 100B tokens (3 runs) + 300B tokens (1 run), merged |
| Post-training (Tulu 3 SFT OLMo mix) | SFT + DPO + PPO (preference mix) | SFT + DPO + PPO (preference mix) |
#### Stage 1: Initial Pretraining

- Dataset: OLMo-Mix-1124 (3.9T tokens)
- Coverage: 90%+ of total pretraining budget
- 7B Model: ~1 epoch
- 13B Model: 1.2 epochs (5T tokens)

#### Stage 2: Fine-tuning

- Dataset: Dolmino-Mix-1124 (843B tokens)
- Three training mixes:
  - 50B tokens
  - 100B tokens
  - 300B tokens
- Mix composition: 50% high-quality data + academic/Q&A/instruction/math content
#### Model Merging

- 7B Model: 3 versions trained on the 50B mix, merged via model souping
- 13B Model: 3 versions on the 100B mix + 1 version on the 300B mix, merged for the final checkpoint
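Here, model souping means averaging the weights of the separately trained ingredient runs. Purely as an illustration of the idea (this is not the exact recipe or tooling used for OLMo 2, and the revision names below are placeholders), a uniform soup of several checkpoints could be computed like this:

```python
# Sketch: uniform "model soup" = element-wise average of checkpoint weights.
# The revision names are placeholders, not real OLMo 2 branch names.
import torch
from transformers import AutoModelForCausalLM

revisions = ["ingredient1-placeholder", "ingredient2-placeholder", "ingredient3-placeholder"]

soup = None
for rev in revisions:
    model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-1124-7B", revision=rev)
    state = model.state_dict()
    if soup is None:
        soup = {k: v.float().clone() for k, v in state.items()}
    else:
        for k, v in state.items():
            soup[k] += v.float()

soup = {k: v / len(revisions) for k, v in soup.items()}

# Load the averaged weights back into a model instance and save the result.
merged = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-1124-7B")
ref_state = merged.state_dict()
merged.load_state_dict({k: v.to(ref_state[k].dtype) for k, v in soup.items()})
merged.save_pretrained("olmo2-7b-souped")
```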
### Bias, Risks, and Limitations

Like any base language model or fine-tuned model without safety filtering, these models can easily be prompted by users to generate harmful and sensitive content. Such content may also be produced unintentionally, especially in cases involving bias, so we recommend that users consider the risks when applying this technology. Additionally, many statements from OLMo or any LLM are often inaccurate, so facts should be verified.

### Citation

A technical manuscript is forthcoming!

### Model Card Contact

For errors in this model card, contact [email protected].

---
  ## Use with llama.cpp
  Install llama.cpp through brew (works on Mac and Linux)