Text Generation · Transformers · PyTorch · English · olmo2 · conversational · Inference Endpoints
vwxyzjn committed on
Commit 2dc466a
1 Parent(s): f132903

Update README.md

Files changed (1)
  1. README.md +17 -6
README.md CHANGED
@@ -14,15 +14,15 @@ datasets:

# OLMo-2-1124-7B-DPO

- ## NOTE: 12/18/2024 UPDATE:
+ ## NOTE: 1/3/2025 UPDATE:

- Upon the initial release of OLMo-2 models, we realized the post-trained models did not share the pre-tokenization logic that the base models use. As a result, we have trained new post-trained models. The new models are available under the same names as the original models, but we have made the old models available with a postfix "-legacy". See [OLMo 2 Legacy Post-trained Models](https://huggingface.co/collections/allenai/olmo-2-legacy-post-trained-models-6762f662c660962e52de7c96) for the collection of the legacy models.
+ Upon the initial release of OLMo-2 models, we realized the post-trained models did not share the pre-tokenization logic that the base models use. As a result, we have trained new post-trained models. The new models are available under the same names as the original models, but we have made the old models available with a postfix "-preview". See [OLMo 2 Preview Post-trained Models](https://huggingface.co/collections/allenai/olmo-2-preview-post-trained-models-6762f662c660962e52de7c96) for the collection of the legacy models.

## Release Documentation

- OLMo 2 7B Instruct November 2024 is a post-trained variant of the [OLMo-2 7B November 2024](https://huggingface.co/allenai/OLMo2-7B-1124) model, which has undergone supervised finetuning on an OLMo-specific variant of the [Tülu 3 dataset](allenai/tulu-3-sft-olmo-2-mixture) and further DPO training on [this dataset](https://huggingface.co/datasets/allenai/olmo-2-1124-7b-preference-mix), and finally RLVR training using [this data](https://huggingface.co/datasets/allenai/RLVR-GSM).
+ OLMo 2 7B DPO November 2024 is a post-trained variant of the [OLMo 2 7B November 2024](https://huggingface.co/allenai/OLMo2-7B-1124) model, which has undergone supervised finetuning on an OLMo-specific variant of the [Tülu 3 dataset](allenai/tulu-3-sft-olmo-2-mixture) and further DPO training on [this dataset](allenai/olmo-2-1124-7b-preference-mix).
Tülu 3 is designed for state-of-the-art performance on a diversity of tasks in addition to chat, such as MATH, GSM8K, and IFEval.
- Check out the OLMo 2 paper (forthcoming) or [Tülu 3 paper](https://arxiv.org/abs/2411.15124) for more details!
+ Check out the [OLMo 2 paper](https://arxiv.org/abs/2501.00656) or [Tülu 3 paper](https://arxiv.org/abs/2411.15124) for more details!

OLMo is a series of **O**pen **L**anguage **Mo**dels designed to enable the science of language models.
These models are trained on the Dolma dataset. We are releasing all code, checkpoints, logs (coming soon), and associated training details.
@@ -44,7 +44,7 @@ The core models released in this batch include the following:
- **Model type:** A model trained on a mix of publicly available, synthetic and human-created datasets.
- **Language(s) (NLP):** Primarily English
- **License:** Apache 2.0
- - **Finetuned from model:** allenai/OLMo-2-7B-1124-DPO
+ - **Finetuned from model:** allenai/OLMo-2-7B-1124-SFT

### Model Sources

@@ -137,4 +137,15 @@ This model has been fine-tuned using a dataset mix with outputs generated from t

## Citation

- A technical manuscript is forthcoming!
+ ```bibtex
+ @article{olmo20242olmo2furious,
+     title={2 OLMo 2 Furious},
+     author={Team OLMo and Pete Walsh and Luca Soldaini and Dirk Groeneveld and Kyle Lo and Shane Arora and Akshita Bhagia and Yuling Gu and Shengyi Huang and Matt Jordan and Nathan Lambert and Dustin Schwenk and Oyvind Tafjord and Taira Anderson and David Atkinson and Faeze Brahman and Christopher Clark and Pradeep Dasigi and Nouha Dziri and Michal Guerquin and Hamish Ivison and Pang Wei Koh and Jiacheng Liu and Saumya Malik and William Merrill and Lester James V. Miranda and Jacob Morrison and Tyler Murray and Crystal Nam and Valentina Pyatkin and Aman Rangapur and Michael Schmitz and Sam Skjonsberg and David Wadden and Christopher Wilhelm and Michael Wilson and Luke Zettlemoyer and Ali Farhadi and Noah A. Smith and Hannaneh Hajishirzi},
+     year={2024},
+     eprint={2501.00656},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL},
+     url={https://arxiv.org/abs/2501.00656},
+ }
+ ```
+
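
For reference, the updated checkpoint is served through the standard Transformers causal-LM interface (see the page tags above). The snippet below is a minimal usage sketch, assuming a recent `transformers` release with OLMo 2 support and the chat template bundled with the `allenai/OLMo-2-1124-7B-DPO` repository; the prompt and generation settings are illustrative, not taken from the model card.

```python
# Illustrative sketch (not part of this commit): load the DPO checkpoint
# and run a single chat turn. Assumes a recent `transformers` release with
# OLMo 2 support; generation settings are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/OLMo-2-1124-7B-DPO"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "What is language modeling?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, dropping the echoed prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```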