Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


OLMoE-1B-7B-0924 - GGUF
- Model creator: https://huggingface.co/allenai/
- Original model: https://huggingface.co/allenai/OLMoE-1B-7B-0924/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [OLMoE-1B-7B-0924.Q2_K.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMoE-1B-7B-0924-gguf/blob/main/OLMoE-1B-7B-0924.Q2_K.gguf) | Q2_K | 2.39GB |
| [OLMoE-1B-7B-0924.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMoE-1B-7B-0924-gguf/blob/main/OLMoE-1B-7B-0924.IQ3_XS.gguf) | IQ3_XS | 2.67GB |
| [OLMoE-1B-7B-0924.IQ3_S.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMoE-1B-7B-0924-gguf/blob/main/OLMoE-1B-7B-0924.IQ3_S.gguf) | IQ3_S | 2.82GB |
| [OLMoE-1B-7B-0924.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMoE-1B-7B-0924-gguf/blob/main/OLMoE-1B-7B-0924.Q3_K_S.gguf) | Q3_K_S | 2.82GB |
| [OLMoE-1B-7B-0924.IQ3_M.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMoE-1B-7B-0924-gguf/blob/main/OLMoE-1B-7B-0924.IQ3_M.gguf) | IQ3_M | 2.87GB |
| [OLMoE-1B-7B-0924.Q3_K.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMoE-1B-7B-0924-gguf/blob/main/OLMoE-1B-7B-0924.Q3_K.gguf) | Q3_K | 3.11GB |
| [OLMoE-1B-7B-0924.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMoE-1B-7B-0924-gguf/blob/main/OLMoE-1B-7B-0924.Q3_K_M.gguf) | Q3_K_M | 3.11GB |
| [OLMoE-1B-7B-0924.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMoE-1B-7B-0924-gguf/blob/main/OLMoE-1B-7B-0924.Q3_K_L.gguf) | Q3_K_L | 3.36GB |
| [OLMoE-1B-7B-0924.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMoE-1B-7B-0924-gguf/blob/main/OLMoE-1B-7B-0924.IQ4_XS.gguf) | IQ4_XS | 3.5GB |
| [OLMoE-1B-7B-0924.Q4_0.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMoE-1B-7B-0924-gguf/blob/main/OLMoE-1B-7B-0924.Q4_0.gguf) | Q4_0 | 3.66GB |
| [OLMoE-1B-7B-0924.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMoE-1B-7B-0924-gguf/blob/main/OLMoE-1B-7B-0924.IQ4_NL.gguf) | IQ4_NL | 3.69GB |
| [OLMoE-1B-7B-0924.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMoE-1B-7B-0924-gguf/blob/main/OLMoE-1B-7B-0924.Q4_K_S.gguf) | Q4_K_S | 3.69GB |
| [OLMoE-1B-7B-0924.Q4_K.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMoE-1B-7B-0924-gguf/blob/main/OLMoE-1B-7B-0924.Q4_K.gguf) | Q4_K | 3.92GB |
| [OLMoE-1B-7B-0924.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMoE-1B-7B-0924-gguf/blob/main/OLMoE-1B-7B-0924.Q4_K_M.gguf) | Q4_K_M | 3.92GB |
| [OLMoE-1B-7B-0924.Q4_1.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMoE-1B-7B-0924-gguf/blob/main/OLMoE-1B-7B-0924.Q4_1.gguf) | Q4_1 | 4.05GB |
| [OLMoE-1B-7B-0924.Q5_0.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMoE-1B-7B-0924-gguf/blob/main/OLMoE-1B-7B-0924.Q5_0.gguf) | Q5_0 | 4.45GB |
| [OLMoE-1B-7B-0924.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMoE-1B-7B-0924-gguf/blob/main/OLMoE-1B-7B-0924.Q5_K_S.gguf) | Q5_K_S | 4.45GB |
| [OLMoE-1B-7B-0924.Q5_K.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMoE-1B-7B-0924-gguf/blob/main/OLMoE-1B-7B-0924.Q5_K.gguf) | Q5_K | 4.59GB |
| [OLMoE-1B-7B-0924.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMoE-1B-7B-0924-gguf/blob/main/OLMoE-1B-7B-0924.Q5_K_M.gguf) | Q5_K_M | 4.59GB |
| [OLMoE-1B-7B-0924.Q5_1.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMoE-1B-7B-0924-gguf/blob/main/OLMoE-1B-7B-0924.Q5_1.gguf) | Q5_1 | 4.85GB |
| [OLMoE-1B-7B-0924.Q6_K.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMoE-1B-7B-0924-gguf/blob/main/OLMoE-1B-7B-0924.Q6_K.gguf) | Q6_K | 5.29GB |
| [OLMoE-1B-7B-0924.Q8_0.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMoE-1B-7B-0924-gguf/blob/main/OLMoE-1B-7B-0924.Q8_0.gguf) | Q8_0 | 6.85GB |
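
The GGUF files above can be downloaded individually with `huggingface_hub` and run in any GGUF-compatible runtime. Below is a minimal sketch, assuming the `llama-cpp-python` bindings are installed (they are not part of this repo) and using the Q4_K_M file from the table; swap in whichever quant fits your hardware.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # assumes llama-cpp-python is installed

# Download one quant from this repo (Q4_K_M, ~3.92GB per the table above)
model_path = hf_hub_download(
    repo_id="RichardErkhov/allenai_-_OLMoE-1B-7B-0924-gguf",
    filename="OLMoE-1B-7B-0924.Q4_K_M.gguf",
)

# Load the model and run a short completion; n_ctx and max_tokens are illustrative
llm = Llama(model_path=model_path, n_ctx=2048)
out = llm("Bitcoin is", max_tokens=64)
print(out["choices"][0]["text"])
```

Smaller quants (Q2_K, Q3_K_*) trade output quality for memory, while Q8_0 stays closest to the original weights at the largest file size.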




Original model description:
---
license: apache-2.0
language:
- en
tags:
- moe
- olmo
- olmoe
co2_eq_emissions: 1
datasets:
- allenai/OLMoE-mix-0924
library_name: transformers
---

<img alt="OLMoE Logo." src="olmoe-logo.png" width="250px">


# Model Summary

> OLMoE-1B-7B is a Mixture-of-Experts LLM with 1B active and 7B total parameters released in September 2024 (0924). It yields state-of-the-art performance among models with a similar cost (1B) and is competitive with much larger models like Llama2-13B. OLMoE is 100% open-source.

This information and more can also be found on the [**OLMoE GitHub repository**](https://github.com/allenai/OLMoE).
- **Paper**: https://arxiv.org/abs/2409.02060
- **Pretraining** [Checkpoints](https://hf.co/allenai/OLMoE-1B-7B-0924), [Code](https://github.com/allenai/OLMo/tree/Muennighoff/MoE), [Data](https://huggingface.co/datasets/allenai/OLMoE-mix-0924) and [Logs](https://wandb.ai/ai2-llm/olmoe/reports/OLMoE-1B-7B-0924--Vmlldzo4OTcyMjU3).
- **SFT (Supervised Fine-Tuning)** [Checkpoints](https://huggingface.co/allenai/OLMoE-1B-7B-0924-SFT), [Code](https://github.com/allenai/open-instruct/tree/olmoe-sft), [Data](https://hf.co/datasets/allenai/tulu-v3.1-mix-preview-4096-OLMoE) and [Logs](https://github.com/allenai/OLMoE/blob/main/logs/olmoe-sft-logs.txt).
- **DPO/KTO (Direct Preference Optimization/Kahneman-Tversky Optimization)**, [Checkpoints](https://huggingface.co/allenai/OLMoE-1B-7B-0924-Instruct), [Preference Data](https://hf.co/datasets/allenai/ultrafeedback_binarized_cleaned), [DPO code](https://github.com/allenai/open-instruct/tree/olmoe-sft), [KTO code](https://github.com/Muennighoff/kto/blob/master/kto.py) and [Logs](https://github.com/allenai/OLMoE/blob/main/logs/olmoe-dpo-logs.txt).

# Use

Install `transformers` **from source** (until a release that includes [this PR](https://github.com/huggingface/transformers/pull/32406)) and `torch`, then run:

```python
from transformers import OlmoeForCausalLM, AutoTokenizer
import torch

DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

# Load different checkpoints by passing e.g. revision="step10000-tokens41B"
model = OlmoeForCausalLM.from_pretrained("allenai/OLMoE-1B-7B-0924").to(DEVICE)
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMoE-1B-7B-0924")
inputs = tokenizer("Bitcoin is", return_tensors="pt")
inputs = {k: v.to(DEVICE) for k, v in inputs.items()}
out = model.generate(**inputs, max_length=64)
print(tokenizer.decode(out[0]))
# > # Bitcoin is a digital currency that is created and held electronically. No one controls it. Bitcoins aren’t printed, like dollars or euros – they’re produced by people and businesses running computers all around the world, using software that solves mathematical
```

You can list all revisions/branches by installing `huggingface-hub` and running:
```python
from huggingface_hub import list_repo_refs
out = list_repo_refs("allenai/OLMoE-1B-7B-0924")
branches = [b.name for b in out.branches]
```

Important branches (a loading sketch follows this list):
- `step1200000-tokens5033B`: Pretraining checkpoint used for annealing. There are a few more checkpoints after this one, but we did not use them.
- `main`: Checkpoint annealed from `step1200000-tokens5033B` for an additional 100B tokens (23,842 steps). We use this checkpoint for our adaptation (https://huggingface.co/allenai/OLMoE-1B-7B-0924-SFT & https://huggingface.co/allenai/OLMoE-1B-7B-0924-Instruct).
- `fp32`: FP32 version of `main`. The model weights were stored in FP32 during training, but we observed no performance drop from casting them to BF16 after training, so all uploaded weights are in BF16. If you want the original FP32 checkpoint for `main`, use this one. It yields slightly different results but should perform about the same on benchmarks.
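
A minimal sketch of loading a specific branch, passing one of the branch names above via the `revision` argument of `from_pretrained`:

```python
from transformers import OlmoeForCausalLM, AutoTokenizer

# Load the pre-annealing checkpoint listed above instead of `main`
REVISION = "step1200000-tokens5033B"
model = OlmoeForCausalLM.from_pretrained("allenai/OLMoE-1B-7B-0924", revision=REVISION)
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMoE-1B-7B-0924", revision=REVISION)
```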

# Evaluation Snapshot

| Model                       | Active Params | Open Data | MMLU | HellaSwag | ARC-Chall. | ARC-Easy | PIQA | WinoGrande |
|-----------------------------|---------------|-----------|------|-----------|------------|----------|------|------------|
| **LMs with ~1B active parameters** |               |           |      |           |            |          |      |            |
| **OLMoE-1B-7B**              | **1.3B**      | **βœ…**    | **54.1** | **80.0** | **62.1**   | **84.2** | **79.8** | **70.2**  |
| DCLM-1B                     | 1.4B          | βœ…        | 48.5 | 75.1      | 57.6       | 79.5     | 76.6 | 68.1       |
| TinyLlama-1B                | 1.1B          | βœ…        | 33.6 | 60.8      | 38.1       | 69.5     | 71.7 | 60.1       |
| OLMo-1B (0724)              | 1.3B          | βœ…        | 32.1 | 67.5      | 36.4       | 53.5     | 74.0 | 62.9       |
| Pythia-1B                   | 1.1B          | βœ…        | 31.1 | 48.0      | 31.4       | 63.4     | 68.9 | 52.7       |
| **LMs with ~2-3B active parameters** |               |           |      |           |            |          |      |            |
| Qwen1.5-3B-14B              | 2.7B          | ❌        | **62.4** | 80.0      | **77.4**   | **91.6** | **81.0** | 72.3 |
| Gemma2-3B                   | 2.6B          | ❌        | 53.3 | 74.6      | 67.5       | 84.3     | 78.5 | 71.8       |
| JetMoE-2B-9B                | 2.2B          | ❌        | 49.1 | **81.7**  | 61.4       | 81.9     | 80.3 | 70.7       |
| DeepSeek-3B-16B             | 2.9B          | ❌        | 45.5 | 80.4      | 53.4       | 82.7     | 80.1 | **73.2**   |
| StableLM-2B                 | 1.6B          | ❌        | 40.4 | 70.3      | 50.6       | 75.3     | 75.6 | 65.8       |
| OpenMoE-3B-9B               | 2.9B          | βœ…        | 27.4 | 44.4      | 29.3       | 50.6     | 63.3 | 51.9       |
| **LMs with ~7-9B active parameters** |               |           |      |           |            |          |      |            |
| Gemma2-9B                   | 9.2B          | ❌        | **70.6** | **87.3**  | **89.5**   | **95.5** | **86.1** | **78.8** |
| Llama3.1-8B                 | 8.0B          | ❌        | 66.9 | 81.6      | 79.5       | 91.7     | 81.1 | 76.6       |
| DCLM-7B                     | 6.9B          | βœ…        | 64.4 | 82.3      | 79.8       | 92.3     | 80.1 | 77.3       |
| Mistral-7B                  | 7.3B          | ❌        | 64.0 | 83.0      | 78.6       | 90.8     | 82.8 | 77.9       |
| OLMo-7B (0724)              | 6.9B          | βœ…        | 54.9 | 80.5      | 68.0       | 85.7     | 79.3 | 73.2       |
| Llama2-7B                   | 6.7B          | ❌        | 46.2 | 78.9      | 54.2       | 84.0     | 77.5 | 71.7       |

# Citation

```bibtex
@misc{muennighoff2024olmoeopenmixtureofexpertslanguage,
      title={OLMoE: Open Mixture-of-Experts Language Models}, 
      author={Niklas Muennighoff and Luca Soldaini and Dirk Groeneveld and Kyle Lo and Jacob Morrison and Sewon Min and Weijia Shi and Pete Walsh and Oyvind Tafjord and Nathan Lambert and Yuling Gu and Shane Arora and Akshita Bhagia and Dustin Schwenk and David Wadden and Alexander Wettig and Binyuan Hui and Tim Dettmers and Douwe Kiela and Ali Farhadi and Noah A. Smith and Pang Wei Koh and Amanpreet Singh and Hannaneh Hajishirzi},
      year={2024},
      eprint={2409.02060},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2409.02060}, 
}
```