Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

Pretrain-Qwen-1.2B - GGUF
- Model creator: https://huggingface.co/MiniLLM/
- Original model: https://huggingface.co/MiniLLM/Pretrain-Qwen-1.2B/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Pretrain-Qwen-1.2B.Q2_K.gguf](https://huggingface.co/RichardErkhov/MiniLLM_-_Pretrain-Qwen-1.2B-gguf/blob/main/Pretrain-Qwen-1.2B.Q2_K.gguf) | Q2_K | 0.51GB |
| [Pretrain-Qwen-1.2B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/MiniLLM_-_Pretrain-Qwen-1.2B-gguf/blob/main/Pretrain-Qwen-1.2B.Q3_K_S.gguf) | Q3_K_S | 0.57GB |
| [Pretrain-Qwen-1.2B.Q3_K.gguf](https://huggingface.co/RichardErkhov/MiniLLM_-_Pretrain-Qwen-1.2B-gguf/blob/main/Pretrain-Qwen-1.2B.Q3_K.gguf) | Q3_K | 0.61GB |
| [Pretrain-Qwen-1.2B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/MiniLLM_-_Pretrain-Qwen-1.2B-gguf/blob/main/Pretrain-Qwen-1.2B.Q3_K_M.gguf) | Q3_K_M | 0.61GB |
| [Pretrain-Qwen-1.2B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/MiniLLM_-_Pretrain-Qwen-1.2B-gguf/blob/main/Pretrain-Qwen-1.2B.Q3_K_L.gguf) | Q3_K_L | 0.63GB |
| [Pretrain-Qwen-1.2B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/MiniLLM_-_Pretrain-Qwen-1.2B-gguf/blob/main/Pretrain-Qwen-1.2B.IQ4_XS.gguf) | IQ4_XS | 0.65GB |
| [Pretrain-Qwen-1.2B.Q4_0.gguf](https://huggingface.co/RichardErkhov/MiniLLM_-_Pretrain-Qwen-1.2B-gguf/blob/main/Pretrain-Qwen-1.2B.Q4_0.gguf) | Q4_0 | 0.67GB |
| [Pretrain-Qwen-1.2B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/MiniLLM_-_Pretrain-Qwen-1.2B-gguf/blob/main/Pretrain-Qwen-1.2B.IQ4_NL.gguf) | IQ4_NL | 0.67GB |
| [Pretrain-Qwen-1.2B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/MiniLLM_-_Pretrain-Qwen-1.2B-gguf/blob/main/Pretrain-Qwen-1.2B.Q4_K_S.gguf) | Q4_K_S | 0.69GB |
| [Pretrain-Qwen-1.2B.Q4_K.gguf](https://huggingface.co/RichardErkhov/MiniLLM_-_Pretrain-Qwen-1.2B-gguf/blob/main/Pretrain-Qwen-1.2B.Q4_K.gguf) | Q4_K | 0.72GB |
| [Pretrain-Qwen-1.2B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/MiniLLM_-_Pretrain-Qwen-1.2B-gguf/blob/main/Pretrain-Qwen-1.2B.Q4_K_M.gguf) | Q4_K_M | 0.72GB |
| [Pretrain-Qwen-1.2B.Q4_1.gguf](https://huggingface.co/RichardErkhov/MiniLLM_-_Pretrain-Qwen-1.2B-gguf/blob/main/Pretrain-Qwen-1.2B.Q4_1.gguf) | Q4_1 | 0.72GB |
| [Pretrain-Qwen-1.2B.Q5_0.gguf](https://huggingface.co/RichardErkhov/MiniLLM_-_Pretrain-Qwen-1.2B-gguf/blob/main/Pretrain-Qwen-1.2B.Q5_0.gguf) | Q5_0 | 0.78GB |
| [Pretrain-Qwen-1.2B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/MiniLLM_-_Pretrain-Qwen-1.2B-gguf/blob/main/Pretrain-Qwen-1.2B.Q5_K_S.gguf) | Q5_K_S | 0.79GB |
| [Pretrain-Qwen-1.2B.Q5_K.gguf](https://huggingface.co/RichardErkhov/MiniLLM_-_Pretrain-Qwen-1.2B-gguf/blob/main/Pretrain-Qwen-1.2B.Q5_K.gguf) | Q5_K | 0.81GB |
| [Pretrain-Qwen-1.2B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/MiniLLM_-_Pretrain-Qwen-1.2B-gguf/blob/main/Pretrain-Qwen-1.2B.Q5_K_M.gguf) | Q5_K_M | 0.81GB |
| [Pretrain-Qwen-1.2B.Q5_1.gguf](https://huggingface.co/RichardErkhov/MiniLLM_-_Pretrain-Qwen-1.2B-gguf/blob/main/Pretrain-Qwen-1.2B.Q5_1.gguf) | Q5_1 | 0.83GB |
| [Pretrain-Qwen-1.2B.Q6_K.gguf](https://huggingface.co/RichardErkhov/MiniLLM_-_Pretrain-Qwen-1.2B-gguf/blob/main/Pretrain-Qwen-1.2B.Q6_K.gguf) | Q6_K | 0.93GB |
| [Pretrain-Qwen-1.2B.Q8_0.gguf](https://huggingface.co/RichardErkhov/MiniLLM_-_Pretrain-Qwen-1.2B-gguf/blob/main/Pretrain-Qwen-1.2B.Q8_0.gguf) | Q8_0 | 1.15GB |
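The files above run on any llama.cpp-compatible runtime. Below is a minimal sketch using the `llama-cpp-python` bindings, not part of the original card: the Q4_K_M filename is taken from the table, while the prompt, context size, and token budget are illustrative assumptions.

```python
# Minimal sketch: download one quant from this repo and run a completion.
# Assumes `pip install llama-cpp-python huggingface-hub`; swap the filename
# for any other quant listed in the table above.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="RichardErkhov/MiniLLM_-_Pretrain-Qwen-1.2B-gguf",
    filename="Pretrain-Qwen-1.2B.Q4_K_M.gguf",
    n_ctx=2048,  # context window; illustrative default
)

# This is a base (pre-trained) model, so use plain text completion
# rather than a chat template.
out = llm("The Pile is a large text corpus that", max_tokens=64)
print(out["choices"][0]["text"])
```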
Original model description:

---
library_name: transformers
license: apache-2.0
datasets:
- monology/pile-uncopyrighted
- MiniLLM/pile-tokenized
language:
- en
metrics:
- accuracy
pipeline_tag: text-generation
---

# Pretrain-Qwen-1.2B

[paper](https://arxiv.org/abs/2410.17215) | [code](https://github.com/thu-coai/MiniPLM)

**Pretrain-Qwen-1.2B** is a 1.2B-parameter model with the Qwen architecture, conventionally pre-trained from scratch on [the Pile](https://huggingface.co/datasets/monology/pile-uncopyrighted) for 50B tokens. We also open-source the tokenized [pre-training corpus](https://huggingface.co/datasets/MiniLLM/pile-tokenized) for reproducibility.

**It is used as the baseline for [MiniLLM-Qwen-1.2B](https://huggingface.co/MiniLLM/MiniPLM-Qwen-1.2B).**
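Since the original checkpoint is a plain `transformers` text-generation model, loading it should follow the standard pattern. A minimal sketch, assuming the checkpoint works with the Auto classes (the prompt and generation settings are illustrative; older Qwen configs may additionally require `trust_remote_code=True`):

```python
# Minimal sketch: greedy completion from the base checkpoint.
# Assumes `pip install transformers torch`.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MiniLLM/Pretrain-Qwen-1.2B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

# Base model: prompt with plain text to complete, no chat template.
inputs = tokenizer("The Pile is a large text corpus that", return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```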
## Evaluation

MiniPLM models achieve better performance given the same computation and scale well across model sizes.

## Other Baselines

+ [VanillaKD](https://huggingface.co/MiniLLM/VanillaKD-Pretrain-Qwen-1.2B)

## Citation

```bibtex
@article{miniplm,
  title={MiniPLM: Knowledge Distillation for Pre-Training Language Models},
  author={Yuxian Gu and Hao Zhou and Fandong Meng and Jie Zhou and Minlie Huang},
  journal={arXiv preprint arXiv:2410.17215},
  year={2024}
}
```