Quantization made by Richard Erkhov.
MiniPLM-llama3.1-212M - AWQ
- Model creator: https://huggingface.co/MiniLLM/
- Original model: https://huggingface.co/MiniLLM/MiniPLM-llama3.1-212M/
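A minimal loading sketch for an AWQ checkpoint with transformers is shown below. The repo id is a placeholder (this card does not state where the quantized weights are hosted), and the autoawq package must be installed:

```python
# Minimal sketch: loading an AWQ-quantized checkpoint via transformers.
# Assumes `pip install transformers autoawq accelerate`.
# NOTE: the repo id is a hypothetical placeholder, not confirmed by this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-namespace/MiniPLM-llama3.1-212M-awq"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
# transformers reads the AWQ quantization config from the checkpoint itself.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
```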
Original model description:
---
library_name: transformers
license: apache-2.0
datasets:
- monology/pile-uncopyrighted
- MiniLLM/pile-diff_samp-qwen_1.8B-qwen_104M-r0.5
language:
- en
metrics:
- accuracy
pipeline_tag: text-generation
---
MiniPLM-llama3.1-212M
MiniPLM-llama3.1-212M is a 212M-parameter model with the LLaMA3.1 architecture, pre-trained from scratch on the Pile using the MiniPLM knowledge distillation framework with the official Qwen1.5-1.8B model as the teacher. This model demonstrates the flexibility of the MiniPLM framework in conducting knowledge distillation across model families.
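As a usage sketch (not part of the original card), the model loads as a standard causal LM with transformers:

```python
# Minimal sketch: text generation with the original (non-quantized) model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MiniLLM/MiniPLM-llama3.1-212M"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The Pile is a large", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```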
We also open-source the pre-training corpus refined by Difference Sampling in MiniPLM for reproducibility.
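A minimal sketch for inspecting that corpus with the datasets library; the `train` split name and streaming mode are assumptions, not confirmed by this card:

```python
# Minimal sketch: streaming the Difference-Sampling-refined corpus.
# The "train" split name is an assumption; check the dataset page.
from datasets import load_dataset

ds = load_dataset(
    "MiniLLM/pile-diff_samp-qwen_1.8B-qwen_104M-r0.5",
    split="train",
    streaming=True,  # iterate without downloading the full corpus
)
print(next(iter(ds)))  # inspect the first record's fields
```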
Evaluation
MiniPLM models achieve better performance given the same computation and scale well across model sizes.
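The card does not ship evaluation code; purely as an illustrative sketch (not the paper's evaluation protocol), a quick perplexity check with transformers could look like this:

```python
# Illustrative sketch only: perplexity of the model on a short text sample.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MiniLLM/MiniPLM-llama3.1-212M"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

text = "Language models are trained to predict the next token."
enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    out = model(**enc, labels=enc["input_ids"])
print("perplexity:", torch.exp(out.loss).item())
```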
Baseline Models
Citation
@article{miniplm,
title={MiniPLM: Knowledge Distillation for Pre-Training Language Models},
author={Yuxian Gu and Hao Zhou and Fandong Meng and Jie Zhou and Minlie Huang},
journal={arXiv preprint arXiv:2410.17215},
year={2024}
}