Quantization by Richard Erkhov.
math-doc-refining-lm - AWQ
- Model creator: https://huggingface.co/gair-prox/
- Original model: https://huggingface.co/gair-prox/math-doc-refining-lm/
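The AWQ weights load through the standard transformers API once the autoawq package is installed. A minimal sketch, assuming a quantized repo id in Richard Erkhov's usual naming scheme; substitute the actual repository name:

```python
# Minimal loading sketch for the AWQ-quantized checkpoint.
# Requires: pip install transformers accelerate autoawq
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id -- replace with the actual quantized repository.
repo_id = "RichardErkhov/gair-prox_-_math-doc-refining-lm-awq"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# transformers reads the AWQ quantization config stored in the checkpoint
# and loads the 4-bit weights for inference.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
```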
Original model description:
license: apache-2.0
datasets:
- gair-prox/RedPajama-pro
language:
- en
base_model:
- gair-prox/RedPJ-ProX-0.7B
pipeline_tag: text-generation
library_name: transformers
tags:
- llama
- code
Math-doc-refining-lm
Math-doc-refining-lm is an adapted 0.7B ProX model, fine-tuned for document-level refining via program generation. It can be applied to math pre-training corpora such as open-web-math.
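As a usage illustration, the sketch below prompts the model on a raw document so it emits a refining program. The prompt format and the shape of the emitted program are assumptions here; consult the ProX paper and repository for the exact template and the interpreter that executes the generated calls:

```python
# Hedged sketch: generate a document-level refining program for one raw
# math document. Prompt template and output format are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gair-prox/math-doc-refining-lm"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

raw_doc = "Theorem. 1 + 2 + ... + n = n(n+1)/2. [sidebar] Subscribe now!"
inputs = tokenizer(raw_doc, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=False)

# Keep only the newly generated tokens: the refining "program" whose calls
# a separate interpreter would apply to clean the document.
program = tokenizer.decode(out[0][inputs["input_ids"].shape[1]:],
                           skip_special_tokens=True)
print(program)
```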
Citation
@article{zhou2024programming,
title={Programming Every Example: Lifting Pre-training Data Quality like Experts at Scale},
author={Zhou, Fan and Wang, Zengzhi and Liu, Qian and Li, Junlong and Liu, Pengfei},
journal={arXiv preprint arXiv:2409.17115},
year={2024}
}