RichardErkhov committed 7c74c09 (verified; parent 286b592): uploaded readme

Files changed (1): README.md added (+56 lines)

Quantization made by Richard Erkhov.

[GitHub](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

math-doc-refining-lm - AWQ
- Model creator: https://huggingface.co/gair-prox/
- Original model: https://huggingface.co/gair-prox/math-doc-refining-lm/
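
Below is a minimal loading sketch for this quant. It assumes the checkpoint loads through transformers' built-in AWQ support (which requires the `autoawq` package); the repo id shown is a placeholder, so substitute the actual Hugging Face id of this quant.

```python
# Minimal sketch: load an AWQ-quantized causal LM with transformers.
# Requires: pip install transformers autoawq accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/math-doc-refining-lm-awq"  # placeholder -- substitute this quant's repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# The AWQ quantization config is read from the checkpoint itself; no extra args needed.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

inputs = tokenizer("Sample input text", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```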

Original model description:
---
license: apache-2.0
datasets:
- gair-prox/RedPajama-pro
language:
- en
base_model:
- gair-prox/RedPJ-ProX-0.7B
pipeline_tag: text-generation
library_name: transformers
tags:
- llama
- code
---

# Math-doc-refining-lm

<p align="center">
  <img src="prox-teaser.png">
</p>

[ArXiv](http://arxiv.org/abs/2409.17115) | [Code](https://github.com/GAIR-NLP/program-every-example)

**Math-doc-refining-lm** is an adapted [0.7B-ProX](https://huggingface.co/gair-prox/RedPJ-ProX-0.7B) model, fine-tuned for document-level refining via program generation. It can be applied over math pre-training corpora such as OpenWebMath.

<p align="center">
  <img src="func_design.png">
</p>
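
In short, the refining workflow has the model read a raw document and emit a small program whose function calls decide what to keep, drop, or normalize; executing that program yields the cleaned document. The hedged sketch below shows only the generation step on the original (non-quantized) checkpoint; the actual prompt template and the interpreter that executes the emitted program live in the GAIR-NLP/program-every-example repo, so passing the raw document directly as the prompt is an assumption.

```python
# Hedged sketch: generate a doc-refining program for one raw math document.
# The real prompt format and program executor come from
# https://github.com/GAIR-NLP/program-every-example -- this shows generation only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gair-prox/math-doc-refining-lm"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

raw_doc = "Theorem 1. For every n > 0 ... [subscribe banner] ... Proof. ..."
inputs = tokenizer(raw_doc, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64, do_sample=False)

# Decode only the continuation: the emitted refining program.
program = tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(program)
```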

### Citation
```
@article{zhou2024programming,
  title={Programming Every Example: Lifting Pre-training Data Quality like Experts at Scale},
  author={Zhou, Fan and Wang, Zengzhi and Liu, Qian and Li, Junlong and Liu, Pengfei},
  journal={arXiv preprint arXiv:2409.17115},
  year={2024}
}
```