---
arxiv: 2412.17743
base_model: yulan-team/YuLan-Mini
datasets:
- yulan-team/YuLan-Mini-Datasets
- HuggingFaceFW/fineweb-edu
- bigcode/the-stack-v2
- mlfoundations/dclm-baseline-1.0
- math-ai/AutoMathText
- gair-prox/open-web-math-pro
- RUC-AIBOX/long_form_thought_data_5k
- internlm/Lean-Workbook
- internlm/Lean-Github
- deepseek-ai/DeepSeek-Prover-V1
- ScalableMath/Lean-STaR-base
- ScalableMath/Lean-STaR-plus
- ScalableMath/Lean-CoT-base
- ScalableMath/Lean-CoT-plus
- opencsg/chinese-fineweb-edu
- liwu/MNBVC
- vikp/textbook_quality_programming
- HuggingFaceTB/smollm-corpus
- OpenCoder-LLM/opc-annealing-corpus
- OpenCoder-LLM/opc-sft-stage1
- OpenCoder-LLM/opc-sft-stage2
- XinyaoHu/AMPS_mathematica
- deepmind/math_dataset
- mrfakename/basic-math-10m
- microsoft/orca-math-word-problems-200k
- AI-MO/NuminaMath-CoT
- HuggingFaceTB/cosmopedia
- MU-NLPC/Calc-ape210k
- manu/project_gutenberg
- storytracer/LoC-PD-Books
- allenai/dolma
language:
- en
- zh
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- code
- math
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->

static quants of https://huggingface.co/yulan-team/YuLan-Mini

<!-- provided-files -->
weighted/imatrix quants are not currently available from me. If they do
not show up within a week or so after the static ones, I have probably
not planned them; feel free to request them by opening a Community
Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
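
If you prefer a programmatic route, here is a minimal sketch using
`huggingface_hub` and `llama-cpp-python`; the repo id and filename come
from the table below, while the context size and prompt are placeholders.

```python
# A minimal sketch of running one of the quants from this repo with
# llama-cpp-python (pip install huggingface_hub llama-cpp-python).
# The repo id and filename match the table below; n_ctx and the prompt
# are illustrative, not recommendations.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a single quant file from this repository.
model_path = hf_hub_download(
    repo_id="mradermacher/YuLan-Mini-GGUF",
    filename="YuLan-Mini.Q4_K_M.gguf",
)

# Load the GGUF file; n_ctx sets the context window in tokens.
llm = Llama(model_path=model_path, n_ctx=4096)

# Run a short completion and print the generated text.
out = llm("Write a Python function that reverses a string.", max_tokens=128)
print(out["choices"][0]["text"])
```

All quants in this repository are single files, so no concatenation step
is needed here.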

## Provided Quants

(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/YuLan-Mini-GGUF/resolve/main/YuLan-Mini.Q3_K_S.gguf) | Q3_K_S | 1.6 |  |
| [GGUF](https://huggingface.co/mradermacher/YuLan-Mini-GGUF/resolve/main/YuLan-Mini.Q2_K.gguf) | Q2_K | 1.6 |  |
| [GGUF](https://huggingface.co/mradermacher/YuLan-Mini-GGUF/resolve/main/YuLan-Mini.IQ4_XS.gguf) | IQ4_XS | 1.6 |  |
| [GGUF](https://huggingface.co/mradermacher/YuLan-Mini-GGUF/resolve/main/YuLan-Mini.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/YuLan-Mini-GGUF/resolve/main/YuLan-Mini.Q3_K_L.gguf) | Q3_K_L | 1.7 |  |
| [GGUF](https://huggingface.co/mradermacher/YuLan-Mini-GGUF/resolve/main/YuLan-Mini.Q4_K_S.gguf) | Q4_K_S | 1.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/YuLan-Mini-GGUF/resolve/main/YuLan-Mini.Q4_K_M.gguf) | Q4_K_M | 1.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/YuLan-Mini-GGUF/resolve/main/YuLan-Mini.Q5_K_S.gguf) | Q5_K_S | 2.0 |  |
| [GGUF](https://huggingface.co/mradermacher/YuLan-Mini-GGUF/resolve/main/YuLan-Mini.Q5_K_M.gguf) | Q5_K_M | 2.1 |  |
| [GGUF](https://huggingface.co/mradermacher/YuLan-Mini-GGUF/resolve/main/YuLan-Mini.Q6_K.gguf) | Q6_K | 2.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/YuLan-Mini-GGUF/resolve/main/YuLan-Mini.Q8_0.gguf) | Q8_0 | 2.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/YuLan-Mini-GGUF/resolve/main/YuLan-Mini.f16.gguf) | f16 | 5.0 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for answers to
common questions and for requesting other models to be quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and for providing upgrades to my workstation,
enabling me to do this work in my free time.

<!-- end -->