---
license: cc-by-nc-sa-4.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
base_model: kyujinpy/Sakura-SOLAR-Instruct
model_creator: KyujinHan
model_name: Sakura Solar Instruct
tags:
- exl2
---
# Sakura-SOLAR-Instruct
- Model creator: [KyujinHan](https://huggingface.co/kyujinpy)
- Original model: [Sakura-SOLAR-Instruct](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct)
- A merge of:
- [VAGOsolutions/SauerkrautLM-SOLAR-Instruct](https://huggingface.co/VAGOsolutions/SauerkrautLM-SOLAR-Instruct)
- [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0)
## Quantizations
VRAM usage was measured with the ExLlamav2_HF loader at 4096 max_seq_len in [Oobabooga's Text Generation WebUI](https://github.com/oobabooga/text-generation-webui/tree/main).
I also provide a zipped version of each quantization, since many people find a single-file download (as with GGUF) convenient. The zipped file is also slightly smaller to download. After extracting it, use the model folder as usual.
If you have an 8GB card, use [TheBloke's 4bit-32g quants](https://huggingface.co/TheBloke/Sakura-SOLAR-Instruct-GPTQ/tree/gptq-4bit-32g-actorder_True) instead (7.4 GB VRAM usage).
| Branch | BPW | Folder Size | Zipped File Size | VRAM Usage | Description |
| ------ | --- | ----------- | ---------------- | ---------- | ----------- |
| [3.0bpw](https://huggingface.co/hgloow/Sakura-SOLAR-Instruct-EXL2/tree/3.0bpw) / [3.0bpw-zip](https://huggingface.co/hgloow/Sakura-SOLAR-Instruct-EXL2/tree/3.0bpw-zip) | 3.0 | 4.01 GB | 3.72 GB | 5.1 GB | For >=6GB VRAM cards with no more than ~500 MB of VRAM already in use (leaves headroom for other applications) |
| [5.0bpw (main)](https://huggingface.co/hgloow/Sakura-SOLAR-Instruct-EXL2/tree/main) / [5.0bpw-zip](https://huggingface.co/hgloow/Sakura-SOLAR-Instruct-EXL2/tree/5.0bpw-zip) | 5.0 | 6.45 GB | 6.3 GB | 7.7 GB | For >=10GB VRAM cards |
| [6.0bpw](https://huggingface.co/hgloow/Sakura-SOLAR-Instruct-EXL2/tree/6.0bpw) / [6.0bpw-zip](https://huggingface.co/hgloow/Sakura-SOLAR-Instruct-EXL2/tree/6.0bpw-zip) | 6.0 | 7.66 GB | 7.4 GB | 9.0 GB | For >=10GB VRAM cards with no more than ~500 MB of VRAM already in use |
| [7.0bpw](https://huggingface.co/hgloow/Sakura-SOLAR-Instruct-EXL2/tree/7.0bpw) / [7.0bpw-zip](https://huggingface.co/hgloow/Sakura-SOLAR-Instruct-EXL2/tree/7.0bpw-zip) | 7.0 | 8.89 GB | 8.6 GB | 10.2 GB | For >=11GB VRAM cards with no more than ~500 MB of VRAM already in use |
| [8.0bpw](https://huggingface.co/hgloow/Sakura-SOLAR-Instruct-EXL2/tree/8.0bpw) / [8.0bpw-zip](https://huggingface.co/hgloow/Sakura-SOLAR-Instruct-EXL2/tree/8.0bpw-zip) | 8.0 | 10.1 GB | 9.7 GB | 11.3 GB | For >=12GB VRAM cards with no more than ~500 MB of VRAM already in use |
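If you prefer scripting the download, here is a minimal sketch using `huggingface_hub` (the `revision` argument selects a BPW branch from the table above; the `local_dir` path is just an example):
```python
# Sketch: download one quantization branch of this repo.
# Assumes `pip install huggingface_hub`; local_dir is an arbitrary example path.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="hgloow/Sakura-SOLAR-Instruct-EXL2",
    revision="6.0bpw",  # branch name from the table above
    local_dir="Sakura-SOLAR-Instruct-EXL2-6.0bpw",
)
```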
## Calibration Dataset
- [argilla/distilabel-math-preference-dpo](https://huggingface.co/datasets/argilla/distilabel-math-preference-dpo)
  - This is the training dataset of [VAGOsolutions/SauerkrautLM-SOLAR-Instruct](https://huggingface.co/VAGOsolutions/SauerkrautLM-SOLAR-Instruct)
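If you want to inspect the calibration data yourself, a minimal sketch with the `datasets` library (assuming the default `train` split):
```python
# Sketch: load the calibration dataset and peek at one record.
from datasets import load_dataset

ds = load_dataset("argilla/distilabel-math-preference-dpo", split="train")
print(ds)     # dataset size and column names
print(ds[0])  # first preference record
```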
## Prompt template: Orca-Hashes
From [TheBloke](https://huggingface.co/TheBloke)
```
### System:
{system_message}

### User:
{prompt}

### Assistant:
```
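For scripted use, a small helper that fills in this template (the function name and default system message are my own, not part of the model card):
```python
# Sketch: assemble an Orca-Hashes prompt for this model.
def build_prompt(user_prompt: str, system_message: str = "You are a helpful assistant.") -> str:
    # Blank lines between sections follow the template above.
    return (
        f"### System:\n{system_message}\n\n"
        f"### User:\n{user_prompt}\n\n"
        f"### Assistant:\n"
    )

print(build_prompt("What is 12 * 7?"))
```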
### If you use Oobabooga's Chat tab
From my testing, the "Orca-Mini" template (or any of the Orca templates) produced the best results. Feel free to leave a suggestion if you know a better one.
# Original Info
# **Sakura-SOLAR-Instruct**
<img src='./sakura.png' width=512>
**This model was developed by the LLM research consortium of (주)미디어그룹사람과숲 (Media Group Saram-gwa-Soop) and (주)마커 (Marker).**
## Model Details
**Model Developers** Kyujin Han (kyujinpy)
**Method**
Merged using [Mergekit](https://github.com/cg123/mergekit).
I have shared the details of my model (training and code).
**Please see: [⭐Sakura-SOLAR](https://github.com/KyujinHan/Sakura-SOLAR-DPO).**
**Blog**
- [Sakura-SOLAR: model development process and retrospective](https://kyujinpy.tistory.com/122) (in Korean).
# **Model Benchmark**
## Open leaderboard
- Results are tracked on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Sakura-SOLRCA-Instruct-DPO | 74.05 | 71.16 | 88.49 | 66.17 | 72.10 | 82.95 | 63.46 |
| Sakura-SOLAR-Instruct-DPO-v2 | 74.14 | 70.90 | 88.41 | 66.48 | 71.86 | 83.43 | 63.76 |
| [kyujinpy/Sakura-SOLAR-Instruct](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct) | 74.40 | 70.99 | 88.42 | 66.33 | 71.79 | 83.66 | 65.20 |
> Ranked #1 as of 2023-12-27, 11:50 PM
# Implementation Code
```python
# Load Sakura-SOLAR-Instruct in fp16, letting Accelerate place it across available devices
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "kyujinpy/Sakura-SOLAR-Instruct"
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
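For completeness, a hypothetical generation call combining the loaded model with the Orca-Hashes template above (the question and sampling settings are arbitrary examples):
```python
# Sketch: one generation pass using the Orca-Hashes template.
prompt = (
    "### System:\nYou are a helpful assistant.\n\n"
    "### User:\nExplain what a model merge is in one sentence.\n\n"
    "### Assistant:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
# Print only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```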