ssaryssane committed · verified
Commit c5f53b3 · 1 Parent(s): 24f0668

Upload folder using huggingface_hub
README.md CHANGED
@@ -1,24 +1,34 @@
 ---
-tags:
-- merge
-- mergekit
-- lazymergekit
-- Sao10K/Fimbulvetr-10.7B-v1
-- upstage/SOLAR-10.7B-Instruct-v1.0
 base_model:
-- Sao10K/Fimbulvetr-10.7B-v1
 - upstage/SOLAR-10.7B-Instruct-v1.0
+- Sao10K/Fimbulvetr-10.7B-v1
+library_name: transformers
+tags:
+- mergekit
+- merge
+
 ---
+# merge
 
-# sarry-10.7B-slerp
+This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
 
-sarry-10.7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
-* [Sao10K/Fimbulvetr-10.7B-v1](https://huggingface.co/Sao10K/Fimbulvetr-10.7B-v1)
+## Merge Details
+### Merge Method
+
+This model was merged using the SLERP merge method.
+
+### Models Merged
+
+The following models were included in the merge:
 * [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0)
+* [Sao10K/Fimbulvetr-10.7B-v1](https://huggingface.co/Sao10K/Fimbulvetr-10.7B-v1)
 
-## 🧩 Configuration
+### Configuration
+
+The following YAML configuration was used to produce this model:
 
 ```yaml
+
 slices:
 - sources:
 - model: Sao10K/Fimbulvetr-10.7B-v1
@@ -26,7 +36,7 @@ slices:
 - model: upstage/SOLAR-10.7B-Instruct-v1.0
 layer_range: [0, 32]
 merge_method: slerp
-base_model: Sao10K/Fimbulvetr-10.7B-v1
+base_model: upstage/SOLAR-10.7B-Instruct-v1.0
 parameters:
 t:
 - filter: self_attn
@@ -35,29 +45,5 @@ parameters:
 value: [1, 0.5, 0.7, 0.3, 0]
 - value: 0.5
 dtype: bfloat16
-```
 
-## 💻 Usage
-
-```python
-!pip install -qU transformers accelerate
-
-from transformers import AutoTokenizer
-import transformers
-import torch
-
-model = "ssaryssane/sarry-10.7B-slerp"
-messages = [{"role": "user", "content": "What is a large language model?"}]
-
-tokenizer = AutoTokenizer.from_pretrained(model)
-prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
-pipeline = transformers.pipeline(
-    "text-generation",
-    model=model,
-    torch_dtype=torch.float16,
-    device_map="auto",
-)
-
-outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
-print(outputs[0]["generated_text"])
-```
+```
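For context on the card's "SLERP merge method": `merge_method: slerp` interpolates each pair of weight tensors along the unit sphere rather than along a straight line. Below is a minimal sketch of that operation (an illustration under the assumption of per-tensor spherical interpolation, not mergekit's actual implementation; the `slerp` helper and the random tensors are hypothetical):

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation of two weight tensors, treated as flat vectors."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    # Angle between the two weight vectors.
    omega = torch.arccos(torch.clamp(torch.dot(a_unit, b_unit), -1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:
        # Nearly colinear vectors: fall back to plain linear interpolation.
        out = (1.0 - t) * a_flat + t * b_flat
    else:
        out = (torch.sin((1.0 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape).to(a.dtype)

# t = 0 keeps the base model's tensor, t = 1 takes the other model's;
# the per-filter lists in the YAML above vary t with layer depth.
merged = slerp(0.5, torch.randn(64, 64), torch.randn(64, 64))
```

Compared with a plain weighted average, this keeps the interpolated weights on the arc between the two originals, which is the usual motivation given for choosing SLERP as the merge method.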
config.json CHANGED
@@ -1,10 +1,9 @@
 {
-  "_name_or_path": "Sao10K/Fimbulvetr-10.7B-v1",
+  "_name_or_path": "upstage/SOLAR-10.7B-Instruct-v1.0",
   "architectures": [
     "LlamaForCausalLM"
   ],
   "attention_bias": false,
-  "attention_dropout": 0.0,
   "bos_token_id": 1,
   "eos_token_id": 2,
   "hidden_act": "silu",
@@ -16,6 +15,7 @@
   "num_attention_heads": 32,
   "num_hidden_layers": 32,
   "num_key_value_heads": 8,
+  "pad_token_id": 2,
   "pretraining_tp": 1,
   "rms_norm_eps": 1e-05,
   "rope_scaling": null,
mergekit_config.yml CHANGED
@@ -6,7 +6,7 @@ slices:
 - model: upstage/SOLAR-10.7B-Instruct-v1.0
 layer_range: [0, 32]
 merge_method: slerp
-base_model: Sao10K/Fimbulvetr-10.7B-v1
+base_model: upstage/SOLAR-10.7B-Instruct-v1.0
 parameters:
 t:
 - filter: self_attn
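In both the README and mergekit_config.yml, `t` is given as a list of anchor values per filter, which mergekit interprets as a gradient over the merged layer stack. A small sketch of one plausible way such a gradient could be sampled per layer (the exact interpolation scheme is an assumption for illustration, and `t_for_layer` is a hypothetical helper, not a mergekit API):

```python
import numpy as np

def t_for_layer(curve, layer, num_layers=32):
    # Spread the anchor values evenly over the merged layers and
    # linearly interpolate between them for a given layer index.
    anchors = np.linspace(0, num_layers - 1, num=len(curve))
    return float(np.interp(layer, anchors, curve))

curve = [1, 0.5, 0.7, 0.3, 0]        # the mlp filter's values from the config
print(t_for_layer(curve, 0))          # 1.0 at the first layer
print(t_for_layer(curve, 16))         # ~0.67 mid-stack
print(t_for_layer(curve, 31))         # 0.0 at the last layer
```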
model-00001-of-00008.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:7795f4c6534852f2b3f55875ab8cee7defaafc98baf5e1e62ade28bb847b44a2
+oid sha256:48525a4a0f783906648f6817cebce0f1eff22c1791098223a526adeec2a7bac3
 size 1979773128
model-00002-of-00008.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5bf25709948eb151bb2b9ff508b1738ff1378e0203e114367fa46f0ad21f33d4
+oid sha256:eab31689452a8111dd7bf69790ad21beffd0f1a4103bef6e734c2bc4be1bb4f9
 size 1946235640
model-00003-of-00008.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:6b25b5026d4dd347168d14b4713c14356defac9d726217520364efe90dda084f
+oid sha256:876ca2e573d87537d07b8131373c00dd63cd371564d925b0470f1040cad493b5
 size 1973490216
model-00004-of-00008.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:d5cb5eb53e4d699ff0615532387ebad3a73e4798f2362f2c1927d55ba397cab4
+oid sha256:cf9abd9329742be8437dfb8a6d26a58139f0ef15dac5482c34a91f47f8450148
 size 1979781464
model-00005-of-00008.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:0c08e5f2905b841b28bf777e810c97635533a2a2201848db77e538d9856a0d43
+oid sha256:ba2fa338fe09e12b70885b507408c9281d2ee9775e7cfe8a2ff32bc46748faa6
 size 1946243984
model-00006-of-00008.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:6e5140ee9413f3555956ae43830764d68bd27452e46f99e171314cbf12113ee7
+oid sha256:501fdcb9fc81aad0ee37315f2785b372ebf2460d2dfdb82db77e2b3e56215c6f
 size 1979798072
model-00007-of-00008.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:d297a98cdcd4d915aff581ba02e84336fa11d17612d92ef65710f04b6705c07c
+oid sha256:9a5553968470e655f57ae92b71f1d001a8d8e324029c842312d01a5ddfa8eb37
 size 1979789776
model-00008-of-00008.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:731ad821946a4b4cac0a5035a77758ac85f0376691203a55a9528f711ce65989
+oid sha256:c8cc0b0392b79effef6bbeb2b5ec9188ec3180b88165b1dcb2be14a4c031b6ca
 size 698385744
special_tokens_map.json CHANGED
@@ -13,6 +13,13 @@
     "rstrip": false,
     "single_word": false
   },
+  "pad_token": {
+    "content": "</s>",
+    "lstrip": false,
+    "normalized": false,
+    "rstrip": false,
+    "single_word": false
+  },
   "unk_token": {
     "content": "<unk>",
     "lstrip": false,
tokenizer_config.json CHANGED
@@ -27,11 +27,12 @@
   },
   "additional_special_tokens": [],
   "bos_token": "<s>",
+  "chat_template": "{% for message in messages %}{% if message['role'] == 'system' %}{% if message['content']%}{{'### System:\n' + message['content']+'\n\n'}}{% endif %}{% elif message['role'] == 'user' %}{{'### User:\n' + message['content']+'\n\n'}}{% elif message['role'] == 'assistant' %}{{'### Assistant:\n' + message['content']}}{% endif %}{% if loop.last and add_generation_prompt %}{{ '### Assistant:\n' }}{% endif %}{% endfor %}",
   "clean_up_tokenization_spaces": false,
   "eos_token": "</s>",
   "legacy": true,
   "model_max_length": 1000000000000000019884624838656,
-  "pad_token": null,
+  "pad_token": "</s>",
   "sp_model_kwargs": {},
   "spaces_between_special_tokens": false,
   "tokenizer_class": "LlamaTokenizer",