Alepach committed on
Commit 9ea58c2 · verified · 1 Parent(s): 3339c75

Model save

README.md CHANGED
@@ -6,28 +6,30 @@ tags:
 - generated_from_trainer
 - trl
 - sft
-license: apache-2.0
-datasets:
-- OpenAssistant/oasst1
+licence: license
 ---
 
-# notHumpback-Myx
+# Model Card for notHumpback-Myx
 
-This model follows the Humpback architecture, proposed in the paper [Self-Alignment with Instruction Backtranslation](https://arxiv.org/pdf/2308.06259)
-by Li et al.
+This model is a fine-tuned version of [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B).
+It has been trained using [TRL](https://github.com/huggingface/trl).
 
-It represents the "backward model", which is used to generate instructions from web texts; the web texts themselves are treated as candidate model outputs.
+## Quick start
 
-Humpback uses instruction backtranslation on a web corpus to generate input-output pairs (self-augmentation),
-creating a richer dataset for fine-tuning models without the need for additional manual annotation.
-The model then iteratively curates the created dataset, scoring the pairs by quality, and is then fine-tuned on the resulting subset
-of all pairs with the highest possible score (self-curation).
+```python
+from transformers import pipeline
+
+question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
+generator = pipeline("text-generation", model="Alepach/notHumpback-Myx", device="cuda")
+output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
+print(output["generated_text"])
+```
 
-Varying from the original paper, this model is a fine-tuned version of [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B).
-It has been trained using [TRL](https://github.com/huggingface/trl).
+## Training procedure
 
-The dataset used to train this model has been sampled from the [oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1) dataset.
-In order to achieve the "backward" structure, the model is trained on output-input pairs.
+This model was trained with SFT.
 
 ### Framework versions
 
@@ -39,18 +41,7 @@ In order to achieve the "backward" structure, the model is trained on output-input pairs.
 
 ## Citations
 
-Original paper:
-
-```bibtex
-@misc{li2023selfalignment,
-      title={Self-Alignment with Instruction Backtranslation},
-      author={Xian Li and Ping Yu and Chunting Zhou and Timo Schick and Luke Zettlemoyer and Omer Levy and Jason Weston and Mike Lewis},
-      year={2023},
-      eprint={2308.06259},
-      archivePrefix={arXiv},
-      primaryClass={cs.CL}
-}
-```
 
 Cite TRL as:
 
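The README text removed in this commit describes the Humpback "backward model": rather than learning instruction → response, it is trained on output-input pairs sampled from oasst1, so that assistant responses become prompts and the original instructions become targets. Below is a minimal sketch of that pair construction, assuming the public oasst1 message schema (`message_id`, `parent_id`, `role`, `text`); it is an illustration, not the repository's actual training script:

```python
from datasets import load_dataset

# Assumption: oasst1 stores conversations as message trees, where each
# assistant reply points at its parent prompter message via parent_id.
ds = load_dataset("OpenAssistant/oasst1", split="train")
by_id = {row["message_id"]: row for row in ds}

backward_pairs = []
for row in ds:
    if row["role"] == "assistant" and row["parent_id"] in by_id:
        parent = by_id[row["parent_id"]]
        if parent["role"] == "prompter":
            # Swap the pair: the model output becomes the prompt and the
            # human instruction becomes the completion ("backward" SFT).
            backward_pairs.append({"prompt": row["text"], "completion": parent["text"]})
```

A prompt/completion dataset in this shape can then be passed to TRL's `SFTTrainer` in the same way as a forward instruction-tuning set.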
model-00001-of-00002.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:6363ec1a035cd7af9d4beb1c18baaca6aa941b0bb170eb33378fdddea3dbed9c
+oid sha256:69de60651018900fe91a6663e026a0c4d97c115b56e6147eb2cfbef4615d8fb9
 size 4965799096
model-00002-of-00002.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:b6994d2660bda6dfe7f473f56b27aeb1a78d4b743508b1e6c6ddae996f1f4db8
+oid sha256:7335e946d7197501f57939c4dd520663e0e301bab0bf6faf140a8d59feafd205
 size 1459729952
special_tokens_map.json CHANGED
@@ -13,11 +13,5 @@
     "rstrip": false,
     "single_word": false
   },
-  "pad_token": {
-    "content": "<|finetune_right_pad_id|>",
-    "lstrip": false,
-    "normalized": false,
-    "rstrip": false,
-    "single_word": false
-  }
+  "pad_token": "<|finetune_right_pad_id|>"
 }
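Both serializations of `pad_token` load to the same special token; a quick sanity check using the standard `transformers` API (not part of the commit):

```python
from transformers import AutoTokenizer

# The plain-string form and the AddedToken-dict form resolve to the same pad token.
tok = AutoTokenizer.from_pretrained("Alepach/notHumpback-Myx")
assert tok.pad_token == "<|finetune_right_pad_id|>"
```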
tokenizer_config.json CHANGED
@@ -2053,15 +2053,11 @@
   "chat_template": "{{- bos_token }}\n{% set ns = namespace(system_message='') %}\n{%- for message in messages %}\n {%- if message['role'] == 'system' %}\n {% set ns.system_message = message['content'].strip() %}\n {%- elif message['role'] == 'user' %}\n {{- '<|start_header_id|>user<|end_header_id|>' + ns.system_message + '\\n' + message['content'].strip() + '<|eot_id|>' }}\n {%- elif message['role'] == 'assistant' %}\n {{- '<|start_header_id|>assistant<|end_header_id|>' + message['content'] + '<|eot_id|>' }}\n {%- endif %}\n{%- endfor %}\n",
   "clean_up_tokenization_spaces": true,
   "eos_token": "<|end_of_text|>",
-  "max_length": 131072,
   "model_input_names": [
     "input_ids",
     "attention_mask"
   ],
   "model_max_length": 131072,
   "pad_token": "<|finetune_right_pad_id|>",
-  "stride": 0,
-  "tokenizer_class": "PreTrainedTokenizerFast",
-  "truncation_side": "right",
-  "truncation_strategy": "longest_first"
+  "tokenizer_class": "PreTrainedTokenizerFast"
 }
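The `chat_template` above is what flattens a message list into a single string for training and inference; note that it folds any system message into the user header rather than emitting a separate system turn. A small demonstration with the standard `transformers` API (the rendered string is inferred from the template itself, so treat it as approximate):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Alepach/notHumpback-Myx")
messages = [
    {"role": "system", "content": "You are concise."},
    {"role": "user", "content": "Name one prime number."},
]
# Renders roughly:
# <|begin_of_text|><|start_header_id|>user<|end_header_id|>You are concise.\nName one prime number.<|eot_id|>
print(tok.apply_chat_template(messages, tokenize=False))
```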
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e64bae3fccca4a5921c72056a6283073e78cbbe6cf47e7d312aff49e77f208c6
+oid sha256:870ae2e32c102e49c08b17791d1b79605cb1e4c2926081ec569d9b43c21d8ccd
 size 5560