winglian committed

Commit ce694e2 · Parents: cebea37 6e7d4d5

Merge branch 'main' of github.com:OpenAccess-AI-Collective/axolotl into dev

Files changed (4)
  1. README.md +241 -68
  2. data/README.md +4 -4
  3. image/axolotl.png +0 -0
  4. src/axolotl/utils/models.py +2 -0
README.md CHANGED
@@ -1,71 +1,211 @@
  # Axolotl

- #### Go ahead and axolotl questions

- ## Support Matrix

  | | fp16/fp32 | fp16/fp32 w/ lora | 4bit-quant | 4bit-quant w/flash attention | flash attention | xformers attention |
  |----------|:----------|:------------------|------------|------------------------------|-----------------|--------------------|
  | llama | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
  | Pythia | ✅ | ✅ | ❌ | ❌ | ❌ | ❓ |
  | cerebras | ✅ | ✅ | ❌ | ❌ | ❌ | ❓ |

- ## Getting Started
- - install python 3.9. 3.10 and above are not supported.

- - Point the config you are using to a huggingface hub dataset (see [configs/llama_7B_4bit.yml](https://github.com/winglian/axolotl/blob/main/configs/llama_7B_4bit.yml#L6-L8))

- ```yaml
- datasets:
- - path: vicgalle/alpaca-gpt4
- type: alpaca
  ```

- - Optionally Download some datasets, see [data/README.md](data/README.md)

- - Create a new or update the existing YAML config [config/sample.yml](config/sample.yml)

  ```yaml
  # this is the huggingface model that contains *.pt, *.safetensors, or *.bin files
  # this can also be a relative path to a model on disk
- base_model: decapoda-research/llama-7b-hf-int4
  # you can specify an ignore pattern if the model repo contains more than 1 model type (*.pt, etc)
  base_model_ignore_patterns:
  # if the base_model repo on hf hub doesn't include configuration .json files,
  # you can set that here, or leave this empty to default to base_model
- base_model_config: decapoda-research/llama-7b-hf
  # If you want to specify the type of model to load, AutoModelForCausalLM is a good choice too
  model_type: AutoModelForCausalLM
  # Corresponding tokenizer for the model AutoTokenizer is a good choice
  tokenizer_type: AutoTokenizer
  # whether you are training a 4-bit quantized model
  load_4bit: true
  # this will attempt to quantize the model down to 8 bits and use adam 8 bit optimizer
  load_in_8bit: true
  # a list of one or more datasets to finetune the model with
  datasets:
  # this can be either a hf dataset, or relative path
  - path: vicgalle/alpaca-gpt4
  # The type of prompt to use for training. [alpaca, sharegpt, gpteacher, oasst, reflection]
  type: alpaca
  # axolotl attempts to save the dataset as an arrow after packing the data together so
  # subsequent training attempts load faster, relative path
  dataset_prepared_path: data/last_run_prepared
  # How much of the dataset to set aside as evaluation. 1 = 100%, 0.50 = 50%, etc
  val_set_size: 0.04
- # if you want to use lora, leave blank to train all parameters in original model
- adapter: lora
- # if you already have a lora model trained that you want to load, put that here
- lora_model_dir:
  # the maximum length of an input to train with, this should typically be less than 2048
  # as most models have a token/context limit of 2048
  sequence_len: 2048
  # max sequence length to concatenate training samples together up to
  # inspired by StackLLaMA. see https://huggingface.co/blog/stackllama#supervised-fine-tuning
  max_packed_sequence_len: 1024
  # lora hyperparameters
  lora_r: 8
  lora_alpha: 16
  lora_dropout: 0.05
@@ -74,14 +214,24 @@ lora_target_modules:
  - v_proj
  # - k_proj
  # - o_proj
  lora_fan_in_fan_out: false
- # wandb configuration if your're using it
  wandb_project:
  wandb_watch:
  wandb_run_id:
- wandb_log_model: checkpoint
- # where to save the finsihed model to
  output_dir: ./completed-model
  # training hyperparameters
  batch_size: 8
  micro_batch_size: 2
@@ -89,87 +239,110 @@ eval_batch_size: 2
  num_epochs: 3
  warmup_steps: 100
  learning_rate: 0.00003
  # whether to mask out or include the human's prompt from the training labels
  train_on_inputs: false
  # don't use this, leads to wonky training (according to someone on the internet)
  group_by_length: false
- # Use CUDA bf16
- bf16: true
- # Use CUDA tf32
- tf32: true
  # does not work with current implementation of 4-bit LoRA
  gradient_checkpointing: false
  # stop training after this many evaluation losses have increased in a row
  # https://huggingface.co/transformers/v4.2.2/_modules/transformers/trainer_callback.html#EarlyStoppingCallback
  early_stopping_patience: 3
  # specify a scheduler to use with the optimizer. only one_cycle is supported currently
  lr_scheduler:
  # whether to use xformers attention patch https://github.com/facebookresearch/xformers:
  xformers_attention:
  # whether to use flash attention patch https://github.com/HazyResearch/flash-attention:
  flash_attention:
  # resume from a specific checkpoint dir
  resume_from_checkpoint:
  # if resume_from_checkpoint isn't set and you simply want it to start where it left off
  # be careful with this being turned on between different models
  auto_resume_from_checkpoints: false
  # don't mess with this, it's here for accelerate and torchrun
  local_rank:
- ```

- - Install python dependencies with ONE of the following:
- - `pip3 install -e .[int4]` (recommended)
- - `pip3 install -e .[int4_triton]`
- - `pip3 install -e .`
- -
- - If not using `int4` or `int4_triton`, run `pip install "peft @ git+https://github.com/huggingface/peft.git"`
- - Configure accelerate `accelerate config` or update `~/.cache/huggingface/accelerate/default_config.yaml`

- ```yaml
- compute_environment: LOCAL_MACHINE
- distributed_type: MULTI_GPU
- downcast_bf16: 'no'
- gpu_ids: all
- machine_rank: 0
- main_training_function: main
- mixed_precision: bf16
- num_machines: 1
- num_processes: 4
- rdzv_backend: static
- same_network: true
- tpu_env: []
- tpu_use_cluster: false
- tpu_use_sudo: false
- use_cpu: false
  ```

- - Train! `accelerate launch scripts/finetune.py`, make sure to choose the correct YAML config file
- - Alternatively you can pass in the config file like: `accelerate launch scripts/finetune.py configs/llama_7B_alpaca.yml`~~

- ## How to start training on Runpod in under 10 minutes

- - Choose your Docker container wisely.
- - I recommend `huggingface:transformers-pytorch-deepspeed-latest-gpu` see https://hub.docker.com/r/huggingface/transformers-pytorch-deepspeed-latest-gpu/
- - Once you start your runpod, and SSH into it:
- ```shell
- export TORCH_CUDA_ARCH_LIST="7.0 7.5 8.0 8.6+PTX"
- source <(curl -s https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/dev/scripts/setup-runpod.sh)
  ```

- - Once the setup script completes
- ```shell
- accelerate launch scripts/finetune.py configs/quickstart.yml
  ```

- - Here are some helpful environment variables you'll want to manually set if you open a new shell
- ```shell
- export WANDB_MODE=offline
- export WANDB_CACHE_DIR=/workspace/data/wandb-cache
- export HF_DATASETS_CACHE="/workspace/data/huggingface-cache/datasets"
- export HUGGINGFACE_HUB_CACHE="/workspace/data/huggingface-cache/hub"
- export TRANSFORMERS_CACHE="/workspace/data/huggingface-cache/hub"
- export NCCL_P2P_DISABLE=1
  ```

  # Axolotl

+ <div align="center">
+ <img src="image/axolotl.png" alt="axolotl" width="160">
+ <div>
+ <p>
+ <b>One repo to finetune them all! </b>
+ </p>
+ <p>
+ Go ahead and axolotl questions!!
+ </p>
+ </div>
+ </div>

+ ## Axolotl supports

  | | fp16/fp32 | fp16/fp32 w/ lora | 4bit-quant | 4bit-quant w/flash attention | flash attention | xformers attention |
  |----------|:----------|:------------------|------------|------------------------------|-----------------|--------------------|
  | llama | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
  | Pythia | ✅ | ✅ | ❌ | ❌ | ❌ | ❓ |
  | cerebras | ✅ | ✅ | ❌ | ❌ | ❌ | ❓ |
+ | mpt | ✅ | ❌ | ❌ | ❌ | ❌ | ❓ |

+ ## Quickstart

+ **Requirements**: Python 3.9.

+ ```bash
+ git clone https://github.com/OpenAccess-AI-Collective/axolotl
+
+ pip3 install -e .[int4]
+
+ accelerate config
+
+ # finetune
+ accelerate launch scripts/finetune.py examples/4bit-lora-7b/config.yml
+
+ # inference
+ accelerate launch scripts/finetune.py examples/4bit-lora-7b/config.yml \
+ --inference --lora_model_dir="./llama-7b-lora-int4"
  ```

+ ## Installation
+
+ ### Environment
+
+ - Docker
+ ```bash
+ docker run --gpus '"all"' --rm -it winglian/axolotl:main
+ ```
+ - `winglian/axolotl:dev`: dev branch
+ - `winglian/axolotl-runpod:main`: for runpod
+
+ - Conda/Pip venv
+ 1. Install python **3.9**
+
+ 2. Install python dependencies with ONE of the following:
+ - `pip3 install -e .[int4]` (recommended)
+ - `pip3 install -e .[int4_triton]`
+ - `pip3 install -e .`
+
+ ### Dataset
+
+ Have dataset(s) in one of the following format (JSONL recommended):
+
+ - `alpaca`: instruction; input(optional)
+ ```json
+ {"instruction": "...", "input": "...", "output": "..."}
+ ```
+ - `sharegpt`: conversations
+ ```json
+ {"conversations": [{"from": "...", "value": "..."}]}
+ ```
+ - `completion`: raw corpus
+ ```json
+ {"text": "..."}
+ ```
+
+ <details>
+
+ <summary>See other formats</summary>
+
+ - `jeopardy`: question and answer
+ ```json
+ {"question": "...", "category": "...", "answer": "..."}
+ ```
+ - `oasst`: instruction
+ ```json
+ {"INSTRUCTION": "...", "RESPONSE": "..."}
+ ```
+ - `gpteacher`: instruction; input(optional)
+ ```json
+ {"instruction": "...", "input": "...", "response": "..."}
+ ```
+ - `reflection`: instruction with reflect; input(optional)
+ ```json
+ {"instruction": "...", "input": "...", "output": "...", "reflection": "...", "corrected": "..."}
+ ```
+
+ > Have some new format to propose? Check if it's already defined in [data.py](src/axolotl/utils/data.py) in `dev` branch!

+ </details>

+ Optionally, download some datasets, see [data/README.md](data/README.md)
+
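Each of the formats above is plain JSONL: one JSON object per line, no enclosing array. As a rough, self-contained illustration (not part of axolotl), the sketch below writes a couple of `alpaca`-style records and checks that every line parses back with the expected keys; the path `data/my_alpaca.jsonl` is only a placeholder.

```python
import json
from pathlib import Path

# Placeholder path; point your config's datasets entry at whatever file you use.
path = Path("data/my_alpaca.jsonl")
records = [
    {"instruction": "Name a primary color.", "input": "", "output": "Blue."},
    {"instruction": "Add the numbers.", "input": "2 and 3", "output": "5"},
]

# JSONL: one compact JSON object per line.
with path.open("w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")

# Sanity check: every line is valid JSON and carries the alpaca keys.
with path.open(encoding="utf-8") as f:
    for line in f:
        rec = json.loads(line)
        assert {"instruction", "output"} <= rec.keys()
```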
+ ### Config
+
+ See sample configs in [configs](configs) folder or [examples](examples) for quick start. It is recommended to duplicate and modify to your needs. The most important options are:
+
+ - model
+ ```yaml
+ base_model: ./llama-7b-hf # local or huggingface repo
+ ```
+ Note: The code will load the right architecture.
+
+ - dataset
+ ```yaml
+ datasets:
+ - path: vicgalle/alpaca-gpt4 # local or huggingface repo
+ type: alpaca # format from earlier
+ sequence_len: 2048 # max token length / prompt
+ ```
+
+ - loading
+ ```yaml
+ load_4bit: true
+ load_in_8bit: true
+ bf16: true
+ fp16: true
+ tf32: true
+ ```
+ Note: Repo does not do 4-bit quantization.
+
+ - lora
+ ```yaml
+ adapter: lora # blank for full finetune
+ lora_r: 8
+ lora_alpha: 16
+ lora_dropout: 0.05
+ lora_target_modules:
+ - q_proj
+ - v_proj
+ ```
+
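Whether the `bf16`, `fp16`, and `tf32` flags in the loading options make sense depends on the GPU; bf16 and tf32 generally need Ampere or newer hardware. A small plain-PyTorch check (not axolotl code) that can help pick values before editing the config:

```python
import torch

if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
    # bf16 generally requires compute capability 8.0 (Ampere) or newer.
    print("bf16 supported:", torch.cuda.is_bf16_supported())

# tf32 is a global PyTorch switch for matmuls/convolutions on Ampere+ GPUs;
# presumably the `tf32: true` option toggles settings along these lines.
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True
```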
+ <details>
+
+ <summary>All yaml options</summary>

  ```yaml
  # this is the huggingface model that contains *.pt, *.safetensors, or *.bin files
  # this can also be a relative path to a model on disk
+ base_model: ./llama-7b-hf
  # you can specify an ignore pattern if the model repo contains more than 1 model type (*.pt, etc)
  base_model_ignore_patterns:
  # if the base_model repo on hf hub doesn't include configuration .json files,
  # you can set that here, or leave this empty to default to base_model
+ base_model_config: ./llama-7b-hf
  # If you want to specify the type of model to load, AutoModelForCausalLM is a good choice too
  model_type: AutoModelForCausalLM
  # Corresponding tokenizer for the model AutoTokenizer is a good choice
  tokenizer_type: AutoTokenizer
+ # Trust remote code for untrusted source
+ trust_remote_code:
+
  # whether you are training a 4-bit quantized model
  load_4bit: true
+ gptq_groupsize: 128 # group size
+ gptq_model_v1: false # v1 or v2
+
  # this will attempt to quantize the model down to 8 bits and use adam 8 bit optimizer
  load_in_8bit: true
+
+ # Use CUDA bf16
+ bf16: true
+ # Use CUDA fp16
+ fp16: true
+ # Use CUDA tf32
+ tf32: true
+
  # a list of one or more datasets to finetune the model with
  datasets:
  # this can be either a hf dataset, or relative path
  - path: vicgalle/alpaca-gpt4
  # The type of prompt to use for training. [alpaca, sharegpt, gpteacher, oasst, reflection]
  type: alpaca
+ data_files: # path to source data files
+
  # axolotl attempts to save the dataset as an arrow after packing the data together so
  # subsequent training attempts load faster, relative path
  dataset_prepared_path: data/last_run_prepared
+ # push prepared dataset to hub
+ push_dataset_to_hub: # repo path
  # How much of the dataset to set aside as evaluation. 1 = 100%, 0.50 = 50%, etc
  val_set_size: 0.04
+
  # the maximum length of an input to train with, this should typically be less than 2048
  # as most models have a token/context limit of 2048
  sequence_len: 2048
  # max sequence length to concatenate training samples together up to
  # inspired by StackLLaMA. see https://huggingface.co/blog/stackllama#supervised-fine-tuning
  max_packed_sequence_len: 1024
+
+ # if you want to use lora, leave blank to train all parameters in original model
+ adapter: lora
+ # if you already have a lora model trained that you want to load, put that here
  # lora hyperparameters
+ lora_model_dir:
  lora_r: 8
  lora_alpha: 16
  lora_dropout: 0.05

  - v_proj
  # - k_proj
  # - o_proj
+ # - gate_proj
+ # - down_proj
+ # - up_proj
+ lora_modules_to_save:
+ # - embed_tokens
+ # - lm_head
+ lora_out_dir:
  lora_fan_in_fan_out: false
+
+ # wandb configuration if you're using it
  wandb_project:
  wandb_watch:
  wandb_run_id:
+ wandb_log_model: # 'checkpoint'
+
+ # where to save the finished model to
  output_dir: ./completed-model
+
  # training hyperparameters
  batch_size: 8
  micro_batch_size: 2

  num_epochs: 3
  warmup_steps: 100
  learning_rate: 0.00003
+ logging_steps:
+
  # whether to mask out or include the human's prompt from the training labels
  train_on_inputs: false
  # don't use this, leads to wonky training (according to someone on the internet)
  group_by_length: false
+
  # does not work with current implementation of 4-bit LoRA
  gradient_checkpointing: false
+
  # stop training after this many evaluation losses have increased in a row
  # https://huggingface.co/transformers/v4.2.2/_modules/transformers/trainer_callback.html#EarlyStoppingCallback
  early_stopping_patience: 3
  # specify a scheduler to use with the optimizer. only one_cycle is supported currently
  lr_scheduler:
+ # specify optimizer
+ optimizer:
+ # specify weight decay
+ weight_decay:
+
  # whether to use xformers attention patch https://github.com/facebookresearch/xformers:
  xformers_attention:
  # whether to use flash attention patch https://github.com/HazyResearch/flash-attention:
  flash_attention:
+
  # resume from a specific checkpoint dir
  resume_from_checkpoint:
  # if resume_from_checkpoint isn't set and you simply want it to start where it left off
  # be careful with this being turned on between different models
  auto_resume_from_checkpoints: false
+
  # don't mess with this, it's here for accelerate and torchrun
  local_rank:

+ # add or change special tokens
+ special_tokens:
+ # bos_token: "<s>"
+ # eos_token: "</s>"
+ # unk_token: "<unk>"
+ # add extra tokens
+ tokens:

+ # FSDP
+ fsdp:
+ fsdp_config:

+ # Deepspeed
+ deepspeed:
+
+ # TODO
+ torchdistx_path:
+
+ # Debug mode
+ debug:
  ```

+ </details>
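The `special_tokens` and `tokens` options near the end of the block above correspond, at least conceptually, to the standard tokenizer calls in `transformers`. The sketch below shows that usual pattern in plain `transformers` (it is not axolotl's internal code), with `./llama-7b-hf` reused from the config examples as a placeholder:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "./llama-7b-hf"  # placeholder path, as in the config examples above
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# special_tokens: set or override named special tokens.
tokenizer.add_special_tokens(
    {"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>"}
)
# tokens: append extra ordinary tokens to the vocabulary.
tokenizer.add_tokens(["<extra_0>", "<extra_1>"])

# After growing the vocabulary, the embedding matrix must be resized to match.
model.resize_token_embeddings(len(tokenizer))
```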
+
+ ### Accelerate

+ Configure accelerate

+ ```bash
+ accelerate config

+ # Edit manually
+ # nano ~/.cache/huggingface/accelerate/default_config.yaml
  ```

+ ### Train
+
+ Run
+ ```bash
+ accelerate launch scripts/finetune.py configs/your_config.yml
  ```

+ ### Inference
+
+ Add `--inference` flag to train command above
+
+ If you are inferencing a pretrained LORA, pass
+ ```bash
+ --lora_model_dir ./completed-model
  ```

+ ### Merge LORA to base (Dev branch 🔧 )
+
+ Add below flag to train command above
+
+ ```bash
+ --merge_lora --lora_model_dir="./completed-model"
+ ```
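Merging a LoRA into the base weights is what `peft` exposes as `merge_and_unload`. For context only, here is a standalone sketch of the same idea (this is not the `--merge_lora` code path; the paths mirror the README's examples):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model, then attach the trained adapter from its output dir.
base = AutoModelForCausalLM.from_pretrained("./llama-7b-hf", torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, "./completed-model")

# Fold the LoRA deltas into the base weights and drop the adapter wrappers.
merged = model.merge_and_unload()
merged.save_pretrained("./completed-model-merged")
```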
+
+ ## Common Errors 🧰
+
+ > Cuda out of memory
+
+ Please reduce any below
+ - `micro_batch_size`
+ - `eval_batch_size`
+ - `sequence_len`
+
+ ## Contributing 🤝
+
+ Bugs? Please check for open issue else create a new [Issue](https://github.com/OpenAccess-AI-Collective/axolotl/issues/new).
+
+ PRs are **greatly welcome**!
data/README.md CHANGED
@@ -1,6 +1,5 @@
 
- - Download some datasets
- -
  ```shell
  curl https://raw.githubusercontent.com/tloen/alpaca-lora/main/alpaca_data_gpt4.json -o data/raw/alpaca_data_gpt4.json
  curl https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json -L -o data/raw/vicuna_cleaned.json
@@ -8,7 +7,7 @@ curl https://github.com/teknium1/GPTeacher/blob/main/Instruct/gpt4-instruct-simi
  curl https://github.com/teknium1/GPTeacher/blob/main/Roleplay/roleplay-similarity_0.6-instruct-dataset.json?raw=true -L -o data/raw/roleplay-similarity_0.6-instruct-dataset.json
  ```

- - Convert the JSON data files to JSONL.

  ```shell
  python3 ./scripts/alpaca_json_to_jsonl.py --input data/alpaca_data_gpt4.json > data/alpaca_data_gpt4.jsonl
@@ -16,8 +15,9 @@ python3 ./scripts/alpaca_json_to_jsonl.py --input data/raw/vicuna_cleaned.json >
  python3 ./scripts/alpaca_json_to_jsonl.py --input data/raw/roleplay-similarity_0.6-instruct-dataset.json > data/roleplay-similarity_0.6-instruct-dataset.jsonl
  python3 ./scripts/alpaca_json_to_jsonl.py --input data/raw/gpt4-instruct-similarity-0.6-dataset.json > data/gpt4-instruct-similarity-0.6-dataset.jsonl
  ```

- - Using JSONL makes it easier to subset the data if you want a smaller training set, i.e get 2000 random examples.

  ```shell
  shuf -n2000 data/vicuna_cleaned.jsonl > data/vicuna_cleaned.subset0.jsonl

 
+ ## Download some datasets

  ```shell
  curl https://raw.githubusercontent.com/tloen/alpaca-lora/main/alpaca_data_gpt4.json -o data/raw/alpaca_data_gpt4.json
  curl https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json -L -o data/raw/vicuna_cleaned.json

  curl https://github.com/teknium1/GPTeacher/blob/main/Roleplay/roleplay-similarity_0.6-instruct-dataset.json?raw=true -L -o data/raw/roleplay-similarity_0.6-instruct-dataset.json
  ```

+ ## Convert the JSON data files to JSONL.

  ```shell
  python3 ./scripts/alpaca_json_to_jsonl.py --input data/alpaca_data_gpt4.json > data/alpaca_data_gpt4.jsonl

  python3 ./scripts/alpaca_json_to_jsonl.py --input data/raw/roleplay-similarity_0.6-instruct-dataset.json > data/roleplay-similarity_0.6-instruct-dataset.jsonl
  python3 ./scripts/alpaca_json_to_jsonl.py --input data/raw/gpt4-instruct-similarity-0.6-dataset.json > data/gpt4-instruct-similarity-0.6-dataset.jsonl
  ```
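The repo's `scripts/alpaca_json_to_jsonl.py` handles the conversion above; at its core it just flattens a JSON array into one object per line. A generic stand-in sketch (not that script, and the file names are whatever you pass in):

```python
import json
import sys

# Usage (hypothetical): python json_to_jsonl.py data/alpaca_data_gpt4.json > data/alpaca_data_gpt4.jsonl
with open(sys.argv[1], encoding="utf-8") as f:
    records = json.load(f)  # the raw downloads are a single JSON array

for rec in records:
    print(json.dumps(rec, ensure_ascii=False))  # one compact object per line
```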
+ ---

+ Using JSONL makes it easier to subset the data if you want a smaller training set, i.e get 2000 random examples.

  ```shell
  shuf -n2000 data/vicuna_cleaned.jsonl > data/vicuna_cleaned.subset0.jsonl
image/axolotl.png ADDED
src/axolotl/utils/models.py CHANGED
@@ -124,6 +124,7 @@ def load_model(
  base_model_config if base_model_config else base_model,
  model_path,
  device_map=cfg.device_map,
  groupsize=cfg.gptq_groupsize if cfg.gptq_groupsize else -1,
  is_v1_model=cfg.gptq_model_v1
  if cfg.gptq_model_v1 is not None
@@ -343,6 +344,7 @@ def load_lora(model, cfg):
  target_modules=cfg.lora_target_modules,
  lora_dropout=cfg.lora_dropout,
  fan_in_fan_out=cfg.lora_fan_in_fan_out,
  bias="none",
  task_type="CAUSAL_LM",
  )

  base_model_config if base_model_config else base_model,
  model_path,
  device_map=cfg.device_map,
+ half=cfg.fp16,
  groupsize=cfg.gptq_groupsize if cfg.gptq_groupsize else -1,
  is_v1_model=cfg.gptq_model_v1
  if cfg.gptq_model_v1 is not None

  target_modules=cfg.lora_target_modules,
  lora_dropout=cfg.lora_dropout,
  fan_in_fan_out=cfg.lora_fan_in_fan_out,
+ modules_to_save=cfg.lora_modules_to_save if cfg.lora_modules_to_save else None,
  bias="none",
  task_type="CAUSAL_LM",
  )
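The second hunk threads the new `lora_modules_to_save` option into `peft`'s `LoraConfig`. In `peft`, `modules_to_save` keeps the named modules fully trainable and stores them alongside the adapter, which matters when the vocabulary has been extended. A minimal standalone sketch of the option in plain `peft` (illustrative values and path, not the project's defaults):

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("./llama-7b-hf")  # placeholder path

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    fan_in_fan_out=False,
    # Keep these modules fully trainable and save them with the LoRA weights,
    # e.g. after resizing embeddings for newly added special tokens.
    modules_to_save=["embed_tokens", "lm_head"],
    bias="none",
    task_type=TaskType.CAUSAL_LM,
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```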