joonavel committed
Commit 3ac15e6 · verified · 1 Parent(s): 52c8e22

Update README.md

Files changed (1): README.md +24 -7
README.md CHANGED

@@ -189,9 +189,6 @@ https://huggingface.co/datasets/won75/text_to_sql_ko
 
 https://github.com/100suping/train_with_unsloth
 
-```
-
-```
 
 ### Preprocess Functions
 
@@ -247,6 +244,29 @@ def formatting_prompts_func(examples):
 - CPU 16 vCore
 - Memory 192 GiB
 - Storage 100 GiB
+- **Memory-Used(GPU VRAM):** ~60GB
+
+## For Continuous Instruction-tuning
+
+To use this LoRA adapter, refer to the following code:
+
+```
+from peft import PeftModel
+
+bnb_config = get_bnb_config(bit=bit)
+
+model, tokenizer = FastLanguageModel.from_pretrained(
+    model_name=model_name,
+    dtype=None,
+    quantization_config=bnb_config,
+)
+
+model = PeftModel.from_pretrained(model, adapter_path, is_trainable=True)
+model = FastLanguageModel.patch_peft_model(model, use_gradient_checkpointing="unsloth")
+
+model.print_trainable_parameters()
+```
+
 
 ## Citation [optional]
 
@@ -267,13 +287,10 @@ def formatting_prompts_func(examples):
 }
 ```
 
-## Model Card Authors [optional]
+## Model Card Authors
 
 joonavel[https://github.com/joonavel] from 100suping [https://github.com/100suping]
 
-## Model Card Contact
-
-[More Information Needed]
 ### Framework versions
 
 - PEFT 0.13.2