BenjaminB and amannagrawall002 committed on
Commit
617c1d0
1 Parent(s): 1c735d3

Update lora_clm_with_additional_tokens.ipynb (#5)


- Update lora_clm_with_additional_tokens.ipynb (1c16df5a5d0e9f5546bc26e086a3a20b5d7b2d0f)


Co-authored-by: Aman Agrawal <[email protected]>

lora_clm_with_additional_tokens.ipynb CHANGED
@@ -10,7 +10,7 @@
  "In this example, we will learn how to train a LoRA model when adding new tokens to the tokenizer and model. \n",
  "This is a common usecase when doing the following:\n",
  "1. Instruction finetuning with new tokens beind added such as `<|user|>`, `<|assistant|>`, `<|system|>`, `</s>`, `<s>` to properly format the conversations\n",
- "2. Finetuning on a specific language wherein language spoecific tokens are added, e.g., korean tokens being added to vocabulary for finetuning LLM on Korean datasets.\n",
+ "2. Finetuning on a specific language wherein language specific tokens are added, e.g., korean tokens being added to vocabulary for finetuning LLM on Korean datasets.\n",
  "3. Instruction finetuning to return outputs in certain format to enable agent behaviour new tokens such as `<|FUNCTIONS|>`, `<|BROWSE|>`, `<|TEXT2IMAGE|>`, `<|ASR|>`, `<|TTS|>`, `<|GENERATECODE|>`, `<|RAG|>`.\n",
  "\n",
  "In such cases, you add the Embedding modules to the LORA `target_modules`. PEFT will take care of saving the embedding layers with the new added tokens along with the adapter weights that were trained on the specific initialization of the embeddings weights of the added tokens."