Fine-tune SmolLM2 on custom synthetic data
SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device. Fine-tuning a language model like SmolLM involves several steps, from setting up the environment to training the model and saving the results. Below is a detailed step-by-step guide based on the provided notebook files.
| Notebook | Link |
|---|---|
| SmolLM-FT-360M | SmolLM-FT-360M.ipynb |
| SmolLM-FT-135M (Type 2) | SmolLM-FT-135M-T2.ipynb |
| SmolLM2 Demo | SmolLM2 Demo.ipynb |
Hugging Face access token
Don't forget to add or replace your own Hugging Face access token in the runtime to access gated lightweight models.
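If you are running in Colab or a similar notebook runtime, one way to make the token available is to log in programmatically with `huggingface_hub`. This is a minimal sketch, not part of the original notebook; the `HF_TOKEN` environment variable is just one common convention for storing the token.

```python
import os
from huggingface_hub import login

# Assumes the token is stored in an environment variable or Colab secret;
# replace with however you keep your own token.
login(token=os.environ.get("HF_TOKEN"))
```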
Step 1: Setting Up the Environment
Before diving into fine-tuning, you need to set up your environment with the necessary libraries and tools.
Install Required Libraries:
- Install the necessary Python libraries using `pip`. These include `transformers`, `datasets`, `trl`, `torch`, `accelerate`, `bitsandbytes`, and `wandb`.
- These libraries are essential for working with Hugging Face models, datasets, and training loops.

```python
!pip install transformers datasets trl torch accelerate bitsandbytes wandb
```
Import Necessary Modules:
- Import the required modules from the installed libraries. These include `AutoModelForCausalLM`, `AutoTokenizer`, `TrainingArguments`, `pipeline`, `load_dataset`, and `SFTTrainer`.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments, pipeline
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer, setup_chat_format
import torch
import os
```
Detect Device (GPU, MPS, or CPU):
- Detect the available hardware (GPU, MPS, or CPU) to ensure the model runs on the most efficient device.
```python
device = (
    "cuda" if torch.cuda.is_available()
    else "mps" if torch.backends.mps.is_available()
    else "cpu"
)
```
Step 2: Load the Pre-trained Model and Tokenizer
Next, load the pre-trained SmolLM model and its corresponding tokenizer.
Load the Model and Tokenizer:
- Use `AutoModelForCausalLM` and `AutoTokenizer` to load the SmolLM model and tokenizer from Hugging Face.

```python
model_name = "HuggingFaceTB/SmolLM2-360M"
model = AutoModelForCausalLM.from_pretrained(pretrained_model_name_or_path=model_name)
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path=model_name)
```
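Optionally, you can move the loaded model onto the device detected in Step 1. This is a small sketch, not from the original notebook; it is only needed if you call the model directly, since the `pipeline` used below and the trainer handle device placement themselves.

```python
# Move the model to the detected device ("cuda", "mps", or "cpu") from Step 1
model = model.to(device)
```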
Set Up Chat Format:
- Use the `setup_chat_format` function to prepare the model and tokenizer for chat-based tasks.

```python
model, tokenizer = setup_chat_format(model=model, tokenizer=tokenizer)
```
Test the Base Model:
- Test the base model with a simple prompt to ensure it’s working correctly.
prompt = "Explain AGI ?" pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, device=0 if device == "cuda" else -1) print(pipe(prompt, max_new_tokens=200))
If you encounter:
- "Chat template is already added to the tokenizer": this means the tokenizer already has a predefined chat template, which prevents `setup_chat_format()` from modifying it again. Clear the existing template first, as shown below.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from trl.models.utils import setup_chat_format

model_name = "HuggingFaceTB/SmolLM2-1.7B-Instruct"
model = AutoModelForCausalLM.from_pretrained(pretrained_model_name_or_path=model_name)
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path=model_name)

# Clear the predefined chat template so setup_chat_format can apply its own
tokenizer.chat_template = None
model, tokenizer = setup_chat_format(model=model, tokenizer=tokenizer)

prompt = "Explain AGI?"
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, device=0)
print(pipe(prompt, max_new_tokens=200))
```
📍 Otherwise, skip this workaround and continue with the next step.
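A quick way to check whether this workaround is needed at all is to inspect the tokenizer directly. This is a minimal check, not part of the original notebook.

```python
# If this prints a Jinja template string, the tokenizer already ships with a
# chat template and the workaround above applies; if it prints None, calling
# setup_chat_format() directly is enough.
print(tokenizer.chat_template)
```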
Step 3: Load and Prepare the Dataset
Fine-tuning requires a dataset. In this case, we're using a custom dataset called `Deepthink-Reasoning`.
Load the Dataset:
- Use the `load_dataset` function to load the dataset from Hugging Face.

```python
ds = load_dataset("prithivMLmods/Deepthink-Reasoning")
```
Tokenize the Dataset:
- Define a tokenization function that processes the dataset in batches. This function applies the chat template to each prompt-response pair and tokenizes the text.
```python
def tokenize_function(examples):
    prompts = [p.strip() for p in examples["prompt"]]
    responses = [r.strip() for r in examples["response"]]
    texts = [
        tokenizer.apply_chat_template(
            [{"role": "user", "content": p}, {"role": "assistant", "content": r}],
            tokenize=False
        )
        for p, r in zip(prompts, responses)
    ]
    return tokenizer(texts, truncation=True, padding="max_length", max_length=512)
```
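Before mapping the whole dataset, it can help to preview what the chat template produces for a single prompt-response pair. This is a sanity-check sketch assuming the dataset exposes `prompt` and `response` columns, as the function above does.

```python
sample = ds["train"][0]
preview = tokenizer.apply_chat_template(
    [
        {"role": "user", "content": sample["prompt"]},
        {"role": "assistant", "content": sample["response"]},
    ],
    tokenize=False,
)
print(preview)  # shows the special tokens wrapped around each turn
```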
Apply Tokenization:
- Apply the tokenization function to the dataset.
```python
ds = ds.map(tokenize_function, batched=True)
```
Step 4: Configure Training Arguments
Set up the training arguments to control the fine-tuning process.
Define Training Arguments:
- Use `TrainingArguments` to specify parameters like batch size, learning rate, number of steps, and optimization settings.

```python
use_bf16 = torch.cuda.is_bf16_supported()

training_args = TrainingArguments(
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    warmup_steps=5,
    max_steps=60,
    learning_rate=2e-4,
    fp16=not use_bf16,
    bf16=use_bf16,
    logging_steps=1,
    optim="adamw_8bit",
    weight_decay=0.01,
    lr_scheduler_type="linear",
    seed=3407,
    output_dir="outputs",
    report_to="wandb",
)
```
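Note that `SFTConfig` was imported in Step 1: recent TRL releases expect an `SFTConfig` (a subclass of `TrainingArguments`) to be passed to `SFTTrainer`. If your installed TRL version complains about plain `TrainingArguments`, an equivalent configuration might look like the sketch below; the exact set of supported fields depends on your TRL version.

```python
# Same hyperparameters, expressed as a TRL SFTConfig instead of TrainingArguments
training_args = SFTConfig(
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    warmup_steps=5,
    max_steps=60,
    learning_rate=2e-4,
    fp16=not use_bf16,
    bf16=use_bf16,
    logging_steps=1,
    optim="adamw_8bit",
    weight_decay=0.01,
    lr_scheduler_type="linear",
    seed=3407,
    output_dir="outputs",
    report_to="wandb",
)
```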
Step 5: Initialize the Trainer
Initialize the `SFTTrainer` with the model, tokenizer, dataset, and training arguments.
```python
trainer = SFTTrainer(
    model=model,
    processing_class=tokenizer,
    train_dataset=ds["train"],
    args=training_args,
)
```
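The `processing_class` argument is the name used by newer TRL releases; older versions accept the tokenizer via `tokenizer=` instead. If `SFTTrainer` rejects `processing_class`, a version-dependent fallback would look like this (an assumption about your installed TRL version, not part of the original notebook):

```python
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,  # older TRL versions use `tokenizer=` instead of `processing_class=`
    train_dataset=ds["train"],
    args=training_args,
)
```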
Step 6: Start Training
Begin the fine-tuning process by calling the `train` method on the trainer.

```python
trainer.train()
```
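Since `report_to="wandb"` is set, training metrics are logged to Weights & Biases, and you will typically be prompted for a W&B API key on the first run. Once training finishes, you can close the run explicitly; this is optional and not part of the original notebook.

```python
import wandb

# Mark the Weights & Biases run as finished so all logs are flushed
wandb.finish()
```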
Step 7: Save the Fine-Tuned Model
After training, save the fine-tuned model and tokenizer to a local directory.
Save Model and Tokenizer:
- Use the `save_pretrained` method to save the model and tokenizer.

```python
save_directory = "/content/my_model"
model.save_pretrained(save_directory)
tokenizer.save_pretrained(save_directory)
```
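As a quick check that the saved artifacts work, you can reload them and run a short generation. This is a sketch reusing the pipeline pattern from Step 2; the prompt is just an example.

```python
# Reload the fine-tuned model and tokenizer from the save directory
ft_model = AutoModelForCausalLM.from_pretrained(save_directory)
ft_tokenizer = AutoTokenizer.from_pretrained(save_directory)

ft_pipe = pipeline(
    "text-generation",
    model=ft_model,
    tokenizer=ft_tokenizer,
    device=0 if device == "cuda" else -1,
)
print(ft_pipe("Explain AGI?", max_new_tokens=200))
```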
Zip and Download the Model:
- Zip the saved directory and download it for future use.
```python
import shutil
shutil.make_archive(save_directory, 'zip', save_directory)

from google.colab import files
files.download(f"{save_directory}.zip")
```
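Instead of downloading a zip, you can also push the fine-tuned model straight to the Hugging Face Hub. This is a sketch, not part of the original notebook; `your-username/SmolLM2-360M-finetuned` is a placeholder repository id, and pushing requires a token with write access.

```python
# Requires being logged in with a write-enabled Hugging Face token
repo_id = "your-username/SmolLM2-360M-finetuned"  # placeholder repo id
model.push_to_hub(repo_id)
tokenizer.push_to_hub(repo_id)
```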
Model & Quant
| Item | Link |
|---|---|
| Model | SmolLM2-CoT-360M |
| Quantized Version | SmolLM2-CoT-360M-GGUF |
Conclusion
Fine-tuning SmolLM involves setting up the environment, loading the model and dataset, configuring training parameters, and running the training loop. By following these steps, you can adapt SmolLM to your specific use case, whether it’s for reasoning tasks, chat-based applications, or other NLP tasks.
This process is highly customizable, so feel free to experiment with different datasets, hyperparameters, and training strategies to achieve the best results for your project.