Tsunami-0.5-7B-Instruct
TSUNAMI: Transformative Semantic Understanding and Natural Augmentation Model for Intelligence.
The full name TSUNAMI was created by ChatGPT.
Information
Tsunami-0.5-7B-Instruct is a Thai large language model fine-tuned from Qwen2.5-7B on around 60,000 rows of Thai-specific domain data.
Prompt Template
This model uses the ChatML prompt template:

```
<|im_start|>system
{System}<|im_end|>
<|im_start|>user
{User}<|im_end|>
<|im_start|>assistant
{Assistant}
```
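For example, with the system and user messages used in the code below (an illustrative rendering; the English system prompt is just a placeholder), the prompt passed to the model looks like:

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
สวัสดีครับ<|im_end|>
<|im_start|>assistant
```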
How to use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "Tsunami-th/Tsunami-0.5-7B-Instruct"

# Load the model and tokenizer (device_map="auto" places the weights on available GPUs)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "สวัสดีครับ"}  # Thai for "Hello"
]

# Render the conversation with the ChatML template shown above
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

inputs = tokenizer(text, return_tensors="pt")
inputs = inputs.to(model.device)

with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=512)

# Decode only the newly generated tokens (skip the prompt)
response = tokenizer.decode(output[0, len(inputs['input_ids'][0]):], skip_special_tokens=True)
print(response)
```
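If you want more varied output than the default greedy decoding, you can pass standard sampling arguments to `generate`. The snippet below continues the example above; the values are illustrative and not tuned for this model:

```python
with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=512,
        do_sample=True,          # enable sampling instead of greedy decoding
        temperature=0.7,         # illustrative value, not tuned for this model
        top_p=0.9,               # nucleus sampling cutoff
        repetition_penalty=1.05  # mild penalty against repeated text
    )
```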
Author
- Pollakrit Lorprasertkul | [email protected]
- Tsunami-0.5-7B-Instruct is version 0.5, which was not trained on the full dataset.
- Tsunami-1.0-7B-Instruct is coming soon.
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric              | Value |
|---------------------|------:|
| Avg.                | 28.04 |
| IFEval (0-Shot)     | 74.00 |
| BBH (3-Shot)        | 36.14 |
| MATH Lvl 5 (4-Shot) |  0.15 |
| GPQA (0-Shot)       |  7.83 |
| MuSR (0-Shot)       | 12.21 |
| MMLU-PRO (5-Shot)   | 37.92 |