---
license: apache-2.0
base_model: state-spaces/mamba-130m-hf
tokenizer: yhavinga/dutch-llama-tokenizer
datasets: Kalamazooter/GeminiPhiDutch
---
# A Tiny Dutch model, just about semi-coherent

![RatelSlang](RatelSlang-Micro.jpg)

## Overview

An experimental fine-tune of [mamba-130m](https://hf.co/state-spaces/mamba-130m-hf) on the [GeminiPhi dataset](https://hf.co/Kalamazooter/GeminiPhiDutch), using the [dutch-llama-tokenizer by yhavinga](https://huggingface.co/yhavinga/dutch-llama-tokenizer).

## Usage

You need to install `transformers` from `main` until version 4.39.0 is released:
```bash
pip install git+https://github.com/huggingface/transformers@main
```
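
You can verify that the installed build is recent enough by importing the Mamba classes directly:

```python
import transformers

print(transformers.__version__)  # should be a 4.39.0 dev build or later

# Raises an ImportError if the installed version predates Mamba support
from transformers import MambaForCausalLM  # noqa: F401
```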

We also recommend installing both `causal-conv1d` and `mamba-ssm`:

```bash
pip install "causal-conv1d>=1.2.0"
pip install mamba-ssm
```

If either of these two packages is missing, the slower "eager" implementation will be used; otherwise, the more optimised CUDA kernels will be used.
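
As a quick sanity check (assuming, per the note above, that the fallback is decided purely by whether these imports succeed):

```python
# If both optional packages import cleanly, the fused CUDA kernels are available;
# otherwise transformers falls back to the "eager" implementation.
try:
    import causal_conv1d  # noqa: F401
    import mamba_ssm  # noqa: F401
    print("Optimised CUDA kernels available")
except ImportError:
    print("Falling back to the eager implementation")
```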

## Generation
You can use the classic `generate` API:

**Setup (for CUDA)**
```python
from transformers import MambaForCausalLM, AutoTokenizer
import torch

device = torch.device("cuda:0")
tokenizer = AutoTokenizer.from_pretrained("Kalamazooter/RatelSlang-Micro-130M")
model = MambaForCausalLM.from_pretrained("Kalamazooter/RatelSlang-Micro-130M")
model = model.to(device)  # move the weights to the GPU
```
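
If no CUDA device is available, the model also runs on CPU through the eager implementation (slow, but enough for a quick test):

```python
# Pick the GPU when present, otherwise fall back to CPU (eager implementation)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
```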
**Inference**
```python
input_ids = tokenizer("**Vraag: Ik heb 4 schapen, per schaap heb ik 3 lammetjes, hoeveel lammetjes heb ik?\n\n Antwoord:", return_tensors="pt").input_ids.to(device)
out = model.generate(input_ids, max_new_tokens=50)
print(tokenizer.batch_decode(out))
['<s> **Vraag: Ik heb 4 schapen, per schaap heb ik 3 lammetjes, hoeveel lammetjes heb ik?\n\n Antwoord:\n\n1. Bereken het aantal lammetjes dat je hebt: 4 schapen x 3 lammetjes per schaap = 12 lammetjes\n2. Bereken het aantal lammetjes dat je hebt: 12 lam']
```
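
Greedy decoding tends to loop on a model this small (note the repeated step in the output above); enabling sampling is a common mitigation. The exact values here are just a starting point:

```python
# Sample instead of decoding greedily to reduce repetition
out = model.generate(
    input_ids,
    max_new_tokens=50,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.batch_decode(out))
```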

## PEFT finetuning example
In order to fine-tune using the `peft` library, it is recommended to keep the model in float32!

```python
from datasets import load_dataset
from trl import SFTTrainer
from peft import LoraConfig
from transformers import AutoTokenizer, AutoModelForCausalLM, TrainingArguments
tokenizer = AutoTokenizer.from_pretrained("Kalamazooter/RatelSlang-Micro-130M")
model = AutoModelForCausalLM.from_pretrained("Kalamazooter/RatelSlang-Micro-130M")
dataset = load_dataset("Abirate/english_quotes", split="train")  # stock example dataset; see the Dutch variant below
training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=3,
    per_device_train_batch_size=4,
    logging_dir='./logs',
    logging_steps=10,
    learning_rate=2e-3
)
lora_config = LoraConfig(
    r=8,
    target_modules=["x_proj", "embeddings", "in_proj", "out_proj"],
    task_type="CAUSAL_LM",
    bias="none",
)
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    args=training_args,
    peft_config=lora_config,
    train_dataset=dataset,
    dataset_text_field="quote",
)
trainer.train()
```
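
The English quotes dataset above is just the stock PEFT example; a natural variant is to fine-tune on the Dutch GeminiPhi data this model was trained on. A minimal sketch, assuming the dataset has a `train` split and a `text` field (check the dataset card for the actual names):

```python
# Assumption: a "train" split and a "text" field; verify both on the dataset card
dutch_dataset = load_dataset("Kalamazooter/GeminiPhiDutch", split="train")
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    args=training_args,
    peft_config=lora_config,
    train_dataset=dutch_dataset,
    dataset_text_field="text",
)
trainer.train()
```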