---
license: mit
language: de
tags:
- bert
- ner
metrics:
- type: accuracy
  value: 0.922
base_model: "deepset/gbert-base"
---

# NERToxicBERT

This model was trained for token classification of online comments, determining
whether each token contains a vulgarity (swear words, insults, ...).

It is based on GBERT from deepset (https://huggingface.co/deepset/gbert-base), which was mainly trained on Wikipedia.
On top of this model we added a freshly initialized token classification head, which was then trained on our labeled data.
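Attaching a fresh head to GBERT can be sketched as follows (a minimal sketch, not the exact training code; the label names and mapping are assumptions):

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer

# Assumed two-class label set: "O" for ordinary tokens, "Vul" for vulgarities
labels = ["O", "Vul"]
model = AutoModelForTokenClassification.from_pretrained(
    "deepset/gbert-base",        # pretrained GBERT encoder
    num_labels=len(labels),      # the classification head is freshly initialized
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)
tokenizer = AutoTokenizer.from_pretrained("deepset/gbert-base")
```

The head's weights are randomly initialized because `deepset/gbert-base` ships without a token-classification head, which is why it must be trained on labeled data before use.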

# Training

For training, a dataset of 4,500 German comments labeled for toxicity was used.
This dataset is not publicly available, but can be requested from TU Wien (https://doi.org/10.5281/zenodo.10996203).


## Data preparation

The dataset contains additional tags, which are
* Target_Group
* Target_Individual
* Target_Other
* Vulgarity

We decided to use the Vulgarity tag to mark the words that are considered insults.
1,306 comments contained a vulgarity, although 452 of them did not belong to a comment considered toxic.
These comments were split into 1,484 sentences containing vulgarities, yielding a sentence-by-sentence dataset whose tokens are tagged with ['O', 'Vul'].
An 80/10/10 train/validation/test split was used.
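The preparation and split described above can be sketched like this (a toy example: the tokenization, the helper, and the sample data are illustrative assumptions, since the real dataset is not public):

```python
import random

# Hypothetical helper: tag each token "Vul" if it appears in the annotated
# vulgarity vocabulary, "O" otherwise (the real annotation is span-based;
# this lookup is a toy stand-in).
def tag_sentence(tokens, vulgar_words):
    return ["Vul" if tok.lower() in vulgar_words else "O" for tok in tokens]

# Toy data in place of the non-public TU Wien dataset
sentences = [["Du", "bist", "ein", "Idiot", "."], ["Das", "ist", "gut", "."]]
vulgar_words = {"idiot"}
tagged = [(s, tag_sentence(s, vulgar_words)) for s in sentences]

# 80/10/10 train/validation/test split
random.seed(0)
random.shuffle(tagged)
n = len(tagged)
train = tagged[: int(0.8 * n)]
validation = tagged[int(0.8 * n) : int(0.9 * n)]
test = tagged[int(0.9 * n) :]
```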

### Training Setup

Out of 4,500 comments, 1,306 contained vulgarity tags.
In order to identify an optimally performing model for classifying toxic speech, a set of models was trained and evaluated over the following hyperparameters:
- 2 or 6 frozen encoder layers
- 5 or 10 epochs, with a batch size of 8
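Freezing the first encoder layers can be sketched as follows (a sketch only: freezing the embeddings as well is our assumption, and the `model.bert` attribute path assumes the BERT architecture that GBERT uses):

```python
from transformers import AutoModelForTokenClassification

model = AutoModelForTokenClassification.from_pretrained(
    "deepset/gbert-base", num_labels=2
)

# Freeze the embeddings plus the first k encoder layers (k = 2 or 6 in our runs)
k = 2
for param in model.bert.embeddings.parameters():
    param.requires_grad = False
for layer in model.bert.encoder.layer[:k]:
    for param in layer.parameters():
        param.requires_grad = False
```

Frozen parameters receive no gradient updates, so only the upper encoder layers and the new classification head are fine-tuned.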
 

### Model Evaluation


The best model used 2 frozen layers and was evaluated on the training set with the following metrics:

| accuracy | f1 | precision | recall |
|----------|----|-----------|--------|
| 0.922 | 0.761 | 0.815 | 0.764 |
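Token-level metrics of this kind can be computed, for example, with scikit-learn (an illustrative sketch on toy labels, not the actual evaluation script):

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Toy flattened token-level labels: 1 = "Vul", 0 = "O" (illustrative, not real data)
y_true = [0, 0, 1, 0, 1, 0]
y_pred = [0, 0, 1, 0, 0, 0]

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="binary", pos_label=1
)
print(accuracy, precision, recall, f1)
```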


## Usage 

Here is how to use this model to tag the vulgar tokens of a given text in PyTorch:

```python
from transformers import pipeline

# Replace this with your own checkpoint
model_checkpoint = "./saved_model"
token_classifier = pipeline(
    "token-classification", model=model_checkpoint, aggregation_strategy="simple"
)

print(token_classifier("Die Fpö hat also auch ein Bescheuert-Gen in ihrer politischen DNA."))
# [{'entity_group': 'Vul', 'score': 0.9548946, 'word': 'Bescheuert - Gen', 'start': 26, 'end': 40}]
```