---
license: mit
language: de
tags:
- bert
- ner
metrics:
- type: accuracy
  value: 0.922
base_model: "deepset/gbert-base"
---

# NERToxicBERT

This model was trained to perform token classification on online comments, determining
whether a token contains a vulgarity or not (swear words, insults, ...).

This model is based on GBERT from deepset (https://huggingface.co/deepset/gbert-base), which was mainly trained on Wikipedia.
On top of it we added a freshly initialized token classification head, which had to be trained on our labeled data.
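
A minimal sketch of how such a head can be attached with `transformers`, assuming the label set `['O', 'Vul']` described under Data preparation (this is not the exact training code):

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer

# Label set assumed from the data preparation section below
label_list = ["O", "Vul"]

tokenizer = AutoTokenizer.from_pretrained("deepset/gbert-base")
model = AutoModelForTokenClassification.from_pretrained(
    "deepset/gbert-base",
    num_labels=len(label_list),
    id2label=dict(enumerate(label_list)),
    label2id={label: i for i, label in enumerate(label_list)},
)
# model.classifier is freshly initialized and must be fine-tuned
# on labeled data before the model produces useful predictions.
```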

# Training

For training, a dataset of 4500 German comments labeled for toxicity was used.
This dataset is not publicly available, but can be requested from TU Wien ([email protected]).

## Data preparation

The dataset contains additional tags, which are:
* Target_Group
* Target_Individual
* Target_Other
* Vulgarity

We decided to use the Vulgarity tag to mark the words which are considered an insult.
1306 comments contained a Vulgarity tag, although 452 of these belonged to comments that were not considered toxic overall.
These comments were split into 1484 sentences containing vulgarities.
The data was prepared as a sentence-by-sentence dataset in which each token is tagged with one of the labels ['O', 'Vul'] (1484 sentences).
An 80/10/10 train/validation/test split was used, as sketched below.
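
The split could be reproduced along these lines (a minimal sketch with 🤗 `datasets`; the `tokens`/`ner_tags` contents are hypothetical stand-ins, since the real data must be requested from TU Wien):

```python
from datasets import Dataset, DatasetDict

# Hypothetical stand-in for the 1484 prepared sentences; the real data
# is not public and must be requested from TU Wien.
data = Dataset.from_dict({
    "tokens": [["Das", "ist", "bescheuert", "."]] * 1484,
    "ner_tags": [["O", "O", "Vul", "O"]] * 1484,
})

# 80/10/10: carve off 20 % first, then halve it into validation and test.
split = data.train_test_split(test_size=0.2, seed=42)
holdout = split["test"].train_test_split(test_size=0.5, seed=42)
dataset = DatasetDict({
    "train": split["train"],
    "validation": holdout["train"],
    "test": holdout["test"],
})
```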

### Training Setup

Out of 4500 comments, 1306 contained vulgarity tags.
In order to identify an optimally performing model for classifying toxic speech, several models were trained and evaluated over the following hyperparameter grid (layer freezing is illustrated in the sketch after this list):
- 2 or 6 frozen encoder layers
- 5 or 10 epochs, with a batch size of 8
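
A minimal sketch of the layer freezing, assuming the standard BERT module layout exposed by `transformers` (the exact training script is not included in this repository):

```python
from transformers import AutoModelForTokenClassification

model = AutoModelForTokenClassification.from_pretrained(
    "deepset/gbert-base", num_labels=2
)

# Freeze the embeddings and the first n encoder layers (2 or 6 in the grid);
# only the remaining layers and the fresh classification head are updated.
n_frozen = 2
for param in model.bert.embeddings.parameters():
    param.requires_grad = False
for layer in model.bert.encoder.layer[:n_frozen]:
    for param in layer.parameters():
        param.requires_grad = False
```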

### Model Evaluation

The best model used 2 frozen layers and was evaluated on the training set with the following metrics:

| Accuracy | F1    | Precision | Recall |
|----------|-------|-----------|--------|
| 0.922    | 0.761 | 0.815     | 0.764  |
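
Comparable token-level metrics can be computed, for example, with the `seqeval` metric from 🤗 `evaluate` (a sketch, not the original evaluation code; the tag lists are hypothetical, and a `B-` prefix is added because seqeval expects BIO-style spans):

```python
import evaluate

# seqeval expects BIO-style tags, so the Vul label is prefixed with "B-".
seqeval = evaluate.load("seqeval")
references = [["O", "O", "B-Vul", "O"], ["B-Vul", "O"]]
predictions = [["O", "O", "B-Vul", "O"], ["O", "O"]]

results = seqeval.compute(predictions=predictions, references=references)
print(results["overall_accuracy"], results["overall_f1"],
      results["overall_precision"], results["overall_recall"])
```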

## Usage

Here is how to use this model to tag the vulgarities in a given text in PyTorch:

```python
from transformers import pipeline

# Replace this with your own checkpoint
model_checkpoint = "./saved_model"
token_classifier = pipeline(
    "token-classification", model=model_checkpoint, aggregation_strategy="simple"
)

print(token_classifier("Die Fpö hat also auch ein Bescheuert-Gen in ihrer politischen DNA."))
```

[{'entity_group': 'Vul', 'score': 0.9548946, 'word': 'Bescheuert - Gen', 'start': 26, 'end': 40}]