---
language:
- en
- it
- sl
tags:
- hate-speech-detection
- multilingual
- XLM-R
license: apache-2.0
metrics:
- accuracy
- precision
- recall
- f1
---

# Multilingual Hate Speech Classifier

## Model Description

This model is a multilingual hate speech classifier based on the XLM-R architecture. It is trained to detect hate speech in English (EN), Italian (IT), and Slovene (SL). The model leverages multilingual datasets and incorporates techniques for learning from disagreement among annotators, making it more robust at identifying nuanced hate speech across these languages.

## Model Details

- **Model Name:** Multilingual Hate Speech Classifier
- **Model Architecture:** XLM-R (XLM-RoBERTa)
- **Languages Supported:** English (EN), Italian (IT), Slovene (SL)

## Training Data

The model is trained on a multilingual dataset of Twitter and YouTube comments in EN, IT, and SL.

### Techniques Used

- **Multilingual Training:** The model is trained on datasets in multiple languages, allowing it to generalize well across different languages.
- **Learning from Disagreement:** The model incorporates techniques to learn from annotator disagreement, improving its ability to handle ambiguous and nuanced cases of hate speech.
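
One common way to implement learning from disagreement is to train on soft labels derived from the full annotator label distribution instead of a collapsed majority vote. A minimal sketch of that general idea (an illustration only, not necessarily this model's exact training recipe; the label names are placeholders):

```python
from collections import Counter

def soft_label(annotations, classes):
    """Turn one item's annotator labels into a soft training target:
    the empirical distribution over classes, not a hard majority vote."""
    counts = Counter(annotations)
    return [counts[c] / len(annotations) for c in classes]

# Three annotators split 2-1 on a comment; the target keeps that split
# instead of collapsing it to the majority class.
target = soft_label(["hate", "hate", "not-hate"], ["not-hate", "hate"])
# target == [1/3, 2/3]
```

Training then minimizes cross-entropy against `target` rather than a one-hot vector, so the model sees how contested each example is.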

## Evaluation Metrics

The model's performance is evaluated using the following metrics:
- **Krippendorff's Ordinal Alpha**
- **Accuracy**
- **Precision**
- **Recall**
- **F1 Score**
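
For a binary hate / not-hate split, the last four metrics reduce to counts of true and false positives. A minimal pure-Python sketch of the per-class computation (for illustration only; the label names are placeholders and this card does not include the actual evaluation code):

```python
def binary_metrics(y_true, y_pred, positive="hate"):
    """Accuracy, plus precision/recall/F1 for the positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1
```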

These metrics are computed for each language separately, as well as across the entire multilingual dataset. Krippendorff's ordinal alpha is used to measure both the disagreement among the annotators themselves and the disagreement between the annotators and the model.
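
Krippendorff's ordinal alpha has no standard-library implementation; its coincidence-matrix formulation can be sketched as follows (an illustration of the metric itself, not the evaluation code used for this model):

```python
from collections import Counter

def krippendorff_ordinal_alpha(ratings):
    """Krippendorff's alpha with the ordinal distance metric.

    `ratings` is a list of units; each unit is the list of ordinal labels
    assigned to that item by its raters. Units with fewer than two
    ratings are ignored, as they carry no agreement information.
    """
    # Coincidence matrix: o[(c, k)] counts co-occurring value pairs,
    # each pair weighted by 1 / (m - 1) for a unit with m ratings.
    o = Counter()
    for unit in ratings:
        m = len(unit)
        if m < 2:
            continue
        for i, c in enumerate(unit):
            for j, k in enumerate(unit):
                if i != j:
                    o[(c, k)] += 1.0 / (m - 1)
    values = sorted({v for pair in o for v in pair})
    n_c = {c: sum(o[(c, k)] for k in values) for c in values}
    n = sum(n_c.values())

    # Ordinal distance: squared sum of marginals between the two ranks.
    def delta(c, k):
        lo, hi = min(c, k), max(c, k)
        between = sum(n_c[g] for g in values if lo <= g <= hi)
        return (between - (n_c[lo] + n_c[hi]) / 2.0) ** 2

    d_o = sum(o[(c, k)] * delta(c, k) for c in values for k in values)
    d_e = sum(n_c[c] * n_c[k] * delta(c, k)
              for c in values for k in values if c != k) / (n - 1)
    return 1.0 if d_e == 0 else 1.0 - d_o / d_e
```

Perfect agreement yields 1.0, chance-level agreement 0.0, and systematic disagreement values below 0.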

### Primary Use Case

The primary use case for this model is to automatically detect and moderate hate speech on social media platforms, online forums, and other digital content platforms. This can help reduce the spread of harmful content and maintain a safer online environment.
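
In a moderation pipeline, a classifier like this one typically sits behind a score threshold. A hypothetical sketch, where `score_fn` stands in for the model's hate-speech probability:

```python
def moderate(comments, score_fn, threshold=0.5):
    """Split comments into (flagged, kept) by hate-speech score.
    `score_fn` maps a comment to a probability-like score in [0, 1]."""
    flagged, kept = [], []
    for text in comments:
        (flagged if score_fn(text) >= threshold else kept).append(text)
    return flagged, kept
```

Flagged items would then typically go to human review rather than being removed automatically, given the false-positive risk noted below.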

### Limitations

- The model may struggle with extremely nuanced cases where context is critical.
- False positives can occur, where non-hateful content is incorrectly classified as hate speech.
- Performance may vary for languages not included in the training data.