Update README.md
README.md CHANGED
@@ -28,3 +28,47 @@ configs:
  - split: test
    path: data/test-*
---

# Combined Dataset

This dataset contains tweets classified into various categories, with an additional moderator label to indicate safety.

## Features

- **tweet**: The text of the tweet.
- **class**: The category of the tweet (e.g., `neutral`, `hatespeech`, `counterspeech`).
- **data**: Additional information about the tweet.
- **moderator**: A label indicating if the tweet is `safe` or `unsafe`.

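A quick way to sanity-check these fields after loading is to count the distinct values of `class` and `moderator`. This is only a sketch; it assumes the placeholder dataset identifier used in the Example section below.

```python
from collections import Counter
from datasets import load_dataset

# Placeholder repository name, as in the Example section below.
dataset = load_dataset("your-hf-username/combined-dataset")

# Column access returns plain Python lists, so Counter gives the label distribution.
print(Counter(dataset["train"]["class"]))      # e.g. neutral / hatespeech / counterspeech
print(Counter(dataset["train"]["moderator"]))  # safe / unsafe
```
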
## Usage

This dataset is intended for training models for text classification, hate speech detection, or sentiment analysis.

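As an illustration, the snippet below fine-tunes a small transformer on the binary `safe`/`unsafe` moderator label. It is a minimal sketch, not the authors' training setup: the base model, the hyperparameters, and the placeholder dataset name are assumptions.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("your-hf-username/combined-dataset")  # placeholder name
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

label2id = {"safe": 0, "unsafe": 1}

def preprocess(batch):
    # Tokenize the tweet text and turn the moderator label into an integer id.
    enc = tokenizer(batch["tweet"], truncation=True, padding="max_length", max_length=128)
    enc["labels"] = [label2id[m] for m in batch["moderator"]]
    return enc

encoded = dataset.map(preprocess, batched=True,
                      remove_columns=dataset["train"].column_names)

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2,
    id2label={0: "safe", 1: "unsafe"}, label2id=label2id)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="moderator-classifier",
                           per_device_train_batch_size=32,
                           num_train_epochs=1),
    train_dataset=encoded["train"],
    eval_dataset=encoded["test"],
)
trainer.train()
```

Any encoder with a sequence-classification head could be substituted for the DistilBERT checkpoint used here.
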
## Licensing

This dataset is licensed under the [MIT License](https://opensource.org/licenses/MIT).

## Source Datasets

The Hatebase dataset has been curated from multiple benchmark datasets and converted into a binary classification problem. The following benchmarks were used:

- **HateXplain**: hate, offensive, and neither labels converted into a binary classification.
- **Peace Violence**: four peace/violence classes converted into a binary classification.
- **Hate Offensive**: hate, offensive, and neither labels converted into a binary classification.
- **OWS**
- **Go Emotion**
- **CallmeSexistBut..**: binary classification along with a toxicity score.
- **Slur**: slur-based multiclass problem (DEG, NDEG, HOM, APPR).
- **Stormfront**: white-supremacist forum posts with binary classification.
- **UCberkley_HS**: multiclass hate speech, counter hate speech, or neutral; each class has a continuous score, which is converted in our case.
- **BIC**: each of the three classes (offensive, intent, and lewd/sexual) has a per-class score that is converted into a binary label using a threshold of 0.5 (see the conversion sketch below).

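The preprocessing scripts are not part of this card, but the score-to-label conversion mentioned for UCberkley_HS and BIC can be pictured as a simple thresholding step. The field names and the helper below are hypothetical, for illustration only.

```python
def binarize(score: float, threshold: float = 0.5) -> int:
    """Map a continuous per-class score to a binary label using the 0.5 threshold."""
    return 1 if score >= threshold else 0

# Hypothetical per-class scores for a single example (not taken from the dataset).
scores = {"offensive": 0.72, "intent": 0.31, "lewd": 0.05}
binary_labels = {name: binarize(s) for name, s in scores.items()}
print(binary_labels)  # {'offensive': 1, 'intent': 0, 'lewd': 0}
```
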
- Train examples: 222196
- Test examples: 24689

## Example

```python
from datasets import load_dataset

dataset = load_dataset("your-hf-username/combined-dataset")
print(dataset['train'][0])
```
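
The printed record is a dictionary containing the four fields described under Features: `tweet`, `class`, `data`, and `moderator`.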