---
dataset_info:
  features:
  - name: tweet
    dtype: string
  - name: category
    dtype: string
  - name: data
    dtype: string
  - name: class
    dtype: string
  splits:
  - name: train
    num_bytes: 34225882
    num_examples: 236738
  - name: test
    num_bytes: 3789570
    num_examples: 26313
  download_size: 20731348
  dataset_size: 38015452
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---

# Combined Dataset

This dataset contains tweets classified into various categories, with an additional label indicating whether each tweet is safe or unsafe.

## Features

- **tweet**: The text of the tweet.
- **category**: The category of the tweet (e.g., `neutral`, `hatespeech`, `counterspeech`).
- **data**: Additional information about the tweet.
- **class**: A label indicating whether the tweet is `safe` or `unsafe`.
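
For orientation, a single record is expected to have roughly the shape below. This is an illustrative sketch only: the field names follow the schema above, but the values are invented placeholders rather than real rows from the dataset.

```python
# Illustrative record shape only; values are placeholders, not actual data.
example_record = {
    "tweet": "some tweet text",
    "category": "hatespeech",   # e.g., neutral / hatespeech / counterspeech
    "data": "additional metadata about the tweet",
    "class": "unsafe",          # binary safety label: safe / unsafe
}
```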

## Usage

This dataset is intended for training models for text classification, hate speech detection, or sentiment analysis.
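
As a concrete starting point, the sketch below shows one way to fine-tune a small transformer on the binary safe/unsafe task with the Hugging Face `transformers` Trainer. The repository id, the `distilbert-base-uncased` checkpoint, and the `safe`/`unsafe` label mapping are assumptions for illustration, not guarantees from this card.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Placeholder repo id; replace with this dataset's actual path on the Hub.
dataset = load_dataset("your-hf-username/combined-dataset")

checkpoint = "distilbert-base-uncased"  # assumed base model for illustration
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# Assumed label set; adjust if the `class` column uses different strings.
label2id = {"safe": 0, "unsafe": 1}
id2label = {v: k for k, v in label2id.items()}

def preprocess(batch):
    enc = tokenizer(batch["tweet"], truncation=True, max_length=128)
    enc["labels"] = [label2id[c] for c in batch["class"]]
    return enc

encoded = dataset.map(preprocess, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint, num_labels=2, id2label=id2label, label2id=label2id
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="combined-dataset-clf",
                           num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=encoded["train"],
    eval_dataset=encoded["test"],
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
)
trainer.train()
```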

## Licensing

This dataset is licensed under the [MIT License](https://opensource.org/licenses/MIT).

### Hatebase: curated from multiple benchmark datasets

The Hatebase data set has been curated from multiple benchmark datasets and converted into a binary classification problem. The following benchmark datasets were used:

- **HateXplain**: hate / offensive / neither labels converted to binary classification.
- **Peace Violence**: peace and violence, four classes, converted to binary classification.
- **Hate Offensive**: hate / offensive / neither labels converted to binary classification.
- **OWS**
- **Go Emotion**
- **CallmeSexistBut..**: binary classification along with a toxicity score.
- **Slur**: slur-based multiclass problem (DEG, NDEG, HOM, APPR).
- **Stormfront**: white-supremacist forum posts with binary classification.
- **UCberkley_HS**: multiclass hate speech, counter hate speech, or neutral (each class has a continuous score, which is converted to a discrete label in our case).
- **BIC**: each of three classes (offensive, intent, and lewd/sexual) has a score that is converted to binary using a threshold of 0.5; a sketch of this thresholding appears below.
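
A minimal sketch of that 0.5-threshold conversion, assuming the three scores are floats in `[0, 1]` and that a row is flagged `unsafe` if any score crosses the threshold (both the column names and the aggregation rule are illustrative assumptions):

```python
import pandas as pd

# Hypothetical per-class scores; the real BIC column names may differ.
df = pd.DataFrame({
    "offensive": [0.1, 0.7, 0.4],
    "intent":    [0.0, 0.6, 0.2],
    "lewd":      [0.9, 0.1, 0.3],
})

THRESHOLD = 0.5

# Binarize each score column, then collapse to a single safe/unsafe label.
binary = df[["offensive", "intent", "lewd"]] >= THRESHOLD
df["class"] = binary.any(axis=1).map({True: "unsafe", False: "safe"})
print(df)
```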

- Train examples: 222196
- Test examples: 24689

## Example

```python
from datasets import load_dataset

# Placeholder repo id; replace with this dataset's actual path on the Hub.
dataset = load_dataset("your-hf-username/combined-dataset")

# Inspect the first training example.
print(dataset['train'][0])
```
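
As a quick follow-up, the label distribution of each split can be checked like this (assuming the label column is `class`):

```python
from collections import Counter

print(Counter(dataset["train"]["class"]))
print(Counter(dataset["test"]["class"]))
```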