---
dataset_info:
  features:
  - name: tweet
    dtype: string
  - name: category
    dtype: string
  - name: data
    dtype: string
  - name: class
    dtype: string
  splits:
  - name: train
    num_bytes: 34225882
    num_examples: 236738
  - name: test
    num_bytes: 3789570
    num_examples: 26313
  download_size: 20731348
  dataset_size: 38015452
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---
# Combined Dataset

This dataset contains tweets classified into various categories, with an additional moderation label indicating whether each tweet is `safe` or `unsafe`.

## Features

- **tweet**: The text of the tweet.
- **category**: The category of the tweet (e.g., `neutral`, `hatespeech`, `counterspeech`).
- **data**: Additional information about the tweet.
- **class**: The moderation label indicating whether the tweet is `safe` or `unsafe`.
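
A quick way to inspect these columns with the `datasets` library. The repository id is the placeholder used in the example at the end of this card, and the `unsafe` filter assumes the `class` values described above:

```python
from datasets import load_dataset

# Placeholder repository id; substitute the real one.
train = load_dataset("your-hf-username/combined-dataset", split="train")
print(train.features)  # column names and dtypes

# Keep only rows whose moderation label is `unsafe` (assumed value).
unsafe = train.filter(lambda row: row["class"] == "unsafe")
print(len(unsafe))
```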

## Usage

This dataset is intended for training models on tasks such as text classification, hate speech detection, and sentiment analysis.
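
A minimal fine-tuning sketch along those lines: it trains a generic `transformers` classifier on the `tweet` text against the `safe`/`unsafe` label. The base checkpoint, hyperparameters, and `label2id` mapping are illustrative assumptions, not part of this dataset:

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

ds = load_dataset("your-hf-username/combined-dataset")  # placeholder id

# Assumed integer mapping for the string moderation label described above.
label2id = {"safe": 0, "unsafe": 1}

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def encode(batch):
    # Tokenize the tweet text and attach integer labels.
    enc = tokenizer(batch["tweet"], truncation=True, max_length=128)
    enc["labels"] = [label2id[c] for c in batch["class"]]
    return enc

tokenized = ds.map(encode, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased",
    num_labels=2,
    id2label={v: k for k, v in label2id.items()},
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="combined-dataset-clf",
                           num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    tokenizer=tokenizer,  # enables default padding collator
)
trainer.train()
```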

## Licensing

This dataset is licensed under the [MIT License](https://opensource.org/licenses/MIT).


### Source Datasets

The Hatebase portion of this dataset was curated from multiple benchmark datasets, each converted into a binary classification problem:

- **HateXplain**: `hate`, `offensive`, and `neither` labels converted to binary classification.
- **Peace Violence**: four peace/violence classes converted to binary classification.
- **Hate Offensive**: `hate`, `offensive`, and `neither` labels converted to binary classification.
- **OWS**
- **GoEmotions**
- **CallmeSexistBut..**: binary classification along with a toxicity score.
- **Slur**: multiclass problem based on slur usage (`DEG`, `NDEG`, `HOM`, `APPR`).
- **Stormfront**: posts from a white-supremacist forum, binary classification.
- **UCberkley_HS**: multiclass (`hatespeech`, `counterspeech`, `neutral`); each class has a continuous score, which is converted for this dataset.
- **BIC**: three classes (`offensive`, `intent`, `lewd`/sexual), each with a categorical score converted to binary using a threshold of 0.5 (see the sketch below).
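
As a hedged illustration of that conversion, the sketch below binarizes a per-class score at the 0.5 threshold described for UCberkley_HS and BIC; the function and label values are illustrative, not code from the curation pipeline:

```python
# Sketch of the 0.5 thresholding used for the score-based sources above.
def binarize(score: float, threshold: float = 0.5) -> int:
    """Map a continuous or categorical class score to a binary label."""
    return 1 if score >= threshold else 0

assert binarize(0.73) == 1  # at or above the threshold -> positive class
assert binarize(0.21) == 0  # below the threshold -> negative class
```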


Train examples: 222,196
Test examples: 24,689

## Example

```python
from datasets import load_dataset

# Replace "your-hf-username/combined-dataset" with the actual repository id.
dataset = load_dataset("your-hf-username/combined-dataset")
print(dataset["train"][0])
```