---
license: apache-2.0
language:
- en
pretty_name: toxic-comments
size_categories:
- 10K<n<100K
---

# Toxic-comments

The dataset contains labeled examples of toxic and non-toxic text comments. Each comment is labeled with one of the two classes: "hate" or "not hate."

The classes are balanced, with an equal number of samples for each class.

The dataset is a smaller version of the original Toxic Comment Classification dataset.

The original dataset contains an unequal distribution of “_hate_” and “_not hate_” samples for multi-class classification.

The smaller version contains an equal number of “_hate_” and “_not hate_” samples.

## Dataset Details

Dataset Name: Toxic-Content Dataset

Language: English

Total Size: 70,157 demonstrations

## Contents

⚠️ THE EXAMPLES IN THIS DATASET CONTAIN TOXIC/OFFENSIVE LANGUAGE ⚠️

The Toxic-Content Dataset can be utilized to train models to detect harmful/toxic text.
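
As an illustration only (not an official recipe), the sketch below trains a simple TF-IDF plus logistic-regression baseline with scikit-learn. The column names `text` and `label` are assumptions, not confirmed by this card; check `dataset.column_names` after loading and adjust accordingly.

```python
from datasets import load_dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Load the training split from the Hugging Face Hub.
dataset = load_dataset("AiresPucrs/toxic_content", split="train")

texts = dataset["text"]    # assumed column name
labels = dataset["label"]  # assumed column name ("hate" / "not hate")

# Hold out 20% of the (balanced) data for evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=42, stratify=labels
)

# Bag-of-words baseline: TF-IDF features + logistic regression.
vectorizer = TfidfVectorizer(max_features=50_000, ngram_range=(1, 2))
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.fit_transform(X_train), y_train)

print(classification_report(y_test, clf.predict(vectorizer.transform(X_test))))
```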

## How to use

It is available only in English.

```python
from datasets import load_dataset

# Load the training split of the dataset from the Hugging Face Hub.
dataset = load_dataset("AiresPucrs/toxic_content", split="train")
```
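
To sanity-check the class balance described above, a minimal sketch (again, `label` is an assumed column name; inspect `dataset.column_names` if it differs):

```python
from collections import Counter
from datasets import load_dataset

dataset = load_dataset("AiresPucrs/toxic_content", split="train")

# List the actual column names, then count rows per class;
# the card states the two classes are balanced.
print(dataset.column_names)
print(Counter(dataset["label"]))  # "label" is an assumed column name
```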

## License

Dataset License

The Toxic-Content Dataset is licensed under the Apache License, Version 2.0.

## Disclaimer

This dataset is provided as is, without any warranty or guarantee of its accuracy or suitability for any purpose. The creators and contributors of this dataset are not liable for any damages or losses arising from its use. Please review and comply with the licenses and terms of the original datasets before use.