Modalities: Text
Formats: csv
Languages: English
Libraries: Datasets, pandas
License: Apache-2.0
Commit 24471c2 (verified) by dieineb, parent 62d0080: "Update README.md" (README.md: +11 −2)
---
{}
---
The dataset contains labeled examples of toxic and non-toxic text comments. Each comment is labeled with one of two classes: "hate" or "not hate."

The classes are balanced, with an equal number of samples for each class.

The dataset is a smaller version of the original [Toxic Comment Classification Challenge Dataset](https://github.com/tianqwang/Toxic-Comment-Classification-Challenge), originally curated by the [Conversation AI](https://conversationai.github.io/) team. The original dataset contains an unequal distribution of “_hate_” and “_not hate_” samples for multi-class classification. The smaller version contains an equal number of “_hate_” and “_not hate_” samples.

## Dataset Details

Dataset Name: Toxic-Content Dataset

Language: English

Total Size: Over 70,157 demonstrations

## Contents

⚠️ THE EXAMPLES IN THIS DATASET CONTAIN TOXIC/OFFENSIVE LANGUAGE ⚠️

The dataset consists of data frames with the following columns:

- non_toxic_response: Text that was evaluated as non-toxic.
- toxic_response: Text that was evaluated as toxic.

{
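For training, the two paired columns are typically flattened into single (text, label) rows. A minimal pandas sketch, using placeholder rows that only mimic the schema above (the real entries are offensive and are not reproduced here):

```python
import pandas as pd

# Placeholder rows that mimic the dataset's two-column schema;
# the real comments are offensive and are not reproduced here.
df = pd.DataFrame({
    "non_toxic_response": ["thanks for the explanation", "good point, well argued"],
    "toxic_response": ["<offensive comment>", "<offensive comment>"],
})

# Flatten the paired columns into (text, label) rows, the usual shape
# for binary toxicity classification; balance is preserved by construction.
labeled = pd.concat(
    [
        df[["non_toxic_response"]]
        .rename(columns={"non_toxic_response": "text"})
        .assign(label="not hate"),
        df[["toxic_response"]]
        .rename(columns={"toxic_response": "text"})
        .assign(label="hate"),
    ],
    ignore_index=True,
)

print(labeled["label"].value_counts().to_dict())
```

Because every row contributes exactly one "hate" and one "not hate" example, the flattened frame stays balanced.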
 
Toxic-Content Dataset can be utilized to train models to detect harmful/toxic text.

## How to use

The dataset is available only in English.

```python
from datasets import load_dataset

dataset = load_dataset("AiresPucrs/toxic_content")
```
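Because the classes are balanced, a stratified split keeps both partitions at 50/50. A sketch with pandas on synthetic rows (the flattened text/label layout is an assumption for illustration, not the dataset's native column layout):

```python
import pandas as pd

# Synthetic (text, label) rows standing in for the flattened, balanced dataset.
df = pd.DataFrame({
    "text": [f"comment {i}" for i in range(10)],
    "label": ["hate", "not hate"] * 5,
})

# Stratified 80/20 split: sample test rows within each label group so
# both partitions keep the 50/50 class balance.
test = df.groupby("label").sample(frac=0.2, random_state=0)
train = df.drop(test.index)

print(train["label"].value_counts().to_dict())
print(test["label"].value_counts().to_dict())
```

Sampling per label group (rather than over the whole frame) is what guarantees each partition inherits the balance.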

## License

The Toxic-Content Dataset is licensed under the Apache License, Version 2.0. See the LICENSE file for more details.