---
{}
---
The dataset contains labeled examples of toxic and non-toxic text comments. Each comment is labeled with one of two classes: "hate" or "not hate". The classes are balanced, with an equal number of samples in each class.

The dataset is a smaller version of the original [Toxic Comment Classification Challenge Dataset](https://github.com/tianqwang/Toxic-Comment-Classification-Challenge), originally curated by the [Conversation AI](https://conversationai.github.io/) team. The original dataset has an unequal distribution of “_hate_” and “_not hate_” samples for multi-class classification; this smaller version contains an equal number of “_hate_” and “_not hate_” samples.

## Dataset Details
- Dataset Name: Toxic-Content Dataset
- Language: English
- Total Size: over 70,157 examples

## Contents
⚠️ THE EXAMPLES IN THIS DATASET CONTAIN TOXIC/OFFENSIVE LANGUAGE ⚠️

The dataset consists of data frames with the following columns:

- `non_toxic_response`: text evaluated as non-toxic.
- `toxic_response`: text evaluated as toxic.

An example record:

```json
{
  "toxic_response": "I think you should shut up your big mouth.",
  "non_toxic_response": "I do not agree with you."
}
```
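Since each row pairs a toxic comment with a non-toxic one, a common preprocessing step is to flatten the pairs into individual (text, label) rows before training. A minimal sketch with pandas; the column names come from the list above, but the flattening step itself is an assumption about how the data might be consumed, not part of the dataset:

```python
import pandas as pd

# Toy frame with the two columns described above (stand-in for the real data).
df = pd.DataFrame({
    "toxic_response": ["I think you should shut up your big mouth."],
    "non_toxic_response": ["I do not agree with you."],
})

# Flatten each pair into two labeled rows: (text, label).
flat = pd.concat(
    [
        df["toxic_response"].rename("text").to_frame().assign(label="hate"),
        df["non_toxic_response"].rename("text").to_frame().assign(label="not hate"),
    ],
    ignore_index=True,
)
print(flat)
```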

## Intended Use
The purpose of this dataset is to serve as an educational resource, primarily for detecting toxicity in textual content and identifying potentially harmful language.

## Use Cases
The Toxic-Content Dataset can be used to train models that detect harmful/toxic text; a minimal baseline is sketched below.
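As one illustration of that use case, here is a sketch of a lightweight baseline using scikit-learn (not referenced by this card; the `texts` and `labels` variables are assumed to come from a flattening step like the one shown under Contents):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Assumed inputs: flattened (text, label) rows, e.g. from the pandas sketch above.
texts = ["I think you should shut up your big mouth.", "I do not agree with you."]
labels = ["hate", "not hate"]

# TF-IDF features feeding a logistic-regression classifier: a simple toxicity baseline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["I do not agree with you."]))
```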

## How to use
The dataset is available only in English. It can be loaded with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("dieineb/toxic_content")
```
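Once loaded, individual records can be inspected directly. A short example follows; the `train` split name is an assumption, as the card does not spell out the split layout:

```python
# Assumes a "train" split with the columns described under Contents.
row = dataset["train"][0]
print(row["toxic_response"])
print(row["non_toxic_response"])

# Optionally convert to pandas for ad-hoc analysis.
df = dataset["train"].to_pandas()
print(df.shape)
```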

## License
The Toxic-Content Dataset is licensed under the Apache License, Version 2.0. See the LICENSE file for more details.

## Disclaimer
This dataset is provided "as is", without any warranty or guarantee of its accuracy or suitability for any purpose. The creators and contributors of this dataset are not liable for any damages or losses arising from its use. Please review and comply with the licenses and terms of the original dataset before use.