nicholasKluge committed · verified · commit 0a1226f · parent 8a0351c

Update README.md

Files changed (1): README.md (+8 −49)
README.md CHANGED
@@ -5,61 +5,20 @@ language:
 pretty_name: toxic-comments
 size_categories:
 - 10K<n<100K
 ---
- # Toxic-comments
-
- The dataset contains labeled examples of toxic and non-toxic text comments. Each comment is labeled with one of two classes: "hate" or "not hate."
-
- The classes are balanced, with an equal number of samples in each class.
-
- The dataset is a smaller version of the original [Toxic Comment Classification Challenge Dataset](https://github.com/tianqwang/Toxic-Comment-Classification-Challenge), curated by the [Conversation AI](https://conversationai.github.io/) team.
-
- The original dataset contains an unequal distribution of “_hate_” and “_not hate_” samples, labeled for multi-class classification.
-
- This smaller version contains an equal number of “_hate_” and “_not hate_” samples.
-
- ## Dataset Details
-
- Dataset Name: Toxic-Content Dataset
-
- Language: English
-
- Total Size: over 70,157 demonstrations
-
- ## Contents
-
- ⚠️ THE EXAMPLES IN THIS DATASET CONTAIN TOXIC/OFFENSIVE LANGUAGE ⚠️
-
- The dataset consists of a data frame with the following columns:
-
- non_toxic_response: text that was evaluated as non-toxic.
-
- toxic_response: text that was evaluated as toxic.
-
- {
-     "toxic_response": "I think you should shut up your big mouth.",
-     "non_toxic_response": "I do not agree with you."
- }
-
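For illustration, a record can be inspected after loading (a minimal sketch; the dataset id and column names come from this card, everything else is an assumption):

```python
from datasets import load_dataset

# Load the dataset's only split
dataset = load_dataset("AiresPucrs/toxic_content", split="train")

# Each record pairs a toxic comment with a non-toxic counterpart
sample = dataset[0]
print(sample["toxic_response"])
print(sample["non_toxic_response"])
```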
- ## Intended Use
-
- This dataset is intended as an educational resource. Its primary aim is to aid in detecting toxicity and identifying potentially harmful language in textual content.
-
- ## Use Cases
-
- The Toxic-Content Dataset can be used to train models that detect harmful/toxic text.
-
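As a minimal sketch of that use case, the paired columns can be flattened into labeled examples and fed to a baseline classifier (scikit-learn is an assumption here; only the dataset id and column names come from the card):

```python
from datasets import load_dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load the paired toxic / non-toxic comments
dataset = load_dataset("AiresPucrs/toxic_content", split="train")

# Flatten the pairs into (text, label) examples: 1 = toxic, 0 = non-toxic
texts = list(dataset["toxic_response"]) + list(dataset["non_toxic_response"])
labels = [1] * len(dataset) + [0] * len(dataset)

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=42
)

# A simple TF-IDF + logistic-regression baseline
vectorizer = TfidfVectorizer(max_features=30_000)
classifier = LogisticRegression(max_iter=1000)
classifier.fit(vectorizer.fit_transform(X_train), y_train)

predictions = classifier.predict(vectorizer.transform(X_test))
print("accuracy:", accuracy_score(y_test, predictions))
```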
- ## How to use
-
- The dataset is available only in English.

 ```python
 from datasets import load_dataset

 dataset = load_dataset("AiresPucrs/toxic_content", split="train")
 ```
- ## License
-
- The Toxic-Content Dataset is licensed under the Apache License, Version 2.0. See the LICENSE file for more details.
-
- ## Disclaimer
-
- This dataset is provided as is, without any warranty or guarantee of its accuracy or suitability for any purpose. The creators and contributors of this dataset are not liable for any damages or losses arising from its use. Please review and comply with the licenses and terms of the original datasets before use.
 
 pretty_name: toxic-comments
 size_categories:
 - 10K<n<100K
+ task_categories:
+ - text-classification
+ tags:
+ - toxic
+ - hate
 ---
+ # Toxic-comments (Teeny-Tiny Castle)
+
+ This dataset is part of the tutorial tied to the [Teeny-Tiny Castle](https://github.com/Nkluge-correa/TeenyTinyCastle), an open-source repository containing educational tools for AI Ethics and Safety research.
+
+ ## How to Use
 ```python
 from datasets import load_dataset

 dataset = load_dataset("AiresPucrs/toxic_content", split="train")
 ```
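As a follow-up to the loading snippet above, a hedged sketch of inspecting the data with pandas (`Dataset.to_pandas()` is standard in the `datasets` library; the expected column names come from the card):

```python
from datasets import load_dataset

dataset = load_dataset("AiresPucrs/toxic_content", split="train")

# Convert to a pandas DataFrame for quick inspection (requires pandas)
df = dataset.to_pandas()
print(df.columns.tolist())  # expected: ['non_toxic_response', 'toxic_response']
print(f"{len(df)} rows")
```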