---
license: apache-2.0
language:
- zh
size_categories:
- n>1T
---

# ChineseWebText 2.0: Large-Scale High-Quality Chinese Web Text with Multi-dimensional and Fine-grained Information

This directory contains the ChineseWebText 2.0 dataset, together with a new tool-chain, MDFG-tool, for constructing large-scale, high-quality Chinese datasets with multi-dimensional and fine-grained information. The ChineseWebText 2.0 code is publicly available on GitHub [(here)](https://github.com/CASIA-LM/ChineseWebText2.0).

## ChineseWebText2.0

- ### Dataset Overview

We have released the latest and largest Chinese dataset, ChineseWebText 2.0, which consists of 3.8 TB of data. Each text in the dataset is accompanied by a quality score, single-label and multi-label domain tags, and a toxicity label with its score, enabling LLM researchers to select data according to their own quality thresholds (a minimal filtering sketch is given after the data example below).
- ### Data Example

```json
{
    "text": "近日,黑龙江省高校校报协会第十四届学术年会暨校报工作交流研讨会在东北农业大学举行。我校10件新闻作品喜获2项一等奖,2项二等奖,6项三等奖……",
    "domain": {
        "single_label": "news",
        "multi_label": ["news", "education"]
    },
    "toxicity": {
        "label": 0,
        "score": 1.0347155694034882e-05
    },
    "quality_score": 0.96044921875
}
```

- "text": [string] Text content of the data sample.
- "single_label": [string] The highest-probability label generated by the domain classification model.
- "multi_label": [list] All labels generated by the domain classification model with probabilities above the threshold.
- "label": [int] Toxicity label generated by the toxicity classification model.
- "score": [float] Toxicity score generated by the toxicity classification model.
- "quality_score": [float] Quality score generated by the quality evaluation model.

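As a rough illustration of how these fields support threshold-based selection, the sketch below reads one data shard and keeps only records with a high quality score and a low toxicity score. It assumes the shards are stored as JSON Lines (one record per line) with the fields shown above; the file name and threshold values are placeholders, not part of the release.

```python
import json

# Placeholder shard name; the actual file layout of the release may differ.
SHARD_PATH = "chinesewebtext2_shard-000.jsonl"

QUALITY_MIN = 0.9    # keep texts whose quality score is above this value
TOXICITY_MAX = 1e-4  # drop texts whose toxicity score is above this value

def iter_selected_texts(path):
    """Yield (text, single_label) pairs that pass both thresholds."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            if not line.strip():
                continue
            record = json.loads(line)
            if record["quality_score"] < QUALITY_MIN:
                continue
            toxicity = record["toxicity"]
            # Assumes label 0 marks non-toxic text, as in the example above.
            if toxicity["label"] != 0 or toxicity["score"] > TOXICITY_MAX:
                continue
            yield record["text"], record["domain"]["single_label"]

if __name__ == "__main__":
    for text, domain in iter_selected_texts(SHARD_PATH):
        print(domain, text[:50])
```

Tightening or loosening `QUALITY_MIN` and `TOXICITY_MAX` trades corpus size against cleanliness, which is the selection workflow these per-text scores are intended to support.
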
## MDFG-tool

### Introduction

We introduce a new toolchain, MDFG-tool (see Figure 1). We begin with a coarse-grained filtering module, which applies rule-based methods to clean the data, focusing on criteria such as text length and sensitive words to ensure data quality. After cleaning, we evaluate text quality with a BERT-based model, which assigns each text a quality score; by choosing an appropriate threshold, users can extract high-quality text data that meets their needs. Next, we use FastText for both single-label and multi-label domain classification of the cleaned data. Meanwhile, we conduct toxicity assessment: a FastText model is used to detect toxic content and assign a toxicity score to each text. This scoring system allows researchers to set thresholds for identifying and selecting harmful texts for further training.

<div align="center">
  <img src="./assets/structure.png" width="50%" />
</div>

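To make the label-assignment step concrete, here is a minimal sketch of threshold-based single-label and multi-label tagging with an off-the-shelf FastText classifier. The model path, label names, and the 0.5 threshold are illustrative assumptions, not the released MDFG-tool models or settings.

```python
import fasttext

MULTI_LABEL_THRESHOLD = 0.5  # assumed probability cutoff for multi-label tags

# Hypothetical path to a trained FastText domain classifier.
model = fasttext.load_model("domain_classifier.bin")

def tag_domains(text: str):
    """Return (single_label, multi_label) in the style of the dataset's domain field."""
    # FastText expects a single line of input; k=-1 returns every label with its
    # probability, sorted from most to least probable.
    labels, probs = model.predict(text.replace("\n", " "), k=-1)
    labels = [label.replace("__label__", "") for label in labels]
    single_label = labels[0]  # highest-probability label
    multi_label = [label for label, prob in zip(labels, probs) if prob >= MULTI_LABEL_THRESHOLD]
    return single_label, multi_label
```

The toxicity stage can follow the same pattern: a FastText classifier whose probability for the toxic label is stored as the score, with a user-chosen threshold deciding which texts to flag.
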
## Citation

Please cite the paper if you use the data or code in this repo.

```shell

```