Commit ffbbe24 by eReverter (1 parent: 5c25213)

Update README.md
Files changed (1):
  1. README.md +37 -2
README.md CHANGED
@@ -27,6 +27,41 @@ language:
  size_categories:
  - 100K<n<1M
  ---
- # Dataset Card for "cnn_dailymail_extractive"

- [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ ## Data Card for Extractive CNN/DailyMail Dataset
+
+ ### Overview
+ This is an extractive version of the [CNN/DailyMail](https://huggingface.co/datasets/cnn_dailymail) dataset. Its structure is identical to the original, except for a minor change in the data representation and the addition of labels that mark the extractive summary.
+
+ The labels are generated with the greedy algorithm proposed by [Liu (2019)](https://arxiv.org/abs/1903.10318). The curation process can be found in the [bertsum-hf](https://github.com/eReverter/bertsum-hf) repository. I am uploading this version in case someone does not want to go through the preprocessing, although Liu also provides a training-ready version in his [bertsum](https://github.com/nlpyang/BertSum) repository.
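+
+ The sketch below illustrates the idea behind this greedy labeling. It is not the exact code from bertsum-hf: ROUGE-1/ROUGE-2 are approximated with plain n-gram overlap, and the cap of three selected sentences is an assumption made here for illustration.
+
+ ```python
+ # Illustrative greedy oracle labeling (in the spirit of Liu, 2019).
+ # Not the bertsum-hf implementation: ROUGE is approximated with set-based n-gram overlap.
+
+ def ngram_set(tokens, n):
+     """Set of n-grams of a token list."""
+     return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}
+
+ def rouge_f(candidate, reference, n):
+     """Rough ROUGE-n F1 based on unique n-gram overlap."""
+     cand, ref = ngram_set(candidate, n), ngram_set(reference, n)
+     if not cand or not ref:
+         return 0.0
+     overlap = len(cand & ref)
+     if overlap == 0:
+         return 0.0
+     precision, recall = overlap / len(cand), overlap / len(ref)
+     return 2 * precision * recall / (precision + recall)
+
+ def greedy_labels(src, tgt, max_sentences=3):
+     """One 0/1 label per sentence in `src`, greedily maximizing
+     ROUGE-1 + ROUGE-2 of the selected sentences against `tgt`."""
+     reference = [tok for sent in tgt for tok in sent.lower().split()]
+     selected, best_score = [], 0.0
+     for _ in range(max_sentences):
+         best_idx = None
+         for i in range(len(src)):
+             if i in selected:
+                 continue
+             candidate = [tok for j in sorted(selected + [i])
+                          for tok in src[j].lower().split()]
+             score = rouge_f(candidate, reference, 1) + rouge_f(candidate, reference, 2)
+             if score > best_score:
+                 best_score, best_idx = score, i
+         if best_idx is None:  # no remaining sentence improves the score
+             break
+         selected.append(best_idx)
+     return [1 if i in selected else 0 for i in range(len(src))]
+ ```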
+
+ In this dataset:
+ - `src` corresponds to `article`,
+ - `tgt` corresponds to the abstract (the `highlights` field of the original),
+ - `labels` marks the sentences that form the extractive summary.
+
+ ### Data Architecture
+
+ Each entry in the dataset contains the following fields:
+ - `id`: a unique `string` identifier for each example.
+ - `src`: a `list[string]` holding the original news article, one sentence per string.
+ - `tgt`: a `list[string]` holding the professionally edited highlights (abstract) of the article.
+ - `labels`: a `list` of binary values, one per sentence in `src`, indicating whether that sentence is part of the extractive summary (1 for selected, 0 otherwise).
+
+ ### Sample Data Entry
+ Here is an illustrative example from the dataset:
+
+ ```json
+ {
+   "id": "1",
+   "src": ["This is the first sentence",
+           "This is the second"],
+   "tgt": ["This is one of the highlights"],
+   "labels": [1, 0]
+ }
+ ```
+
+ In this example, the first sentence of the article is selected as part of the extractive summary (as indicated by the 1 in `labels`), while the second sentence is not (the 0 in `labels`).
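+
+ To make the mapping explicit, the extractive summary of an entry can be recovered by keeping the sentences whose label is 1. A minimal sketch, using the sample entry above:
+
+ ```python
+ # Recover the extractive summary of a single entry (the sample entry above).
+ entry = {
+     "src": ["This is the first sentence", "This is the second"],
+     "labels": [1, 0],
+ }
+ extractive_summary = [sent for sent, keep in zip(entry["src"], entry["labels"]) if keep]
+ print(extractive_summary)  # ['This is the first sentence']
+ ```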
+
+ ### Usage
+
+ The extractive CNN/DailyMail dataset can be used to train and evaluate models for extractive text summarization. Models learn to predict which sentences of the source text contribute to a summary, using the binary `labels` as the reference. The `tgt` (abstract) field can serve as a basis for comparison, helping to assess how well the selected sentences cover the key points of the abstract.
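+
+ As a quick start, a minimal loading sketch with the `datasets` library is shown below. The repository id and the split names are assumptions (the splits are expected to mirror the original train/validation/test); adjust them as needed.
+
+ ```python
+ # Minimal usage sketch with the Hugging Face `datasets` library.
+ # The repository id below is an assumption; replace it with the actual dataset id.
+ from datasets import load_dataset
+
+ dataset = load_dataset("eReverter/cnn_dailymail_extractive")
+
+ example = dataset["train"][0]          # split names assumed to mirror the original
+ print(example["src"][:3])              # first three sentences of the article
+ print(example["tgt"])                  # abstract / highlights
+ print(example["labels"][:3])           # extractive labels for those sentences
+ ```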