Datasets: GEM/dart
Modalities: Text · Languages: English · Libraries: Datasets
parquet-converter committed commit 916040f (1 parent: 0e97cc8)

Update parquet files

.gitattributes DELETED
@@ -1,27 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md DELETED
@@ -1,632 +0,0 @@
- ---
- annotations_creators:
- - none
- language_creators:
- - unknown
- language:
- - en
- license:
- - mit
- multilinguality:
- - unknown
- size_categories:
- - unknown
- source_datasets:
- - original
- task_categories:
- - table-to-text
- task_ids: []
- pretty_name: dart
- tags:
- - data-to-text
- ---
-
- # Dataset Card for GEM/dart
-
- ## Dataset Description
-
- - **Homepage:** n/a
- - **Repository:** https://github.com/Yale-LILY/dart
- - **Paper:** https://aclanthology.org/2021.naacl-main.37/
- - **Leaderboard:** https://github.com/Yale-LILY/dart#leaderboard
- - **Point of Contact:** Dragomir Radev, Rui Zhang, Nazneen Rajani
-
- ### Link to Main Data Card
-
- You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/dart).
-
- ### Dataset Summary
-
- DART is an English dataset that aggregates multiple other data-to-text datasets into a common triple-based format. The new format is completely flat, so a model does not need to learn hierarchical structures, while the full information is still retained.
-
- You can load the dataset via:
- ```
- import datasets
- data = datasets.load_dataset('GEM/dart')
- ```
- The data loader can be found [here](https://huggingface.co/datasets/GEM/dart).
-
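- A minimal sketch of inspecting what the loader returns (the field names follow the GEM data loader in this repository; `datasets` must be installed):
- ```
- import datasets
-
- # The loader exposes train/validation/test splits.
- data = datasets.load_dataset('GEM/dart')
- example = data['train'][0]
-
- # Each example pairs a flat list of [subject, predicate, object] triples
- # with one target sentence; 'references' is populated for validation/test.
- print(example['tripleset'])
- print(example['target'])
- ```
-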
- #### website
- n/a
-
- #### paper
- [ACL Anthology](https://aclanthology.org/2021.naacl-main.37/)
-
- #### authors
- Linyong Nan, Dragomir Radev, Rui Zhang, Amrit Rau, Abhinand Sivaprasad, Chiachun Hsieh, Xiangru Tang, Aadit Vyas, Neha Verma, Pranav Krishna, Yangxiaokang Liu, Nadia Irwanto, Jessica Pan, Faiaz Rahman, Ahmad Zaidi, Mutethia Mutuma, Yasin Tarabar, Ankit Gupta, Tao Yu, Yi Chern Tan, Xi Victoria Lin, Caiming Xiong, Richard Socher, Nazneen Fatema Rajani
-
- ## Dataset Overview
-
- ### Where to find the Data and its Documentation
-
- #### Download
-
- <!-- info: What is the link to where the original dataset is hosted? -->
- <!-- scope: telescope -->
- [Github](https://github.com/Yale-LILY/dart)
-
- #### Paper
-
- <!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
- <!-- scope: telescope -->
- [ACL Anthology](https://aclanthology.org/2021.naacl-main.37/)
-
- #### BibTex
-
- <!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
- <!-- scope: microscope -->
- ```
- @inproceedings{nan-etal-2021-dart,
-     title = "{DART}: Open-Domain Structured Data Record to Text Generation",
-     author = "Nan, Linyong and
-       Radev, Dragomir and
-       Zhang, Rui and
-       Rau, Amrit and
-       Sivaprasad, Abhinand and
-       Hsieh, Chiachun and
-       Tang, Xiangru and
-       Vyas, Aadit and
-       Verma, Neha and
-       Krishna, Pranav and
-       Liu, Yangxiaokang and
-       Irwanto, Nadia and
-       Pan, Jessica and
-       Rahman, Faiaz and
-       Zaidi, Ahmad and
-       Mutuma, Mutethia and
-       Tarabar, Yasin and
-       Gupta, Ankit and
-       Yu, Tao and
-       Tan, Yi Chern and
-       Lin, Xi Victoria and
-       Xiong, Caiming and
-       Socher, Richard and
-       Rajani, Nazneen Fatema",
-     booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
-     month = jun,
-     year = "2021",
-     address = "Online",
-     publisher = "Association for Computational Linguistics",
-     url = "https://aclanthology.org/2021.naacl-main.37",
-     doi = "10.18653/v1/2021.naacl-main.37",
-     pages = "432--447",
-     abstract = "We present DART, an open domain structured DAta Record to Text generation dataset with over 82k instances (DARTs). Data-to-text annotations can be a costly process, especially when dealing with tables which are the major source of structured data and contain nontrivial structures. To this end, we propose a procedure of extracting semantic triples from tables that encodes their structures by exploiting the semantic dependencies among table headers and the table title. Our dataset construction framework effectively merged heterogeneous sources from open domain semantic parsing and spoken dialogue systems by utilizing techniques including tree ontology annotation, question-answer pair to declarative sentence conversion, and predicate unification, all with minimum post-editing. We present systematic evaluation on DART as well as new state-of-the-art results on WebNLG 2017 to show that DART (1) poses new challenges to existing data-to-text datasets and (2) facilitates out-of-domain generalization. Our data and code can be found at https://github.com/Yale-LILY/dart.",
- }
- ```
-
- #### Contact Name
-
- <!-- quick -->
- <!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
- <!-- scope: periscope -->
- Dragomir Radev, Rui Zhang, Nazneen Rajani
-
- #### Contact Email
-
- <!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
- <!-- scope: periscope -->
- {dragomir.radev, r.zhang}@yale.edu, {nazneen.rajani}@salesforce.com
-
- #### Has a Leaderboard?
-
- <!-- info: Does the dataset have an active leaderboard? -->
- <!-- scope: telescope -->
- yes
-
- #### Leaderboard Link
-
- <!-- info: Provide a link to the leaderboard. -->
- <!-- scope: periscope -->
- [Leaderboard](https://github.com/Yale-LILY/dart#leaderboard)
-
- #### Leaderboard Details
-
- <!-- info: Briefly describe how the leaderboard evaluates models. -->
- <!-- scope: microscope -->
- Several state-of-the-art table-to-text models were evaluated on DART, such as BART ([Lewis et al., 2020](https://arxiv.org/pdf/1910.13461.pdf)), Seq2Seq-Att ([MELBOURNE](https://webnlg-challenge.loria.fr/files/melbourne_report.pdf)) and End-to-End Transformer ([Castro Ferreira et al., 2019](https://arxiv.org/pdf/1908.09022.pdf)).
- The leaderboard reports BLEU, METEOR, TER, MoverScore, BERTScore and BLEURT scores.
-
-
- ### Languages and Intended Use
-
- #### Multilingual?
-
- <!-- quick -->
- <!-- info: Is the dataset multilingual? -->
- <!-- scope: telescope -->
- no
-
- #### Covered Dialects
-
- <!-- info: What dialects are covered? Are there multiple dialects per language? -->
- <!-- scope: periscope -->
- It is aggregated from multiple other datasets that use general US-American or British English without differentiating between dialects.
-
- #### Covered Languages
-
- <!-- quick -->
- <!-- info: What languages/dialects are covered in the dataset? -->
- <!-- scope: telescope -->
- `English`
-
- #### Whose Language?
-
- <!-- info: Whose language is in the dataset? -->
- <!-- scope: periscope -->
- The dataset is aggregated from multiple others that were crowdsourced on different platforms.
-
- #### License
-
- <!-- quick -->
- <!-- info: What is the license of the dataset? -->
- <!-- scope: telescope -->
- mit: MIT License
-
- #### Intended Use
-
- <!-- info: What is the intended use of the dataset? -->
- <!-- scope: microscope -->
- The dataset aims to further research in natural language generation from semantic data.
-
- #### Primary Task
-
- <!-- info: What primary task does the dataset support? -->
- <!-- scope: telescope -->
- Data-to-Text
-
- #### Communicative Goal
-
- <!-- quick -->
- <!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
- <!-- scope: periscope -->
- The speaker is required to produce coherent sentences and construct a tree-structured ontology of the column headers.
-
-
-
-
- ### Credit
-
- #### Curation Organization Type(s)
-
- <!-- info: In what kind of organization did the dataset curation happen? -->
- <!-- scope: telescope -->
- `academic`, `industry`
-
- #### Curation Organization(s)
-
- <!-- info: Name the organization(s). -->
- <!-- scope: periscope -->
- Yale University, Salesforce Research, Penn State University, The University of Hong Kong, MIT
-
- #### Dataset Creators
-
- <!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
- <!-- scope: microscope -->
- Linyong Nan, Dragomir Radev, Rui Zhang, Amrit Rau, Abhinand Sivaprasad, Chiachun Hsieh, Xiangru Tang, Aadit Vyas, Neha Verma, Pranav Krishna, Yangxiaokang Liu, Nadia Irwanto, Jessica Pan, Faiaz Rahman, Ahmad Zaidi, Mutethia Mutuma, Yasin Tarabar, Ankit Gupta, Tao Yu, Yi Chern Tan, Xi Victoria Lin, Caiming Xiong, Richard Socher, Nazneen Fatema Rajani
-
- #### Who added the Dataset to GEM?
-
- <!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
- <!-- scope: microscope -->
- Miruna Clinciu contributed the original data card and Yacine Jernite wrote the initial data loader. Sebastian Gehrmann migrated the data card and the loader to the new format.
-
-
- ### Dataset Structure
-
- #### Data Fields
-
- <!-- info: List and describe the fields present in the dataset. -->
- <!-- scope: telescope -->
- - `tripleset`: a list of tuples, each tuple has 3 items
- - `subtree_was_extended`: a boolean variable (true or false)
- - `annotations`: a list of dicts, each with `source` and `text` keys.
- - `source`: a string mentioning the name of the source table.
- - `text`: a sentence string.
-
-
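- These fields map directly onto the raw DART JSON release; a minimal sketch of reading one raw instance (the file name follows the v1.1.1 release on GitHub, so adjust the path to your local copy):
- ```
- import json
-
- # Each raw DART file is a JSON list of instances.
- with open("dart-v1.1.1-full-train.json", encoding="utf-8") as f:
-     instances = json.load(f)
-
- first = instances[0]
- for subj, pred, obj in first["tripleset"]:  # list of 3-item triples
-     print(subj, pred, obj)
- for annotation in first["annotations"]:  # human-written sentences
-     print(annotation["source"], "->", annotation["text"])
- ```
-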
- #### Reason for Structure
-
- <!-- info: How was the dataset structure determined? -->
- <!-- scope: microscope -->
- The structure is supposed to be able to represent more complex structures beyond "flat" attribute-value pairs, by encoding hierarchical relationships.
-
- #### How were labels chosen?
-
- <!-- info: How were the labels chosen? -->
- <!-- scope: microscope -->
- They are a combination of those from existing datasets and new annotations that take advantage of the hierarchical structure.
-
- #### Example Instance
-
- <!-- info: Provide a JSON formatted example of a typical instance in the dataset. -->
- <!-- scope: periscope -->
- ```
- {
-   "tripleset": [
-     [
-       "Ben Mauk",
-       "High school",
-       "Kenton"
-     ],
-     [
-       "Ben Mauk",
-       "College",
-       "Wake Forest Cincinnati"
-     ]
-   ],
-   "subtree_was_extended": false,
-   "annotations": [
-     {
-       "source": "WikiTableQuestions_lily",
-       "text": "Ben Mauk, who attended Kenton High School, attended Wake Forest Cincinnati for college."
-     }
-   ]
- }
- ```
-
- #### Data Splits
-
- <!-- info: Describe and name the splits in the dataset if there are more than one. -->
- <!-- scope: periscope -->
- | Input Unit | Examples | Vocab Size | Words per SR | Sents per SR | Tables |
- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |
- | Triple Set | 82,191 | 33.2K | 21.6 | 1.5 | 5,623 |
-
- | Train | Dev | Test |
- | ------------- | ------------- | ------------- |
- | 62,659 | 6,980 | 12,552 |
-
-
- Statistics of DART decomposed by different collection methods. DART exhibits a great deal of topical variety in terms of the number of unique predicates, the number of unique triples, and the vocabulary size. These statistics are computed from DART v1.1.1; the number of unique predicates reported is post-unification (see Section 3.4). SR: Surface Realization.
- ([details in Tables 1 and 2](https://arxiv.org/pdf/2007.02871.pdf)).
-
-
- #### Splitting Criteria
-
- <!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. -->
- <!-- scope: microscope -->
- For WebNLG 2017 and Cleaned E2E, DART uses the original data splits. For the new annotations on WikiTableQuestions and WikiSQL, random splitting would make the train, dev, and test splits contain similar tables and similar <triple-set, sentence> examples. They are therefore split based on Jaccard similarity, such that no training example has a similarity of over 0.5 with a test example.
-
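- As an illustration of the splitting criterion (a sketch of the idea, not the authors' exact code), each example can be treated as the set of its triples:
- ```
- def jaccard(a: set, b: set) -> float:
-     """Jaccard similarity between two sets of triples."""
-     if not a and not b:
-         return 0.0
-     return len(a & b) / len(a | b)
-
- # An example stays in train only if its triple set has a Jaccard
- # similarity of at most 0.5 with every test example.
- train_example = {("Ben Mauk", "High school", "Kenton")}
- test_example = {("Ben Mauk", "College", "Wake Forest Cincinnati")}
- print(jaccard(train_example, test_example))  # 0.0 -> no overlap, safe to separate
- ```
-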
-
-
- ## Dataset in GEM
-
- ### Rationale for Inclusion in GEM
-
- #### Why is the Dataset in GEM?
-
- <!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
- <!-- scope: microscope -->
- DART is a large, open-domain structured DAta Record to Text generation corpus with high-quality sentence annotations, with each input being a set of entity-relation triples following a tree-structured ontology.
-
- #### Similar Datasets
-
- <!-- info: Do other datasets for the high level task exist? -->
- <!-- scope: telescope -->
- yes
-
- #### Unique Language Coverage
-
- <!-- info: Does this dataset cover other languages than other datasets for the same task? -->
- <!-- scope: periscope -->
- no
-
- #### Difference from other GEM datasets
-
- <!-- info: What else sets this dataset apart from other similar datasets in GEM? -->
- <!-- scope: microscope -->
- The tree structure is unique among GEM datasets.
-
- #### Ability that the Dataset measures
-
- <!-- info: What aspect of model ability can be measured with this dataset? -->
- <!-- scope: periscope -->
- Reasoning, surface realization
-
-
- ### GEM-Specific Curation
-
- #### Modified for GEM?
-
- <!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
- <!-- scope: telescope -->
- no
-
- #### Additional Splits?
-
- <!-- info: Does GEM provide additional splits to the dataset? -->
- <!-- scope: telescope -->
- no
-
-
- ### Getting Started with the Task
-
- #### Pointers to Resources
-
- <!-- info: Getting started with in-depth research on the task. Add relevant pointers to resources that researchers can consult when they want to get started digging deeper into the task. -->
- <!-- scope: microscope -->
- Experimental results on DART show that the BART model achieves the highest performance among the three evaluated models, with a BLEU score of 37.06. This is attributed to BART’s generalization ability due to pretraining ([Table 4](https://arxiv.org/pdf/2007.02871.pdf)).
-
-
-
-
- ## Previous Results
-
- ### Previous Results
-
- #### Measured Model Abilities
-
- <!-- info: What aspect of model ability can be measured with this dataset? -->
- <!-- scope: telescope -->
- Reasoning, surface realization
-
- #### Metrics
-
- <!-- info: What metrics are typically used for this task? -->
- <!-- scope: periscope -->
- `BLEU`, `MoverScore`, `BERT-Score`, `BLEURT`
-
- #### Proposed Evaluation
-
- <!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. -->
- <!-- scope: microscope -->
- The leaderboard uses the combination of BLEU, METEOR, TER, MoverScore, BERTScore, PARENT and BLEURT to overcome the limitations of n-gram overlap metrics.
- A small-scale human annotation of 100 data points was conducted along the dimensions of (1) fluency - a sentence is natural and grammatical, and (2) semantic faithfulness - a sentence is supported by the input triples.
-
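- For the n-gram overlap part, a minimal sketch with the `sacrebleu` package (an assumption for illustration, not the leaderboard's official tooling):
- ```
- import sacrebleu
-
- # One hypothesis per input; references are given as a list of reference streams.
- hypotheses = ["Ben Mauk attended Kenton High School."]
- references = [["Ben Mauk, who attended Kenton High School, attended Wake Forest Cincinnati for college."]]
-
- bleu = sacrebleu.corpus_bleu(hypotheses, references)
- print(bleu.score)  # corpus-level BLEU
- ```
-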
- #### Previous results available?
-
- <!-- info: Are previous results available? -->
- <!-- scope: telescope -->
- yes
-
- #### Other Evaluation Approaches
-
- <!-- info: What evaluation approaches have others used? -->
- <!-- scope: periscope -->
- n/a
-
- #### Relevant Previous Results
-
- <!-- info: What are the most relevant previous results for this task/dataset? -->
- <!-- scope: microscope -->
- BART currently achieves the best performance according to the leaderboard.
-
-
-
- ## Dataset Curation
-
- ### Original Curation
-
- #### Original Curation Rationale
-
- <!-- info: Original curation rationale -->
- <!-- scope: telescope -->
- Through DART, the dataset creators encourage further research in natural language generation from semantic data. DART provides high-quality sentence annotations, with each input being a set of entity-relation triples in a tree structure.
-
-
-
- #### Communicative Goal
-
- <!-- info: What was the communicative goal? -->
- <!-- scope: periscope -->
- The speaker is required to produce coherent sentences and construct a tree-structured ontology of the column headers.
-
-
-
- #### Sourced from Different Sources
-
- <!-- info: Is the dataset aggregated from different data sources? -->
- <!-- scope: telescope -->
- yes
-
- #### Source Details
-
- <!-- info: List the sources (one per line) -->
- <!-- scope: periscope -->
- - human annotation on open-domain Wikipedia tables from WikiTableQuestions ([Pasupat and Liang, 2015](https://www.aclweb.org/anthology/P15-1142.pdf)) and WikiSQL ([Zhong et al., 2017](https://arxiv.org/pdf/1709.00103.pdf))
- - automatic conversion of questions in WikiSQL to declarative sentences
- - incorporation of existing datasets including WebNLG 2017 (Gardent et al., 2017[a](https://www.aclweb.org/anthology/P17-1017.pdf),[b](https://www.aclweb.org/anthology/W17-3518.pdf); [Shimorina and Gardent, 2018](https://www.aclweb.org/anthology/W18-6543.pdf)) and Cleaned E2E ([Novikova et al., 2017b](https://arxiv.org/pdf/1706.09254.pdf); Dušek et al., [2018](https://arxiv.org/pdf/1810.01170.pdf), [2019](https://www.aclweb.org/anthology/W19-8652.pdf))
-
-
-
- ### Language Data
-
- #### How was Language Data Obtained?
-
- <!-- info: How was the language data obtained? -->
- <!-- scope: telescope -->
- `Found`, `Created for the dataset`
-
- #### Where was it found?
-
- <!-- info: If found, where from? -->
- <!-- scope: telescope -->
- `Offline media collection`
-
- #### Creation Process
-
- <!-- info: If created for the dataset, describe the creation process. -->
- <!-- scope: microscope -->
- The creators proposed a two-stage annotation process for constructing triple-set-sentence pairs based on a tree-structured ontology of each table. First, internal skilled annotators denote the parent column for each column header. Then, a larger number of annotators provide a sentential description of an automatically chosen subset of table cells in a row. To form a triple-set-sentence pair, the highlighted cells can be converted to a connected triple set automatically according to the column ontology for the given table, as sketched below.
-
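- A purely illustrative sketch of that last step (hypothetical table and ontology, not the authors' annotation tooling): the column ontology acts as a parent map, and each highlighted cell yields one triple whose predicate is its column header and whose subject is the value in the parent column:
- ```
- # Hypothetical column ontology: each column header points to its parent header.
- ontology = {"High school": "Player", "College": "Player"}
-
- # One table row and the automatically chosen (highlighted) cells.
- row = {"Player": "Ben Mauk", "High school": "Kenton", "College": "Wake Forest Cincinnati"}
- highlighted = ["High school", "College"]
-
- # Connect each highlighted cell to the value in its parent column.
- triples = [(row[ontology[col]], col, row[col]) for col in highlighted]
- print(triples)
- # [('Ben Mauk', 'High school', 'Kenton'), ('Ben Mauk', 'College', 'Wake Forest Cincinnati')]
- ```
-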
-
-
- #### Language Producers
-
- <!-- info: What further information do we have on the language producers? -->
- <!-- scope: microscope -->
- No further information about the MTurk workers has been provided.
-
-
-
- #### Topics Covered
-
- <!-- info: Does the language in the dataset focus on specific topics? How would you describe them? -->
- <!-- scope: periscope -->
- The sub-datasets are from Wikipedia, DBPedia, and artificially created restaurant data.
-
- #### Data Validation
-
- <!-- info: Was the text validated by a different worker or a data curator? -->
- <!-- scope: telescope -->
- validated by crowdworker
-
- #### Was Data Filtered?
-
- <!-- info: Were text instances selected or filtered? -->
- <!-- scope: telescope -->
- not filtered
-
-
- ### Structured Annotations
-
- #### Additional Annotations?
-
- <!-- quick -->
- <!-- info: Does the dataset have additional annotations for each instance? -->
- <!-- scope: telescope -->
- none
-
- #### Annotation Service?
-
- <!-- info: Was an annotation service used? -->
- <!-- scope: telescope -->
- no
-
-
- ### Consent
-
- #### Any Consent Policy?
-
- <!-- info: Was there a consent policy involved when gathering the data? -->
- <!-- scope: telescope -->
- no
-
- #### Justification for Using the Data
-
- <!-- info: If not, what is the justification for reusing the data? -->
- <!-- scope: microscope -->
- The new annotations are based on Wikipedia, which is in the public domain, and the other two datasets permit reuse (with attribution).
-
-
- ### Private Identifying Information (PII)
-
- #### Contains PII?
-
- <!-- quick -->
- <!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
- <!-- scope: telescope -->
- no PII
-
- #### Justification for no PII
-
- <!-- info: Provide a justification for selecting `no PII` above. -->
- <!-- scope: periscope -->
- None of the datasets talk about individuals.
-
-
- ### Maintenance
-
- #### Any Maintenance Plan?
-
- <!-- info: Does the original dataset have a maintenance plan? -->
- <!-- scope: telescope -->
- no
-
-
-
- ## Broader Social Context
-
- ### Previous Work on the Social Impact of the Dataset
-
- #### Usage of Models based on the Data
-
- <!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? -->
- <!-- scope: telescope -->
- no
-
-
- ### Impact on Under-Served Communities
-
- #### Addresses needs of underserved Communities?
-
- <!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). -->
- <!-- scope: telescope -->
- no
-
-
- ### Discussion of Biases
-
- #### Any Documented Social Biases?
-
- <!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
- <!-- scope: telescope -->
- no
-
- #### Are the Language Producers Representative of the Language?
-
- <!-- info: Does the distribution of language producers in the dataset accurately represent the full distribution of speakers of the language world-wide? If not, how does it differ? -->
- <!-- scope: periscope -->
- No, the annotators are raters on crowdworking platforms and thus only represent their demographics.
-
-
-
- ## Considerations for Using the Data
-
- ### PII Risks and Liability
-
-
-
- ### Licenses
-
- #### Copyright Restrictions on the Dataset
-
- <!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
- <!-- scope: periscope -->
- `open license - commercial use allowed`
-
- #### Copyright Restrictions on the Language Data
-
- <!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
- <!-- scope: periscope -->
- `open license - commercial use allowed`
-
-
- ### Known Technical Limitations
-
- #### Technical Limitations
-
- <!-- info: Describe any known technical limitations, such as spurious correlations, train/test overlap, annotation biases, or mis-annotations, and cite the works that first identified these limitations when possible. -->
- <!-- scope: microscope -->
- The dataset may contain some social biases, as the input sentences are based on Wikipedia (WikiTableQuestions, WikiSQL, WebNLG). Studies have shown that the English Wikipedia contains gender biases ([Dinan et al., 2020](https://www.aclweb.org/anthology/2020.emnlp-main.23.pdf)), racial biases ([Papakyriakopoulos et al., 2020](https://dl.acm.org/doi/pdf/10.1145/3351095.3372843)) and geographical bias ([Livingstone et al., 2010](https://doi.org/10.5204/mcj.315)). [More info](https://en.wikipedia.org/wiki/Racial_bias_on_Wikipedia#cite_note-23).
-
-
- #### Unsuited Applications
-
- <!-- info: When using a model trained on this dataset in a setting where users or the public may interact with its predictions, what are some pitfalls to look out for? In particular, describe some applications of the general task featured in this dataset that its curation or properties make it less suitable for. -->
- <!-- scope: microscope -->
- The end-to-end transformer has the lowest performance, since the transformer model needs intermediate pipeline planning steps to achieve higher performance. Similar findings can be found in [Castro Ferreira et al., 2019](https://arxiv.org/pdf/1908.09022.pdf).
-
-
-
dart.json DELETED
@@ -1,185 +0,0 @@
- {
-     "overview": {
-         "where": {
-             "has-leaderboard": "yes",
-             "leaderboard-url": "[Leaderboard](https://github.com/Yale-LILY/dart#leaderboard)",
-             "leaderboard-description": "Several state-of-the-art table-to-text models were evaluated on DART, such as BART ([Lewis et al., 2020](https://arxiv.org/pdf/1910.13461.pdf)), Seq2Seq-Att ([MELBOURNE](https://webnlg-challenge.loria.fr/files/melbourne_report.pdf)) and End-to-End Transformer ([Castro Ferreira et al., 2019](https://arxiv.org/pdf/1908.09022.pdf)).\nThe leaderboard reports BLEU, METEOR, TER, MoverScore, BERTScore and BLEURT scores.",
-             "website": "n/a",
-             "data-url": "[Github](https://github.com/Yale-LILY/dart)",
-             "paper-url": "[ACL Anthology](https://aclanthology.org/2021.naacl-main.37/)",
-             "contact-email": "{dragomir.radev, r.zhang}@yale.edu, {nazneen.rajani}@salesforce.com",
-             "contact-name": "Dragomir Radev, Rui Zhang, Nazneen Rajani",
-             "paper-bibtext": "```\n@inproceedings{nan-etal-2021-dart,\n title = \"{DART}: Open-Domain Structured Data Record to Text Generation\",\n author = \"Nan, Linyong and\n Radev, Dragomir and\n Zhang, Rui and\n Rau, Amrit and\n Sivaprasad, Abhinand and\n Hsieh, Chiachun and\n Tang, Xiangru and\n Vyas, Aadit and\n Verma, Neha and\n Krishna, Pranav and\n Liu, Yangxiaokang and\n Irwanto, Nadia and\n Pan, Jessica and\n Rahman, Faiaz and\n Zaidi, Ahmad and\n Mutuma, Mutethia and\n Tarabar, Yasin and\n Gupta, Ankit and\n Yu, Tao and\n Tan, Yi Chern and\n Lin, Xi Victoria and\n Xiong, Caiming and\n Socher, Richard and\n Rajani, Nazneen Fatema\",\n booktitle = \"Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies\",\n month = jun,\n year = \"2021\",\n address = \"Online\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://aclanthology.org/2021.naacl-main.37\",\n doi = \"10.18653/v1/2021.naacl-main.37\",\n pages = \"432--447\",\n abstract = \"We present DART, an open domain structured DAta Record to Text generation dataset with over 82k instances (DARTs). Data-to-text annotations can be a costly process, especially when dealing with tables which are the major source of structured data and contain nontrivial structures. To this end, we propose a procedure of extracting semantic triples from tables that encodes their structures by exploiting the semantic dependencies among table headers and the table title. Our dataset construction framework effectively merged heterogeneous sources from open domain semantic parsing and spoken dialogue systems by utilizing techniques including tree ontology annotation, question-answer pair to declarative sentence conversion, and predicate unification, all with minimum post-editing. We present systematic evaluation on DART as well as new state-of-the-art results on WebNLG 2017 to show that DART (1) poses new challenges to existing data-to-text datasets and (2) facilitates out-of-domain generalization. Our data and code can be found at https://github.com/Yale-LILY/dart.\",\n}\n```"
-         },
-         "languages": {
-             "is-multilingual": "no",
-             "license": "mit: MIT License",
-             "task-other": "N/A",
-             "language-names": [
-                 "English"
-             ],
-             "language-dialects": "It is aggregated from multiple other datasets that use general US-American or British English without differentiating between dialects.",
-             "license-other": "N/A",
-             "task": "Data-to-Text",
-             "communicative": "The speaker is required to produce coherent sentences and construct a tree-structured ontology of the column headers.\n\n",
-             "language-speakers": "The dataset is aggregated from multiple others that were crowdsourced on different platforms.",
-             "intended-use": "The dataset aims to further research in natural language generation from semantic data."
-         },
-         "credit": {
-             "organization-type": [
-                 "academic",
-                 "industry"
-             ],
-             "organization-names": "Yale University, Salesforce Research, Penn State University, The University of Hong Kong, MIT",
-             "creators": "Linyong Nan, Dragomir Radev, Rui Zhang, Amrit Rau, Abhinand Sivaprasad, Chiachun Hsieh, Xiangru Tang, Aadit Vyas, Neha Verma, Pranav Krishna, Yangxiaokang Liu, Nadia Irwanto, Jessica Pan, Faiaz Rahman, Ahmad Zaidi, Mutethia Mutuma, Yasin Tarabar, Ankit Gupta, Tao Yu, Yi Chern Tan, Xi Victoria Lin, Caiming Xiong, Richard Socher, Nazneen Fatema Rajani",
-             "funding": "n/a",
-             "gem-added-by": "Miruna Clinciu contributed the original data card and Yacine Jernite wrote the initial data loader. Sebastian Gehrmann migrated the data card and the loader to the new format."
-         },
-         "structure": {
-             "data-fields": "- `tripleset`: a list of tuples, each tuple has 3 items\n- `subtree_was_extended`: a boolean variable (true or false)\n- `annotations`: a list of dicts, each with `source` and `text` keys.\n- `source`: a string mentioning the name of the source table.\n- `text`: a sentence string.\n",
-             "structure-example": "```\n {\n \"tripleset\": [\n [\n \"Ben Mauk\",\n \"High school\",\n \"Kenton\"\n ],\n [\n \"Ben Mauk\",\n \"College\",\n \"Wake Forest Cincinnati\"\n ]\n ],\n \"subtree_was_extended\": false,\n \"annotations\": [\n {\n \"source\": \"WikiTableQuestions_lily\",\n \"text\": \"Ben Mauk, who attended Kenton High School, attended Wake Forest Cincinnati for college.\"\n }\n ]\n }\n```",
-             "structure-splits": "| Input Unit | Examples | Vocab Size | Words per SR | Sents per SR | Tables |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| Triple Set | 82,191 | 33.2K | 21.6 | 1.5 | 5,623 |\n\n| Train | Dev | Test |\n| ------------- | ------------- | ------------- |\n| 62,659 | 6,980 | 12,552 |\n\n\nStatistics of DART decomposed by different collection methods. DART exhibits a great deal of topical variety in terms of the number of unique predicates, the number of unique triples, and the vocabulary size. These statistics are computed from DART v1.1.1; the number of unique predicates reported is post-unification (see Section 3.4). SR: Surface Realization.\n([details in Tables 1 and 2](https://arxiv.org/pdf/2007.02871.pdf)).\n",
-             "structure-description": "The structure is supposed to be able to represent more complex structures beyond \"flat\" attribute-value pairs, by encoding hierarchical relationships.",
-             "structure-labels": "They are a combination of those from existing datasets and new annotations that take advantage of the hierarchical structure.",
-             "structure-splits-criteria": "For WebNLG 2017 and Cleaned E2E, DART uses the original data splits. For the new annotations on WikiTableQuestions and WikiSQL, random splitting would make the train, dev, and test splits contain similar tables and similar <triple-set, sentence> examples. They are therefore split based on Jaccard similarity, such that no training example has a similarity of over 0.5 with a test example.",
-             "structure-outlier": "n/a"
-         },
-         "what": {
-             "dataset": "DART is an English dataset that aggregates multiple other data-to-text datasets into a common triple-based format. The new format is completely flat, so a model does not need to learn hierarchical structures, while the full information is still retained."
-         }
-     },
-     "curation": {
-         "original": {
-             "is-aggregated": "yes",
-             "aggregated-sources": "- human annotation on open-domain Wikipedia tables from WikiTableQuestions ([Pasupat and Liang, 2015](https://www.aclweb.org/anthology/P15-1142.pdf)) and WikiSQL ([Zhong et al., 2017](https://arxiv.org/pdf/1709.00103.pdf))\n- automatic conversion of questions in WikiSQL to declarative sentences\n- incorporation of existing datasets including WebNLG 2017 (Gardent et al., 2017[a](https://www.aclweb.org/anthology/P17-1017.pdf),[b](https://www.aclweb.org/anthology/W17-3518.pdf); [Shimorina and Gardent, 2018](https://www.aclweb.org/anthology/W18-6543.pdf)) and Cleaned E2E ([Novikova et al., 2017b](https://arxiv.org/pdf/1706.09254.pdf); Du\u0161ek et al., [2018](https://arxiv.org/pdf/1810.01170.pdf), [2019](https://www.aclweb.org/anthology/W19-8652.pdf))\n",
-             "rationale": "Through DART, the dataset creators encourage further research in natural language generation from semantic data. DART provides high-quality sentence annotations, with each input being a set of entity-relation triples in a tree structure.\n\n",
-             "communicative": "The speaker is required to produce coherent sentences and construct a tree-structured ontology of the column headers.\n\n"
-         },
-         "language": {
-             "found": [
-                 "Offline media collection"
-             ],
-             "crowdsourced": [],
-             "created": "The creators proposed a two-stage annotation process for constructing triple-set-sentence pairs based on a tree-structured ontology of each table. First, internal skilled annotators denote the parent column for each column header. Then, a larger number of annotators provide a sentential description of an automatically chosen subset of table cells in a row. To form a triple-set-sentence pair, the highlighted cells can be converted to a connected triple set automatically according to the column ontology for the given table.\n\n",
-             "machine-generated": "N/A",
-             "validated": "validated by crowdworker",
-             "is-filtered": "not filtered",
-             "filtered-criteria": "N/A",
-             "obtained": [
-                 "Found",
-                 "Created for the dataset"
-             ],
-             "producers-description": "No further information about the MTurk workers has been provided.\n\n",
-             "topics": "The sub-datasets are from Wikipedia, DBPedia, and artificially created restaurant data.",
-             "pre-processed": "n/a"
-         },
-         "annotations": {
-             "origin": "none",
-             "rater-number": "N/A",
-             "rater-qualifications": "N/A",
-             "rater-training-num": "N/A",
-             "rater-test-num": "N/A",
-             "rater-annotation-service-bool": "no",
-             "rater-annotation-service": [],
-             "values": "N/A",
-             "quality-control": [],
-             "quality-control-details": "N/A"
-         },
-         "consent": {
-             "has-consent": "no",
-             "consent-policy": "N/A",
-             "consent-other": "N/A",
-             "no-consent-justification": "The new annotations are based on Wikipedia, which is in the public domain, and the other two datasets permit reuse (with attribution)."
-         },
-         "pii": {
-             "has-pii": "no PII",
-             "no-pii-justification": "None of the datasets talk about individuals.",
-             "is-pii-identified": "N/A",
-             "pii-identified-method": "N/A",
-             "is-pii-replaced": "N/A",
-             "pii-replaced-method": "N/A",
-             "pii-categories": []
-         },
-         "maintenance": {
-             "has-maintenance": "no",
-             "description": "N/A",
-             "contact": "N/A",
-             "contestation-mechanism": "N/A",
-             "contestation-link": "N/A",
-             "contestation-description": "N/A"
-         }
-     },
-     "gem": {
-         "rationale": {
-             "sole-task-dataset": "yes",
-             "sole-language-task-dataset": "no",
-             "distinction-description": "The tree structure is unique among GEM datasets.",
-             "contribution": "DART is a large, open-domain structured DAta Record to Text generation corpus with high-quality sentence annotations, with each input being a set of entity-relation triples following a tree-structured ontology.",
-             "model-ability": "Reasoning, surface realization"
-         },
-         "curation": {
-             "has-additional-curation": "no",
-             "modification-types": [],
-             "modification-description": "N/A",
-             "has-additional-splits": "no",
-             "additional-splits-description": "N/A",
-             "additional-splits-capacicites": "N/A"
-         },
-         "starting": {
-             "research-pointers": "Experimental results on DART show that the BART model achieves the highest performance among the three evaluated models, with a BLEU score of 37.06. This is attributed to BART\u2019s generalization ability due to pretraining ([Table 4](https://arxiv.org/pdf/2007.02871.pdf)).\n",
-             "technical-terms": "n/a"
-         }
-     },
-     "results": {
-         "results": {
-             "other-metrics-definitions": "N/A",
-             "has-previous-results": "yes",
-             "current-evaluation": "n/a",
-             "previous-results": "BART currently achieves the best performance according to the leaderboard.",
-             "model-abilities": "Reasoning, surface realization",
-             "metrics": [
-                 "BLEU",
-                 "MoverScore",
-                 "BERT-Score",
-                 "BLEURT"
-             ],
-             "original-evaluation": "The leaderboard uses the combination of BLEU, METEOR, TER, MoverScore, BERTScore, PARENT and BLEURT to overcome the limitations of n-gram overlap metrics.\nA small-scale human annotation of 100 data points was conducted along the dimensions of (1) fluency - a sentence is natural and grammatical, and (2) semantic faithfulness - a sentence is supported by the input triples."
-         }
-     },
-     "considerations": {
-         "pii": {
-             "risks-description": "n/a"
-         },
-         "licenses": {
-             "dataset-restrictions-other": "N/A",
-             "data-copyright-other": "N/A",
-             "dataset-restrictions": [
-                 "open license - commercial use allowed"
-             ],
-             "data-copyright": [
-                 "open license - commercial use allowed"
-             ]
-         },
-         "limitations": {
-             "data-technical-limitations": "The dataset may contain some social biases, as the input sentences are based on Wikipedia (WikiTableQuestions, WikiSQL, WebNLG). Studies have shown that the English Wikipedia contains gender biases ([Dinan et al., 2020](https://www.aclweb.org/anthology/2020.emnlp-main.23.pdf)), racial biases ([Papakyriakopoulos et al., 2020](https://dl.acm.org/doi/pdf/10.1145/3351095.3372843)) and geographical bias ([Livingstone et al., 2010](https://doi.org/10.5204/mcj.315)). [More info](https://en.wikipedia.org/wiki/Racial_bias_on_Wikipedia#cite_note-23).\n",
-             "data-unsuited-applications": "The end-to-end transformer has the lowest performance, since the transformer model needs intermediate pipeline planning steps to achieve higher performance. Similar findings can be found in [Castro Ferreira et al., 2019](https://arxiv.org/pdf/1908.09022.pdf).\n",
-             "data-discouraged-use": "n/a"
-         }
-     },
-     "context": {
-         "previous": {
-             "is-deployed": "no",
-             "described-risks": "N/A",
-             "changes-from-observation": "N/A"
-         },
-         "underserved": {
-             "helps-underserved": "no",
-             "underserved-description": "N/A"
-         },
-         "biases": {
-             "has-biases": "no",
-             "bias-analyses": "N/A",
-             "speaker-distibution": "No, the annotators are raters on crowdworking platforms and thus only represent their demographics."
-         }
-     }
- }
dart.py DELETED
@@ -1,142 +0,0 @@
- import json
- import datasets
-
- _CITATION = """\
- @inproceedings{nan-etal-2021-dart,
-     title = "{DART}: Open-Domain Structured Data Record to Text Generation",
-     author = "Nan, Linyong and
-       Radev, Dragomir and
-       Zhang, Rui and
-       Rau, Amrit and
-       Sivaprasad, Abhinand and
-       Hsieh, Chiachun and
-       Tang, Xiangru and
-       Vyas, Aadit and
-       Verma, Neha and
-       Krishna, Pranav and
-       Liu, Yangxiaokang and
-       Irwanto, Nadia and
-       Pan, Jessica and
-       Rahman, Faiaz and
-       Zaidi, Ahmad and
-       Mutuma, Mutethia and
-       Tarabar, Yasin and
-       Gupta, Ankit and
-       Yu, Tao and
-       Tan, Yi Chern and
-       Lin, Xi Victoria and
-       Xiong, Caiming and
-       Socher, Richard and
-       Rajani, Nazneen Fatema",
-     booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
-     month = jun,
-     year = "2021",
-     address = "Online",
-     publisher = "Association for Computational Linguistics",
-     url = "https://aclanthology.org/2021.naacl-main.37",
-     doi = "10.18653/v1/2021.naacl-main.37",
-     pages = "432--447",
-     abstract = "We present DART, an open domain structured DAta Record to Text generation dataset with over 82k instances (DARTs). Data-to-text annotations can be a costly process, especially when dealing with tables which are the major source of structured data and contain nontrivial structures. To this end, we propose a procedure of extracting semantic triples from tables that encodes their structures by exploiting the semantic dependencies among table headers and the table title. Our dataset construction framework effectively merged heterogeneous sources from open domain semantic parsing and spoken dialogue systems by utilizing techniques including tree ontology annotation, question-answer pair to declarative sentence conversion, and predicate unification, all with minimum post-editing. We present systematic evaluation on DART as well as new state-of-the-art results on WebNLG 2017 to show that DART (1) poses new challenges to existing data-to-text datasets and (2) facilitates out-of-domain generalization. Our data and code can be found at https://github.com/Yale-LILY/dart.",
- }
- """
-
- _DESCRIPTION = """\
- DART is a large and open-domain structured DAta Record to Text generation corpus
- with high-quality sentence annotations with each input being a set of
- entity-relation triples following a tree-structured ontology. It consists of
- 82191 examples across different domains with each input being a semantic RDF
- triple set derived from data records in tables and the tree ontology of table
- schema, annotated with sentence description that covers all facts in the triple set.
- """
-
- _URLs = {
-     "train": "https://raw.githubusercontent.com/Yale-LILY/dart/master/data/v1.1.1/dart-v1.1.1-full-train.json",
-     "validation": "https://raw.githubusercontent.com/Yale-LILY/dart/master/data/v1.1.1/dart-v1.1.1-full-dev.json",
-     "test": "https://raw.githubusercontent.com/Yale-LILY/dart/master/data/v1.1.1/dart-v1.1.1-full-test.json",
- }
-
-
- class Dart(datasets.GeneratorBasedBuilder):
-     VERSION = datasets.Version("1.0.0")
-     DEFAULT_CONFIG_NAME = "dart"
-
-     def _info(self):
-         features = datasets.Features(
-             {
-                 "gem_id": datasets.Value("string"),
-                 "gem_parent_id": datasets.Value("string"),
-                 "dart_id": datasets.Value("int32"),
-                 "tripleset": [[datasets.Value("string")]],  # list of triples
-                 "subtree_was_extended": datasets.Value("bool"),
-                 "target_sources": [datasets.Value("string")],
-                 "target": datasets.Value("string"),  # single target for train
-                 "references": [datasets.Value("string")],
-             }
-         )
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=features,
-             supervised_keys=datasets.info.SupervisedKeysData(
-                 input="tripleset", output="target"
-             ),
-             homepage="",
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         dl_dir = dl_manager.download_and_extract(_URLs)
-         return [
-             datasets.SplitGenerator(
-                 name=spl, gen_kwargs={"filepath": dl_dir[spl], "split": spl}
-             )
-             for spl in ["train", "validation", "test"]
-         ]
-
-     def _generate_examples(self, filepath, split, filepaths=None, lang=None):
-         """Yields examples."""
-         with open(filepath, encoding="utf-8") as f:
-             data = json.loads(f.read())
-             id_ = -1
-             i = -1
-             for example in data:
-                 if split == "train":
-                     i += 1
-                     for annotation in example["annotations"]:
-                         id_ += 1
-                         yield id_, {
-                             "gem_id": f"dart-{split}-{id_}",
-                             "gem_parent_id": f"dart-{split}-{id_}",
-                             "dart_id": i,
-                             "tripleset": example["tripleset"],
-                             "subtree_was_extended": example.get(
-                                 "subtree_was_extended", None
-                             ),  # some are missing
-                             "target_sources": [
-                                 annotation["source"]
-                                 for annotation in example["annotations"]
-                             ],
-                             "target": annotation["text"],
-                             "references": [],
-                         }
-                 else:
-                     id_ += 1
-                     yield id_, {
-                         "gem_id": f"dart-{split}-{id_}",
-                         "gem_parent_id": f"dart-{split}-{id_}",
-                         "dart_id": id_,
-                         "tripleset": example["tripleset"],
-                         "subtree_was_extended": example.get(
-                             "subtree_was_extended", None
-                         ),  # some are missing
-                         "target_sources": [
-                             annotation["source"]
-                             for annotation in example["annotations"]
-                         ],
-                         "target": example["annotations"][0]["text"]
-                         if len(example["annotations"]) > 0
-                         else "",
-                         "references": [
-                             annotation["text"] for annotation in example["annotations"]
-                         ],
-                     }
dataset_infos.json DELETED
@@ -1,107 +0,0 @@
- {
-     "dart": {
-         "description": "GEM is a benchmark environment for Natural Language Generation with a focus on its Evaluation,\nboth through human annotations and automated Metrics.\n\nGEM aims to:\n- measure NLG progress across 13 datasets spanning many NLG tasks and languages.\n- provide an in-depth analysis of data and models presented via data statements and challenge sets.\n- develop standards for evaluation of generated text using both automated and human metrics.\n\nIt is our goal to regularly update GEM and to encourage toward more inclusive practices in dataset development\nby extending existing data or developing datasets for additional languages.\n",
-         "citation": "@article{gem_benchmark,\n author = {Sebastian Gehrmann and\n Tosin P. Adewumi and\n Karmanya Aggarwal and\n Pawan Sasanka Ammanamanchi and\n Aremu Anuoluwapo and\n Antoine Bosselut and\n Khyathi Raghavi Chandu and\n Miruna{-}Adriana Clinciu and\n Dipanjan Das and\n Kaustubh D. Dhole and\n Wanyu Du and\n Esin Durmus and\n Ondrej Dusek and\n Chris Emezue and\n Varun Gangal and\n Cristina Garbacea and\n Tatsunori Hashimoto and\n Yufang Hou and\n Yacine Jernite and\n Harsh Jhamtani and\n Yangfeng Ji and\n Shailza Jolly and\n Dhruv Kumar and\n Faisal Ladhak and\n Aman Madaan and\n Mounica Maddela and\n Khyati Mahajan and\n Saad Mahamood and\n Bodhisattwa Prasad Majumder and\n Pedro Henrique Martins and\n Angelina McMillan{-}Major and\n Simon Mille and\n Emiel van Miltenburg and\n Moin Nadeem and\n Shashi Narayan and\n Vitaly Nikolaev and\n Rubungo Andre Niyongabo and\n Salomey Osei and\n Ankur P. Parikh and\n Laura Perez{-}Beltrachini and\n Niranjan Ramesh Rao and\n Vikas Raunak and\n Juan Diego Rodriguez and\n Sashank Santhanam and\n Joao Sedoc and\n Thibault Sellam and\n Samira Shaikh and\n Anastasia Shimorina and\n Marco Antonio Sobrevilla Cabezudo and\n Hendrik Strobelt and\n Nishant Subramani and\n Wei Xu and\n Diyi Yang and\n Akhila Yerukola and\n Jiawei Zhou},\n title = {The {GEM} Benchmark: Natural Language Generation, its Evaluation and\n Metrics},\n journal = {CoRR},\n volume = {abs/2102.01672},\n year = {2021},\n url = {https://arxiv.org/abs/2102.01672},\n archivePrefix = {arXiv},\n eprint = {2102.01672}\n}\n",
-         "homepage": "https://gem-benchmark.github.io/",
-         "license": "CC-BY-SA-4.0",
-         "features": {
-             "gem_id": {
-                 "dtype": "string",
-                 "id": null,
-                 "_type": "Value"
-             },
-             "gem_parent_id": {
-                 "dtype": "string",
-                 "id": null,
-                 "_type": "Value"
-             },
-             "dart_id": {
-                 "dtype": "int32",
-                 "id": null,
-                 "_type": "Value"
-             },
-             "tripleset": [
-                 [
-                     {
-                         "dtype": "string",
-                         "id": null,
-                         "_type": "Value"
-                     }
-                 ]
-             ],
-             "subtree_was_extended": {
-                 "dtype": "bool",
-                 "id": null,
-                 "_type": "Value"
-             },
-             "target_sources": [
-                 {
-                     "dtype": "string",
-                     "id": null,
-                     "_type": "Value"
-                 }
-             ],
-             "target": {
-                 "dtype": "string",
-                 "id": null,
-                 "_type": "Value"
-             },
-             "references": [
-                 {
-                     "dtype": "string",
-                     "id": null,
-                     "_type": "Value"
-                 }
-             ]
-         },
-         "post_processed": null,
-         "supervised_keys": null,
-         "builder_name": "gem",
-         "config_name": "dart",
-         "version": {
-             "version_str": "1.1.0",
-             "description": null,
-             "major": 1,
-             "minor": 1,
-             "patch": 0
-         },
-         "splits": {
-             "train": {
-                 "name": "train",
-                 "num_bytes": 23047610,
-                 "num_examples": 62659,
-                 "dataset_name": "gem"
-             },
-             "validation": {
-                 "name": "validation",
-                 "num_bytes": 1934054,
-                 "num_examples": 2768,
-                 "dataset_name": "gem"
-             },
-             "test": {
-                 "name": "test",
-                 "num_bytes": 3476953,
-                 "num_examples": 5097,
-                 "dataset_name": "gem"
-             }
-         },
-         "download_checksums": {
-             "https://raw.githubusercontent.com/Yale-LILY/dart/master/data/v1.1.1/dart-v1.1.1-full-train.json": {
-                 "num_bytes": 22969160,
-                 "checksum": "92c8594979c05f508f5739047079ec2ffe5a244e58bfa2b50a9cb8b9c65f5a2b"
-             },
-             "https://raw.githubusercontent.com/Yale-LILY/dart/master/data/v1.1.1/dart-v1.1.1-full-dev.json": {
-                 "num_bytes": 2468789,
-                 "checksum": "56606eac12baa7f0ddb81c61890f9f1a95bace4df8f8989852786358fe5d2b88"
-             },
-             "https://raw.githubusercontent.com/Yale-LILY/dart/master/data/v1.1.1/dart-v1.1.1-full-test.json": {
-                 "num_bytes": 4501417,
-                 "checksum": "984be50fa46d0dbfce1ecfdad4a5c5a5cf82f1be0b124fe94f9f9b175d2a5045"
-             }
-         },
-         "download_size": 29939366,
-         "post_processing_size": null,
-         "dataset_size": 28458617,
-         "size_in_bytes": 58397983
-     }
- }
default/dart-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:53302c4e0cdb2e0f08e504f7a58b0323ce3181c7f3bcd4100d87a830a65b30d2
+ size 1064128
default/dart-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:104b07d210c3695c03555e3587866e697d1718628e5f5ba31c677f7b631daa9d
+ size 4573366
default/dart-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:684cc75ea83ca1e520f4c3bb00a25cf0c0e7ec1344b7e032d9fc01ff828d78ad
+ size 554529