Sebastian Gehrmann committed on
Commit 8142f10 · 1 Parent(s): 5969289

data card.

Files changed (1): README.md (+333 -135)

README.md CHANGED
@@ -1,17 +1,79 @@
- ## Dataset Overview

- ### Where to find the data and its documentation

- #### What is the webpage for the dataset (if it exists)?

- https://nlds.soe.ucsc.edu/viggo

- #### What is the link to the paper describing the dataset (open access preferred)?

- https://aclanthology.org/W19-8623/

- #### Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex.

  ```
  @inproceedings{juraska-etal-2019-viggo,
  title = "{V}i{GGO}: A Video Game Corpus for Data-To-Text Generation in Open-Domain Conversation",
@@ -27,71 +89,106 @@ https://aclanthology.org/W19-8623/
  doi = "10.18653/v1/W19-8623",
  pages = "164--172",
  }
- ```

- #### If known, provide the name of at least one person the reader can contact for questions about the dataset.

- Juraj Juraska

- #### If known, provide the email of at least one person the reader can contact for questions about the dataset.

- #### Does the dataset have an active leaderboard?

- no

- ### Languages and Intended Use

- #### Is the dataset multilingual?

- no

- #### What languages/dialects are covered in the dataset?

- English

- #### What is the license of the dataset?

- cc-by-sa-4.0: Creative Commons Attribution Share Alike 4.0 International

- #### What is the intended use of the dataset?

- ViGGO was designed for the task of data-to-text generation in chatbots (as opposed to task-oriented dialogue systems), with target responses being more conversational than information-seeking, yet constrained to the information presented in a meaning representation. The dataset, being relatively small and clean, can also serve for demonstrating transfer learning capabilities of neural models.

- #### What primary task does the dataset support?

- Data-to-Text

- ### Credit

- #### In what kind of organization did the dataset curation happen?

- academic

- #### Name the organization(s).

- University of California, Santa Cruz

- #### Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s).

- Juraj Juraska, Kevin K. Bowden, Marilyn Walker

- #### Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM.

- Juraj Juraska

- ### Structure

- #### List and describe the fields present in the dataset.

  Each example in the dataset has the following two fields:
  - `mr`: A meaning representation (MR) that, in a structured format, provides the information to convey, as well as the desired dialogue act (DA) type.
  - `ref`: A reference output, i.e., a corresponding utterance realizing all the information in the MR.

  Each MR is a flattened dictionary of attribute-and-value pairs, "wrapped" in the dialogue act type indication. This format was chosen primarily for its compactness, but also to allow for easy concatenation of multiple DAs (each with potentially different attributes) in a single MR.

  Following is the list of all possible attributes (which are also refered to as "slots") in ViGGO along with their types/possible values:
  - `name`: The name of a video game (e.g., Rise of the Tomb Raider).
  - `release_year`: The year a video game was released in (e.g., 2015).
  - `exp_release_date`: For a not-yet-released game, the date when it is expected to be released (e.g., February 22, 2019). *Note: This slot cannot appear together with `release_year` in the same dialogue act.*
@@ -107,39 +204,48 @@ Following is the list of all possible attributes (which are also refered to as "
  - `has_mac_release`: Indicates whether a game is supported on macOS (possible values: yes, no).
  - `specifier`: A game specifier used by the `request` DA, typically an adjective (e.g., addictive, easiest, overrated, visually impressive).

- Each MR in the dataset has 3 distinct reference utterances, which are represented as 3 separate examples with the same MR.

- #### How was the dataset structure determined?

- The dataset structure mostly follows the format of the popular E2E dataset, however, with added dialogue act type indications, new list-type attributes introduced, and unified naming convention for multi-word attribute names.

- #### Provide a JSON formatted example of a typical instance in the dataset.

- ```json
  ```
  {
  "mr": "give_opinion(name[SpellForce 3], rating[poor], genres[real-time strategy, role-playing], player_perspective[bird view])",
  "ref": "I think that SpellForce 3 is one of the worst games I've ever played. Trying to combine the real-time strategy and role-playing genres just doesn't work, and the bird view perspective makes it near impossible to play."
  }
  ```
- ```

- #### Describe and name the splits in the dataset if there are more than one.

  ViGGO is split into 3 partitions, with no MRs in common between the training set and either of the validation and the test set (and that *after* delexicalizing the `name` and `developer` slots). The ratio of examples in the partitions is approximately 7.5 : 1 : 1.5, with their exact sizes listed below:
  - **Train:** 5,103 (1,675 unique MRs)
  - **Validation:** 714 (238 unique MRs)
  - **Test:** 1,083 (359 unique MRs)
  - **TOTAL:** 6,900 (2,253 unique MRs)

- *Note: The reason why the number of unique MRs is not exactly one third of all examples is that for each `request_attribute` DA (which only has one slot, and that without a value) 12 reference utterances were collected instead of 3.*

- #### Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here.

- A similar MR length and slot distribution was preserved across the partitions. The distribution of DA types, on the other hand, is skewed slightly toward fewer `inform` DA instances (the most prevalent DA type) and a higher proportion of the less prevalent DAs in the validation and the test set.

- #### What does an outlier of the dataset in terms of length/perplexity/embedding look like?

  ```
  {
  "mr": "request_attribute(player_perspective[])",
@@ -153,189 +259,281 @@ A similar MR length and slot distribution was preserved across the partitions. T
  "mr": "inform(name[Super Bomberman], release_year[1993], genres[action, strategy], has_multiplayer[no], platforms[Nintendo, PC], available_on_steam[no], has_linux_release[no], has_mac_release[no])",
  "ref": "Super Bomberman is one of my favorite Nintendo games, also available on PC, though not through Steam. It came out all the way back in 1993, and you can't get it for any modern consoles, unfortunately, so no online multiplayer, or of course Linux or Mac releases either. That said, it's still one of the most addicting action-strategy games out there."
  }
- ```

- ## Dataset in GEM

- ### Rationale

- #### What does this dataset contribute toward better generation evaluation and why is it part of GEM?

- ViGGO is a fairly small dataset but includes a greater variety of utterance types than most other datasets for NLG from structured meaning representations. This makes it more interesting from the perspective of model evaluation, since models have to learn to differentiate between various dialogue act types that share the same slots.

- #### Do other datasets for the high level task exist?

- yes

- #### Does this dataset cover other languages than other datasets for the same task?

- no

- #### What else sets this dataset apart from other similar datasets in GEM?

- ViGGO's language is more casual and conversational -- as opposed to information-seeking -- which differentiates it from the majority of popular datasets for the same type of data-to-text task. Moreover, the video game domain is a rather uncommon one in the NLG community, despite being very well-suited for data-to-text generation, considering it offers entities with many attributes to talk about, which can be described in a structured format.

- ### GEM Additional Curation

- #### Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data?

- no

- #### Does GEM provide additional splits to the dataset?

- no

- ### Getting Started

- #### Getting started with in-depth research on the task. Add relevant pointers to resources that researchers can consult when they want to get started digging deeper into the task.

- - [E2E NLG Challenge](http://www.macs.hw.ac.uk/InteractionLab/E2E/)

- #### Technical terms used in this card and the dataset and their definitions

  - MR = meaning representation
- - DA = dialogue act

- ## Previous Results

- ### Previous Results

- #### What metrics are typically used for this task?

- BLEU METEOR ROUGE BERT-Score BLEURT Other: Other Metrics

- #### Definitions of other metrics

- SER (slot error rate): Indicates the proportion of missing/incorrect/duplicate/hallucinated slot mentions in the utterances across a test set. The closer to zero a model scores in this metric, the more semantically accurate its outputs are. This metric is typically calculated either manually on a small sample of generated outputs, or heuristically using domain-specific regex rules and gazetteers.

- #### Are previous results available?

- yes

- #### What are the most relevant previous results for this task/dataset?

  - [Juraska et al., 2019. ViGGO: A Video Game Corpus for Data-To-Text Generation in Open-Domain Conversation.](https://aclanthology.org/W19-8623/)
  - [Harkous et al., 2020. Have Your Text and Use It Too! End-to-End Neural Data-to-Text Generation with Semantic Fidelity.](https://aclanthology.org/2020.coling-main.218/)
  - [Kedzie and McKeown, 2020. Controllable Meaning Representation to Text Generation: Linearization and Data Augmentation Strategies.](https://aclanthology.org/2020.emnlp-main.419/)
- - [Juraska and Walker, 2021. Attention Is Indeed All You Need: Semantically Attention-Guided Decoding for Data-to-Text NLG.](https://aclanthology.org/2021.inlg-1.45/)
- ## Dataset Curation

- ### Original Curation

- #### Original curation rationale

  The primary motivation behind ViGGO was to create a data-to-text corpus in a new but conversational domain, and intended for use in open-domain chatbots rather than task-oriented dialogue systems. To this end, the dataset contains utterances of 9 generalizable and conversational dialogue act types, revolving around various aspects of video games. The idea is that similar, relatively small datasets could fairly easily be collected for other conversational domains -- especially other entertainment domains (such as music or books), but perhaps also topics like animals or food -- to support an open-domain conversational agent with controllable neural NLG.

- Another desired quality of the ViGGO dataset was cleanliness (no typos and grammatical errors) and semantic accuracy, which has often not been the case with other crowdsourced data-to-text corpora. In general, for the data-to-text generation task, there is arguably no need to put the burden on the generation model to figure out the noise, since the noise would not be expected to be there in a real-world system whose dialogue manager that creates the input for the NLG module is usually configurable and tightly controlled.

- #### What was the communicative goal?

- Produce a response from a structured meaning representation in the context of a conversation about video games. It can be a brief opinion or a description of a game, as well as a request for attribute (e.g., genre, player perspective, or platform) preference/confirmation or an inquiry about liking a particular type of games.

- #### Is the dataset aggregated from different data sources?

- no

- ### Language Data

- #### How was the language data obtained?

- Crowdsourced

- #### If crowdsourced, where from?

- Amazon Mechanical Turk

- #### What further information do we have on the language producers?

- The paid crowdworkers who produced the reference utterances were from English-speaking countries, and they had at least 1,000 HITs approved and a HIT approval rate of 98% or more. Furthermore, in the instructions, crowdworkers were discouraged from taking on the task unless they considered themselves a gamer.

- #### Does the language in the dataset focus on specific topics? How would you describe them?

- The dataset focuses on video games and their various aspects, and hence the language of the utterances may contain video game-specific jargon.

- #### Was the text validated by a different worker or a data curator?

- validated by data curator

- #### How was the text data pre-processed? (Enter N/A if the text was not pre-processed)

  First, regular expressions were used to enforce several standardization policies regarding special characters, punctuation, and the correction of undesired abbreviations/misspellings of standard domain-specific terms (e.g., terms like "Play station" or "PS4" would be changed to the uniform "PlayStation"). At the same time, hyphens were removed or enforced uniformly in certain terms, for example, "single-player". Although phrases such as "first person" should correctly have a hyphen when used as adjective, the crowdworkers used this rule very inconsistently. In order to avoid model outputs being penalized during the evaluation by the arbitrary choice of a hyphen presence or absence in the reference utterances, the hyphen was removed in all such phrases regardless of the noun vs. adjective use.

  Second, an extensive set of heuristics was developed to identify slot-related errors. This process revealed the vast majority of missing or incorrect slot mentions, which were subsequently fixed according to the corresponding MRs. This eventually led to the development of a robust, cross-domain, heuristic slot aligner that can be used for automatic slot error rate evaluation. For details, see the appendix in [Juraska and Walker, 2021](https://aclanthology.org/2021.inlg-1.45/).

- Crowdworkers would sometimes also inject a piece of information which was not present in the MR, some of which is not even represented by any of the slots, e.g., plot or main characters. This unsolicited information was removed from the utterances so as to avoid confusing the neural model. Finally, any remaining typos and grammatical errors were resolved.
- #### Were text instances selected or filtered?

- manually

- #### What were the selection criteria?

- Compliance with the indicated dialogue act type, semantic accuracy (i.e., all information in the corresponding MR mentioned and that correctly), and minimal extraneous information (e.g., personal experience/opinion). Whenever it was within a reasonable amount of effort, the utterances were manually fixed instead of being discarded/crowdsourced anew.

- ### Structured Annotations

- #### Does the dataset have additional annotations for each instance?

- none

- #### Was an annotation service used?

- no

- ### Consent

- #### Was there a consent policy involved when gathering the data?

- no

- ### Private Identifying Information (PII)

- #### Does the source language data likely contain Personal Identifying Information about the data creators or subjects?

- no PII

- #### Provide a justification for selecting `no PII` above.

- Crowdworkers were instructed to only express the information in the provided meaning representation, which never prompted them to mention anything about themselves. Occasionally, they would still include a bit of personal experience (e.g., "I used to like the game as a kid.") or opinion, but these would be too general to be considered PII.

- ### Maintenance

- #### Does the original dataset have a maintenance plan?

- no

- ## Broader Social Context

- ### Previous Work on the Social Impact of the Dataset

- #### Are you aware of cases where models trained on the task featured in this dataset ore related tasks have been used in automated systems?

- no

- ### Impact on Under-Served Communities

- #### Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for exemple because their language, language variety, or social or geographical context is underepresented in NLP and NLG resources (datasets and models).

- no

- ### Discussion of Biases

- #### Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group.

- no

- ## Considerations for Using the Data

- ### PII Risks and Liability

- ### Licenses

- ### Known Technical Limitations

- #### Describe any known technical limitations, such as spurrious correlations, train/test overlap, annotation biases, or mis-annotations, and cite the works that first identified these limitations when possible.

- The dataset is limited to a single domain: video games. One caveat of using a language generator trained on this dataset in a dialogue system as-is is that multiple subsequent turns discussing the same video game would be repeating its full name. ViGGO was designed for generation without context, and therefore it is up to the dialogue manager to ensure that pronouns are substituted for the names whenever it would sound more natural in a dialogue. Alternately, the dataset can easily be augmented with automatically constructed samples which omit the `name` slot in the MR and replace the name with a pronoun in the reference utterance.

+ ---
+ annotations_creators:
+ - none
+ language_creators:
+ - unknown
+ languages:
+ - unknown
+ licenses:
+ - cc-by-sa-4.0
+ multilinguality:
+ - unknown
+ pretty_name: viggo
+ size_categories:
+ - unknown
+ source_datasets:
+ - original
+ task_categories:
+ - data-to-text
+ task_ids:
+ - unknown
+ ---
+
+ # Dataset Card for GEM/viggo
+
+ ## Dataset Description
+
+ - **Homepage:** https://nlds.soe.ucsc.edu/viggo
+ - **Repository:** [Needs More Information]
+ - **Paper:** https://aclanthology.org/W19-8623/
+ - **Leaderboard:** N/A
+ - **Point of Contact:** Juraj Juraska
+
+ ### Link to Main Data Card
+
+ You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/viggo).
+
+ ### Dataset Summary
+
+ ViGGO is an English data-to-text generation dataset in the video game domain, with target responses being more conversational than information-seeking, yet constrained to the information presented in a meaning representation. The dataset is relatively small, with about 5,000 training examples, but very clean, and can thus serve for evaluating the transfer learning, low-resource, or few-shot capabilities of neural models.
+
+ You can load the dataset via:
+ ```
+ import datasets
+ data = datasets.load_dataset('GEM/viggo')
+ ```
+ The data loader can be found [here](https://huggingface.co/datasets/GEM/viggo).
+
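Once loaded, the two documented fields can be read directly from any split. The following is a minimal usage sketch (not part of the card itself); it assumes the split keys `train`, `validation`, and `test`, matching the splits described further down.

```python
import datasets

# Load the GEM version of ViGGO and inspect one training example.
data = datasets.load_dataset('GEM/viggo')
example = data['train'][0]
print(example['mr'])   # structured meaning representation, e.g., "inform(name[...], ...)"
print(example['ref'])  # one of the reference utterances written for that MR
```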
+ #### website
+ [Website](https://nlds.soe.ucsc.edu/viggo)
+
+ #### paper
+ [ACL Anthology](https://aclanthology.org/W19-8623/)
+
+ #### authors
+ Juraj Juraska, Kevin K. Bowden, Marilyn Walker

+ ## Dataset Overview

+ ### Where to find the Data and its Documentation

+ #### Webpage

+ <!-- info: What is the webpage for the dataset (if it exists)? -->
+ <!-- scope: telescope -->
+ [Website](https://nlds.soe.ucsc.edu/viggo)

+ #### Paper

+ <!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
+ <!-- scope: telescope -->
+ [ACL Anthology](https://aclanthology.org/W19-8623/)

+ #### BibTex

+ <!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
+ <!-- scope: microscope -->
  ```
  @inproceedings{juraska-etal-2019-viggo,
  title = "{V}i{GGO}: A Video Game Corpus for Data-To-Text Generation in Open-Domain Conversation",
  doi = "10.18653/v1/W19-8623",
  pages = "164--172",
  }
+ ```

+ #### Contact Name

+ <!-- quick -->
+ <!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
+ <!-- scope: periscope -->
+ Juraj Juraska

+ #### Contact Email

+ <!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
+ <!-- scope: periscope -->

+ #### Has a Leaderboard?

+ <!-- info: Does the dataset have an active leaderboard? -->
+ <!-- scope: telescope -->
+ no

+ ### Languages and Intended Use

+ #### Multilingual?

+ <!-- quick -->
+ <!-- info: Is the dataset multilingual? -->
+ <!-- scope: telescope -->
+ no

+ #### Covered Languages

+ <!-- quick -->
+ <!-- info: What languages/dialects are covered in the dataset? -->
+ <!-- scope: telescope -->
+ `English`

+ #### License

+ <!-- quick -->
+ <!-- info: What is the license of the dataset? -->
+ <!-- scope: telescope -->
+ cc-by-sa-4.0: Creative Commons Attribution Share Alike 4.0 International

+ #### Intended Use

+ <!-- info: What is the intended use of the dataset? -->
+ <!-- scope: microscope -->
+ ViGGO was designed for the task of data-to-text generation in chatbots (as opposed to task-oriented dialogue systems), with target responses being more conversational than information-seeking, yet constrained to the information presented in a meaning representation. The dataset, being relatively small and clean, can also serve for demonstrating transfer learning capabilities of neural models.

+ #### Primary Task

+ <!-- info: What primary task does the dataset support? -->
+ <!-- scope: telescope -->
+ Data-to-Text

+ ### Credit

+ #### Curation Organization Type(s)

+ <!-- info: In what kind of organization did the dataset curation happen? -->
+ <!-- scope: telescope -->
+ `academic`

+ #### Curation Organization(s)

+ <!-- info: Name the organization(s). -->
+ <!-- scope: periscope -->
+ University of California, Santa Cruz

+ #### Dataset Creators

+ <!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
+ <!-- scope: microscope -->
+ Juraj Juraska, Kevin K. Bowden, Marilyn Walker

+ #### Who added the Dataset to GEM?

+ <!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
+ <!-- scope: microscope -->
+ Juraj Juraska

+ ### Dataset Structure

+ #### Data Fields

+ <!-- info: List and describe the fields present in the dataset. -->
+ <!-- scope: telescope -->
  Each example in the dataset has the following two fields:

  - `mr`: A meaning representation (MR) that, in a structured format, provides the information to convey, as well as the desired dialogue act (DA) type.
  - `ref`: A reference output, i.e., a corresponding utterance realizing all the information in the MR.

  Each MR is a flattened dictionary of attribute-and-value pairs, "wrapped" in the dialogue act type indication. This format was chosen primarily for its compactness, but also to allow for easy concatenation of multiple DAs (each with potentially different attributes) in a single MR.

  Following is the list of all possible attributes (which are also referred to as "slots") in ViGGO, along with their types/possible values:

  - `name`: The name of a video game (e.g., Rise of the Tomb Raider).
  - `release_year`: The year a video game was released in (e.g., 2015).
  - `exp_release_date`: For a not-yet-released game, the date when it is expected to be released (e.g., February 22, 2019). *Note: This slot cannot appear together with `release_year` in the same dialogue act.*
  - `has_mac_release`: Indicates whether a game is supported on macOS (possible values: yes, no).
  - `specifier`: A game specifier used by the `request` DA, typically an adjective (e.g., addictive, easiest, overrated, visually impressive).

+ Each MR in the dataset has 3 distinct reference utterances, which are represented as 3 separate examples with the same MR.

+ #### Reason for Structure

+ <!-- info: How was the dataset structure determined? -->
+ <!-- scope: microscope -->
+ The dataset structure mostly follows the format of the popular E2E dataset, but with added dialogue act type indications, newly introduced list-type attributes, and a unified naming convention for multi-word attribute names.

+ #### Example Instance

+ <!-- info: Provide a JSON formatted example of a typical instance in the dataset. -->
+ <!-- scope: periscope -->
  ```
  {
  "mr": "give_opinion(name[SpellForce 3], rating[poor], genres[real-time strategy, role-playing], player_perspective[bird view])",
  "ref": "I think that SpellForce 3 is one of the worst games I've ever played. Trying to combine the real-time strategy and role-playing genres just doesn't work, and the bird view perspective makes it near impossible to play."
  }
  ```
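To make the MR syntax above concrete, here is a small illustrative parsing sketch; `parse_mr` is a hypothetical helper written for this card, not part of the dataset or its loader.

```python
import re

def parse_mr(mr: str):
    """Illustrative parser: split a ViGGO MR into its dialogue act type and a
    dict mapping each slot to its (possibly empty) string value."""
    da_type, slot_str = re.match(r"(\w+)\((.*)\)\s*$", mr).groups()
    slots = dict(re.findall(r"(\w+)\[(.*?)\]", slot_str))
    return da_type, slots

da, slots = parse_mr(
    "give_opinion(name[SpellForce 3], rating[poor], "
    "genres[real-time strategy, role-playing], player_perspective[bird view])"
)
# da == "give_opinion"; slots["genres"] == "real-time strategy, role-playing"
# (list-valued slots stay as comma-separated strings in this sketch)
```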
 
+ #### Data Splits

+ <!-- info: Describe and name the splits in the dataset if there are more than one. -->
+ <!-- scope: periscope -->
  ViGGO is split into 3 partitions, with no MRs in common between the training set and either the validation or the test set (and that *after* delexicalizing the `name` and `developer` slots). The ratio of examples in the partitions is approximately 7.5 : 1 : 1.5, with their exact sizes listed below:

  - **Train:** 5,103 (1,675 unique MRs)
  - **Validation:** 714 (238 unique MRs)
  - **Test:** 1,083 (359 unique MRs)
  - **TOTAL:** 6,900 (2,253 unique MRs)

+ *Note: The reason why the number of unique MRs is not exactly one third of all examples is that for each `request_attribute` DA (which only has one slot, and that without a value) 12 reference utterances were collected instead of 3.*

+ #### Splitting Criteria

+ <!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. -->
+ <!-- scope: microscope -->
+ A similar MR length and slot distribution was preserved across the partitions. The distribution of DA types, on the other hand, is skewed slightly toward fewer `inform` DA instances (the most prevalent DA type) and a higher proportion of the less prevalent DAs in the validation and the test set.
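The delexicalized-overlap property described above can be spot-checked in a few lines. The sketch below is illustrative only; it reuses the `data` object from the loading snippet earlier together with a hypothetical `delex` helper.

```python
import re

def delex(mr: str) -> str:
    # Replace the values of the `name` and `developer` slots with placeholders.
    return re.sub(r"\b(name|developer)\[.*?\]", r"\1[PLACEHOLDER]", mr)

train_mrs = {delex(ex["mr"]) for ex in data["train"]}
eval_mrs = {delex(ex["mr"]) for ex in data["validation"]} | {delex(ex["mr"]) for ex in data["test"]}
assert train_mrs.isdisjoint(eval_mrs)  # no delexicalized MR shared between train and val/test
```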
 
+ #### Outlier Examples

+ <!-- info: What does an outlier of the dataset in terms of length/perplexity/embedding look like? -->
+ <!-- scope: microscope -->
  ```
  {
  "mr": "request_attribute(player_perspective[])",
  "mr": "inform(name[Super Bomberman], release_year[1993], genres[action, strategy], has_multiplayer[no], platforms[Nintendo, PC], available_on_steam[no], has_linux_release[no], has_mac_release[no])",
  "ref": "Super Bomberman is one of my favorite Nintendo games, also available on PC, though not through Steam. It came out all the way back in 1993, and you can't get it for any modern consoles, unfortunately, so no online multiplayer, or of course Linux or Mac releases either. That said, it's still one of the most addicting action-strategy games out there."
  }
+ ```

+ ## Dataset in GEM

+ ### Rationale for Inclusion in GEM

+ #### Why is the Dataset in GEM?

+ <!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
+ <!-- scope: microscope -->
+ ViGGO is a fairly small dataset but includes a greater variety of utterance types than most other datasets for NLG from structured meaning representations. This makes it more interesting from the perspective of model evaluation, since models have to learn to differentiate between various dialogue act types that share the same slots.

+ #### Similar Datasets

+ <!-- info: Do other datasets for the high level task exist? -->
+ <!-- scope: telescope -->
+ yes

+ #### Unique Language Coverage

+ <!-- info: Does this dataset cover other languages than other datasets for the same task? -->
+ <!-- scope: periscope -->
+ no

+ #### Difference from other GEM datasets

+ <!-- info: What else sets this dataset apart from other similar datasets in GEM? -->
+ <!-- scope: microscope -->
+ ViGGO's language is more casual and conversational -- as opposed to information-seeking -- which differentiates it from the majority of popular datasets for the same type of data-to-text task. Moreover, the video game domain is a rather uncommon one in the NLG community, despite being very well-suited for data-to-text generation, considering it offers entities with many attributes to talk about, which can be described in a structured format.

+ ### GEM-Specific Curation

+ #### Modified for GEM?

+ <!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
+ <!-- scope: telescope -->
+ no

+ #### Additional Splits?

+ <!-- info: Does GEM provide additional splits to the dataset? -->
+ <!-- scope: telescope -->
+ no

+ ### Getting Started with the Task

+ #### Pointers to Resources

+ <!-- info: Getting started with in-depth research on the task. Add relevant pointers to resources that researchers can consult when they want to get started digging deeper into the task. -->
+ <!-- scope: microscope -->
+ - [E2E NLG Challenge](http://www.macs.hw.ac.uk/InteractionLab/E2E/)

+ #### Technical Terms

+ <!-- info: Technical terms used in this card and the dataset and their definitions -->
+ <!-- scope: microscope -->
  - MR = meaning representation
+ - DA = dialogue act

+ ## Previous Results

+ ### Previous Results

+ #### Metrics

+ <!-- info: What metrics are typically used for this task? -->
+ <!-- scope: periscope -->
+ `BLEU`, `METEOR`, `ROUGE`, `BERT-Score`, `BLEURT`, `Other: Other Metrics`

+ #### Other Metrics

+ <!-- info: Definitions of other metrics -->
+ <!-- scope: periscope -->
+ SER (slot error rate): Indicates the proportion of missing/incorrect/duplicate/hallucinated slot mentions in the utterances across a test set. The closer to zero a model scores in this metric, the more semantically accurate its outputs are. This metric is typically calculated either manually on a small sample of generated outputs, or heuristically using domain-specific regex rules and gazetteers.
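As a rough illustration of how such a heuristic check might look (actual SER tooling, such as the slot aligner referenced later in this card, uses far richer domain-specific rules and gazetteers), here is a toy sketch that only flags slot values missing verbatim from an utterance:

```python
import re

def missing_slots(mr: str, utterance: str):
    """Toy check: which non-boolean slot values from the MR do not appear
    verbatim in the utterance? Real SER scripts are much more elaborate."""
    boolean_slots = {"has_multiplayer", "available_on_steam",
                     "has_linux_release", "has_mac_release"}
    slots = re.findall(r"(\w+)\[(.+?)\]", mr)  # only slots that carry a value
    text = utterance.lower()
    return [slot for slot, value in slots
            if slot not in boolean_slots and value.lower() not in text]

print(missing_slots(
    "inform(name[Super Bomberman], release_year[1993], genres[action])",
    "Super Bomberman is a 1993 action game."))
# -> [] ; any slot listed here would count toward the slot error rate
```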
 
+ #### Previous results available?

+ <!-- info: Are previous results available? -->
+ <!-- scope: telescope -->
+ yes

+ #### Relevant Previous Results

+ <!-- info: What are the most relevant previous results for this task/dataset? -->
+ <!-- scope: microscope -->
  - [Juraska et al., 2019. ViGGO: A Video Game Corpus for Data-To-Text Generation in Open-Domain Conversation.](https://aclanthology.org/W19-8623/)
  - [Harkous et al., 2020. Have Your Text and Use It Too! End-to-End Neural Data-to-Text Generation with Semantic Fidelity.](https://aclanthology.org/2020.coling-main.218/)
  - [Kedzie and McKeown, 2020. Controllable Meaning Representation to Text Generation: Linearization and Data Augmentation Strategies.](https://aclanthology.org/2020.emnlp-main.419/)
+ - [Juraska and Walker, 2021. Attention Is Indeed All You Need: Semantically Attention-Guided Decoding for Data-to-Text NLG.](https://aclanthology.org/2021.inlg-1.45/)

+ ## Dataset Curation

+ ### Original Curation

+ #### Original Curation Rationale

+ <!-- info: Original curation rationale -->
+ <!-- scope: telescope -->
  The primary motivation behind ViGGO was to create a data-to-text corpus in a new but conversational domain, and intended for use in open-domain chatbots rather than task-oriented dialogue systems. To this end, the dataset contains utterances of 9 generalizable and conversational dialogue act types, revolving around various aspects of video games. The idea is that similar, relatively small datasets could fairly easily be collected for other conversational domains -- especially other entertainment domains (such as music or books), but perhaps also topics like animals or food -- to support an open-domain conversational agent with controllable neural NLG.

+ Another desired quality of the ViGGO dataset was cleanliness (no typos and grammatical errors) and semantic accuracy, which has often not been the case with other crowdsourced data-to-text corpora. In general, for the data-to-text generation task, there is arguably no need to put the burden on the generation model to figure out the noise, since the noise would not be expected to be there in a real-world system, whose dialogue manager (which creates the input for the NLG module) is usually configurable and tightly controlled.

+ #### Communicative Goal

+ <!-- info: What was the communicative goal? -->
+ <!-- scope: periscope -->
+ Produce a response from a structured meaning representation in the context of a conversation about video games. It can be a brief opinion or a description of a game, as well as a request for attribute (e.g., genre, player perspective, or platform) preference/confirmation, or an inquiry about liking a particular type of game.

+ #### Sourced from Different Sources

+ <!-- info: Is the dataset aggregated from different data sources? -->
+ <!-- scope: telescope -->
+ no

+ ### Language Data

+ #### How was Language Data Obtained?

+ <!-- info: How was the language data obtained? -->
+ <!-- scope: telescope -->
+ `Crowdsourced`

+ #### Where was it crowdsourced?

+ <!-- info: If crowdsourced, where from? -->
+ <!-- scope: periscope -->
+ `Amazon Mechanical Turk`

+ #### Language Producers

+ <!-- info: What further information do we have on the language producers? -->
+ <!-- scope: microscope -->
+ The paid crowdworkers who produced the reference utterances were from English-speaking countries, and they had at least 1,000 HITs approved and a HIT approval rate of 98% or more. Furthermore, in the instructions, crowdworkers were discouraged from taking on the task unless they considered themselves a gamer.

+ #### Topics Covered

+ <!-- info: Does the language in the dataset focus on specific topics? How would you describe them? -->
+ <!-- scope: periscope -->
+ The dataset focuses on video games and their various aspects, and hence the language of the utterances may contain video game-specific jargon.

+ #### Data Validation

+ <!-- info: Was the text validated by a different worker or a data curator? -->
+ <!-- scope: telescope -->
+ validated by data curator

+ #### Data Preprocessing

+ <!-- info: How was the text data pre-processed? (Enter N/A if the text was not pre-processed) -->
+ <!-- scope: microscope -->
  First, regular expressions were used to enforce several standardization policies regarding special characters, punctuation, and the correction of undesired abbreviations/misspellings of standard domain-specific terms (e.g., terms like "Play station" or "PS4" would be changed to the uniform "PlayStation"). At the same time, hyphens were removed or enforced uniformly in certain terms, for example, "single-player". Although phrases such as "first person" should correctly have a hyphen when used as an adjective, the crowdworkers applied this rule very inconsistently. In order to avoid model outputs being penalized during the evaluation by the arbitrary presence or absence of a hyphen in the reference utterances, the hyphen was removed in all such phrases regardless of the noun vs. adjective use.

  Second, an extensive set of heuristics was developed to identify slot-related errors. This process revealed the vast majority of missing or incorrect slot mentions, which were subsequently fixed according to the corresponding MRs. This eventually led to the development of a robust, cross-domain, heuristic slot aligner that can be used for automatic slot error rate evaluation. For details, see the appendix in [Juraska and Walker, 2021](https://aclanthology.org/2021.inlg-1.45/).

+ Crowdworkers would sometimes also inject a piece of information that was not present in the MR, some of which is not even represented by any of the slots, e.g., plot or main characters. This unsolicited information was removed from the utterances so as to avoid confusing the neural model. Finally, any remaining typos and grammatical errors were resolved.
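The exact rules are not published in this card, but the kind of regex standardization described above might look roughly like the following; the patterns are hypothetical examples, not the original preprocessing script.

```python
import re

def normalize(text: str) -> str:
    # Hypothetical standardization rules in the spirit of the description above.
    text = re.sub(r"\b[Pp]lay\s?[Ss]tation\b", "PlayStation", text)  # unify the platform name
    text = re.sub(r"\bfirst-person\b", "first person", text)         # drop inconsistently used hyphens
    text = re.sub(r"\bsingle player\b", "single-player", text)       # enforce a hyphen uniformly
    return text

print(normalize("A single player shooter for the Play station with a first-person view."))
# -> "A single-player shooter for the PlayStation with a first person view."
```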
+ #### Was Data Filtered?

+ <!-- info: Were text instances selected or filtered? -->
+ <!-- scope: telescope -->
+ manually

+ #### Filter Criteria

+ <!-- info: What were the selection criteria? -->
+ <!-- scope: microscope -->
+ Compliance with the indicated dialogue act type, semantic accuracy (i.e., all information in the corresponding MR mentioned, and mentioned correctly), and minimal extraneous information (e.g., personal experience/opinion). Whenever it was within a reasonable amount of effort, the utterances were manually fixed instead of being discarded/crowdsourced anew.

+ ### Structured Annotations

+ #### Additional Annotations?

+ <!-- quick -->
+ <!-- info: Does the dataset have additional annotations for each instance? -->
+ <!-- scope: telescope -->
+ none

+ #### Annotation Service?

+ <!-- info: Was an annotation service used? -->
+ <!-- scope: telescope -->
+ no

+ ### Consent

+ #### Any Consent Policy?

+ <!-- info: Was there a consent policy involved when gathering the data? -->
+ <!-- scope: telescope -->
+ no

+ ### Private Identifying Information (PII)

+ #### Contains PII?

+ <!-- quick -->
+ <!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
+ <!-- scope: telescope -->
+ no PII

+ #### Justification for no PII

+ <!-- info: Provide a justification for selecting `no PII` above. -->
+ <!-- scope: periscope -->
+ Crowdworkers were instructed to only express the information in the provided meaning representation, which never prompted them to mention anything about themselves. Occasionally, they would still include a bit of personal experience (e.g., "I used to like the game as a kid.") or opinion, but these would be too general to be considered PII.

+ ### Maintenance

+ #### Any Maintenance Plan?

+ <!-- info: Does the original dataset have a maintenance plan? -->
+ <!-- scope: telescope -->
+ no

+ ## Broader Social Context

+ ### Previous Work on the Social Impact of the Dataset

+ #### Usage of Models based on the Data

+ <!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? -->
+ <!-- scope: telescope -->
+ no

+ ### Impact on Under-Served Communities

+ #### Addresses needs of underserved Communities?

+ <!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). -->
+ <!-- scope: telescope -->
+ no

+ ### Discussion of Biases

+ #### Any Documented Social Biases?

+ <!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
+ <!-- scope: telescope -->
+ no

+ ## Considerations for Using the Data

+ ### PII Risks and Liability

+ ### Licenses

+ ### Known Technical Limitations

+ #### Technical Limitations

+ <!-- info: Describe any known technical limitations, such as spurious correlations, train/test overlap, annotation biases, or mis-annotations, and cite the works that first identified these limitations when possible. -->
+ <!-- scope: microscope -->
+ The dataset is limited to a single domain: video games. One caveat of using a language generator trained on this dataset in a dialogue system as-is is that multiple subsequent turns discussing the same video game would keep repeating its full name. ViGGO was designed for generation without context, and therefore it is up to the dialogue manager to ensure that pronouns are substituted for the names whenever that would sound more natural in a dialogue. Alternatively, the dataset can easily be augmented with automatically constructed samples that omit the `name` slot in the MR and replace the name with a pronoun in the reference utterance.
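The augmentation idea mentioned above could look roughly like the following sketch; `make_nameless_sample` is a hypothetical helper, not something shipped with the dataset.

```python
import re

def make_nameless_sample(example):
    """Rough sketch: derive an extra training sample whose MR has no `name` slot
    and whose reference uses a pronoun instead of the game's name."""
    match = re.search(r"name\[(.*?)\]", example["mr"])
    if match is None or match.group(1) not in example["ref"]:
        return None  # no name slot, or the reference paraphrases the name
    mr = re.sub(r"name\[.*?\](, )?", "", example["mr"])
    ref = example["ref"].replace(match.group(1), "it", 1)
    # Capitalization and pronoun grammar are left to further post-processing.
    return {"mr": mr, "ref": ref}
```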