parquet-converter committed
Commit df24389 · 1 Parent(s): d7c03d1

Update parquet files
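
This commit replaces the repository's Python loading script (few_glue.py, deleted below) with pre-built parquet files, one directory per task configuration and one file per split. As a minimal sketch, assuming the repository id juny116/few_glue (the id referenced in the deleted loading script) and a `datasets` release that supports script-free parquet repositories, the converted data should load roughly like this:

    # Hypothetical usage sketch, not part of the commit itself.
    from datasets import load_dataset

    ds = load_dataset("juny116/few_glue", "boolq")  # config name = top-level directory
    print(ds)             # expected splits: train, validation, test
    print(ds["test"][0])  # boolq features: question, passage, idx, label
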
.gitattributes CHANGED
@@ -46,3 +46,5 @@ FewGLUE_32dev/multirc.zip filter=lfs diff=lfs merge=lfs -text
  FewGLUE_32dev/record.zip filter=lfs diff=lfs merge=lfs -text
  FewGLUE_32dev/rte.zip filter=lfs diff=lfs merge=lfs -text
  FewGLUE_32dev/wic.zip filter=lfs diff=lfs merge=lfs -text
+ boolq/few_glue-test.parquet filter=lfs diff=lfs merge=lfs -text
+ record/few_glue-test.parquet filter=lfs diff=lfs merge=lfs -text

FewGLUE_32dev/copa.zip DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:ba4785378b5d8a3d78a074da325067348425c8e82271d57bcb84f0758ef2686f
- size 8905

FewGLUE_32dev/multirc.zip DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:f2797b7551a1818eb65a4b30dbc0c5545469216bf67ad1c8d74b22624ebff3e2
- size 179573

FewGLUE_32dev/record.zip DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:05bfd9279f782f52c387121da13e9b40520f9746257ccdad48c96ed0578f68d6
- size 4825596

FewGLUE_32dev/rte.zip DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:1f2c18ad91bb1363eefa8c27576e4ef9eb98209618c2d7c76b85466934e1a45d
- size 50793

FewGLUE_32dev/wic.zip DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:8111406e1d595f09ce3340d972c01d1afa7143154570aa3f75d726b5dd659f4f
- size 41426

FewGLUE_32dev/wsc.zip DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:37a1bed8c47a5f8a37f35fd9ad1af79a9aef248b52d1ec83cc4717cebb413c24
- size 9326

README.md DELETED
@@ -1,13 +0,0 @@
- # FewGLUE_32dev
-
- This repository contains the FewGLUE_32dev dataset, an extension of [FewGLUE](https://github.com/timoschick/fewglue) that allows NLU few-shot learning tasks to be benchmarked under a new 32-sample-dev setting. [Previous work](https://arxiv.org/abs/2012.15723) has shown that using larger development sets confers a significant advantage beyond the few-shot regime. FewGLUE_32dev is built by adding few-shot dev sets of 32 examples each, randomly selected from the otherwise unused examples of the original SuperGLUE training sets.
-
-
- ### Data Format
-
- The data files follow the exact same format as the [SuperGLUE task files](https://super.gluebenchmark.com/tasks).
-
-
- ### Structure
-
- For each SuperGLUE task `T`, the directory `FewGLUE_32dev/T` contains the 32-sample dev file (`dev32.jsonl`), which consists of 32 examples for few-shot validation.

FewGLUE_32dev/boolq.zip → boolq/few_glue-test.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:db6e059e679ee7d9f2d2afe67b99f151012dec3bc6b3ab76e33a971106e58574
- size 859846
+ oid sha256:83572ad236930c37170be7a5b5a9331c1e21f206721ab2d368332506e05173df
+ size 1313730
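
As a quick sanity check of a converted shard, the renamed file can also be inspected directly; the sketch below assumes it has been downloaded locally (for example with huggingface_hub.hf_hub_download) and that its columns follow the boolq config of the deleted loading script.

    # Illustrative only; the local path is an assumption.
    import pyarrow.parquet as pq

    table = pq.read_table("boolq/few_glue-test.parquet")
    print(table.schema)    # expected columns: question, passage, idx, label
    print(table.num_rows)
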
boolq/few_glue-train.parquet ADDED
Binary file (19.9 kB).

boolq/few_glue-validation.parquet ADDED
Binary file (19.5 kB).

cb/few_glue-test.parquet ADDED
Binary file (18 kB).

cb/few_glue-train.parquet ADDED
Binary file (12.6 kB).

cb/few_glue-validation.parquet ADDED
Binary file (11.2 kB).

copa/few_glue-test.parquet ADDED
Binary file (12 kB).

copa/few_glue-train.parquet ADDED
Binary file (6.34 kB).

copa/few_glue-validation.parquet ADDED
Binary file (6.29 kB).

few_glue.py DELETED
@@ -1,590 +0,0 @@
1
- # coding=utf-8
2
- # Copyright 2020 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
3
- #
4
- # Licensed under the Apache License, Version 2.0 (the "License");
5
- # you may not use this file except in compliance with the License.
6
- # You may obtain a copy of the License at
7
- #
8
- # http://www.apache.org/licenses/LICENSE-2.0
9
- #
10
- # Unless required by applicable law or agreed to in writing, software
11
- # distributed under the License is distributed on an "AS IS" BASIS,
12
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- # See the License for the specific language governing permissions and
14
- # limitations under the License.
15
-
16
- # Lint as: python3
17
- """The FewGLUE benchmark."""
18
-
19
-
20
- import json
21
- import os
22
-
23
- import datasets
24
-
25
-
26
- _SUPER_GLUE_CITATION = """\
27
- @article{wang2019superglue,
28
- title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},
29
- author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},
30
- journal={arXiv preprint arXiv:1905.00537},
31
- year={2019}
32
- }
33
- Note that each SuperGLUE dataset has its own citation. Please see the source to
34
- get the correct citation for each contained dataset.
35
- """
36
-
37
- _GLUE_DESCRIPTION = """\
38
- SuperGLUE (https://super.gluebenchmark.com/) is a new benchmark styled after
39
- GLUE with a new set of more difficult language understanding tasks, improved
40
- resources, and a new public leaderboard.
41
- """
42
-
43
- _BOOLQ_DESCRIPTION = """\
44
- BoolQ (Boolean Questions, Clark et al., 2019a) is a QA task where each example consists of a short
45
- passage and a yes/no question about the passage. The questions are provided anonymously and
46
- unsolicited by users of the Google search engine, and afterwards paired with a paragraph from a
47
- Wikipedia article containing the answer. Following the original work, we evaluate with accuracy."""
48
-
49
- _CB_DESCRIPTION = """\
50
- The CommitmentBank (De Marneffe et al., 2019) is a corpus of short texts in which at least
51
- one sentence contains an embedded clause. Each of these embedded clauses is annotated with the
52
- degree to which we expect that the person who wrote the text is committed to the truth of the clause.
53
- The resulting task framed as three-class textual entailment on examples that are drawn from the Wall
54
- Street Journal, fiction from the British National Corpus, and Switchboard. Each example consists
55
- of a premise containing an embedded clause and the corresponding hypothesis is the extraction of
56
- that clause. We use a subset of the data that had inter-annotator agreement above 0.85. The data is
57
- imbalanced (relatively fewer neutral examples), so we evaluate using accuracy and F1, where for
58
- multi-class F1 we compute the unweighted average of the F1 per class."""
59
-
60
- _COPA_DESCRIPTION = """\
61
- The Choice Of Plausible Alternatives (COPA, Roemmele et al., 2011) dataset is a causal
62
- reasoning task in which a system is given a premise sentence and two possible alternatives. The
63
- system must choose the alternative which has the more plausible causal relationship with the premise.
64
- The method used for the construction of the alternatives ensures that the task requires causal reasoning
65
- to solve. Examples either deal with alternative possible causes or alternative possible effects of the
66
- premise sentence, accompanied by a simple question disambiguating between the two instance
67
- types for the model. All examples are handcrafted and focus on topics from online blogs and a
68
- photography-related encyclopedia. Following the recommendation of the authors, we evaluate using
69
- accuracy."""
70
-
71
- _RECORD_DESCRIPTION = """\
72
- (Reading Comprehension with Commonsense Reasoning Dataset, Zhang et al., 2018) is a
73
- multiple-choice QA task. Each example consists of a news article and a Cloze-style question about
74
- the article in which one entity is masked out. The system must predict the masked out entity from a
75
- given list of possible entities in the provided passage, where the same entity may be expressed using
76
- multiple different surface forms, all of which are considered correct. Articles are drawn from CNN
77
- and Daily Mail. Following the original work, we evaluate with max (over all mentions) token-level
78
- F1 and exact match (EM)."""
79
-
80
- _RTE_DESCRIPTION = """\
81
- The Recognizing Textual Entailment (RTE) datasets come from a series of annual competitions
82
- on textual entailment, the problem of predicting whether a given premise sentence entails a given
83
- hypothesis sentence (also known as natural language inference, NLI). RTE was previously included
84
- in GLUE, and we use the same data and format as before: We merge data from RTE1 (Dagan
85
- et al., 2006), RTE2 (Bar Haim et al., 2006), RTE3 (Giampiccolo et al., 2007), and RTE5 (Bentivogli
86
- et al., 2009). All datasets are combined and converted to two-class classification: entailment and
87
- not_entailment. Of all the GLUE tasks, RTE was among those that benefited from transfer learning
88
- the most, jumping from near random-chance performance (~56%) at the time of GLUE's launch to
89
- 85% accuracy (Liu et al., 2019c) at the time of writing. Given the eight point gap with respect to
90
- human performance, however, the task is not yet solved by machines, and we expect the remaining
91
- gap to be difficult to close."""
92
-
93
- _MULTIRC_DESCRIPTION = """\
94
- The Multi-Sentence Reading Comprehension dataset (MultiRC, Khashabi et al., 2018)
95
- is a true/false question-answering task. Each example consists of a context paragraph, a question
96
- about that paragraph, and a list of possible answers to that question which must be labeled as true or
97
- false. Question-answering (QA) is a popular problem with many datasets. We use MultiRC because
98
- of a number of desirable properties: (i) each question can have multiple possible correct answers,
99
- so each question-answer pair must be evaluated independent of other pairs, (ii) the questions are
100
- designed such that answering each question requires drawing facts from multiple context sentences,
101
- and (iii) the question-answer pair format more closely matches the API of other SuperGLUE tasks
102
- than span-based extractive QA does. The paragraphs are drawn from seven domains including news,
103
- fiction, and historical text."""
104
-
105
- _WIC_DESCRIPTION = """\
106
- The Word-in-Context (WiC, Pilehvar and Camacho-Collados, 2019) dataset supports a word
107
- sense disambiguation task cast as binary classification over sentence pairs. Given two sentences and a
108
- polysemous (sense-ambiguous) word that appears in both sentences, the task is to determine whether
109
- the word is used with the same sense in both sentences. Sentences are drawn from WordNet (Miller,
110
- 1995), VerbNet (Schuler, 2005), and Wiktionary. We follow the original work and evaluate using
111
- accuracy."""
112
-
113
- _WSC_DESCRIPTION = """\
114
- The Winograd Schema Challenge (WSC, Levesque et al., 2012) is a reading comprehension
115
- task in which a system must read a sentence with a pronoun and select the referent of that pronoun
116
- from a list of choices. Given the difficulty of this task and the headroom still left, we have included
117
- WSC in SuperGLUE and recast the dataset into its coreference form. The task is cast as a binary
118
- classification problem, as opposed to N-multiple choice, in order to isolate the model's ability to
119
- understand the coreference links within a sentence as opposed to various other strategies that may
120
- come into play in multiple choice conditions. With that in mind, we create a split with 65% negative
121
- majority class in the validation set, reflecting the distribution of the hidden test set, and 52% negative
122
- class in the training set. The training and validation examples are drawn from the original Winograd
123
- Schema dataset (Levesque et al., 2012), as well as those distributed by the affiliated organization
124
- Commonsense Reasoning. The test examples are derived from fiction books and have been shared
125
- with us by the authors of the original dataset. Previously, a version of WSC recast as NLI as included
126
- in GLUE, known as WNLI. No substantial progress was made on WNLI, with many submissions
127
- opting to submit only majority class predictions. WNLI was made especially difficult due to an
128
- adversarial train/dev split: Premise sentences that appeared in the training set sometimes appeared
129
- in the development set with a different hypothesis and a flipped label. If a system memorized the
130
- training set without meaningfully generalizing, which was easy due to the small size of the training
131
- set, it could perform far below chance on the development set. We remove this adversarial design
132
- in the SuperGLUE version of WSC by ensuring that no sentences are shared between the training,
133
- validation, and test sets.
134
- However, the validation and test sets come from different domains, with the validation set consisting
135
- of ambiguous examples such that changing one non-noun phrase word will change the coreference
136
- dependencies in the sentence. The test set consists only of more straightforward examples, with a
137
- high number of noun phrases (and thus more choices for the model), but low to no ambiguity."""
138
-
139
- _AXB_DESCRIPTION = """\
140
- An expert-constructed,
141
- diagnostic dataset that automatically tests models for a broad range of linguistic, commonsense, and
142
- world knowledge. Each example in this broad-coverage diagnostic is a sentence pair labeled with
143
- a three-way entailment relation (entailment, neutral, or contradiction) and tagged with labels that
144
- indicate the phenomena that characterize the relationship between the two sentences. Submissions
145
- to the GLUE leaderboard are required to include predictions from the submission's MultiNLI
146
- classifier on the diagnostic dataset, and analyses of the results were shown alongside the main
147
- leaderboard. Since this broad-coverage diagnostic task has proved difficult for top models, we retain
148
- it in SuperGLUE. However, since MultiNLI is not part of SuperGLUE, we collapse contradiction
149
- and neutral into a single not_entailment label, and request that submissions include predictions
150
- on the resulting set from the model used for the RTE task.
151
- """
152
-
153
- _AXG_DESCRIPTION = """\
154
- Winogender is designed to measure gender
155
- bias in coreference resolution systems. We use the Diverse Natural Language Inference Collection
156
- (DNC; Poliak et al., 2018) version that casts Winogender as a textual entailment task. Each example
157
- consists of a premise sentence with a male or female pronoun and a hypothesis giving a possible
158
- antecedent of the pronoun. Examples occur in minimal pairs, where the only difference between
159
- an example and its pair is the gender of the pronoun in the premise. Performance on Winogender
160
- is measured with both accuracy and the gender parity score: the percentage of minimal pairs for
161
- which the predictions are the same. We note that a system can trivially obtain a perfect gender parity
162
- score by guessing the same class for all examples, so a high gender parity score is meaningless unless
163
- accompanied by high accuracy. As a diagnostic test of gender bias, we view the schemas as having high
164
- positive predictive value and low negative predictive value; that is, they may demonstrate the presence
165
- of gender bias in a system, but not prove its absence.
166
- """
167
-
168
- _BOOLQ_CITATION = """\
169
- @inproceedings{clark2019boolq,
170
- title={BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions},
171
- author={Clark, Christopher and Lee, Kenton and Chang, Ming-Wei, and Kwiatkowski, Tom and Collins, Michael, and Toutanova, Kristina},
172
- booktitle={NAACL},
173
- year={2019}
174
- }"""
175
-
176
- _CB_CITATION = """\
177
- @article{de marneff_simons_tonhauser_2019,
178
- title={The CommitmentBank: Investigating projection in naturally occurring discourse},
179
- journal={proceedings of Sinn und Bedeutung 23},
180
- author={De Marneff, Marie-Catherine and Simons, Mandy and Tonhauser, Judith},
181
- year={2019}
182
- }"""
183
-
184
- _COPA_CITATION = """\
185
- @inproceedings{roemmele2011choice,
186
- title={Choice of plausible alternatives: An evaluation of commonsense causal reasoning},
187
- author={Roemmele, Melissa and Bejan, Cosmin Adrian and Gordon, Andrew S},
188
- booktitle={2011 AAAI Spring Symposium Series},
189
- year={2011}
190
- }"""
191
-
192
- _RECORD_CITATION = """\
193
- @article{zhang2018record,
194
- title={Record: Bridging the gap between human and machine commonsense reading comprehension},
195
- author={Zhang, Sheng and Liu, Xiaodong and Liu, Jingjing and Gao, Jianfeng and Duh, Kevin and Van Durme, Benjamin},
196
- journal={arXiv preprint arXiv:1810.12885},
197
- year={2018}
198
- }"""
199
-
200
- _RTE_CITATION = """\
201
- @inproceedings{dagan2005pascal,
202
- title={The PASCAL recognising textual entailment challenge},
203
- author={Dagan, Ido and Glickman, Oren and Magnini, Bernardo},
204
- booktitle={Machine Learning Challenges Workshop},
205
- pages={177--190},
206
- year={2005},
207
- organization={Springer}
208
- }
209
- @inproceedings{bar2006second,
210
- title={The second pascal recognising textual entailment challenge},
211
- author={Bar-Haim, Roy and Dagan, Ido and Dolan, Bill and Ferro, Lisa and Giampiccolo, Danilo and Magnini, Bernardo and Szpektor, Idan},
212
- booktitle={Proceedings of the second PASCAL challenges workshop on recognising textual entailment},
213
- volume={6},
214
- number={1},
215
- pages={6--4},
216
- year={2006},
217
- organization={Venice}
218
- }
219
- @inproceedings{giampiccolo2007third,
220
- title={The third pascal recognizing textual entailment challenge},
221
- author={Giampiccolo, Danilo and Magnini, Bernardo and Dagan, Ido and Dolan, Bill},
222
- booktitle={Proceedings of the ACL-PASCAL workshop on textual entailment and paraphrasing},
223
- pages={1--9},
224
- year={2007},
225
- organization={Association for Computational Linguistics}
226
- }
227
- @inproceedings{bentivogli2009fifth,
228
- title={The Fifth PASCAL Recognizing Textual Entailment Challenge.},
229
- author={Bentivogli, Luisa and Clark, Peter and Dagan, Ido and Giampiccolo, Danilo},
230
- booktitle={TAC},
231
- year={2009}
232
- }"""
233
-
234
- _MULTIRC_CITATION = """\
235
- @inproceedings{MultiRC2018,
236
- author = {Daniel Khashabi and Snigdha Chaturvedi and Michael Roth and Shyam Upadhyay and Dan Roth},
237
- title = {Looking Beyond the Surface:A Challenge Set for Reading Comprehension over Multiple Sentences},
238
- booktitle = {Proceedings of North American Chapter of the Association for Computational Linguistics (NAACL)},
239
- year = {2018}
240
- }"""
241
-
242
- _WIC_CITATION = """\
243
- @article{DBLP:journals/corr/abs-1808-09121,
244
- author={Mohammad Taher Pilehvar and os{\'{e}} Camacho{-}Collados},
245
- title={WiC: 10, 000 Example Pairs for Evaluating Context-Sensitive Representations},
246
- journal={CoRR},
247
- volume={abs/1808.09121},
248
- year={2018},
249
- url={http://arxiv.org/abs/1808.09121},
250
- archivePrefix={arXiv},
251
- eprint={1808.09121},
252
- timestamp={Mon, 03 Sep 2018 13:36:40 +0200},
253
- biburl={https://dblp.org/rec/bib/journals/corr/abs-1808-09121},
254
- bibsource={dblp computer science bibliography, https://dblp.org}
255
- }"""
256
-
257
- _WSC_CITATION = """\
258
- @inproceedings{levesque2012winograd,
259
- title={The winograd schema challenge},
260
- author={Levesque, Hector and Davis, Ernest and Morgenstern, Leora},
261
- booktitle={Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning},
262
- year={2012}
263
- }"""
264
-
265
- _AXG_CITATION = """\
266
- @inproceedings{rudinger-EtAl:2018:N18,
267
- author = {Rudinger, Rachel and Naradowsky, Jason and Leonard, Brian and {Van Durme}, Benjamin},
268
- title = {Gender Bias in Coreference Resolution},
269
- booktitle = {Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
270
- month = {June},
271
- year = {2018},
272
- address = {New Orleans, Louisiana},
273
- publisher = {Association for Computational Linguistics}
274
- }
275
- """
276
-
277
-
278
- class FewGlueConfig(datasets.BuilderConfig):
279
- """BuilderConfig for SuperGLUE."""
280
-
281
- def __init__(self, features, data_url, citation, url, label_classes=("False", "True"), **kwargs):
282
- """BuilderConfig for SuperGLUE.
283
- Args:
284
- features: `list[string]`, list of the features that will appear in the
285
- feature dict. Should not include "label".
286
- citation: `string`, citation for the data set.
287
- url: `string`, url for information about the data set.
288
- label_classes: `list[string]`, the list of classes for the label if the
289
- label is present as a string. Non-string labels will be cast to either
290
- 'False' or 'True'.
291
- **kwargs: keyword arguments forwarded to super.
292
- """
293
- # Version history:
294
- # 1.0.2: Fixed non-nondeterminism in ReCoRD.
295
- # 1.0.1: Change from the pre-release trial version of SuperGLUE (v1.9) to
296
- # the full release (v2.0).
297
- # 1.0.0: S3 (new shuffling, sharding and slicing mechanism).
298
- # 0.0.2: Initial version.
299
- super(FewGlueConfig, self).__init__(version=datasets.Version("1.0.2"), **kwargs)
300
- self.features = features
301
- self.label_classes = label_classes
302
- self.citation = citation
303
- self.data_url = data_url
304
- self.url = url
305
-
306
-
307
- class FewGlue(datasets.GeneratorBasedBuilder):
308
- """The FewGLUE benchmark."""
309
-
310
- BUILDER_CONFIGS = [
311
- FewGlueConfig(
312
- name="boolq",
313
- description=_BOOLQ_DESCRIPTION,
314
- features=["question", "passage"],
315
- data_url="https://huggingface.co/datasets/juny116/few_glue/resolve/main/FewGLUE_32dev/boolq.zip",
316
- citation=_BOOLQ_CITATION,
317
- url="https://github.com/google-research-datasets/boolean-questions",
318
- ),
319
- FewGlueConfig(
320
- name="cb",
321
- description=_CB_DESCRIPTION,
322
- features=["premise", "hypothesis"],
323
- label_classes=["entailment", "contradiction", "neutral"],
324
- data_url="https://huggingface.co/datasets/juny116/few_glue/resolve/main/FewGLUE_32dev/cb.zip",
325
- citation=_CB_CITATION,
326
- url="https://github.com/mcdm/CommitmentBank",
327
- ),
328
- FewGlueConfig(
329
- name="copa",
330
- description=_COPA_DESCRIPTION,
331
- label_classes=["choice1", "choice2"],
332
- # Note that question will only be the X in the statement "What's
333
- # the X for this?".
334
- features=["premise", "choice1", "choice2", "question"],
335
- data_url="https://huggingface.co/datasets/juny116/few_glue/resolve/main/FewGLUE_32dev/copa.zip",
336
- citation=_COPA_CITATION,
337
- url="http://people.ict.usc.edu/~gordon/copa.html",
338
- ),
339
- FewGlueConfig(
340
- name="multirc",
341
- description=_MULTIRC_DESCRIPTION,
342
- features=["paragraph", "question", "answer"],
343
- data_url="https://huggingface.co/datasets/juny116/few_glue/resolve/main/FewGLUE_32dev/multirc.zip",
344
- citation=_MULTIRC_CITATION,
345
- url="https://cogcomp.org/multirc/",
346
- ),
347
- FewGlueConfig(
348
- name="record",
349
- description=_RECORD_DESCRIPTION,
350
- # Note that entities and answers will be a sequences of strings. Query
351
- # will contain @placeholder as a substring, which represents the word
352
- # to be substituted in.
353
- features=["passage", "query", "entities", "answers"],
354
- data_url="https://huggingface.co/datasets/juny116/few_glue/resolve/main/FewGLUE_32dev/record.zip",
355
- citation=_RECORD_CITATION,
356
- url="https://sheng-z.github.io/ReCoRD-explorer/",
357
- ),
358
- FewGlueConfig(
359
- name="rte",
360
- description=_RTE_DESCRIPTION,
361
- features=["premise", "hypothesis"],
362
- label_classes=["entailment", "not_entailment"],
363
- data_url="https://huggingface.co/datasets/juny116/few_glue/resolve/main/FewGLUE_32dev/rte.zip",
364
- citation=_RTE_CITATION,
365
- url="https://aclweb.org/aclwiki/Recognizing_Textual_Entailment",
366
- ),
367
- FewGlueConfig(
368
- name="wic",
369
- description=_WIC_DESCRIPTION,
370
- # Note that start1, start2, end1, and end2 will be integers stored as
371
- # datasets.Value('int32').
372
- features=["word", "sentence1", "sentence2", "start1", "start2", "end1", "end2"],
373
- data_url="https://huggingface.co/datasets/juny116/few_glue/resolve/main/FewGLUE_32dev/wic.zip",
374
- citation=_WIC_CITATION,
375
- url="https://pilehvar.github.io/wic/",
376
- ),
377
- FewGlueConfig(
378
- name="wsc",
379
- description=_WSC_DESCRIPTION,
380
- # Note that span1_index and span2_index will be integers stored as
381
- # datasets.Value('int32').
382
- features=["text", "span1_index", "span2_index", "span1_text", "span2_text"],
383
- data_url="https://huggingface.co/datasets/juny116/few_glue/resolve/main/FewGLUE_32dev/wsc.zip",
384
- citation=_WSC_CITATION,
385
- url="https://cs.nyu.edu/faculty/davise/papers/WinogradSchemas/WS.html",
386
- ),
387
- FewGlueConfig(
388
- name="wsc.fixed",
389
- description=(
390
- _WSC_DESCRIPTION + "\n\nThis version fixes issues where the spans are not actually "
391
- "substrings of the text."
392
- ),
393
- # Note that span1_index and span2_index will be integers stored as
394
- # datasets.Value('int32').
395
- features=["text", "span1_index", "span2_index", "span1_text", "span2_text"],
396
- data_url="https://huggingface.co/datasets/juny116/few_glue/resolve/main/FewGLUE_32dev/wsc.zip",
397
- citation=_WSC_CITATION,
398
- url="https://cs.nyu.edu/faculty/davise/papers/WinogradSchemas/WS.html",
399
- ),
400
- ]
401
-
402
- def _info(self):
403
- features = {feature: datasets.Value("string") for feature in self.config.features}
404
- if self.config.name.startswith("wsc"):
405
- features["span1_index"] = datasets.Value("int32")
406
- features["span2_index"] = datasets.Value("int32")
407
- if self.config.name == "wic":
408
- features["start1"] = datasets.Value("int32")
409
- features["start2"] = datasets.Value("int32")
410
- features["end1"] = datasets.Value("int32")
411
- features["end2"] = datasets.Value("int32")
412
- if self.config.name == "multirc":
413
- features["idx"] = dict(
414
- {
415
- "paragraph": datasets.Value("int32"),
416
- "question": datasets.Value("int32"),
417
- "answer": datasets.Value("int32"),
418
- }
419
- )
420
- elif self.config.name == "record":
421
- features["idx"] = dict(
422
- {
423
- "passage": datasets.Value("int32"),
424
- "query": datasets.Value("int32"),
425
- }
426
- )
427
- else:
428
- features["idx"] = datasets.Value("int32")
429
-
430
- if self.config.name == "record":
431
- # Entities are the set of possible choices for the placeholder.
432
- features["entities"] = datasets.features.Sequence(datasets.Value("string"))
433
- # Answers are the subset of entities that are correct.
434
- features["answers"] = datasets.features.Sequence(datasets.Value("string"))
435
- else:
436
- features["label"] = datasets.features.ClassLabel(names=self.config.label_classes)
437
-
438
- return datasets.DatasetInfo(
439
- description=_GLUE_DESCRIPTION + self.config.description,
440
- features=datasets.Features(features),
441
- homepage=self.config.url,
442
- citation=self.config.citation + "\n" + _SUPER_GLUE_CITATION,
443
- )
444
-
445
- def _split_generators(self, dl_manager):
446
- dl_dir = dl_manager.download_and_extract(self.config.data_url) or ""
447
- task_name = _get_task_name_from_data_url(self.config.data_url)
448
- dl_dir = os.path.join(dl_dir, task_name)
449
- return [
450
- datasets.SplitGenerator(
451
- name=datasets.Split.TRAIN,
452
- gen_kwargs={
453
- "data_file": os.path.join(dl_dir, "train.jsonl"),
454
- "split": datasets.Split.TRAIN,
455
- },
456
- ),
457
- datasets.SplitGenerator(
458
- name=datasets.Split.VALIDATION,
459
- gen_kwargs={
460
- "data_file": os.path.join(dl_dir, "dev32.jsonl"),
461
- "split": datasets.Split.VALIDATION,
462
- },
463
- ),
464
- datasets.SplitGenerator(
465
- name=datasets.Split.TEST,
466
- gen_kwargs={
467
- "data_file": os.path.join(dl_dir, "val.jsonl"),
468
- "split": datasets.Split.TEST,
469
- },
470
- ),
471
- ]
472
-
473
- def _generate_examples(self, data_file, split):
474
- with open(data_file, encoding="utf-8") as f:
475
- for line in f:
476
- row = json.loads(line)
477
-
478
- if self.config.name == "multirc":
479
- paragraph = row["passage"]
480
- for question in paragraph["questions"]:
481
- for answer in question["answers"]:
482
- label = answer.get("label")
483
- key = "%s_%s_%s" % (row["idx"], question["idx"], answer["idx"])
484
- yield key, {
485
- "paragraph": paragraph["text"],
486
- "question": question["question"],
487
- "answer": answer["text"],
488
- "label": -1 if label is None else _cast_label(bool(label)),
489
- "idx": {"paragraph": row["idx"], "question": question["idx"], "answer": answer["idx"]},
490
- }
491
- elif self.config.name == "record":
492
- passage = row["passage"]
493
- for qa in row["qas"]:
494
- yield qa["idx"], {
495
- "passage": passage["text"],
496
- "query": qa["query"],
497
- "entities": _get_record_entities(passage),
498
- "answers": _get_record_answers(qa),
499
- "idx": {"passage": row["idx"], "query": qa["idx"]},
500
- }
501
- else:
502
- if self.config.name.startswith("wsc"):
503
- row.update(row["target"])
504
- example = {feature: row[feature] for feature in self.config.features}
505
- if self.config.name == "wsc.fixed":
506
- example = _fix_wst(example)
507
- example["idx"] = row["idx"]
508
-
509
- if "label" in row:
510
- if self.config.name == "copa":
511
- example["label"] = "choice2" if row["label"] else "choice1"
512
- else:
513
- example["label"] = _cast_label(row["label"])
514
- else:
515
- assert split == datasets.Split.TEST, row
516
- example["label"] = -1
517
- yield example["idx"], example
518
-
519
-
520
- def _fix_wst(ex):
521
- """Fixes most cases where spans are not actually substrings of text."""
522
-
523
- def _fix_span_text(k):
524
- """Fixes a single span."""
525
- text = ex[k + "_text"]
526
- index = ex[k + "_index"]
527
-
528
- if text in ex["text"]:
529
- return
530
-
531
- if text in ("Kamenev and Zinoviev", "Kamenev, Zinoviev, and Stalin"):
532
- # There is no way to correct these examples since the subjects have
533
- # intervening text.
534
- return
535
-
536
- if "theyscold" in text:
537
- ex["text"].replace("theyscold", "they scold")
538
- ex["span2_index"] = 10
539
- # Make sure case of the first words match.
540
- first_word = ex["text"].split()[index]
541
- if first_word[0].islower():
542
- text = text[0].lower() + text[1:]
543
- else:
544
- text = text[0].upper() + text[1:]
545
- # Remove punctuation in span.
546
- text = text.rstrip(".")
547
- # Replace incorrect whitespace character in span.
548
- text = text.replace("\n", " ")
549
- ex[k + "_text"] = text
550
- assert ex[k + "_text"] in ex["text"], ex
551
-
552
- _fix_span_text("span1")
553
- _fix_span_text("span2")
554
- return ex
555
-
556
-
557
- def _cast_label(label):
558
- """Converts the label into the appropriate string version."""
559
- if isinstance(label, str):
560
- return label
561
- elif isinstance(label, bool):
562
- return "True" if label else "False"
563
- elif isinstance(label, int):
564
- assert label in (0, 1)
565
- return str(label)
566
- else:
567
- raise ValueError("Invalid label format.")
568
-
569
-
570
- def _get_record_entities(passage):
571
- """Returns the unique set of entities."""
572
- text = passage["text"]
573
- entities = set()
574
- for entity in passage["entities"]:
575
- entities.add(text[entity["start"] : entity["end"] + 1])
576
- return sorted(entities)
577
-
578
-
579
- def _get_record_answers(qa):
580
- """Returns the unique set of answers."""
581
- if "answers" not in qa:
582
- return []
583
- answers = set()
584
- for answer in qa["answers"]:
585
- answers.add(answer["text"])
586
- return sorted(answers)
587
-
588
-
589
- def _get_task_name_from_data_url(data_url):
590
- return data_url.split("/")[-1].split(".")[0]
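
The deleted script maps train.jsonl to the train split, dev32.jsonl to validation, and val.jsonl to test, so the converted parquet splits should keep that meaning: train and validation are the 32-example few-shot sets, while test holds the original SuperGLUE validation data. A short sketch of checking this, under the same repository-id assumption as above:

    # Sketch only: the expected counts come from the deleted script and README,
    # not from inspecting the converted files.
    from datasets import load_dataset

    for config in ["boolq", "rte", "wic"]:
        ds = load_dataset("juny116/few_glue", config)
        print(config, {split: len(ds[split]) for split in ds})
        # expected: 32 examples each in train and validation
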
multirc/few_glue-test.parquet ADDED
Binary file (306 kB).

multirc/few_glue-train.parquet ADDED
Binary file (47.7 kB).

multirc/few_glue-validation.parquet ADDED
Binary file (54.9 kB).

FewGLUE_32dev/cb.zip → record/few_glue-test.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:80f371d5328be1647dbb01dbde930304f83bd4580b6ccba3761ae322973e586a
- size 18734
+ oid sha256:6e3e15e6a090e3df6b91bc388644af077178ec0a94bd193193b4bcbc827ef1a2
+ size 6649377

record/few_glue-train.parquet ADDED
Binary file (42.3 kB).

record/few_glue-validation.parquet ADDED
Binary file (39 kB).

rte/few_glue-test.parquet ADDED
Binary file (69.8 kB).

rte/few_glue-train.parquet ADDED
Binary file (12.3 kB).

rte/few_glue-validation.parquet ADDED
Binary file (14.4 kB).

wic/few_glue-test.parquet ADDED
Binary file (60.1 kB).

wic/few_glue-train.parquet ADDED
Binary file (8.26 kB).

wic/few_glue-validation.parquet ADDED
Binary file (8.11 kB).

wsc.fixed/few_glue-test.parquet ADDED
Binary file (10.2 kB).

wsc.fixed/few_glue-train.parquet ADDED
Binary file (7.39 kB).

wsc.fixed/few_glue-validation.parquet ADDED
Binary file (8.71 kB).

wsc/few_glue-test.parquet ADDED
Binary file (10.2 kB).

wsc/few_glue-train.parquet ADDED
Binary file (7.39 kB).

wsc/few_glue-validation.parquet ADDED
Binary file (8.72 kB).