Tasks: Question Answering · Sub-tasks: extractive-qa · Languages: English · Size: 10K<n<100K · Tags: conversational-qa · License: unknown

Commit · 4bd55ae
Parent(s): Update files from the datasets library (from 1.2.0)
Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0
- .gitattributes +27 -0
- README.md +227 -0
- dataset_infos.json +1 -0
- dummy/history/1.0.0/dummy_data.zip +3 -0
- dummy/history_dev_multi/1.0.0/dummy_data.zip +3 -0
- dummy/mod/1.0.0/dummy_data.zip +3 -0
- dummy/mod_dev_multi/1.0.0/dummy_data.zip +3 -0
- sharc_modified.py +157 -0
.gitattributes
ADDED
@@ -0,0 +1,27 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bin.* filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zstandard filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,227 @@
+---
+annotations_creators:
+- crowdsourced
+language_creators:
+- crowdsourced
+- expert-generated
+languages:
+- en
+licenses:
+- unknown
+multilinguality:
+- monolingual
+size_categories:
+- 10K<n<100K
+source_datasets:
+- extended|sharc
+task_categories:
+- question-answering
+task_ids:
+- extractive-qa
+- question-answering-other-conversational-qa
+---
+
+# Dataset Card Creation Guide
+
+## Table of Contents
+- [Dataset Description](#dataset-description)
+  - [Dataset Summary](#dataset-summary)
+  - [Supported Tasks](#supported-tasks-and-leaderboards)
+  - [Languages](#languages)
+- [Dataset Structure](#dataset-structure)
+  - [Data Instances](#data-instances)
+  - [Data Fields](#data-fields)
+  - [Data Splits](#data-splits)
+- [Dataset Creation](#dataset-creation)
+  - [Curation Rationale](#curation-rationale)
+  - [Source Data](#source-data)
+  - [Annotations](#annotations)
+  - [Personal and Sensitive Information](#personal-and-sensitive-information)
+- [Considerations for Using the Data](#considerations-for-using-the-data)
+  - [Social Impact of Dataset](#social-impact-of-dataset)
+  - [Discussion of Biases](#discussion-of-biases)
+  - [Other Known Limitations](#other-known-limitations)
+- [Additional Information](#additional-information)
+  - [Dataset Curators](#dataset-curators)
+  - [Licensing Information](#licensing-information)
+  - [Citation Information](#citation-information)
+
+## Dataset Description
+
+- **Homepage:** [More info needed]
+- **Repository:** [github](https://github.com/nikhilweee/neural-conv-qa)
+- **Paper:** [Neural Conversational QA: Learning to Reason v.s. Exploiting Patterns](https://arxiv.org/abs/1909.03759)
+- **Leaderboard:** [More info needed]
+- **Point of Contact:** [More info needed]
+
+### Dataset Summary
+
+[More Information Needed]
+
+### Supported Tasks and Leaderboards
+
+[More Information Needed]
+
+### Languages
+
+The dataset is in English (en).
+
+## Dataset Structure
+
+### Data Instances
+
+Example of one instance:
+```
+{
+    "annotation": {
+        "answer": [
+            {
+                "paragraph_reference": {
+                    "end": 64,
+                    "start": 35,
+                    "string": "syndactyly affecting the feet"
+                },
+                "sentence_reference": {
+                    "bridge": false,
+                    "end": 64,
+                    "start": 35,
+                    "string": "syndactyly affecting the feet"
+                }
+            }
+        ],
+        "explanation_type": "single_sentence",
+        "referential_equalities": [
+            {
+                "question_reference": {
+                    "end": 40,
+                    "start": 29,
+                    "string": "webbed toes"
+                },
+                "sentence_reference": {
+                    "bridge": false,
+                    "end": 11,
+                    "start": 0,
+                    "string": "Webbed toes"
+                }
+            }
+        ],
+        "selected_sentence": {
+            "end": 67,
+            "start": 0,
+            "string": "Webbed toes is the common name for syndactyly affecting the feet . "
+        }
+    },
+    "example_id": 9174646170831578919,
+    "original_nq_answers": [
+        {
+            "end": 45,
+            "start": 35,
+            "string": "syndactyly"
+        }
+    ],
+    "paragraph_text": "Webbed toes is the common name for syndactyly affecting the feet . It is characterised by the fusion of two or more digits of the feet . This is normal in many birds , such as ducks ; amphibians , such as frogs ; and mammals , such as kangaroos . In humans it is considered unusual , occurring in approximately one in 2,000 to 2,500 live births .",
+    "question": "what is the medical term for webbed toes",
+    "sentence_starts": [
+        0,
+        67,
+        137,
+        247
+    ],
+    "title_text": "Webbed toes",
+    "url": "https: //en.wikipedia.org//w/index.php?title=Webbed_toes&oldid=801229780"
+}
+```
+
+### Data Fields
+
+- `example_id`: a unique integer identifier that matches up with NQ
+- `title_text`: the title of the Wikipedia page containing the paragraph
+- `url`: the URL of the Wikipedia page containing the paragraph
+- `question`: a natural language question string from NQ
+- `paragraph_text`: a paragraph string from a Wikipedia page containing the answer to the question
+- `sentence_starts`: a list of integer character offsets indicating the start of sentences in the paragraph
+- `original_nq_answers`: the original short answer spans from NQ
+- `annotation`: the QED annotation, a dictionary with the following items:
+  - `referential_equalities`: a list of dictionaries, one for each referential equality link annotated
+  - `answer`: a list of dictionaries, one for each short answer span
+  - `selected_sentence`: a dictionary representing the annotated sentence in the passage
+  - `explanation_type`: one of "single_sentence", "multi_sentence", or "none"
+
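The `start`/`end` values above are character offsets into `paragraph_text` (or `question`), so each reference can be sanity-checked by slicing. A minimal sketch, using only values copied from the example instance (the paragraph is truncated to its first sentence for brevity):

```python
# Sanity-check QED-style span annotations: each (start, end) pair should
# slice out exactly the annotated string. Literals below are copied from
# the example instance shown above.
paragraph_text = "Webbed toes is the common name for syndactyly affecting the feet . "
question = "what is the medical term for webbed toes"

def check_span(text, ref):
    # `ref` mirrors the paragraph_reference/question_reference dictionaries.
    assert text[ref["start"]:ref["end"]] == ref["string"], ref

# answer span (paragraph_reference and sentence_reference share offsets here)
check_span(paragraph_text, {"start": 35, "end": 64, "string": "syndactyly affecting the feet"})
# question side of the referential equality
check_span(question, {"start": 29, "end": 40, "string": "webbed toes"})
# selected_sentence covers the whole first sentence, trailing space included
check_span(paragraph_text, {"start": 0, "end": 67,
                            "string": "Webbed toes is the common name for syndactyly affecting the feet . "})
print("all spans consistent")
```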
+
+### Data Splits
+
+The dataset is split into training and validation splits.
+
+|              | Train | Valid |
+| ------------ | ----- | ----- |
+| N. Instances | 7638  | 1355  |
+
+## Dataset Creation
+
+### Curation Rationale
+
+[More Information Needed]
+
+### Source Data
+
+[More Information Needed]
+
+#### Initial Data Collection and Normalization
+
+[More Information Needed]
+
+#### Who are the source language producers?
+
+[More Information Needed]
+
+### Annotations
+
+[More Information Needed]
+
+#### Annotation process
+
+[More Information Needed]
+
+#### Who are the annotators?
+
+[More Information Needed]
+
+### Personal and Sensitive Information
+
+[More Information Needed]
+
+## Considerations for Using the Data
+
+### Social Impact of Dataset
+
+[More Information Needed]
+
+### Discussion of Biases
+
+[More Information Needed]
+
+### Other Known Limitations
+
+[More Information Needed]
+
+## Additional Information
+
+### Dataset Curators
+
+[More Information Needed]
+
+### Licensing Information
+
+[More Information Needed]
+
+### Citation Information
+
+```
+@misc{lamm2020qed,
+    title={QED: A Framework and Dataset for Explanations in Question Answering},
+    author={Matthew Lamm and Jennimaria Palomaki and Chris Alberti and Daniel Andor and Eunsol Choi and Livio Baldini Soares and Michael Collins},
+    year={2020},
+    eprint={2009.06354},
+    archivePrefix={arXiv},
+    primaryClass={cs.CL}
+}
+```
dataset_infos.json
ADDED
@@ -0,0 +1 @@
{"mod": {"description": "ShARC, a conversational QA task, requires a system to answer user questions based on rules expressed in natural language text. However, it is found that in the ShARC dataset there are multiple spurious patterns that could be exploited by neural models. SharcModified is a new dataset which reduces the patterns identified in the original dataset. To reduce the sensitivity of neural models, for each occurence of an instance conforming to any of the patterns, we automatically construct alternatives where we choose to either replace the current instance with an alternative instance which does not exhibit the pattern; or retain the original instance. The modified ShARC has two versions sharc-mod and history-shuffled. For morre details refer to Appendix A.3 .\n", "citation": "@inproceedings{verma-etal-2020-neural,\n title = \"Neural Conversational {QA}: Learning to Reason vs Exploiting Patterns\",\n author = \"Verma, Nikhil and\n Sharma, Abhishek and\n Madan, Dhiraj and\n Contractor, Danish and\n Kumar, Harshit and\n Joshi, Sachindra\",\n booktitle = \"Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)\",\n month = nov,\n year = \"2020\",\n address = \"Online\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://www.aclweb.org/anthology/2020.emnlp-main.589\",\n pages = \"7263--7269\",\n abstract = \"Neural Conversational QA tasks such as ShARC require systems to answer questions based on the contents of a given passage. On studying recent state-of-the-art models on the ShARC QA task, we found indications that the model(s) learn spurious clues/patterns in the data-set. Further, a heuristic-based program, built to exploit these patterns, had comparative performance to that of the neural models. In this paper we share our findings about the four types of patterns in the ShARC corpus and how the neural models exploit them. 
Motivated by the above findings, we create and share a modified data-set that has fewer spurious patterns than the original data-set, consequently allowing models to learn better.\",\n}\n", "homepage": "https://github.com/nikhilweee/neural-conv-qa", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "utterance_id": {"dtype": "string", "id": null, "_type": "Value"}, "source_url": {"dtype": "string", "id": null, "_type": "Value"}, "snippet": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "scenario": {"dtype": "string", "id": null, "_type": "Value"}, "history": [{"follow_up_question": {"dtype": "string", "id": null, "_type": "Value"}, "follow_up_answer": {"dtype": "string", "id": null, "_type": "Value"}}], "evidence": [{"follow_up_question": {"dtype": "string", "id": null, "_type": "Value"}, "follow_up_answer": {"dtype": "string", "id": null, "_type": "Value"}}], "answer": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "sharc_modified", "config_name": "mod", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 15138034, "num_examples": 21890, "dataset_name": "sharc_modified"}, "validation": {"name": "validation", "num_bytes": 1474239, "num_examples": 2270, "dataset_name": "sharc_modified"}}, "download_checksums": {"https://raw.githubusercontent.com/nikhilweee/neural-conv-qa/master/datasets/mod_train.json": {"num_bytes": 19301214, "checksum": "84ccbf71dab4c800f63aa9b50f3f911b413cd15c69f042ef93bbe2fdf873c603"}, "https://raw.githubusercontent.com/nikhilweee/neural-conv-qa/master/datasets/mod_dev.json": {"num_bytes": 1896057, "checksum": "426f561e458605bed72580228cc4891ec70be35abbdacec8af027f53d3770992"}}, "download_size": 21197271, "post_processing_size": null, "dataset_size": 16612273, "size_in_bytes": 
37809544}, "mod_dev_multi": {"description": "ShARC, a conversational QA task, requires a system to answer user questions based on rules expressed in natural language text. However, it is found that in the ShARC dataset there are multiple spurious patterns that could be exploited by neural models. SharcModified is a new dataset which reduces the patterns identified in the original dataset. To reduce the sensitivity of neural models, for each occurence of an instance conforming to any of the patterns, we automatically construct alternatives where we choose to either replace the current instance with an alternative instance which does not exhibit the pattern; or retain the original instance. The modified ShARC has two versions sharc-mod and history-shuffled. For morre details refer to Appendix A.3 .\n", "citation": "@inproceedings{verma-etal-2020-neural,\n title = \"Neural Conversational {QA}: Learning to Reason vs Exploiting Patterns\",\n author = \"Verma, Nikhil and\n Sharma, Abhishek and\n Madan, Dhiraj and\n Contractor, Danish and\n Kumar, Harshit and\n Joshi, Sachindra\",\n booktitle = \"Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)\",\n month = nov,\n year = \"2020\",\n address = \"Online\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://www.aclweb.org/anthology/2020.emnlp-main.589\",\n pages = \"7263--7269\",\n abstract = \"Neural Conversational QA tasks such as ShARC require systems to answer questions based on the contents of a given passage. On studying recent state-of-the-art models on the ShARC QA task, we found indications that the model(s) learn spurious clues/patterns in the data-set. Further, a heuristic-based program, built to exploit these patterns, had comparative performance to that of the neural models. In this paper we share our findings about the four types of patterns in the ShARC corpus and how the neural models exploit them. 
Motivated by the above findings, we create and share a modified data-set that has fewer spurious patterns than the original data-set, consequently allowing models to learn better.\",\n}\n", "homepage": "https://github.com/nikhilweee/neural-conv-qa", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "utterance_id": {"dtype": "string", "id": null, "_type": "Value"}, "source_url": {"dtype": "string", "id": null, "_type": "Value"}, "snippet": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "scenario": {"dtype": "string", "id": null, "_type": "Value"}, "history": [{"follow_up_question": {"dtype": "string", "id": null, "_type": "Value"}, "follow_up_answer": {"dtype": "string", "id": null, "_type": "Value"}}], "evidence": [{"follow_up_question": {"dtype": "string", "id": null, "_type": "Value"}, "follow_up_answer": {"dtype": "string", "id": null, "_type": "Value"}}], "answer": {"dtype": "string", "id": null, "_type": "Value"}, "all_answers": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "sharc_modified", "config_name": "mod_dev_multi", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"validation": {"name": "validation", "num_bytes": 1553940, "num_examples": 2270, "dataset_name": "sharc_modified"}}, "download_checksums": {"https://raw.githubusercontent.com/nikhilweee/neural-conv-qa/master/datasets/mod_dev_multi.json": {"num_bytes": 2006124, "checksum": "cc49803a0091a05163a23f44c0449048eec63d75b7dc406cab3b48f5cee05e04"}}, "download_size": 2006124, "post_processing_size": null, "dataset_size": 1553940, "size_in_bytes": 3560064}, "history": {"description": "ShARC, a conversational QA task, requires a system to answer user questions based on rules expressed in natural language text. 
However, it is found that in the ShARC dataset there are multiple spurious patterns that could be exploited by neural models. SharcModified is a new dataset which reduces the patterns identified in the original dataset. To reduce the sensitivity of neural models, for each occurence of an instance conforming to any of the patterns, we automatically construct alternatives where we choose to either replace the current instance with an alternative instance which does not exhibit the pattern; or retain the original instance. The modified ShARC has two versions sharc-mod and history-shuffled. For morre details refer to Appendix A.3 .\n", "citation": "@inproceedings{verma-etal-2020-neural,\n title = \"Neural Conversational {QA}: Learning to Reason vs Exploiting Patterns\",\n author = \"Verma, Nikhil and\n Sharma, Abhishek and\n Madan, Dhiraj and\n Contractor, Danish and\n Kumar, Harshit and\n Joshi, Sachindra\",\n booktitle = \"Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)\",\n month = nov,\n year = \"2020\",\n address = \"Online\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://www.aclweb.org/anthology/2020.emnlp-main.589\",\n pages = \"7263--7269\",\n abstract = \"Neural Conversational QA tasks such as ShARC require systems to answer questions based on the contents of a given passage. On studying recent state-of-the-art models on the ShARC QA task, we found indications that the model(s) learn spurious clues/patterns in the data-set. Further, a heuristic-based program, built to exploit these patterns, had comparative performance to that of the neural models. In this paper we share our findings about the four types of patterns in the ShARC corpus and how the neural models exploit them. 
Motivated by the above findings, we create and share a modified data-set that has fewer spurious patterns than the original data-set, consequently allowing models to learn better.\",\n}\n", "homepage": "https://github.com/nikhilweee/neural-conv-qa", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "utterance_id": {"dtype": "string", "id": null, "_type": "Value"}, "source_url": {"dtype": "string", "id": null, "_type": "Value"}, "snippet": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "scenario": {"dtype": "string", "id": null, "_type": "Value"}, "history": [{"follow_up_question": {"dtype": "string", "id": null, "_type": "Value"}, "follow_up_answer": {"dtype": "string", "id": null, "_type": "Value"}}], "evidence": [{"follow_up_question": {"dtype": "string", "id": null, "_type": "Value"}, "follow_up_answer": {"dtype": "string", "id": null, "_type": "Value"}}], "answer": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "sharc_modified", "config_name": "history", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 15083103, "num_examples": 21890, "dataset_name": "sharc_modified"}, "validation": {"name": "validation", "num_bytes": 1468604, "num_examples": 2270, "dataset_name": "sharc_modified"}}, "download_checksums": {"https://raw.githubusercontent.com/nikhilweee/neural-conv-qa/master/datasets/history_train.json": {"num_bytes": 19246236, "checksum": "89ce72b684b19f16bb7fe57ac1e7f08f4d2fb443a918103ef05b0d9cab37782a"}, "https://raw.githubusercontent.com/nikhilweee/neural-conv-qa/master/datasets/history_dev.json": {"num_bytes": 1890422, "checksum": "4387fa0f0dd9fd68ea42a9f10c808443c20e73696d470fe89aab8856cce076ab"}}, "download_size": 21136658, "post_processing_size": null, "dataset_size": 16551707, 
"size_in_bytes": 37688365}, "history_dev_multi": {"description": "ShARC, a conversational QA task, requires a system to answer user questions based on rules expressed in natural language text. However, it is found that in the ShARC dataset there are multiple spurious patterns that could be exploited by neural models. SharcModified is a new dataset which reduces the patterns identified in the original dataset. To reduce the sensitivity of neural models, for each occurence of an instance conforming to any of the patterns, we automatically construct alternatives where we choose to either replace the current instance with an alternative instance which does not exhibit the pattern; or retain the original instance. The modified ShARC has two versions sharc-mod and history-shuffled. For morre details refer to Appendix A.3 .\n", "citation": "@inproceedings{verma-etal-2020-neural,\n title = \"Neural Conversational {QA}: Learning to Reason vs Exploiting Patterns\",\n author = \"Verma, Nikhil and\n Sharma, Abhishek and\n Madan, Dhiraj and\n Contractor, Danish and\n Kumar, Harshit and\n Joshi, Sachindra\",\n booktitle = \"Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)\",\n month = nov,\n year = \"2020\",\n address = \"Online\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://www.aclweb.org/anthology/2020.emnlp-main.589\",\n pages = \"7263--7269\",\n abstract = \"Neural Conversational QA tasks such as ShARC require systems to answer questions based on the contents of a given passage. On studying recent state-of-the-art models on the ShARC QA task, we found indications that the model(s) learn spurious clues/patterns in the data-set. Further, a heuristic-based program, built to exploit these patterns, had comparative performance to that of the neural models. In this paper we share our findings about the four types of patterns in the ShARC corpus and how the neural models exploit them. 
Motivated by the above findings, we create and share a modified data-set that has fewer spurious patterns than the original data-set, consequently allowing models to learn better.\",\n}\n", "homepage": "https://github.com/nikhilweee/neural-conv-qa", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "utterance_id": {"dtype": "string", "id": null, "_type": "Value"}, "source_url": {"dtype": "string", "id": null, "_type": "Value"}, "snippet": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "scenario": {"dtype": "string", "id": null, "_type": "Value"}, "history": [{"follow_up_question": {"dtype": "string", "id": null, "_type": "Value"}, "follow_up_answer": {"dtype": "string", "id": null, "_type": "Value"}}], "evidence": [{"follow_up_question": {"dtype": "string", "id": null, "_type": "Value"}, "follow_up_answer": {"dtype": "string", "id": null, "_type": "Value"}}], "answer": {"dtype": "string", "id": null, "_type": "Value"}, "all_answers": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "sharc_modified", "config_name": "history_dev_multi", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"validation": {"name": "validation", "num_bytes": 1548305, "num_examples": 2270, "dataset_name": "sharc_modified"}}, "download_checksums": {"https://raw.githubusercontent.com/nikhilweee/neural-conv-qa/master/datasets/history_dev_multi.json": {"num_bytes": 2000489, "checksum": "63b4ea610446b425bd6761d78ce14ea2ccc2a824bcedec22f311068dff768e03"}}, "download_size": 2000489, "post_processing_size": null, "dataset_size": 1548305, "size_in_bytes": 3548794}}
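Each entry under `download_checksums` above pairs a source URL with a byte count and a SHA-256 digest, so a downloaded file can be verified by streaming it through a hash. A generic sketch (not part of the loader itself; the demo file and its contents are invented):

```python
import hashlib
import os
import tempfile

def sha256_of(path):
    """Stream a file and return its hex SHA-256 digest, the value that
    would be compared against a download_checksums entry."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo on a throwaway file with known contents.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"abc")
print(sha256_of(tmp.name))
os.unlink(tmp.name)
```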
dummy/history/1.0.0/dummy_data.zip
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:63dbb6bf14e0ab694b2b4d535641d4b27cc87123832cce52dd41d071b298a2e5
+size 3922
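The `dummy_data.zip` entries are Git LFS pointer files rather than real archives: three `key value` lines giving the spec version, the SHA-256 object id, and the blob size in bytes. A small sketch parsing one such pointer (the values are the ones from the `dummy/history` entry above; the parsing approach is an assumption, not LFS tooling):

```python
# A Git LFS pointer is a short text file of "key value" lines (spec v1).
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:63dbb6bf14e0ab694b2b4d535641d4b27cc87123832cce52dd41d071b298a2e5
size 3922
"""

def parse_lfs_pointer(text):
    # Split each line once: the key never contains a space, the value may.
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    fields["size"] = int(fields["size"])
    return fields

info = parse_lfs_pointer(pointer)
print(info["oid"], info["size"])
```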
dummy/history_dev_multi/1.0.0/dummy_data.zip
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b57b0ccfce6efdedab7d58ad4002e928972b456a824ac10b461b8bcb651be769
+size 2040
dummy/mod/1.0.0/dummy_data.zip
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f25e89617056f268dad575365860de1afc2c4e65266c351fd3a9e1e3eecc43ef
+size 3903
dummy/mod_dev_multi/1.0.0/dummy_data.zip
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a1237e10d517ab3865aafe942b8032fe3db52d8412809e3b6ba13af4a6e2146b
+size 2028
sharc_modified.py
ADDED
@@ -0,0 +1,157 @@
+# coding=utf-8
+# Copyright 2020 The HuggingFace Datasets Authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""Modified ShARC dataset."""
+
+from __future__ import absolute_import, division, print_function
+
+import json
+
+import datasets
+
+
+_CITATION = """\
+@inproceedings{verma-etal-2020-neural,
+    title = "Neural Conversational {QA}: Learning to Reason vs Exploiting Patterns",
+    author = "Verma, Nikhil and
+      Sharma, Abhishek and
+      Madan, Dhiraj and
+      Contractor, Danish and
+      Kumar, Harshit and
+      Joshi, Sachindra",
+    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
+    month = nov,
+    year = "2020",
+    address = "Online",
+    publisher = "Association for Computational Linguistics",
+    url = "https://www.aclweb.org/anthology/2020.emnlp-main.589",
+    pages = "7263--7269",
+    abstract = "Neural Conversational QA tasks such as ShARC require systems to answer questions based on the contents of a given passage. On studying recent state-of-the-art models on the ShARC QA task, we found indications that the model(s) learn spurious clues/patterns in the data-set. Further, a heuristic-based program, built to exploit these patterns, had comparative performance to that of the neural models. In this paper we share our findings about the four types of patterns in the ShARC corpus and how the neural models exploit them. Motivated by the above findings, we create and share a modified data-set that has fewer spurious patterns than the original data-set, consequently allowing models to learn better.",
+}
+"""
+
+_DESCRIPTION = """\
+ShARC, a conversational QA task, requires a system to answer user questions based on rules expressed in natural language text. \
+However, it is found that in the ShARC dataset there are multiple spurious patterns that could be exploited by neural models. \
+SharcModified is a new dataset which reduces the patterns identified in the original dataset. \
+To reduce the sensitivity of neural models, for each occurrence of an instance conforming to any of the patterns, \
+we automatically construct alternatives where we choose to either replace the current instance with an alternative \
+instance which does not exhibit the pattern; or retain the original instance. \
+The modified ShARC has two versions: sharc-mod and history-shuffled. For more details refer to Appendix A.3.
+"""
+
+_BASE_URL = "https://raw.githubusercontent.com/nikhilweee/neural-conv-qa/master/datasets"
+
+
+class SharcModifiedConfig(datasets.BuilderConfig):
+    """BuilderConfig for SharcModified."""
+
+    def __init__(self, **kwargs):
+        """BuilderConfig for SharcModified.
+
+        Args:
+            **kwargs: keyword arguments forwarded to super.
+        """
+        super(SharcModifiedConfig, self).__init__(**kwargs)
+
+
+class SharcModified(datasets.GeneratorBasedBuilder):
+    """Modified ShARC dataset."""
+
+    VERSION = datasets.Version("1.0.0")
+    BUILDER_CONFIGS = [
+        SharcModifiedConfig(
+            name="mod",
+            version=datasets.Version("1.0.0"),
+            description="The modified ShARC dataset.",
+        ),
+        SharcModifiedConfig(
+            name="mod_dev_multi",
+            version=datasets.Version("1.0.0"),
+            description="The modified ShARC dev dataset with multiple references.",
+        ),
+        SharcModifiedConfig(
+            name="history", version=datasets.Version("1.0.0"), description="History-Shuffled ShARC dataset."
+        ),
+        SharcModifiedConfig(
+            name="history_dev_multi",
+            version=datasets.Version("1.0.0"),
+            description="History-Shuffled dev dataset with multiple references.",
+        ),
+    ]
+    BUILDER_CONFIG_CLASS = SharcModifiedConfig
+
+    def _info(self):
+        features = {
+            "id": datasets.Value("string"),
+            "utterance_id": datasets.Value("string"),
+            "source_url": datasets.Value("string"),
+            "snippet": datasets.Value("string"),
+            "question": datasets.Value("string"),
+            "scenario": datasets.Value("string"),
+            "history": [
+                {"follow_up_question": datasets.Value("string"), "follow_up_answer": datasets.Value("string")}
+            ],
+            "evidence": [
+                {"follow_up_question": datasets.Value("string"), "follow_up_answer": datasets.Value("string")}
+            ],
+            "answer": datasets.Value("string"),
+        }
+
+        if self.config.name in ["mod_dev_multi", "history_dev_multi"]:
+            features["all_answers"] = datasets.Sequence(datasets.Value("string"))
+
+        return datasets.DatasetInfo(
+            description=_DESCRIPTION,
+            features=datasets.Features(features),
+            supervised_keys=None,
+            homepage="https://github.com/nikhilweee/neural-conv-qa",
+            citation=_CITATION,
+        )
+
+    def _split_generators(self, dl_manager):
+        if self.config.name in ["mod_dev_multi", "history_dev_multi"]:
+            url = f"{_BASE_URL}/{self.config.name}.json"
+            downloaded_file = dl_manager.download(url)
+            return [datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": downloaded_file})]
+
+        urls = {
+            "train": f"{_BASE_URL}/{self.config.name}_train.json",
+            "dev": f"{_BASE_URL}/{self.config.name}_dev.json",
+        }
+        downloaded_files = dl_manager.download(urls)
+        return [
+            datasets.SplitGenerator(
+                name=datasets.Split.TRAIN,
+                gen_kwargs={"filepath": downloaded_files["train"]},
+            ),
+            datasets.SplitGenerator(
+                name=datasets.Split.VALIDATION,
+                gen_kwargs={"filepath": downloaded_files["dev"]},
+            ),
+        ]
+
+    def _generate_examples(self, filepath):
+        with open(filepath, encoding="utf-8") as f:
+            examples = json.load(f)
+        for i, example in enumerate(examples):
+            example.pop("tree_id")
+            example["id"] = example["utterance_id"]
+            # the keys are misspelled for one of the examples in the dev set; fix them here
+            for evidence in example["evidence"]:
+                if evidence.get("followup_answer") is not None:
+                    evidence["follow_up_answer"] = evidence.pop("followup_answer")
+                if evidence.get("followup_question") is not None:
+                    evidence["follow_up_question"] = evidence.pop("followup_question")
+            yield example["id"], example
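The evidence-key fix in `_generate_examples` can be exercised in isolation; a sketch with a hypothetical evidence entry (the question/answer strings are invented data, not from the dataset):

```python
def normalize_evidence_keys(example):
    """Mirror the loader's fix: rename the misspelled evidence keys in place."""
    for evidence in example.get("evidence", []):
        if evidence.get("followup_answer") is not None:
            evidence["follow_up_answer"] = evidence.pop("followup_answer")
        if evidence.get("followup_question") is not None:
            evidence["follow_up_question"] = evidence.pop("followup_question")
    return example

# Hypothetical instance with the misspelled keys.
example = {"evidence": [{"followup_question": "Are you a resident?", "followup_answer": "Yes"}]}
normalize_evidence_keys(example)
print(sorted(example["evidence"][0]))  # ['follow_up_answer', 'follow_up_question']
```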