matteogabburo committed: Update README.md
README.md CHANGED
@@ -14,6 +14,151 @@ language:
 pretty_name: mWikiQA
 size_categories:
 - 100K<n<1M
+configs:
+- config_name: en
+  data_files:
+  - split: train
+    path: "eng-train.jsonl"
+  - split: validation
+    path: "eng-dev.jsonl"
+  - split: test
+    path: "eng-test.jsonl"
+- config_name: de
+  data_files:
+  - split: train
+    path: "deu-train.jsonl"
+  - split: validation
+    path: "deu-dev.jsonl"
+  - split: test
+    path: "deu-test.jsonl"
+- config_name: fr
+  data_files:
+  - split: train
+    path: "fra-train.jsonl"
+  - split: validation
+    path: "fra-dev.jsonl"
+  - split: test
+    path: "fra-test.jsonl"
+- config_name: it
+  data_files:
+  - split: train
+    path: "ita-train.jsonl"
+  - split: validation
+    path: "ita-dev.jsonl"
+  - split: test
+    path: "ita-test.jsonl"
+- config_name: po
+  data_files:
+  - split: train
+    path: "por-train.jsonl"
+  - split: validation
+    path: "por-dev.jsonl"
+  - split: test
+    path: "por-test.jsonl"
+- config_name: sp
+  data_files:
+  - split: train
+    path: "spa-train.jsonl"
+  - split: validation
+    path: "spa-dev.jsonl"
+  - split: test
+    path: "spa-test.jsonl"
+- config_name: en_++
+  data_files:
+  - split: train
+    path: "eng-train.jsonl"
+  - split: validation
+    path: "eng-dev_no_allneg.jsonl"
+  - split: test
+    path: "eng-test_no_allneg.jsonl"
+- config_name: de_++
+  data_files:
+  - split: train
+    path: "deu-train.jsonl"
+  - split: validation
+    path: "deu-dev_no_allneg.jsonl"
+  - split: test
+    path: "deu-test_no_allneg.jsonl"
+- config_name: fr_++
+  data_files:
+  - split: train
+    path: "fra-train.jsonl"
+  - split: validation
+    path: "fra-dev_no_allneg.jsonl"
+  - split: test
+    path: "fra-test_no_allneg.jsonl"
+- config_name: it_++
+  data_files:
+  - split: train
+    path: "ita-train.jsonl"
+  - split: validation
+    path: "ita-dev_no_allneg.jsonl"
+  - split: test
+    path: "ita-test_no_allneg.jsonl"
+- config_name: po_++
+  data_files:
+  - split: train
+    path: "por-train.jsonl"
+  - split: validation
+    path: "por-dev_no_allneg.jsonl"
+  - split: test
+    path: "por-test_no_allneg.jsonl"
+- config_name: sp_++
+  data_files:
+  - split: train
+    path: "spa-train.jsonl"
+  - split: validation
+    path: "spa-dev_no_allneg.jsonl"
+  - split: test
+    path: "spa-test_no_allneg.jsonl"
+- config_name: en_clean
+  data_files:
+  - split: train
+    path: "eng-train.jsonl"
+  - split: validation
+    path: "eng-dev_clean.jsonl"
+  - split: test
+    path: "eng-test_clean.jsonl"
+- config_name: de_clean
+  data_files:
+  - split: train
+    path: "deu-train.jsonl"
+  - split: validation
+    path: "deu-dev_clean.jsonl"
+  - split: test
+    path: "deu-test_clean.jsonl"
+- config_name: fr_clean
+  data_files:
+  - split: train
+    path: "fra-train.jsonl"
+  - split: validation
+    path: "fra-dev_clean.jsonl"
+  - split: test
+    path: "fra-test_clean.jsonl"
+- config_name: it_clean
+  data_files:
+  - split: train
+    path: "ita-train.jsonl"
+  - split: validation
+    path: "ita-dev_clean.jsonl"
+  - split: test
+    path: "ita-test_clean.jsonl"
+- config_name: po_clean
+  data_files:
+  - split: train
+    path: "por-train.jsonl"
+  - split: validation
+    path: "por-dev_clean.jsonl"
+  - split: test
+    path: "por-test_clean.jsonl"
+- config_name: sp_clean
+  data_files:
+  - split: train
+    path: "spa-train.jsonl"
+  - split: validation
+    path: "spa-dev_clean.jsonl"
+  - split: test
+    path: "spa-test_clean.jsonl"
 ---
 ## Dataset Description

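The YAML front matter above exposes one config per language (en, de, fr, it, po, sp) and per preprocessed variant (`_++`, `_clean`). As a quick sanity check, a sketch along the following lines lists the declared configs and loads one of them; it assumes the short dataset identifier used in this card's own snippet, so the full `<namespace>/mWikiQA` repo id may be needed when loading from the Hugging Face Hub.

```
from datasets import get_dataset_config_names, load_dataset

# List the configs declared in the front matter above ("en", "de", ..., "sp_clean").
# The short identifier follows the card's own snippet; the full "<namespace>/mWikiQA"
# repo id may be required when loading from the Hugging Face Hub.
print(get_dataset_config_names("mWikiQA"))

# Load one config and report the number of rows per split defined by its data_files.
german_plus = load_dataset("mWikiQA", "de_++")
print({split: ds.num_rows for split, ds in german_plus.items()})
```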
@@ -21,6 +166,44 @@ size_categories:

 The dataset has been translated into five European languages: French, German, Italian, Portuguese, and Spanish, as described in this paper: "Datasets for Multilingual Answer Sentence Selection."

+## Splits
+
+For each language (English, French, German, Italian, Portuguese, and Spanish), we provide:
+
+- **train** split
+- **validation** split
+- **test** split
+
+In addition, the validation and test splits are also available in the following preprocessed versions:
+
+- **++**: without questions that have only negative answer candidates
+- **clean**: without questions that have only negative or only positive answer candidates
+
+### How to load them
+To load these splits, use the following snippet, replacing ``[LANG]`` with a language identifier (en, fr, de, it, po, sp) and ``[VERSION]`` with a version identifier (++, clean):
+
+```
+from datasets import load_dataset
+
+# default splits: replace [LANG] with one of en, fr, de, it, po, sp
+# dataset = load_dataset("mWikiQA", "[LANG]")
+# example:
+italian_dataset = load_dataset("mWikiQA", "it")
+
+# preprocessed splits ("clean" and "no all negatives" sets): replace [LANG] with a
+# language identifier and [VERSION] with "++" or "clean"
+# dataset = load_dataset("mWikiQA", "[LANG]_[VERSION]")
+# example:
+italian_clean_dataset = load_dataset("mWikiQA", "it_clean")
+```
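For reference, the `++` and `clean` variants described above can be approximated from a base config. The sketch below is illustrative only: it assumes a binary relevance column named `label` (1 for a correct candidate), which is not shown in this card, and reuses the short dataset identifier from the snippet above.

```
from collections import defaultdict
from datasets import load_dataset

# Hypothetical field names: "question" is documented below; "label" (1 = positive
# candidate) is an assumption about the annotation column, not taken from this card.
validation = load_dataset("mWikiQA", "en")["validation"]

labels_per_question = defaultdict(list)
for example in validation:
    labels_per_question[example["question"]].append(example["label"])

# "++"-style filter: keep questions with at least one positive candidate.
keep_pp = {q for q, labels in labels_per_question.items() if any(labels)}
# "clean"-style filter: additionally drop questions whose candidates are all positive.
keep_clean = {q for q in keep_pp if not all(labels_per_question[q])}

validation_pp = validation.filter(lambda ex: ex["question"] in keep_pp)
validation_clean = validation.filter(lambda ex: ex["question"] in keep_clean)
```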
+
+
+## Format
 Each example has the following format:

 ```
@@ -43,6 +226,8 @@ Where:
 - **question**: the question
 - **candidate**: the answer candidate

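A minimal access pattern for the two documented fields listed above, again using the short identifier from the loading snippet:

```
from datasets import load_dataset

# Print the documented fields of one validation example.
example = load_dataset("mWikiQA", "en", split="validation")[0]
print(example["question"])
print(example["candidate"])
```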
+
+
 ## Citation

 If you find this dataset useful, please cite the following paper: