merge
- .gitattributes +6 -0
- README.md +4 -76
.gitattributes CHANGED
@@ -2,12 +2,14 @@
 *.arrow filter=lfs diff=lfs merge=lfs -text
 *.bin filter=lfs diff=lfs merge=lfs -text
 *.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ckpt filter=lfs diff=lfs merge=lfs -text
 *.ftz filter=lfs diff=lfs merge=lfs -text
 *.gz filter=lfs diff=lfs merge=lfs -text
 *.h5 filter=lfs diff=lfs merge=lfs -text
 *.joblib filter=lfs diff=lfs merge=lfs -text
 *.lfs.* filter=lfs diff=lfs merge=lfs -text
 *.lz4 filter=lfs diff=lfs merge=lfs -text
+*.mlmodel filter=lfs diff=lfs merge=lfs -text
 *.model filter=lfs diff=lfs merge=lfs -text
 *.msgpack filter=lfs diff=lfs merge=lfs -text
 *.npy filter=lfs diff=lfs merge=lfs -text
@@ -23,6 +25,10 @@
 *.rar filter=lfs diff=lfs merge=lfs -text
 saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.tar.* filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tar filter=lfs diff=lfs merge=lfs -text
 *.tflite filter=lfs diff=lfs merge=lfs -text
 *.tgz filter=lfs diff=lfs merge=lfs -text
 *.wasm filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,4 +1,3 @@
----
 annotations_creators:
 - expert-generated
 
@@ -37,80 +36,9 @@ license:
 
 task_ids:
 - document-retrieval
----
-
-# Dataset Card for MIRACL (Topics and Qrels)
-
-
-## Dataset Description
-* **Homepage:** http://miracl.ai
-* **Repository:** https://github.com/project-miracl/miracl
-* **Paper:** https://arxiv.org/abs/2210.09984
-
-MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
-
-This dataset contains the collection data of the 16 "known languages". The remaining 2 "surprise languages" will not be released until later.
-
-The topics are generated by native speakers of each language, who also label the relevance between the topics and a given document list.
 
-
-
-
-1. To download the files:
-Under folders `miracl-v1.0-{lang}/topics`,
-the topics are saved in `.tsv` format, with each line to be:
-```
-qid\tquery
-```
-
-Under folders `miracl-v1.0-{lang}/qrels`,
-the qrels are saved in standard TREC format, with each line to be:
-```
-qid Q0 docid relevance
-```
-
-
-2. To access the data using HuggingFace `datasets`:
-```
-lang='ar' # or any of the 16 languages
-miracl = datasets.load_dataset('miracl/miracl', lang, use_auth_token=True)
-
-# training set:
-for data in miracl['train']: # or 'dev', 'testA'
-    query_id = data['query_id']
-    query = data['query']
-    positive_passages = data['positive_passages']
-    negative_passages = data['negative_passages']
-
-    for entry in positive_passages: # OR 'negative_passages'
-        docid = entry['docid']
-        title = entry['title']
-        text = entry['text']
-```
-The structure is the same for `train`, `dev`, and `testA` set, where `testA` only exists for languages in Mr. TyDi (i.e., Arabic, Bengali, English, Finnish, Indonesian, Japanese, Korean, Russian, Swahili, Telugu, Thai).
-Note that `negative_passages` are annotated by native speakers as well, instead of the non-positive passages from top-`k` retrieval results.
-
-
-## Dataset Statistics
-The following table contains the number of queries (`#Q`) and the number of judgments (`#J`) in each language, for the training and development set,
-where the judgments include both positive and negative samples.
+source_datasets
+- miracl/miracl
+---
 
-
-|:----:|:-----:|:------:|:-----:|:------:|
-| | **#Q**| **#J** |**#Q** |**#J** |
-| ar | 3,495 | 25,382 | 2,896 | 29,197 |
-| bn | 1,631 | 16,754 | 411 | 4,206 |
-| en | 2,863 | 29,416 | 799 | 8,350 |
-| es | 2,162 | 21,531 | 648 | 6,443 |
-| fa | 2,107 | 21,844 | 632 | 6,571 |
-| fi | 2,897 | 20,350 | 1,271 | 12,008 |
-| fr | 1,143 | 11,426 | 343 | 3,429 |
-| hi | 1,169 | 11,668 | 350 | 3,494 |
-| id | 4,071 | 41,358 | 960 | 9,668 |
-| ja | 3,477 | 34,387 | 860 | 8,354 |
-| ko | 868 | 12,767 | 213 | 3,057 |
-| ru | 4,683 | 33,921 | 1,252 | 13,100 |
-| sw | 1,901 | 9,359 | 482 | 5,092 |
-| te | 3,452 | 18,608 | 828 | 1,606 |
-| th | 2,972 | 21,293 | 733 | 7,573 |
-| zh | 1,312 | 13,113 | 393 | 3,928 |
+A clone of the excellent [`miracl/miracl` dataset]() that doesn't require authentication. Refer to the original dataset for details.
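Since the new README's only usage guidance is that the clone loads without authentication, a minimal sketch may help. It assumes the clone keeps the same configuration names and fields as the original `miracl/miracl` card quoted in the removed lines above; `your-org/miracl-no-auth` is a hypothetical placeholder, as the diff does not spell out this repository's actual Hub id.

```python
# Minimal sketch: loading the unauthenticated clone with the `datasets` library.
# "your-org/miracl-no-auth" is a placeholder -- substitute this repository's real Hub id.
import datasets

lang = "ar"  # or any of the 16 known languages
miracl = datasets.load_dataset("your-org/miracl-no-auth", lang)  # no use_auth_token needed

for data in miracl["train"]:  # or "dev", "testA" where available
    query_id = data["query_id"]
    query = data["query"]
    for entry in data["positive_passages"]:  # or "negative_passages"
        docid, title, text = entry["docid"], entry["title"], entry["text"]
```

Dropping `use_auth_token=True` (and swapping in the clone's repository id) is the only change relative to the snippet removed from the README.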