---
license: cc-by-nc-4.0
task_categories:
- text-generation
language:
- en
- de
- ar
- ja
- ko
- es
- zh
pretty_name: medit
size_categories:
- 10K<n<100K
---
# Dataset Card for mEdIT: Multilingual Text Editing via Instruction Tuning

## Paper: [mEdIT: Multilingual Text Editing via Instruction Tuning](https://arxiv.org/abs/2402.16472)
## Authors: Vipul Raheja, Dimitris Alikaniotis, Vivek Kulkarni, Bashar Alhafni, Dhruv Kumar
## Project Repo: [https://github.com/vipulraheja/medit](https://github.com/vipulraheja/medit)

## Dataset Summary
This is the dataset used to train the mEdIT text editing models. Full details of the dataset can be found in our paper.

# Dataset Structure
The dataset is in JSON format.

## Data Instances
```json
{
   "instance": 867453,
   "task": "gec",
   "language": "english",
   "lang": "en",
   "dataset": "lang8.bea19",
   "src": "Fix grammar in this sentence: Luckily there was no damage for the earthquake .",
   "prompt": "### 命令:\nこの文の文法上の誤りを修正してください\n### 入力:\nLuckily there was no damage for the earthquake .\n### 出力:\n\n",
   "text": "Luckily there was no damage from the earthquake two years ago ."
}
```
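
Records like the one above can be parsed with the standard `json` module. A minimal sketch, using the example record (without its `prompt` field, for brevity) and assuming the first `": "` in `src` separates the instruction from the input text:

```python
import json

# The example record from above, minus the `prompt` field for brevity.
record = json.loads("""{
  "instance": 867453,
  "task": "gec",
  "language": "english",
  "lang": "en",
  "dataset": "lang8.bea19",
  "src": "Fix grammar in this sentence: Luckily there was no damage for the earthquake .",
  "text": "Luckily there was no damage from the earthquake two years ago ."
}""")

# `src` packs the instruction and the input as "instruction: input_text",
# so splitting on the first ": " recovers the two parts (an assumption
# that holds for this example; instructions may themselves contain colons).
instruction, input_text = record["src"].split(": ", 1)
```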

## Data Fields
* `instance`: Instance ID
* `language`: Language of the input and edited text
* `lang`: ISO 639-1 language code
* `dataset`: Source dataset of the current example
* `task`: Text editing task for this instance
* `src`: Input text, formatted as `instruction: input_text`
* `prompt`: Full prompt (instruction + input) used for training the models
* `text`: Output (edited) text
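
The `prompt` field follows a fixed instruction/input/output layout, as seen in the Data Instances example. A minimal sketch with a hypothetical helper (not part of the dataset tooling) that reproduces that layout; the section headers shown here are the Japanese ones from the example, and prompts in other languages use their own headers:

```python
# Hypothetical helper mirroring the layout of the `prompt` field:
# instruction header, input header, then an empty output slot that the
# model is expected to complete during training.
def format_prompt(instruction: str, input_text: str) -> str:
    return f"### 命令:\n{instruction}\n### 入力:\n{input_text}\n### 出力:\n\n"

prompt = format_prompt(
    "この文の文法上の誤りを修正してください",  # "Fix the grammatical errors in this sentence"
    "Luckily there was no damage for the earthquake .",
)
```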

## Considerations for Using the Data
Please note that this dataset contains 69k instances, as opposed to the 190k instances used in the paper, because this public release includes only the instances that were acquired and curated from publicly available datasets. Specifically, it is missing roughly 13k training instances and 1.5k validation instances from the Simplification and Formality Transfer tasks due to licensing restrictions.

The subsets are detailed below (including those we are unable to publicly release):

*Grammatical Error Correction*:
- English:
  - FCE, Lang8, and W&I+LOCNESS data can be found at: https://www.cl.cam.ac.uk/research/nl/bea2019st/#data
  - *Note*: we are unable to share the Lang8 data due to license restrictions
- Arabic:
  - The QALB-2014 and QALB-2015 datasets can be requested at: https://docs.google.com/forms/d/e/1FAIpQLScSsuAu1_84KORcpzOKTid0nUMQDZNQKKnVcMilaIZ6QF-xdw/viewform
  - *Note*: we are unable to share them due to license restrictions
  - ZAEBUC can be requested at: https://docs.google.com/forms/d/e/1FAIpQLSd0mFkEA6SIreDyqQXknwQrGOhdkC9Uweszgkp73gzCErEmJg/viewform
- Chinese:
  - NLPCC-2018 data can be found at: https://github.com/zhaoyyoo/NLPCC2018_GEC
- German:
  - The Falko-MERLIN GEC corpus can be found at: https://github.com/adrianeboyd/boyd-wnut2018?tab=readme-ov-file#download-data
- Spanish:
  - The COWS-L2H dataset can be found at: https://github.com/ucdaviscl/cowsl2h
- Japanese:
  - The NAIST Lang8 Corpora can be found at: https://sites.google.com/site/naistlang8corpora
  - *Note*: we are unable to share this data due to license restrictions
- Korean:
  - Korean GEC data can be found at: https://github.com/soyoung97/Standard_Korean_GEC
  - *Note*: we are unable to share this data due to license restrictions

*Simplification*:
- English:
  - The WikiAuto dataset can be found at: https://huggingface.co/datasets/wiki_auto
  - The WikiLarge dataset can be found at: https://github.com/XingxingZhang/dress
  - *Note*: we are unable to share the Newsela data due to license restrictions
- Arabic, Spanish, Korean, Chinese:
  - *Note*: we are unable to share the translated Newsela data due to license restrictions
- German:
  - The GeoLino dataset can be found at: http://www.github.com/Jmallins/ZEST
  - The TextComplexityDE dataset can be found at: https://github.com/babaknaderi/TextComplexityDE
- Japanese:
  - The EasyJapanese and EasyJapaneseExtended datasets were taken from the MultiSim dataset: https://huggingface.co/datasets/MichaelR207/MultiSim/tree/main/data/Japanese

*Paraphrasing*:
- Arabic:
  - NSURL-19 (Shared Task 8) data can be found at: https://www.kaggle.com/competitions/nsurl-2019-task8
  - *Note*: we are unable to share the NSURL data due to license restrictions
- English, Chinese, German, Japanese, Korean, Spanish:
  - PAWS-X data can be found at: https://huggingface.co/datasets/paws-x
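
Since every record carries `task` and `lang` fields, any of the subsets above can be selected with a simple filter once the records are loaded. A minimal sketch over a small in-memory sample; the `dataset` values here are illustrative placeholders, not exact names from the corpus:

```python
# Small in-memory sample standing in for loaded records; `dataset` values
# are illustrative, not the exact source names used in the corpus.
records = [
    {"task": "gec", "lang": "en", "dataset": "wi.bea19"},
    {"task": "gec", "lang": "de", "dataset": "falko_merlin"},
    {"task": "simplification", "lang": "en", "dataset": "wikiauto"},
]

# Select the English GEC subset by filtering on `task` and `lang`.
english_gec = [r for r in records if r["task"] == "gec" and r["lang"] == "en"]
```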

## Citation

```bibtex
@misc{raheja2024medit,
      title={mEdIT: Multilingual Text Editing via Instruction Tuning},
      author={Vipul Raheja and Dimitris Alikaniotis and Vivek Kulkarni and Bashar Alhafni and Dhruv Kumar},
      year={2024},
      eprint={2402.16472},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```