model update
README.md CHANGED
@@ -46,21 +46,6 @@ model-index:
     - name: MoverScore (Question Generation)
       type: moverscore_question_generation
       value: 56.5
-    - name: BLEU4 (Question & Answer Generation (with Gold Answer))
-      type: bleu4_question_answer_generation_with_gold_answer
-      value: 13.83
-    - name: ROUGE-L (Question & Answer Generation (with Gold Answer))
-      type: rouge_l_question_answer_generation_with_gold_answer
-      value: 42.57
-    - name: METEOR (Question & Answer Generation (with Gold Answer))
-      type: meteor_question_answer_generation_with_gold_answer
-      value: 34.29
-    - name: BERTScore (Question & Answer Generation (with Gold Answer))
-      type: bertscore_question_answer_generation_with_gold_answer
-      value: 88.51
-    - name: MoverScore (Question & Answer Generation (with Gold Answer))
-      type: moverscore_question_answer_generation_with_gold_answer
-      value: 62.42
     - name: QAAlignedF1Score-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]
       type: qa_aligned_f1_score_bertscore_question_answer_generation_with_gold_answer_gold_answer
       value: 88.52
@@ -79,21 +64,6 @@ model-index:
     - name: QAAlignedPrecision-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]
       type: qa_aligned_precision_moverscore_question_answer_generation_with_gold_answer_gold_answer
       value: 62.46
-    - name: BLEU4 (Question & Answer Generation)
-      type: bleu4_question_answer_generation
-      value: 1.4
-    - name: ROUGE-L (Question & Answer Generation)
-      type: rouge_l_question_answer_generation
-      value: 15.84
-    - name: METEOR (Question & Answer Generation)
-      type: meteor_question_answer_generation
-      value: 19.29
-    - name: BERTScore (Question & Answer Generation)
-      type: bertscore_question_answer_generation
-      value: 68.54
-    - name: MoverScore (Question & Answer Generation)
-      type: moverscore_question_answer_generation
-      value: 51.06
     - name: QAAlignedF1Score-BERTScore (Question & Answer Generation) [Gold Answer]
       type: qa_aligned_f1_score_bertscore_question_answer_generation_gold_answer
       value: 79.72
@@ -169,40 +139,24 @@ output = pipe("Créateur » (Maker), lui aussi au singulier, « <hl> le Suprême
 
 |                                 |   Score | Type    | Dataset                                                          |
 |:--------------------------------|--------:|:--------|:-----------------------------------------------------------------|
-| BERTScore                       |   88.51 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| Bleu_1                          |   39.78 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| Bleu_2                          |   27.56 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| Bleu_3                          |   19.54 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| Bleu_4                          |   13.83 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| METEOR                          |   34.29 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| MoverScore                      |   62.42 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
 | QAAlignedF1Score (BERTScore)    |   88.52 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
 | QAAlignedF1Score (MoverScore)   |   62.46 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
 | QAAlignedPrecision (BERTScore)  |   88.53 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
 | QAAlignedPrecision (MoverScore) |   62.46 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
 | QAAlignedRecall (BERTScore)     |   88.51 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
 | QAAlignedRecall (MoverScore)    |   62.45 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| ROUGE_L                         |   42.57 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
 
 
 - ***Metric (Question & Answer Generation, Pipeline Approach)***: Each question is generated from an answer produced by [`lmqg/mt5-small-frquad-ae`](https://huggingface.co/lmqg/mt5-small-frquad-ae). [raw metric file](https://huggingface.co/lmqg/mt5-small-frquad-qg/raw/main/eval_pipeline/metric.first.answer.paragraph.questions_answers.lmqg_qg_frquad.default.lmqg_mt5-small-frquad-ae.json)
 
 |                                 |   Score | Type    | Dataset                                                          |
 |:--------------------------------|--------:|:--------|:-----------------------------------------------------------------|
-| BERTScore                       |   68.54 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| Bleu_1                          |   11.49 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| Bleu_2                          |    5.61 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| Bleu_3                          |    2.84 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| Bleu_4                          |    1.4  | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| METEOR                          |   19.29 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| MoverScore                      |   51.06 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
 | QAAlignedF1Score (BERTScore)    |   79.72 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
 | QAAlignedF1Score (MoverScore)   |   53.94 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
 | QAAlignedPrecision (BERTScore)  |   77.58 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
 | QAAlignedPrecision (MoverScore) |   52.7  | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
 | QAAlignedRecall (BERTScore)     |   82.06 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
 | QAAlignedRecall (MoverScore)    |   55.32 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| ROUGE_L                         |   15.84 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
 
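The pipeline approach referenced in the hunk above chains answer extraction and question generation: `lmqg/mt5-small-frquad-ae` extracts candidate answers, and this model generates a question for each. The following is a minimal sketch of both evaluation settings using the `lmqg` toolkit's `TransformersQG` wrapper; the French paragraph and the gold answer span are placeholders, and the keyword arguments are assumptions based on lmqg's documented usage rather than taken from this card.

```python
# Minimal sketch, assuming lmqg's documented TransformersQG interface
# (pip install lmqg). The context paragraph and answer span below are
# placeholders, not examples from this model card.
from lmqg import TransformersQG

context = "William Turner est un peintre anglais spécialisé dans l'aquarelle."

# Pipeline approach: lmqg/mt5-small-frquad-ae extracts candidate answers,
# then this model generates one question per extracted answer.
pipeline_model = TransformersQG(
    model="lmqg/mt5-small-frquad-qg",     # question generation
    model_ae="lmqg/mt5-small-frquad-ae",  # answer extraction
)
qa_pairs = pipeline_model.generate_qa(context)  # [(question, answer), ...]

# Gold-answer setting: the answer is supplied and only the question is
# generated; this is what the "(with Gold Answer)" metrics evaluate.
qg_model = TransformersQG(language="fr", model="lmqg/mt5-small-frquad-qg")
questions = qg_model.generate_q(
    list_context=[context],
    list_answer=["l'aquarelle"],  # hypothetical gold answer span
)
print(qa_pairs, questions)
```

With plain `transformers`, the same model can be driven through a `text2text-generation` pipeline whose input marks the answer span with `<hl>` tokens, as in the truncated `output = pipe(...)` context line shown in the hunk header above.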