Update README.md
README.md
language:
- en
datasets:
- scoris/en-lt-merged-data
metrics:
- sacrebleu
---

# Overview

![Scoris logo](https://scoris.lt/logo_smaller.png)
For Lithuanian-English translation check another model: [scoris/opus-mt-tc-big-lt-en-scoris-finetuned](https://huggingface.co/scoris/opus-mt-tc-big-lt-en-scoris-finetuned)

Fine-tuned on a large merged data set: [scoris/en-lt-merged-data](https://huggingface.co/datasets/scoris/en-lt-merged-data) (5.4 million sentence pairs)

Trained for 6 epochs.

Made by the [Scoris](https://scoris.lt) team
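For convenience, here is a minimal usage sketch. It assumes the checkpoint works with the standard `transformers` translation pipeline (it is a Marian-architecture model, so `sentencepiece` must also be installed); the input sentence is an arbitrary placeholder:

```python
from transformers import pipeline

# Load the fine-tuned EN->LT checkpoint with the generic translation pipeline.
# Assumes `transformers` and `sentencepiece` are installed (Marian models
# tokenize with SentencePiece).
translator = pipeline("translation", model="scoris/opus-mt-tc-big-en-lt-scoris-finetuned")

# The input sentence here is an arbitrary placeholder.
result = translator("Once upon a time there were three bears.")
print(result[0]["translation_text"])
```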
Tested on the scoris/en-lt-merged-data validation set. Metric: sacrebleu.

| model | testset | BLEU | Gen Len |
|----------|---------|-------|-------|
| scoris/opus-mt-tc-big-en-lt-scoris-finetuned | scoris/en-lt-merged-data (validation) | 41.8841 | 17.4785 |
| Helsinki-NLP/opus-mt-tc-big-en-lt | scoris/en-lt-merged-data (validation) | 34.2768 | 17.6664 |
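Scores like these could in principle be reproduced with the `sacrebleu` Python package; the sketch below is illustrative only (the hypothesis and reference sentences are placeholders, and the exact generation settings behind the numbers above are not stated here):

```python
import sacrebleu

# Placeholder hypothesis/reference pairs: a real run would first translate the
# scoris/en-lt-merged-data validation split with the model above.
hypotheses = ["Vilnius yra Lietuvos sostinė."]
references = [["Vilnius yra Lietuvos sostinė."]]  # one list per reference set

# corpus_bleu returns a BLEUScore object; .score is the corpus-level BLEU.
bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU: {bleu.score:.4f}")
```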
According to [Google](https://cloud.google.com/translate/automl/docs/evaluate), BLEU scores can be interpreted as follows: