---
dataset_info:
  features:
  - name: translation
    dtype:
      translation:
        languages:
        - en
        - ja
  splits:
  - name: test
    num_bytes: 190991
    num_examples: 2000
  - name: train
    num_bytes: 88348569
    num_examples: 1000000
  - name: validation
    num_bytes: 191411
    num_examples: 2000
  download_size: 64068812
  dataset_size: 88730971
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
license: unknown
language:
- en
- ja
pretty_name: OPUS-100
---
# Dataset Card for OPUS-100-en-ja
### Dataset Summary
This corpus contains only the English–Japanese (en–ja) pairs extracted from **[Helsinki-NLP/opus-100](https://huggingface.co/datasets/Helsinki-NLP/opus-100)**.
### How to use
Usage is much the same as for **[Helsinki-NLP/opus-100](https://huggingface.co/datasets/Helsinki-NLP/opus-100)**; the only difference is that you do not have to specify a language pair.
```python
from datasets import load_dataset
dataset = load_dataset("Hoshikuzu/opus-100-en-ja")
```
If loading takes too long, use streaming instead.
```python
from datasets import load_dataset
dataset = load_dataset("Hoshikuzu/opus-100-en-ja", streaming=True)
```
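In streaming mode the splits are iterable rather than indexable. As a minimal sketch, the first few pairs can be inspected lazily:
```python
from itertools import islice

from datasets import load_dataset

# Streaming yields examples on the fly, without downloading the whole archive first.
dataset = load_dataset("Hoshikuzu/opus-100-en-ja", streaming=True)
for example in islice(dataset["train"], 3):
    print(example["translation"]["en"], "->", example["translation"]["ja"])
```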
## Dataset Structure
### Data Instances
```python
{
  'translation': {
    'en': 'Yeah, Vincent Hanna.',
    'ja': '- ラウール - ラウールに ヴィンセント・ハンナだ'
  }
}
```
### Data Fields
`translation`: a dictionary containing the parallel texts, keyed by language code (`en` and `ja`).
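For illustration, a minimal sketch of reading both sides of one pair (assuming the dataset loads as shown above):
```python
from datasets import load_dataset

dataset = load_dataset("Hoshikuzu/opus-100-en-ja")

# Each example holds a single 'translation' dict keyed by language code.
pair = dataset["train"][0]["translation"]
print(pair["en"])  # English side of the sentence pair
print(pair["ja"])  # Japanese side of the sentence pair
```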
### Data Splits
The dataset is split into train (1,000,000 examples), validation (2,000 examples), and test (2,000 examples) portions.
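A quick sketch to confirm the split sizes reported above:
```python
from datasets import load_dataset

dataset = load_dataset("Hoshikuzu/opus-100-en-ja")

# Expected sizes: train 1,000,000 / validation 2,000 / test 2,000
for split in ("train", "validation", "test"):
    print(split, len(dataset[split]))
```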
### Citation Information
Please follow the citation instructions in the [Helsinki-NLP/opus-100](https://huggingface.co/datasets/Helsinki-NLP/opus-100) readme. The following is taken from that card.
If you use this corpus, please cite the paper:
```bibtex
@inproceedings{zhang-etal-2020-improving,
    title = "Improving Massively Multilingual Neural Machine Translation and Zero-Shot Translation",
    author = "Zhang, Biao and
      Williams, Philip and
      Titov, Ivan and
      Sennrich, Rico",
    editor = "Jurafsky, Dan and
      Chai, Joyce and
      Schluter, Natalie and
      Tetreault, Joel",
    booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
    month = jul,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.acl-main.148",
    doi = "10.18653/v1/2020.acl-main.148",
    pages = "1628--1639",
}
```
and, please, also acknowledge OPUS:
```bibtex
@inproceedings{tiedemann-2012-parallel,
    title = "Parallel Data, Tools and Interfaces in {OPUS}",
    author = {Tiedemann, J{\"o}rg},
    editor = "Calzolari, Nicoletta and
      Choukri, Khalid and
      Declerck, Thierry and
      Do{\u{g}}an, Mehmet U{\u{g}}ur and
      Maegaard, Bente and
      Mariani, Joseph and
      Moreno, Asuncion and
      Odijk, Jan and
      Piperidis, Stelios",
    booktitle = "Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}'12)",
    month = may,
    year = "2012",
    address = "Istanbul, Turkey",
    publisher = "European Language Resources Association (ELRA)",
    url = "http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf",
    pages = "2214--2218",
}
}
```