---
license: cc-by-4.0
language:
- is
---

## Introduction

This dataset, derived from the Icelandic Gigaword Corpus, is designed as a more comprehensive alternative to the existing dataset found at
https://huggingface.co/datasets/styletts2-community/multilingual-pl-bert/tree/main/is.
The original dataset, derived from just 52 MB of raw text from the Icelandic Wikipedia, was processed using the espeak-ng backend for
normalization and phonemization. However, the Icelandic module of espeak-ng, which has not been updated for over a decade, employs an outdated
IPA dialect and a simplistic approach to stress marking. In addition, the module's limited phonemization capabilities contribute further
inaccuracies to the phonetic transcriptions.

Significant advancements in the normalization and G2P (Grapheme-to-Phoneme) conversion of Icelandic have been made through the Icelandic
Language Technology program. More information about this program can be found [here](https://clarin.is/en/links/LTProjectPlan/).
The tools developed in this program have been used extensively to enhance the quality of this dataset.

## Dataset

This dataset surpasses its predecessor considerably in size, incorporating not only text from the relatively small Icelandic Wikipedia but also
text from the much larger Icelandic Gigaword Corpus (IGC). Specifically, we have enriched the
[Wikipedia text](https://repository.clarin.is/repository/xmlui/handle/20.500.12537/252) with material from the
[News1 corpus](https://repository.clarin.is/repository/xmlui/handle/20.500.12537/237). To adhere to the maximum size limit of 512 MB for the
raw text, we combined the complete Wikipedia text with randomly shuffled documents from the News1 corpus until the size cap was reached.
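
The selection step can be sketched as follows. The plain-text file layout, the random seed, and the byte-level size accounting are illustrative assumptions rather than the exact script that was used:

```python
import random

SIZE_CAP = 512 * 1024 * 1024  # 512 MB cap on the raw text

def read_documents(path):
    """Read one document per line from a plain-text dump (assumed layout)."""
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

# Hypothetical file names; the corpora are actually distributed as XML archives.
wiki_docs = read_documents("wikipedia_is.txt")
news_docs = read_documents("news1_is.txt")

selected = list(wiki_docs)                               # the complete Wikipedia text
total_bytes = sum(len(d.encode("utf-8")) for d in selected)

random.seed(42)
random.shuffle(news_docs)                                # randomly shuffled News1 documents

for doc in news_docs:                                    # append until the cap is reached
    doc_bytes = len(doc.encode("utf-8"))
    if total_bytes + doc_bytes > SIZE_CAP:
        break
    selected.append(doc)
    total_bytes += doc_bytes
```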

In total, the dataset contains `400,676` rows, each corresponding to a single document in the IGC's XML files.
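
Once published on the Hub, the rows can be inspected with the `datasets` library; the repository id below is a placeholder for the actual path of this dataset:

```python
from datasets import load_dataset

# Placeholder repository id; replace it with the actual path of this dataset on the Hub.
dataset = load_dataset("username/igc-pl-bert-is", split="train")

print(dataset.num_rows)  # expected: 400676
print(dataset[0])        # one row per IGC document
```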

### Cleaning

Prior to processing with the [BERT](https://huggingface.co/bert-base-multilingual-cased) tokenizer, the dataset underwent cleaning, deduplication,
and language detection to filter out most non-Icelandic text. Documents containing fewer than 10 words were also removed.
This preprocessing eliminated 8,146 of the initial 55,475 documents in the Wikipedia corpus (approximately 14.7%)
and 28,869 of the 1,545,671 documents in the News1 corpus (about 1.9%). The notably higher reduction in the Wikipedia corpus arose primarily from the
minimum word count criterion. However, this did not significantly diminish the total volume of text, which saw only a modest decrease from
52.3 MB to 49.68 MB, a reduction of around 5%.
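
Conceptually, the cleaning pass amounts to a filter along the following lines; the hash-based deduplication and the caller-supplied language detector are illustrative choices, not necessarily the exact tools we used:

```python
import hashlib

MIN_WORDS = 10

def clean_corpus(docs, detect_lang):
    """Deduplicate, drop very short documents, and keep only Icelandic text.

    `detect_lang` is any callable returning an ISO 639-1 code for a string;
    the exact language-identification tool is not prescribed here.
    """
    seen = set()
    kept = []
    for doc in docs:
        text = " ".join(doc.split())                                  # collapse whitespace
        fingerprint = hashlib.sha1(text.lower().encode("utf-8")).hexdigest()
        if fingerprint in seen:                                       # exact-duplicate removal
            continue
        seen.add(fingerprint)
        if len(text.split()) < MIN_WORDS:                             # minimum word count
            continue
        if detect_lang(text) != "is":                                 # drop non-Icelandic text
            continue
        kept.append(text)
    return kept
```

Hashing the lowercased text removes exact duplicates only; catching near-duplicates would require a fuzzier fingerprint.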

### Normalization

For normalization, we adapted the [Regina Normalizer](https://github.com/grammatek/regina_normalizer), which employs a BI-LSTM Part-of-Speech
(PoS) tagger. Although this makes the process somewhat time-consuming, the adaptations were necessary to handle a variety of edge cases in the diverse
and sometimes unclean text within the IGC. Processing the approximately 2.5 GB of raw text took about one day using 50 CPU cores.
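
A minimal sketch of the parallel set-up is shown below; the `regina.normalize` call and its domain argument are assumptions about the package's entry point and should be checked against the Regina Normalizer documentation:

```python
from multiprocessing import Pool

def normalize_paragraph(paragraph: str) -> str:
    # Hypothetical entry point: consult the regina_normalizer README for the
    # actual function name and its text-domain argument.
    from regina_normalizer import main as regina
    return regina.normalize(paragraph, "other")

def normalize_corpus(paragraphs, workers=50):
    """Normalize paragraphs in parallel; roughly one day on 50 CPU cores for ~2.5 GB."""
    with Pool(processes=workers) as pool:
        # A moderate chunksize keeps inter-process overhead low for millions of short paragraphs.
        return pool.map(normalize_paragraph, paragraphs, chunksize=64)
```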

### Phonemization

Phonemization was conducted using [IceG2P](https://github.com/grammatek/ice-g2p), which is also based on a BI-LSTM model. We made adaptations
to ensure that the IPA phoneset of the output aligns with the overall phoneset used in the other PL-BERT datasets. Initially, we created and refined a new vocabulary
from both the normalized Wikipedia and News1 corpora. Following this, the BI-LSTM model was used to generate phonetic transcriptions for the dictionary.
We also enhanced stress labeling and incorporated secondary stresses after conducting compound analysis.
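
A minimal sketch of the transcription step, assuming the `Transcriber` interface from the ice-g2p repository and showing only a tiny, illustrative fragment of the SAMPA-to-IPA mapping used to align the output with the shared phoneset:

```python
from ice_g2p.transcriber import Transcriber  # interface assumed from the ice-g2p repository

# Tiny illustrative fragment of a SAMPA-to-IPA mapping; the full table used to
# align the output with the shared PL-BERT phoneset is much larger.
SAMPA_TO_IPA = {"9": "œ", "E": "ɛ", "O": "ɔ", "T": "θ", "D": "ð"}

g2p = Transcriber()  # BI-LSTM based grapheme-to-phoneme model

def phonemize(normalized_text: str) -> str:
    symbols = g2p.transcribe(normalized_text).split()
    return " ".join(SAMPA_TO_IPA.get(s, s) for s in symbols)
```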

A significant byproduct of this effort is a considerably improved G2P dictionary with more than 2.1 million transcriptions, which we plan to
integrate into the G2P module and various other open-source projects involving Icelandic G2P.

Finally, to ensure textual coherence, all paragraphs with incorrect G2P transcriptions were excluded from the dataset.
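
One plausible form of this filter, assuming validity is defined as "every output symbol belongs to the shared phoneset" and using a hypothetical `phonemes` column name:

```python
# Assumed validity check: keep a paragraph only if every symbol of its
# transcription belongs to the shared PL-BERT phoneset. The actual criterion
# used when building this dataset may differ, and the phoneset below is abbreviated.
ALLOWED_PHONES = set("aɛɪɔʏœiu") | set("pbtdkgfvθðsjhmnlr") | {"ˈ", "ˌ", "ː", " "}

def is_valid(transcription: str) -> bool:
    return all(symbol in ALLOWED_PHONES for symbol in transcription)

def drop_bad_rows(rows, phoneme_key="phonemes"):
    """Filter out rows whose transcription contains out-of-phoneset symbols.

    `phoneme_key` is a hypothetical column name used for illustration.
    """
    return [row for row in rows if is_valid(row[phoneme_key])]
```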

## License

The dataset is distributed under the same CC-BY-4.0 license as the original source material from which it was derived.