danielschnell committed · 6d050fe · Parent(s): b06d53c

Updated dataset with combined full documents and improved secondary stress labels

- IGC-Wiki-News1-22.10.TEI-plbert.parquet +2 -2
- README.md +33 -8
IGC-Wiki-News1-22.10.TEI-plbert.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:ceb0d29d489310b17a54fc536eb978895f4e9cb108ee839baed2333c67fc4de4
+size 1083454239
README.md CHANGED
@@ -5,30 +5,55 @@ language:
---
## Introduction

This dataset, derived from the Icelandic Gigaword Corpus, is designed as a more comprehensive alternative to the existing dataset found at
https://huggingface.co/datasets/styletts2-community/multilingual-pl-bert/tree/main/is.
The original dataset, derived from just 52 MB of raw text from the Icelandic Wikipedia, was processed using the espeak-ng backend for normalization and phonemization. However, the Icelandic module of espeak-ng, which has not been updated in over a decade, employs an outdated IPA dialect and a simplistic approach to stress marking. In addition, the module's limited phonemization capabilities further contribute to inaccuracies in the phonetic transcriptions.

Significant advancements in the normalization and G2P (Grapheme-to-Phoneme) conversion of Icelandic have been made through the Icelandic Language Technology program; more information about the program can be found [here](https://clarin.is/en/links/LTProjectPlan/). The tools developed in this program have been used extensively to enhance the quality of this dataset.

## Dataset

This dataset surpasses its predecessor considerably in size, incorporating not only text from the relatively small Icelandic Wikipedia but also from the extensive Icelandic Gigaword Corpus. Specifically, we have enriched the [Wikipedia text](https://repository.clarin.is/repository/xmlui/handle/20.500.12537/252) with material from the [News1 corpus](https://repository.clarin.is/repository/xmlui/handle/20.500.12537/237). To adhere to the maximum size limit of 512 MB for the raw text, we combined the complete Wikipedia text with randomly shuffled paragraphs from the News1 corpus until reaching the size cap.
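
A sketch of that fill-to-cap selection, with hypothetical file names (the real pipeline operated on the IGC's TEI documents):

```python
# Sketch: append randomly shuffled News1 paragraphs to the full Wikipedia
# text until the 512 MB raw-text cap is reached. File names are hypothetical.
import random

SIZE_CAP = 512 * 1024 * 1024  # 512 MB

wiki_text = open("wikipedia_is.txt", encoding="utf-8").read()
news1_paragraphs = open("news1_is.txt", encoding="utf-8").read().split("\n\n")

random.shuffle(news1_paragraphs)
selected = [wiki_text]
total_bytes = len(wiki_text.encode("utf-8"))
for para in news1_paragraphs:
    para_bytes = len(para.encode("utf-8"))
    if total_bytes + para_bytes > SIZE_CAP:
        break
    selected.append(para)
    total_bytes += para_bytes
```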

In total, the dataset contains `400,676` rows, each corresponding to a document in the IGC corpus' XML format.
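
For orientation, here is a minimal loading sketch. The file name comes from this repository, but the column layout is not documented in this README, so the schema should be inspected after loading:

```python
# Minimal sketch: load the parquet file and inspect its schema.
# The column layout is not documented here, so print it rather than assume it.
import pandas as pd

df = pd.read_parquet("IGC-Wiki-News1-22.10.TEI-plbert.parquet")
print(len(df))      # expected: 400676 rows, one per IGC document
print(df.columns)   # inspect the actual schema
```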

### Cleaning

Prior to processing with the [Bert](https://huggingface.co/bert-base-multilingual-cased) tokenizer, the dataset underwent cleaning, deduplication, and language detection to filter out most non-Icelandic text. Documents containing fewer than 10 words were also removed. This preprocessing eliminated 8,146 of the initial 55,475 documents in the Wikipedia corpus (approximately 14.7%) and 28,869 of 1,545,671 in the News1 corpus (about 1.9%). The notably higher reduction in the Wikipedia corpus arose primarily from the minimum word-count criterion. However, this did not significantly diminish the total volume of text, which saw only a modest decrease from 52.3 MB to 49.68 MB, a reduction of around 5%.
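
A minimal sketch of such a filter follows; the README does not name the language-detection tool, so `langdetect` stands in here as an assumption, and the deduplication shown is a simple exact-match variant:

```python
# Illustrative cleaning filter: minimum word count, language detection,
# and exact-duplicate removal, as described above.
from langdetect import detect  # assumption: any language-ID tool would do

def keep_document(text: str, min_words: int = 10) -> bool:
    if len(text.split()) < min_words:
        return False
    try:
        return detect(text) == "is"  # ISO 639-1 code for Icelandic
    except Exception:
        return False  # undetectable text is dropped

def clean(documents: list[str]) -> list[str]:
    seen: set[str] = set()
    kept = []
    for doc in documents:
        if doc not in seen and keep_document(doc):
            seen.add(doc)
            kept.append(doc)
    return kept
```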

### Normalization

For normalization, we adapted the [Regina Normalizer](https://github.com/grammatek/regina_normalizer), which employs a BI-LSTM Part-of-Speech (PoS) tagger. Although this makes the process somewhat time-consuming, the adaptations were necessary to handle a variety of edge cases in the diverse and sometimes unclean text within the IGC. Processing the approximately 2.5 GB of raw text took about one day, utilizing 50 CPU cores.
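
Such a run parallelizes naturally over paragraphs. The sketch below shows the general pattern; `normalize_paragraph` is a hypothetical wrapper, since the normalizer's exact entry point is not documented in this README:

```python
# Sketch of fanning the normalization out over 50 CPU cores.
from multiprocessing import Pool

def normalize_paragraph(paragraph: str) -> str:
    # Hypothetical wrapper around the adapted Regina Normalizer
    # (PoS tagging followed by rule-based expansion of numbers, dates, etc.).
    return paragraph

if __name__ == "__main__":
    paragraphs = ["Hann fæddist 17. júní 1811."]  # hypothetical input
    with Pool(processes=50) as pool:
        normalized = pool.map(normalize_paragraph, paragraphs, chunksize=64)
```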

### Phonemization

Phonemization was conducted using [IceG2P](https://github.com/grammatek/ice-g2p), which is also based on a BI-LSTM model. We made adaptations to ensure that its IPA phoneset output aligns with the overall phoneset used in the other PL-Bert datasets. Initially, we created and refined a new vocabulary from both the normalized Wikipedia and News1 corpora. Following this, the BI-LSTM model was employed to generate phonetic transcriptions for the dictionary. We also enhanced stress labeling and incorporated secondary stresses after conducting compound analysis.
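
For illustration, a basic ice-g2p call might look as follows; the import path follows the upstream project's documentation but should be treated as an assumption and verified against the installed version:

```python
# Minimal ice-g2p sketch. For a compound word, the improved stress labeling
# described above should yield both a primary and a secondary stress mark.
from ice_g2p.transcriber import Transcriber

g2p = Transcriber()
print(g2p.transcribe("sólarupprás"))  # hypothetical example word
```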

A significant byproduct of this effort is a considerably improved G2P dictionary with more than 2.1 million transcriptions, which we plan to integrate into the G2P module and various other open-source projects involving Icelandic G2P.

Ultimately, to ensure textual coherence, all paragraphs with incorrect Grapheme-to-Phoneme (G2P) transcriptions were excluded from the dataset.

## License

The dataset is distributed under the same CC-BY-4.0 license as the original source material from which the data was derived.