---
license: cc-by-3.0
configs:
- config_name: exzellent_de
  data_files: wiki_de_exzellent.parquet
- config_name: featured_en
  data_files: wiki_en_featured.parquet
- config_name: exzellent_de_small
  data_files: wiki_de_exzellent_small.parquet
- config_name: featured_en_small
  data_files: wiki_en_featured_small.parquet
language:
- de
- en
size_categories:
- 1K<n<10K
---
# German+English Wikitext
Wikitext_en_de is a replication of the `wikitext` dataset following the work of [Merity et al. (2016)](https://arxiv.org/abs/1609.07843).
It contains (nearly) all articles that Wikipedia classifies as ["exzellent"](https://de.wikipedia.org/wiki/Wikipedia:Exzellente_Artikel) or ["featured"](https://en.wikipedia.org/wiki/Wikipedia:Featured_articles) and can be used, for example, for perplexity evaluation.
The dataset was created by first scraping the titles of the articles in these categories from Wikipedia. We then took a recent Wikipedia dump
("20230901.de" from [`graelo/wikipedia`](https://huggingface.co/datasets/graelo/wikipedia)) and filtered the articles to only those on either list.
| Config Name | Num Documents |
|-------------|--------------|
| exzellent_de | 2822 |
| featured_en | 6356 |
| exzellent_de_small | 1024 |
| featured_en_small | 1024 |
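
Each config can be loaded directly with the `datasets` library. A minimal sketch; the Hub repo id below is a placeholder for this dataset's actual id, and the `text` column name is assumed from the underlying Wikipedia dump:

```python
from datasets import load_dataset

# Replace "<user>/wikitext_en_de" with this dataset's actual Hub id.
ds = load_dataset("<user>/wikitext_en_de", "exzellent_de", split="train")
print(len(ds))              # expected: 2822 documents for exzellent_de
print(ds[0]["text"][:200])  # "text" column assumed from the source dump
```
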
The code used to create the datasets is available in this repository (`wikitext_de.py`, `wikitext_en.py`).
Be aware that running it downloads a whole Wikipedia dump, which may take a while depending on your connection.