---
license: cc-by-3.0
configs:
  - config_name: exzellent_de
    data_files: wiki_de_exzellent.parquet
  - config_name: featured_en
    data_files: wiki_en_featured.parquet
  - config_name: exzellent_de_small
    data_files: wiki_de_exzellent_small.parquet
  - config_name: featured_en_small
    data_files: wiki_en_featured_small.parquet
language:
  - de
  - en
size_categories:
  - 1K<n<10K
---

# German+English Wikitext

Wikitext_en_de is a replication of the WikiText dataset following the work of Merity et al. (2016). It contains (mostly) all articles that Wikipedia classifies as "exzellent" (German) or "featured" (English), and can be used, for example, for perplexity evaluation.
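As a reminder of what perplexity evaluation computes, here is a minimal sketch (not code from this repository): perplexity is the exponential of the negative mean per-token log-likelihood.

```python
import math

def perplexity(log_probs):
    """Perplexity from per-token natural-log probabilities:
    exp of the negative mean log-likelihood."""
    return math.exp(-sum(log_probs) / len(log_probs))

# A model assigning uniform probability 1/4 to every token
# has perplexity 4, regardless of sequence length.
print(perplexity([math.log(0.25)] * 10))  # ≈ 4
```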

This dataset was created by first scraping the titles of the articles in these categories from Wikipedia. We then took a recent Wikipedia dump ("20230901.de" from graelo/wikipedia) and filtered its articles to include only those on either list.
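The filtering step amounts to a membership check of each article's title against the scraped list. A minimal sketch with invented titles (the actual scripts live in this repository):

```python
# Titles scraped from the category pages (invented examples here).
featured_titles = {"Berlin", "Goethe"}

# Articles as they would come out of the dump (invented examples here).
articles = [
    {"title": "Berlin", "text": "..."},
    {"title": "Hamburg", "text": "..."},
    {"title": "Goethe", "text": "..."},
]

# Keep only articles whose title is on the scraped list.
kept = [a for a in articles if a["title"] in featured_titles]
print([a["title"] for a in kept])  # ['Berlin', 'Goethe']
```

On a real dump loaded with the `datasets` library, the same check would typically be expressed via `Dataset.filter`.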

| Config Name        | Num Documents |
| ------------------ | ------------- |
| exzellent_de       | 2822          |
| featured_en        | 6356          |
| exzellent_de_small | 1024          |
| featured_en_small  | 1024          |
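The configs above can be loaded by name with the `datasets` library. A minimal sketch; note the repo id `bjoernp/wikitext-en-de` is an assumption based on this card's path and may need adjusting:

```python
# Config names and their backing parquet files, as declared in this card.
CONFIGS = {
    "exzellent_de": "wiki_de_exzellent.parquet",
    "featured_en": "wiki_en_featured.parquet",
    "exzellent_de_small": "wiki_de_exzellent_small.parquet",
    "featured_en_small": "wiki_en_featured_small.parquet",
}

def load_config(name: str):
    """Load one config by name; raises KeyError for unknown names."""
    if name not in CONFIGS:
        raise KeyError(f"unknown config {name!r}; choose from {sorted(CONFIGS)}")
    # Lazy import so the name check works without `datasets` installed.
    from datasets import load_dataset
    # Repo id is an assumption; replace with the actual Hub path if it differs.
    return load_dataset("bjoernp/wikitext-en-de", name, split="train")
```

For a quick perplexity run, the `_small` configs (1024 documents each) keep download and evaluation time low.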

The code for creating the datasets is available in this repository ("wikitext_de.py", "wikitext_en.py"). Be aware that this downloads a whole Wikipedia dump, which may take a while depending on your connection.