pretty_name: Lucie Training Dataset
license: cc-by-nc-sa-4.0
language:
  - en
  - fr
  - de
  - es
  - it
  - code
multilinguality:
  - multilingual
task_categories:
  - text-generation
  - text2text-generation
task_ids:
  - language-modeling
tags:
  - text-generation
  - conditional-text-generation
size_categories:
  - n>1T
viewer: true
configs:
  - config_name: default
    data_files:
      - path: data/*/*/*/*.parquet
        split: train
  - config_name: en
    data_files:
      - path: data/natural/en/*/*.parquet
        split: train
  - config_name: fr
    data_files:
      - path: data/natural/fr/*/*.parquet
        split: train
  - config_name: de
    data_files:
      - path: data/natural/de/*/*.parquet
        split: train
  - config_name: es
    data_files:
      - path: data/natural/es/*/*.parquet
        split: train
  - config_name: it
    data_files:
      - path: data/natural/it/*/*.parquet
        split: train
  - config_name: de,fr
    data_files:
      - path: data/natural/de-fr/*/*.parquet
        split: train
  - config_name: es,en
    data_files:
      - path: data/natural/es-en/*/*.parquet
        split: train
  - config_name: fr,en
    data_files:
      - path: data/natural/fr-en/*/*.parquet
        split: train
  - config_name: it,en
    data_files:
      - path: data/natural/it-en/*/*.parquet
        split: train
  - config_name: natural
    data_files:
      - path: data/natural/*/*/*.parquet
        split: train
  - config_name: code
    data_files:
      - path: data/code/*/*/*.parquet
        split: train
  - config_name: code-assembly
    data_files:
      - path: data/code/assembly/*/*.parquet
        split: train
  - config_name: code-c
    data_files:
      - path: data/code/c/*/*.parquet
        split: train
  - config_name: code-c#
    data_files:
      - path: data/code/c#/*/*.parquet
        split: train
  - config_name: code-c++
    data_files:
      - path: data/code/c++/*/*.parquet
        split: train
  - config_name: code-clojure
    data_files:
      - path: data/code/clojure/*/*.parquet
        split: train
  - config_name: code-dart
    data_files:
      - path: data/code/dart/*/*.parquet
        split: train
  - config_name: code-elixir
    data_files:
      - path: data/code/elixir/*/*.parquet
        split: train
  - config_name: code-erlang
    data_files:
      - path: data/code/erlang/*/*.parquet
        split: train
  - config_name: code-fortran
    data_files:
      - path: data/code/fortran/*/*.parquet
        split: train
  - config_name: code-go
    data_files:
      - path: data/code/go/*/*.parquet
        split: train
  - config_name: code-haskell
    data_files:
      - path: data/code/haskell/*/*.parquet
        split: train
  - config_name: code-java
    data_files:
      - path: data/code/java/*/*.parquet
        split: train
  - config_name: code-javascript
    data_files:
      - path: data/code/javascript/*/*.parquet
        split: train
  - config_name: code-julia
    data_files:
      - path: data/code/julia/*/*.parquet
        split: train
  - config_name: code-kotlin
    data_files:
      - path: data/code/kotlin/*/*.parquet
        split: train
  - config_name: code-lua
    data_files:
      - path: data/code/lua/*/*.parquet
        split: train
  - config_name: code-mathematica
    data_files:
      - path: data/code/mathematica/*/*.parquet
        split: train
  - config_name: code-matlab
    data_files:
      - path: data/code/matlab/*/*.parquet
        split: train
  - config_name: code-ocaml
    data_files:
      - path: data/code/ocaml/*/*.parquet
        split: train
  - config_name: code-perl
    data_files:
      - path: data/code/perl/*/*.parquet
        split: train
  - config_name: code-php
    data_files:
      - path: data/code/php/*/*.parquet
        split: train
  - config_name: code-python
    data_files:
      - path: data/code/python/*/*.parquet
        split: train
  - config_name: code-r
    data_files:
      - path: data/code/r/*/*.parquet
        split: train
  - config_name: code-racket
    data_files:
      - path: data/code/racket/*/*.parquet
        split: train
  - config_name: code-ruby
    data_files:
      - path: data/code/ruby/*/*.parquet
        split: train
  - config_name: code-rust
    data_files:
      - path: data/code/rust/*/*.parquet
        split: train
  - config_name: code-scala
    data_files:
      - path: data/code/scala/*/*.parquet
        split: train
  - config_name: code-swift
    data_files:
      - path: data/code/swift/*/*.parquet
        split: train
  - config_name: code-tex
    data_files:
      - path: data/code/tex/*/*.parquet
        split: train
  - config_name: code-typescript
    data_files:
      - path: data/code/typescript/*/*.parquet
        split: train
  - config_name: AmendementsParlement
    data_files:
      - path: data/natural/*/AmendementsParlement/*.parquet
        split: train
  - config_name: AmericanStories
    data_files:
      - path: data/natural/*/AmericanStories/*.parquet
        split: train
  - config_name: Claire
    data_files:
      - path: data/natural/*/Claire/*.parquet
        split: train
  - config_name: Claire-en
    data_files:
      - path: data/natural/en/Claire/*.parquet
        split: train
  - config_name: Claire-fr
    data_files:
      - path: data/natural/fr/Claire/*.parquet
        split: train
  - config_name: CroissantAligned
    data_files:
      - path: data/natural/*/CroissantAligned/*.parquet
        split: train
  - config_name: DiscoursPublics
    data_files:
      - path: data/natural/*/DiscoursPublics/*.parquet
        split: train
  - config_name: Europarl
    data_files:
      - path: data/natural/*/Europarl/*.parquet
        split: train
  - config_name: Europarl-de
    data_files:
      - path: data/natural/de/Europarl/*.parquet
        split: train
  - config_name: Europarl-en
    data_files:
      - path: data/natural/en/Europarl/*.parquet
        split: train
  - config_name: Europarl-es
    data_files:
      - path: data/natural/es/Europarl/*.parquet
        split: train
  - config_name: Europarl-fr
    data_files:
      - path: data/natural/fr/Europarl/*.parquet
        split: train
  - config_name: EuroparlAligned
    data_files:
      - path: data/natural/*/EuroparlAligned/*.parquet
        split: train
  - config_name: EuroparlAligned-de,fr
    data_files:
      - path: data/natural/de-fr/EuroparlAligned/*.parquet
        split: train
  - config_name: EuroparlAligned-es,en
    data_files:
      - path: data/natural/es-en/EuroparlAligned/*.parquet
        split: train
  - config_name: EuroparlAligned-fr,en
    data_files:
      - path: data/natural/fr-en/EuroparlAligned/*.parquet
        split: train
  - config_name: EuroparlAligned-it,en
    data_files:
      - path: data/natural/it-en/EuroparlAligned/*.parquet
        split: train
  - config_name: Eurovoc
    data_files:
      - path: data/natural/*/Eurovoc/*.parquet
        split: train
  - config_name: Eurovoc-de
    data_files:
      - path: data/natural/de/Eurovoc/*.parquet
        split: train
  - config_name: Eurovoc-en
    data_files:
      - path: data/natural/en/Eurovoc/*.parquet
        split: train
  - config_name: Eurovoc-es
    data_files:
      - path: data/natural/es/Eurovoc/*.parquet
        split: train
  - config_name: Eurovoc-it
    data_files:
      - path: data/natural/it/Eurovoc/*.parquet
        split: train
  - config_name: FineWebEdu
    data_files:
      - path: data/natural/*/FineWebEdu/*.parquet
        split: train
  - config_name: GallicaMonographies
    data_files:
      - path: data/natural/*/GallicaMonographies/*.parquet
        split: train
  - config_name: GallicaPress
    data_files:
      - path: data/natural/*/GallicaPress/*.parquet
        split: train
  - config_name: Gutenberg
    data_files:
      - path: data/natural/*/Gutenberg/*.parquet
        split: train
  - config_name: Gutenberg-de
    data_files:
      - path: data/natural/de/Gutenberg/*.parquet
        split: train
  - config_name: Gutenberg-en
    data_files:
      - path: data/natural/en/Gutenberg/*.parquet
        split: train
  - config_name: Gutenberg-es
    data_files:
      - path: data/natural/es/Gutenberg/*.parquet
        split: train
  - config_name: Gutenberg-fr
    data_files:
      - path: data/natural/fr/Gutenberg/*.parquet
        split: train
  - config_name: Gutenberg-it
    data_files:
      - path: data/natural/it/Gutenberg/*.parquet
        split: train
  - config_name: HAL
    data_files:
      - path: data/natural/*/HAL/*.parquet
        split: train
  - config_name: InterventionsParlement
    data_files:
      - path: data/natural/*/InterventionsParlement/*.parquet
        split: train
  - config_name: LEGI
    data_files:
      - path: data/natural/*/LEGI/*.parquet
        split: train
  - config_name: MathPile
    data_files:
      - path: data/natural/*/MathPile/*.parquet
        split: train
  - config_name: OpenData
    data_files:
      - path: data/natural/*/OpenData/*.parquet
        split: train
  - config_name: OpenEdition
    data_files:
      - path: data/natural/*/OpenEdition/*.parquet
        split: train
  - config_name: PeS2o
    data_files:
      - path: data/natural/*/PeS2o/*.parquet
        split: train
  - config_name: Persee
    data_files:
      - path: data/natural/*/Persee/*.parquet
        split: train
  - config_name: QuestionsEcritesParlement
    data_files:
      - path: data/natural/*/QuestionsEcritesParlement/*.parquet
        split: train
  - config_name: RedPajama
    data_files:
      - path: data/natural/*/RedPajama/*.parquet
        split: train
  - config_name: RedPajama-de
    data_files:
      - path: data/natural/de/RedPajama/*.parquet
        split: train
  - config_name: RedPajama-es
    data_files:
      - path: data/natural/es/RedPajama/*.parquet
        split: train
  - config_name: RedPajama-fr
    data_files:
      - path: data/natural/fr/RedPajama/*.parquet
        split: train
  - config_name: RedPajama-it
    data_files:
      - path: data/natural/it/RedPajama/*.parquet
        split: train
  - config_name: Stac
    data_files:
      - path: data/natural/*/Stac/*.parquet
        split: train
  - config_name: TheStack
    data_files:
      - path: data/code/*/TheStack/*.parquet
        split: train
  - config_name: Theses
    data_files:
      - path: data/natural/*/Theses/*.parquet
        split: train
  - config_name: Wikipedia
    data_files:
      - path: data/natural/*/Wikipedia/*.parquet
        split: train
  - config_name: Wikipedia-de
    data_files:
      - path: data/natural/de/Wikipedia/*.parquet
        split: train
  - config_name: Wikipedia-en
    data_files:
      - path: data/natural/en/Wikipedia/*.parquet
        split: train
  - config_name: Wikipedia-es
    data_files:
      - path: data/natural/es/Wikipedia/*.parquet
        split: train
  - config_name: Wikipedia-fr
    data_files:
      - path: data/natural/fr/Wikipedia/*.parquet
        split: train
  - config_name: Wikipedia-it
    data_files:
      - path: data/natural/it/Wikipedia/*.parquet
        split: train
  - config_name: Wikisource
    data_files:
      - path: data/natural/*/Wikisource/*.parquet
        split: train
  - config_name: Wiktionary
    data_files:
      - path: data/natural/*/Wiktionary/*.parquet
        split: train
  - config_name: YouTube
    data_files:
      - path: data/natural/*/YouTube/*.parquet
        split: train

Dataset Card

The Lucie Training Dataset is a curated collection of text data in English, French, German, Spanish and Italian, drawn from the web, video subtitles, collections of books, newspapers, monographs and magazines processed by Optical Character Recognition (OCR), as well as collections of files in diverse programming languages.

It was used to pretrain Lucie-7B, a foundation LLM with strong capabilities in French and English.

Dataset Description

This dataset was built to provide an extensive and diverse corpus for training Large Language Models (LLMs), with the following motivations in mind:

  • Data mix:
    • French is represented as well as English (the Lucie Training Dataset is one of the largest collections of French text data meeting a minimum quality bar), so that the resulting LLM is not culturally biased towards English.
    • German, Spanish and Italian are also represented to some extent.
    • Code is included to boost the reasoning capabilities of LLMs.
  • Data filtering and deduplication:
    • The dataset is cleaned of low-quality data.
    • The dataset is deduplicated to some extent, following best practices.
  • Ethics:
    • Special care was taken to respect copyright laws and the privacy of individuals. All books, newspapers, monographs and magazines are in the public domain (the precise rules depend on the author's date of death and the country of publication).
    • No web data was included from sites whose robots.txt files forbid crawling.

Dataset Structure

The corpus contains the following information for each text sample:

  • text: the text sample itself.
  • source: an identifier for the source(s) of the text sample (Wikipedia, RedPajama, Gutenberg, …). The list of all sources is described in this document.
  • id: an identifier that is unique within the source.
  • language: the language of the text sample, which can be:
    • the ISO 639-1 code of a natural language: en, fr, de, es, or it;
    • the name of a programming language prefixed by "code:": code:python, code:c++, …; or
    • a list of ISO 639-1 codes separated by commas, if the text sample is multilingual: fr,en, de,fr, es,en, it,en (or in the opposite order if the languages appear in the opposite order in the text).
  • url (optional): the URL of the original text sample on the web, if available.
  • title (optional): the title of the original text sample, if available.
  • author (optional): the author of the original text sample, if available. This is usually the author name in plain text, except for Gutenberg, where it is a JSON-serialized object of the author metadata.
  • date (optional): the publication date of the original text sample, if available. The text format of the date depends on the source.
  • quality_signals (optional): a list of quality signals about the text sample (which could be used for further filtering or sample weighting). It can include indicators computed by fasttext and CCNet, statistics about occurrences of characters, words, special characters, etc. This field is always a JSON-serialized object.
  • extra (optional): JSON-serialized extra information about the text sample. This can include metadata about the source subset, the rights, etc.

Examples of metadata (excluding the text field) are shown for each source in metadata_examples.json.
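
Since quality_signals and extra are stored as JSON-serialized strings, they can be decoded per sample. Below is a minimal sketch (the Wikipedia-fr configuration and streaming mode are chosen purely for illustration; loading configurations is covered in the next section):

import json

from datasets import load_dataset

# Stream a single sample and decode its JSON-serialized metadata fields
dataset = load_dataset(
    "OpenLLM-France/Lucie-Training-Dataset", "Wikipedia-fr",
    split="train", streaming=True,
)
sample = next(iter(dataset))

# quality_signals and extra are optional and may be empty for some sources
if sample.get("quality_signals"):
    print(json.loads(sample["quality_signals"]))
if sample.get("extra"):
    print(json.loads(sample["extra"]))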

Example use in Python

Load the dataset using the datasets library:

from datasets import load_dataset

kwargs = {"split": "train", "streaming": True}

dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", **kwargs)
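
With streaming=True, samples are fetched lazily, so the corpus can be inspected without downloading it in full. A minimal sketch of looking at the first few samples:

from itertools import islice

# Inspect the first few samples of the streamed dataset
for sample in islice(dataset, 3):
    print(sample["source"], sample["language"], sample["id"])
    print(sample["text"][:200])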

Several configurations are available to select a language, a source, or both, illustrated in the following examples.

Load data in French:

dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "fr", **kwargs)

Load data where French and English are aligned:

dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "fr,en", **kwargs)

Load data corresponding to files with programming languages:

dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "code", **kwargs)

Load Python code:

dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "code-python", **kwargs)

Load data from Wikipedia (in available languages):

dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "Wikipedia", **kwargs)

Load data from the French Wikipedia (fr.wikipedia.org):

dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "Wikipedia-fr", **kwargs)
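
Configurations can also be combined on the fly with the datasets library; for example, the sketch below mixes the French and English subsets using interleave_datasets (the sampling probabilities are purely illustrative and are not the weights used to train Lucie-7B):

from datasets import interleave_datasets, load_dataset

kwargs = {"split": "train", "streaming": True}
dataset_fr = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "fr", **kwargs)
dataset_en = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "en", **kwargs)

# Interleave the two subsets with illustrative 50/50 sampling probabilities
dataset_mix = interleave_datasets(
    [dataset_fr, dataset_en], probabilities=[0.5, 0.5], seed=42
)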