Modalities: Text · Formats: parquet · Libraries: Datasets, Dask
juliehunter committed · Commit 4722756 · verified · 1 parent: c37df9e

Update README.md

Files changed (1):
  1. README.md +176 -119
README.md CHANGED
@@ -463,13 +463,13 @@ in English, French, German, Spanish and Italian culled from a variety of sources
463
  digital books, newspapers, and magazines, some of which were processed by Optical Character Recognition (OCR). It also contains samples of diverse programming languages.
464
 
465
  The Lucie Training Dataset was used to pretrain [Lucie-7B](https://huggingface.co/OpenLLM-France/Lucie-7B),
466
- a foundation LLM with strong capabilities in French and English.
467
 
468
  Table of Contents:
469
  <ul>
470
  <li><a href="#dataset-description">Dataset Description</a>
471
  <ul>
472
- <li><a href="#dataset-structure">Dataset Structure</a></li>
473
  <li><a href="#dataset-composition">Dataset Composition</a>
474
  <table>
475
  <tr>
@@ -500,7 +500,7 @@ Table of Contents:
500
  </tr>
501
  </table>
502
  </li>
503
- <li><a href="#subsets-and-versions">Subsets and Versions</a></li>
504
  <li><a href="#details-on-data-sources">Details on Data Sources</a>
505
  <table>
506
  <tr>
@@ -524,10 +524,10 @@ Table of Contents:
524
  <li><a href="#hal"> HAL</a></li>
525
  <li><a href="#interventionsparlement"> InterventionsParlement</a></li>
526
  <li><a href="#legi"> LEGI</a></li>
527
- <li><a href="#mathpile"> MathPile</a></li>
528
  <li><a href="#opendata"> OpenData</a></li>
529
  <li><a href="#openedition"> OpenEdition</a></li>
530
- <li><a href="#pes2o"> PeS2o</a></li>
531
  </ul>
532
  </td>
533
  <td style="vertical-align: top;">
@@ -536,7 +536,7 @@ Table of Contents:
536
  <li><a href="#questionsecritesparlement"> QuestionsEcritesParlement</a></li>
537
  <li><a href="#redpajama-v2"> RedPajama (v2)</a></li>
538
  <li><a href="#stac"> Stac</a></li>
539
- <li><a href="#thestack"> TheStack</a></li>
540
  <li><a href="#theses"> Theses</a></li>
541
  <li><a href="#wikipedia-wikisource-wiktionary"> Wikipedia, Wikisource, Wiktionary</a></li>
542
  <li><a href="#youtube"> YouTube</a></li>
@@ -547,8 +547,8 @@ Table of Contents:
547
  </li>
548
  </ul>
549
  </li>
550
- <li><a href="#example-use-in-python">Example use in python</a>
551
- <ul>
552
  <li><a href="#load-the-dataset">Load the dataset</a></li>
553
  <li><a href="#iterate-over-a-subset">Iterate over a subset</a></li>
554
  <li><a href="#load-a-specific-version">Load a specific version</a></li>
@@ -562,51 +562,51 @@ Table of Contents:
562
 
563
  ## Dataset Description
564
 
565
- This dataset was made to provide an extensive and diverse dataset for training Large Language Models (LLMs). Here are some of the principal features of the corpus:
566
  * Data mix:
567
- * The dataset contains equal amounts of French and English data -- it is in fact one of the biggest collections of French text data that has been preprocessed for LLM training -- with the aim of minimizing anglo-centric cultural biases.
568
  * German, Spanish and Italian are also represented in small amounts.
569
- * Code is also included to boost the reasoning capabilities of LLMs.
570
  * Data filtering and deduplication:
571
  * The dataset has been cleaned in an effort to remove very low-quality data.
572
  * Duplicate data samples have been removed to some extent, following best practices.
 
573
  * Ethics:
574
  * Special care has been taken to respect copyright laws and individual privacy.
575
- All books, newspapers, monographies, and magazines are in the public domain
576
- (which depends on the author's date of death and the country of publication).
577
- * All web data in the dataset came from sites with robots.txt files that do not forbid crawling.
578
 
579
- ### Dataset Structure
580
 
581
- The corpus contains the following information for each text sample:
582
- * `text`: the text sample itself.
583
- * [`language`](metadata/metadata_examples.json#L3): the language of the text sample (relying on the source, that information can be wrong).
584
  <br>Possible values:
585
- - an ISO 639-1 code of a natural language ("en", "fr", "de", "es", or "it"),
586
- - a common name prefixed by "code:" of a programming language ("code:python", "code:c++", …), or
587
- - a list of ISO 639-1 codes separated by commas when the text sample is multilingual and aligned ("fr,en", "de,fr", "es,en", "it,en",
588
  or one of those pairs in the opposite order if the languages appear in the opposite order in the text).
589
  * [`source`](metadata/metadata_examples.json#L4): an identifier for the source(s) of the text sample (Wikipedia, RedPajama, Gutenberg, …).
590
- All sources are described in detail [in this document](#details-on-data-sources).
591
- * [`id`](metadata/metadata_examples.json#L13): an identifier that is unique among the source.
592
  * [`url`](metadata/metadata_examples.json#L35) (optional): the URL of the original text sample on the web, if available.
593
  * [`title`](metadata/metadata_examples.json#L36) (optional): the title of the original text sample, if available.
594
  * [`author`](metadata/metadata_examples.json#L81) (optional): the author of the original text sample, if available.
595
  <details><summary>Note:</summary>
596
- Usually the author name in plain text, except for [Gutenberg books](metadata/metadata_examples.json#L91) , where it is the JSON serialized object of the author metadata.
597
  </details>
598
  * [`date`](metadata/metadata_examples.json#L6) (optional): the publication date of the original text sample, if available.
599
  <details><summary>Note:</summary>
600
- The text format of the source depends on the source.
601
  </details>
602
- * [`quality_signals`](metadata/metadata_examples.json#L17) (optional): a list of quality signals about the text sample, in JSON format (that could be used for further filtering or sample weighting).
603
  <details><summary>Note:</summary>
604
  It can include indicators computed by `fasttext` and `CCNet`, statistics about occurrences of characters, words, special characters, etc.
605
  </details>
606
  * [`extra`](metadata/metadata_examples.json#L16) (optional): extra information about the text sample, in JSON format.
607
  This can include metadata about the source subset, the rights, etc.
608
 
609
- Examples of metadata (except from `text`) are shown for each source in [metadata_examples.json](metadata/metadata_examples.json).
610
 
611
 
612
  ### Dataset Composition
@@ -620,15 +620,15 @@ broken down by source and language.
620
  Sources are grouped by category.
621
  The table provides the numbers of documents, words, tokens, and characters for each subset.
622
  All numbers in this table are available in the CSV file [dataset_composition.csv](metadata/dataset_composition.csv).
623
- The Number of tokens was computed using the tokenizer of [Lucie-7B LLM](https://huggingface.co/OpenLLM-France/Lucie-7B).
624
 
625
  <!-- The following is automatically generated. Do not update manually. -->
626
  <!-- TABLE START -->
627
  <table>
628
  <thead>
629
  <tr>
630
- <th><a href="#subset"><strong>subset</strong></a></th>
631
- <th><strong>language</strong></th>
632
  <th><strong>M docs</strong></th>
633
  <th><strong>B words</strong></th>
634
  <th><strong>B tokens</strong></th>
@@ -806,7 +806,7 @@ The Number of tokens was computed using the tokenizer of [Lucie-7B LLM](https://
806
  <tr>
807
  <td colspan="7"><h4 id="category-technical">Category: Technical</h4></td></tr>
808
  <tr>
809
- <td><a href="#pes2o"><strong>PeS2o</strong></a></td>
810
  <td><strong>English (en)</strong></td>
811
  <td>38.972</td>
812
  <td>42.296</td>
@@ -1131,7 +1131,7 @@ The Number of tokens was computed using the tokenizer of [Lucie-7B LLM](https://
1131
  <tr>
1132
  <td colspan="7"><h4 id="category-math">Category: Math</h4></td></tr>
1133
  <tr>
1134
- <td><a href="#mathpile"><strong>MathPile</strong></a></td>
1135
  <td><strong>English (en)</strong></td>
1136
  <td>0.737</td>
1137
  <td>3.408</td>
@@ -1198,7 +1198,7 @@ The Number of tokens was computed using the tokenizer of [Lucie-7B LLM](https://
1198
  <td></td>
1199
  </tr>
1200
  <tr>
1201
- <td><a href="#stac"><strong>Stac</strong></a></td>
1202
  <td><strong>English (en)</strong></td>
1203
  <td>0.0000450</td>
1204
  <td>0.0000529</td>
@@ -1256,7 +1256,7 @@ The Number of tokens was computed using the tokenizer of [Lucie-7B LLM](https://
1256
  <tr>
1257
  <td colspan="7"><h4 id="category-programming">Category: Programming</h4></td></tr>
1258
  <tr>
1259
- <td rowspan="30" style="vertical-align: top;"><a href="#thestack"><strong>TheStack</strong></a></td>
1260
  <td><strong>JAVASCRIPT</strong></td>
1261
  <td>21.109</td>
1262
  <td>8.526</td>
@@ -1530,44 +1530,44 @@ The Number of tokens was computed using the tokenizer of [Lucie-7B LLM](https://
1530
  <!-- TABLE END -->
1531
 
1532
 
1533
- ### Subsets and Versions
1534
 
1535
- As the dataset is the result of merging multiple sources, it is divided into subsets based on the source and the language of the texts.
1536
- <br> Different configurations of the dataset are available, depending on the sources and languages included.
1537
- The list of all configurations is available [in the YAML header of this README file](https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/blob/v1.2/README.md?code=true#L24).
1538
  Each configuration corresponds to a pathname pattern in the [data subdirectory](https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/tree/v1.2/data).
1539
 
1540
- The dataset is available in the following versions:
1541
  - **v1.1** / [**main**](https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/tree/main/data) (default):
1542
- The data used for the first (main) pretraining phase of [Lucie-7B](https://huggingface.co/OpenLLM-France/Lucie-7B), which approximates 2.3T tokens.
1543
- - [**v1.2**](https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/tree/v1.2/data): An improved version of the data, where
1544
- - GallicaMonographies and GallicaPress have been updated to filter out documents with bad OCR quality.
1545
- - The `Ubuntu_IRC` and `PhilPapers` subsets of Pile have been refined, by fixing encoding issues and removing documents in languages other than English, French, Spanish, German and Italian.
1546
  - [**v1.2-recent-web**](https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/tree/v1.2-recent-web/data) : The data used for the second pretraining phase (context extension) of [Lucie-7B](https://huggingface.co/OpenLLM-France/Lucie-7B#2-context-extension).
1547
- This consists in the same as `v1.2` without old snapshots for web data (only year 2023 for RedPajama, and only year 2024 for FineWebEdu).
1548
- All data that was not filtered out remained unchanged.
1549
 
1550
  Except for **v1.1**, which is a git tag, all versions are git branches in the dataset repository
1551
  (e.g. [**v1.2**](https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/tree/v1.2/data)).
1552
 
 
 
1553
 
1554
  ### Details on Data Sources
1555
 
1556
  #### AmendementsParlement
1557
  * <u>Source</u>: Corpus contributed by OpenLLM partners.
1558
- * <u>Extracted from</u>: [Regards citoyens](https://www.regardscitoyens.org/#&panel1-4) ([nodeputes.fr](http://www.nosdeputes.fr/), [nossenateurs.fr](http://www.nossenateurs.fr/)). [API](https://github.com/regardscitoyens). License: [CC BY-SA](https://www.regardscitoyens.org/#&panel1-2).
1559
- * <u>Description</u>: A collection of proposed amendments by the French parliament: the legal text and description of the requested modification.
1560
- <!-- * <u>Citation</u>: No paper found. -->
1561
 
1562
  #### AmericanStories
1563
  * <u>Source</u>: [dell-research-harvard/AmericanStories](https://huggingface.co/datasets/dell-research-harvard/AmericanStories). License: [CC BY 4.0](https://huggingface.co/datasets/dell-research-harvard/AmericanStories).
1564
  * <u>Extracted from</u>: [Chronicling America](https://www.loc.gov/collections/chronicling-america/about-this-collection/). License: [Open](https://www.loc.gov/collections/chronicling-america/about-this-collection/rights-and-access/).
1565
- * <u>Description</u>: "The American Stories dataset is a collection of full article texts extracted from historical U.S. newspaper images. It includes nearly 20 million scans from the public domain Chronicling America collection maintained by the Library of Congress. The dataset is designed to address the challenges posed by complex layouts and low OCR quality in existing newspaper datasets" (from the [dataset card](https://huggingface.co/datasets/dell-research-harvard/AmericanStories)). Dataset containing text retrieved through OCR.
1566
- * <u>Text Pre-processing</u>:
1567
  * <u>Filtering</u>:
1568
- To filter out documents with excessive OCR errors, the dataset was refined by discarding texts with a perplexity higher than 1500,
1569
- measured using a CCNET model in English.
1570
- The code to compute perplexity, parallelized on Parquet files, is [available here](https://github.com/OpenLLM-France/Lucie-dataset-filtering).
1571
  * <u>Citation</u>: Melissa Dell, Jacob Carlson, Tom Bryan, Emily Silcock, Abhishek Arora, Zejiang Shen, Luca D'Amico-Wong, Quan Le, Pablo Querubin and Leander Heldring (2023). "American Stories: A Large-Scale Structured Text Dataset of Historical U.S. Newspapers," [arxiv:2308.12477](https://arxiv.org/abs/2308.12477v1).
1572
 
1573
  #### Claire (French and English)
@@ -1575,70 +1575,84 @@ Except from **v1.1**, which is a git tag, all versions are git branches in the d
1575
  * French dataset: [OpenLLM-France/Claire-Dialogue-French-0.1](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-French-0.1). License: [CC BY-NC-SA 4.0](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-French-0.1).
1576
  * English dataset: [OpenLLM-France/Claire-Dialogue-English-0.1](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-English-0.1). License: [CC BY-NC-SA 4.0](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-English-0.1).
1577
  * <u>Extracted from</u>: see the datacards for the [French](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-French-0.1) and [English](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-English-0.1) datasets.
1578
- * <u>Description</u>: The Claire datasets are composed of transcripts of spoken conversations -- including parliamentary proceedings, interviews, debates, meetings, and free conversations -- as well as some written conversations from theater plays and written chats. The dataset is designed to help downstream performance of models fine-tuned for tasks requiring the comprehension of spontaneous spoken conversation, such as meeting summarization. Each dialogue is split into speech turns, and each speech turn is labeled with the name of the speaker or a unique identifier.
1579
  * <u>Citation</u>: Julie Hunter, Jérôme Louradour, Virgile Rennard, Ismaïl Harrando, Guokan Shang, Jean-Pierre Lorré (2023). The Claire French Dialogue Dataset. [arXiv:2311.16840](https://arxiv.org/abs/2311.16840).
1580
 
1581
  #### CroissantAligned
1582
  * <u>Source</u>: [croissantllm/croissant_dataset_no_web_data](https://huggingface.co/datasets/croissantllm/croissant_dataset_no_web_data/tree/main/aligned_36b) (subset: `aligned_36b`). License: not specified.
1583
  * <u>Extracted from</u>:
1584
- * Translation pairs: [OPUS](https://opus.nlpl.eu/) (99.6% of the data in CroissantAligned). Pairs extracted from OPUS are labeled as "UnbabelFrEn". License: .
1585
  * Thesis abstracts: French thesis abstract pairs. License: [ETALAB-Licence-Ouverte-v2.0](https://www.etalab.gouv.fr/wp-content/uploads/2017/04/ETALAB-Licence-Ouverte-v2.0.pdf).
1586
- * Song lyrics: [lacoccinelle](https://www.lacoccinelle.net). License: .
1587
- * <u>Description</u>: Data extracted from OPUS takes the form of sentences pairs, where one sentence is in French and the other is in English. OPUS pairs were passed through a custom pipeline designed to select the highest quality sentences pairs. Selected pairs are labeled "UnbabelFrEn" in the CroissantAligned dataset. The thesis abstract subset contains pairs of French or English thesis abstracts paired with translations written by the thesis author. The song lyrics are translated by contributors to www.lacoccinelle.net. Parallel data are used to boost the multilingual capabilities of models trained on them ([Faysse et al.,2024](https://arxiv.org/pdf/2402.00786)).
1588
- * <u>Text Pre-processing</u>:
1589
- * <u>Language Separation and Tagging</u>: The original text field of [the Croissant dataset](https://huggingface.co/datasets/croissantllm/croissant_dataset_no_web_data) includes both French and English passages in a non-deterministic order (sometimes English first, sometimes not), separated by different delimiters depending on the subset.
1590
- Each text was split into monolingual sentences and tagged with the appropriate language code, identified automatically using the [langid library](https://pypi.org/project/langid/).
1591
- These texts are provided separately in the Lucie-Training-Dataset under the extra field as text_fr for French and text_en for English.
1592
- * <u>Random combination of texts prefixed by language</u>: The monolingual texts were recombined using random separators and various methods of prefixing the text with the language (name or code).
1593
- This was done as a precaution to prevent models trained on this data from language switching when generating text.
1594
- It can be seen as a very basic instruction to translate the first text into the other language.
1595
  * <u>Citation</u>: Manuel Faysse, Patrick Fernandes, Nuno M. Guerreiro, António Loison, Duarte M. Alves, Caio Corro, Nicolas Boizard, João Alves, Ricardo Rei, Pedro H. Martins, Antoni Bigata Casademunt, François Yvon, André F.T. Martins, Gautier Viaud, Céline Hudelot, Pierre Colombo (2024). "CroissantLLM: A Truly Bilingual French-English Language Model," [arXiv:2402.00786](https://arxiv.org/abs/2402.00786).
1596
 
1597
  #### DiscoursPublics
1598
  * <u>Source</u>: Corpus contributed by OpenLLM partners.
1599
- * <u>Extracted from</u>: [Vie Publique](https://www.vie-publique.fr/collection-discours-publics).
1600
  * <u>Description</u>: A collection of public speeches from the principal public actors in France including speeches from the French President starting from 1974 and from the Prime Minister and members of the government starting from 1980.
1601
- * <u>Text Pre-processing</u>:
1602
- * <u>Text cleaning</u>: the mention of the source url and the number of views were removed.
1603
- <!-- * <u>Citation</u>: No paper found. -->
1604
 
1605
  #### Europarl and EuroparlAligned
1606
  * <u>Sources</u>:
1607
  * `fr-en`, `es-en`, `it-en` parallel data: [Europarl v7](https://www.statmt.org/europarl/v7/). License: [Open](https://www.statmt.org/europarl/).
1608
  * `fr`, `en`, `de`, `es` monolingual data and `de-fr` parallel data: [Europarl v10](https://www.statmt.org/europarl/v10/training-monolingual/). License: [Open](https://www.statmt.org/europarl/).
1609
  * <u>Description</u>: "The Europarl parallel corpus is extracted from the proceedings of the European Parliament. It includes versions in 21 European languages: Romanic (French, Italian, Spanish, Portuguese, Romanian), Germanic (English, Dutch, German, Danish, Swedish), Slavik (Bulgarian, Czech, Polish, Slovak, Slovene), Finni-Ugric (Finnish, Hungarian, Estonian), Baltic (Latvian, Lithuanian), and Greek. The goal of the extraction and processing was to generate sentence aligned text for statistical machine translation systems" ([www.statmt.org](https://www.statmt.org/europarl/)).
1610
- * <u>Text Pre-processing</u>:
1611
- * <u>Random Combination of Aligned Texts Prefixed by Language</u>: The same process as used for the [CroissantAligned](#croissantaligned) dataset was applied to the Europarl dataset.
1612
- In the Lucie-Training-Dataset, this dataset provides texts in the two languages under the extra sub-fields `text_1` and `text_2`, and the corresponding language codes under `lang_1` and `lang_2`.
1613
  * <u>Citation</u>: Philipp Koehn (2005). "Europarl: A Parallel Corpus for Statistical Machine Translation," MT Summit.
1614
 
1615
  #### Eurovoc
1616
- * <u>Source</u>: [EuropeanParliament/Eurovoc](https://huggingface.co/datasets/EuropeanParliament/Eurovoc). License: [EUPL 1.1](https://joinup.ec.europa.eu/licence/european-union-public-licence-version-11-or-later-eupl).
1617
- * <u>Extracted from</u>: [Cellar](https://op.europa.eu/en/web/cellar). License: [Open](https://op.europa.eu/en/web/cellar).
1618
- * <u>Description</u>: A collection of mutlilingual documents from the data repository of the Publications Office of the European Union annotated with Eurovoc labels. Dataset containing text retrieved through OCR.
 
 
 
 
 
 
 
1619
  * <u>Citations</u>:
1620
- * Ilias Chalkidis, Emmanouil Fergadiotis, Prodromos Malakasiotis, Nikolaos Aletras, and Ion Androutsopoulos (2019). "Extreme Multi-Label Legal Text Classification: A Case Study in EU Legislation," Proceedings of the Natural Legal Language Processing Workshop 2019, pages 78–87, Minneapolis, Minnesota. Association for Computational Linguistics.
1621
- * Ilias Chalkidis, Manos Fergadiotis, Prodromos Malakasiotis and Ion Androutsopoulos (2019). "Large-Scale Multi-Label Text Classification on EU Legislation," Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019), Florence, Italy, (short papers).
1622
- * Andrei-Marius Avram, Vasile Pais, and Dan Ioan Tufis (2021). "PyEuroVoc: A Tool for Multilingual Legal Document Classification with EuroVoc Descriptors," Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021), pages 92–101, Held Online. INCOMA Ltd.
1623
  * Zein Shaheen, Gerhard Wohlgenannt and Erwin Filtz (2020). "Large scale legal text classification using transformer models," [arXiv:2010.12871](https://arxiv.org/abs/2010.12871v1).
1624
 
1625
  #### FineWebEdu
1626
  * <u>Source</u>: [HuggingFaceFW/fineweb-edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu). License: [ODC-BY](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu).
1627
  * <u>Extracted from</u>: [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb). License: [ODC-BY](https://huggingface.co/datasets/HuggingFaceFW/fineweb).
1628
- * <u>Description</u>: A 1.3 trillion token selection from [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb), which contains 15 trillion tokens of curated data from 96 Common Crawl dumps. Content in FineWebEdu has been selected by a custom designed classifier for its high-quality, educational content. Knowledge cutoff: 2019-2024.
 
 
 
1629
  * <u>Citation</u>: Guilherme Penedo, Hynek Kydlíček, Loubna Ben allal, Anton Lozhkov, Margaret Mitchell, Colin Raffel, Leandro Von Werra, Thomas Wolf (2024). "The FineWeb Datasets: Decanting the Web for the Finest Text Data at Scale," [ arXiv:2406.17557](https://arxiv.org/abs/2406.17557).
1630
 
1631
  #### GallicaMonographies
1632
- * <u>Source</u>: Corpus contributed by OpenLLM partners. A version is also published here: [PleIAs/French-PD-Books](https://huggingface.co/datasets/PleIAs/French-PD-Books). License: None (public domain).
1633
  * <u>Extracted from</u>: [Gallicagram](https://shiny.ens-paris-saclay.fr/app/gallicagram).
1634
  * <u>Description</u>: A large collection of French monographies in the public domain made available through the French National Library ([Gallica](https://gallica.bnf.fr/accueil/fr/content/accueil-fr?mode=desktop)). Dataset containing text retrieved through OCR.
1635
- <!-- * <u>Citation</u>: No paper found. -->
 
 
 
 
1636
 
1637
  #### GallicaPress
1638
- * <u>Source</u>: Corpus contributed by OpenLLM partners. A version is also published here: [PleIAs/French-PD-Newspapers](https://huggingface.co/datasets/PleIAs/French-PD-Newspapers). License: None (public domain).
1639
  * <u>Extracted from</u>: [Gallicagram](https://shiny.ens-paris-saclay.fr/app/gallicagram).
1640
  * <u>Description</u>: A large collection of French newspapers and periodicals in the public domain made available through the French National Library ([Gallica](https://gallica.bnf.fr/accueil/fr/content/accueil-fr?mode=desktop)). Dataset containing text retrieved through OCR.
1641
- <!-- * <u>Citation</u>: No paper found. -->
 
 
 
 
1642
 
1643
  #### Gutenberg
1644
  * <u>Source</u>: Corpus compiled by OpenLLM partners.
@@ -1646,51 +1660,65 @@ Except from **v1.1**, which is a git tag, all versions are git branches in the d
1646
  * [aleph.gutenberg.org](http://aleph.gutenberg.org/) via [Project Gutenberg](https://www.gutenberg.org/). License: [Open](https://www.gutenberg.org/policy/terms_of_use.html).
1647
  * [pgcorpus](https://github.com/pgcorpus/gutenberg). License: [CC BY-4.0](https://zenodo.org/records/2422561).
1648
  * <u>Description</u>: A collection of free eBooks, manually prepared by human annotators.
1649
- * <u>Text Pre-processing</u>:
1650
- * <u>Filtering</u>: The dataset was filtered based on the author date of death, so that only texts from authors who died more than 70 years ago are included (80 years for French authors). This filtering was done to ensure that the texts are in the public domain.
1651
- * <u>Text cleaning</u>: Headers, footers mentioning the Project Gutenberg were removed.
1652
- <!-- * <u>Citation</u>: No paper found. -->
1653
 
1654
  #### HAL
1655
- * <u>Source</u>: The ROOTS corpus by BigScience (unpublished). License: CC BY-4.0.
1656
- * <u>Extracted from</u>: [HAL](https://hal.science/).
1657
- * <u>Description</u>: A collection of scientific papers and manuscripts distributed through an open science platform. Dataset containing text retrieved through OCR.
1658
- * <u>Citation</u>: Hugo Laurençon, Lucile Saulnier, Thomas Wang, Christopher Akiki, Albert Villanova del Moral, Teven Le Scao, Leandro Von Werra, Chenghao Mou, Eduardo González Ponferrada, Huu Nguyen, Jörg Frohberg, Mario Šaško, Quentin Lhoest, Angelina McMillan-Major, Gerard Dupont, Stella Biderman, Anna Rogers, Loubna Ben allal, Francesco De Toni, Giada Pistilli, Olivier Nguyen, Somaieh Nikpoor, Maraim Masoud, Pierre Colombo, Javier de la Rosa, Paulo Villegas, Tristan Thrush, Shayne Longpre, Sebastian Nagel, Leon Weber, Manuel Muñoz, Jian Zhu, Daniel Van Strien, Zaid Alyafeai, Khalid Almubarak, Minh Chien Vu, Itziar Gonzalez-Dios, Aitor Soroa, Kyle Lo, Manan Dey, Pedro Ortiz Suarez, Aaron Gokaslan, Shamik Bose, David Adelani, Long Phan, Hieu Tran, Ian Yu, Suhas Pai, Jenny Chim, Violette Lepercq, Suzana Ilic, Margaret Mitchell, Sasha Alexandra Luccioni, Yacine Jernite (2022). [The BigScience ROOTS Corpus: A 1.6TB Composite Multilingual Dataset](https://proceedings.neurips.cc/paper_files/paper/2022/hash/ce9e92e3de2372a4b93353eb7f3dc0bd-Abstract-Datasets_and_Benchmarks.html). Advances in Neural Information Processing Systems (NeurIPS), 35, 31809-31826.
 
 
 
 
 
1659
 
1660
 
1661
  #### InterventionsParlement
1662
  * <u>Source</u>: Corpus contributed by OpenLLM partners.
1663
- * <u>Extracted from</u>: [Regards citoyens](https://www.regardscitoyens.org/#&panel1-4) ([nodeputes.fr](http://www.nosdeputes.fr/), [nossenateurs.fr](http://www.nossenateurs.fr/)). [API](https://github.com/regardscitoyens). License: [CC BY-SA](https://www.regardscitoyens.org/#&panel1-2).
1664
  * <u>Description</u>: Transcripts of speeches made during French parliamentary debates.
1665
  <!-- * <u>Citation</u>: No paper found. -->
1666
 
1667
- #### MathPile
1668
- * <u>Source</u>: [GAIR/MathPile_Commercial](https://huggingface.co/datasets/GAIR/MathPile_Commercial). License: [CC BY-SA 4.0](https://huggingface.co/datasets/GAIR/MathPile_Commercial)
 
 
 
 
 
1669
  * <u>Extracted from</u>: [MathPile](https://huggingface.co/datasets/GAIR/MathPile). License: [CC BY-SA-NC 4.0](https://huggingface.co/datasets/GAIR/MathPile).
1670
  * <u>Description</u>: A preprocessed collection of documents focused on math, including Textbooks, arXiv, Wikipedia, ProofWiki, StackExchange, and web pages from Common Crawl. The content targets a range of levels, from kindergarten through postgraduate level. MathPile_Commercial was obtained by removing documents from MathPile that do not allow commercial use.
 
 
 
 
 
1671
  * <u>Citation</u>: Zengzhi Wang, Rui Xia and Pengfei Liu (2023). "Generative AI for Math: Part I -- MathPile: A Billion-Token-Scale Pretraining Corpus for Math," [ arXiv:2312.17120](https://export.arxiv.org/abs/2312.17120).
1672
 
1673
  #### OpenData
1674
- * <u>Source</u>: [Nicolas-BZRD/DILA_OPENDATA_FR_2023](https://huggingface.co/datasets/Nicolas-BZRD/DILA_OPENDATA_FR_2023/tree/main) (balo, dole, inca, kali, legi and sarde subsets). License: [ODC-BY](https://huggingface.co/datasets/Nicolas-BZRD/DILA_OPENDATA_FR_2023/tree/main).
1675
  * <u>Extracted from</u>: [OpenData](https://echanges.dila.gouv.fr/OPENDATA/) (Data collection date: October, 2023).
1676
  * <u>Description</u>: "The French Government Open Data (DILA) Dataset is a collection of text data extracted from various sources provided by the French government, specifically the Direction de l'information légale et administrative (DILA). This dataset contains a wide range of legal, administrative, and legislative documents. The data has been organized into several categories for easy access and analysis" (from the [dataset card](https://huggingface.co/datasets/Nicolas-BZRD/DILA_OPENDATA_FR_2023/tree/main)).
1677
  <!-- * <u>Citation</u>: No paper found. -->
1678
 
1679
  #### OpenEdition
1680
  * <u>Source</u>: Corpus contributed by OpenLLM partners.
1681
- * <u>Extracted from</u>: [Open Edition](https://www.openedition.org/).
1682
- <!-- * <u>Description</u>: TODO -->
1683
  <!-- * <u>Citation</u>: No paper found. -->
1684
 
1685
- #### PeS2o
1686
- * <u>Source</u>: [allenai/peS2o](https://huggingface.co/datasets/allenai/peS2o). License: [ODC BY-v1.0](https://opendatacommons.org/licenses/by/1-0/)
1687
- * <u>Extracted from</u>: [S2ORC](https://github.com/allenai/s2orc) (see [aclanthology](https://aclanthology.org/2020.acl-main.447/)). Knowledge cutoff: 2023-01-03.
1688
- * <u>Description</u>: A preprocessed collection of academic papers designed for pre-training of language models. PeS2o is composed of two subsets: one containing full papers and one containing only paper titles and abstracts. Dataset containing (some) text retrieved through OCR.
1689
- * <u>Citation</u>: Luca Soldaini and Kyle Lo (2023). "peS2o (Pretraining Efficiently on S2ORC) Dataset}, Allen Institute for AI. [GitHub](https://github.com/allenai/pes2o).
1690
 
1691
  #### Pile (Uncopyrighted)
1692
  * <u>Source</u>: [monology/pile-uncopyrighted](https://huggingface.co/datasets/monology/pile-uncopyrighted). License: [Other](https://huggingface.co/datasets/monology/pile-uncopyrighted).
1693
- * <u>Extracted from</u>: [FreeLaw](https://free.law/), [StackExchange](https://stackexchange.com/), [USPTO Backgrounds](https://bulkdata.uspto.gov/), [DM Mathematics](https://github.com/google-deepmind/mathematics_dataset), [Ubuntu IRC](https://irclogs.ubuntu.com/), [PhilPapers](https://philpapers.org/), NIH ExPorter from [The Pile](https://huggingface.co/datasets/EleutherAI/pile). License: MIT.
1694
  * <u>Description</u> (from the [Datasheet](https://arxiv.org/abs/2201.07311)):
1695
  * FreeLaw: "The Free Law Project is US registered non-profit that provide access to millions of legal opinions and analytical tools for academic studies in the legal realm."
1696
  * StackExchange: "The StackExchange dataset is a dump of anonymized user-contributed content on the Stack Exchange network, a popular collection of websites centered around user-contributed questions and answers."
@@ -1699,41 +1727,63 @@ Except from **v1.1**, which is a git tag, all versions are git branches in the d
1699
  * Ubuntu IRC: "The Ubuntu IRC dataset is derived from the publicly available chatlogs of all Ubuntu-related channels on the Freenode IRC chat server."
1700
  * PhilPapers: a dataset of open access philosophy publications from an international database maintained by the Center for Digital Philosophy at the University of Western Ontario.
1701
  * NIH ExPORTER: "The NIH Grant abstracts provides a bulk-data repository for awarded applications through the ExPORTER4 service covering the fiscal years 1985-present."
 
 
 
1702
  * <u>Citations</u>:
1703
  * Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, Connor Leahy (2020). "The Pile: An 800GB Dataset of Diverse Text for Language Modeling," [ arXiv:2101.00027](https://arxiv.org/abs/2101.00027).
1704
- * Stella Biderman, Kieran Bicheno, Leo Gao (2022). "Datasheet for the Pile," [ arXiv:2201.07311](https://arxiv.org/abs/2201.07311).
1705
 
1706
  #### QuestionsEcritesParlement
1707
  * <u>Source</u>: Corpus contributed by OpenLLM partners.
1708
- * <u>Extracted from</u>: [Regards citoyens](https://www.regardscitoyens.org/#&panel1-4) ([text](https://data.regardscitoyens.org/nosdeputes.fr/)). License: [CC BY-NC-SA](https://data.regardscitoyens.org/nosdeputes.fr/).
1709
  * <u>Description</u>: Collection of long written questions, read during a session at the French National Assembly. Questions are asked by a member of the French parliament and addressed to a minister (who is given two months to respond).
1710
  <!-- * <u>Citation</u>: No paper found. -->
1711
 
1712
  #### RedPajama (v2)
1713
  * <u>Source</u>: [togethercomputer/RedPajama-Data-V2](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2). License: [Apache 2.0](https://github.com/togethercomputer/RedPajama-Data) (data preparation code), Not specified (data) but see [Common Crawl terms of use](https://commoncrawl.org/terms-of-use).
1714
  * <u>Extracted from</u>: [Common Crawl](https://commoncrawl.org/).
1715
- * <u>Description</u>: "RedPajama-V2 is an open dataset for training large language models. The dataset includes over 100B text documents coming from 84 CommonCrawl snapshots and processed using the [CCNet](https://github.com/facebookresearch/cc_net) pipeline. Out of these, there are 30B documents in the corpus that additionally come with quality signals, and 20B documents that are deduplicated" (from [GitHub](https://github.com/togethercomputer/RedPajama-Data)). Knowledge cutoff: 2014-2023.
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1716
  * <u>Citation</u>: Together Computer (2023). "RedPajama-Data-v2: an Open Dataset with 30 Trillion Tokens for Training Large Language Models," [GitHub](https://github.com/togethercomputer/RedPajama-Data).
1717
 
1718
  #### STAC
1719
  * <u>Source</u>: [STAC](https://www.irit.fr/STAC/corpus.html). License: [CC BY-SA-NC 4.0](https://www.irit.fr/STAC/corpus.html).
1720
- * <u>Extracted from</u>: [STAC](https://www.irit.fr/STAC/corpus.html). The full STAC corpus contains annotations for discourse structure. We use only the text of the chats.
1721
- * <u>Description</u>: A collection of chats from an online version of the game Settlers of Catan.
1722
- * <u>Citation</u>: Nicholas Asher, Julie Hunter, Mathieu Morey, Farah Benamara and Stergos Afantenos (2016). "Discourse structure and dialogue acts in multiparty dialogue: the STAC corpus," The Tenth International Conference on Language Resources and Evaluation (LREC 2016). European Language Resources Association, pp. 2721-2727.
1723
 
1724
- #### TheStack
1725
  * <u>Source</u>: [bigcode/the-stack-dedup](https://huggingface.co/datasets/bigcode/the-stack-dedup). License: [Other](https://huggingface.co/datasets/bigcode/the-stack-dedup) (mixture of copyleft licenses).
1726
- * <u>Extracted from</u>: [GHarchive](https://www.gharchive.org/)
1727
  * <u>Description</u>: "The Stack contains over 6TB of permissively-licensed source code files covering 358 programming languages. The dataset was created as part of the [BigCode Project](https://www.bigcode-project.org/), an open scientific collaboration working on the responsible development of Large Language Models for Code (Code LLMs). The Stack serves as a pre-training dataset for Code LLMs, i.e., code-generating AI systems which enable the synthesis of programs from natural language descriptions as well as other from code snippets. This is the near-deduplicated version with 3TB data" (from the [dataset card](https://huggingface.co/datasets/bigcode/the-stack-dedup)).
1728
  * <u>Citation</u>: Denis Kocetkov, Raymond Li, Loubna Ben Allal, Jia Li, Chenghao Mou, Carlos Muñoz Ferrandis, Yacine Jernite, Margaret Mitchell, Sean Hughes, Thomas Wolf, Dzmitry Bahdanau, Leandro von Werra and Harm de Vries (2022). "The Stack: 3 TB of permissively licensed source code," [arxiv:2211.15533](https://arxiv.org/abs/2211.15533).
1729
 
1730
  #### Theses
1731
  * <u>Source</u>: Corpus contributed by OpenLLM partners.
1732
- * <u>Extracted from</u>: [theses.fr](https://theses.fr/?domaine=theses) and [HAL](https://hal.science/).
1733
  * <u>Description</u>: A collection of doctoral theses published in France. Dataset containing text retrieved through OCR.
1734
- * <u>Text Pre-processing</u>:
1735
- * <u>Filtering</u>: Text with less than 1000 words or 10000 characters were removed.
1736
- * <u>Text cleaning</u>: Because the results of OCR on tables and graphics can give raise to garbage text, the text was cleaned by removing the most suspicious chunks of text. Chunks of text were removed if the detected language was not among French, English, Spanish, German and Italian, or if the perplexity of a CCNet Language Model was higher than 2000 ([details here](https://github.com/OpenLLM-France/Lucie-Training/blob/7f1f7efa1288f709662a9067bf2c3db856b850f8/tokenization/data.py#L1946)).
 
 
 
 
 
1737
  <!-- * <u>Citation</u>: No paper found. -->
1738
 
1739
  #### Wikipedia, Wikisource, Wiktionary
@@ -1744,15 +1794,17 @@ Except from **v1.1**, which is a git tag, all versions are git branches in the d
1744
  * [OpenLLM-France/wiktionary](https://huggingface.co/datasets/OpenLLM-France/wiktionary)
1745
  * <u>Extracted from</u>: [Wikimedia dumps](https://dumps.wikimedia.org/other/enterprise_html/runs/). License: [GFDL/CC BY-SA](https://dumps.wikimedia.org/legal.html).
1746
  <!-- * <u>Description</u>: TODO -->
 
1747
  <!-- * <u>Citation</u>: No paper found. -->
1748
 
1749
  #### YouTube
1750
- * <u>Source</u>: Corpus contributed by LINAGORA Labs (OpenLLM-France).
1751
  * <u>Extracted from</u>: [YouTube](https://www.youtube.com/). <!-- License: TODO? -->
1752
  * <u>Description</u>: French subtitles from videos published with permissive licenses on YouTube. <!-- TODO -->
1753
- <!-- * <u>Citation</u>: No paper found. -->
1754
 
1755
- ## Example use in python
 
 
1756
 
1757
  ### Load the dataset
1758
 
@@ -1841,14 +1893,19 @@ name = None # or a configuration (e.g. "fr", "code-python", "Wikipedia-fr", "Pil
1841
  dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", name, revision="v1.2", **kwargs)
1842
  ```
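For reference, a minimal self-contained sketch combining the options above (a named configuration, a pinned version, and streaming). "Wikipedia-fr" is one of the example configuration names mentioned above; the split name and streaming flag are standard `datasets` options:

```python
from datasets import load_dataset

# Minimal sketch: stream one configuration of one version of the dataset.
dataset = load_dataset(
    "OpenLLM-France/Lucie-Training-Dataset",
    "Wikipedia-fr",      # configuration (source/language subset); None loads everything
    revision="v1.2",     # git branch or tag of the dataset repository
    split="train",
    streaming=True,      # avoid downloading the whole subset up front
)

for i, sample in enumerate(dataset):
    print(sample["source"], sample["language"], sample["text"][:100])
    if i >= 2:
        break
```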
1843
 
1844
-
1845
  ## Citation
1846
 
1847
  TODO
1848
 
1849
  ## Acknowledgements
1850
 
1851
- TODO
 
 
 
 
 
 
1852
 
1853
  ## Contact
1854
 
 
463
  digital books, newspapers, and magazines, some of which were processed by Optical Character Recognition (OCR). It also contains samples of diverse programming languages.
464
 
465
  The Lucie Training Dataset was used to pretrain [Lucie-7B](https://huggingface.co/OpenLLM-France/Lucie-7B),
466
+ a foundation LLM with strong capabilities in French and English. Code for data preparation can be found in the [training repository](https://github.com/OpenLLM-France/Lucie-Training/tree/7f1f7efa1288f709662a9067bf2c3db856b850f8) for Lucie-7B.
467
 
468
  Table of Contents:
469
  <ul>
470
  <li><a href="#dataset-description">Dataset Description</a>
471
  <ul>
472
+ <li><a href="#sample-metadata">Sample Metadata</a></li>
473
  <li><a href="#dataset-composition">Dataset Composition</a>
474
  <table>
475
  <tr>
 
500
  </tr>
501
  </table>
502
  </li>
503
+ <li><a href="#configurable-subsets-and-versions">Configurable Subsets and Versions</a></li>
504
  <li><a href="#details-on-data-sources">Details on Data Sources</a>
505
  <table>
506
  <tr>
 
524
  <li><a href="#hal"> HAL</a></li>
525
  <li><a href="#interventionsparlement"> InterventionsParlement</a></li>
526
  <li><a href="#legi"> LEGI</a></li>
527
+ <li><a href="#mathpile-commercial"> MathPile (Commercial)</a></li>
528
  <li><a href="#opendata"> OpenData</a></li>
529
  <li><a href="#openedition"> OpenEdition</a></li>
530
+ <li><a href="#pes2o-v2"> PeS2o (v2)</a></li>
531
  </ul>
532
  </td>
533
  <td style="vertical-align: top;">
 
536
  <li><a href="#questionsecritesparlement"> QuestionsEcritesParlement</a></li>
537
  <li><a href="#redpajama-v2"> RedPajama (v2)</a></li>
538
  <li><a href="#stac"> Stac</a></li>
539
+ <li><a href="#thestack-v12"> TheStack (v1.2)</a></li>
540
  <li><a href="#theses"> Theses</a></li>
541
  <li><a href="#wikipedia-wikisource-wiktionary"> Wikipedia, Wikisource, Wiktionary</a></li>
542
  <li><a href="#youtube"> YouTube</a></li>
 
547
  </li>
548
  </ul>
549
  </li>
550
+ <li><a href="#example-use-in-python">Example use in Python</a></li>
551
+ <ul>
552
  <li><a href="#load-the-dataset">Load the dataset</a></li>
553
  <li><a href="#iterate-over-a-subset">Iterate over a subset</a></li>
554
  <li><a href="#load-a-specific-version">Load a specific version</a></li>
 
562
 
563
  ## Dataset Description
564
 
565
+ This dataset is intended to provide extensive and diverse multilingual data for training Large Language Models (LLMs). Here are some of the principal features of the corpus:
566
  * Data mix:
567
+ * The dataset contains more French than English data -- it is in fact one of the biggest collections of French text data that has been preprocessed for LLM training -- with the aim of minimizing anglo-centric cultural biases.
568
  * German, Spanish and Italian are also represented in small amounts.
569
+ * Code is included to boost the reasoning capabilities of LLMs.
570
  * Data filtering and deduplication:
571
  * The dataset has been cleaned in an effort to remove very low-quality data.
572
  * Duplicate data samples have been removed to some extent, following best practices.
573
+ * Web data has been filtered to minimize potentially toxic content and personally identifying information.
574
  * Ethics:
575
  * Special care has been taken to respect copyright laws and individual privacy.
576
+ All newspapers, monographies, magazines and legislative documents, as well as most books, are in the public domain
577
+ (which depends on the author's date of death and the country of publication). Other data are published with permissive licenses (e.g., CC BY or CC BY-SA).
578
+ * All web data in the dataset come from sites with robots.txt files that do not forbid crawling.
579
 
580
+ ### Sample Metadata
581
 
582
+ In addition to the `text` field, which provides the content of the sample, each training sample in the corpus contains the following metadata when available:
583
+ * [`language`](metadata/metadata_examples.json#L3): the language of the text sample (note that this information is taken from the original data source and may be incorrect).
 
584
  <br>Possible values:
585
+ - the ISO 639-1 code for a given natural language ("en", "fr", "de", "es", or "it"),
586
+ - the name of a programming language prefixed by "code:" ("code:python", "code:c++", …), or
587
+ - a list of ISO 639-1 codes separated by commas for data containing parallel translations ("fr,en", "de,fr", "es,en", "it,en",
588
  or one of those pairs in the opposite order if the languages appear in the opposite order in the text).
589
  * [`source`](metadata/metadata_examples.json#L4): an identifier for the source(s) of the text sample (Wikipedia, RedPajama, Gutenberg, …).
590
+ All sources are described in detail [below](#details-on-data-sources).
591
+ * [`id`](metadata/metadata_examples.json#L13): an identifier that is unique among documents from the same source.
592
  * [`url`](metadata/metadata_examples.json#L35) (optional): the URL of the original text sample on the web, if available.
593
  * [`title`](metadata/metadata_examples.json#L36) (optional): the title of the original text sample, if available.
594
  * [`author`](metadata/metadata_examples.json#L81) (optional): the author of the original text sample, if available.
595
  <details><summary>Note:</summary>
596
+ The author name is given in plain text, except in the case of <a href="metadata/metadata_examples.json#L91">Gutenberg books</a>, where it is the JSON serialized object of the author metadata.
597
  </details>
598
  * [`date`](metadata/metadata_examples.json#L6) (optional): the publication date of the original text sample, if available.
599
  <details><summary>Note:</summary>
600
+ The text format of the date depends on the source.
601
  </details>
602
+ * [`quality_signals`](metadata/metadata_examples.json#L17) (optional): a list of quality signals for the text sample in JSON format (which could be used for further filtering or sample weighting).
603
  <details><summary>Note:</summary>
604
  It can include indicators computed by `fasttext` and `CCNet`, statistics about occurrences of characters, words, special characters, etc.
605
  </details>
606
  * [`extra`](metadata/metadata_examples.json#L16) (optional): extra information about the text sample, in JSON format.
607
  This can include metadata about the source subset, the rights, etc.
608
 
609
+ The list of metadata available for each source is provided (without the `text` field) in [metadata_examples.json](metadata/metadata_examples.json).
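For illustration, here is a minimal sketch of inspecting these fields on a streamed sample. It assumes the `datasets` library and network access, and that `quality_signals` and `extra`, when present, are JSON-encoded strings as described above:

```python
import json
from datasets import load_dataset

# Stream a single sample from the default configuration and print its metadata.
dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", streaming=True, split="train")
sample = next(iter(dataset))

print(sample["language"], sample["source"], sample["id"])
for field in ("url", "title", "author", "date"):
    print(field, "->", sample.get(field))

# Optional JSON-encoded fields (availability depends on the source).
for field in ("quality_signals", "extra"):
    value = sample.get(field)
    if value:
        print(field, "->", json.loads(value))
```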
610
 
611
 
612
  ### Dataset Composition
 
620
  Sources are grouped by category.
621
  The table provides the numbers of documents, words, tokens, and characters for each subset.
622
  All numbers in this table are available in the CSV file [dataset_composition.csv](metadata/dataset_composition.csv).
623
+ Token counts are computed using the tokenizer for [Lucie-7B](https://huggingface.co/OpenLLM-France/Lucie-7B).
624
 
625
  <!-- The following is automatically generated. Do not update manually. -->
626
  <!-- TABLE START -->
627
  <table>
628
  <thead>
629
  <tr>
630
+ <th><strong>Subset</strong></th>
631
+ <th><strong>Language</strong></th>
632
  <th><strong>M docs</strong></th>
633
  <th><strong>B words</strong></th>
634
  <th><strong>B tokens</strong></th>
 
806
  <tr>
807
  <td colspan="7"><h4 id="category-technical">Category: Technical</h4></td></tr>
808
  <tr>
809
+ <td><a href="#pes2o-v2"><strong>PeS2o</strong></a></td>
810
  <td><strong>English (en)</strong></td>
811
  <td>38.972</td>
812
  <td>42.296</td>
 
1131
  <tr>
1132
  <td colspan="7"><h4 id="category-math">Category: Math</h4></td></tr>
1133
  <tr>
1134
+ <td><a href="#mathpile-commercial"><strong>MathPile</strong></a></td>
1135
  <td><strong>English (en)</strong></td>
1136
  <td>0.737</td>
1137
  <td>3.408</td>
 
1198
  <td></td>
1199
  </tr>
1200
  <tr>
1201
+ <td><a href="#stac"><strong>STAC</strong></a></td>
1202
  <td><strong>English (en)</strong></td>
1203
  <td>0.0000450</td>
1204
  <td>0.0000529</td>
 
1256
  <tr>
1257
  <td colspan="7"><h4 id="category-programming">Category: Programming</h4></td></tr>
1258
  <tr>
1259
+ <td rowspan="30" style="vertical-align: top;"><a href="#thestack-v12"><strong>TheStack</strong></a></td>
1260
  <td><strong>JAVASCRIPT</strong></td>
1261
  <td>21.109</td>
1262
  <td>8.526</td>
 
1530
  <!-- TABLE END -->
1531
 
1532
 
1533
+ ### Configurable Subsets and Versions
1534
 
1535
+ As the Lucie Training Dataset is a collection of multilingual corpora from different sources, it can be divided into subsets based on the source and language of its constituent corpora.
1536
+ <br> The list of possible configurations is available [in the YAML header of this README file](https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/blob/v1.2/README.md?code=true#L24).
 
1537
  Each configuration corresponds to a pathname pattern in the [data subdirectory](https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/tree/v1.2/data).
1538
 
1539
+ The dataset is also available in the following versions:
1540
  - **v1.1** / [**main**](https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/tree/main/data) (default):
1541
+ The data used for the first (main) pretraining phase of [Lucie-7B](https://huggingface.co/OpenLLM-France/Lucie-7B), which contains approximately 2.3T tokens. The statistics above apply to this version.
1542
+ - [**v1.2**](https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/tree/v1.2/data): An improved version of the main dataset, where
1543
+ - GallicaMonographies and GallicaPress have been filtered aggressively to remove documents with low OCR quality.
1544
+ - The `Ubuntu_IRC` and `PhilPapers` subsets of Pile have been refined by fixing encoding issues and removing documents in languages other than English, French, Spanish, German and Italian.
1545
  - [**v1.2-recent-web**](https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/tree/v1.2-recent-web/data) : The data used for the second pretraining phase (context extension) of [Lucie-7B](https://huggingface.co/OpenLLM-France/Lucie-7B#2-context-extension).
1546
+ This version is identical to `v1.2` with the exception that older snapshots of web data (before 2023 for RedPajama and before 2024 for FineWebEdu) have been excluded.
1547
+ All data from `v1.1` that were not filtered out remain unchanged in `v1.2` and `v1.2-recent-web`.
1548
 
1549
  Except for **v1.1**, which is a git tag, all versions are git branches in the dataset repository
1550
  (e.g. [**v1.2**](https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/tree/v1.2/data)).
1551
 
1552
+ The <a href="#example-use-in-python">Example use in Python</a> section contains example Python code for loading and iterating over the dataset with different configurations, including source, language and version.
1553
+
1554
 
1555
  ### Details on Data Sources
1556
 
1557
  #### AmendementsParlement
1558
  * <u>Source</u>: Corpus contributed by OpenLLM partners.
1559
+ * <u>Extracted from</u>: [Regards citoyens](https://www.regardscitoyens.org/#&panel1-4). License: [CC BY-SA](https://www.regardscitoyens.org/mentions-legales/).
1560
+ * <u>Description</u>: A collection of proposed amendments by the French parliament. Documents contain the text of the proposed amendment, the name of the associated law as well as information on who voted on the amendment and what was decided.
 
1561
 
1562
  #### AmericanStories
1563
  * <u>Source</u>: [dell-research-harvard/AmericanStories](https://huggingface.co/datasets/dell-research-harvard/AmericanStories). License: [CC BY 4.0](https://huggingface.co/datasets/dell-research-harvard/AmericanStories).
1564
  * <u>Extracted from</u>: [Chronicling America](https://www.loc.gov/collections/chronicling-america/about-this-collection/). License: [Open](https://www.loc.gov/collections/chronicling-america/about-this-collection/rights-and-access/).
1565
+ * <u>Description</u>: "The American Stories dataset is a collection of full article texts extracted from historical U.S. newspaper images. It includes nearly 20 million scans from the public domain Chronicling America collection maintained by the Library of Congress. The dataset is designed to address the challenges posed by complex layouts and low OCR quality in existing newspaper datasets" (from the [dataset card](https://huggingface.co/datasets/dell-research-harvard/AmericanStories)). See the dataset <a href="https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/blob/main/figures/fig_distribution_americanstories-english_histogram.png">composition details</a> for statistics on documents by year. Dataset containing text retrieved through OCR.
1566
+ * <u>Pre-processing</u>:
1567
  * <u>Filtering</u>:
1568
+ To filter out documents with excessive OCR errors, the dataset was refined by discarding texts with a perplexity higher than 2310,
1569
+ measured using a CCNet model in English (see [code details](https://github.com/OpenLLM-France/Lucie-Training/blob/7f1f7efa1288f709662a9067bf2c3db856b850f8/tokenization/data.py#L2106)).
1570
+ The code to compute CCNet perplexity, parallelized over parquet files, is [available here](https://github.com/OpenLLM-France/Lucie-dataset-filtering). A schematic sketch of this thresholding step is given after this entry.
1571
  * <u>Citation</u>: Melissa Dell, Jacob Carlson, Tom Bryan, Emily Silcock, Abhishek Arora, Zejiang Shen, Luca D'Amico-Wong, Quan Le, Pablo Querubin and Leander Heldring (2023). "American Stories: A Large-Scale Structured Text Dataset of Historical U.S. Newspapers," [arxiv:2308.12477](https://arxiv.org/abs/2308.12477v1).
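The sketch below illustrates the thresholding logic on a single parquet shard. The scorer here is a self-contained placeholder (a character-entropy proxy), not the actual CCNet language model used in the linked repositories; only the threshold value comes from the description above:

```python
import math
from collections import Counter

import pandas as pd

PERPLEXITY_THRESHOLD = 2310  # value quoted above for AmericanStories

def score_text(text: str) -> float:
    """Placeholder scorer: a character-entropy proxy standing in for the
    CCNet perplexity model used in the real pipeline (see linked repos)."""
    counts = Counter(text)
    total = sum(counts.values()) or 1
    entropy = -sum((n / total) * math.log2(n / total) for n in counts.values())
    return 2.0 ** entropy

def filter_parquet_shard(path_in: str, path_out: str) -> None:
    # Keep only documents whose score is at or below the threshold.
    df = pd.read_parquet(path_in)
    keep = df["text"].map(score_text) <= PERPLEXITY_THRESHOLD
    df[keep].to_parquet(path_out, index=False)
```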
1572
 
1573
  #### Claire (French and English)
 
1575
  * French dataset: [OpenLLM-France/Claire-Dialogue-French-0.1](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-French-0.1). License: [CC BY-NC-SA 4.0](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-French-0.1).
1576
  * English dataset: [OpenLLM-France/Claire-Dialogue-English-0.1](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-English-0.1). License: [CC BY-NC-SA 4.0](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-English-0.1).
1577
  * <u>Extracted from</u>: see the datacards for the [French](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-French-0.1) and [English](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-English-0.1) datasets.
1578
+ * <u>Description</u>: The Claire datasets are composed of transcripts of spoken conversations -- including parliamentary proceedings, interviews, debates, meetings, and free conversations -- as well as some written conversations from theater plays and written chats. The dataset is designed to help downstream performance of models fine-tuned for tasks requiring the comprehension of spontaneous spoken conversation, such as meeting summarization. Each dialogue is split into speech turns, and each speech turn is labeled with the name of the speaker or a unique identifier. See the composition details for the <a href="https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/blob/main/figures/fig_distribution_claire-french_pie.png">French dataset</a> and the <a href="https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/blob/main/figures/fig_distribution_claire-english_pie.png">English dataset</a> for a high-level view of the distribution of different types of documents in each dataset.
1579
  * <u>Citation</u>: Julie Hunter, Jérôme Louradour, Virgile Rennard, Ismaïl Harrando, Guokan Shang, Jean-Pierre Lorré (2023). The Claire French Dialogue Dataset. [arXiv:2311.16840](https://arxiv.org/abs/2311.16840).
1580
 
1581
  #### CroissantAligned
1582
  * <u>Source</u>: [croissantllm/croissant_dataset_no_web_data](https://huggingface.co/datasets/croissantllm/croissant_dataset_no_web_data/tree/main/aligned_36b) (subset: `aligned_36b`). License: not specified.
1583
  * <u>Extracted from</u>:
1584
+ * Translation pairs: [OPUS](https://opus.nlpl.eu/) (99.6% of the data in CroissantAligned). Pairs extracted from OPUS are labeled as "UnbabelFrEn".
1585
  * Thesis abstracts: French thesis abstract pairs. License: [ETALAB-Licence-Ouverte-v2.0](https://www.etalab.gouv.fr/wp-content/uploads/2017/04/ETALAB-Licence-Ouverte-v2.0.pdf).
1586
+ * Song lyrics: [lacoccinelle](https://www.lacoccinelle.net).
1587
+ * <u>Description</u>: CroissantAligned contains samples of parallel French/English (or English/French) data. Data extracted from OPUS takes the form of sentence pairs, where one sentence is in French and the other is in English. OPUS pairs were passed through a custom pipeline designed to select the highest quality translation examples. Selected pairs are labeled "UnbabelFrEn" in the CroissantAligned dataset. The thesis abstract subset contains thesis abstracts paired with translations written by the thesis authors. The song lyrics are translated by contributors to www.lacoccinelle.net. Parallel data are used to boost the multilingual capabilities of models trained on them ([Faysse et al., 2024](https://arxiv.org/pdf/2402.00786)).
1588
+ * <u>Pre-processing</u>:
1589
+ * <u>Language separation and tagging</u>: The original text field of [the Croissant dataset](https://huggingface.co/datasets/croissantllm/croissant_dataset_no_web_data) contains a sentence or passage in French or English immediately followed by its translation without any indication of which passage is in which language. The first step was thus to split each text into separate, monolingual passages and tag each passage with the appropriate language code, identified automatically using the [langid library](https://pypi.org/project/langid/) (see [code details](https://github.com/OpenLLM-France/Lucie-Training/blob/cdec8fd6369385455829ab39c2f04bcb1a8a475a/tokenization/data.py#L1407)). In the Lucie Training Dataset, the `extra` metadata field for CroissantAligned contains separate keys, `text_fr` for French and `text_en` for English, that stores the texts separately.
1590
+ * <u>Random combination of texts prefixed by language</u>: To create the text values, each monolingual text was repaired with its translation, but random separators and various methods of prefixing the text with the language (name or code) were added.
1591
+ This was done as a precaution to prevent models trained on this data from switching languages when generating text and can be seen as a very basic instruction to translate the source (first) text into the target (second) text (see [code details](https://github.com/OpenLLM-France/Lucie-Training/blob/cdec8fd6369385455829ab39c2f04bcb1a8a475a/tokenization/data.py#L1458)).
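  The two steps can be sketched roughly as follows. This is a simplified illustration rather than the exact pipeline: it assumes the two monolingual passages have already been separated, and the separators and prefix styles listed here are made-up placeholders, not the values used in the linked code.
  ```python
  # Rough sketch: tag each passage with langid, then re-pair the passages with
  # a random separator and a random language-prefix style.
  import random
  import langid

  def tag_language(passage: str) -> str:
      """Return the language code predicted by langid (e.g. 'fr' or 'en')."""
      lang, _score = langid.classify(passage)
      return lang

  def combine(passage_1: str, passage_2: str) -> str:
      """Re-pair two monolingual passages, each prefixed by its language."""
      separators = ["\n\n", "\n----------\n", " ==> "]   # illustrative placeholders
      prefix_styles = ["[{lang}]", "{lang}:"]            # illustrative placeholders
      sep = random.choice(separators)
      style = random.choice(prefix_styles)
      parts = []
      for passage in (passage_1, passage_2):
          prefix = style.format(lang=tag_language(passage))
          parts.append(f"{prefix} {passage}")
      return sep.join(parts)

  print(combine("Le chat dort.", "The cat is sleeping."))
  # e.g. "[fr] Le chat dort.\n\n[en] The cat is sleeping."
  ```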
 
 
 
* <u>Citation</u>: Manuel Faysse, Patrick Fernandes, Nuno M. Guerreiro, António Loison, Duarte M. Alves, Caio Corro, Nicolas Boizard, João Alves, Ricardo Rei, Pedro H. Martins, Antoni Bigata Casademunt, François Yvon, André F.T. Martins, Gautier Viaud, Céline Hudelot, Pierre Colombo (2024). "CroissantLLM: A Truly Bilingual French-English Language Model," [arXiv:2402.00786](https://arxiv.org/abs/2402.00786).


#### DiscoursPublics
* <u>Source</u>: Corpus contributed by OpenLLM partners.
* <u>Extracted from</u>: [Vie Publique](https://www.vie-publique.fr/collection-discours-publics). License: [ETALAB-Licence-Ouverte-v2.0](https://www.vie-publique.fr/mentions-legales).
* <u>Description</u>: A collection of public speeches from the principal public actors in France, including speeches by the French President from 1974 onward and by the Prime Minister and members of the government from 1980 onward.
* <u>Pre-processing</u>:
  * <u>Text cleaning</u>: The mention of the source URL and the number of views were removed from the text.

#### Europarl and EuroparlAligned
* <u>Sources</u>:
  * `fr-en`, `es-en`, `it-en` parallel data: [Europarl v7](https://www.statmt.org/europarl/v7/). License: [Open](https://www.statmt.org/europarl/).
  * `fr`, `en`, `de`, `es` monolingual data and `de-fr` parallel data: [Europarl v10](https://www.statmt.org/europarl/v10/training-monolingual/). License: [Open](https://www.statmt.org/europarl/).
* <u>Description</u>: "The Europarl parallel corpus is extracted from the proceedings of the European Parliament. It includes versions in 21 European languages: Romanic (French, Italian, Spanish, Portuguese, Romanian), Germanic (English, Dutch, German, Danish, Swedish), Slavik (Bulgarian, Czech, Polish, Slovak, Slovene), Finni-Ugric (Finnish, Hungarian, Estonian), Baltic (Latvian, Lithuanian), and Greek. The goal of the extraction and processing was to generate sentence aligned text for statistical machine translation systems" ([www.statmt.org](https://www.statmt.org/europarl/)).
* <u>Pre-processing</u>:
  * <u>Random combination of aligned texts prefixed by language</u>: The same process as used for the [CroissantAligned](#croissantaligned) dataset was applied to the EuroparlAligned dataset (see [code details](https://github.com/OpenLLM-France/Lucie-Training/blob/cdec8fd6369385455829ab39c2f04bcb1a8a475a/tokenization/data.py#L1350)). In the Lucie Training Dataset, the `extra` field in the metadata for EuroparlAligned provides texts in the two languages under the sub-fields `text_1` and `text_2`, and the corresponding language codes under `lang_1` and `lang_2`.
* <u>Citation</u>: Philipp Koehn (2005). "Europarl: A Parallel Corpus for Statistical Machine Translation," MT Summit.

#### Eurovoc
* <u>Source</u>: [EuropeanParliament/Eurovoc](https://huggingface.co/datasets/EuropeanParliament/Eurovoc). License: [EUPL 1.1](https://huggingface.co/datasets/EuropeanParliament/Eurovoc).
* <u>Extracted from</u>: [Cellar](https://op.europa.eu/en/web/cellar). License: [CC BY-4.0](https://op.europa.eu/en/web/about-us/legal-notices/publications-office-of-the-european-union-copyright).
* <u>Description</u>: A collection of multilingual documents from the data repository of the Publications Office of the European Union, annotated with Eurovoc labels. The corpus contains legal, policy-related, historical and organizational information about the EU. Dataset containing text retrieved through OCR.
* <u>Pre-processing</u>:
  * <u>Filtering</u>: To filter out documents with excessive OCR errors, the dataset was refined by discarding texts with a perplexity higher than 1500, measured using a CCNET model in English (see [code details](https://github.com/OpenLLM-France/Lucie-Training/blob/7f1f7efa1288f709662a9067bf2c3db856b850f8/tokenization/data.py#L1590), and the sketch below for the general idea). The code to compute CCNET perplexity, parallelizing on parquet files, is [available here](https://github.com/OpenLLM-France/Lucie-dataset-filtering).
  * <u>Text cleaning</u>: Spurious character ID codes left over from PDF text extraction, such as `(cid:146)`, were removed from the raw texts.
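  The perplexity filter can be sketched as follows. This assumes a KenLM n-gram model of the kind used by CCNet; the model path, the whitespace tokenization and the scoring details are simplified assumptions, not the exact setup used for the dataset.
  ```python
  # Rough sketch of a CCNet-style perplexity filter using KenLM.
  # "en.arpa.bin" is a placeholder path to a KenLM model, not the actual model file.
  import kenlm

  model = kenlm.Model("en.arpa.bin")

  def perplexity(text: str) -> float:
      """Perplexity of a whitespace-tokenized text under the KenLM model."""
      words = text.split()
      if not words:
          return float("inf")
      log10_prob = model.score(" ".join(words), bos=True, eos=True)
      return 10.0 ** (-log10_prob / (len(words) + 1))  # +1 for the end-of-sentence token

  def keep(text: str, max_perplexity: float = 1500.0) -> bool:
      """Discard documents whose perplexity suggests heavy OCR noise."""
      return perplexity(text) <= max_perplexity
  ```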
* <u>Citations</u>:
  * Ilias Chalkidis, Emmanouil Fergadiotis, Prodromos Malakasiotis, Nikolaos Aletras, and Ion Androutsopoulos (2019). "[Extreme Multi-Label Legal Text Classification: A Case Study in EU Legislation](https://arxiv.org/pdf/1905.10892)," Proceedings of the Natural Legal Language Processing Workshop 2019, pages 78–87, Minneapolis, Minnesota. Association for Computational Linguistics.
  * Ilias Chalkidis, Manos Fergadiotis, Prodromos Malakasiotis and Ion Androutsopoulos (2019). "[Large-Scale Multi-Label Text Classification on EU Legislation](https://arxiv.org/pdf/1906.02192)," Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019), Florence, Italy, (short papers).
  * Andrei-Marius Avram, Vasile Pais, and Dan Ioan Tufis (2021). "[PyEuroVoc: A Tool for Multilingual Legal Document Classification with EuroVoc Descriptors](https://arxiv.org/pdf/2108.01139)," Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021), pages 92–101, Held Online. INCOMA Ltd.
  * Zein Shaheen, Gerhard Wohlgenannt and Erwin Filtz (2020). "Large scale legal text classification using transformer models," [arXiv:2010.12871](https://arxiv.org/abs/2010.12871v1).

#### FineWebEdu
* <u>Source</u>: [HuggingFaceFW/fineweb-edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu). License: [ODC-BY](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu).
* <u>Extracted from</u>: [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb). License: [ODC-BY](https://huggingface.co/datasets/HuggingFaceFW/fineweb).
* <u>Description</u>: A 1.3 trillion token selection from [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb), which contains 15 trillion tokens of curated data from 96 Common Crawl dumps. Content in FineWebEdu has been selected by a custom-designed classifier for its high-quality, educational content. Most recent crawl: 2024-10 (see the <a href="https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/blob/main/figures/fig_distribution_finewebedu-english_histogram.png">composition details</a> for information about the crawls included in this dataset).
* <u>Pre-processing</u>:
  * <u>Removing duplicate URLs</u>: URLs were removed if their base domain overlapped with a dataset already in the Lucie Training Dataset (e.g., "philpapers.org") in order to increase the diversity of content (see [code details](https://github.com/OpenLLM-France/Lucie-Training/blob/7f1f7efa1288f709662a9067bf2c3db856b850f8/tokenization/text.py#L843)).
  * <u>Filtering by robots.txt files</u>: Robots.txt files were collected, and all documents for which CCBot was disallowed, or for which no information could be collected as of July 2024, were removed, in an effort to select data free of opt-out evidence in accordance with Article 4 of the European copyright directive (2019). A rough sketch of this check is given below.
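  A minimal sketch of the robots.txt check, using only the Python standard library, is given below; the per-URL fetching and the error policy are simplified illustrations of the description above, not the exact code used for the dataset.
  ```python
  # Rough sketch: keep a URL only if the site's robots.txt could be read and
  # does not disallow CCBot for that URL.
  from urllib.parse import urlparse
  from urllib.robotparser import RobotFileParser

  def ccbot_allowed(url: str) -> bool:
      base = "{0.scheme}://{0.netloc}".format(urlparse(url))
      parser = RobotFileParser(base + "/robots.txt")
      try:
          parser.read()                  # fetches and parses robots.txt
      except Exception:
          return False                   # no information collected -> drop the document
      return parser.can_fetch("CCBot", url)

  print(ccbot_allowed("https://example.org/some/page.html"))
  ```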
* <u>Citation</u>: Guilherme Penedo, Hynek Kydlíček, Loubna Ben allal, Anton Lozhkov, Margaret Mitchell, Colin Raffel, Leandro Von Werra, Thomas Wolf (2024). "The FineWeb Datasets: Decanting the Web for the Finest Text Data at Scale," [arXiv:2406.17557](https://arxiv.org/abs/2406.17557).

#### GallicaMonographies
* <u>Source</u>: Corpus contributed by OpenLLM partners. A version is also published here: [PleIAs/French-PD-Books](https://huggingface.co/datasets/PleIAs/French-PD-Books). License: Public domain.
* <u>Extracted from</u>: [Gallicagram](https://shiny.ens-paris-saclay.fr/app/gallicagram).
* <u>Description</u>: A large collection of French monographies in the public domain made available through the French National Library ([Gallica](https://gallica.bnf.fr/accueil/fr/content/accueil-fr?mode=desktop)). Dataset containing text retrieved through OCR.
* <u>Pre-processing</u>:
  * <u>Text cleaning for v1.1</u>: To filter out documents with excessive OCR errors, the dataset was split into chunks, and a chunk was kept only if its language was detected as French by [FastText](https://github.com/facebookresearch/fastText) with a confidence score of 0.65 or above, and its perplexity, as measured using a CCNET model in French, was between 10 and 1000 (a rough sketch of this kind of chunk filter is given below). The code to compute CCNET perplexity, parallelizing on parquet files, is [available here](https://github.com/OpenLLM-France/Lucie-dataset-filtering).
  * <u>Filtering for v1.2</u>: Using OCR scores provided in the metadata of the source corpus, documents with an OCR score of less than 90 out of 100 were filtered out.
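  A rough sketch of the chunk-level filter follows. The language-identification model path is a placeholder, and `perplexity` stands for a CCNET-style scoring function (for example, the KenLM-based helper sketched for Eurovoc above); neither is the exact code used for the dataset.
  ```python
  # Rough sketch: keep a chunk only if FastText identifies it as French with
  # confidence >= 0.65 and its CCNet-style perplexity falls in [10, 1000].
  import fasttext

  lid_model = fasttext.load_model("lid.176.bin")   # placeholder path to a language-ID model

  def keep_chunk(chunk: str, perplexity) -> bool:
      labels, scores = lid_model.predict(chunk.replace("\n", " "))
      is_french = labels[0] == "__label__fr" and scores[0] >= 0.65
      ppl = perplexity(chunk)                      # e.g. the KenLM-based helper above
      return is_french and 10 <= ppl <= 1000
  ```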
 
#### GallicaPress
* <u>Source</u>: Corpus contributed by OpenLLM partners. A version is also published here: [PleIAs/French-PD-Newspapers](https://huggingface.co/datasets/PleIAs/French-PD-Newspapers). License: Public domain.
* <u>Extracted from</u>: [Gallicagram](https://shiny.ens-paris-saclay.fr/app/gallicagram).
* <u>Description</u>: A large collection of French newspapers and periodicals in the public domain made available through the French National Library ([Gallica](https://gallica.bnf.fr/accueil/fr/content/accueil-fr?mode=desktop)). Dataset containing text retrieved through OCR.
* <u>Pre-processing</u>:
  * <u>Text cleaning for v1.1</u>: To filter out documents with excessive OCR errors, the dataset was split into chunks, and a chunk was kept only if its language was detected as French by [FastText](https://github.com/facebookresearch/fastText) with a confidence score of 0.65 or above, and its perplexity, as measured using a CCNET model in French, was between 10 and 1000 (see [code details](https://github.com/OpenLLM-France/Lucie-Training/blob/7f1f7efa1288f709662a9067bf2c3db856b850f8/tokenization/data.py#L1840)). The code to compute CCNET perplexity, parallelizing on parquet files, is [available here](https://github.com/OpenLLM-France/Lucie-dataset-filtering).
  * <u>Filtering for v1.2</u>: Using OCR scores provided in the metadata of the source corpus, documents with an OCR score of less than 90 out of 100 were filtered out.

#### Gutenberg
* <u>Source</u>: Corpus compiled by OpenLLM partners.
* <u>Extracted from</u>:
  * [aleph.gutenberg.org](http://aleph.gutenberg.org/) via [Project Gutenberg](https://www.gutenberg.org/). License: [Open](https://www.gutenberg.org/policy/terms_of_use.html).
  * [pgcorpus](https://github.com/pgcorpus/gutenberg). License: [CC BY-4.0](https://zenodo.org/records/2422561).
* <u>Description</u>: A collection of free eBooks, manually prepared by human annotators.
* <u>Pre-processing</u>:
  * <u>Filtering</u>: The dataset was filtered based on the author's date of death, so that only texts from authors who died more than 70 years ago are included (80 years for French authors). See [code details here](https://github.com/OpenLLM-France/Lucie-Training/blob/7f1f7efa1288f709662a9067bf2c3db856b850f8/tokenization/data.py#L1136) and the sketch below. This filtering was done to ensure that the texts are in the public domain.
  * <u>Text cleaning</u>: Headers and footers containing information about Project Gutenberg were removed (see [code details](https://github.com/OpenLLM-France/Lucie-Training/blob/cdec8fd6369385455829ab39c2f04bcb1a8a475a/tokenization/text.py#L93)).
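  A minimal sketch of the death-date rule is given below; the metadata keys (`author_death_year`, `language`) are illustrative assumptions, not the actual field names used in the pipeline.
  ```python
  # Rough sketch: keep a book only if its author died long enough ago for the
  # text to be in the public domain (70 years, or 80 years for French authors).
  from datetime import date

  def in_public_domain(record: dict, current_year: int = date.today().year) -> bool:
      death_year = record.get("author_death_year")    # illustrative metadata key
      if death_year is None:
          return False                                # unknown death date -> drop
      delay = 80 if record.get("language") == "fr" else 70
      return current_year - death_year > delay

  print(in_public_domain({"author_death_year": 1900, "language": "fr"}))   # True
  ```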
 
#### HAL
* <u>Source</u>: [bigscience-data/roots_fr_hal_archives_ouvertes](https://huggingface.co/datasets/bigscience-data/roots_fr_hal_archives_ouvertes). License: Roots dataset.
* <u>Extracted from</u>: [HAL](https://hal.science/) ([Open access](https://about.hal.science/)).
* <u>Description</u>: A collection of scientific papers and manuscripts distributed through the open science platform HAL. Dataset containing text retrieved through OCR.
* <u>Pre-processing</u>:
  * <u>Filtering</u>: To filter out documents with excessive OCR errors, the dataset was refined by discarding texts with a perplexity higher than 930, measured using a CCNET model in French (see [code details](https://github.com/OpenLLM-France/Lucie-Training/blob/7f1f7efa1288f709662a9067bf2c3db856b850f8/tokenization/data.py#L1929)). The code to compute CCNET perplexity, parallelizing on parquet files, is [available here](https://github.com/OpenLLM-France/Lucie-dataset-filtering).
* <u>Citation</u>: Hugo Laurençon, Lucile Saulnier, Thomas Wang, Christopher Akiki, Albert Villanova del Moral, Teven Le Scao, Leandro Von Werra, Chenghao Mou, Eduardo González Ponferrada, Huu Nguyen, Jörg Frohberg, Mario Šaško, Quentin Lhoest, Angelina McMillan-Major, Gerard Dupont, Stella Biderman, Anna Rogers, Loubna Ben allal, Francesco De Toni, Giada Pistilli, Olivier Nguyen, Somaieh Nikpoor, Maraim Masoud, Pierre Colombo, Javier de la Rosa, Paulo Villegas, Tristan Thrush, Shayne Longpre, Sebastian Nagel, Leon Weber, Manuel Muñoz, Jian Zhu, Daniel Van Strien, Zaid Alyafeai, Khalid Almubarak, Minh Chien Vu, Itziar Gonzalez-Dios, Aitor Soroa, Kyle Lo, Manan Dey, Pedro Ortiz Suarez, Aaron Gokaslan, Shamik Bose, David Adelani, Long Phan, Hieu Tran, Ian Yu, Suhas Pai, Jenny Chim, Violette Lepercq, Suzana Ilic, Margaret Mitchell, Sasha Alexandra Luccioni, Yacine Jernite (2022). "[The BigScience ROOTS Corpus: A 1.6TB Composite Multilingual Dataset](https://proceedings.neurips.cc/paper_files/paper/2022/hash/ce9e92e3de2372a4b93353eb7f3dc0bd-Abstract-Datasets_and_Benchmarks.html)," Advances in Neural Information Processing Systems (NeurIPS), 35, 31809-31826.

#### InterventionsParlement
* <u>Source</u>: Corpus contributed by OpenLLM partners.
* <u>Extracted from</u>: [Regards citoyens](https://www.regardscitoyens.org/#&panel1-4). License: [CC BY-SA](https://www.regardscitoyens.org/mentions-legales/).
* <u>Description</u>: Transcripts of speeches made during French parliamentary debates.
<!-- * <u>Citation</u>: No paper found. -->


#### LEGI
* <u>Source</u>: Corpus contributed by OpenLLM partners. A version is also published here: [Nicolas-BZRD/DILA_OPENDATA_FR_2023](https://huggingface.co/datasets/Nicolas-BZRD/DILA_OPENDATA_FR_2023/tree/main).
* <u>Extracted from</u>: [OpenData](https://echanges.dila.gouv.fr/OPENDATA/) (Data collection date: October, 2023).
* <u>Description</u>: "The French Government Open Data (DILA) Dataset is a collection of text data extracted from various sources provided by the French government, specifically the Direction de l'information légale et administrative (DILA). This dataset contains a wide range of legal, administrative, and legislative documents. The data has been organized into several categories for easy access and analysis" (from the [dataset card](https://huggingface.co/datasets/Nicolas-BZRD/DILA_OPENDATA_FR_2023/tree/main)).

#### MathPile (Commercial)
* <u>Source</u>: [GAIR/MathPile_Commercial](https://huggingface.co/datasets/GAIR/MathPile_Commercial). License: [CC BY-SA 4.0](https://huggingface.co/datasets/GAIR/MathPile_Commercial).
* <u>Extracted from</u>: [MathPile](https://huggingface.co/datasets/GAIR/MathPile). License: [CC BY-SA-NC 4.0](https://huggingface.co/datasets/GAIR/MathPile).
* <u>Description</u>: A preprocessed collection of documents focused on math, including textbooks, arXiv, Wikipedia, ProofWiki, StackExchange, and web pages from Common Crawl. The content targets a range of levels, from kindergarten through postgraduate level. MathPile_Commercial was obtained by removing documents from MathPile that do not allow commercial use.
* <u>Pre-processing</u>:
  * <u>Formatting</u>: The content of StackExchange questions and answers was converted to match the {"text": value} format, using the following formula:
  ```python
  text = sample["question"]["Body"] + "\n\n".join([answer["Body"] for answer in sample["answers"]])
  ```
* <u>Citation</u>: Zengzhi Wang, Rui Xia and Pengfei Liu (2023). "Generative AI for Math: Part I -- MathPile: A Billion-Token-Scale Pretraining Corpus for Math," [arXiv:2312.17120](https://export.arxiv.org/abs/2312.17120).

#### OpenData
* <u>Source</u>: [Nicolas-BZRD/DILA_OPENDATA_FR_2023](https://huggingface.co/datasets/Nicolas-BZRD/DILA_OPENDATA_FR_2023/tree/main) (balo, dole, inca, kali, and sarde subsets). License: [ODC-BY](https://huggingface.co/datasets/Nicolas-BZRD/DILA_OPENDATA_FR_2023/tree/main).
* <u>Extracted from</u>: [OpenData](https://echanges.dila.gouv.fr/OPENDATA/) (Data collection date: October, 2023).
* <u>Description</u>: "The French Government Open Data (DILA) Dataset is a collection of text data extracted from various sources provided by the French government, specifically the Direction de l'information légale et administrative (DILA). This dataset contains a wide range of legal, administrative, and legislative documents. The data has been organized into several categories for easy access and analysis" (from the [dataset card](https://huggingface.co/datasets/Nicolas-BZRD/DILA_OPENDATA_FR_2023/tree/main)).
<!-- * <u>Citation</u>: No paper found. -->


#### OpenEdition
* <u>Source</u>: Corpus contributed by OpenLLM partners.
* <u>Extracted from</u>: [Open Edition](https://www.openedition.org/). License: [Open Edition Books](https://www.openedition.org/12554).
* <u>Description</u>: A collection of scientific books, journal articles, blog entries and event descriptions.
<!-- * <u>Citation</u>: No paper found. -->

#### PeS2o (v2)
* <u>Source</u>: [allenai/peS2o](https://huggingface.co/datasets/allenai/peS2o) version [v2](https://huggingface.co/datasets/allenai/peS2o/tree/main/data/v2). License: [ODC BY-v1.0](https://github.com/allenai/s2orc/).
* <u>Extracted from</u>: [S2ORC](https://github.com/allenai/s2orc) (see [aclanthology](https://aclanthology.org/2020.acl-main.447/)). License: [ODC BY-v1.0](https://github.com/allenai/s2orc/).
* <u>Description</u>: A preprocessed collection of academic papers designed for pre-training of language models. PeS2o is composed of two subsets: one containing full papers and one containing only paper titles and abstracts. Dataset containing (some) text retrieved through OCR. Knowledge cutoff: 2023-01-03.
* <u>Citation</u>: Luca Soldaini and Kyle Lo (2023). "peS2o (Pretraining Efficiently on S2ORC) Dataset," Allen Institute for AI. [GitHub](https://github.com/allenai/pes2o).

#### Pile (Uncopyrighted)
* <u>Source</u>: [monology/pile-uncopyrighted](https://huggingface.co/datasets/monology/pile-uncopyrighted). License: [Other](https://huggingface.co/datasets/monology/pile-uncopyrighted).
* <u>Extracted from</u>: [FreeLaw](https://free.law/), [StackExchange](https://stackexchange.com/), [USPTO Backgrounds](https://bulkdata.uspto.gov/), [DM Mathematics](https://github.com/google-deepmind/mathematics_dataset), [Ubuntu IRC](https://irclogs.ubuntu.com/), [PhilPapers](https://philpapers.org/), NIH ExPorter from [The Pile](https://huggingface.co/datasets/EleutherAI/pile). License: [MIT](https://arxiv.org/pdf/2201.07311).
* <u>Description</u> (from the [Datasheet](https://arxiv.org/abs/2201.07311)):
  * FreeLaw: "The Free Law Project is US registered non-profit that provide access to millions of legal opinions and analytical tools for academic studies in the legal realm."
  * StackExchange: "The StackExchange dataset is a dump of anonymized user-contributed content on the Stack Exchange network, a popular collection of websites centered around user-contributed questions and answers."

  * Ubuntu IRC: "The Ubuntu IRC dataset is derived from the publicly available chatlogs of all Ubuntu-related channels on the Freenode IRC chat server."
  * PhilPapers: a dataset of open access philosophy publications from an international database maintained by the Center for Digital Philosophy at the University of Western Ontario.
  * NIH ExPORTER: "The NIH Grant abstracts provides a bulk-data repository for awarded applications through the ExPORTER4 service covering the fiscal years 1985-present."
* <u>Pre-processing (v1.2 only)</u>:
  * <u>Filtering of PhilPapers</u>: Papers were removed if their language, detected using [Stanza](https://github.com/stanfordnlp/stanza), was not classified as English, French, German, Spanish or Italian (a rough sketch of this language filter is given below).
  * <u>Filtering and text cleaning of Ubuntu IRC</u>: Texts from some channels were excluded to avoid data in languages other than English, French, German, Spanish or Italian, and certain encoding errors were fixed (see [code details here](https://github.com/OpenLLM-France/Lucie-Training/blob/cdec8fd6369385455829ab39c2f04bcb1a8a475a/tokenization/text.py#L190)).
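  The language filter can be sketched as follows with Stanza's multilingual language-identification pipeline; this is a simplified illustration, not the exact code used for the dataset.
  ```python
  # Rough sketch: keep a paper only if Stanza identifies its language as one of
  # the five languages covered by the Lucie Training Dataset.
  import stanza

  # stanza.download(lang="multilingual")  # run once to fetch the langid model
  nlp = stanza.Pipeline(lang="multilingual", processors="langid")
  KEPT_LANGUAGES = {"en", "fr", "de", "es", "it"}

  def keep_paper(text: str) -> bool:
      doc = stanza.Document([], text=text)
      nlp([doc])                # the langid processor sets doc.lang
      return doc.lang in KEPT_LANGUAGES

  print(keep_paper("La philosophie est l'étude de questions fondamentales."))  # True
  ```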
* <u>Citations</u>:
  * Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, Connor Leahy (2020). "The Pile: An 800GB Dataset of Diverse Text for Language Modeling," [arXiv:2101.00027](https://arxiv.org/abs/2101.00027).
  * Stella Biderman, Kieran Bicheno, Leo Gao (2022). "Datasheet for the Pile," [arXiv:2201.07311](https://arxiv.org/abs/2201.07311).

#### QuestionsEcritesParlement
* <u>Source</u>: Corpus contributed by OpenLLM partners.
* <u>Extracted from</u>: [Regards citoyens](https://www.regardscitoyens.org/#&panel1-4). License: [CC BY-SA](https://www.regardscitoyens.org/mentions-legales/).
* <u>Description</u>: Collection of long written questions, read during a session at the French National Assembly. Questions are asked by a member of the French parliament and addressed to a minister (who is given two months to respond).
<!-- * <u>Citation</u>: No paper found. -->

#### RedPajama (v2)
* <u>Source</u>: [togethercomputer/RedPajama-Data-V2](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2). License: [Apache 2.0](https://github.com/togethercomputer/RedPajama-Data) (data preparation code), Not specified (data) but see [Common Crawl terms of use](https://commoncrawl.org/terms-of-use).
* <u>Extracted from</u>: [Common Crawl](https://commoncrawl.org/).
* <u>Description</u>: "RedPajama-V2 is an open dataset for training large language models. The dataset includes over 100B text documents coming from 84 CommonCrawl snapshots and processed using the [CCNet](https://github.com/facebookresearch/cc_net) pipeline. Out of these, there are 30B documents in the corpus that additionally come with quality signals, and 20B documents that are deduplicated" (from [GitHub](https://github.com/togethercomputer/RedPajama-Data)). Most recent crawl for French data in the Lucie Training Dataset v1.1: 2023-14. (For more details on the time periods covered by crawls in this dataset see the composition details for <a href="https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/blob/main/figures/fig_distribution_redpajama-french_histogram.png">French</a>, <a href="https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/blob/main/figures/fig_distribution_redpajama-german_histogram.png">German</a>, <a href="https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/blob/main/figures/fig_distribution_redpajama-italian_histogram.png">Italian</a> and <a href="https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/blob/main/figures/fig_distribution_redpajama-spanish_histogram.png">Spanish</a>.)
* <u>Pre-processing and deduplication</u>:
  * <u>URL filtering</u>:
    * <u>Removing duplicate URLs</u>: URLs were removed if their base domain overlapped with a dataset already in the Lucie Training Dataset (e.g., "theses.fr") in order to increase the diversity of content (see [code details](https://github.com/OpenLLM-France/Lucie-Training/blob/7f1f7efa1288f709662a9067bf2c3db856b850f8/webdata_processing/base.py#L154)).
    * <u>Filtering certain toxic content</u>: URLs from a list of blacklisted content were removed (see [code details](https://github.com/OpenLLM-France/Lucie-Training/blob/7f1f7efa1288f709662a9067bf2c3db856b850f8/webdata_processing/base.py#L177)).
    * <u>Filtering by robots.txt files</u>: Robots.txt files were collected, and all documents for which CCBot was disallowed, or for which no information could be collected as of July 2024, were removed, in an effort to select data free of opt-out evidence in accordance with Article 4 of the European copyright directive (2019).
  * <u>Filtering</u>: A series of filters were applied using [quality signals](https://github.com/togethercomputer/RedPajama-Data?tab=readme-ov-file#quality-annotations) already available in the dataset. This includes (see [code details](https://github.com/OpenLLM-France/Lucie-Training/blob/d9cccb7bfac37b8c8285f9c04aa67d907ce475f0/webdata_processing/base.py#L36)):
    * CCNet perplexity below 10 or above 1000
    * C4 filtering (including removal of documents that contain toxic words)
    * Gopher filtering and repetition removal
    * RedPajama document deduplication
  * <u>Removal of personally identifying information (PII)</u>: Email addresses and IP addresses were replaced with random addresses (see [code details](https://github.com/OpenLLM-France/Lucie-Training/blob/7f1f7efa1288f709662a9067bf2c3db856b850f8/webdata_processing/base.py#L301), and the sketch below for the general idea).
  * <u>MinHash deduplication</u> was performed on each snapshot and language independently, as proposed in FineWeb. For the MinHash configuration, [see code details](https://github.com/OpenLLM-France/Lucie-Training/blob/7f1f7efa1288f709662a9067bf2c3db856b850f8/webdata_processing/minhash.py#L63).

  The [Datatrove](https://github.com/huggingface/datatrove) library was used to perform both the filtering and deduplication stages.
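  A rough sketch of the PII replacement step is given below; the regular expressions and the random replacement scheme are simplified placeholders, not the exact ones used in the pipeline.
  ```python
  # Rough sketch: replace email and IPv4 addresses with randomly generated ones.
  import random
  import re

  EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
  IPV4_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

  def random_email(_match: re.Match) -> str:
      return f"user{random.randint(0, 999999)}@example.com"

  def random_ip(_match: re.Match) -> str:
      return ".".join(str(random.randint(0, 255)) for _ in range(4))

  def anonymize(text: str) -> str:
      text = EMAIL_RE.sub(random_email, text)
      return IPV4_RE.sub(random_ip, text)

  print(anonymize("Contact jane.doe@mail.org from 192.168.0.12"))
  ```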
* <u>Citation</u>: Together Computer (2023). "RedPajama-Data-v2: an Open Dataset with 30 Trillion Tokens for Training Large Language Models," [GitHub](https://github.com/togethercomputer/RedPajama-Data).

#### STAC
* <u>Source</u>: [STAC](https://www.irit.fr/STAC/corpus.html). License: [CC BY-SA-NC 4.0](https://www.irit.fr/STAC/corpus.html).
* <u>Description</u>: A collection of multiparty chats from an online version of the game Settlers of Catan. The full STAC corpus contains annotations for discourse structure. We use only the text of the chats.
* <u>Citation</u>: Nicholas Asher, Julie Hunter, Mathieu Morey, Farah Benamara and Stergos Afantenos (2016). "[Discourse structure and dialogue acts in multiparty dialogue: the STAC corpus](https://hal.science/hal-02124399/file/asher_22646.pdf)," The Tenth International Conference on Language Resources and Evaluation (LREC 2016). European Language Resources Association, pp. 2721-2727.

#### TheStack (v1.2)
* <u>Source</u>: [bigcode/the-stack-dedup](https://huggingface.co/datasets/bigcode/the-stack-dedup). License: [Other](https://huggingface.co/datasets/bigcode/the-stack-dedup) (mixture of copyleft licenses).
* <u>Extracted from</u>: [GitHub](https://github.com/) via [GHarchive](https://www.gharchive.org/). Mixed licenses for source.
* <u>Description</u>: "The Stack contains over 6TB of permissively-licensed source code files covering 358 programming languages. The dataset was created as part of the [BigCode Project](https://www.bigcode-project.org/), an open scientific collaboration working on the responsible development of Large Language Models for Code (Code LLMs). The Stack serves as a pre-training dataset for Code LLMs, i.e., code-generating AI systems which enable the synthesis of programs from natural language descriptions as well as from other code snippets. This is the near-deduplicated version with 3TB data" (from the [dataset card](https://huggingface.co/datasets/bigcode/the-stack-dedup)).
* <u>Citation</u>: Denis Kocetkov, Raymond Li, Loubna Ben Allal, Jia Li, Chenghao Mou, Carlos Muñoz Ferrandis, Yacine Jernite, Margaret Mitchell, Sean Hughes, Thomas Wolf, Dzmitry Bahdanau, Leandro von Werra and Harm de Vries (2022). "The Stack: 3 TB of permissively licensed source code," [arXiv:2211.15533](https://arxiv.org/abs/2211.15533).

#### Theses
* <u>Source</u>: Corpus contributed by OpenLLM partners.
* <u>Extracted from</u>: [theses.fr](https://theses.fr/?domaine=theses) (License: [Licence Ouverte / Open Licence version 2.0](https://www.data.gouv.fr/fr/datasets/theses-soutenues-en-france-depuis-1985/)) and [HAL](https://hal.science/) ([Open access](https://about.hal.science/)).
* <u>Description</u>: A collection of doctoral theses published in France. Dataset containing text retrieved through OCR.
* <u>Pre-processing</u>:
  * <u>Text cleaning</u>:
    * Title pages about HAL, pages containing a significant fraction of control characters, and duplicate lines were removed (see [code details](https://github.com/OpenLLM-France/Lucie-Training/blob/cdec8fd6369385455829ab39c2f04bcb1a8a475a/tokenization/text.py#L277), and the sketch below for the general idea).
    * Because OCR output for tables and graphics can give rise to garbage text, the text was cleaned by removing the most suspicious chunks. In particular, a chunk was removed if it was not detected as being written in French, English, Spanish, German or Italian, or if the perplexity of a CCNET language model on the chunk was higher than 2000 (see [code details](https://github.com/OpenLLM-France/Lucie-Training/blob/7f1f7efa1288f709662a9067bf2c3db856b850f8/tokenization/data.py#L1946)). The code to compute CCNET perplexity, parallelizing on parquet files, is [available here](https://github.com/OpenLLM-France/Lucie-dataset-filtering).
  * <u>Filtering</u>: Texts with fewer than 1000 words or 10000 characters were removed (see [code details](https://github.com/OpenLLM-France/Lucie-Training/blob/7f1f7efa1288f709662a9067bf2c3db856b850f8/tokenization/data.py#L1975)).
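  A rough sketch of the page-level cleaning is given below; the control-character threshold and the assumption that pages are separated by form feeds are illustrative, not the exact conventions used in the pipeline.
  ```python
  # Rough sketch: drop pages with too many control characters and remove
  # consecutive duplicate lines. Thresholds here are illustrative.
  import unicodedata

  def control_char_fraction(page: str) -> float:
      if not page:
          return 0.0
      n_control = sum(1 for c in page if unicodedata.category(c) == "Cc" and c not in "\n\t")
      return n_control / len(page)

  def clean_document(text: str, max_control_fraction: float = 0.05) -> str:
      kept_pages = []
      for page in text.split("\f"):                 # assumes pages separated by form feeds
          if control_char_fraction(page) > max_control_fraction:
              continue                              # drop pages that look like OCR garbage
          lines, previous = [], None
          for line in page.splitlines():
              if line != previous:                  # drop consecutive duplicate lines
                  lines.append(line)
              previous = line
          kept_pages.append("\n".join(lines))
      return "\f".join(kept_pages)
  ```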
<!-- * <u>Citation</u>: No paper found. -->

#### Wikipedia, Wikisource, Wiktionary

  * [OpenLLM-France/wiktionary](https://huggingface.co/datasets/OpenLLM-France/wiktionary)
* <u>Extracted from</u>: [Wikimedia dumps](https://dumps.wikimedia.org/other/enterprise_html/runs/). License: [GFDL/CC BY-SA](https://dumps.wikimedia.org/legal.html).
<!-- * <u>Description</u>: TODO -->
<!-- * <u>Pre-processing</u>: TODO -->
<!-- * <u>Citation</u>: No paper found. -->

#### YouTube
* <u>Source</u>: Corpus contributed by LINAGORA Labs (OpenLLM-France) and [LeVoiceLab](https://www.levoicelab.org/).
* <u>Extracted from</u>: [YouTube](https://www.youtube.com/). <!-- License: TODO? -->
* <u>Description</u>: French subtitles from videos published with permissive licenses on YouTube. <!-- TODO -->

## Example use in Python

### Load the dataset

```python
from datasets import load_dataset

kwargs = dict(streaming=True)  # example keyword arguments; streaming avoids downloading the full dataset
name = None                    # None loads the default configuration; pass a subset name to restrict the data

dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", name, revision="v1.2", **kwargs)
```

## Citation

TODO

## Acknowledgements

The Lucie Training Dataset was created by members of LINAGORA and the OpenLLM-France community, including in alphabetical order: Evan Dufraisse (CEA), Olivier Gouvert (LINAGORA), Julie Hunter (LINAGORA), Pierre-Carl Langlais (OpSci/Pleias), Jean-Pierre Lorré (LINAGORA), Jérôme Louradour (LINAGORA), Michel-Marie Maudet (LINAGORA), Laura Rivière (LINAGORA), and Anastasia Stasenko (OpSci/Pleias).

We thank Rachel Bawden (INRIA), Clément Bénesse (Opsci), Christophe Cérisara (LORIA), Olivier Ferret (CEA), Joël Gombin (Opsci), Ismaïl Harrando (LINAGORA), Jordan Ricker (Opsci), Guokan Shang (MBZUAI), and Yaya Sy (LORIA) for their helpful input.

Data storage and significant parts of the data processing were made possible through the HPC resources from GENCI–IDRIS (Grant 2024-GC011015444).

## Contact