Jeronymous committed · commit 419a82a (verified) · 1 parent: b07ea6e

Update README.md

Files changed (1):
  1. README.md (+181 −100)
README.md CHANGED
@@ -456,7 +456,7 @@ configs:
456
  split: train
457
  ---
458
 
459
- # Dataset Card
460
 
461
  The Lucie Training Dataset is a curated collection of text data
462
  in English, French, German, Spanish and Italian culled from a variety of sources including: web data, video subtitles, academic papers,
@@ -475,70 +475,71 @@ Table of Contents:
475
  <tr>
476
  <td style="vertical-align: top;">
477
  <ul>
478
- <li><a href="#category-web">Web</a></li>
479
- <li><a href="#category-newspaper">Newspaper</a></li>
480
- <li><a href="#category-technical">Technical</a></li>
481
- <li><a href="#category-book">Book</a></li>
482
  </ul>
483
  </td>
484
  <td style="vertical-align: top;">
485
  <ul>
486
- <li><a href="#category-legislative-texts">Legislative Texts</a></li>
487
- <li><a href="#category-legislative-transcripts">Legislative Transcripts</a></li>
488
- <li><a href="#category-wiki">Wiki</a></li>
489
- <li><a href="#category-math">Math</a></li>
490
  </ul>
491
  </td>
492
  <td style="vertical-align: top;">
493
  <ul>
494
- <li><a href="#category-forum">Forum</a></li>
495
- <li><a href="#category-dialogue">Dialogue</a></li>
496
  <li><a href="#category-multilingual-parallel-corpora">Multilingual Parallel Corpora</a></li>
497
- <li><a href="#category-programming">Programming</a></li>
498
  </ul>
499
  </td>
500
  </tr>
501
  </table>
502
  </li>
503
- <li><a href="#details-on-data-sources">Details on Data Sources</a>
 
504
  <table>
505
  <tr>
506
  <td style="vertical-align: top;">
507
  <ul>
508
- <li><a href="#amendementsparlement">AmendementsParlement</a></li>
509
- <li><a href="#americanstories">AmericanStories</a></li>
510
- <li><a href="#claire-french-and-english">Claire (French and English)</a></li>
511
- <li><a href="#croissantaligned">CroissantAligned</a></li>
512
- <li><a href="#discourspublics">DiscoursPublics</a></li>
513
- <li><a href="#europarl-monolingual-and-parallel">Europarl (monolingual and parallel)</a></li>
514
- <li><a href="#eurovoc">Eurovoc</a></li>
515
- <li><a href="#finewebedu">FineWebEdu</a></li>
516
- <li><a href="#gallicamonographies">GallicaMonographies</a></li>
517
  </ul>
518
  </td>
519
  <td style="vertical-align: top;">
520
  <ul>
521
- <li><a href="#gallicapress">GallicaPress</a></li>
522
- <li><a href="#gutenberg">Gutenberg</a></li>
523
- <li><a href="#hal">HAL</a></li>
524
- <li><a href="#interventionsparlement">InterventionsParlement</a></li>
525
- <li><a href="#legi">LEGI</a></li>
526
- <li><a href="#mathpile">MathPile</a></li>
527
- <li><a href="#opendata">OpenData</a></li>
528
- <li><a href="#openedition">OpenEdition</a></li>
529
- <li><a href="#pes2o">PeS2o</a></li>
530
  </ul>
531
  </td>
532
  <td style="vertical-align: top;">
533
  <ul>
534
- <li><a href="#pile-uncopyrighted">Pile (Uncopyrighted)</a></li>
535
- <li><a href="#questionsecritesparlement">QuestionsEcritesParlement</a></li>
536
- <li><a href="#redpajama-v2">RedPajama (v2)</a></li>
537
- <li><a href="#stac">Stac</a></li>
538
- <li><a href="#thestack">TheStack</a></li>
539
- <li><a href="#theses">Theses</a></li>
540
- <li><a href="#wikipedia-wikisource-wiktionary">Wikipedia, Wikisource, Wiktionary</a></li>
541
- <li><a href="#youtube">YouTube</a></li>
542
  </ul>
543
  </td>
544
  </tr>
@@ -546,9 +547,15 @@ Table of Contents:
546
  </li>
547
  </ul>
548
  </li>
549
- <li><a href="#example-use-in-python">Example use in python</a></li>
550
- <li><a href="#license">License</a></li>
 
 
 
 
 
551
  <li><a href="#citation">Citation</a></li>
 
552
  <li><a href="#contact">Contact</a></li>
553
  </ul>
554
 
@@ -573,31 +580,27 @@ This dataset was made to provide an extensive and diverse dataset for training L
573
 
574
  The corpus contains the following information for each text sample:
575
  * `text`: the text sample itself.
576
- * `source`: an identifier for the source(s) of the text sample (`Wikipedia`, `RedPajama`, `Gutenberg`, ).
577
- All sources are described in detail [in this document](#details-on-data-sources).
578
- * `id`: an identifier that is unique among the source.
579
- * `language`: the language of the text sample (relying on the source, that information can be wrong).
580
- <details ><summary>Possible values:</summary>
581
- an ISO 639-1 code of a natural language ("en", "fr", "de", "es", or "it"),
582
- a common name prefixed by "code:" of a programming language ("code:python", "code:c++", …) or
583
- a list of ISO 639-1 codes separated by commas when the text sample is multilingual and aligned ("fr,en", "de,fr", "es,en", "it,en",
584
  or one of those pairs in the opposite order if the languages appear in the opposite order in the text).
585
- </details >
586
- * `url` (optional): the URL of the original text sample on the web, if available.
587
- * `title` (optional): the title of the original text sample, if available.
588
- * `author` (optional): the author of the original text sample, if available.
589
- <details ><summary>Note:</summary>
590
- Usually the author name in plain text, except for `Gutenberg` where it is the JSON serialized object of the author metadata.
591
- </details >
592
- * `date` (optional): the publication date of the original text sample, if available.
593
- <details ><summary>Note:</summary>
 
594
  The text format of the date depends on the source.
595
- </details >
596
- * `quality_signals` (optional): a list of quality signals about the text sample, in JSON format (that could be used for further filtering or sample weighting).
597
- <details ><summary>Note:</summary>
598
  It can include indicators computed by `fasttext` and `CCNet`, statistics about occurrences of characters, words, special characters, etc.
599
- </details >
600
- * `extra` (optional): extra information about the text sample, in JSON format.
601
  This can include metadata about the source subset, the rights, etc.
602
 
603
  Examples of metadata (apart from `text`) are shown for each source in [metadata_examples.json](metadata/metadata_examples.json).
@@ -1004,7 +1007,7 @@ The Number of tokens was computed using the tokenizer of [Lucie-7B LLM](https://
1004
  <tr>
1005
  <td colspan="7"><h4 id="category-legislative-transcripts">Category: Legislative Transcripts</h4></td></tr>
1006
  <tr>
1007
- <td rowspan="4" style="vertical-align: top;"><a href="#europarl-monolingual-and-parallel"><strong>Europarl</strong></a></td>
1008
  <td><strong>German (de)</strong></td>
1009
  <td>0.0102</td>
1010
  <td>0.0451</td>
@@ -1212,7 +1215,7 @@ The Number of tokens was computed using the tokenizer of [Lucie-7B LLM](https://
1212
  <td></td>
1213
  </tr>
1214
  <tr>
1215
- <td rowspan="4" style="vertical-align: top;"><a href="#europarl-monolingual-and-parallel"><strong>EuroparlAligned</strong></a></td>
1216
  <td><strong>it-en</strong></td>
1217
  <td>1.901</td>
1218
  <td>0.100</td>
@@ -1523,21 +1526,47 @@ The Number of tokens was computed using the tokenizer of [Lucie-7B LLM](https://
1523
  </table>
1524
  <!-- TABLE END -->
1525
 
1526
  ### Details on Data Sources
1527
 
1528
  #### AmendementsParlement
1529
  * <u>Source</u>: Corpus contributed by OpenLLM partners.
1530
  * <u>Extracted from</u>: [Regards citoyens](https://www.regardscitoyens.org/#&panel1-4) ([nodeputes.fr](http://www.nosdeputes.fr/), [nossenateurs.fr](http://www.nossenateurs.fr/)). [API](https://github.com/regardscitoyens). License: [CC BY-SA](https://www.regardscitoyens.org/#&panel1-2).
1531
  * <u>Description</u>: A collection of proposed amendments by the French parliament: the legal text and description of the requested modification.
1532
- * <u>Citation</u>: No paper found.
1533
 
1534
  #### AmericanStories
1535
  * <u>Source</u>: [dell-research-harvard/AmericanStories](https://huggingface.co/datasets/dell-research-harvard/AmericanStories). License: [CC BY 4.0](https://huggingface.co/datasets/dell-research-harvard/AmericanStories).
1536
  * <u>Extracted from</u>: [Chronicling America](https://www.loc.gov/collections/chronicling-america/about-this-collection/). License: [Open](https://www.loc.gov/collections/chronicling-america/about-this-collection/rights-and-access/).
1537
  * <u>Description</u>: "The American Stories dataset is a collection of full article texts extracted from historical U.S. newspaper images. It includes nearly 20 million scans from the public domain Chronicling America collection maintained by the Library of Congress. The dataset is designed to address the challenges posed by complex layouts and low OCR quality in existing newspaper datasets" (from the [dataset card](https://huggingface.co/datasets/dell-research-harvard/AmericanStories)). Dataset containing text retrieved through OCR.
 
 
 
 
 
1538
  * <u>Citation</u>: Melissa Dell, Jacob Carlson, Tom Bryan, Emily Silcock, Abhishek Arora, Zejiang Shen, Luca D'Amico-Wong, Quan Le, Pablo Querubin and Leander Heldring (2023). "American Stories: A Large-Scale Structured Text Dataset of Historical U.S. Newspapers," [arxiv:2308.12477](https://arxiv.org/abs/2308.12477v1).
1539
 
1540
-
1541
  #### Claire (French and English)
1542
  * <u>Sources</u>:
1543
  * French dataset: [OpenLLM-France/Claire-Dialogue-French-0.1](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-French-0.1). License: [CC BY-NC-SA 4.0](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-French-0.1).
@@ -1553,19 +1582,31 @@ The Number of tokens was computed using the tokenizer of [Lucie-7B LLM](https://
1553
  * Thesis abstracts: French thesis abstract pairs. License: [ETALAB-Licence-Ouverte-v2.0](https://www.etalab.gouv.fr/wp-content/uploads/2017/04/ETALAB-Licence-Ouverte-v2.0.pdf).
1554
  * Song lyrics: [lacoccinelle](https://www.lacoccinelle.net). License: .
1555
  * <u>Description</u>: Data extracted from OPUS takes the form of sentence pairs, where one sentence is in French and the other is in English. OPUS pairs were passed through a custom pipeline designed to select the highest quality sentence pairs. Selected pairs are labeled "UnbabelFrEn" in the CroissantAligned dataset. The thesis abstract subset contains pairs of French or English thesis abstracts paired with translations written by the thesis author. The song lyrics are translated by contributors to www.lacoccinelle.net. Parallel data are used to boost the multilingual capabilities of models trained on them ([Faysse et al., 2024](https://arxiv.org/pdf/2402.00786)).
 
 
 
 
 
 
 
1556
  * <u>Citation</u>: Manuel Faysse, Patrick Fernandes, Nuno M. Guerreiro, António Loison, Duarte M. Alves, Caio Corro, Nicolas Boizard, João Alves, Ricardo Rei, Pedro H. Martins, Antoni Bigata Casademunt, François Yvon, André F.T. Martins, Gautier Viaud, Céline Hudelot, Pierre Colombo (2024). "CroissantLLM: A Truly Bilingual French-English Language Model," [arXiv:2402.00786](https://arxiv.org/abs/2402.00786).
1557
 
1558
  #### DiscoursPublics
1559
- * <u>Source</u>: Corpus contributed by OpenLLM partners.
1560
- * <u>Extracted from</u>: [Vie Publique](https://www.vie-publique.fr/collection-discours-publics).
1561
- * <u>Description</u>: A collection of public speeches from the principal public actors in France including speeches from the French President starting from 1974 and from the Prime Minister and members of the government starting from 1980.
1562
- * <u>Citation</u>: No paper found.
 
 
1563
 
1564
- #### Europarl (monolingual and parallel)
1565
  * <u>Sources</u>:
1566
  * `fr-en`, `es-en`, `it-en` parallel data: [Europarl v7](https://www.statmt.org/europarl/v7/). License: [Open](https://www.statmt.org/europarl/).
1567
  * `fr`, `en`, `de`, `es` monolingual data and `de-fr` parallel data: [Europarl v10](https://www.statmt.org/europarl/v10/training-monolingual/). License: [Open](https://www.statmt.org/europarl/).
1568
  * <u>Description</u>: "The Europarl parallel corpus is extracted from the proceedings of the European Parliament. It includes versions in 21 European languages: Romanic (French, Italian, Spanish, Portuguese, Romanian), Germanic (English, Dutch, German, Danish, Swedish), Slavik (Bulgarian, Czech, Polish, Slovak, Slovene), Finni-Ugric (Finnish, Hungarian, Estonian), Baltic (Latvian, Lithuanian), and Greek. The goal of the extraction and processing was to generate sentence aligned text for statistical machine translation systems" ([www.statmt.org](https://www.statmt.org/europarl/)).
 
 
 
1569
  * <u>Citation</u>: Philipp Koehn (2005). "Europarl: A Parallel Corpus for Statistical Machine Translation," MT Summit.
1570
 
1571
  #### Eurovoc
@@ -1588,21 +1629,24 @@ The Number of tokens was computed using the tokenizer of [Lucie-7B LLM](https://
1588
  * <u>Source</u>: Corpus contributed by OpenLLM partners. A version is also published here: [PleIAs/French-PD-Books](https://huggingface.co/datasets/PleIAs/French-PD-Books). License: None (public domain).
1589
  * <u>Extracted from</u>: [Gallicagram](https://shiny.ens-paris-saclay.fr/app/gallicagram).
1590
  * <u>Description</u>: A large collection of French monographies in the public domain made available through the French National Library ([Gallica](https://gallica.bnf.fr/accueil/fr/content/accueil-fr?mode=desktop)). Dataset containing text retrieved through OCR.
1591
- * <u>Citation</u>: No paper found.
1592
 
1593
  #### GallicaPress
1594
  * <u>Source</u>: Corpus contributed by OpenLLM partners. A version is also published here: [PleIAs/French-PD-Newspapers](https://huggingface.co/datasets/PleIAs/French-PD-Newspapers). License: None (public domain).
1595
  * <u>Extracted from</u>: [Gallicagram](https://shiny.ens-paris-saclay.fr/app/gallicagram).
1596
  * <u>Description</u>: A large collection of French newspapers and periodicals in the public domain made available through the French National Library ([Gallica](https://gallica.bnf.fr/accueil/fr/content/accueil-fr?mode=desktop)). Dataset containing text retrieved through OCR.
1597
- * <u>Citation</u>: No paper found.
1598
 
1599
  #### Gutenberg
1600
- * <u>Source</u>: Corpus compiled by OpenLLM partners.
1601
- * <u>Extracted from</u>:
1602
- * [aleph.gutenberg.org](http://aleph.gutenberg.org/) via [Project Gutenberg](https://www.gutenberg.org/). License: [Open](https://www.gutenberg.org/policy/terms_of_use.html).
1603
- * [pgcorpus](https://github.com/pgcorpus/gutenberg). License: [CC BY-4.0](https://zenodo.org/records/2422561).
1604
- * <u>Description</u>: A collection of free eBooks, manually prepared by human annotators.
1605
- * <u>Citation</u>: No paper found.
 
 
 
1606
 
1607
  #### HAL
1608
  * <u>Source</u>: The ROOTS corpus by BigScience (unpublished). License: CC BY-4.0.
@@ -1615,7 +1659,7 @@ The Number of tokens was computed using the tokenizer of [Lucie-7B LLM](https://
1615
  * <u>Source</u>: Corpus contributed by OpenLLM partners.
1616
  * <u>Extracted from</u>: [Regards citoyens](https://www.regardscitoyens.org/#&panel1-4) ([nodeputes.fr](http://www.nosdeputes.fr/), [nossenateurs.fr](http://www.nossenateurs.fr/)). [API](https://github.com/regardscitoyens). License: [CC BY-SA](https://www.regardscitoyens.org/#&panel1-2).
1617
  * <u>Description</u>: Transcripts of speeches made during French parliamentary debates.
1618
- * <u>Citation</u>: No paper found.
1619
 
1620
  #### MathPile
1621
  * <u>Source</u>: [GAIR/MathPile_Commercial](https://huggingface.co/datasets/GAIR/MathPile_Commercial). License: [CC BY-SA 4.0](https://huggingface.co/datasets/GAIR/MathPile_Commercial)
@@ -1627,13 +1671,13 @@ The Number of tokens was computed using the tokenizer of [Lucie-7B LLM](https://
1627
  * <u>Source</u>: [Nicolas-BZRD/DILA_OPENDATA_FR_2023](https://huggingface.co/datasets/Nicolas-BZRD/DILA_OPENDATA_FR_2023/tree/main) (balo, dole, inca, kali, legi and sarde subsets). License: [ODC-BY](https://huggingface.co/datasets/Nicolas-BZRD/DILA_OPENDATA_FR_2023/tree/main).
1628
  * <u>Extracted from</u>: [OpenData](https://echanges.dila.gouv.fr/OPENDATA/) (Data collection date: October, 2023).
1629
  * <u>Description</u>: "The French Government Open Data (DILA) Dataset is a collection of text data extracted from various sources provided by the French government, specifically the Direction de l'information légale et administrative (DILA). This dataset contains a wide range of legal, administrative, and legislative documents. The data has been organized into several categories for easy access and analysis" (from the [dataset card](https://huggingface.co/datasets/Nicolas-BZRD/DILA_OPENDATA_FR_2023/tree/main)).
1630
- * <u>Citation</u>: No paper found.
1631
 
1632
  #### OpenEdition
1633
  * <u>Source</u>: Corpus contributed by OpenLLM partners.
1634
  * <u>Extracted from</u>: [Open Edition](https://www.openedition.org/).
1635
- * <u>Description</u>:
1636
- * <u>Citation</u>: No paper found.
1637
 
1638
  #### PeS2o
1639
  * <u>Source</u>: [allenai/peS2o](https://huggingface.co/datasets/allenai/peS2o). License: [ODC BY-v1.0](https://opendatacommons.org/licenses/by/1-0/)
@@ -1652,7 +1696,7 @@ The Number of tokens was computed using the tokenizer of [Lucie-7B LLM](https://
1652
  * Ubuntu IRC: "The Ubuntu IRC dataset is derived from the publicly available chatlogs of all Ubuntu-related channels on the Freenode IRC chat server."
1653
  * PhilPapers: a dataset of open access philosophy publications from an international database maintained by the Center for Digital Philosophy at the University of Western Ontario.
1654
  * NIH ExPORTER: "The NIH Grant abstracts provides a bulk-data repository for awarded applications through the ExPORTER service covering the fiscal years 1985-present."
1655
- * <u>Citation</u>:
1656
  * Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, Connor Leahy (2020). "The Pile: An 800GB Dataset of Diverse Text for Language Modeling," [ arXiv:2101.00027](https://arxiv.org/abs/2101.00027).
1657
  * Stella Biderman, Kieran Bicheno, Leo Gao (2022). "Datasheet for the Pile," [ arXiv:2201.07311](https://arxiv.org/abs/2201.07311).
1658
 
@@ -1660,7 +1704,7 @@ The Number of tokens was computed using the tokenizer of [Lucie-7B LLM](https://
1660
  * <u>Source</u>: Corpus contributed by OpenLLM partners.
1661
  * <u>Extracted from</u>: [Regards citoyens](https://www.regardscitoyens.org/#&panel1-4) ([text](https://data.regardscitoyens.org/nosdeputes.fr/)). License: [CC BY-NC-SA](https://data.regardscitoyens.org/nosdeputes.fr/).
1662
  * <u>Description</u>: Collection of long written questions, read during a session at the French National Assembly. Questions are asked by a member of the French parliament and addressed to a minister (who is given two months to respond).
1663
- * <u>Citation</u>: No paper found.
1664
 
1665
  #### RedPajama (v2)
1666
  * <u>Source</u>: [togethercomputer/RedPajama-Data-V2](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2). License: [Apache 2.0](https://github.com/togethercomputer/RedPajama-Data) (data preparation code), Not specified (data) but see [Common Crawl terms of use](https://commoncrawl.org/terms-of-use).
@@ -1684,7 +1728,10 @@ The Number of tokens was computed using the tokenizer of [Lucie-7B LLM](https://
1684
  * <u>Source</u>: Corpus contributed by OpenLLM partners.
1685
  * <u>Extracted from</u>: [theses.fr](https://theses.fr/?domaine=theses) and [HAL](https://hal.science/).
1686
  * <u>Description</u>: A collection of doctoral theses published in France. Dataset containing text retrieved through OCR.
1687
- * <u>Citation</u>: No paper found.
 
 
 
1688
 
1689
  #### Wikipedia, Wikisource, Wiktionary
1690
  * <u>Source</u>: Corpus contributed by LINAGORA Labs (OpenLLM-France).
@@ -1693,40 +1740,49 @@ The Number of tokens was computed using the tokenizer of [Lucie-7B LLM](https://
1693
  * [OpenLLM-France/wikisource](https://huggingface.co/datasets/OpenLLM-France/wikisource)
1694
  * [OpenLLM-France/wiktionary](https://huggingface.co/datasets/OpenLLM-France/wiktionary)
1695
  * <u>Extracted from</u>: [Wikimedia dumps](https://dumps.wikimedia.org/other/enterprise_html/runs/). License: [GFDL/CC BY-SA](https://dumps.wikimedia.org/legal.html).
1696
- * <u>Description</u>:
1697
- * <u>Citation</u>: No paper found.
1698
 
1699
  #### YouTube
1700
  * <u>Source</u>: Corpus contributed by LINAGORA Labs (OpenLLM-France).
1701
- * <u>Extracted from</u>: [YouTube](https://www.youtube.com/). License: .
1702
- * <u>Description</u>: French subtitles from videos published with permissive licenses on YouTube.
1703
- * <u>Citation</u>: No paper found.
1704
 
1705
  ## Example use in python
1706
 
1707
- Load the dataset using the `datasets` library:
 
 
1708
  ```python
1709
  from datasets import load_dataset
1710
 
1711
- kwargs = {"split": "train", "streaming": True}
1712
-
1713
- dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", **kwargs)
1714
 
1715
  for sample in dataset:
 
1716
  text = sample["text"]
1717
- # ... do something with the text
 
1718
  ```
1719
 
 
 
1720
  Several configurations are available to select a language, a source, or both, illustrated in the following examples.
1721
 
1722
  Load data in French:
1723
  ```python
 
 
 
 
1724
  dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "fr", **kwargs)
1725
  ```
1726
  Load data where French and English are aligned:
1727
  ```python
1728
  dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "fr,en", **kwargs)
1729
  ```
 
1730
  Load data corresponding to files with programming languages:
1731
  ```python
1732
  dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "code", **kwargs)
@@ -1735,7 +1791,8 @@ Load data in Python:
1735
  ```python
1736
  dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "code-python", **kwargs)
1737
  ```
1738
- Load data from Wikipedia (in available languages):
 
1739
  ```python
1740
  dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "Wikipedia", **kwargs)
1741
  ```
@@ -1744,14 +1801,38 @@ Load data from French pages of Wikipedia ([wikipedia.fr](https://www.wikipedia.f
1744
  dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "Wikipedia-fr", **kwargs)
1745
  ```
1746
 
1747
- ## License
 
1748
 
1749
- TODO
1750
 
1751
  ## Citation
1752
 
1753
  TODO
1754
 
 
 
 
 
1755
  ## Contact
1756
 
1757
  <pre>[email protected]</pre>
 
456
  split: train
457
  ---
458
 
459
+ # Lucie Training Dataset Card
460
 
461
  The Lucie Training Dataset is a curated collection of text data
462
  in English, French, German, Spanish and Italian culled from a variety of sources including: web data, video subtitles, academic papers,
 
475
  <tr>
476
  <td style="vertical-align: top;">
477
  <ul>
478
+ <li><a href="#category-web"> Web</a></li>
479
+ <li><a href="#category-newspaper"> Newspaper</a></li>
480
+ <li><a href="#category-technical"> Technical</a></li>
481
+ <li><a href="#category-book"> Book</a></li>
482
  </ul>
483
  </td>
484
  <td style="vertical-align: top;">
485
  <ul>
486
+ <li><a href="#category-legislative-texts"> Legislative Texts</a></li>
487
+ <li><a href="#category-legislative-transcripts"> Legislative Transcripts</a></li>
488
+ <li><a href="#category-wiki"> Wiki</a></li>
489
+ <li><a href="#category-math"> Math</a></li>
490
  </ul>
491
  </td>
492
  <td style="vertical-align: top;">
493
  <ul>
494
+ <li><a href="#category-forum"> Forum</a></li>
495
+ <li><a href="#category-dialogue"> Dialogue</a></li>
496
  <li><a href="#category-multilingual-parallel-corpora">Multilingual Parallel Corpora</a></li>
497
+ <li><a href="#category-programming"> Programming</a></li>
498
  </ul>
499
  </td>
500
  </tr>
501
  </table>
502
  </li>
503
+ <li><a href="#subsets-and-versions">Subsets and Versions</a></li>
504
+ <li><a href="#details-on-data-sources">Details on Data Sources</a>
505
  <table>
506
  <tr>
507
  <td style="vertical-align: top;">
508
  <ul>
509
+ <li><a href="#amendementsparlement"> AmendementsParlement</a></li>
510
+ <li><a href="#americanstories"> AmericanStories</a></li>
511
+ <li><a href="#claire-french-and-english"> Claire (French and English)</a></li>
512
+ <li><a href="#croissantaligned"> CroissantAligned</a></li>
513
+ <li><a href="#discourspublics"> DiscoursPublics</a></li>
514
+ <li><a href="#europarl-and-europarlaligned"> Europarl and EuroparlAligned</a></li>
515
+ <li><a href="#eurovoc"> Eurovoc</a></li>
516
+ <li><a href="#finewebedu"> FineWebEdu</a></li>
517
+ <li><a href="#gallicamonographies"> GallicaMonographies</a></li>
518
  </ul>
519
  </td>
520
  <td style="vertical-align: top;">
521
  <ul>
522
+ <li><a href="#gallicapress"> GallicaPress</a></li>
523
+ <li><a href="#gutenberg"> Gutenberg</a></li>
524
+ <li><a href="#hal"> HAL</a></li>
525
+ <li><a href="#interventionsparlement"> InterventionsParlement</a></li>
526
+ <li><a href="#legi"> LEGI</a></li>
527
+ <li><a href="#mathpile"> MathPile</a></li>
528
+ <li><a href="#opendata"> OpenData</a></li>
529
+ <li><a href="#openedition"> OpenEdition</a></li>
530
+ <li><a href="#pes2o"> PeS2o</a></li>
531
  </ul>
532
  </td>
533
  <td style="vertical-align: top;">
534
  <ul>
535
+ <li><a href="#pile-uncopyrighted"> Pile (Uncopyrighted)</a></li>
536
+ <li><a href="#questionsecritesparlement"> QuestionsEcritesParlement</a></li>
537
+ <li><a href="#redpajama-v2"> RedPajama (v2)</a></li>
538
+ <li><a href="#stac"> Stac</a></li>
539
+ <li><a href="#thestack"> TheStack</a></li>
540
+ <li><a href="#theses"> Theses</a></li>
541
+ <li><a href="#wikipedia-wikisource-wiktionary"> Wikipedia, Wikisource, Wiktionary</a></li>
542
+ <li><a href="#youtube"> YouTube</a></li>
543
  </ul>
544
  </td>
545
  </tr>
 
547
  </li>
548
  </ul>
549
  </li>
550
+ <li><a href="#example-use-in-python">Example use in python</a>
551
+ <ul>
552
+ <li><a href="#load-the-dataset">Load the dataset</a></li>
553
+ <li><a href="#iterate-over-a-subset">Iterate over a subset</a></li>
554
+ <li><a href="#load-a-specific-version">Load a specific version</a></li>
555
+ </ul>
556
+ </li>
557
  <li><a href="#citation">Citation</a></li>
558
+ <li><a href="#acknowledgements">Acknowledgements</a></li>
559
  <li><a href="#contact">Contact</a></li>
560
  </ul>
561
 
 
580
 
581
  The corpus contains the following information for each text sample:
582
  * `text`: the text sample itself.
583
+ * [`language`](metadata/metadata_examples.json#L3): the language of the text sample (this information comes from the source and can be wrong). <details><summary>Possible values:</summary>
584
+ - an ISO 639-1 code of a natural language ("en", "fr", "de", "es", or "it"),
585
+ - the common name of a programming language prefixed by "code:" ("code:python", "code:c++", …), or
586
+ - a list of ISO 639-1 codes separated by commas when the text sample is multilingual and aligned ("fr,en", "de,fr", "es,en", "it,en",
 
 
 
 
587
  or one of those pairs in the opposite order if the languages appear in the opposite order in the text).
588
+ </details>
589
+ * [`source`](metadata/metadata_examples.json#L4): an identifier for the source(s) of the text sample (Wikipedia, RedPajama, Gutenberg, …).
590
+ All sources are described in detail [in this document](#details-on-data-sources).
591
+ * [`id`](metadata/metadata_examples.json#L13): an identifier that is unique within the source.
592
+ * [`url`](metadata/metadata_examples.json#L35) (optional): the URL of the original text sample on the web, if available.
593
+ * [`title`](metadata/metadata_examples.json#L36) (optional): the title of the original text sample, if available.
594
+ * [`author`](metadata/metadata_examples.json#L81) (optional): the author of the original text sample, if available. <details><summary>Note:</summary>
595
+ Usually the author name in plain text, except for [Gutenberg books](metadata/metadata_examples.json#L91), where it is a JSON-serialized object of the author metadata.
596
+ </details>
597
+ * [`date`](metadata/metadata_examples.json#L6) (optional): the publication date of the original text sample, if available. <details><summary>Note:</summary>
598
  The text format of the date depends on the source.
599
+ </details>
600
+ * [`quality_signals`](metadata/metadata_examples.json#L17) (optional): a list of quality signals about the text sample, in JSON format (that could be used for further filtering or sample weighting). <details><summary>Note:</summary>
 
601
  It can include indicators computed by `fasttext` and `CCNet`, statistics about occurrences of characters, words, special characters, etc.
602
+ </details>
603
+ * [`extra`](metadata/metadata_examples.json#L16) (optional): extra information about the text sample, in JSON format.
604
  This can include metadata about the source subset, the rights, etc.
605
 
606
  Examples of metadata (apart from `text`) are shown for each source in [metadata_examples.json](metadata/metadata_examples.json). A minimal sketch of how to read these fields follows below.
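
As an illustration of the fields above, the following minimal sketch streams the dataset and inspects the per-sample metadata; it assumes that `quality_signals` and `extra`, when present, are JSON-encoded strings as described.

```python
import json

from datasets import load_dataset

# Stream the default configuration so that nothing needs to be fully downloaded.
dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", split="train", streaming=True)

sample = next(iter(dataset))

# Mandatory fields.
print(sample["source"], sample["id"], sample["language"])

# Optional fields may be empty depending on the source.
for field in ("url", "title", "author", "date"):
    if sample.get(field):
        print(f"{field}: {sample[field]}")

# `quality_signals` and `extra` are assumed to be JSON strings when provided.
extra = json.loads(sample["extra"]) if sample.get("extra") else {}
print(extra)
```
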
 
1007
  <tr>
1008
  <td colspan="7"><h4 id="category-legislative-transcripts">Category: Legislative Transcripts</h4></td></tr>
1009
  <tr>
1010
+ <td rowspan="4" style="vertical-align: top;"><a href="#europarl-and-europarlaligned"><strong>Europarl</strong></a></td>
1011
  <td><strong>German (de)</strong></td>
1012
  <td>0.0102</td>
1013
  <td>0.0451</td>
 
1215
  <td></td>
1216
  </tr>
1217
  <tr>
1218
+ <td rowspan="4" style="vertical-align: top;"><a href="#europarl-and-europarlaligned"><strong>EuroparlAligned</strong></a></td>
1219
  <td><strong>it-en</strong></td>
1220
  <td>1.901</td>
1221
  <td>0.100</td>
 
1526
  </table>
1527
  <!-- TABLE END -->
1528
 
1529
+
1530
+ ### Subsets and Versions
1531
+
1532
+ As the dataset is the result of merging multiple sources, it is divided into subsets based on the source and the language of the texts.
1533
+ <br> Different configurations of the dataset are available, depending on the sources and languages included.
1534
+ The list of all configurations is available [in the YAML header of this README file](https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/blob/v1.2/README.md?code=true#L24).
1535
+ Each configuration corresponds to a pathname pattern in the [`data` subdirectory](https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/tree/main/data).
1536
+
1537
+ The dataset is available in the following versions:
1538
+ - **v1.1** / [**main**](https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/tree/main/data) (default):
1539
+ The data used for the first (main) pretraining phase of [Lucie-7B](https://huggingface.co/OpenLLM-France/Lucie-7B), which amounts to roughly 2.3T tokens.
1540
+ - [**v1.2**](https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/tree/v1.2/data): An improved version of the data, where
1541
+ - GallicaMonographies and GallicaPress have been updated to filter out documents with bad OCR quality.
1542
+ - The `Ubuntu_IRC` and `PhilPapers` subsets of Pile have been refined by fixing encoding issues and removing documents in languages other than English, French, Spanish, German and Italian.
1543
+ - [**v1.2-recent_web**](https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/tree/v1.2-recent_web/data): The data used for the second pretraining phase (context extension) of [Lucie-7B](https://huggingface.co/OpenLLM-France/Lucie-7B#2-context-extension).
1544
+ This is the same as `v1.2`, but without old snapshots of web data (only year 2023 for RedPajama and only year 2024 for FineWebEdu).
1545
+ All data that was not filtered out remained unchanged.
1546
+
1547
+ Except for **v1.1**, which is a git tag, all versions are git branches in the dataset repository
1548
+ (e.g. [**v1.2**](https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/tree/v1.2/data)).
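
For example, the configurations declared in the YAML header can be listed programmatically and combined with one of the versions above; this is a minimal sketch using the `datasets` library (the configuration and revision names are taken from the lists above).

```python
from datasets import get_dataset_config_names, load_dataset

# List all configurations declared in the YAML header of the README
# (one per source/language pathname pattern in the `data` subdirectory).
configs = get_dataset_config_names("OpenLLM-France/Lucie-Training-Dataset")
print(len(configs), configs[:10])

# Combine a configuration with a version: "main" (v1.1, default),
# "v1.2" or "v1.2-recent_web" are git revisions of the repository.
dataset = load_dataset(
    "OpenLLM-France/Lucie-Training-Dataset",
    "fr",
    revision="v1.2",
    split="train",
    streaming=True,
)
```
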
1549
+
1550
+
1551
  ### Details on Data Sources
1552
 
1553
  #### AmendementsParlement
1554
  * <u>Source</u>: Corpus contributed by OpenLLM partners.
1555
  * <u>Extracted from</u>: [Regards citoyens](https://www.regardscitoyens.org/#&panel1-4) ([nodeputes.fr](http://www.nosdeputes.fr/), [nossenateurs.fr](http://www.nossenateurs.fr/)). [API](https://github.com/regardscitoyens). License: [CC BY-SA](https://www.regardscitoyens.org/#&panel1-2).
1556
  * <u>Description</u>: A collection of proposed amendments by the French parliament: the legal text and description of the requested modification.
1557
+ <!-- * <u>Citation</u>: No paper found. -->
1558
 
1559
  #### AmericanStories
1560
  * <u>Source</u>: [dell-research-harvard/AmericanStories](https://huggingface.co/datasets/dell-research-harvard/AmericanStories). License: [CC BY 4.0](https://huggingface.co/datasets/dell-research-harvard/AmericanStories).
1561
  * <u>Extracted from</u>: [Chronicling America](https://www.loc.gov/collections/chronicling-america/about-this-collection/). License: [Open](https://www.loc.gov/collections/chronicling-america/about-this-collection/rights-and-access/).
1562
  * <u>Description</u>: "The American Stories dataset is a collection of full article texts extracted from historical U.S. newspaper images. It includes nearly 20 million scans from the public domain Chronicling America collection maintained by the Library of Congress. The dataset is designed to address the challenges posed by complex layouts and low OCR quality in existing newspaper datasets" (from the [dataset card](https://huggingface.co/datasets/dell-research-harvard/AmericanStories)). Dataset containing text retrieved through OCR.
1563
+ * <u>Text Pre-processing</u>:
1564
+ * <u>Filtering</u>:
1565
+ To filter out documents with excessive OCR errors, the dataset was refined by discarding texts with a perplexity higher than 1500,
1566
+ measured using a CCNet model in English.
1567
+ The code to compute perplexity, parallelized on Parquet files, is [available here](https://github.com/OpenLLM-France/Lucie-dataset-filtering); a simplified sketch of this filter follows this entry.
1568
  * <u>Citation</u>: Melissa Dell, Jacob Carlson, Tom Bryan, Emily Silcock, Abhishek Arora, Zejiang Shen, Luca D'Amico-Wong, Quan Le, Pablo Querubin and Leander Heldring (2023). "American Stories: A Large-Scale Structured Text Dataset of Historical U.S. Newspapers," [arxiv:2308.12477](https://arxiv.org/abs/2308.12477v1).
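
The sketch below illustrates the kind of perplexity-based filter described above; it assumes a KenLM model file (`en.arpa.bin` is a hypothetical path), whereas the actual pipeline linked above relies on CCNet's pretrained English model and preprocessing.

```python
import kenlm  # pip install kenlm

# Hypothetical path to a KenLM language model for English;
# the real filter uses CCNet's pretrained model and text preprocessing.
model = kenlm.Model("en.arpa.bin")

def keep_document(text: str, max_perplexity: float = 1500.0) -> bool:
    """Discard documents whose perplexity suggests heavy OCR noise."""
    return model.perplexity(text) <= max_perplexity
```
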
1569
 
 
1570
  #### Claire (French and English)
1571
  * <u>Sources</u>:
1572
  * French dataset: [OpenLLM-France/Claire-Dialogue-French-0.1](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-French-0.1). License: [CC BY-NC-SA 4.0](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-French-0.1).
 
1582
  * Thesis abstracts: French thesis abstract pairs. License: [ETALAB-Licence-Ouverte-v2.0](https://www.etalab.gouv.fr/wp-content/uploads/2017/04/ETALAB-Licence-Ouverte-v2.0.pdf).
1583
  * Song lyrics: [lacoccinelle](https://www.lacoccinelle.net). License: .
1584
  * <u>Description</u>: Data extracted from OPUS takes the form of sentence pairs, where one sentence is in French and the other is in English. OPUS pairs were passed through a custom pipeline designed to select the highest quality sentence pairs. Selected pairs are labeled "UnbabelFrEn" in the CroissantAligned dataset. The thesis abstract subset contains pairs of French or English thesis abstracts paired with translations written by the thesis author. The song lyrics are translated by contributors to www.lacoccinelle.net. Parallel data are used to boost the multilingual capabilities of models trained on them ([Faysse et al., 2024](https://arxiv.org/pdf/2402.00786)).
1585
+ * <u>Text Pre-processing</u>:
1586
+ * <u>Language Separation and Tagging</u>: The original text field of [the Croissant dataset](https://huggingface.co/datasets/croissantllm/croissant_dataset_no_web_data) includes both French and English passages in a non-deterministic order (sometimes English first, sometimes not), separated by different delimiters depending on the subset.
1587
+ Each text was split into monolingual sentences and tagged with the appropriate language code, identified automatically using the [langid library](https://pypi.org/project/langid/).
1588
+ These texts are provided separately in the Lucie-Training-Dataset under the `extra` field as `text_fr` for French and `text_en` for English (see the sketch after this entry).
1589
+ * <u>Random combination of texts prefixed by language</u>: The monolingual texts were recombined using random separators and various methods of prefixing the text with the language (name or code).
1590
+ This was done as a precaution to prevent models trained on this data from language switching when generating text.
1591
+ It can be seen as a very basic instruction to translate the first text into the other language.
1592
  * <u>Citation</u>: Manuel Faysse, Patrick Fernandes, Nuno M. Guerreiro, António Loison, Duarte M. Alves, Caio Corro, Nicolas Boizard, João Alves, Ricardo Rei, Pedro H. Martins, Antoni Bigata Casademunt, François Yvon, André F.T. Martins, Gautier Viaud, Céline Hudelot, Pierre Colombo (2024). "CroissantLLM: A Truly Bilingual French-English Language Model," [arXiv:2402.00786](https://arxiv.org/abs/2402.00786).
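
The tagging step can be pictured with the following sketch, which uses `langid` as described above; the naive newline split is only an approximation, since the actual delimiters between passages vary by subset.

```python
import langid

# Restrict identification to the two languages present in CroissantAligned.
langid.set_languages(["fr", "en"])

def split_languages(text: str) -> dict:
    """Tag each non-empty line with its language and group the results."""
    buckets = {"fr": [], "en": []}
    for sentence in filter(None, (line.strip() for line in text.split("\n"))):
        lang, _score = langid.classify(sentence)
        buckets[lang].append(sentence)
    return {"text_fr": " ".join(buckets["fr"]), "text_en": " ".join(buckets["en"])}
```
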
1593
 
1594
  #### DiscoursPublics
1595
+ * <u>Source</u>: Corpus contributed by OpenLLM partners.
1596
+ * <u>Extracted from</u>: [Vie Publique](https://www.vie-publique.fr/collection-discours-publics).
1597
+ * <u>Description</u>: A collection of public speeches from the principal public actors in France including speeches from the French President starting from 1974 and from the Prime Minister and members of the government starting from 1980.
1598
+ * <u>Text Pre-processing</u>:
1599
+ * <u>Text cleaning</u>: mentions of the source URL and the number of views were removed.
1600
+ <!-- * <u>Citation</u>: No paper found. -->
1601
 
1602
+ #### Europarl and EuroparlAligned
1603
  * <u>Sources</u>:
1604
  * `fr-en`, `es-en`, `it-en` parallel data: [Europarl v7](https://www.statmt.org/europarl/v7/). License: [Open](https://www.statmt.org/europarl/).
1605
  * `fr`, `en`, `de`, `es` monolingual data and `de-fr` parallel data: [Europarl v10](https://www.statmt.org/europarl/v10/training-monolingual/). License: [Open](https://www.statmt.org/europarl/).
1606
  * <u>Description</u>: "The Europarl parallel corpus is extracted from the proceedings of the European Parliament. It includes versions in 21 European languages: Romanic (French, Italian, Spanish, Portuguese, Romanian), Germanic (English, Dutch, German, Danish, Swedish), Slavik (Bulgarian, Czech, Polish, Slovak, Slovene), Finni-Ugric (Finnish, Hungarian, Estonian), Baltic (Latvian, Lithuanian), and Greek. The goal of the extraction and processing was to generate sentence aligned text for statistical machine translation systems" ([www.statmt.org](https://www.statmt.org/europarl/)).
1607
+ * <u>Text Pre-processing</u>:
1608
+ * <u>Random Combination of Aligned Texts Prefixed by Language</u>: The same process as used for the [CroissantAligned](#croissantaligned) dataset was applied to the Europarl dataset.
1609
+ In the Lucie-Training-Dataset, this dataset provides texts in the two languages under the `extra` sub-fields `text_1` and `text_2`, and the corresponding language codes under `lang_1` and `lang_2` (see the sketch after this entry).
1610
  * <u>Citation</u>: Philipp Koehn (2005). "Europarl: A Parallel Corpus for Statistical Machine Translation," MT Summit.
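
As a rough picture of the recombination step, the sketch below joins an aligned pair from the `extra` sub-fields with a randomly chosen separator and language prefix; the separators and templates shown are illustrative, not the exact ones used in the pipeline.

```python
import random

# Illustrative separators and language-prefix templates (not the exact set used).
SEPARATORS = ["\n\n", "\n----\n", "\n"]
TEMPLATES = ["[{lang}] {text}", "{lang}: {text}"]

def combine_pair(extra: dict) -> str:
    """Recombine an aligned pair (text_1/text_2, lang_1/lang_2) into a single
    training text, each side prefixed by its language."""
    template = random.choice(TEMPLATES)
    parts = [
        template.format(lang=extra["lang_1"], text=extra["text_1"]),
        template.format(lang=extra["lang_2"], text=extra["text_2"]),
    ]
    return random.choice(SEPARATORS).join(parts)
```
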
1611
 
1612
  #### Eurovoc
 
1629
  * <u>Source</u>: Corpus contributed by OpenLLM partners. A version is also published here: [PleIAs/French-PD-Books](https://huggingface.co/datasets/PleIAs/French-PD-Books). License: None (public domain).
1630
  * <u>Extracted from</u>: [Gallicagram](https://shiny.ens-paris-saclay.fr/app/gallicagram).
1631
  * <u>Description</u>: A large collection of French monographies in the public domain made available through the French National Library ([Gallica](https://gallica.bnf.fr/accueil/fr/content/accueil-fr?mode=desktop)). Dataset containing text retrieved through OCR.
1632
+ <!-- * <u>Citation</u>: No paper found. -->
1633
 
1634
  #### GallicaPress
1635
  * <u>Source</u>: Corpus contributed by OpenLLM partners. A version is also published here: [PleIAs/French-PD-Newspapers](https://huggingface.co/datasets/PleIAs/French-PD-Newspapers). License: None (public domain).
1636
  * <u>Extracted from</u>: [Gallicagram](https://shiny.ens-paris-saclay.fr/app/gallicagram).
1637
  * <u>Description</u>: A large collection of French newspapers and periodicals in the public domain made available through the French National Library ([Gallica](https://gallica.bnf.fr/accueil/fr/content/accueil-fr?mode=desktop)). Dataset containing text retrieved through OCR.
1638
+ <!-- * <u>Citation</u>: No paper found. -->
1639
 
1640
  #### Gutenberg
1641
+ * <u>Source</u>: Corpus compiled by OpenLLM partners.
1642
+ * <u>Extracted from</u>:
1643
+ * [aleph.gutenberg.org](http://aleph.gutenberg.org/) via [Project Gutenberg](https://www.gutenberg.org/). License: [Open](https://www.gutenberg.org/policy/terms_of_use.html).
1644
+ * [pgcorpus](https://github.com/pgcorpus/gutenberg). License: [CC BY-4.0](https://zenodo.org/records/2422561).
1645
+ * <u>Description</u>: A collection of free eBooks, manually prepared by human annotators.
1646
+ * <u>Text Pre-processing</u>:
1647
+ * <u>Filtering</u>: The dataset was filtered based on the author's date of death, so that only texts from authors who died more than 70 years ago are included (80 years for French authors). This filtering was done to ensure that the texts are in the public domain (a minimal sketch of this rule follows this entry).
1648
+ * <u>Text cleaning</u>: Headers and footers mentioning Project Gutenberg were removed.
1649
+ <!-- * <u>Citation</u>: No paper found. -->
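
A minimal sketch of the public-domain rule stated above (the death year would be read from the Gutenberg author metadata carried in the `author` field):

```python
import datetime

def in_public_domain(death_year: int | None, is_french_author: bool) -> bool:
    """Keep a book only if its author died more than 70 years ago
    (80 years for French authors), per the rule above."""
    if death_year is None:
        return False
    delay = 80 if is_french_author else 70
    return datetime.date.today().year - death_year > delay
```
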
1650
 
1651
  #### HAL
1652
  * <u>Source</u>: The ROOTS corpus by BigScience (unpublished). License: CC BY-4.0.
 
1659
  * <u>Source</u>: Corpus contributed by OpenLLM partners.
1660
  * <u>Extracted from</u>: [Regards citoyens](https://www.regardscitoyens.org/#&panel1-4) ([nodeputes.fr](http://www.nosdeputes.fr/), [nossenateurs.fr](http://www.nossenateurs.fr/)). [API](https://github.com/regardscitoyens). License: [CC BY-SA](https://www.regardscitoyens.org/#&panel1-2).
1661
  * <u>Description</u>: Transcripts of speeches made during French parliamentary debates.
1662
+ <!-- * <u>Citation</u>: No paper found. -->
1663
 
1664
  #### MathPile
1665
  * <u>Source</u>: [GAIR/MathPile_Commercial](https://huggingface.co/datasets/GAIR/MathPile_Commercial). License: [CC BY-SA 4.0](https://huggingface.co/datasets/GAIR/MathPile_Commercial)
 
1671
  * <u>Source</u>: [Nicolas-BZRD/DILA_OPENDATA_FR_2023](https://huggingface.co/datasets/Nicolas-BZRD/DILA_OPENDATA_FR_2023/tree/main) (balo, dole, inca, kali, legi and sarde subsets). License: [ODC-BY](https://huggingface.co/datasets/Nicolas-BZRD/DILA_OPENDATA_FR_2023/tree/main).
1672
  * <u>Extracted from</u>: [OpenData](https://echanges.dila.gouv.fr/OPENDATA/) (Data collection date: October, 2023).
1673
  * <u>Description</u>: "The French Government Open Data (DILA) Dataset is a collection of text data extracted from various sources provided by the French government, specifically the Direction de l'information légale et administrative (DILA). This dataset contains a wide range of legal, administrative, and legislative documents. The data has been organized into several categories for easy access and analysis" (from the [dataset card](https://huggingface.co/datasets/Nicolas-BZRD/DILA_OPENDATA_FR_2023/tree/main)).
1674
+ <!-- * <u>Citation</u>: No paper found. -->
1675
 
1676
  #### OpenEdition
1677
  * <u>Source</u>: Corpus contributed by OpenLLM partners.
1678
  * <u>Extracted from</u>: [Open Edition](https://www.openedition.org/).
1679
+ <!-- * <u>Description</u>: TODO -->
1680
+ <!-- * <u>Citation</u>: No paper found. -->
1681
 
1682
  #### PeS2o
1683
  * <u>Source</u>: [allenai/peS2o](https://huggingface.co/datasets/allenai/peS2o). License: [ODC BY-v1.0](https://opendatacommons.org/licenses/by/1-0/)
 
1696
  * Ubuntu IRC: "The Ubuntu IRC dataset is derived from the publicly available chatlogs of all Ubuntu-related channels on the Freenode IRC chat server."
1697
  * PhilPapers: a dataset of open access philosophy publications from an international database maintained by the Center for Digital Philosophy at the University of Western Ontario.
1698
  * NIH ExPORTER: "The NIH Grant abstracts provides a bulk-data repository for awarded applications through the ExPORTER service covering the fiscal years 1985-present."
1699
+ * <u>Citations</u>:
1700
  * Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, Connor Leahy (2020). "The Pile: An 800GB Dataset of Diverse Text for Language Modeling," [ arXiv:2101.00027](https://arxiv.org/abs/2101.00027).
1701
  * Stella Biderman, Kieran Bicheno, Leo Gao (2022). "Datasheet for the Pile," [ arXiv:2201.07311](https://arxiv.org/abs/2201.07311).
1702
 
 
1704
  * <u>Source</u>: Corpus contributed by OpenLLM partners.
1705
  * <u>Extracted from</u>: [Regards citoyens](https://www.regardscitoyens.org/#&panel1-4) ([text](https://data.regardscitoyens.org/nosdeputes.fr/)). License: [CC BY-NC-SA](https://data.regardscitoyens.org/nosdeputes.fr/).
1706
  * <u>Description</u>: Collection of long written questions, read during a session at the French National Assembly. Questions are asked by a member of the French parliament and addressed to a minister (who is given two months to respond).
1707
+ <!-- * <u>Citation</u>: No paper found. -->
1708
 
1709
  #### RedPajama (v2)
1710
  * <u>Source</u>: [togethercomputer/RedPajama-Data-V2](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2). License: [Apache 2.0](https://github.com/togethercomputer/RedPajama-Data) (data preparation code), Not specified (data) but see [Common Crawl terms of use](https://commoncrawl.org/terms-of-use).
 
1728
  * <u>Source</u>: Corpus contributed by OpenLLM partners.
1729
  * <u>Extracted from</u>: [theses.fr](https://theses.fr/?domaine=theses) and [HAL](https://hal.science/).
1730
  * <u>Description</u>: A collection of doctoral theses published in France. Dataset containing text retrieved through OCR.
1731
+ * <u>Text Pre-processing</u>:
1732
+ * <u>Filtering</u>: Texts with fewer than 1000 words or 10,000 characters were removed (see the sketch after this entry).
1733
+ * <u>Text cleaning</u>: Because the results of OCR on tables and graphics can give rise to garbage text, the text was cleaned by removing the most suspicious chunks of text. Chunks of text were removed if the detected language was not among French, English, Spanish, German and Italian, or if the perplexity of a CCNet Language Model was higher than 2000 ([details here](https://github.com/OpenLLM-France/Lucie-Training/blob/7f1f7efa1288f709662a9067bf2c3db856b850f8/tokenization/data.py#L1946)).
1734
+ <!-- * <u>Citation</u>: No paper found. -->
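
The length filter above can be sketched as follows (the chunk-level language and perplexity cleaning is more involved; see the linked code):

```python
def keep_thesis(text: str, min_words: int = 1000, min_chars: int = 10_000) -> bool:
    """Drop theses that are too short, which is typically a sign of failed OCR."""
    return len(text.split()) >= min_words and len(text) >= min_chars
```
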
1735
 
1736
  #### Wikipedia, Wikisource, Wiktionary
1737
  * <u>Source</u>: Corpus contributed by LINAGORA Labs (OpenLLM-France).
 
1740
  * [OpenLLM-France/wikisource](https://huggingface.co/datasets/OpenLLM-France/wikisource)
1741
  * [OpenLLM-France/wiktionary](https://huggingface.co/datasets/OpenLLM-France/wiktionary)
1742
  * <u>Extracted from</u>: [Wikimedia dumps](https://dumps.wikimedia.org/other/enterprise_html/runs/). License: [GFDL/CC BY-SA](https://dumps.wikimedia.org/legal.html).
1743
+ <!-- * <u>Description</u>: TODO -->
1744
+ <!-- * <u>Citation</u>: No paper found. -->
1745
 
1746
  #### YouTube
1747
  * <u>Source</u>: Corpus contributed by LINAGORA Labs (OpenLLM-France).
1748
+ * <u>Extracted from</u>: [YouTube](https://www.youtube.com/). <!-- License: TODO? -->
1749
+ * <u>Description</u>: French subtitles from videos published with permissive licenses on YouTube. <!-- TODO -->
1750
+ <!-- * <u>Citation</u>: No paper found. -->
1751
 
1752
  ## Example use in python
1753
 
1754
+ ### Load the dataset
1755
+
1756
+ Load and iterate over the full dataset using the `datasets` library:
1757
  ```python
1758
  from datasets import load_dataset
1759
 
1760
+ dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", split="train", streaming=True)
 
 
1761
 
1762
  for sample in dataset:
1763
+
1764
  text = sample["text"]
1765
+
1766
+ # … do something with the text
1767
  ```
1768
 
1769
+ ### Iterate over a subset
1770
+
1771
  Several configurations are available to select a language, a source, or both, illustrated in the following examples.
1772
 
1773
  Load data in French:
1774
  ```python
1775
+ from datasets import load_dataset
1776
+
1777
+ kwargs = dict(split="train", streaming=True)
1778
+
1779
  dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "fr", **kwargs)
1780
  ```
1781
  Load data where French and English are aligned:
1782
  ```python
1783
  dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "fr,en", **kwargs)
1784
  ```
1785
+
1786
  Load data corresponding to files with programming languages:
1787
  ```python
1788
  dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "code", **kwargs)
 
1791
  ```python
1792
  dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "code-python", **kwargs)
1793
  ```
1794
+
1795
+ Load data from Wikipedia (in all available languages):
1796
  ```python
1797
  dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "Wikipedia", **kwargs)
1798
  ```
 
1801
  dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "Wikipedia-fr", **kwargs)
1802
  ```
1803
 
1804
+ Load the Pile dataset:
1805
+ ```python
1806
+ dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "Pile", **kwargs)
1807
+ ```
1808
+ Load the subset "`PhilPapers`" from the Pile dataset:
1809
+ ```python
1810
+ dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "Pile-PhilPapers", **kwargs)
1811
+ ```
1812
+
1813
+ ### Load a specific version
1814
+
1815
+
1816
+ You can load a specific version with the `datasets` Python package using the `revision` parameter of `load_dataset(…)`:
1817
+ ```python
1818
+ from datasets import load_dataset
1819
+
1820
+ kwargs = dict(split="train", streaming=True)
1821
+
1822
+ name = None # or a configuration (e.g. "fr", "code-python", "Wikipedia-fr", "Pile-PhilPapers")
1823
+
1824
+ dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", name, revision="v1.2", **kwargs)
1825
+ ```
1826
 
 
1827
 
1828
  ## Citation
1829
 
1830
  TODO
1831
 
1832
+ ## Acknowledgements
1833
+
1834
+ TODO
1835
+
1836
  ## Contact
1837
 
1838
  <pre>[email protected]</pre>