README.md CHANGED
@@ -451,33 +451,51 @@ configs:
451
  # Dataset Card
452
 
453
  The Lucie Training Dataset is a curated collection of text data
454
- in English, French, German, Spanish and Italian,
455
- from the web,
456
- video subtitles,
457
- collections of books, newspapers, monographies, and magazines processed by Optical Character Recognition (OCR),
458
- as well as collections of files in diverse programming languages.
459
 
460
- It was used to pretrain [Lucie-7B](https://huggingface.co/OpenLLM-France/Lucie-7B),
461
  a foundation LLM with strong capabilities in French and English.
462
463
  ## Dataset Description
464
 
465
- This dataset was made to provide an extensive and diverse dataset for training Large Language Models (LLM),
466
- with the following motivations in mind:
467
  * Data mix:
468
- * French is as well represented as English
469
- (Lucie Training Dataset is one of the biggest of collection of French text data with a minimum of quality),
470
- to avoid that the LLM is culturally biased towards English.
471
- * German, Spanish and Italian are also represented to some extend,
472
- * Code is also included to boost the reasoning capabilities of LLM.
473
  * Data filtering and deduplication:
474
- * The dataset is cleaned low-quality data
475
- * The dataset is cleaned from duplicates to some extend, following best practices.
476
  * Ethics:
477
- * A special care was taken to respect copyright laws and the privacy of individuals.
478
  All books, newspapers, monographies, and magazines are in the public domain
479
- (which depends on the author's death date, and the country of publication).
480
- * There is no data from the web for which robots.txt files forbid crawling.
481
 
482
  ### Dataset Structure
483
 
@@ -504,7 +522,998 @@ The corpus contains the following information for each text sample:
504
 
505
  Examples of metadata (except from `text`) are shown for each source in [metadata_examples.json](metadata_examples.json).
506
 
507
- ### Example use in python
508
 
509
  Load the dataset using the `datasets` library:
510
  ```python
@@ -513,6 +1522,10 @@ from datasets import load_dataset
513
  kwargs = {"split": "train", "streaming": True}
514
 
515
  dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", **kwargs)
516
  ```
517
 
518
  Several configurations are available to select a language, a source, or both, illustrated in the following examples.
@@ -531,7 +1544,7 @@ dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "code", **kwargs
531
  ```
532
  Load data in Python:
533
  ```python
534
- dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "code:python", **kwargs)
535
  ```
536
  Load data from Wikipedia (in available languages):
537
  ```python
@@ -541,3 +1554,15 @@ Load data from French pages of Wikipedia ([wikipedia.fr](https://www.wikipedia.f
541
  ```python
542
  dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "Wikipedia-fr", **kwargs)
543
  ```
451
  # Dataset Card
452
 
453
  The Lucie Training Dataset is a curated collection of text data
454
+ in English, French, German, Spanish and Italian culled from a variety of sources including web data, video subtitles, academic papers,
455
+ digital books, newspapers, and magazines, some of which were processed by Optical Character Recognition (OCR). It also contains samples of code in diverse programming languages.
 
 
 
456
 
457
+ The Lucie Training Dataset was used to pretrain [Lucie-7B](https://huggingface.co/OpenLLM-France/Lucie-7B),
458
  a foundation LLM with strong capabilities in French and English.
459
 
460
+ Table of Contents:
461
+ * [Dataset Description](#dataset-description)
462
+ * [Dataset Structure](#dataset-structure)
463
+ * [Dataset Composition](#dataset-composition)
464
+ * [Web](#web)
465
+ * [Newspaper](#newspaper)
466
+ * [Technical](#technical)
467
+ * [Book](#book)
468
+ * [Parallel Corpora](#parallel-corpora)
469
+ * [Legislative Texts](#legislative-texts)
470
+ * [Wiki](#wiki)
471
+ * [Math](#math)
472
+ * [Forum](#forum)
473
+ * [Dialogue](#dialogue)
474
+ * [Legislative Transcripts](#legislative-transcripts)
475
+ * [Programming](#programming)
476
+ * [Details on Data Sources](#details-on-data-sources)
477
+ <!-- * [RedPajama (v2)](#redpajama-v2) -->
478
+ * [Example use in python](#example-use-in-python)
479
+ * [License](#license)
480
+ * [Citation](#citation)
481
+ * [Contact](#contact)
482
+
483
+
484
  ## Dataset Description
485
 
486
+ This dataset was built to provide an extensive and diverse corpus for training Large Language Models (LLMs). Here are some of its principal features:
 
487
  * Data mix:
488
+ * The dataset contains equal amounts of French and English data -- it is in fact one of the biggest collections of French text data that has been preprocessed for LLM training -- with the aim of minimizing Anglo-centric cultural biases.
489
+ * German, Spanish and Italian are also represented in small amounts.
490
+ * Code is also included to boost the reasoning capabilities of LLMs.
 
 
491
  * Data filtering and deduplication:
492
+ * The dataset has been cleaned in an effort to remove very low-quality data.
493
+ * Duplicate data samples have been removed to some extent, following best practices.
494
  * Ethics:
495
+ * Special care has been taken to respect copyright laws and individual privacy.
496
  All books, newspapers, monographies, and magazines are in the public domain
497
+ (which depends on the author's date of death and the country of publication).
498
+ * All web data in the dataset came from sites with robots.txt files that do not forbid crawling.
499
 
500
  ### Dataset Structure
501
 
 
522
 
523
  Examples of metadata (except from `text`) are shown for each source in [metadata_examples.json](metadata_examples.json).
524
 
525
+
526
+ ### Dataset Composition
527
+
528
+ <table>
529
+ <thead>
530
+ <tr>
531
+ <th><strong>subset</strong></th>
532
+ <th><strong>language</strong></th>
533
+ <th><strong>M docs</strong></th>
534
+ <th><strong>B words</strong></th>
535
+ <th><strong>B tokens</strong></th>
536
+ <th><strong>B chars</strong></th>
537
+ <th></th>
538
+ </tr>
539
+ </thead>
540
+ <tbody>
541
+ <tr>
542
+ <td colspan="7"><h4>Web</h4></td></tr>
543
+ <tr>
544
+ <td><strong>[RedPajama](#redpajama-v2)</strong></td>
545
+ <td><strong>fr</strong></td>
546
+ <td>640.77</td>
547
+ <td>477.758</td>
548
+ <td>741.023</td>
549
+ <td>2974.596</td>
550
+ <td><strong>2014</strong> (1.32 B words), <strong>2015</strong> (0.776 B words), <strong>2016</strong> (2.033 B words), <strong>2017</strong> (55.665 B words), <strong>2018</strong> (81.345 B words), <strong>2020</strong> (75.141 B words), <strong>2021</strong> (82.439 B words), <strong>2022</strong> (64.866 B words), <strong>2023</strong> (27.239 B words)</td>
551
+ </tr>
552
+ <tr>
553
+ <td><strong>[RedPajama](#redpajama-v2)</strong></td>
554
+ <td><strong>de</strong></td>
555
+ <td>162.779</td>
556
+ <td>103.078</td>
557
+ <td>201.371</td>
558
+ <td>747.631</td>
559
+ <td><strong>2021</strong> (17.591 B words), <strong>2023</strong> (24.704 B words)</td>
560
+ </tr>
561
+ <tr>
562
+ <td><strong>RedPajama</strong></td>
563
+ <td><strong>es</strong></td>
564
+ <td>169.447</td>
565
+ <td>121.751</td>
566
+ <td>197.125</td>
567
+ <td>746.984</td>
568
+ <td><strong>2021</strong> (20.821 B words), <strong>2023</strong> (28.868 B words)</td>
569
+ </tr>
570
+ <tr>
571
+ <td><strong>RedPajama</strong></td>
572
+ <td><strong>it</strong></td>
573
+ <td>97.324</td>
574
+ <td>60.194</td>
575
+ <td>108.416</td>
576
+ <td>393.012</td>
577
+ <td><strong>2021</strong> (10.266 B words), <strong>2023</strong> (14.403 B words)</td>
578
+ </tr>
579
+ <tr>
580
+ <td><strong>FineWebEdu</strong></td>
581
+ <td><strong>en</strong></td>
582
+ <td>421.209</td>
583
+ <td>327.453</td>
584
+ <td>467.837</td>
585
+ <td>2018.215</td>
586
+ <td><strong>2019</strong> (65.275 B words), <strong>2020</strong> (59.076 B words), <strong>2022</strong> (58.788 B words), <strong>2023</strong> (62.085 B words), <strong>2024</strong> (9.197 B words)</td>
587
+ </tr>
588
+ <tr>
589
+ <td colspan="7"><h4>Newspaper</h4></td></tr>
590
+ <tr>
591
+ <td><strong>GallicaPress</strong></td>
592
+ <td><strong>fr</strong></td>
593
+ <td>3.205</td>
594
+ <td>67.496</td>
595
+ <td>121.606</td>
596
+ <td>408.882</td>
597
+ <td></td>
598
+ </tr>
599
+ <tr>
600
+ <td><strong>AmericanStories</strong></td>
601
+ <td><strong>en</strong></td>
602
+ <td>59.42</td>
603
+ <td>8.902</td>
604
+ <td>14.313</td>
605
+ <td>50.844</td>
606
+ <td></td>
607
+ </tr>
608
+ <tr>
609
+ <td colspan="7"><h4>Technical</h4></td></tr>
610
+ <tr>
611
+ <td><strong>PeS2o</strong></td>
612
+ <td><strong>en</strong></td>
613
+ <td>38.972</td>
614
+ <td>42.296</td>
615
+ <td>65.365</td>
616
+ <td>268.963</td>
617
+ <td></td>
618
+ </tr>
619
+ <tr>
620
+ <td><strong>HAL</strong></td>
621
+ <td><strong>fr</strong></td>
622
+ <td>0.349</td>
623
+ <td>9.356</td>
624
+ <td>16.224</td>
625
+ <td>58.308</td>
626
+ <td></td>
627
+ </tr>
628
+ <tr>
629
+ <td><strong>Theses</strong></td>
630
+ <td><strong>fr</strong></td>
631
+ <td>0.102</td>
632
+ <td>7.547</td>
633
+ <td>14.06</td>
634
+ <td>47.758</td>
635
+ <td></td>
636
+ </tr>
637
+ <!-- <tr>
638
+ <td><strong>Persee</strong></td>
639
+ <td><strong>fr</strong></td>
640
+ <td>1.094</td>
641
+ <td>3.25</td>
642
+ <td>5.754</td>
643
+ <td>20.314</td>
644
+ <td></td>
645
+ </tr> -->
646
+ <tr>
647
+ <td><strong>Pile (USPTO_Backgrounds)</strong></td>
648
+ <td><strong>en</strong></td>
649
+ <td>5.139</td>
650
+ <td>3.492</td>
651
+ <td>5.105</td>
652
+ <td>22.309</td>
653
+ <td></td>
654
+ </tr>
655
+ <tr>
656
+ <td><strong>OpenEdition</strong></td>
657
+ <td><strong>fr</strong></td>
658
+ <td>0.939</td>
659
+ <td>2.225</td>
660
+ <td>3.604</td>
661
+ <td>14.459</td>
662
+ <td></td>
663
+ </tr>
664
+ <tr>
665
+ <td><strong>Pile (PhilPapers)</strong></td>
666
+ <td><strong>en</strong></td>
667
+ <td>0.031</td>
668
+ <td>0.363</td>
669
+ <td>0.618</td>
670
+ <td>2.304</td>
671
+ <td></td>
672
+ </tr>
673
+ <tr>
674
+ <td><strong>Pile (NIH_ExPorter)</strong></td>
675
+ <td><strong>en</strong></td>
676
+ <td>0.914</td>
677
+ <td>0.288</td>
678
+ <td>0.431</td>
679
+ <td>1.979</td>
680
+ <td></td>
681
+ </tr>
682
+ <tr>
683
+ <td colspan="7"><h4>Book</h4></td></tr>
684
+ <tr>
685
+ <td><strong>GallicaMonographies</strong></td>
686
+ <td><strong>fr</strong></td>
687
+ <td>0.278</td>
688
+ <td>15.106</td>
689
+ <td>25.169</td>
690
+ <td>90.456</td>
691
+ <td></td>
692
+ </tr>
693
+ <tr>
694
+ <td><strong>Gutenberg</strong></td>
695
+ <td><strong>en</strong></td>
696
+ <td>0.056</td>
697
+ <td>3.544</td>
698
+ <td>5.516</td>
699
+ <td>20.579</td>
700
+ <td></td>
701
+ </tr>
702
+ <tr>
703
+ <td><strong>Gutenberg</strong></td>
704
+ <td><strong>fr</strong></td>
705
+ <td>0.003</td>
706
+ <td>0.227</td>
707
+ <td>0.383</td>
708
+ <td>1.392</td>
709
+ <td></td>
710
+ </tr>
711
+ <tr>
712
+ <td><strong>Gutenberg</strong></td>
713
+ <td><strong>de</strong></td>
714
+ <td>0.002</td>
715
+ <td>0.099</td>
716
+ <td>0.193</td>
717
+ <td>0.654</td>
718
+ <td></td>
719
+ </tr>
720
+ <tr>
721
+ <td><strong>Gutenberg</strong></td>
722
+ <td><strong>it</strong></td>
723
+ <td>0.001</td>
724
+ <td>0.066</td>
725
+ <td>0.129</td>
726
+ <td>0.414</td>
727
+ <td></td>
728
+ </tr>
729
+ <tr>
730
+ <td><strong>Gutenberg</strong></td>
731
+ <td><strong>es</strong></td>
732
+ <td>0.001</td>
733
+ <td>0.051</td>
734
+ <td>0.092</td>
735
+ <td>0.303</td>
736
+ <td></td>
737
+ </tr>
738
+ <tr>
739
+ <td colspan="7"><h4>Parallel Corpora</h4></td></tr>
740
+ <tr>
741
+ <td><strong>CroissantAligned</strong></td>
742
+ <td><strong>fr-en</strong></td>
743
+ <td>408.029</td>
744
+ <td>16.911</td>
745
+ <td>25.351</td>
746
+ <td>107.003</td>
747
+ <td></td>
748
+ </tr>
749
+ <tr>
750
+ <td><strong>EuroparlAligned</strong></td>
751
+ <td><strong>it-en</strong></td>
752
+ <td>1.901</td>
753
+ <td>0.1</td>
754
+ <td>0.151</td>
755
+ <td>0.638</td>
756
+ <td></td>
757
+ </tr>
758
+ <tr>
759
+ <td><strong>EuroparlAligned</strong></td>
760
+ <td><strong>fr-en</strong></td>
761
+ <td>2.003</td>
762
+ <td>0.105</td>
763
+ <td>0.143</td>
764
+ <td>0.655</td>
765
+ <td></td>
766
+ </tr>
767
+ <tr>
768
+ <td><strong>EuroparlAligned</strong></td>
769
+ <td><strong>es-en</strong></td>
770
+ <td>1.961</td>
771
+ <td>0.103</td>
772
+ <td>0.143</td>
773
+ <td>0.631</td>
774
+ <td></td>
775
+ </tr>
776
+ <tr>
777
+ <td><strong>EuroparlAligned</strong></td>
778
+ <td><strong>de-fr</strong></td>
779
+ <td>1.792</td>
780
+ <td>0.091</td>
781
+ <td>0.141</td>
782
+ <td>0.621</td>
783
+ <td></td>
784
+ </tr>
785
+ <tr>
786
+ <td colspan="7"><h4>Legislative Texts</h4></td></tr>
787
+ <tr>
788
+ <td><strong>Pile (FreeLaw)</strong></td>
789
+ <td><strong>en</strong></td>
790
+ <td>3.415</td>
791
+ <td>8.204</td>
792
+ <td>14.011</td>
793
+ <td>52.58</td>
794
+ <td></td>
795
+ </tr>
796
+ <tr>
797
+ <td><strong>Eurovoc</strong></td>
798
+ <td><strong>en</strong></td>
799
+ <td>0.272</td>
800
+ <td>1.523</td>
801
+ <td>2.571</td>
802
+ <td>9.468</td>
803
+ <td></td>
804
+ </tr>
805
+ <tr>
806
+ <td><strong>Eurovoc</strong></td>
807
+ <td><strong>it</strong></td>
808
+ <td>0.245</td>
809
+ <td>0.731</td>
810
+ <td>1.527</td>
811
+ <td>4.867</td>
812
+ <td></td>
813
+ </tr>
814
+ <tr>
815
+ <td><strong>Eurovoc</strong></td>
816
+ <td><strong>de</strong></td>
817
+ <td>0.247</td>
818
+ <td>0.678</td>
819
+ <td>1.497</td>
820
+ <td>4.915</td>
821
+ <td></td>
822
+ </tr>
823
+ <tr>
824
+ <td><strong>Eurovoc</strong></td>
825
+ <td><strong>es</strong></td>
826
+ <td>0.246</td>
827
+ <td>0.757</td>
828
+ <td>1.411</td>
829
+ <td>4.684</td>
830
+ <td></td>
831
+ </tr>
832
+ <tr>
833
+ <td><strong>OpenData</strong></td>
834
+ <td><strong>fr</strong></td>
835
+ <td>1.169</td>
836
+ <td>0.755</td>
837
+ <td>1.209</td>
838
+ <td>4.638</td>
839
+ <td></td>
840
+ </tr>
841
+ <tr>
842
+ <td><strong>QuestionsEcritesParlement</strong></td>
843
+ <td><strong>fr</strong></td>
844
+ <td>0.189</td>
845
+ <td>0.108</td>
846
+ <td>0.156</td>
847
+ <td>0.705</td>
848
+ <td></td>
849
+ </tr>
850
+ <tr>
851
+ <td><strong>LEGI</strong></td>
852
+ <td><strong>fr</strong></td>
853
+ <td>0.621</td>
854
+ <td>0.088</td>
855
+ <td>0.145</td>
856
+ <td>0.563</td>
857
+ <td></td>
858
+ </tr>
859
+ <tr>
860
+ <td><strong>AmendementsParlement</strong></td>
861
+ <td><strong>fr</strong></td>
862
+ <td>0.673</td>
863
+ <td>0.045</td>
864
+ <td>0.074</td>
865
+ <td>0.274</td>
866
+ <td></td>
867
+ </tr>
868
+ <tr>
869
+ <td colspan="7"><h4>Wiki</h4></td></tr>
870
+ <tr>
871
+ <td><strong>Wikipedia</strong></td>
872
+ <td><strong>en</strong></td>
873
+ <td>6.893</td>
874
+ <td>4.708</td>
875
+ <td>7.898</td>
876
+ <td>26.616</td>
877
+ <td></td>
878
+ </tr>
879
+ <tr>
880
+ <td><strong>Wikipedia</strong></td>
881
+ <td><strong>de</strong></td>
882
+ <td>2.877</td>
883
+ <td>1.709</td>
884
+ <td>3.476</td>
885
+ <td>11.252</td>
886
+ <td></td>
887
+ </tr>
888
+ <tr>
889
+ <td><strong>Wikipedia</strong></td>
890
+ <td><strong>fr</strong></td>
891
+ <td>2.648</td>
892
+ <td>1.726</td>
893
+ <td>2.94</td>
894
+ <td>9.879</td>
895
+ <td></td>
896
+ </tr>
897
+ <tr>
898
+ <td><strong>Wikipedia</strong></td>
899
+ <td><strong>es</strong></td>
900
+ <td>1.947</td>
901
+ <td>1.245</td>
902
+ <td>2.124</td>
903
+ <td>7.161</td>
904
+ <td></td>
905
+ </tr>
906
+ <tr>
907
+ <td><strong>Wikipedia</strong></td>
908
+ <td><strong>it</strong></td>
909
+ <td>1.87</td>
910
+ <td>1.06</td>
911
+ <td>1.959</td>
912
+ <td>6.161</td>
913
+ <td></td>
914
+ </tr>
915
+ <tr>
916
+ <td><strong>wikisource</strong></td>
917
+ <td><strong>fr</strong></td>
918
+ <td>0.186</td>
919
+ <td>0.523</td>
920
+ <td>0.795</td>
921
+ <td>3.08</td>
922
+ <td></td>
923
+ </tr>
924
+ <tr>
925
+ <td><strong>wiktionary</strong></td>
926
+ <td><strong>fr</strong></td>
927
+ <td>0.65</td>
928
+ <td>0.053</td>
929
+ <td>0.117</td>
930
+ <td>0.347</td>
931
+ <td></td>
932
+ </tr>
933
+ <tr>
934
+ <td colspan="7"><h4>Math</h4></td></tr>
935
+ <tr>
936
+ <td><strong>MathPile</strong></td>
937
+ <td><strong>en</strong></td>
938
+ <td>0.737</td>
939
+ <td>3.408</td>
940
+ <td>9.637</td>
941
+ <td>27.29</td>
942
+ <td></td>
943
+ </tr>
944
+ <tr>
945
+ <td><strong>Pile (DM_Mathematics)</strong></td>
946
+ <td><strong>en</strong></td>
947
+ <td>0.992</td>
948
+ <td>1.746</td>
949
+ <td>4.928</td>
950
+ <td>8.127</td>
951
+ <td></td>
952
+ </tr>
953
+ <tr>
954
+ <td colspan="7"><h4>Forum</h4></td></tr>
955
+ <tr>
956
+ <td><strong>Pile (StackExchange)</strong></td>
957
+ <td><strong>en</strong></td>
958
+ <td>15.269</td>
959
+ <td>4.534</td>
960
+ <td>10.275</td>
961
+ <td>33.609</td>
962
+ <td></td>
963
+ </tr>
964
+ <tr>
965
+ <td><strong>Pile (Ubuntu_IRC)</strong></td>
966
+ <td><strong>en</strong></td>
967
+ <td>0.01</td>
968
+ <td>0.867</td>
969
+ <td>2.159</td>
970
+ <td>5.61</td>
971
+ <td></td>
972
+ </tr>
973
+ <tr>
974
+ <td colspan="7"><h4>Dialogue</h4></td></tr>
975
+ <tr>
976
+ <td><strong>Claire</strong></td>
977
+ <td><strong>en</strong></td>
978
+ <td>0.949</td>
979
+ <td>0.818</td>
980
+ <td>1.161</td>
981
+ <td>4.709</td>
982
+ <td><strong>DialogStudio</strong> (0.061 B words), <strong>BNC</strong> (0.011 B words), <strong>OANC</strong> (0.005 B words), <strong>AMI</strong> (0.001 B words), <strong>DailyDialog</strong> (0.001 B words), <strong>ICSI</strong> (0.001 B words)</td>
983
+ </tr>
984
+ <tr>
985
+ <td><strong>Claire</strong></td>
986
+ <td><strong>fr</strong></td>
987
+ <td>0.037</td>
988
+ <td>0.209</td>
989
+ <td>0.31</td>
990
+ <td>1.313</td>
991
+ <td><strong>Senat</strong> (0.051 B words), <strong>Theatre</strong> (0.017 B words), <strong>ESLO</strong> (0.005 B words), <strong>CFPP</strong> (0.001 B words), <strong>OFROM</strong> (0.001 B words), <strong>ORFEO</strong> (0.001 B words), <strong>PFC</strong> (0.001 B words), <strong>SUMM</strong> (0.001 B words), <strong>TCOF</strong> (0.001 B words), <strong>ACSYNT</strong>, <strong>CID</strong>, <strong>CLAPI</strong>, <strong>FREDSum</strong>, <strong>LINAGORA</strong>, <strong>OTG</strong>, <strong>ParisStories</strong>, <strong>Rhapsodie</strong>, <strong>UBS</strong></td>
992
+ </tr>
993
+ <tr>
994
+ <td><strong>YouTube</strong></td>
995
+ <td><strong>fr</strong></td>
996
+ <td>0.038</td>
997
+ <td>0.145</td>
998
+ <td>0.336</td>
999
+ <td>1.003</td>
1000
+ <td></td>
1001
+ </tr>
1002
+ <tr>
1003
+ <td><strong>Stac</strong></td>
1004
+ <td><strong>en</strong></td>
1005
+ <td>0.0</td>
1006
+ <td>0.0</td>
1007
+ <td>0.0</td>
1008
+ <td>0.0</td>
1009
+ <td></td>
1010
+ </tr>
1011
+ <tr>
1012
+ <td colspan="7"><h4>Legislative Transcripts</h4></td></tr>
1013
+ <tr>
1014
+ <td><strong>Europarl</strong></td>
1015
+ <td><strong>es</strong></td>
1016
+ <td>0.01</td>
1017
+ <td>0.052</td>
1018
+ <td>0.073</td>
1019
+ <td>0.325</td>
1020
+ <td></td>
1021
+ </tr>
1022
+ <tr>
1023
+ <td><strong>Europarl</strong></td>
1024
+ <td><strong>de</strong></td>
1025
+ <td>0.01</td>
1026
+ <td>0.045</td>
1027
+ <td>0.073</td>
1028
+ <td>0.327</td>
1029
+ <td></td>
1030
+ </tr>
1031
+ <tr>
1032
+ <td><strong>Europarl</strong></td>
1033
+ <td><strong>fr</strong></td>
1034
+ <td>0.01</td>
1035
+ <td>0.053</td>
1036
+ <td>0.072</td>
1037
+ <td>0.339</td>
1038
+ <td></td>
1039
+ </tr>
1040
+ <tr>
1041
+ <td><strong>Europarl</strong></td>
1042
+ <td><strong>en</strong></td>
1043
+ <td>0.011</td>
1044
+ <td>0.056</td>
1045
+ <td>0.069</td>
1046
+ <td>0.339</td>
1047
+ <td></td>
1048
+ </tr>
1049
+ <tr>
1050
+ <td><strong>DiscoursPublics</strong></td>
1051
+ <td><strong>fr</strong></td>
1052
+ <td>0.11</td>
1053
+ <td>0.163</td>
1054
+ <td>0.238</td>
1055
+ <td>1.025</td>
1056
+ <td></td>
1057
+ </tr>
1058
+ <tr>
1059
+ <td><strong>InterventionsParlement</strong></td>
1060
+ <td><strong>fr</strong></td>
1061
+ <td>1.832</td>
1062
+ <td>0.104</td>
1063
+ <td>0.157</td>
1064
+ <td>0.654</td>
1065
+ <td></td>
1066
+ </tr>
1067
+ <tr>
1068
+ <td colspan="7"><h4>Programming</h4></td></tr>
1069
+ <tr>
1070
+ <td><strong>TheStack</strong></td>
1071
+ <td><strong>JAVASCRIPT</strong></td>
1072
+ <td>21.109</td>
1073
+ <td>8.526</td>
1074
+ <td>58.609</td>
1075
+ <td>141.647</td>
1076
+ <td></td>
1077
+ </tr>
1078
+ <tr>
1079
+ <td><strong>TheStack</strong></td>
1080
+ <td><strong>JAVA</strong></td>
1081
+ <td>20.152</td>
1082
+ <td>7.421</td>
1083
+ <td>27.68</td>
1084
+ <td>89.297</td>
1085
+ <td></td>
1086
+ </tr>
1087
+ <tr>
1088
+ <td><strong>TheStack</strong></td>
1089
+ <td><strong>C</strong></td>
1090
+ <td>8.626</td>
1091
+ <td>5.916</td>
1092
+ <td>24.092</td>
1093
+ <td>57.428</td>
1094
+ <td></td>
1095
+ </tr>
1096
+ <tr>
1097
+ <td><strong>TheStack</strong></td>
1098
+ <td><strong>PHP</strong></td>
1099
+ <td>15.905</td>
1100
+ <td>4.865</td>
1101
+ <td>22.883</td>
1102
+ <td>66.844</td>
1103
+ <td></td>
1104
+ </tr>
1105
+ <tr>
1106
+ <td><strong>TheStack</strong></td>
1107
+ <td><strong>PYTHON</strong></td>
1108
+ <td>12.962</td>
1109
+ <td>5.434</td>
1110
+ <td>21.683</td>
1111
+ <td>64.304</td>
1112
+ <td></td>
1113
+ </tr>
1114
+ <tr>
1115
+ <td><strong>TheStack</strong></td>
1116
+ <td><strong>C++</strong></td>
1117
+ <td>6.378</td>
1118
+ <td>4.584</td>
1119
+ <td>18.835</td>
1120
+ <td>50.892</td>
1121
+ <td></td>
1122
+ </tr>
1123
+ <tr>
1124
+ <td><strong>TheStack</strong></td>
1125
+ <td><strong>C#</strong></td>
1126
+ <td>10.839</td>
1127
+ <td>3.574</td>
1128
+ <td>13.381</td>
1129
+ <td>46.286</td>
1130
+ <td></td>
1131
+ </tr>
1132
+ <tr>
1133
+ <td><strong>TheStack</strong></td>
1134
+ <td><strong>GO</strong></td>
1135
+ <td>4.73</td>
1136
+ <td>2.735</td>
1137
+ <td>10.262</td>
1138
+ <td>25.738</td>
1139
+ <td></td>
1140
+ </tr>
1141
+ <tr>
1142
+ <td><strong>TheStack</strong></td>
1143
+ <td><strong>TYPESCRIPT</strong></td>
1144
+ <td>10.637</td>
1145
+ <td>2.617</td>
1146
+ <td>9.836</td>
1147
+ <td>28.815</td>
1148
+ <td></td>
1149
+ </tr>
1150
+ <tr>
1151
+ <td><strong>TheStack</strong></td>
1152
+ <td><strong>RUST</strong></td>
1153
+ <td>1.387</td>
1154
+ <td>0.872</td>
1155
+ <td>3.241</td>
1156
+ <td>9.529</td>
1157
+ <td></td>
1158
+ </tr>
1159
+ <tr>
1160
+ <td><strong>TheStack</strong></td>
1161
+ <td><strong>RUBY</strong></td>
1162
+ <td>3.405</td>
1163
+ <td>0.646</td>
1164
+ <td>2.392</td>
1165
+ <td>7.139</td>
1166
+ <td></td>
1167
+ </tr>
1168
+ <tr>
1169
+ <td><strong>TheStack</strong></td>
1170
+ <td><strong>SWIFT</strong></td>
1171
+ <td>1.756</td>
1172
+ <td>0.553</td>
1173
+ <td>1.876</td>
1174
+ <td>6.134</td>
1175
+ <td></td>
1176
+ </tr>
1177
+ <tr>
1178
+ <td><strong>TheStack</strong></td>
1179
+ <td><strong>KOTLIN</strong></td>
1180
+ <td>2.243</td>
1181
+ <td>0.454</td>
1182
+ <td>1.758</td>
1183
+ <td>5.769</td>
1184
+ <td></td>
1185
+ </tr>
1186
+ <tr>
1187
+ <td><strong>TheStack</strong></td>
1188
+ <td><strong>SCALA</strong></td>
1189
+ <td>1.362</td>
1190
+ <td>0.457</td>
1191
+ <td>1.587</td>
1192
+ <td>4.862</td>
1193
+ <td></td>
1194
+ </tr>
1195
+ <tr>
1196
+ <td><strong>TheStack</strong></td>
1197
+ <td><strong>TEX</strong></td>
1198
+ <td>0.398</td>
1199
+ <td>0.394</td>
1200
+ <td>1.507</td>
1201
+ <td>3.805</td>
1202
+ <td></td>
1203
+ </tr>
1204
+ <tr>
1205
+ <td><strong>TheStack</strong></td>
1206
+ <td><strong>LUA</strong></td>
1207
+ <td>0.559</td>
1208
+ <td>0.318</td>
1209
+ <td>1.367</td>
1210
+ <td>3.279</td>
1211
+ <td></td>
1212
+ </tr>
1213
+ <tr>
1214
+ <td><strong>TheStack</strong></td>
1215
+ <td><strong>DART</strong></td>
1216
+ <td>0.933</td>
1217
+ <td>0.308</td>
1218
+ <td>1.242</td>
1219
+ <td>3.864</td>
1220
+ <td></td>
1221
+ </tr>
1222
+ <tr>
1223
+ <td><strong>TheStack</strong></td>
1224
+ <td><strong>PERL</strong></td>
1225
+ <td>0.392</td>
1226
+ <td>0.297</td>
1227
+ <td>1.149</td>
1228
+ <td>2.634</td>
1229
+ <td></td>
1230
+ </tr>
1231
+ <tr>
1232
+ <td><strong>TheStack</strong></td>
1233
+ <td><strong>MATHEMATICA</strong></td>
1234
+ <td>0.027</td>
1235
+ <td>0.12</td>
1236
+ <td>1.117</td>
1237
+ <td>1.72</td>
1238
+ <td></td>
1239
+ </tr>
1240
+ <tr>
1241
+ <td><strong>TheStack</strong></td>
1242
+ <td><strong>ASSEMBLY</strong></td>
1243
+ <td>0.248</td>
1244
+ <td>0.209</td>
1245
+ <td>0.867</td>
1246
+ <td>1.575</td>
1247
+ <td></td>
1248
+ </tr>
1249
+ <tr>
1250
+ <td><strong>TheStack</strong></td>
1251
+ <td><strong>HASKELL</strong></td>
1252
+ <td>0.545</td>
1253
+ <td>0.307</td>
1254
+ <td>0.807</td>
1255
+ <td>2.364</td>
1256
+ <td></td>
1257
+ </tr>
1258
+ <tr>
1259
+ <td><strong>TheStack</strong></td>
1260
+ <td><strong>FORTRAN</strong></td>
1261
+ <td>0.165</td>
1262
+ <td>0.192</td>
1263
+ <td>0.78</td>
1264
+ <td>1.843</td>
1265
+ <td></td>
1266
+ </tr>
1267
+ <tr>
1268
+ <td><strong>TheStack</strong></td>
1269
+ <td><strong>JULIA</strong></td>
1270
+ <td>0.299</td>
1271
+ <td>0.152</td>
1272
+ <td>0.66</td>
1273
+ <td>1.539</td>
1274
+ <td></td>
1275
+ </tr>
1276
+ <tr>
1277
+ <td><strong>TheStack</strong></td>
1278
+ <td><strong>OCAML</strong></td>
1279
+ <td>0.16</td>
1280
+ <td>0.13</td>
1281
+ <td>0.43</td>
1282
+ <td>1.107</td>
1283
+ <td></td>
1284
+ </tr>
1285
+ <tr>
1286
+ <td><strong>TheStack</strong></td>
1287
+ <td><strong>ERLANG</strong></td>
1288
+ <td>0.099</td>
1289
+ <td>0.066</td>
1290
+ <td>0.26</td>
1291
+ <td>0.726</td>
1292
+ <td></td>
1293
+ </tr>
1294
+ <tr>
1295
+ <td><strong>TheStack</strong></td>
1296
+ <td><strong>ELIXIR</strong></td>
1297
+ <td>0.282</td>
1298
+ <td>0.073</td>
1299
+ <td>0.258</td>
1300
+ <td>0.737</td>
1301
+ <td></td>
1302
+ </tr>
1303
+ <tr>
1304
+ <td><strong>TheStack</strong></td>
1305
+ <td><strong>CLOJURE</strong></td>
1306
+ <td>0.126</td>
1307
+ <td>0.045</td>
1308
+ <td>0.179</td>
1309
+ <td>0.492</td>
1310
+ <td></td>
1311
+ </tr>
1312
+ <tr>
1313
+ <td><strong>TheStack</strong></td>
1314
+ <td><strong>R</strong></td>
1315
+ <td>0.039</td>
1316
+ <td>0.028</td>
1317
+ <td>0.158</td>
1318
+ <td>0.305</td>
1319
+ <td></td>
1320
+ </tr>
1321
+ <tr>
1322
+ <td><strong>TheStack</strong></td>
1323
+ <td><strong>MATLAB</strong></td>
1324
+ <td>0.001</td>
1325
+ <td>0.009</td>
1326
+ <td>0.043</td>
1327
+ <td>0.037</td>
1328
+ <td></td>
1329
+ </tr>
1330
+ <tr>
1331
+ <td><strong>TheStack</strong></td>
1332
+ <td><strong>RACKET</strong></td>
1333
+ <td>0.004</td>
1334
+ <td>0.005</td>
1335
+ <td>0.015</td>
1336
+ <td>0.038</td>
1337
+ <td></td>
1338
+ </tr>
1339
+ </tbody>
1340
+ </table>
1341
+
1342
+
1343
+
1344
+
1345
+ ### Details on Data Sources
1346
+
1347
+ #### AmendementsParlement
1348
+ * <u>Source</u>: Corpus contributed by OpenLLM partners.
1349
+ * <u>Extracted from</u>: [Regards citoyens](https://www.regardscitoyens.org/#&panel1-4) ([nosdeputes.fr](http://www.nosdeputes.fr/), [nossenateurs.fr](http://www.nossenateurs.fr/)). [API](https://github.com/regardscitoyens). License: [CC BY-SA](https://www.regardscitoyens.org/#&panel1-2).
1350
+ * <u>Description</u>: A collection of amendments proposed in the French Parliament, comprising the legal text and a description of the requested modification.
1351
+ * <u>Citation</u>: No paper found.
1352
+
1353
+ #### AmericanStories
1354
+ * <u>Source</u>: [dell-research-harvard/AmericanStories](https://huggingface.co/datasets/dell-research-harvard/AmericanStories). License: [CC BY 4.0](https://huggingface.co/datasets/dell-research-harvard/AmericanStories).
1355
+ * <u>Extracted from</u>: [Chronicling America](https://www.loc.gov/collections/chronicling-america/about-this-collection/). License: [Open](https://www.loc.gov/collections/chronicling-america/about-this-collection/rights-and-access/).
1356
+ * <u>Description</u>: "The American Stories dataset is a collection of full article texts extracted from historical U.S. newspaper images. It includes nearly 20 million scans from the public domain Chronicling America collection maintained by the Library of Congress. The dataset is designed to address the challenges posed by complex layouts and low OCR quality in existing newspaper datasets" (from the [dataset card](https://huggingface.co/datasets/dell-research-harvard/AmericanStories)).
1357
+ * <u>Citation</u>: Melissa Dell, Jacob Carlson, Tom Bryan, Emily Silcock, Abhishek Arora, Zejiang Shen, Luca D'Amico-Wong, Quan Le, Pablo Querubin and Leander Heldring (2023). "American Stories: A Large-Scale Structured Text Dataset of Historical U.S. Newspapers," [arxiv:2308.12477](https://arxiv.org/abs/2308.12477v1).
1358
+
1359
+
1360
+ #### Claire (French and English)
1361
+ * <u>Sources</u>:
1362
+ * French dataset: [OpenLLM-France/Claire-Dialogue-French-0.1](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-French-0.1). License: [CC BY-NC-SA 4.0](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-French-0.1).
1363
+ * English dataset: [OpenLLM-France/Claire-Dialogue-English-0.1](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-English-0.1). License: [CC BY-NC-SA 4.0](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-English-0.1).
1364
+ * <u>Description</u>: The Claire datasets are composed of transcripts of spoken conversation -- including parliamentary proceedings, interviews, debates, meetings, and free conversations -- as well as some written conversations from theater plays and written chats. The dataset is designed to help downstream performance of models fine-tuned for tasks requiring the comprehension of spontaneous spoken conversation, such as meeting summarization. Each dialogue is split into speech turns, and each speech turn is labeled with the name of the speaker or a unique identifier.
1365
+ * <u>Citation</u>: Julie Hunter, Jérôme Louradour, Virgile Rennard, Ismaïl Harrando, Guokan Shang, Jean-Pierre Lorré (2023). The Claire French Dialogue Dataset. [arXiv:2311.16840](https://arxiv.org/abs/2311.16840).
1366
+
1367
+ #### Croissant Aligned
1368
+ * <u>Source</u>: [croissantllm/croissant_dataset_no_web_data](https://huggingface.co/datasets/croissantllm/croissant_dataset_no_web_data). License: not specified.
1369
+ * <u>Extracted from</u>: [OPUS](https://opus.nlpl.eu/), theses, [song lyrics](https://www.lacoccinelle.net)
1370
+ * <u>Description</u>: A collection of English-French translation pairs selected by a custom filtering pipeline. Designed to "improve the multilingual capabilities of the model" ([Arxiv paper](https://arxiv.org/pdf/2402.00786)).
1371
+ * <u>Citation</u>: Manuel Faysse, Patrick Fernandes, Nuno M. Guerreiro, António Loison, Duarte M. Alves, Caio Corro, Nicolas Boizard, João Alves, Ricardo Rei, Pedro H. Martins, Antoni Bigata Casademunt, François Yvon, André F.T. Martins, Gautier Viaud, Céline Hudelot, Pierre Colombo (2024). "CroissantLLM: A Truly Bilingual French-English Language Model," [arXiv:2402.00786](https://arxiv.org/abs/2402.00786).
1372
+
1373
+ #### Discours Publics <pre>(*)</pre>
1374
+ * <u>Source</u>: Corpus contributed by OpenLLM partners.
1375
+ * <u>Extracted from</u>: [Vie Publique](https://www.vie-publique.fr/collection-discours-publics).
1376
+ * <u>Description</u>: A collection of public speeches from the principal public actors in France including speeches from the French President starting from 1974 and from the Prime Minister and members of the government starting from 1980.
1377
+ * <u>Citation</u>: No paper found.
1378
+
1379
+ #### Europarl (monolingual and parallel)
1380
+ * <u>Sources</u>:
1381
+ * `fr-en`, `es-en`, `it-en` parallel data: [Europarl v7](https://www.statmt.org/europarl/v7/). License: [Open](https://www.statmt.org/europarl/).
1382
+ * `fr`, `en`, `de`, `es` monolingual data and `de-fr` parallel data: [Europarl v10](https://www.statmt.org/europarl/v10/training-monolingual/). License: [Open](https://www.statmt.org/europarl/).
1383
+ * <u>Description</u>: "The Europarl parallel corpus is extracted from the proceedings of the European Parliament. It includes versions in 21 European languages: Romanic (French, Italian, Spanish, Portuguese, Romanian), Germanic (English, Dutch, German, Danish, Swedish), Slavik (Bulgarian, Czech, Polish, Slovak, Slovene), Finni-Ugric (Finnish, Hungarian, Estonian), Baltic (Latvian, Lithuanian), and Greek. The goal of the extraction and processing was to generate sentence aligned text for statistical machine translation systems" ([www.statmt.org](https://www.statmt.org/europarl/)).
1384
+ * <u>Citation</u>: Philipp Koehn (2005). "Europarl: A Parallel Corpus for Statistical Machine Translation," MT Summit.
1385
+
1386
+ #### Eurovoc
1387
+ * <u>Source</u>: [EuropeanParliament/Eurovoc](https://huggingface.co/datasets/EuropeanParliament/Eurovoc). License: [EUPL 1.1](https://joinup.ec.europa.eu/licence/european-union-public-licence-version-11-or-later-eupl).
1388
+ * <u>Extracted from</u>: [Cellar](https://op.europa.eu/en/web/cellar). License: [Open](https://op.europa.eu/en/web/cellar).
1389
+ * <u>Description</u>: A collection of multilingual documents from the data repository of the Publications Office of the European Union, annotated with Eurovoc labels.
1390
+ * <u>Citations</u>:
1391
+ * Ilias Chalkidis, Emmanouil Fergadiotis, Prodromos Malakasiotis, Nikolaos Aletras, and Ion Androutsopoulos (2019). "Extreme Multi-Label Legal Text Classification: A Case Study in EU Legislation," Proceedings of the Natural Legal Language Processing Workshop 2019, pages 78–87, Minneapolis, Minnesota. Association for Computational Linguistics.
1392
+ * Ilias Chalkidis, Manos Fergadiotis, Prodromos Malakasiotis and Ion Androutsopoulos (2019). "Large-Scale Multi-Label Text Classification on EU Legislation," Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019), Florence, Italy, (short papers).
1393
+ * Andrei-Marius Avram, Vasile Pais, and Dan Ioan Tufis (2021). "PyEuroVoc: A Tool for Multilingual Legal Document Classification with EuroVoc Descriptors," Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021), pages 92–101, Held Online. INCOMA Ltd.
1394
+ * Zein Shaheen, Gerhard Wohlgenannt and Erwin Filtz (2020). "Large scale legal text classification using transformer models," [arXiv:2010.12871](https://arxiv.org/abs/2010.12871v1).
1395
+
1396
+ #### FineWebEdu
1397
+ * <u>Source</u>: [HuggingFaceFW/fineweb-edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu). License: [ODC-BY](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu).
1398
+ * <u>Extracted from</u>: [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb). License: [ODC-BY](https://huggingface.co/datasets/HuggingFaceFW/fineweb).
1399
+ * <u>Description</u>: A 1.3 trillion token selection from [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb), which contains 15 trillion tokens of curated data from 96 Common Crawl dumps. Content in FineWebEdu has been selected by a custom-designed classifier for its high-quality, educational content.
1400
+ * <u>Citation</u>: Guilherme Penedo, Hynek Kydlíček, Loubna Ben allal, Anton Lozhkov, Margaret Mitchell, Colin Raffel, Leandro Von Werra, Thomas Wolf (2024). "The FineWeb Datasets: Decanting the Web for the Finest Text Data at Scale," [ arXiv:2406.17557](https://arxiv.org/abs/2406.17557).
1401
+
1402
+ #### GallicaMonographies
1403
+ * <u>Source</u>: Corpus contributed by OpenLLM partners. A version is also published here: [PleIAs/French-PD-Books](https://huggingface.co/datasets/PleIAs/French-PD-Books). License: None (public domain).
1404
+ * <u>Extracted from</u>: [Gallicagram](https://shiny.ens-paris-saclay.fr/app/gallicagram).
1405
+ * <u>Description</u>: A large collection of French monographies in the public domain made available through the French National Library ([Gallica](https://gallica.bnf.fr/accueil/fr/content/accueil-fr?mode=desktop)).
1406
+ * <u>Citation</u>: No paper found.
1407
+
1408
+ #### GallicaPress
1409
+ * <u>Source</u>: Corpus contributed by OpenLLM partners. A version is also published here: [PleIAs/French-PD-Newspapers](https://huggingface.co/datasets/PleIAs/French-PD-Newspapers). License: None (public domain).
1410
+ * <u>Extracted from</u>: [Gallicagram](https://shiny.ens-paris-saclay.fr/app/gallicagram).
1411
+ * <u>Description</u>: A large collection of French newspapers and periodicals in the public domain made available through the French National Library ([Gallica](https://gallica.bnf.fr/accueil/fr/content/accueil-fr?mode=desktop)).
1412
+ * <u>Citation</u>: No paper found.
1413
+
1414
+ #### Gutenberg
1415
+ * <u>Source</u>: Corpus compiled by OpenLLM partners.
1416
+ * <u>Extracted from</u>:
1417
+ * [aleph.gutenberg.org](http://aleph.gutenberg.org/) via [Project Gutenberg](https://www.gutenberg.org/). License: [Open](https://www.gutenberg.org/policy/terms_of_use.html).
1418
+ * [pgcorpus](https://github.com/pgcorpus/gutenberg). License: [CC BY-4.0](https://zenodo.org/records/2422561).
1419
+ * <u>Description</u>: A collection of free eBooks, manually prepared by human annotators.
1420
+ * <u>Citation</u>: No paper found.
1421
+
1422
+ #### HAL
1423
+ * <u>Source</u>:
1424
+ * <u>Extracted from</u>: [HAL](https://hal.science/).
1425
+ * <u>Description</u>: A collection of scientific papers and manuscripts distributed through an open science platform.
1426
+ * <u>Citation</u>:
1427
+
1428
+ #### Interventions Parlement
1429
+ * <u>Source</u>: Corpus contributed by OpenLLM partners.
1430
+ * <u>Extracted from</u>: [Regards citoyens](https://www.regardscitoyens.org/#&panel1-4) ([nosdeputes.fr](http://www.nosdeputes.fr/), [nossenateurs.fr](http://www.nossenateurs.fr/)). [API](https://github.com/regardscitoyens). License: [CC BY-SA](https://www.regardscitoyens.org/#&panel1-2).
1431
+ * <u>Description</u>: Transcripts of speeches made during French parliamentary debates.
1432
+ * <u>Citation</u>: No paper found.
1433
+
1434
+ #### MathPile
1435
+ * <u>Source</u>: [GAIR/MathPile_Commercial](https://huggingface.co/datasets/GAIR/MathPile_Commercial). License: CC BY-SA 4.0
1436
+ * <u>Extracted from</u>: [MathPile](https://huggingface.co/datasets/GAIR/MathPile). License: CC BY-SA-NC 4.0.
1437
+ * <u>Description</u>: A preprocessed collection of documents focused on math, including Textbooks, arXiv, Wikipedia, ProofWiki, StackExchange, and web pages from Common Crawl. The content targets a range of levels, from kindergarten through postgraduate level. MathPile_Commercial was obtained by removing documents from MathPile that do not allow commercial use.
1438
+ * <u>Citation</u>: Zengzhi Wang, Rui Xia and Pengfei Liu (2023). "Generative AI for Math: Part I -- MathPile: A Billion-Token-Scale Pretraining Corpus for Math," [ arXiv:2312.17120](https://export.arxiv.org/abs/2312.17120).
1439
+
1440
+ #### OpenData
1441
+ * <u>Source</u>: [Nicolas-BZRD/DILA_OPENDATA_FR_2023](https://huggingface.co/datasets/Nicolas-BZRD/DILA_OPENDATA_FR_2023/tree/main) (balo, dole, inca, kali, legi and sarde subsets). License: ODC-BY.
1442
+ * <u>Extracted from</u>: [OpenData](https://echanges.dila.gouv.fr/OPENDATA/) (Data collection date: October, 2023).
1443
+ * <u>Description</u>: "The French Government Open Data (DILA) Dataset is a collection of text data extracted from various sources provided by the French government, specifically the Direction de l'information légale et administrative (DILA). This dataset contains a wide range of legal, administrative, and legislative documents. The data has been organized into several categories for easy access and analysis" (from the [dataset card](https://huggingface.co/datasets/Nicolas-BZRD/DILA_OPENDATA_FR_2023/tree/main)).
1444
+ * <u>Citation</u>: No paper found.
1445
+
1446
+ #### OpenEdition
1447
+ * <u>Source</u>: Corpus contributed by OpenLLM partners.
1448
+ * <u>Extracted from</u>: [Open Edition](https://www.openedition.org/).
1449
+ * <u>Description</u>:
1450
+ * <u>Citation</u>: No paper found.
1451
+
1452
+ #### PeS2o
1453
+ * <u>Source</u>: [allenai/peS2o](https://huggingface.co/datasets/allenai/peS2o). License: [ODC BY-v1.0](https://opendatacommons.org/licenses/by/1-0/)
1454
+ * <u>Extracted from</u>: [S2ORC](https://github.com/allenai/s2orc) (see [aclanthology](https://aclanthology.org/2020.acl-main.447/)). Knowledge cutoff: 2023-01-03.
1455
+ * <u>Description</u>: A preprocessed collection of academic papers designed for pre-training of language models. It includes a subset of full papers and another subset of titles and abstracts.
1456
+ * <u>Citation</u>: Luca Soldaini and Kyle Lo (2023). "peS2o (Pretraining Efficiently on S2ORC) Dataset," Allen Institute for AI. [GitHub](https://github.com/allenai/pes2o).
1457
+
1458
+ #### Pile (Uncopyrighted)
1459
+ * <u>Source</u>: [monology/pile-uncopyrighted](https://huggingface.co/datasets/monology/pile-uncopyrighted).
1460
+ * <u>Extracted from</u>: FreeLaw, StackExchange, USPTO Backgrounds, DM Mathematics, Ubuntu IRC, Phil Papers, NIH ExPorter from [The Pile](https://huggingface.co/datasets/EleutherAI/pile). License: MIT.
1461
+ * <u>Description</u> (from the [Datasheet](https://arxiv.org/abs/2201.07311)):
1462
+ * FreeLaw: "The Free Law Project is US registered non-profit that provide access to millions of legal opinions and analytical tools for academic studies in the legal realm."
1463
+ * StackExchange: "The StackExchange dataset is a dump of anonymized user-contributed content on the Stack Exchange network, a popular collection of websites centered around user-contributed questions and answers."
1464
+ * USPTO Backgrounds: "The USPTO Backgrounds dataset is a set of background sections from patents granted by the United States Patent and Trademark Office, derived from its published [bulk archives](https://bulkdata.uspto.gov/)."
1465
+ * DM Mathematics: "The DeepMind Mathematics dataset consists of a collection of mathematical problems such as algebra, arithmetic, calculus, number theory, and probability, formatted as natural language prompts [Saxton et al., 2019](https://arxiv.org/abs/1904.01557)."
1466
+ * Ubuntu IRC: "The Ubuntu IRC dataset is derived from the publicly available [chatlogs](https://irclogs.ubuntu.com/) of all Ubuntu-related channels on the Freenode IRC chat server."
1467
+ * PhilPapers: [PhilPapers](https://philpapers.org/) is a dataset of open access philosophy publications from an international database maintained by the Center for Digital Philosophy at the University of Western Ontario.
1468
+ * NIH ExPORTER: "The NIH Grant abstracts provides a bulk-data repository for awarded applications through the ExPORTER4 service covering the fiscal years 1985-present."
1469
+ * <u>Citation</u>:
1470
+ * Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, Connor Leahy (2020). "The Pile: An 800GB Dataset of Diverse Text for Language Modeling," [ arXiv:2101.00027](https://arxiv.org/abs/2101.00027).
1471
+ * Stella Biderman, Kieran Bicheno, Leo Gao (2022). "Datasheet for the Pile," [ arXiv:2201.07311](https://arxiv.org/abs/2201.07311).
1472
+
1473
+ #### QuestionsEcritesParlement
1474
+ * <u>Source</u>: Corpus contributed by OpenLLM partners.
1475
+ * <u>Extracted from</u>: [Regards citoyens](https://www.regardscitoyens.org/#&panel1-4) ([text](https://data.regardscitoyens.org/nosdeputes.fr/)). License: [CC BY-NC-SA](https://data.regardscitoyens.org/nosdeputes.fr/).
1476
+ * <u>Description</u>: A collection of long written questions read during a session of the French National Assembly, addressed by a member of the French Parliament to a minister (who has two months to respond) ([text](https://data.regardscitoyens.org/nosdeputes.fr/)).
1477
+ * <u>Citation</u>: No paper found.
1478
+
1479
+ #### RedPajama (v2)
1480
+ * <u>Source</u>: [togethercomputer/RedPajama-Data-V2](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2). License: Apache 2.0 (data preparation code), Not specified (data) but see [Common Crawl terms of use](https://commoncrawl.org/terms-of-use).
1481
+ * <u>Description</u>: "RedPajama-V2 is an open dataset for training large language models. The dataset includes over 100B text documents coming from 84 CommonCrawl snapshots and processed using the [CCNet](https://github.com/facebookresearch/cc_net) pipeline. Out of these, there are 30B documents in the corpus that additionally come with quality signals, and 20B documents that are deduplicated" (from [GitHub](https://github.com/togethercomputer/RedPajama-Data)).
1482
+ * <u>Citation</u>: Together Computer (2023). "RedPajama-Data-v2: an Open Dataset with 30 Trillion Tokens for Training Large Language Models," [GitHub](https://github.com/togethercomputer/RedPajama-Data).
1483
+
1484
+ #### STAC
1485
+ * <u>Source</u>: [STAC](https://www.irit.fr/STAC/corpus.html). License: CC BY-SA-NC 4.0.
1486
+ * <u>Description</u>: A collection of chats from an online version of the game Settlers of Catan.
1487
+ * <u>Citation</u>: Nicholas Asher, Julie Hunter, Mathieu Morey, Farah Benamara and Stergos Afantenos (2016). "Discourse structure and dialogue acts in multiparty dialogue: the STAC corpus," The Tenth International Conference on Language Resources and Evaluation (LREC 2016). European Language Resources Association, pp. 2721-2727.
1488
+
1489
+ #### TheStack
1490
+ * <u>Source</u>: [bigcode/the-stack-dedup](https://huggingface.co/datasets/bigcode/the-stack-dedup). License: Other (mixture of copyleft licenses).
1491
+ * <u>Description</u>: "The Stack contains over 6TB of permissively-licensed source code files covering 358 programming languages. The dataset was created as part of the BigCode Project, an open scientific collaboration working on the responsible development of Large Language Models for Code (Code LLMs). The Stack serves as a pre-training dataset for Code LLMs, i.e., code-generating AI systems which enable the synthesis of programs from natural language descriptions as well as other from code snippets. This is the near-deduplicated version with 3TB data" (from the [dataset card](https://huggingface.co/datasets/bigcode/the-stack-dedup)).
1492
+ * <u>Citation</u>: Denis Kocetkov, Raymond Li, Loubna Ben Allal, Jia Li, Chenghao Mou, Carlos Muñoz Ferrandis, Yacine Jernite, Margaret Mitchell, Sean Hughes, Thomas Wolf, Dzmitry Bahdanau, Leandro von Werra and Harm de Vries (2022). "The Stack: 3 TB of permissively licensed source code," [arxiv:2211.15533](https://arxiv.org/abs/2211.15533).
1493
+
1494
+ #### Theses
1495
+ * <u>Source</u>: Corpus contributed by OpenLLM partners.
1496
+ * <u>Extracted from</u>: [theses.fr](https://theses.fr/?domaine=theses) and [HAL???]().
1497
+ * <u>Description</u>:
1498
+ * <u>Citation</u>: No paper found.
1499
+
1500
+ #### Wikipedia, Wikisource, Wiktionary
1501
+ * <u>Source</u>: Corpus contributed by LINAGORA Labs (OpenLLM-France).
1502
+ Also published here:
1503
+ * [OpenLLM-France/wikipedia](https://huggingface.co/datasets/OpenLLM-France/wikipedia)
1504
+ * [OpenLLM-France/wikisource](https://huggingface.co/datasets/OpenLLM-France/wikisource)
1505
+ * [OpenLLM-France/wiktionary](https://huggingface.co/datasets/OpenLLM-France/wiktionary)
1506
+ * <u>Extracted from</u>: [Wikimedia dumps](https://dumps.wikimedia.org/other/enterprise_html/runs/). License: [GFDL/CC BY-SA](https://dumps.wikimedia.org/legal.html)
1507
+ * <u>Description</u>:
1508
+ * <u>Citation</u>: No paper found.
1509
+
1510
+ #### YouTube
1511
+ * <u>Source</u>: Corpus contributed by LINAGORA Labs (OpenLLM-France).
1512
+ * <u>Extracted from</u>:
1513
+ * <u>Description</u>:
1514
+ * <u>Citation</u>: No paper found.
1515
+
1516
+ ## Example use in python
1517
 
1518
  Load the dataset using the `datasets` library:
1519
  ```python
 
1522
  kwargs = {"split": "train", "streaming": True}
1523
 
1524
  dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", **kwargs)
1525
+
1526
+ for sample in dataset:
1527
+     text = sample["text"]
1528
+     # ... do something with the text
1529
  ```
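
Because the dataset is loaded in streaming mode, a few samples can be previewed without downloading the full corpus. The snippet below is a minimal sketch (not part of the original card) that peeks at the first samples and lists the fields they expose; the exact metadata fields vary by source, as illustrated in [metadata_examples.json](metadata_examples.json):
```python
from itertools import islice

from datasets import load_dataset

# Stream the dataset so nothing is downloaded ahead of time.
dataset = load_dataset(
    "OpenLLM-France/Lucie-Training-Dataset",
    split="train",
    streaming=True,
)

# Inspect the first 3 samples: their available fields and the start of their text.
for sample in islice(dataset, 3):
    print(sorted(sample.keys()))
    print(sample["text"][:200], "...")
```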
1530
 
1531
  Several configurations are available to select a language, a source, or both, illustrated in the following examples.
 
1544
  ```
1545
  Load data in Python:
1546
  ```python
1547
+ dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "code-python", **kwargs)
1548
  ```
1549
  Load data from Wikipedia (in available languages):
1550
  ```python
 
1554
  ```python
1555
  dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "Wikipedia-fr", **kwargs)
1556
  ```
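
The configurations can also be combined into a custom data mix. The following is a minimal sketch (not the recipe used to train Lucie-7B) that interleaves two of the configurations shown above with arbitrary sampling probabilities, using `interleave_datasets` from the `datasets` library:
```python
from datasets import interleave_datasets, load_dataset

kwargs = {"split": "train", "streaming": True}

# Two of the configurations illustrated above; any other configuration name works the same way.
wiki_fr = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "Wikipedia-fr", **kwargs)
code_py = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "code-python", **kwargs)

# Sample 80% French Wikipedia / 20% Python code; these probabilities are arbitrary examples.
mixed = interleave_datasets([wiki_fr, code_py], probabilities=[0.8, 0.2], seed=42)

for sample in mixed:
    print(sample["text"][:100])
    break
```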
1557
+
1558
+ ## License
1559
+
1560
+ TODO
1561
+
1562
+ ## Citation
1563
+
1564
+ TODO
1565
+
1566
+ ## Contact
1567
+
1568
+ <pre>[email protected]</pre>