Commit 0f3cb76 by Jeronymous (1 parent: 789c727)
Update dataset details. Add numbers about dataset composition in a CSV file.

Files changed:
- README.md (+46, -37)
- metadata/dataset_composition.csv (+97, -0)

README.md CHANGED
@@ -504,11 +504,11 @@ The corpus contains the following information for each text sample:
 * `source`: an identifier for the source(s) of the text sample (`Wikipedia`, `RedPajama`, `Gutenberg`, …).
 The list of all sources is described in this document.
 * `id`: an identifier that is unique within the source.
-* `language`: the language of the text sample,
-  *
-  *
-  * a list of ISO 639-1 codes separated by commas, if the text sample is multilingual: `fr,en`, `de,fr`, `es,en`, `it,en`
-
 * `url` (optional): the URL of the original text sample on the web, if available.
 * `title` (optional): the title of the original text sample, if available.
 * `author` (optional): the author of the original text sample, if available.
@@ -533,6 +533,7 @@ The following table provides an overview of the dataset composition,
 broken down by source and language.
 Sources are grouped by category.
 The table provides the number of documents, words, tokens, and characters for each subset.

 <!-- The following is automatically generated. Do not update manually. -->
 <!-- TABLE START -->
@@ -1452,7 +1453,7 @@ The table provides the number of documents, words, tokens, and characters for each subset.
 #### AmericanStories
 * <u>Source</u>: [dell-research-harvard/AmericanStories](https://huggingface.co/datasets/dell-research-harvard/AmericanStories). License: [CC BY 4.0](https://huggingface.co/datasets/dell-research-harvard/AmericanStories).
 * <u>Extracted from</u>: [Chronicling America](https://www.loc.gov/collections/chronicling-america/about-this-collection/). License: [Open](https://www.loc.gov/collections/chronicling-america/about-this-collection/rights-and-access/).
-* <u>Description</u>: "The American Stories dataset is a collection of full article texts extracted from historical U.S. newspaper images. It includes nearly 20 million scans from the public domain Chronicling America collection maintained by the Library of Congress. The dataset is designed to address the challenges posed by complex layouts and low OCR quality in existing newspaper datasets" (from the [dataset card](https://huggingface.co/datasets/dell-research-harvard/AmericanStories)).
 * <u>Citation</u>: Melissa Dell, Jacob Carlson, Tom Bryan, Emily Silcock, Abhishek Arora, Zejiang Shen, Luca D'Amico-Wong, Quan Le, Pablo Querubin and Leander Heldring (2023). "American Stories: A Large-Scale Structured Text Dataset of Historical U.S. Newspapers," [arxiv:2308.12477](https://arxiv.org/abs/2308.12477v1).


@@ -1460,13 +1461,17 @@ The table provides the number of documents, words, tokens, and characters for each subset.
 * <u>Sources</u>:
   * French dataset: [OpenLLM-France/Claire-Dialogue-French-0.1](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-French-0.1). License: [CC BY-NC-SA 4.0](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-French-0.1).
   * English dataset: [OpenLLM-France/Claire-Dialogue-English-0.1](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-English-0.1). License: [CC BY-NC-SA 4.0](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-English-0.1).
-* <u>
 * <u>Citation</u>: Julie Hunter, Jérôme Louradour, Virgile Rennard, Ismaïl Harrando, Guokan Shang, Jean-Pierre Lorré (2023). The Claire French Dialogue Dataset. [arXiv:2311.16840](https://arxiv.org/abs/2311.16840).

 #### CroissantAligned
-* <u>Source</u>: [croissantllm/croissant_dataset_no_web_data](https://huggingface.co/datasets/croissantllm/croissant_dataset_no_web_data). License: not specified.
-* <u>Extracted from</u>:
-  *
 * <u>Citation</u>: Manuel Faysse, Patrick Fernandes, Nuno M. Guerreiro, António Loison, Duarte M. Alves, Caio Corro, Nicolas Boizard, João Alves, Ricardo Rei, Pedro H. Martins, Antoni Bigata Casademunt, François Yvon, André F.T. Martins, Gautier Viaud, Céline Hudelot, Pierre Colombo (2024). "CroissantLLM: A Truly Bilingual French-English Language Model," [arXiv:2402.00786](https://arxiv.org/abs/2402.00786).

 #### DiscoursPublics
@@ -1485,7 +1490,7 @@ The table provides the number of documents, words, tokens, and characters for each subset.
 #### Eurovoc
 * <u>Source</u>: [EuropeanParliament/Eurovoc](https://huggingface.co/datasets/EuropeanParliament/Eurovoc). License: [EUPL 1.1](https://joinup.ec.europa.eu/licence/european-union-public-licence-version-11-or-later-eupl).
 * <u>Extracted from</u>: [Cellar](https://op.europa.eu/en/web/cellar). License: [Open](https://op.europa.eu/en/web/cellar).
-* <u>Description</u>: A collection of multilingual documents from the data repository of the Publications Office of the European Union, annotated with Eurovoc labels.
 * <u>Citations</u>:
   * Ilias Chalkidis, Emmanouil Fergadiotis, Prodromos Malakasiotis, Nikolaos Aletras, and Ion Androutsopoulos (2019). "Extreme Multi-Label Legal Text Classification: A Case Study in EU Legislation," Proceedings of the Natural Legal Language Processing Workshop 2019, pages 78–87, Minneapolis, Minnesota. Association for Computational Linguistics.
   * Ilias Chalkidis, Manos Fergadiotis, Prodromos Malakasiotis and Ion Androutsopoulos (2019). "Large-Scale Multi-Label Text Classification on EU Legislation," Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019), Florence, Italy, (short papers).
@@ -1495,19 +1500,19 @@ The table provides the number of documents, words, tokens, and characters for each subset.
 #### FineWebEdu
 * <u>Source</u>: [HuggingFaceFW/fineweb-edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu). License: [ODC-BY](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu).
 * <u>Extracted from</u>: [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb). License: [ODC-BY](https://huggingface.co/datasets/HuggingFaceFW/fineweb).
-* <u>Description</u>: A 1.3 trillion token selection from [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb), which contains 15 trillion tokens of curated data from 96 Common Crawl dumps. Content in FineWebEdu has been selected by a custom-designed classifier for its high-quality, educational content.
 * <u>Citation</u>: Guilherme Penedo, Hynek Kydlíček, Loubna Ben allal, Anton Lozhkov, Margaret Mitchell, Colin Raffel, Leandro Von Werra, Thomas Wolf (2024). "The FineWeb Datasets: Decanting the Web for the Finest Text Data at Scale," [arXiv:2406.17557](https://arxiv.org/abs/2406.17557).

 #### GallicaMonographies
 * <u>Source</u>: Corpus contributed by OpenLLM partners. A version is also published here: [PleIAs/French-PD-Books](https://huggingface.co/datasets/PleIAs/French-PD-Books). License: None (public domain).
 * <u>Extracted from</u>: [Gallicagram](https://shiny.ens-paris-saclay.fr/app/gallicagram).
-* <u>Description</u>: A large collection of French monographs in the public domain made available through the French National Library ([Gallica](https://gallica.bnf.fr/accueil/fr/content/accueil-fr?mode=desktop)).
 * <u>Citation</u>: No paper found.

 #### GallicaPress
 * <u>Source</u>: Corpus contributed by OpenLLM partners. A version is also published here: [PleIAs/French-PD-Newspapers](https://huggingface.co/datasets/PleIAs/French-PD-Newspapers). License: None (public domain).
 * <u>Extracted from</u>: [Gallicagram](https://shiny.ens-paris-saclay.fr/app/gallicagram).
-* <u>Description</u>: A large collection of French newspapers and periodicals in the public domain made available through the French National Library ([Gallica](https://gallica.bnf.fr/accueil/fr/content/accueil-fr?mode=desktop)).
 * <u>Citation</u>: No paper found.

 #### Gutenberg
@@ -1519,10 +1524,11 @@ The table provides the number of documents, words, tokens, and characters for each subset.
 * <u>Citation</u>: No paper found.

 #### HAL
-* <u>Source</u>:
 * <u>Extracted from</u>: [HAL](https://hal.science/).
-* <u>Description</u>: A collection of scientific papers and manuscripts distributed through an open science platform.
-* <u>Citation</u>:

 #### InterventionsParlement
 * <u>Source</u>: Corpus contributed by OpenLLM partners.
@@ -1531,13 +1537,13 @@ The table provides the number of documents, words, tokens, and characters for each subset.
 * <u>Citation</u>: No paper found.

 #### MathPile
-* <u>Source</u>: [GAIR/MathPile_Commercial](https://huggingface.co/datasets/GAIR/MathPile_Commercial). License: CC BY-SA 4.0
-* <u>Extracted from</u>: [MathPile](https://huggingface.co/datasets/GAIR/MathPile). License: CC BY-SA-NC 4.0.
 * <u>Description</u>: A preprocessed collection of documents focused on math, including Textbooks, arXiv, Wikipedia, ProofWiki, StackExchange, and web pages from Common Crawl. The content targets a range of levels, from kindergarten through postgraduate level. MathPile_Commercial was obtained by removing documents from MathPile that do not allow commercial use.
 * <u>Citation</u>: Zengzhi Wang, Rui Xia and Pengfei Liu (2023). "Generative AI for Math: Part I -- MathPile: A Billion-Token-Scale Pretraining Corpus for Math," [arXiv:2312.17120](https://export.arxiv.org/abs/2312.17120).

 #### OpenData
-* <u>Source</u>: [Nicolas-BZRD/DILA_OPENDATA_FR_2023](https://huggingface.co/datasets/Nicolas-BZRD/DILA_OPENDATA_FR_2023/tree/main) (balo, dole, inca, kali, legi and sarde subsets). License: ODC-BY.
 * <u>Extracted from</u>: [OpenData](https://echanges.dila.gouv.fr/OPENDATA/) (Data collection date: October, 2023).
 * <u>Description</u>: "The French Government Open Data (DILA) Dataset is a collection of text data extracted from various sources provided by the French government, specifically the Direction de l'information légale et administrative (DILA). This dataset contains a wide range of legal, administrative, and legislative documents. The data has been organized into several categories for easy access and analysis" (from the [dataset card](https://huggingface.co/datasets/Nicolas-BZRD/DILA_OPENDATA_FR_2023/tree/main)).
 * <u>Citation</u>: No paper found.
@@ -1551,19 +1557,19 @@ The table provides the number of documents, words, tokens, and characters for each subset.
 #### PeS2o
 * <u>Source</u>: [allenai/peS2o](https://huggingface.co/datasets/allenai/peS2o). License: [ODC BY-v1.0](https://opendatacommons.org/licenses/by/1-0/)
 * <u>Extracted from</u>: [S2ORC](https://github.com/allenai/s2orc) (see [aclanthology](https://aclanthology.org/2020.acl-main.447/)). Knowledge cutoff: 2023-01-03.
-* <u>Description</u>: A preprocessed collection of academic papers designed for pre-training of language models.
 * <u>Citation</u>: Luca Soldaini and Kyle Lo (2023). "peS2o (Pretraining Efficiently on S2ORC) Dataset," Allen Institute for AI. [GitHub](https://github.com/allenai/pes2o).

 #### Pile (Uncopyrighted)
-* <u>Source</u>: [monology/pile-uncopyrighted](https://huggingface.co/datasets/monology/pile-uncopyrighted).
-* <u>Extracted from</u>: FreeLaw, StackExchange, USPTO Backgrounds, DM Mathematics, Ubuntu IRC,
 * <u>Description</u> (from the [Datasheet](https://arxiv.org/abs/2201.07311)):
   * FreeLaw: "The Free Law Project is US registered non-profit that provide access to millions of legal opinions and analytical tools for academic studies in the legal realm."
   * StackExchange: "The StackExchange dataset is a dump of anonymized user-contributed content on the Stack Exchange network, a popular collection of websites centered around user-contributed questions and answers."
-  * USPTO Backgrounds: "The USPTO Backgrounds dataset is a set of background sections from patents granted by the United States Patent and Trademark Office, derived from its published
   * DM Mathematics: "The DeepMind Mathematics dataset consists of a collection of mathematical problems such as algebra, arithmetic, calculus, number theory, and probability, formatted as natural language prompts [Saxton et al., 2019](https://arxiv.org/abs/1904.01557)."
-  * Ubuntu IRC: "The Ubuntu IRC dataset is derived from the publicly available
-  * PhilPapers:
   * NIH ExPORTER: "The NIH Grant abstracts provides a bulk-data repository for awarded applications through the ExPORTER4 service covering the fiscal years 1985-present."
 * <u>Citation</u>:
   * Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, Connor Leahy (2020). "The Pile: An 800GB Dataset of Diverse Text for Language Modeling," [arXiv:2101.00027](https://arxiv.org/abs/2101.00027).
@@ -1572,28 +1578,31 @@ The table provides the number of documents, words, tokens, and characters for each subset.
 #### QuestionsEcritesParlement
 * <u>Source</u>: Corpus contributed by OpenLLM partners.
 * <u>Extracted from</u>: [Regards citoyens](https://www.regardscitoyens.org/#&panel1-4) ([text](https://data.regardscitoyens.org/nosdeputes.fr/)). License: [CC BY-NC-SA](https://data.regardscitoyens.org/nosdeputes.fr/).
-* <u>Description</u>: Collection of long written questions, read during a session at the
 * <u>Citation</u>: No paper found.

 #### RedPajama (v2)
-* <u>Source</u>: [togethercomputer/RedPajama-Data-V2](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2). License: Apache 2.0 (data preparation code), Not specified (data) but see [Common Crawl terms of use](https://commoncrawl.org/terms-of-use).
-* <u>
 * <u>Citation</u>: Together Computer (2023). "RedPajama-Data-v2: an Open Dataset with 30 Trillion Tokens for Training Large Language Models," [GitHub](https://github.com/togethercomputer/RedPajama-Data).

 #### STAC
-* <u>Source</u>: [STAC](https://www.irit.fr/STAC/corpus.html). License: CC BY-SA-NC 4.0.
 * <u>Description</u>: A collection of chats from an online version of the game Settlers of Catan.
 * <u>Citation</u>: Nicholas Asher, Julie Hunter, Mathieu Morey, Farah Benamara and Stergos Afantenos (2016). "Discourse structure and dialogue acts in multiparty dialogue: the STAC corpus," The Tenth International Conference on Language Resources and Evaluation (LREC 2016). European Language Resources Association, pp. 2721-2727.

 #### TheStack
-* <u>Source</u>: [bigcode/the-stack-dedup](https://huggingface.co/datasets/bigcode/the-stack-dedup). License: Other (mixture of copyleft licenses).
-* <u>
 * <u>Citation</u>: Denis Kocetkov, Raymond Li, Loubna Ben Allal, Jia Li, Chenghao Mou, Carlos Muñoz Ferrandis, Yacine Jernite, Margaret Mitchell, Sean Hughes, Thomas Wolf, Dzmitry Bahdanau, Leandro von Werra and Harm de Vries (2022). "The Stack: 3 TB of permissively licensed source code," [arxiv:2211.15533](https://arxiv.org/abs/2211.15533).

 #### Theses
 * <u>Source</u>: Corpus contributed by OpenLLM partners.
-* <u>Extracted from</u>: [theses.fr](https://theses.fr/?domaine=theses) and [HAL
-* <u>Description</u>:
 * <u>Citation</u>: No paper found.

 #### Wikipedia, Wikisource, Wiktionary
@@ -1602,14 +1611,14 @@ The table provides the number of documents, words, tokens, and characters for each subset.
   * [OpenLLM-France/wikipedia](https://huggingface.co/datasets/OpenLLM-France/wikipedia)
   * [OpenLLM-France/wikisource](https://huggingface.co/datasets/OpenLLM-France/wikisource)
   * [OpenLLM-France/wiktionary](https://huggingface.co/datasets/OpenLLM-France/wiktionary)
-* <u>Extracted from</u>: [Wikimedia dumps](https://dumps.wikimedia.org/other/enterprise_html/runs/). License: [GFDL/CC BY-SA](https://dumps.wikimedia.org/legal.html)
 * <u>Description</u>:
 * <u>Citation</u>: No paper found.

 #### YouTube
 * <u>Source</u>: Corpus contributed by LINAGORA Labs (OpenLLM-France).
-* <u>Extracted from</u>:
-* <u>Description</u>:
 * <u>Citation</u>: No paper found.

 ## Example use in python
 * `source`: an identifier for the source(s) of the text sample (`Wikipedia`, `RedPajama`, `Gutenberg`, …).
 The list of all sources is described in this document.
 * `id`: an identifier that is unique within the source.
+* `language`: the language of the text sample (this information is taken from the source and can be wrong). Possible values are:
+  * an ISO 639-1 code of a natural language: `en`, `fr`, `de`, `es`, or `it`;
+  * the common name of a programming language, prefixed by "`code:`": `code:python`, `code:c++`, …; or
+  * a list of ISO 639-1 codes separated by commas, if the text sample is multilingual: `fr,en`, `de,fr`, `es,en`, `it,en`,
+    or one of those pairs in the opposite order if the languages appear in the opposite order in the text.
 * `url` (optional): the URL of the original text sample on the web, if available.
 * `title` (optional): the title of the original text sample, if available.
 * `author` (optional): the author of the original text sample, if available.
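The three kinds of `language` values described above can be told apart mechanically. A minimal sketch; the helper name and the returned labels are illustrative, not part of the dataset schema:

```python
def classify_language(value: str) -> str:
    """Classify a `language` metadata value.

    The labels returned here ("code", "multilingual", "natural") are
    illustrative names, not fields defined by the dataset.
    """
    if value.startswith("code:"):
        return "code"          # programming language, e.g. "code:python"
    if "," in value:
        return "multilingual"  # e.g. "fr,en"; order follows the text
    return "natural"           # single ISO 639-1 code, e.g. "fr"

print(classify_language("code:c++"))  # code
print(classify_language("de,fr"))     # multilingual
print(classify_language("it"))        # natural
```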
 broken down by source and language.
 Sources are grouped by category.
 The table provides the number of documents, words, tokens, and characters for each subset.
+All numbers in this table are also available in the CSV file [dataset_composition.csv](metadata/dataset_composition.csv).

 <!-- The following is automatically generated. Do not update manually. -->
 <!-- TABLE START -->
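The composition CSV can be processed with the standard library. A sketch with illustrative rows; the column names and numbers below are assumptions for demonstration, and the authoritative header is the first line of `metadata/dataset_composition.csv`:

```python
import csv
import io

# Illustrative stand-in for metadata/dataset_composition.csv;
# column names and figures are assumed, not taken from the real file.
sample_csv = io.StringIO(
    "source,language,documents,words,tokens,characters\n"
    "Wikipedia,fr,2000000,1500000000,2000000000,9000000000\n"
    "Gutenberg,en,50000,3000000000,4000000000,18000000000\n"
)

rows = list(csv.DictReader(sample_csv))

# Aggregate token counts per language across all subsets.
tokens_by_language = {}
for row in rows:
    lang = row["language"]
    tokens_by_language[lang] = tokens_by_language.get(lang, 0) + int(row["tokens"])

print(tokens_by_language)  # {'fr': 2000000000, 'en': 4000000000}
```

To run against the real file, replace `sample_csv` with `open("metadata/dataset_composition.csv")` and adjust the column names to match its header.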
 #### AmericanStories
 * <u>Source</u>: [dell-research-harvard/AmericanStories](https://huggingface.co/datasets/dell-research-harvard/AmericanStories). License: [CC BY 4.0](https://huggingface.co/datasets/dell-research-harvard/AmericanStories).
 * <u>Extracted from</u>: [Chronicling America](https://www.loc.gov/collections/chronicling-america/about-this-collection/). License: [Open](https://www.loc.gov/collections/chronicling-america/about-this-collection/rights-and-access/).
+* <u>Description</u>: "The American Stories dataset is a collection of full article texts extracted from historical U.S. newspaper images. It includes nearly 20 million scans from the public domain Chronicling America collection maintained by the Library of Congress. The dataset is designed to address the challenges posed by complex layouts and low OCR quality in existing newspaper datasets" (from the [dataset card](https://huggingface.co/datasets/dell-research-harvard/AmericanStories)). The text in this dataset was retrieved through OCR.
 * <u>Citation</u>: Melissa Dell, Jacob Carlson, Tom Bryan, Emily Silcock, Abhishek Arora, Zejiang Shen, Luca D'Amico-Wong, Quan Le, Pablo Querubin and Leander Heldring (2023). "American Stories: A Large-Scale Structured Text Dataset of Historical U.S. Newspapers," [arxiv:2308.12477](https://arxiv.org/abs/2308.12477v1).


 * <u>Sources</u>:
   * French dataset: [OpenLLM-France/Claire-Dialogue-French-0.1](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-French-0.1). License: [CC BY-NC-SA 4.0](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-French-0.1).
   * English dataset: [OpenLLM-France/Claire-Dialogue-English-0.1](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-English-0.1). License: [CC BY-NC-SA 4.0](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-English-0.1).
+* <u>Extracted from</u>: see the data cards for the [French](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-French-0.1) and [English](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-English-0.1) datasets.
+* <u>Description</u>: The Claire datasets are composed of transcripts of spoken conversations -- including parliamentary proceedings, interviews, debates, meetings, and free conversations -- as well as some written conversations from theater plays and written chats. The datasets are designed to improve the downstream performance of models fine-tuned for tasks requiring the comprehension of spontaneous spoken conversation, such as meeting summarization. Each dialogue is split into speech turns, and each speech turn is labeled with the name of the speaker or a unique identifier.
 * <u>Citation</u>: Julie Hunter, Jérôme Louradour, Virgile Rennard, Ismaïl Harrando, Guokan Shang, Jean-Pierre Lorré (2023). The Claire French Dialogue Dataset. [arXiv:2311.16840](https://arxiv.org/abs/2311.16840).

 #### CroissantAligned
+* <u>Source</u>: [croissantllm/croissant_dataset_no_web_data](https://huggingface.co/datasets/croissantllm/croissant_dataset_no_web_data/tree/main/aligned_36b) (subset: `aligned_36b`). License: not specified.
+* <u>Extracted from</u>:
+  * Translation pairs: [OPUS](https://opus.nlpl.eu/) (99.6% of the data in CroissantAligned). Pairs extracted from OPUS are labeled as "UnbabelFrEn". License: .
+  * Thesis abstracts: French thesis abstract pairs. License: [ETALAB-Licence-Ouverte-v2.0](https://www.etalab.gouv.fr/wp-content/uploads/2017/04/ETALAB-Licence-Ouverte-v2.0.pdf).
+  * Song lyrics: [lacoccinelle](https://www.lacoccinelle.net). License: .
+* <u>Description</u>: Data extracted from OPUS takes the form of sentence pairs, where one sentence is in French and the other is in English. OPUS pairs were passed through a custom pipeline designed to select the highest-quality sentence pairs; selected pairs are labeled "UnbabelFrEn" in the CroissantAligned dataset. The thesis abstract subset contains French or English thesis abstracts paired with translations written by the thesis author. The song lyrics are translated by contributors to www.lacoccinelle.net. Parallel data are used to boost the multilingual capabilities of models trained on them ([Faysse et al., 2024](https://arxiv.org/pdf/2402.00786)).
 * <u>Citation</u>: Manuel Faysse, Patrick Fernandes, Nuno M. Guerreiro, António Loison, Duarte M. Alves, Caio Corro, Nicolas Boizard, João Alves, Ricardo Rei, Pedro H. Martins, Antoni Bigata Casademunt, François Yvon, André F.T. Martins, Gautier Viaud, Céline Hudelot, Pierre Colombo (2024). "CroissantLLM: A Truly Bilingual French-English Language Model," [arXiv:2402.00786](https://arxiv.org/abs/2402.00786).

 #### DiscoursPublics
 #### Eurovoc
 * <u>Source</u>: [EuropeanParliament/Eurovoc](https://huggingface.co/datasets/EuropeanParliament/Eurovoc). License: [EUPL 1.1](https://joinup.ec.europa.eu/licence/european-union-public-licence-version-11-or-later-eupl).
 * <u>Extracted from</u>: [Cellar](https://op.europa.eu/en/web/cellar). License: [Open](https://op.europa.eu/en/web/cellar).
+* <u>Description</u>: A collection of multilingual documents from the data repository of the Publications Office of the European Union, annotated with Eurovoc labels. The text in this dataset was retrieved through OCR.
 * <u>Citations</u>:
   * Ilias Chalkidis, Emmanouil Fergadiotis, Prodromos Malakasiotis, Nikolaos Aletras, and Ion Androutsopoulos (2019). "Extreme Multi-Label Legal Text Classification: A Case Study in EU Legislation," Proceedings of the Natural Legal Language Processing Workshop 2019, pages 78–87, Minneapolis, Minnesota. Association for Computational Linguistics.
   * Ilias Chalkidis, Manos Fergadiotis, Prodromos Malakasiotis and Ion Androutsopoulos (2019). "Large-Scale Multi-Label Text Classification on EU Legislation," Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019), Florence, Italy, (short papers).
 #### FineWebEdu
 * <u>Source</u>: [HuggingFaceFW/fineweb-edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu). License: [ODC-BY](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu).
 * <u>Extracted from</u>: [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb). License: [ODC-BY](https://huggingface.co/datasets/HuggingFaceFW/fineweb).
+* <u>Description</u>: A 1.3 trillion token selection from [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb), which contains 15 trillion tokens of curated data from 96 Common Crawl dumps. Content in FineWebEdu has been selected by a custom-designed classifier for its high-quality, educational content. Knowledge cutoff: 2019-2024.
 * <u>Citation</u>: Guilherme Penedo, Hynek Kydlíček, Loubna Ben allal, Anton Lozhkov, Margaret Mitchell, Colin Raffel, Leandro Von Werra, Thomas Wolf (2024). "The FineWeb Datasets: Decanting the Web for the Finest Text Data at Scale," [arXiv:2406.17557](https://arxiv.org/abs/2406.17557).

 #### GallicaMonographies
 * <u>Source</u>: Corpus contributed by OpenLLM partners. A version is also published here: [PleIAs/French-PD-Books](https://huggingface.co/datasets/PleIAs/French-PD-Books). License: None (public domain).
 * <u>Extracted from</u>: [Gallicagram](https://shiny.ens-paris-saclay.fr/app/gallicagram).
+* <u>Description</u>: A large collection of French monographs in the public domain made available through the French National Library ([Gallica](https://gallica.bnf.fr/accueil/fr/content/accueil-fr?mode=desktop)). The text in this dataset was retrieved through OCR.
 * <u>Citation</u>: No paper found.

 #### GallicaPress
 * <u>Source</u>: Corpus contributed by OpenLLM partners. A version is also published here: [PleIAs/French-PD-Newspapers](https://huggingface.co/datasets/PleIAs/French-PD-Newspapers). License: None (public domain).
 * <u>Extracted from</u>: [Gallicagram](https://shiny.ens-paris-saclay.fr/app/gallicagram).
+* <u>Description</u>: A large collection of French newspapers and periodicals in the public domain made available through the French National Library ([Gallica](https://gallica.bnf.fr/accueil/fr/content/accueil-fr?mode=desktop)). The text in this dataset was retrieved through OCR.
 * <u>Citation</u>: No paper found.

 #### Gutenberg
 * <u>Citation</u>: No paper found.

 #### HAL
+* <u>Source</u>: The ROOTS corpus by BigScience (unpublished). License: CC BY 4.0.
 * <u>Extracted from</u>: [HAL](https://hal.science/).
+* <u>Description</u>: A collection of scientific papers and manuscripts distributed through an open science platform. The text in this dataset was retrieved through OCR.
+* <u>Citation</u>: Hugo Laurençon, Lucile Saulnier, Thomas Wang, Christopher Akiki, Albert Villanova del Moral, Teven Le Scao, Leandro Von Werra, Chenghao Mou, Eduardo González Ponferrada, Huu Nguyen, Jörg Frohberg, Mario Šaško, Quentin Lhoest, Angelina McMillan-Major, Gerard Dupont, Stella Biderman, Anna Rogers, Loubna Ben allal, Francesco De Toni, Giada Pistilli, Olivier Nguyen, Somaieh Nikpoor, Maraim Masoud, Pierre Colombo, Javier de la Rosa, Paulo Villegas, Tristan Thrush, Shayne Longpre, Sebastian Nagel, Leon Weber, Manuel Muñoz, Jian Zhu, Daniel Van Strien, Zaid Alyafeai, Khalid Almubarak, Minh Chien Vu, Itziar Gonzalez-Dios, Aitor Soroa, Kyle Lo, Manan Dey, Pedro Ortiz Suarez, Aaron Gokaslan, Shamik Bose, David Adelani, Long Phan, Hieu Tran, Ian Yu, Suhas Pai, Jenny Chim, Violette Lepercq, Suzana Ilic, Margaret Mitchell, Sasha Alexandra Luccioni, Yacine Jernite (2022). "[The BigScience ROOTS Corpus: A 1.6TB Composite Multilingual Dataset](https://proceedings.neurips.cc/paper_files/paper/2022/hash/ce9e92e3de2372a4b93353eb7f3dc0bd-Abstract-Datasets_and_Benchmarks.html)," Advances in Neural Information Processing Systems (NeurIPS), 35, 31809-31826.
+

 #### InterventionsParlement
 * <u>Source</u>: Corpus contributed by OpenLLM partners.
 * <u>Citation</u>: No paper found.

 #### MathPile
+* <u>Source</u>: [GAIR/MathPile_Commercial](https://huggingface.co/datasets/GAIR/MathPile_Commercial). License: [CC BY-SA 4.0](https://huggingface.co/datasets/GAIR/MathPile_Commercial).
+* <u>Extracted from</u>: [MathPile](https://huggingface.co/datasets/GAIR/MathPile). License: [CC BY-NC-SA 4.0](https://huggingface.co/datasets/GAIR/MathPile).
 * <u>Description</u>: A preprocessed collection of documents focused on math, including Textbooks, arXiv, Wikipedia, ProofWiki, StackExchange, and web pages from Common Crawl. The content targets a range of levels, from kindergarten through postgraduate level. MathPile_Commercial was obtained by removing documents from MathPile that do not allow commercial use.
 * <u>Citation</u>: Zengzhi Wang, Rui Xia and Pengfei Liu (2023). "Generative AI for Math: Part I -- MathPile: A Billion-Token-Scale Pretraining Corpus for Math," [arXiv:2312.17120](https://export.arxiv.org/abs/2312.17120).

 #### OpenData
+* <u>Source</u>: [Nicolas-BZRD/DILA_OPENDATA_FR_2023](https://huggingface.co/datasets/Nicolas-BZRD/DILA_OPENDATA_FR_2023/tree/main) (balo, dole, inca, kali, legi and sarde subsets). License: [ODC-BY](https://huggingface.co/datasets/Nicolas-BZRD/DILA_OPENDATA_FR_2023/tree/main).
 * <u>Extracted from</u>: [OpenData](https://echanges.dila.gouv.fr/OPENDATA/) (Data collection date: October, 2023).
 * <u>Description</u>: "The French Government Open Data (DILA) Dataset is a collection of text data extracted from various sources provided by the French government, specifically the Direction de l'information légale et administrative (DILA). This dataset contains a wide range of legal, administrative, and legislative documents. The data has been organized into several categories for easy access and analysis" (from the [dataset card](https://huggingface.co/datasets/Nicolas-BZRD/DILA_OPENDATA_FR_2023/tree/main)).
 * <u>Citation</u>: No paper found.
 #### PeS2o
 * <u>Source</u>: [allenai/peS2o](https://huggingface.co/datasets/allenai/peS2o). License: [ODC BY-v1.0](https://opendatacommons.org/licenses/by/1-0/)
 * <u>Extracted from</u>: [S2ORC](https://github.com/allenai/s2orc) (see [aclanthology](https://aclanthology.org/2020.acl-main.447/)). Knowledge cutoff: 2023-01-03.
+* <u>Description</u>: A preprocessed collection of academic papers designed for pre-training of language models. PeS2o is composed of two subsets: one containing full papers and one containing only paper titles and abstracts. Some of the text in this dataset was retrieved through OCR.
 * <u>Citation</u>: Luca Soldaini and Kyle Lo (2023). "peS2o (Pretraining Efficiently on S2ORC) Dataset," Allen Institute for AI. [GitHub](https://github.com/allenai/pes2o).

 #### Pile (Uncopyrighted)
+* <u>Source</u>: [monology/pile-uncopyrighted](https://huggingface.co/datasets/monology/pile-uncopyrighted). License: [Other](https://huggingface.co/datasets/monology/pile-uncopyrighted).
+* <u>Extracted from</u>: [FreeLaw](https://free.law/), [StackExchange](https://stackexchange.com/), [USPTO Backgrounds](https://bulkdata.uspto.gov/), [DM Mathematics](https://github.com/google-deepmind/mathematics_dataset), [Ubuntu IRC](https://irclogs.ubuntu.com/), [PhilPapers](https://philpapers.org/), and NIH ExPORTER from [The Pile](https://huggingface.co/datasets/EleutherAI/pile). License: MIT.
 * <u>Description</u> (from the [Datasheet](https://arxiv.org/abs/2201.07311)):
   * FreeLaw: "The Free Law Project is US registered non-profit that provide access to millions of legal opinions and analytical tools for academic studies in the legal realm."
   * StackExchange: "The StackExchange dataset is a dump of anonymized user-contributed content on the Stack Exchange network, a popular collection of websites centered around user-contributed questions and answers."
+  * USPTO Backgrounds: "The USPTO Backgrounds dataset is a set of background sections from patents granted by the United States Patent and Trademark Office, derived from its published bulk archives."
   * DM Mathematics: "The DeepMind Mathematics dataset consists of a collection of mathematical problems such as algebra, arithmetic, calculus, number theory, and probability, formatted as natural language prompts [Saxton et al., 2019](https://arxiv.org/abs/1904.01557)."
+  * Ubuntu IRC: "The Ubuntu IRC dataset is derived from the publicly available chatlogs of all Ubuntu-related channels on the Freenode IRC chat server."
+  * PhilPapers: a dataset of open access philosophy publications from an international database maintained by the Center for Digital Philosophy at the University of Western Ontario.
   * NIH ExPORTER: "The NIH Grant abstracts provides a bulk-data repository for awarded applications through the ExPORTER4 service covering the fiscal years 1985-present."
 * <u>Citation</u>:
   * Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, Connor Leahy (2020). "The Pile: An 800GB Dataset of Diverse Text for Language Modeling," [arXiv:2101.00027](https://arxiv.org/abs/2101.00027).
1578 |
#### QuestionsEcritesParlement
* <u>Source</u>: Corpus contributed by OpenLLM partners.
* <u>Extracted from</u>: [Regards citoyens](https://www.regardscitoyens.org/#&panel1-4) ([text](https://data.regardscitoyens.org/nosdeputes.fr/)). License: [CC BY-NC-SA](https://data.regardscitoyens.org/nosdeputes.fr/).
* <u>Description</u>: A collection of long written questions read during sessions at the French National Assembly. Each question is asked by a member of the French Parliament and addressed to a minister, who is given two months to respond.
* <u>Citation</u>: No paper found.

#### RedPajama (v2)
* <u>Source</u>: [togethercomputer/RedPajama-Data-V2](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2). License: [Apache 2.0](https://github.com/togethercomputer/RedPajama-Data) (data preparation code); not specified for the data itself, but see the [Common Crawl terms of use](https://commoncrawl.org/terms-of-use).
* <u>Extracted from</u>: [Common Crawl](https://commoncrawl.org/).
* <u>Description</u>: "RedPajama-V2 is an open dataset for training large language models. The dataset includes over 100B text documents coming from 84 CommonCrawl snapshots and processed using the [CCNet](https://github.com/facebookresearch/cc_net) pipeline. Out of these, there are 30B documents in the corpus that additionally come with quality signals, and 20B documents that are deduplicated" (from [GitHub](https://github.com/togethercomputer/RedPajama-Data)). Knowledge cutoff: 2014-2023.
* <u>Citation</u>: Together Computer (2023). "RedPajama-Data-v2: an Open Dataset with 30 Trillion Tokens for Training Large Language Models," [GitHub](https://github.com/togethercomputer/RedPajama-Data).

#### STAC
* <u>Source</u>: [STAC](https://www.irit.fr/STAC/corpus.html). License: [CC BY-NC-SA 4.0](https://www.irit.fr/STAC/corpus.html).
* <u>Extracted from</u>: [STAC](https://www.irit.fr/STAC/corpus.html). The full STAC corpus contains annotations for discourse structure. We use only the text of the chats.
* <u>Description</u>: A collection of chats from an online version of the game Settlers of Catan.
* <u>Citation</u>: Nicholas Asher, Julie Hunter, Mathieu Morey, Farah Benamara and Stergos Afantenos (2016). "Discourse structure and dialogue acts in multiparty dialogue: the STAC corpus," The Tenth International Conference on Language Resources and Evaluation (LREC 2016). European Language Resources Association, pp. 2721-2727.

#### TheStack
* <u>Source</u>: [bigcode/the-stack-dedup](https://huggingface.co/datasets/bigcode/the-stack-dedup). License: [Other](https://huggingface.co/datasets/bigcode/the-stack-dedup) (mixture of copyleft licenses).
* <u>Extracted from</u>: [GHarchive](https://www.gharchive.org/).
* <u>Description</u>: "The Stack contains over 6TB of permissively-licensed source code files covering 358 programming languages. The dataset was created as part of the [BigCode Project](https://www.bigcode-project.org/), an open scientific collaboration working on the responsible development of Large Language Models for Code (Code LLMs). The Stack serves as a pre-training dataset for Code LLMs, i.e., code-generating AI systems which enable the synthesis of programs from natural language descriptions as well as other code snippets. This is the near-deduplicated version with 3TB data" (from the [dataset card](https://huggingface.co/datasets/bigcode/the-stack-dedup)).
* <u>Citation</u>: Denis Kocetkov, Raymond Li, Loubna Ben Allal, Jia Li, Chenghao Mou, Carlos Muñoz Ferrandis, Yacine Jernite, Margaret Mitchell, Sean Hughes, Thomas Wolf, Dzmitry Bahdanau, Leandro von Werra and Harm de Vries (2022). "The Stack: 3 TB of permissively licensed source code," [arXiv:2211.15533](https://arxiv.org/abs/2211.15533).

#### Theses
* <u>Source</u>: Corpus contributed by OpenLLM partners.
* <u>Extracted from</u>: [theses.fr](https://theses.fr/?domaine=theses) and [HAL](https://hal.science/).
* <u>Description</u>: A collection of doctoral theses published in France. The text was retrieved through OCR.
* <u>Citation</u>: No paper found.

#### Wikipedia, Wikisource, Wiktionary
* <u>Source</u>:
  * [OpenLLM-France/wikipedia](https://huggingface.co/datasets/OpenLLM-France/wikipedia)
  * [OpenLLM-France/wikisource](https://huggingface.co/datasets/OpenLLM-France/wikisource)
  * [OpenLLM-France/wiktionary](https://huggingface.co/datasets/OpenLLM-France/wiktionary)
* <u>Extracted from</u>: [Wikimedia dumps](https://dumps.wikimedia.org/other/enterprise_html/runs/). License: [GFDL/CC BY-SA](https://dumps.wikimedia.org/legal.html).
* <u>Description</u>: Encyclopedia articles from Wikipedia (in English, German, French, Spanish and Italian), along with French documents from Wikisource and French dictionary entries from Wiktionary.
* <u>Citation</u>: No paper found.

#### YouTube
* <u>Source</u>: Corpus contributed by LINAGORA Labs (OpenLLM-France).
* <u>Extracted from</u>: [YouTube](https://www.youtube.com/). License: not specified.
* <u>Description</u>: French subtitles from videos published with permissive licenses on YouTube.
* <u>Citation</u>: No paper found.

## Example use in Python

metadata/dataset_composition.csv
ADDED
@@ -0,0 +1,97 @@
name,language,category,M docs,B words,B tokens,B chars,#words/doc,#chars/doc,#tokens/doc,#char/word,#tokens/word
TOTAL,,,2186.562,1356.021,2314.862,8842.200,620,4044,1059,6.5,1.71
TOTAL,fr,,653.812,583.687,928.618,3619.672,893,5536,1420,6.2,1.59
TOTAL,en,,554.289,412.202,611.894,2553.541,744,4607,1104,6.2,1.48
TOTAL,code,,125.769,51.306,228.954,630.749,408,5015,1820,12.3,4.46
TOTAL,de,,165.915,105.609,206.610,764.779,637,4609,1245,7.2,1.96
TOTAL,es,,171.651,123.857,200.825,759.457,722,4424,1170,6.1,1.62
TOTAL,it,,99.440,62.051,112.031,404.454,624,4067,1127,6.5,1.81
TOTAL,fr-en,,410.032,17.016,25.494,107.658,41,263,62,6.3,1.5
TOTAL,it-en,,1.901,0.100,0.151,0.638,53,336,79,6.4,1.51
TOTAL,es-en,,1.961,0.103,0.143,0.631,53,322,73,6.1,1.39
TOTAL,de-fr,,1.792,0.0908,0.141,0.621,51,347,79,6.8,1.55
RedPajama,fr,Web,640.770,477.758,741.023,2974.596,746,4642,1156,6.2,1.55
RedPajama,de,Web,162.779,103.078,201.371,747.631,633,4593,1237,7.3,1.95
RedPajama,es,Web,169.447,121.751,197.125,746.984,719,4408,1163,6.1,1.62
RedPajama,it,Web,97.324,60.194,108.416,393.012,618,4038,1114,6.5,1.8
FineWebEdu,en,Web,421.209,327.453,467.837,2018.215,777,4791,1111,6.2,1.43
GallicaPress,fr,Newspaper,3.205,67.496,121.606,408.882,21060,127576,37943,6.1,1.8
AmericanStories,en,Newspaper,59.420,8.902,14.313,50.844,150,856,241,5.7,1.61
PeS2o,en,Technical,38.972,42.296,65.365,268.963,1085,6901,1677,6.4,1.55
HAL,fr,Technical,0.349,9.356,16.224,58.308,26808,167072,46487,6.2,1.73
Theses,fr,Technical,0.102,7.547,14.060,47.758,73990,468216,137843,6.3,1.86
Pile (USPTO_Backgrounds),en,Technical,5.139,3.492,5.105,22.309,680,4341,993,6.4,1.46
OpenEdition,fr,Technical,0.939,2.225,3.604,14.459,2370,15398,3838,6.5,1.62
Pile (PhilPapers),en,Technical,0.0308,0.363,0.618,2.304,11786,74805,20065,6.3,1.7
Pile (NIH_ExPorter),en,Technical,0.914,0.288,0.431,1.979,315,2165,472,6.9,1.5
GallicaMonographies,fr,Book,0.278,15.106,25.169,90.456,54338,325381,90536,6.0,1.67
Gutenberg,en,Book,0.0563,3.544,5.516,20.579,62948,365524,97975,5.8,1.56
Gutenberg,fr,Book,0.00345,0.227,0.383,1.392,65797,403478,111014,6.1,1.69
Gutenberg,de,Book,0.00188,0.0987,0.193,0.654,52500,347872,102660,6.6,1.96
Gutenberg,it,Book,0.000958,0.0657,0.129,0.414,68580,432150,134656,6.3,1.96
Gutenberg,es,Book,0.000735,0.0512,0.0920,0.303,69660,412245,125170,5.9,1.8
Pile (FreeLaw),en,Legislative Texts,3.415,8.204,14.011,52.580,2402,15397,4103,6.4,1.71
Eurovoc,en,Legislative Texts,0.272,1.523,2.571,9.468,5599,34809,9452,6.2,1.69
Eurovoc,it,Legislative Texts,0.245,0.731,1.527,4.867,2984,19865,6233,6.7,2.09
Eurovoc,de,Legislative Texts,0.247,0.678,1.497,4.915,2745,19899,6061,7.2,2.21
Eurovoc,es,Legislative Texts,0.246,0.757,1.411,4.684,3077,19041,5736,6.2,1.86
OpenData,fr,Legislative Texts,1.169,0.755,1.209,4.638,646,3967,1034,6.1,1.6
QuestionsEcritesParlement,fr,Legislative Texts,0.189,0.108,0.156,0.705,571,3730,825,6.5,1.44
LEGI,fr,Legislative Texts,0.621,0.0878,0.145,0.563,141,907,233,6.4,1.65
AmendementsParlement,fr,Legislative Texts,0.673,0.0452,0.0738,0.274,67,407,110,6.1,1.63
Europarl,de,Legislative Transcripts,0.0102,0.0451,0.0734,0.327,4422,32059,7196,7.3,1.63
Europarl,es,Legislative Transcripts,0.0103,0.0524,0.0733,0.325,5087,31553,7117,6.2,1.4
Europarl,fr,Legislative Transcripts,0.0103,0.0528,0.0717,0.339,5126,32913,6961,6.4,1.36
Europarl,en,Legislative Transcripts,0.0111,0.0563,0.0690,0.339,5072,30541,6216,6.0,1.23
DiscoursPublics,fr,Legislative Transcripts,0.110,0.163,0.238,1.025,1482,9318,2164,6.3,1.46
InterventionsParlement,fr,Legislative Transcripts,1.832,0.104,0.157,0.654,57,357,86,6.3,1.51
Wikipedia,en,Wiki,6.893,4.708,7.898,26.616,683,3861,1146,5.7,1.68
Wikipedia,de,Wiki,2.877,1.709,3.476,11.252,594,3911,1208,6.6,2.03
Wikipedia,fr,Wiki,2.648,1.726,2.940,9.879,652,3731,1110,5.7,1.7
Wikipedia,es,Wiki,1.947,1.245,2.124,7.161,639,3678,1091,5.8,1.71
Wikipedia,it,Wiki,1.870,1.060,1.959,6.161,567,3295,1048,5.8,1.85
wikisource,fr,Wiki,0.186,0.523,0.795,3.080,2812,16559,4274,5.9,1.52
wiktionary,fr,Wiki,0.650,0.0531,0.117,0.347,82,534,180,6.5,2.2
MathPile,en,Math,0.737,3.408,9.637,27.290,4624,37028,13076,8.0,2.83
Pile (DM_Mathematics),en,Math,0.992,1.746,4.928,8.127,1760,8193,4968,4.7,2.82
Pile (StackExchange),en,Forum,15.269,4.534,10.275,33.609,297,2201,673,7.4,2.27
Pile (Ubuntu_IRC),en,Forum,0.0104,0.867,2.159,5.610,83365,539423,207596,6.5,2.49
Claire,en,Dialogue,0.949,0.818,1.161,4.709,862,4962,1223,5.8,1.42
Claire,fr,Dialogue,0.0393,0.210,0.311,1.314,5344,33435,7913,6.3,1.48
YouTube,fr,Dialogue,0.0375,0.145,0.336,1.003,3867,26747,8960,6.9,2.32
Stac,en,Dialogue,0.0000450,0.0000529,0.000121,0.000327,1176,7267,2689,6.2,2.29
CroissantAligned,fr-en,Multilingual Parallel Corpora,408.029,16.911,25.351,107.003,41,262,62,6.3,1.5
EuroparlAligned,it-en,Multilingual Parallel Corpora,1.901,0.100,0.151,0.638,53,336,79,6.4,1.51
EuroparlAligned,fr-en,Multilingual Parallel Corpora,2.003,0.105,0.143,0.655,52,327,71,6.2,1.36
EuroparlAligned,es-en,Multilingual Parallel Corpora,1.961,0.103,0.143,0.631,53,322,73,6.1,1.39
EuroparlAligned,de-fr,Multilingual Parallel Corpora,1.792,0.0908,0.141,0.621,51,347,79,6.8,1.55
TheStack,JAVASCRIPT,Programming,21.109,8.526,58.609,141.647,404,6710,2776,16.6,6.87
TheStack,JAVA,Programming,20.152,7.421,27.680,89.297,368,4431,1374,12.0,3.73
TheStack,C,Programming,8.626,5.916,24.092,57.428,686,6658,2793,9.7,4.07
TheStack,PHP,Programming,15.905,4.865,22.883,66.844,306,4203,1439,13.7,4.7
TheStack,PYTHON,Programming,12.962,5.434,21.683,64.304,419,4961,1673,11.8,3.99
TheStack,C++,Programming,6.378,4.584,18.835,50.892,719,7979,2953,11.1,4.11
TheStack,C#,Programming,10.839,3.574,13.381,46.286,330,4270,1235,13.0,3.74
TheStack,GO,Programming,4.730,2.735,10.262,25.738,578,5441,2170,9.4,3.75
TheStack,TYPESCRIPT,Programming,10.637,2.617,9.836,28.815,246,2709,925,11.0,3.76
TheStack,RUST,Programming,1.387,0.872,3.241,9.529,629,6870,2337,10.9,3.72
TheStack,RUBY,Programming,3.405,0.646,2.392,7.139,190,2097,702,11.1,3.7
TheStack,SWIFT,Programming,1.756,0.553,1.876,6.134,315,3493,1068,11.1,3.39
TheStack,KOTLIN,Programming,2.243,0.454,1.758,5.769,202,2572,784,12.7,3.87
TheStack,SCALA,Programming,1.362,0.457,1.587,4.862,336,3570,1165,10.6,3.47
TheStack,TEX,Programming,0.398,0.394,1.507,3.805,990,9560,3786,9.7,3.82
TheStack,LUA,Programming,0.559,0.318,1.367,3.279,569,5866,2445,10.3,4.3
TheStack,DART,Programming,0.933,0.308,1.242,3.864,330,4141,1331,12.5,4.03
TheStack,PERL,Programming,0.392,0.297,1.149,2.634,758,6719,2931,8.9,3.87
TheStack,MATHEMATICA,Programming,0.0269,0.120,1.117,1.720,4461,63941,41524,14.3,9.31
TheStack,ASSEMBLY,Programming,0.248,0.209,0.867,1.575,843,6351,3496,7.5,4.15
TheStack,HASKELL,Programming,0.545,0.307,0.807,2.364,563,4338,1481,7.7,2.63
TheStack,FORTRAN,Programming,0.165,0.192,0.780,1.843,1164,11170,4727,9.6,4.06
TheStack,JULIA,Programming,0.299,0.152,0.660,1.539,508,5147,2207,10.1,4.34
TheStack,OCAML,Programming,0.160,0.130,0.430,1.107,812,6919,2688,8.5,3.31
TheStack,ERLANG,Programming,0.0994,0.0657,0.260,0.726,661,7304,2616,11.1,3.96
TheStack,ELIXIR,Programming,0.282,0.0731,0.258,0.737,259,2613,915,10.1,3.53
TheStack,CLOJURE,Programming,0.126,0.0448,0.179,0.492,356,3905,1421,11.0,4.0
TheStack,R,Programming,0.0392,0.0278,0.158,0.305,709,7781,4031,11.0,5.68
TheStack,MATLAB,Programming,0.000967,0.00865,0.0427,0.0372,8945,38469,44157,4.3,4.94
TheStack,RACKET,Programming,0.00420,0.00479,0.0153,0.0378,1140,9000,3643,7.9,3.19
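The derived ratio columns in this file (such as `#tokens/word`) can be recomputed from the raw counts with Python's standard `csv` module. The sketch below is a minimal sanity check: the three `TOTAL` rows are copied verbatim from the CSV above, and the recomputed tokens-per-word ratio is compared against the `#tokens/word` column.

```python
import csv
import io

# A few aggregate rows copied verbatim from metadata/dataset_composition.csv
sample = """\
name,language,category,M docs,B words,B tokens,B chars,#words/doc,#chars/doc,#tokens/doc,#char/word,#tokens/word
TOTAL,fr,,653.812,583.687,928.618,3619.672,893,5536,1420,6.2,1.59
TOTAL,en,,554.289,412.202,611.894,2553.541,744,4607,1104,6.2,1.48
TOTAL,code,,125.769,51.306,228.954,630.749,408,5015,1820,12.3,4.46
"""

rows = list(csv.DictReader(io.StringIO(sample)))

# Tokens per word, recomputed from the raw billion-scale counts;
# it should match the #tokens/word column within rounding error.
fertility = {row["language"]: float(row["B tokens"]) / float(row["B words"])
             for row in rows}

for row in rows:
    assert abs(fertility[row["language"]] - float(row["#tokens/word"])) < 0.01
```

Note how code averages roughly 4.5 tokens per word under the tokenizer used for these counts, versus about 1.5-1.6 for English and French prose, which matters when budgeting token counts per category.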