Jeronymous committed · Commit 39ef121 · verified · 1 parent: a6028cc

Update README.md

Files changed (1): README.md (+8 -8)
README.md CHANGED
@@ -427,6 +427,7 @@ with the following motivations in mind:
 * Data mix:
   * French is as well represented as English
     (the Lucie Training Dataset is one of the biggest collections of French text data with a minimum of quality),
+    to avoid the LLM being culturally biased towards English.
   * German, Spanish and Italian are also represented to some extent,
   * Code is also included to boost the reasoning capabilities of LLMs.
 * Data filtering and deduplication:
@@ -465,7 +466,7 @@ Examples of metadata (except from `text`) are shown for each source in [metadata
 
 ### Example use in Python
 
-Load the dataset using the `datasets` library:
+Load the dataset using the `datasets` library:
 ```python
 from datasets import load_dataset
 
@@ -476,28 +477,27 @@ dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", **kwargs)
 
 Several configurations are available to select a language, a source, or both, illustrated in the following examples.
 
-Only load data in French:
+Load data in French:
 ```python
 dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "fr", **kwargs)
 ```
-Load data that is aligned in French and English:
+Load data where French and English are aligned:
 ```python
 dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "fr,en", **kwargs)
 ```
-Only load data corresponding to programming languages:
+Load data corresponding to files with programming languages:
 ```python
 dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "code", **kwargs)
 ```
-Only load data in python:
+Load data in Python:
 ```python
 dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "code:python", **kwargs)
 ```
-Only load data from Wikipedia:
+Load data from Wikipedia (in available languages):
 ```python
 dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "Wikipedia", **kwargs)
 ```
-Only load data from Wikipedia in French:
+Load data from French pages of Wikipedia ([wikipedia.fr](https://www.wikipedia.fr/)):
 ```python
 dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "Wikipedia-fr", **kwargs)
 ```
-
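
For reference, a minimal end-to-end sketch of the loading pattern shown in this diff. The README leaves `**kwargs` open, so `streaming=True` and the `train` split name below are assumptions (streaming avoids downloading every parquet file up front); `text` is the field the README itself documents.

```python
from datasets import load_dataset

# Assumption: streaming=True stands in for the README's open-ended **kwargs,
# so records are read lazily instead of downloading the whole corpus.
kwargs = dict(streaming=True)

dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", **kwargs)

# Assumption: the default split is named "train".
sample = next(iter(dataset["train"]))
print(sample["text"][:200])  # `text` is the field documented in the README
```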
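And a similar sketch for the configuration syntax, using the `Wikipedia-fr` configuration listed above; the same `streaming=True` and `train` assumptions apply.

```python
from datasets import load_dataset

# "Wikipedia-fr" is one of the configurations listed in the README;
# streaming=True and the "train" split name are assumptions.
wiki_fr = load_dataset(
    "OpenLLM-France/Lucie-Training-Dataset", "Wikipedia-fr", streaming=True
)

# Print the start of the first three documents, without downloading
# the whole subset.
for i, doc in enumerate(wiki_fr["train"]):
    print(doc["text"][:100].replace("\n", " "))
    if i == 2:
        break
```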
 