Jeronymous committed: Update README.md

README.md CHANGED
@@ -427,6 +427,7 @@ with the following motivations in mind:
* Data mix:
  * French is as well represented as English
    (the Lucie Training Dataset is one of the biggest collections of French text data with a minimum level of quality),
+    so that the LLM is not culturally biased towards English.
  * German, Spanish and Italian are also represented to some extent,
  * Code is also included to boost the reasoning capabilities of LLMs.
* Data filtering and deduplication:
@@ -465,7 +466,7 @@ Examples of metadata (except from `text`) are shown for each source in [metadata

### Example use in Python

-Load the dataset using the `datasets` library:
+Load the dataset using the `datasets` library:
```python
from datasets import load_dataset

@@ -476,28 +477,27 @@ dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", **kwargs)

Several configurations are available to select a language, a source, or both, as illustrated in the following examples.

-
+Load data in French:
```python
dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "fr", **kwargs)
```
-Load data
+Load data where French and English are aligned:
```python
dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "fr,en", **kwargs)
```
-
+Load data corresponding to files written in programming languages:
```python
dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "code", **kwargs)
```
-
+Load data corresponding to Python code:
```python
dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "code:python", **kwargs)
```
-
+Load data from Wikipedia (in all available languages):
```python
dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "Wikipedia", **kwargs)
```
-
+Load data from French Wikipedia pages ([wikipedia.fr](https://www.wikipedia.fr/)):
```python
dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "Wikipedia-fr", **kwargs)
```
-
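For readers who want to run the snippets above, here is a minimal sketch of how the `**kwargs` placeholder might be filled in. The use of streaming mode, the `train` split, and the call to `get_dataset_config_names` to list configurations are illustrative assumptions, not something specified by this change.

```python
from datasets import get_dataset_config_names, load_dataset

# List the configurations exposed by the dataset
# (languages, sources, and their combinations).
configs = get_dataset_config_names("OpenLLM-France/Lucie-Training-Dataset")
print(configs[:10])

# Assumed kwargs for the examples above: stream samples instead of
# downloading the whole corpus, and read the "train" split.
kwargs = {"split": "train", "streaming": True}

# Load the French subset and inspect a few samples; each sample is a dict
# with a `text` field plus source-dependent metadata.
dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "fr", **kwargs)
for i, sample in enumerate(dataset):
    print(sample["text"][:200])
    if i == 2:
        break
```

The same pattern applies to the other configurations shown in the diff, such as `"code:python"` or `"Wikipedia-fr"`.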