Update README.md
README.md CHANGED
@@ -24,7 +24,7 @@ task_categories:
 language:
 - hi
 - en
-pretty_name:
+pretty_name: long_context
 size_categories:
 - 100K<n<1M
 ---
@@ -33,11 +33,11 @@ size_categories:
 
 This dataset was filtered from AI4BHarat dataset [sangraha](https://huggingface.co/datasets/ai4bharat/sangraha),which is the largest high-quality, cleaned Indic language pretraining data containing 251B tokens summed up over 22 languages, extracted from curated sources, existing multilingual corpora and large scale translations.
 
-This dataset only Hindi as of now
+This dataset contains only Hindi as of now
 
 # Information
 * First this dataset is mainly for long context training
-* The minimum len is
+* The minimum len is `6000` and maximum len is `3754718`
 
 # Getting started
 
@@ -46,6 +46,7 @@ For downloading the entire dataset:
 from datasets import load_dataset
 dataset = load_dataset("damerajee/long_context_hindi")
 ```
+
 If dataset is too big you can simply stream:
 ```python
 from datasets import load_dataset
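
The diff window cuts off before the streaming snippet finishes. A minimal sketch of how that example presumably continues, assuming the standard `datasets` streaming API and a `train` split (the split name is an assumption, since the cut-off hides the actual lines):

```python
from datasets import load_dataset

# streaming=True returns an IterableDataset that yields rows lazily,
# so nothing is downloaded up front; split="train" is an assumption here.
dataset = load_dataset("damerajee/long_context_hindi", split="train", streaming=True)

# Peek at the first few examples without materializing the whole split.
for i, example in enumerate(dataset):
    print(example)
    if i >= 2:
        break
```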
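
The new `Information` bullet pins the minimum and maximum lengths at `6000` and `3754718`. A hedged way to reproduce those numbers over the stream, assuming the content sits in a `text` column and that "len" means character count (neither is confirmed by the diff):

```python
from datasets import load_dataset

# Assumes a "text" column and character-count lengths; both are guesses,
# as the diff does not show the dataset schema.
ds = load_dataset("damerajee/long_context_hindi", split="train", streaming=True)

min_len, max_len = float("inf"), 0
for example in ds:
    n = len(example["text"])
    min_len = min(min_len, n)
    max_len = max(max_len, n)

print(min_len, max_len)  # the README states 6000 and 3754718
```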