Modalities: Tabular, Text
Formats: parquet
Libraries: Datasets, Dask

guipenedo committed c8e8c9c (parent: 1fbf94b): readme changes

Files changed (1): README.md (+14 -6)

README.md:
@@ -20286,7 +20286,7 @@ configs:
  + [Source Data](#source-data)
  + [Data processing steps](#data-processing-steps)
  + [Annotations](#annotations)
- + [Personal and Sensitive Information](#personal-and-sensitive-information)
+ + [Personal and Sensitive Information and opt-out](#personal-and-sensitive-information-and-opt-out)
  * [Considerations for Using the Data](#considerations-for-using-the-data)
  + [Social Impact of Dataset](#social-impact-of-dataset)
  + [Discussion of Biases](#discussion-of-biases)
@@ -20304,7 +20304,11 @@ The **🥂 FineWeb2** dataset is [fully reproducible](https://github.com/hugging

  In particular, on the set of 9 diverse languages we used to guide our processing decisions, **🥂 FineWeb2** outperforms other popular pretraining datasets covering multiple languages (such as CC-100, mC4, CulturaX or HPLT, while being substantially larger) and, in some cases, even performs better than some datasets _specifically curated_ for a single one of these languages, in our diverse set of carefully selected [evaluation tasks: FineTasks](https://huggingface.co/spaces/HuggingFaceFW/blogpost-fine-tasks).

- The data was sourced from 96 [CommonCrawl](https://commoncrawl.org/) snapshots, spanning the _summer of 2013 to April 2024_, and processed using 🏭 [`datatrove`](https://github.com/huggingface/datatrove/), our large scale data processing library. This carefully deduplicated and filtered dataset comprises roughly **8 terabytes of compressed text data**.
+ <center>
+ <img src="https://huggingface.co/datasets/HuggingFaceFW/admin/resolve/main/multilingual_datasets_comparison.png" alt="multilingual-comparisons">
+ </center>
+
+ The data was sourced from 96 [CommonCrawl](https://commoncrawl.org/) snapshots, spanning the _summer of 2013 to April 2024_, and processed using 🏭 [`datatrove`](https://github.com/huggingface/datatrove/), our large scale data processing library. This carefully deduplicated and filtered dataset comprises roughly **8 terabytes of compressed text data**, with almost 3 trillion words (see [_How many tokens?_](#how-many-tokens) for more details). For PII and opt-out see [_Personal and Sensitive Information and opt-out_](#personal-and-sensitive-information-and-opt-out).

  You will find our ablation and evaluation setup in this [github repo](https://github.com/huggingface/fineweb-2). We will soon upload model checkpoints from our ablation experiments.

@@ -24390,6 +24394,8 @@ As such, we chose to only report total number of documents, disk size and words

  See the tables above for the `subset` of the language and version (filtered or removed) of the data you want to download.

+ We currently do not provide smaller `sample` versions, but by setting `limit` or using `streaming=True` you can easily fetch a sample of the data. If there is interest from the community we might upload smaller sampled versions later on.
+
  ### Using 🏭 [`datatrove`](https://github.com/huggingface/datatrove/)

  ```python
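# NOTE: the rest of this code block is truncated in this diff view. What follows is an
# illustrative sketch rather than the exact snippet from the card: the ParquetReader
# usage and the example path ("data/rus_Cyrl/train") are assumptions.
from datatrove.pipeline.readers import ParquetReader

# `limit` caps how many documents are streamed (remove it to read everything)
data_reader = ParquetReader("hf://datasets/HuggingFaceFW/fineweb-2/data/rus_Cyrl/train", limit=1000)
for document in data_reader():
    # each document carries the text plus the annotations described further below
    # (language, language_script, language_score, top_langs, minhash_cluster_size)
    print(document.text[:200])
    print(document.metadata)
```

Since the added note above points to `limit` and `streaming=True` as the way to grab a sample, here is a minimal sketch with the 🤗 `datasets` library as well (the config name `rus_Cyrl` is only an example; pick the `subset` from the tables above):

```python
from datasets import load_dataset

# stream a handful of rows without downloading the full subset
ds = load_dataset("HuggingFaceFW/fineweb-2", name="rus_Cyrl", split="train", streaming=True)
for row in ds.take(5):
    print(row["text"][:200], row["language"], row["language_score"])
```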
@@ -24694,13 +24700,15 @@ See "**Dataset processing steps**" above.

  We augment the original samples with the `language`, `language_script`, `language_score`, `top_langs` and `minhash_cluster_size` annotations. The language related annotations are automatically generated by our [language filter](https://github.com/huggingface/datatrove/blob/main/src/datatrove/pipeline/filters/language_filter.py). `minhash_cluster_size` is computed during the deduplication process, by saving the size of each duplicate cluster before removing all of its documents except one.

- ### Personal and Sensitive Information
+ ### Personal and Sensitive Information and opt-out

  We anonymize email addresses and public IP addresses.

  For emails, we apply a regex pattern and replace any occurrence of an email address with either `email@example.com` or `firstname.lastname@example.org`. For IP addresses, we also employ a regex pattern and then further filter to only anonymize IP addresses [allocated for public networks](https://www.iana.org/assignments/iana-ipv4-special-registry/iana-ipv4-special-registry.xhtml). Matched IP addresses are then replaced with one of the following randomly generated IP addresses, which at the time of dataset creation were not responding to ping requests: `22.214.171.124`, `126.96.36.199`, `188.8.131.52`, `184.108.40.206`, `220.127.116.11`, and `18.104.22.168`. We decided against applying regex patterns for phone numbers due to the high false positive rate.

- Despite our efforts, given that 🥂 FineWeb2 is sourced from the internet at large, it is very likely that some personally identifiable information (PII) will be present. If you find your own PII in 🥂 FineWeb2 and would like it removed, please fill out our [PII removal form](https://forms.gle/VyNT3ZAUPZjPuWp39).
+ Despite our efforts, given that 🥂 FineWeb2 is sourced from the internet at large, it is very likely that some personally identifiable information (PII) will be present. If you find your own PII in 🥂 FineWeb2 and would like it removed, please fill out our [PII removal/opt-out form](https://forms.gle/VyNT3ZAUPZjPuWp39).
+
+ CommonCrawl respects robots.txt at crawl time, but if you are a webmaster and find your website in 🥂 FineWeb2 and would like to have it removed, you may also use the [PII removal/opt-out form](https://forms.gle/VyNT3ZAUPZjPuWp39).

  ## Considerations for Using the Data
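
For reference (an editorial sketch, not part of the diff): the anonymization described above boils down to two regex passes. The snippet below only illustrates the idea; the actual pipeline's patterns are more careful, and here the "allocated for public networks" check is approximated with Python's `ipaddress.is_global`:

```python
import ipaddress
import random
import re

# simplified patterns -- the production filters use more thorough regexes
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
IPV4_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

# replacement values as listed in the card
EMAIL_REPLACEMENTS = ["email@example.com", "firstname.lastname@example.org"]
IP_REPLACEMENTS = ["22.214.171.124", "126.96.36.199", "188.8.131.52",
                   "184.108.40.206", "220.127.116.11", "18.104.22.168"]

def anonymize(text: str) -> str:
    # replace every matched email address with one of the placeholder addresses
    text = EMAIL_RE.sub(lambda _: random.choice(EMAIL_REPLACEMENTS), text)

    def replace_ip(match: re.Match) -> str:
        candidate = match.group(0)
        try:
            ip = ipaddress.ip_address(candidate)
        except ValueError:
            return candidate  # e.g. "999.1.2.3" matches the regex but is not a valid address
        # only anonymize globally routable addresses, approximating the check
        # against the IANA special-purpose registry linked above
        return random.choice(IP_REPLACEMENTS) if ip.is_global else candidate

    return IPV4_RE.sub(replace_ip, text)

print(anonymize("Reach me at jane.doe@gmail.com, server at 8.8.8.8, router at 192.168.0.1"))
```

Likewise, a toy illustration (not the actual MinHash pipeline) of how `minhash_cluster_size` can be recorded during deduplication: one document per duplicate cluster is kept and annotated with the size the cluster had before removal:

```python
from collections import defaultdict

# toy input: (document id, id of the MinHash cluster it was assigned to)
assignments = [("doc_a", 1), ("doc_b", 1), ("doc_c", 2)]

clusters = defaultdict(list)
for doc_id, cluster_id in assignments:
    clusters[cluster_id].append(doc_id)

# keep one document per cluster and record how large the cluster was
kept = [{"id": docs[0], "minhash_cluster_size": len(docs)} for docs in clusters.values()]
print(kept)  # [{'id': 'doc_a', 'minhash_cluster_size': 2}, {'id': 'doc_c', 'minhash_cluster_size': 1}]
```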
 
@@ -24738,7 +24746,7 @@ The dataset is released under the **Open Data Commons Attribution License (ODC-B

  Stay tuned for our **upcoming 📝 blogpost** where we will detail the entire creation process of 🥂 FineWeb2, including all our experiments, how we adapted thresholds for each language and all of our results. If you haven't yet, you can check out the blogpost for the first version: [🍷 FineWeb blogpost](https://huggingface.co/spaces/HuggingFaceFW/blogpost-fineweb-v1) or [read the paper](https://arxiv.org/abs/2406.17557).

- We are very soon also launching a large community effort around high quality multilingual data, be sure to check back in a few days!
+ We are very soon also launching a large community effort around high quality multilingual data, be sure to check back in a few days! We will be coordinating on a rocketchat server we set up for this purpose, where you might also be able to find researchers working on the languages you are interested in: [rocketchat link](https://huggingface.co/spaces/HuggingFaceFW/discussion).

  Finally, if you would like to see your language better represented in CommonCrawl, we strongly encourage you to contribute to the CommonCrawl [web-languages project](https://github.com/commoncrawl/web-languages/tree/main).

@@ -24750,7 +24758,7 @@ Finally, if you would like to see your language better represented in CommonCraw
  title = {FineWeb2: A sparkling update with 1000s of languages},
  month = dec,
  year = 2024,
- doi = { },
+ doi = { 10.57967/hf/3744 },
  url = {https://huggingface.co/datasets/HuggingFaceFW/fineweb-2}
  }
  ```
 