# FineWeb Dataset - GPT-2 Tokenized
This dataset contains FineWeb data that has been preprocessed and tokenized with the GPT-2 tokenizer. The processed data is split across multiple training folders.
Dataset structure:
- fineweb_train_000001 to fineweb_train_000005: Training folders
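
Below is a minimal sketch of what the GPT-2 tokenization step could look like, using `tiktoken`'s GPT-2 encoding. The function name `tokenize_document`, the `uint16` dtype, and the example usage are illustrative assumptions; the exact pipeline and on-disk format of the `fineweb_train_*` folders are not specified here.

```python
# Sketch of GPT-2 tokenization for FineWeb text (assumed workflow, not the
# exact pipeline used to produce this dataset).
import numpy as np
import tiktoken

enc = tiktoken.get_encoding("gpt2")  # GPT-2 BPE vocabulary (50,257 tokens)

def tokenize_document(text: str) -> np.ndarray:
    """Encode one document into GPT-2 token ids."""
    tokens = enc.encode_ordinary(text)        # raw BPE encoding, no special tokens
    tokens.append(enc.eot_token)              # append <|endoftext|> as a document separator
    return np.array(tokens, dtype=np.uint16)  # GPT-2 ids fit in uint16 (< 65,536)

# Example usage:
ids = tokenize_document("Hello, FineWeb!")
print(ids[:8], ids.dtype)
```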