Dataset schema: `html_url` (string, 48-51 chars); `title` (string, 1-290 chars); `comments` (sequence, 0-30 items); `body` (string, 0-228k chars); `number` (int64, 2-7.08k).
https://github.com/huggingface/datasets/issues/6588
fix xlistdir (overloaded os.listdir) returning empty-string names
[]
### Describe the bug

`xlistdir` (the overloaded `os.listdir`) returns an empty string as a name when an entry's `obj["name"]` ends with `"/"`.

### Steps to reproduce the bug

```python
from datasets.download.streaming_download_manager import xjoin, xlistdir

config = DownloadConfig(storage_options=options)
manager = StreamingDownloadManager("ILSVRC2012", download_config=config)
input_path = "lakefs://datalab/main/imagenet/ILSVRC2012.zip"
download_files = manager.download_and_extract(input_path)
current_dir = xjoin(download_files, "ILSVRC2012/Images/ILSVRC2012_img_train")
folder_list = xlistdir(current_dir)
```

Inside the `xlistdir` function, when `obj["name"]` ends with `"/"`, the last path component is empty, so `""` is returned.

### Expected behavior

When `obj["name"]` ends with `"/"`, the folder name should be returned instead of an empty string.

### Environment info

Not provided.
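An editor's sketch of the failure mode described above: taking the last path component of a name with a trailing `/` (what `posixpath.basename` amounts to) yields an empty string, so stripping the trailing separator first recovers the folder name. The helper `_last_component` is hypothetical, not part of `datasets`:

```python
import posixpath

def _last_component(name: str) -> str:
    # Hypothetical helper: directory entries may be listed as "a/b/c/".
    # Strip the trailing separator before taking the basename so that
    # directories keep their names instead of mapping to "".
    return posixpath.basename(name.rstrip("/"))

assert posixpath.basename("a/b/c/") == ""              # the reported bug
assert _last_component("a/b/c/") == "c"                # folder name recovered
assert _last_component("a/b/file.txt") == "file.txt"   # files unaffected
```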
6,588
https://github.com/huggingface/datasets/issues/6585
losing DatasetInfo in Dataset.map when num_proc > 1
[ "Hi ! This issue comes from the fact that `map()` with `num_proc>1` shards the dataset in multiple chunks to be processed (one per process) and merges them. The DatasetInfos of each chunk are then merged together, but for some fields like `dataset_name` it's not been implemented and default to None.\r\n\r\nThe DatasetInfo merge is defined here, in case you'd like to contribute an improvement: \r\n\r\nhttps://github.com/huggingface/datasets/blob/d2e0034122a788015c0834a72e6c6279e7ecbac5/src/datasets/info.py#L269-L270", "#self-assign" ]
### Describe the bug

Hello and thanks for developing this package! When I process a Dataset with the map function using multiple processes, some set attributes of the DatasetInfo get lost and are None in the resulting Dataset.

### Steps to reproduce the bug

```python
from datasets import Dataset, DatasetInfo

def run_map(num_proc):
    dataset = Dataset.from_dict(
        {"col1": [0, 1], "col2": [3, 4]},
        info=DatasetInfo(
            dataset_name="my_dataset",
        ),
    )
    ds = dataset.map(lambda x: x, num_proc=num_proc)
    print(ds.info.dataset_name)

run_map(1)
run_map(2)
```

This prints:

```bash
Map: 100%|██████████| 2/2 [00:00<00:00, 724.66 examples/s]
my_dataset
Map (num_proc=2): 100%|██████████| 2/2 [00:00<00:00, 18.25 examples/s]
None
```

### Expected behavior

I expect the DatasetInfo to be kept as it was, with no difference between running map with num_proc=1 and num_proc=2. Expected output:

```bash
Map: 100%|██████████| 2/2 [00:00<00:00, 724.66 examples/s]
my_dataset
Map (num_proc=2): 100%|██████████| 2/2 [00:00<00:00, 18.25 examples/s]
my_dataset
```

### Environment info

- `datasets` version: 2.16.1
- Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.17
- Python version: 3.8.18
- `huggingface_hub` version: 0.20.2
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
- `fsspec` version: 2023.9.2
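Until the `DatasetInfo` merge covers fields like `dataset_name`, a possible workaround is to copy the lost attributes back from the pre-map info; a minimal sketch, assuming only `dataset_name` needs restoring:

```python
from datasets import Dataset, DatasetInfo

dataset = Dataset.from_dict(
    {"col1": [0, 1], "col2": [3, 4]},
    info=DatasetInfo(dataset_name="my_dataset"),
)
ds = dataset.map(lambda x: x, num_proc=2)

# The multi-process merge can reset fields such as dataset_name to None,
# so restore them from the original info after mapping.
if ds.info.dataset_name is None:
    ds.info.dataset_name = dataset.info.dataset_name

print(ds.info.dataset_name)  # my_dataset
```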
6,585
https://github.com/huggingface/datasets/issues/6584
np.fromfile not supported
[ "@lhoestq\r\nCan you provide me with some ideas?", "Hi ! What's the error ?", "@lhoestq \r\n```\r\nTraceback (most recent call last):\r\n File \"/home/dongzf/miniconda3/envs/dataset_ai/lib/python3.11/runpy.py\", line 198, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/dongzf/miniconda3/envs/dataset_ai/lib/python3.11/runpy.py\", line 88, in _run_code\r\n exec(code, run_globals)\r\n File \"/home/dongzf/.vscode/extensions/ms-python.python-2023.22.1/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py\", line 39, in <module>\r\n cli.main()\r\n File \"/home/dongzf/.vscode/extensions/ms-python.python-2023.22.1/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py\", line 430, in main\r\n run()\r\n File \"/home/dongzf/.vscode/extensions/ms-python.python-2023.22.1/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py\", line 284, in run_file\r\n runpy.run_path(target, run_name=\"__main__\")\r\n File \"/home/dongzf/.vscode/extensions/ms-python.python-2023.22.1/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py\", line 321, in run_path\r\n return _run_module_code(code, init_globals, run_name,\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/dongzf/.vscode/extensions/ms-python.python-2023.22.1/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py\", line 135, in _run_module_code\r\n _run_code(code, mod_globals, init_globals,\r\n File \"/home/dongzf/.vscode/extensions/ms-python.python-2023.22.1/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py\", line 124, in _run_code\r\n exec(code, run_globals)\r\n File \"/mnt/sda/code/dataset_ai/dataset_ai/example/test.py\", line 83, in <module>\r\n data = xnumpy_fromfile(current_dir, download_config=config,dtype=numpy.float32,)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/mnt/sda/code/dataset_ai/dataset_ai/src/datasets/download/streaming_download_manager.py\", line 765, in xnumpy_fromfile\r\n return np.fromfile(xopen(filepath_or_buffer, \"rb\", download_config=download_config).read(), *args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nValueError: embedded null byte\r\n```", " not add read() \r\nthe error is \r\n\r\nreturn np.fromfile(xopen(filepath_or_buffer, \"rb\", download_config=download_config), *args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nio.UnsupportedOperation: fileno", "xopen return obj do not have fileno function\r\nI don't know why?", "I used this method to read point cloud data in the script\r\n\r\n\r\n```python\r\nwith open(velodyne_filepath,\"rb\") as obj:\r\n velodyne_data = numpy.frombuffer(obj.read(), dtype=numpy.float32).reshape([-1, 4])\r\n```" ]
How can `np.fromfile` be wrapped so it can be used like `np.load`?

```python
def xnumpy_fromfile(filepath_or_buffer, *args, download_config: Optional[DownloadConfig] = None, **kwargs):
    import numpy as np

    if hasattr(filepath_or_buffer, "read"):
        return np.fromfile(filepath_or_buffer, *args, **kwargs)
    else:
        filepath_or_buffer = str(filepath_or_buffer)
        return np.fromfile(xopen(filepath_or_buffer, "rb", download_config=download_config).read(), *args, **kwargs)
```

This does not work.
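Per the tracebacks in the comments, `np.fromfile` needs a real file descriptor (`fileno`), which the streamed `xopen` object lacks, and passing the raw bytes instead fails because they are interpreted as a filename. A sketch of a workaround, assuming `xopen` from `datasets.download.streaming_download_manager`, is to read the bytes and parse them with `np.frombuffer` instead:

```python
import numpy as np

from datasets.download.streaming_download_manager import xopen

def xnumpy_fromfile(filepath_or_buffer, *args, download_config=None, **kwargs):
    # np.fromfile requires fileno(); streamed files have none, so read the
    # full payload and let np.frombuffer parse the raw bytes instead.
    if hasattr(filepath_or_buffer, "read"):
        data = filepath_or_buffer.read()
    else:
        with xopen(str(filepath_or_buffer), "rb", download_config=download_config) as f:
            data = f.read()
    return np.frombuffer(data, *args, **kwargs)

# e.g. for KITTI-style point clouds:
# points = xnumpy_fromfile(path, download_config=config, dtype=np.float32).reshape(-1, 4)
```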
6,584
https://github.com/huggingface/datasets/issues/6580
dataset cache stores only one config in the parquet dir and reuses it for all other configs, showing the same data for every config
[]
### Describe the bug

`ds = load_dataset("ai2_arc", "ARC-Easy")` returns the same data for every config. I have tried forcing a redownload, deleting the cache, and changing the cache dir.

### Steps to reproduce the bug

```python
dataset = []
dataset_name = "ai2_arc"
possible_configs = [
    'ARC-Challenge',
    'ARC-Easy'
]
for config in possible_configs:
    dataset_slice = load_dataset(dataset_name, config, ignore_verifications=True, cache_dir='ai2_arc_files')
    dataset.append(dataset_slice)
```

### Expected behavior

All configs should get saved in the cache under their respective names.

### Environment info

ai2_arc
6,580
https://github.com/huggingface/datasets/issues/6579
Unable to load `eli5` dataset with streaming
[ "Hi @haok1402, I have created an issue in the Discussion tab of the corresponding dataset: https://huggingface.co/datasets/eli5/discussions/7\r\nLet's continue the discussion there!" ]
### Describe the bug

Unable to load the `eli5` dataset with streaming.

### Steps to reproduce the bug

This fails with `FileNotFoundError` for https://files.pushshift.io/reddit/submissions:

```
from datasets import load_dataset
load_dataset("eli5", streaming=True)
```

This works correctly:

```
from datasets import load_dataset
load_dataset("eli5")
```

### Expected behavior

- Loading the `eli5` dataset should not raise an error in streaming mode.
- Or, at the very least, a warning should state that streaming mode is not supported for the `eli5` dataset.

### Environment info

- `datasets` version: 2.16.1
- Platform: Linux-6.2.0-39-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.19.4
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
- `fsspec` version: 2023.6.0
6,579
https://github.com/huggingface/datasets/issues/6577
502 Server Errors when streaming large dataset
[ "cc @mariosasko @lhoestq ", "Hi! We should be able to avoid this error by retrying to read the data when it happens. I'll open a PR in `huggingface_hub` to address this.", "Thanks for the fix @mariosasko! Just wondering whether \"500 error\" should also be excluded? I got these errors overnight:\r\n\r\n```\r\nhuggingface_hub.utils._errors.HfHubHTTPError: 500 Server Error: Internal Server Error for url: https://huggingface.co/da\r\ntasets/sanchit-gandhi/concatenated-train-set-label-length-256/resolve/91e6a0cd0356605b021384ded813cfcf356a221c/train/tra\r\nin-02618-of-04012.parquet (Request ID: Root=1-65b18b81-627f2c2943bbb8ab68d19ee2;129537bd-1934-4257-a4d8-1cb774f8e1f8) \r\n \r\nInternal Error - We're working hard to fix this as soon as possible! \r\n```", "Gently pining @mariosasko and @Wauplin - when trying to stream this large dataset from the HF Hub, I'm running into `500 Internal Server Errors` as described above. I'd love to be able to use the Hub exclusively to stream data when training, but this error pops up a few times a week, terminating training runs and causing me to have to rewind to the last saved checkpoint. Do we reckon there's a way we can protect Datasets' streaming against these errors? The same reproducer as the [original comment](https://github.com/huggingface/datasets/issues/6577#issue-2074790848) can be used, but it's somewhat random whether we hit a 500 error. Leaving the full traceback below: \r\n\r\n```\r\nTraceback (most recent call last): \r\n File \"/home/sanchitgandhi/hf/lib/python3.10/site-packages/torch/utils/data/_utils/worker.py\", line 308, in _worker_loo\r\np \r\n data = fetcher.fetch(index) \r\n File \"/home/sanchitgandhi/hf/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py\", line 32, in fetch \r\n data.append(next(self.dataset_iter)) \r\n File \"/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py\", line 1367, in __iter__ \r\n yield from self._iter_pytorch() \r\n File \"/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py\", line 1302, in _iter_pytorch \r\n for key, example in ex_iterable: \r\n File \"/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py\", line 987, in __iter__ \r\n for x in self.ex_iterable: \r\n File \"/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py\", line 867, in __iter__ \r\n yield from self._iter() \r\n File \"/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py\", line 904, in _iter \r\n for key, example in iterator: \r\n File \"/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py\", line 679, in __iter__ \r\n yield from self._iter() \r\n File \"/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py\", line 741, in _iter [235/1892]\r\n for key, example in iterator: \r\n File \"/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py\", line 1119, in __iter__ \r\n for key, example in self.ex_iterable: \r\n File \"/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py\", line 282, in __iter__ \r\n for key, pa_table in self.generate_tables_fn(**self.kwargs): \r\n File \"/home/sanchitgandhi/datasets/src/datasets/packaged_modules/parquet/parquet.py\", line 87, in _generate_tables \r\n for batch_idx, record_batch in enumerate( \r\n File \"pyarrow/_parquet.pyx\", line 1587, in iter_batches \r\n File \"pyarrow/types.pxi\", line 88, in pyarrow.lib._datatype_to_pep3118 \r\n File \"/home/sanchitgandhi/datasets/src/datasets/download/streaming_download_manager.py\", line 342, in read_with_retrie\r\ns \r\n out = read(*args, **kwargs) \r\n File 
\"/home/sanchitgandhi/hf/lib/python3.10/site-packages/fsspec/spec.py\", line 1856, in read \r\n out = self.cache._fetch(self.loc, self.loc + length) \r\n File \"/home/sanchitgandhi/hf/lib/python3.10/site-packages/fsspec/caching.py\", line 189, in _fetch \r\n self.cache = self.fetcher(start, end) # new block replaces old \r\n File \"/home/sanchitgandhi/hf/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py\", line 629, in _fetch_rang\r\ne \r\n hf_raise_for_status(r) \r\n File \"/home/sanchitgandhi/hf/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py\", line 362, in hf_raise_for\r\n_status \r\n raise HfHubHTTPError(str(e), response=response) from e \r\nhuggingface_hub.utils._errors.HfHubHTTPError: 500 Server Error: Internal Server Error for url: https://huggingface.co/da\r\ntasets/sanchit-gandhi/concatenated-train-set-label-length-256-conditioned/resolve/3c3c0cce51df9f9d2e75968bb2a1851894f504\r\n0d/train/train-03515-of-04010.parquet (Request ID: Root=1-65c7c4c4-153fe71401558c8c2d272c8a;fec3ec68-4a0a-4bfd-95ba-b0a0\r\n5684d612) \r\n \r\nInternal Error - We're working hard to fix this as soon as possible! ", "@sanchit-gandhi thanks for the feedback. I've opened https://github.com/huggingface/huggingface_hub/pull/2026 to make the download process more robust. I believe that you've witness this problem on Saturday due to the Hub outage. Hope the PR will make your life easier though :)", "Awesome, thanks @Wauplin! Makes sense re the Hub outage" ]
### Describe the bug

When streaming a [large ASR dataset](https://huggingface.co/datasets/sanchit-gandhi/concatenated-train-set) from the Hub (~3TB), I often encounter 502 Server Errors, seemingly at random, during streaming:

```
huggingface_hub.utils._errors.HfHubHTTPError: 502 Server Error: Bad Gateway for url: https://huggingface.co/datasets/sanchit-gandhi/concatenated-train-set/resolve/7d2acc5c59de848e456e951a76e805304d6fb350/train/train-00288-of-07135.parquet
```

This is despite the parquet file definitely existing on the Hub: https://huggingface.co/datasets/sanchit-gandhi/concatenated-train-set/blob/main/train/train-00228-of-07135.parquet

And having the correct commit id: [7d2acc5c59de848e456e951a76e805304d6fb350](https://huggingface.co/datasets/sanchit-gandhi/concatenated-train-set/commits/main/train)

I'm wondering whether this is coming from `datasets`, or from the Hub side?

### Steps to reproduce the bug

Reproducer:

```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from tqdm import tqdm

NUM_EPOCHS = 20

dataset = load_dataset("sanchit-gandhi/concatenated-train-set", "train", streaming=True)
dataset = dataset.with_format("torch")
dataloader = DataLoader(dataset["train"], batch_size=256, drop_last=True, pin_memory=True, num_workers=16)

for epoch in tqdm(range(NUM_EPOCHS), desc="Epoch", position=0):
    for batch in tqdm(dataloader, desc="Batch", position=1):
        continue
```

Running the above script tends to fail within about 2 hours with a traceback like the following:

<details>
<summary> Traceback: </summary>

```python
for batch in train_loader:
  File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 630, in __next__
    data = self._next_data()
  File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1325, in _next_data
    return self._process_data(data)
  File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1371, in _process_data
    data.reraise()
  File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/_utils.py", line 694, in reraise
    raise exception
huggingface_hub.utils._errors.HfHubHTTPError: Caught HfHubHTTPError in DataLoader worker process 10.
Original Traceback (most recent call last):
  File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py", line 286, in hf_raise_for_status
    response.raise_for_status()
  File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/requests/models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 502 Server Error: Bad Gateway for url: https://huggingface.co/datasets/sanchit-gandhi/concatenated-train-set/resolve/7d2acc5c59de848e456e951a76e805304d6fb350/train/train-00288-of-07135.parquet

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 32, in fetch
    data.append(next(self.dataset_iter))
  File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 1363, in __iter__
    yield from self._iter_pytorch()
  File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 1298, in _iter_pytorch
    for key, example in ex_iterable:
  File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 983, in __iter__
    for x in self.ex_iterable:
  File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 863, in __iter__
    yield from self._iter()
  File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 900, in _iter
    for key, example in iterator:
  File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 679, in __iter__
    yield from self._iter()
  File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 741, in _iter
    for key, example in iterator:
  File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 863, in __iter__
    yield from self._iter()
  File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 900, in _iter
    for key, example in iterator:
  File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 1115, in __iter__
    for key, example in self.ex_iterable:
  File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 679, in __iter__
    yield from self._iter()
  File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 741, in _iter
    for key, example in iterator:
  File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 1115, in __iter__
    for key, example in self.ex_iterable:
  File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 282, in __iter__
    for key, pa_table in self.generate_tables_fn(**self.kwargs):
  File "/home/sanchitgandhi/datasets/src/datasets/packaged_modules/parquet/parquet.py", line 87, in _generate_tables
    for batch_idx, record_batch in enumerate(
  File "pyarrow/_parquet.pyx", line 1367, in iter_batches
  File "pyarrow/types.pxi", line 88, in pyarrow.lib._datatype_to_pep3118
  File "/home/sanchitgandhi/datasets/src/datasets/download/streaming_download_manager.py", line 341, in read_with_retries
    out = read(*args, **kwargs)
  File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/fsspec/spec.py", line 1856, in read
    out = self.cache._fetch(self.loc, self.loc + length)
  File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/fsspec/caching.py", line 189, in _fetch
    self.cache = self.fetcher(start, end)  # new block replaces old
  File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/huggingface_hub/hf_file_system.py", line 626, in _fetch_range
    hf_raise_for_status(r)
  File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py", line 333, in hf_raise_for_status
    raise HfHubHTTPError(str(e), response=response) from e
huggingface_hub.utils._errors.HfHubHTTPError: 502 Server Error: Bad Gateway for url: https://huggingface.co/datasets/sanchit-gandhi/concatenated-train-set/resolve/7d2acc5c59de848e456e951a76e805304d6fb350/train/train-00288-of-07135.parquet
```

</details>

### Expected behavior

Should be able to stream the dataset without any 502 errors.

### Environment info

- `datasets` version: 2.16.2.dev0
- Platform: Linux-5.13.0-1023-gcp-x86_64-with-glibc2.29
- Python version: 3.8.10
- `huggingface_hub` version: 0.20.1
- PyArrow version: 14.0.2
- Pandas version: 2.0.3
- `fsspec` version: 2023.10.0
6,577
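Regarding the 502 errors in issue 6577 above: until the `huggingface_hub`-side retry fix lands, a rough user-side sketch is to wrap the streaming loop and resume after transient 5xx errors. The retry budget and backoff values are arbitrary choices, and re-creating the dataloader restarts iteration from the beginning of the stream:

```python
import time

from huggingface_hub.utils import HfHubHTTPError

def iterate_with_retries(make_dataloader, max_retries=3, base_delay=10.0):
    """Yield batches, re-creating the dataloader after transient Hub errors.

    Caveat: a restart resumes from the beginning of the shard order, so this
    suits loops that can tolerate repeated samples (e.g. shuffled training).
    """
    attempt = 0
    while True:
        try:
            for batch in make_dataloader():
                attempt = 0  # progress made, reset the retry budget
                yield batch
            return
        except HfHubHTTPError:
            attempt += 1
            if attempt > max_retries:
                raise
            # 500/502 responses are usually transient: back off, then retry.
            time.sleep(base_delay * 2 ** (attempt - 1))

# Usage with the reproducer above:
# for batch in iterate_with_retries(lambda: DataLoader(dataset["train"], batch_size=256)):
#     ...
```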
https://github.com/huggingface/datasets/issues/6576
documentation page returns 404 Not Found after redirection
[ "Thanks for reporting! I've opened a PR with a fix." ]
### Describe the bug

The redirected page returns 404 Not Found.

### Steps to reproduce the bug

1. In this tutorial: https://huggingface.co/learn/nlp-course/chapter5/4?fw=pt (original md: https://github.com/huggingface/course/blob/2c733c2246b8b7e0e6f19a9e5d15bb12df43b2a3/chapters/en/chapter5/4.mdx#L49)

```
By default, 🤗 Datasets will decompress the files needed to load a dataset. If you want to preserve hard drive space, you can pass `DownloadConfig(delete_extracted=True)` to the `download_config` argument of `load_dataset()`. See the [documentation](https://huggingface.co/docs/datasets/package_reference/builder_classes.html?#datasets.utils.DownloadConfig) for more details.
```

2. The documentation points to `https://huggingface.co/docs/datasets/package_reference/builder_classes.html?#datasets.utils.DownloadConfig`, which shows:

`The documentation page PACKAGE_REFERENCE/BUILDER_CLASSES.HTML doesn't exist in v2.16.1, but exists on the main version. Click [here](https://huggingface.co/docs/datasets/main/en/package_reference/builder_classes.html) to redirect to the main version of the documentation.`

3. But the redirect target `https://huggingface.co/docs/datasets/main/en/package_reference/builder_classes.html` is 404 Not Found.

### Expected behavior

I guess the redirect target should be `https://huggingface.co/docs/datasets/main/en/package_reference/builder_classes` (without `.html`) or `https://huggingface.co/docs/datasets/main/en/package_reference/builder_classes#datasets.DownloadConfig`.

### Environment info

Datasets main
6,576
https://github.com/huggingface/datasets/issues/6571
Make DatasetDict.column_names return a list instead of dict
[]
Currently, `DatasetDict.column_names` returns a dict, with each split name as keys and the corresponding list of column names as values. However, by construction, all splits have the same column names. I think it makes more sense to return a single list with the column names, which is the same for all the split keys.
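For illustration, the current and proposed behavior side by side (the GLUE/MRPC dataset is just an example):

```python
from datasets import load_dataset

ds = load_dataset("glue", "mrpc")

# Current behavior: one identical list per split.
print(ds.column_names)
# {'train': [...], 'validation': [...], 'test': [...]}  # same list three times

# Proposed behavior: the single shared list, recoverable today with:
column_names = next(iter(ds.column_names.values()))
print(column_names)  # ['sentence1', 'sentence2', 'label', 'idx']
```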
6,571
https://github.com/huggingface/datasets/issues/6570
No online docs for 2.16 release
[ "Though the `build / build_main_documentation` CI job ran for 2.16.0: https://github.com/huggingface/datasets/actions/runs/7300836845/job/19896275099 🤔 ", "Yes, I saw it. Maybe @mishig25 can give us some hint...", "fixed https://huggingface.co/docs/datasets/v2.16.0/en/index", "Still missing 2.16.1.", "> Still missing 2.16.1.\r\n\r\nre-running the doc-buld job for the missing ones should fix\r\n\r\n", "Re-running the job for the 2.16.1 release: https://github.com/huggingface/datasets/actions/runs/7365231552/job/20310278583", "Fixed for 2.16.1: https://huggingface.co/docs/datasets/v2.16.1/en/index" ]
We do not have the online docs for the latest minor release 2.16 (neither 2.16.0 nor 2.16.1). In the online docs, the latest version appearing is 2.15.0: https://huggingface.co/docs/datasets/index ![Screenshot from 2024-01-09 08-43-08](https://github.com/huggingface/datasets/assets/8515462/83613222-867f-41f4-8833-7a4a76582f44)
6,570
https://github.com/huggingface/datasets/issues/6569
WebDataset ignores features defined in YAML or passed to load_dataset
[]
We should not override the features if they already exist: https://github.com/huggingface/datasets/blob/d26abadce0b884db32382b92422d8a6aa997d40a/src/datasets/packaged_modules/webdataset/webdataset.py#L78-L85
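A sketch of the proposed guard around the linked lines: infer features only when none were provided via YAML or `load_dataset(...)`. Here `inferred_features` stands in for whatever the builder computes from the first samples and is not the actual variable name:

```python
# Sketch of the intended behavior inside the WebDataset builder (not a diff):
if self.info.features is None:
    # No features declared in YAML or passed to load_dataset(...):
    # fall back to the features inferred from the first examples.
    self.info.features = inferred_features  # hypothetical variable
# Otherwise keep the user-declared features untouched.
```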
6,569
https://github.com/huggingface/datasets/issues/6568
keep_in_memory=True does not seem to work
[ "Seems like I just used the old code which did not have `keep_in_memory=True` argument, sorry.\r\n\r\nAlthough i encountered a different problem – at 97% my python process just hung for around 11 minutes with no logs (when running dataset.map without `keep_in_memory=True` over around 3 million of dataset samples)...", "Can you open a new issue and provide a bit more details ? What kind of map operations did you run ?", "Hey. I will try to find some free time to describe it.\r\n\r\n(can't do it now, cause i need to reproduce it myself to be sure about everything, which requires spinning a new Azuree VM, copying a huge dataset to drive from network disk for a long time etc...)", "@lhoestq loading dataset like this does not spawn 50 python processes:\r\n\r\n```\r\ndatasets.load_dataset(\"/preprocessed_2256k/train\", num_proc=50)\r\n```\r\n\r\nI have 64 vCPU so i hoped it could speed up the dataset loading...\r\n\r\nMy dataset onlly has images and metadata.csv with text column alongside image file path column", "now noticed\r\n```\r\n'Setting num_proc from 50 back to 1 for the train split to disable multiprocessing as it only contains one shard\r\n```\r\n\r\nAny way to work around this?", "@lhoestq thanks, [this helped](https://github.com/huggingface/datasets/blob/9d6d16117a30ba345b0236407975f701c5b288d4/src/datasets/arrow_dataset.py#L1053)\r\n\r\n" ]
UPD: [Fixed](https://github.com/huggingface/datasets/issues/6568#issuecomment-1880817794). But a new issue came up :(
6,568
https://github.com/huggingface/datasets/issues/6567
AttributeError: 'str' object has no attribute 'to'
[ "I think you are reporting an issue with the `transformers` library. Note this is the repository of the `datasets` library. I recommend that you open an issue in their repository: https://github.com/huggingface/transformers/issues\r\n\r\nEDIT: I have not the rights to transfer the issue\r\n~~I am transferring your issue to their repository.~~", "Thanks, I hope someone from transformers library addresses this issue.\r\n\r\nOn Mon, Jan 8, 2024 at 15:29 Albert Villanova del Moral <\r\n***@***.***> wrote:\r\n\r\n> I think you are reporting an issue with the transformers library. Note\r\n> this is the repository of the datasets library. I am transferring your\r\n> issue to their repository.\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/6567#issuecomment-1880688586>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AE4LJNOYMD6WJMXFKPMH6DLYNO7PJAVCNFSM6AAAAABBQ63HWOVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTQOBQGY4DQNJYGY>\r\n> .\r\n> You are receiving this because you authored the thread.Message ID:\r\n> ***@***.***>\r\n>\r\n", "@andysingal, I recommend that you open an issue in their repository: https://github.com/huggingface/transformers/issues\r\nI don't have the rights to transfer this issue to their repo." ]
### Describe the bug

```
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-6-80c6086794e8> in <cell line: 10>()
      8     report_to="wandb")
      9
---> 10 trainer = Trainer(
     11     model=model,
     12     args=training_args,

1 frames
/usr/local/lib/python3.10/dist-packages/transformers/trainer.py in _move_model_to_device(self, model, device)
    688
    689     def _move_model_to_device(self, model, device):
--> 690         model = model.to(device)
    691         # Moving a model to an XLA device disconnects the tied weights, so we have to retie them.
    692         if self.args.parallel_mode == ParallelMode.TPU and hasattr(model, "tie_weights"):

AttributeError: 'str' object has no attribute 'to'
```

### Steps to reproduce the bug

Here is the notebook: https://colab.research.google.com/drive/10JDBNsLlYrQdnI2FWfDK3F5M8wvVUDXG?usp=sharing

### Expected behavior

Run the training.

### Environment info

Colab notebook, T4
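For context, this error typically means a string (e.g. a checkpoint name) was handed to `Trainer` where a model instance is expected, so `Trainer` calls `.to(device)` on the string. A sketch of the likely fix; the checkpoint name is a placeholder:

```python
from transformers import AutoModelForCausalLM, Trainer, TrainingArguments

model_name = "your-checkpoint"  # placeholder

# Wrong: Trainer(model=model_name, ...) -> AttributeError: 'str' object has no attribute 'to'
# Right: load the model object first, then pass it to Trainer.
model = AutoModelForCausalLM.from_pretrained(model_name)

trainer = Trainer(model=model, args=TrainingArguments(output_dir="out"))
```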
6,567
https://github.com/huggingface/datasets/issues/6566
Training controlnet_sdxl in bf16 raises "Got unsupported ScalarType BFloat16" in datasets
[ "I also see the same error and get passed it by casting that line to float. \r\n\r\nso `for x in obj.detach().cpu().numpy()` becomes `for x in obj.detach().to(torch.float).cpu().numpy()`\r\n\r\nI got the idea from [this ](https://github.com/kohya-ss/sd-webui-additional-networks/pull/128/files) PR where someone was facing a similar issue (in a different repository). I guess numpy doesn't support bfloat16.\r\n\r\n" ]
### Describe the bug

```
Traceback (most recent call last):
  File "train_controlnet_sdxl.py", line 1252, in <module>
    main(args)
  File "train_controlnet_sdxl.py", line 1013, in main
    train_dataset = train_dataset.map(compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint)
  File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 592, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
  File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 557, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
  File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3093, in map
    for rank, done, content in Dataset._map_single(**dataset_kwargs):
  File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3489, in _map_single
    writer.write_batch(batch)
  File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/arrow_writer.py", line 557, in write_batch
    arrays.append(pa.array(typed_sequence))
  File "pyarrow/array.pxi", line 248, in pyarrow.lib.array
  File "pyarrow/array.pxi", line 113, in pyarrow.lib._handle_arrow_array_protocol
  File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/arrow_writer.py", line 191, in __arrow_array__
    out = pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))
  File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/features/features.py", line 447, in cast_to_python_objects
    return _cast_to_python_objects(
  File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/features/features.py", line 324, in _cast_to_python_objects
    for x in obj.detach().cpu().numpy()
TypeError: Got unsupported ScalarType BFloat16
```

### Steps to reproduce the bug

Here is my training script. I use the bf16 type and train my model with diffusers:

```
export MODEL_DIR="/home/mhh/sd_models/stable-diffusion-xl-base-1.0"
export OUTPUT_DIR="./control_net"
export VAE_NAME="/home/mhh/sd_models/sdxl-vae-fp16-fix"

accelerate launch train_controlnet_sdxl.py \
  --pretrained_model_name_or_path=$MODEL_DIR \
  --output_dir=$OUTPUT_DIR \
  --pretrained_vae_model_name_or_path=$VAE_NAME \
  --dataset_name=/home/mhh/sd_datasets/fusing/fill50k \
  --mixed_precision="bf16" \
  --resolution=1024 \
  --learning_rate=1e-5 \
  --max_train_steps=200 \
  --validation_image "/home/mhh/sd_datasets/controlnet_image/conditioning_image_1.png" "/home/mhh/sd_datasets/controlnet_image/conditioning_image_2.png" \
  --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \
  --validation_steps=50 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --report_to="wandb" \
  --seed=42
```

### Expected behavior

When I changed the data type to fp16, it worked.

### Environment info

datasets 2.16.1
numpy 1.24.4
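As the comment above notes, NumPy has no bfloat16 dtype, so the Arrow conversion inside `map` fails on bf16 tensors. A user-side sketch is to upcast to float32 inside the mapped function before the values reach `datasets`; the function body below is illustrative, not the real embedding computation:

```python
import torch

def compute_embeddings_fn(batch):
    # ... compute prompt embeddings in bf16 (stand-in below) ...
    embeddings = torch.randn(2, 4, dtype=torch.bfloat16)
    # NumPy has no bfloat16, so upcast before datasets converts to Arrow.
    batch["embeddings"] = embeddings.detach().to(torch.float32).cpu().numpy()
    return batch
```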
6,566
https://github.com/huggingface/datasets/issues/6565
`drop_last_batch=True` for IterableDataset map function is ignored with multiprocessing DataLoader
[ "My current workaround this issue is to return `None` in the second element and then filter out samples which have `None` in them.\r\n\r\n```python\r\ndef merge_samples(batch):\r\n if len(batch['a']) == 1:\r\n batch['c'] = [batch['a'][0]]\r\n batch['d'] = [None]\r\n else:\r\n batch['c'] = [batch['a'][0]]\r\n batch['d'] = [batch['a'][1]]\r\n return batch\r\n \r\ndef filter_fn(x):\r\n return x['d'] is not None\r\n\r\n# other code...\r\nmapped = mapped.filter(filter_fn)\r\n```" ]
### Describe the bug

Scenario:
- Interleaving two iterable datasets of unequal lengths (`all_exhausted`), followed by a batch mapping with batch size 2 to effectively merge the two datasets and get a sample from each dataset in a single batch, with `drop_last_batch=True` to skip the last batch in case it doesn't have two samples.

What works:
- Using DataLoader with `num_workers=0`

What does not work:
- Using DataLoader with `num_workers=1`: it errors in the last batch.

Basically, `drop_last_batch=True` is ignored when using multiple dataloading workers. Please take a look at the minimal repro script below.

### Steps to reproduce the bug

```python
from datasets import Dataset, interleave_datasets
from torch.utils.data import DataLoader

def merge_samples(batch):
    assert len(batch['a']) == 2, "Batch size must be 2"
    batch['c'] = [batch['a'][0]]
    batch['d'] = [batch['a'][1]]
    return batch

def gen1():
    for ii in range(1, 8385):
        yield {"a": ii}

def gen2():
    for ii in range(1, 5302):
        yield {"a": ii}

if __name__ == '__main__':
    dataset1 = Dataset.from_generator(gen1).to_iterable_dataset(num_shards=1024)
    dataset2 = Dataset.from_generator(gen2).to_iterable_dataset(num_shards=1024)
    interleaved = interleave_datasets([dataset1, dataset2], stopping_strategy="all_exhausted")
    mapped = interleaved.map(merge_samples, batched=True, batch_size=2,
                             remove_columns=interleaved.column_names, drop_last_batch=True)

    # Works
    loader = DataLoader(mapped, batch_size=32, num_workers=0)
    i = 0
    for b in loader:
        print(i, b['c'].shape, b['d'].shape)
        i += 1
    print("DataLoader with num_workers=0 works")

    # Doesn't work
    loader = DataLoader(mapped, batch_size=32, num_workers=1)
    i = 0
    for b in loader:
        print(i, b['c'].shape, b['d'].shape)
        i += 1
```

### Expected behavior

`drop_last_batch=True` should behave the same for `num_workers=0` and `num_workers>=1`.

### Environment info

- `datasets` version: 2.16.1
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.10.12
- `huggingface_hub` version: 0.20.2
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
- `fsspec` version: 2023.6.0

I have also tested on Linux and got the same behavior.
6,565
https://github.com/huggingface/datasets/issues/6564
`Dataset.filter` missing `with_rank` parameter
[ "Thanks for reporting! I've opened a PR with a fix", "@mariosasko thank you very much :)" ]
### Describe the bug

This issue should be reopened: https://github.com/huggingface/datasets/issues/6435

When I try to pass `with_rank` to `Dataset.filter()`, I get: `Dataset.filter() got an unexpected keyword argument 'with_rank'`.

### Steps to reproduce the bug

Run the notebook: https://colab.research.google.com/drive/1WUNKph8BdP0on5ve3gQnh_PE0cFLQqTn?usp=sharing

### Expected behavior

It should work.

### Environment info

NVIDIA RTX 4090
6,564
https://github.com/huggingface/datasets/issues/6563
`ImportError`: cannot import name 'insecure_hashlib' from 'huggingface_hub.utils' (.../huggingface_hub/utils/__init__.py)
[ "@Wauplin Do you happen to know what's up?", "<del>Installing `datasets` from `main` did the trick so I guess it will be fixed in the next release.\r\n\r\nNVM https://github.com/huggingface/datasets/blob/d26abadce0b884db32382b92422d8a6aa997d40a/src/datasets/utils/info_utils.py#L5", "@wasertech upgrading `huggingface_hub` to a newer version should fix your issue. Latest version is 0.20.2. ", "Ha yes I had pinned `tokenizers` to an old version so it downgraded `huggingface_hub`. Note to myself keep HuggingFace modules relatively close together chronologically release wise.", "Glad to know your problem's solved! ", "@Wauplin Thanks for your insight 👍", "pip install --upgrade huggingface-hub" ]
### Describe the bug

Yep, it's not [there](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/__init__.py) anymore.

```text
+ python /home/trainer/sft_train.py --model_name cognitivecomputations/dolphin-2.2.1-mistral-7b --dataset_name wasertech/OneOS --load_in_4bit --use_peft --batch_size 4 --num_train_epochs 1 --learning_rate 1.41e-5 --gradient_accumulation_steps 8 --seq_length 4096 --output_dir output --log_with wandb
Traceback (most recent call last):
  File "/home/trainer/sft_train.py", line 22, in <module>
    from datasets import load_dataset
  File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/__init__.py", line 22, in <module>
    from .arrow_dataset import Dataset
  File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 66, in <module>
    from .arrow_reader import ArrowReader
  File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/arrow_reader.py", line 30, in <module>
    from .download.download_config import DownloadConfig
  File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/download/__init__.py", line 9, in <module>
    from .download_manager import DownloadManager, DownloadMode
  File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/download/download_manager.py", line 31, in <module>
    from ..utils import tqdm as hf_tqdm
  File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/utils/__init__.py", line 19, in <module>
    from .info_utils import VerificationMode
  File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 5, in <module>
    from huggingface_hub.utils import insecure_hashlib
ImportError: cannot import name 'insecure_hashlib' from 'huggingface_hub.utils' (/home/trainer/llm-train/lib/python3.8/site-packages/huggingface_hub/utils/__init__.py)
```

### Steps to reproduce the bug

Using `datasets==2.16.1` and `huggingface_hub==0.17.3`, load a dataset with `load_dataset`.

### Expected behavior

The dataset should be (downloaded, if needed, and) returned.

### Environment info

```text
trainer@a311ae86939e:/mnt$ pip show datasets
Name: datasets
Version: 2.16.1
Summary: HuggingFace community-driven open-source library of datasets
Home-page: https://github.com/huggingface/datasets
Author: HuggingFace Inc.
Author-email: [email protected]
License: Apache 2.0
Location: /home/trainer/llm-train/lib/python3.8/site-packages
Requires: packaging, pyyaml, multiprocess, pyarrow-hotfix, pandas, pyarrow, xxhash, dill, numpy, aiohttp, tqdm, fsspec, requests, filelock, huggingface-hub
Required-by: trl, lm-eval, evaluate

trainer@a311ae86939e:/mnt$ pip show huggingface_hub
Name: huggingface-hub
Version: 0.17.3
Summary: Client library to download and publish models, datasets and other repos on the huggingface.co hub
Home-page: https://github.com/huggingface/huggingface_hub
Author: Hugging Face, Inc.
Author-email: [email protected]
License: Apache
Location: /home/trainer/llm-train/lib/python3.8/site-packages
Requires: requests, pyyaml, packaging, typing-extensions, tqdm, filelock, fsspec
Required-by: transformers, tokenizers, peft, evaluate, datasets, accelerate

trainer@a311ae86939e:/mnt$ huggingface-cli env

Copy-and-paste the text below in your GitHub issue.

- huggingface_hub version: 0.17.3
- Platform: Linux-6.5.13-7-MANJARO-x86_64-with-glibc2.29
- Python version: 3.8.10
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Token path ?: /home/trainer/.cache/huggingface/token
- Has saved token ?: True
- Who am I ?: wasertech
- Configured git credential helpers:
- FastAI: N/A
- Tensorflow: N/A
- Torch: 2.1.2
- Jinja2: 3.1.2
- Graphviz: N/A
- Pydot: N/A
- Pillow: 10.2.0
- hf_transfer: N/A
- gradio: N/A
- tensorboard: N/A
- numpy: 1.24.4
- pydantic: N/A
- aiohttp: 3.9.1
- ENDPOINT: https://huggingface.co
- HUGGINGFACE_HUB_CACHE: /home/trainer/.cache/huggingface/hub
- HUGGINGFACE_ASSETS_CACHE: /home/trainer/.cache/huggingface/assets
- HF_TOKEN_PATH: /home/trainer/.cache/huggingface/token
- HF_HUB_OFFLINE: False
- HF_HUB_DISABLE_TELEMETRY: False
- HF_HUB_DISABLE_PROGRESS_BARS: None
- HF_HUB_DISABLE_SYMLINKS_WARNING: False
- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False
- HF_HUB_DISABLE_IMPLICIT_TOKEN: False
- HF_HUB_ENABLE_HF_TRANSFER: False
```
6,563
https://github.com/huggingface/datasets/issues/6562
datasets.DownloadMode.FORCE_REDOWNLOAD use cache to download dataset features with load_dataset function
[]
### Describe the bug

I have updated my dataset by adding a new feature and pushed it to the Hub. When I then download it on a machine that contains the old version, using `datasets.load_dataset("your_dataset_name", download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD)`, I get an error (pasted below). It seems that `load_dataset` still uses the old features schema instead of downloading everything anew from the Hub.

I found a way around this issue by manually deleting the old dataset cache, but from my understanding of the `datasets.DownloadMode.FORCE_REDOWNLOAD` option, the dataset cache should be ignored.

### Steps to reproduce the bug

1. Download your dataset on your machine using `datasets.load_dataset`
2. Create a new feature in your dataset and push it to the Hub
3. On the same machine, redownload your dataset using `datasets.load_dataset("your_dataset_name", download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD)`

### Error

```
ValueError: Couldn't cast
id: string
level: string
context: list<element: string>
  child 0, element: string
type: string
answer: string
question: string
supporting_facts: list<element: string>
  child 0, element: string
fra_answer: string
fra_question: string
-- schema metadata --
huggingface: '{"info": {"features": {"id": {"dtype": "string", "_type": "' + 490
to
{'id': Value(dtype='string', id=None), 'level': Value(dtype='string', id=None), 'context': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'type': Value(dtype='string', id=None), 'answer': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None), 'supporting_facts': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}
because column names don't match

The above exception was the direct cause of the following exception:

DatasetGenerationError
...
DatasetGenerationError: An error occurred while generating the dataset
```

### Environment info

datasets-2.16.1
huggingface-hub-0.20.2
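Until the force-redownload path also invalidates the cached schema, a workaround sketch is to remove the stale cache entry before reloading. The default cache location and the mangled folder name are assumptions; adjust for `HF_DATASETS_CACHE` and your actual repo id:

```python
import shutil
from pathlib import Path

import datasets

# Assumption: default cache location; cache folder names mangle the repo id
# (e.g. "user___dataset_name"), hence the loose glob.
cache_root = Path.home() / ".cache" / "huggingface" / "datasets"
for stale in cache_root.glob("*your_dataset_name*"):
    shutil.rmtree(stale)

ds = datasets.load_dataset(
    "your_dataset_name",
    download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD,
)
```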
6,562
https://github.com/huggingface/datasets/issues/6561
Document YAML configuration with "data_dir"
[ "In particular, I would like to have an example of how to replace the following configuration (from https://huggingface.co/docs/hub/datasets-manual-configuration#splits)\r\n\r\n```\r\n---\r\nconfigs:\r\n- config_name: default\r\n data_files:\r\n - split: train\r\n path: \"data/*.csv\"\r\n - split: test\r\n path: \"holdout/*.csv\"\r\n---\r\n```\r\n\r\nwith the `data_dir` field." ]
See https://huggingface.co/datasets/uonlp/CulturaX/discussions/15#6597e83f185db94370d6bf50 for reference
6,561
https://github.com/huggingface/datasets/issues/6560
Support Video
[]
### Feature request

HF datasets are awesome in supporting text and images. It would be great to see similar support for video :)

### Motivation

Video generation :)

### Your contribution

Will probably be limited to raising this feature request ;)
6,560
https://github.com/huggingface/datasets/issues/6559
Latest version 2.16.1: loading the dataset raises ValueError: BuilderConfig 'allenai--c4' not found. Available: ['default']
[ "Hi ! The \"allenai--c4\" config doesn't exist (this naming schema comes from old versions of `datasets`)\r\n\r\nYou can load it this way instead:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ncache_dir = 'path/to/your/cache/directory'\r\ndataset = load_dataset('allenai/c4', data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}, split='train', cache_dir=cache_dir)\r\n```", "> Hi ! The \"allenai--c4\" config doesn't exist (this naming schema comes from old versions of `datasets`)\r\n> \r\n> You can load it this way instead:\r\n> \r\n> ```python\r\n> from datasets import load_dataset\r\n> cache_dir = 'path/to/your/cache/directory'\r\n> dataset = load_dataset('allenai/c4', data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}, split='train', cache_dir=cache_dir)\r\n> ```\r\n\r\nthanks, the command run successfully in the latest version\r\n", "> Hi ! The \"allenai--c4\" config doesn't exist (this naming schema comes from old versions of `datasets`)\r\n> \r\n> You can load it this way instead:\r\n> \r\n> ```python\r\n> from datasets import load_dataset\r\n> cache_dir = 'path/to/your/cache/directory'\r\n> dataset = load_dataset('allenai/c4', data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}, split='train', cache_dir=cache_dir)\r\n> ```\r\n\r\n@lhoestq \r\nIn this case, should we traverse through al 1024 json files to load the whole dataset?\r\nThanks!", "It will only load the first file (`data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}` only mentions one file)", "> It will only load the first file (`data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}` only mentions one file)\r\n\r\nThen what if we want to load the whole dataset?", "There is a \"en\" subset that you can load (see the list in the \"subset\" dropdown at https://huggingface.co/datasets/allenai/c4)\r\n\r\n```python\r\ndataset = load_dataset('allenai/c4', 'en', split=\"train\")\r\n```\r\n\r\nalternatively you can specify all the the files yourself using a glob pattern (or a list):\r\n\r\n```python\r\ndataset = load_dataset('allenai/c4', data_files='en/c4-train.00000-of-*.json.gz', split=\"train\")\r\n```", "> There is a \"en\" subset that you can load (see the list in the \"subset\" dropdown at https://huggingface.co/datasets/allenai/c4)\r\n> \r\n> ```python\r\n> dataset = load_dataset('allenai/c4', 'en', split=\"train\")\r\n> ```\r\n> \r\n> alternatively you can specify all the the files yourself using a glob pattern (or a list):\r\n> \r\n> ```python\r\n> dataset = load_dataset('allenai/c4', data_files='en/c4-train.00000-of-*.json.gz', split=\"train\")\r\n> ```\r\n\r\nThanks, the second solution works. The first line simply fails due to missing schema specific to this dataset.", "The latest version of `datasets` seems to have broken my dataset for my users (see this Hugging Face issue: https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus/discussions/3). I changed it by renaming my dataset's config to `default` instead of `train` and then updating my dataset card accordingly." ]
### Describe the bug

The Python script is:

```
from datasets import load_dataset
cache_dir = 'path/to/your/cache/directory'
dataset = load_dataset('allenai/c4', 'allenai--c4', data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}, split='train', use_auth_token=False, cache_dir=cache_dir)
```

The script succeeds when the datasets version is 2.14.7. With 2.16.1, this error occurs:

`ValueError: BuilderConfig 'allenai--c4' not found. Available: ['default']`

### Steps to reproduce the bug

1. `pip install datasets==2.16.1`
2. Run the Python script:

```
from datasets import load_dataset
cache_dir = 'path/to/your/cache/directory'
dataset = load_dataset('allenai/c4', 'allenai--c4', data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}, split='train', use_auth_token=False, cache_dir=cache_dir)
```

### Expected behavior

The dataset should load successfully in the latest version.

### Environment info

datasets 2.16.1
6,559
https://github.com/huggingface/datasets/issues/6558
OSError: image file is truncated (1 bytes not processed) #28323
[ "You can add \r\n\r\n```python\r\nfrom PIL import ImageFile\r\nImageFile.LOAD_TRUNCATED_IMAGES = True\r\n```\r\n\r\nafter the imports to be able to read truncated images." ]
### Describe the bug

```
---------------------------------------------------------------------------
OSError                                   Traceback (most recent call last)
Cell In[24], line 28
     23     return example
     25 # Filter the dataset
     26 # filtered_dataset = dataset.filter(contains_number)
     27 # Add the 'label' field in the dataset
---> 28 labeled_dataset = dataset.filter(contains_number).map(add_label)
     29 # View the structure of the updated dataset
     30 print(labeled_dataset)

File /usr/local/lib/python3.10/dist-packages/datasets/dataset_dict.py:975, in DatasetDict.filter(self, function, with_indices, input_columns, batched, batch_size, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, fn_kwargs, num_proc, desc)
    972 if cache_file_names is None:
    973     cache_file_names = {k: None for k in self}
    974 return DatasetDict(
--> 975     {
    976         k: dataset.filter(
    977             function=function,
    978             with_indices=with_indices,
    979             input_columns=input_columns,
    980             batched=batched,
    981             batch_size=batch_size,
    982             keep_in_memory=keep_in_memory,
    983             load_from_cache_file=load_from_cache_file,
    984             cache_file_name=cache_file_names[k],
    985             writer_batch_size=writer_batch_size,
    986             fn_kwargs=fn_kwargs,
    987             num_proc=num_proc,
    988             desc=desc,
    989         )
    990         for k, dataset in self.items()
    991     }
    992 )

File /usr/local/lib/python3.10/dist-packages/datasets/dataset_dict.py:976, in <dictcomp>(.0)
    972 if cache_file_names is None:
    973     cache_file_names = {k: None for k in self}
    974 return DatasetDict(
    975     {
--> 976         k: dataset.filter(
    977             function=function,
    978             with_indices=with_indices,
    979             input_columns=input_columns,
    980             batched=batched,
    981             batch_size=batch_size,
    982             keep_in_memory=keep_in_memory,
    983             load_from_cache_file=load_from_cache_file,
    984             cache_file_name=cache_file_names[k],
    985             writer_batch_size=writer_batch_size,
    986             fn_kwargs=fn_kwargs,
    987             num_proc=num_proc,
    988             desc=desc,
    989         )
    990         for k, dataset in self.items()
    991     }
    992 )

File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:557, in transmit_format.<locals>.wrapper(*args, **kwargs)
    550 self_format = {
    551     "type": self._format_type,
    552     "format_kwargs": self._format_kwargs,
    553     "columns": self._format_columns,
    554     "output_all_columns": self._output_all_columns,
    555 }
    556 # apply actual function
--> 557 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
    558 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
    559 # re-apply format to the output

File /usr/local/lib/python3.10/dist-packages/datasets/fingerprint.py:481, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs)
    477     validate_fingerprint(kwargs[fingerprint_name])
    479 # Call actual function
--> 481 out = func(dataset, *args, **kwargs)
    483 # Update fingerprint of in-place transforms + update in-place history of transforms
    485 if inplace:  # update after calling func so that the fingerprint doesn't change if the function fails

File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:3623, in Dataset.filter(self, function, with_indices, input_columns, batched, batch_size, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
   3620 if len(self) == 0:
   3621     return self
-> 3623 indices = self.map(
   3624     function=partial(
   3625         get_indices_from_mask_function, function, batched, with_indices, input_columns, self._indices
   3626     ),
   3627     with_indices=True,
   3628     features=Features({"indices": Value("uint64")}),
   3629     batched=True,
   3630     batch_size=batch_size,
   3631     remove_columns=self.column_names,
   3632     keep_in_memory=keep_in_memory,
   3633     load_from_cache_file=load_from_cache_file,
   3634     cache_file_name=cache_file_name,
   3635     writer_batch_size=writer_batch_size,
   3636     fn_kwargs=fn_kwargs,
   3637     num_proc=num_proc,
   3638     suffix_template=suffix_template,
   3639     new_fingerprint=new_fingerprint,
   3640     input_columns=input_columns,
   3641     desc=desc or "Filter",
   3642 )
   3643 new_dataset = copy.deepcopy(self)
   3644 new_dataset._indices = indices.data

File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:592, in transmit_tasks.<locals>.wrapper(*args, **kwargs)
    590 self: "Dataset" = kwargs.pop("self")
    591 # apply actual function
--> 592 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
    593 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
    594 for dataset in datasets:
    595     # Remove task templates if a column mapping of the template is no longer valid

File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:557, in transmit_format.<locals>.wrapper(*args, **kwargs)
    550 self_format = {
    551     "type": self._format_type,
    552     "format_kwargs": self._format_kwargs,
    553     "columns": self._format_columns,
    554     "output_all_columns": self._output_all_columns,
    555 }
    556 # apply actual function
--> 557 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
    558 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
    559 # re-apply format to the output

File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:3093, in Dataset.map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
   3087 if transformed_dataset is None:
   3088     with hf_tqdm(
   3089         unit=" examples",
   3090         total=pbar_total,
   3091         desc=desc or "Map",
   3092     ) as pbar:
-> 3093         for rank, done, content in Dataset._map_single(**dataset_kwargs):
   3094             if done:
   3095                 shards_done += 1

File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:3470, in Dataset._map_single(shard, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset)
   3466 indices = list(
   3467     range(*(slice(i, i + batch_size).indices(shard.num_rows)))
   3468 )  # Something simpler?
   3469 try:
-> 3470     batch = apply_function_on_filtered_inputs(
   3471         batch,
   3472         indices,
   3473         check_same_num_examples=len(shard.list_indexes()) > 0,
   3474         offset=offset,
   3475     )
   3476 except NumExamplesMismatchError:
   3477     raise DatasetTransformationNotAllowedError(
   3478         "Using `.map` in batched mode on a dataset with attached indexes is allowed only if it doesn't create or remove existing examples. You can first run `.drop_index() to remove your index and then re-add it."
   3479     ) from None

File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:3349, in Dataset._map_single.<locals>.apply_function_on_filtered_inputs(pa_inputs, indices, check_same_num_examples, offset)
   3347 if with_rank:
   3348     additional_args += (rank,)
-> 3349 processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
   3350 if isinstance(processed_inputs, LazyDict):
   3351     processed_inputs = {
   3352         k: v for k, v in processed_inputs.data.items() if k not in processed_inputs.keys_to_format
   3353     }

File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:6212, in get_indices_from_mask_function(function, batched, with_indices, input_columns, indices_mapping, *args, **fn_kwargs)
   6209 if input_columns is None:
   6210     # inputs only contains a batch of examples
   6211     batch: dict = inputs[0]
-> 6212     num_examples = len(batch[next(iter(batch.keys()))])
   6213     for i in range(num_examples):
   6214         example = {key: batch[key][i] for key in batch}

File /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:272, in LazyDict.__getitem__(self, key)
    270 value = self.data[key]
    271 if key in self.keys_to_format:
--> 272     value = self.format(key)
    273     self.data[key] = value
    274     self.keys_to_format.remove(key)

File /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:375, in LazyBatch.format(self, key)
    374 def format(self, key):
--> 375     return self.formatter.format_column(self.pa_table.select([key]))

File /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:442, in PythonFormatter.format_column(self, pa_table)
    440 def format_column(self, pa_table: pa.Table) -> list:
    441     column = self.python_arrow_extractor().extract_column(pa_table)
--> 442     column = self.python_features_decoder.decode_column(column, pa_table.column_names[0])
    443     return column

File /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:218, in PythonFeaturesDecoder.decode_column(self, column, column_name)
    217 def decode_column(self, column: list, column_name: str) -> list:
--> 218     return self.features.decode_column(column, column_name) if self.features else column

File /usr/local/lib/python3.10/dist-packages/datasets/features/features.py:1951, in Features.decode_column(self, column, column_name)
   1938 def decode_column(self, column: list, column_name: str):
   1939     """Decode column with custom feature decoding.
   1940
   1941     Args:
   (...)
   1948         `list[Any]`
   1949     """
   1950     return (
-> 1951         [decode_nested_example(self[column_name], value) if value is not None else None for value in column]
   1952         if self._column_requires_decoding[column_name]
   1953         else column
   1954     )

File /usr/local/lib/python3.10/dist-packages/datasets/features/features.py:1951, in <listcomp>(.0)
   1938 def decode_column(self, column: list, column_name: str):
   1939     """Decode column with custom feature decoding.
   1940
   1941     Args:
   (...)
   1948         `list[Any]`
   1949     """
   1950     return (
-> 1951         [decode_nested_example(self[column_name], value) if value is not None else None for value in column]
   1952         if self._column_requires_decoding[column_name]
   1953         else column
   1954     )

File /usr/local/lib/python3.10/dist-packages/datasets/features/features.py:1339, in decode_nested_example(schema, obj, token_per_repo_id)
   1336 elif isinstance(schema, (Audio, Image)):
   1337     # we pass the token to read and decode files from private repositories in streaming mode
   1338     if obj is not None and schema.decode:
-> 1339         return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
   1340 return obj

File /usr/local/lib/python3.10/dist-packages/datasets/features/image.py:185, in Image.decode_example(self, value, token_per_repo_id)
    183 else:
    184     image = PIL.Image.open(BytesIO(bytes_))
--> 185 image.load()  # to avoid "Too many open files" errors
    186 return image

File /usr/local/lib/python3.10/dist-packages/PIL/ImageFile.py:254, in ImageFile.load(self)
    252     break
    253 else:
--> 254     raise OSError(
    255         "image file is truncated "
    256         f"({len(b)} bytes not processed)"
    257     )
    259 b = b + s
    260 n, err_code = decoder.decode(b)

OSError: image file is truncated (1 bytes not processed)
```

### Steps to reproduce the bug

```
from datasets import load_dataset

dataset = load_dataset("mehul7/captioned_military_aircraft")

from transformers import AutoImageProcessor

checkpoint = "microsoft/resnet-50"
image_processor = AutoImageProcessor.from_pretrained(checkpoint)

import re
import io
from PIL import Image

def contains_number(example):
    try:
        image = Image.open(io.BytesIO(example["image"]['bytes']))
        t = image_processor(images=image, return_tensors="pt")['pixel_values']
    except Exception as e:
        print(f"Error processing image:{example['text']}")
        return False
    return bool(re.search(r'\d', example['text']))

# Define a function to add the 'label' field
def add_label(example):
    lab = example['text'].split()
    temp = 'NOT'
    for item in lab:
        if str(item[-1]).isdigit():
            temp = item
            break
    example['label'] = temp
    return example

# Filter the dataset
# filtered_dataset = dataset.filter(contains_number)
# Add the 'label' field in the dataset
labeled_dataset = dataset.filter(contains_number).map(add_label)
# View the structure of the updated dataset
print(labeled_dataset)
```

### Expected behavior

Labels need to be formed in the same way as in: https://www.kaggle.com/code/jiabaowangts/dataset-air/notebook

### Environment info

Kaggle notebook, P100
6,558
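A minimal workaround sketch for the truncation error above, assuming you would rather tolerate one slightly damaged decode than abort the whole `filter`/`map` run: PIL exposes a module-level flag for accepting truncated files.

```python
from PIL import ImageFile

# Assumption: only a single image in the dataset is corrupt. With this flag,
# PIL pads the missing bytes instead of raising OSError during decoding.
ImageFile.LOAD_TRUNCATED_IMAGES = True
```

Alternatively, keeping the `try`/`except` inside `contains_number` and returning `False` simply drops the corrupt row, at the cost of decoding every image an extra time.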
https://github.com/huggingface/datasets/issues/6554
Parquet exports are used even if revision is passed
[ "I don't think this bug is a thing ? Do you have some code that leads to this issue ?" ]
We should not use Parquet exports if `revision` is passed. I think this is a regression.
6,554
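For reference, a minimal sketch of the reported scenario (the repository name and commit sha are placeholders, not a real dataset):

```python
from datasets import load_dataset

# Expectation: with an explicit revision, the files at that exact commit are
# loaded, not the auto-converted Parquet export of the default branch.
ds = load_dataset("user/some_dataset", revision="0123abcd")  # hypothetical repo/sha
```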
https://github.com/huggingface/datasets/issues/6553
Cannot import name 'load_dataset' from .... module ‘datasets’
[ "I don't know My conpany conputer cannot work. but in my computer, it work?", "Do you have a folder in your working directory called datasets?" ]
### Describe the bug I used `python -m pip install datasets` to install. ### Steps to reproduce the bug `from datasets import load_dataset` ### Expected behavior The import should succeed, but it raises the error in the title. ### Environment info datasets version == 2.15.0; Python == 3.10.12; Linux (version unknown)
6,553
https://github.com/huggingface/datasets/issues/6552
Loading a dataset from Google Colab hangs at "Resolving data files".
[ "This bug comes from the `huggingface_hub` library, see: https://github.com/huggingface/huggingface_hub/issues/1952\r\n\r\nA fix is provided at https://github.com/huggingface/huggingface_hub/pull/1953. Feel free to install `huggingface_hub` from this PR, or wait for it to be merged and the new version of `huggingface_hub` to be released", "Thanks!" ]
### Describe the bug Hello, I'm trying to load a dataset from Google Colab but the process hangs at `Resolving data files`: ![image](https://github.com/huggingface/datasets/assets/99779/7175ad85-e571-46ed-9f87-92653985777d) It is happening when the `_get_origin_metadata` definition is invoked: ```python def _get_origin_metadata( data_files: List[str], max_workers=64, download_config: Optional[DownloadConfig] = None, ) -> Tuple[str]: return thread_map( partial(_get_single_origin_metadata, download_config=download_config), data_files, max_workers=max_workers, tqdm_class=hf_tqdm, desc="Resolving data files", disable=len(data_files) <= 16, ``` The thread is then stuck at `waiter.acquire()` in the builtin `threading.py` file. I can load the dataset just fine on my machine. Cheers, Thomas ### Steps to reproduce the bug In Google Colab: ```python !pip install datasets from datasets import load_dataset dataset = load_dataset("colour-science/color-checker-detection-dataset") ``` ### Expected behavior The dataset should be loaded. ### Environment info - `datasets` version: 2.16.1 - Platform: Linux-6.1.58+-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.20.1 - PyArrow version: 10.0.1 - Pandas version: 1.5.3 - `fsspec` version: 2023.6.0
6,552
https://github.com/huggingface/datasets/issues/6549
Loading from hf hub with clearer error message
[ "Maybe we can add a helper message like `Maybe try again using \"hf://path/without/resolve\"` if the path contains `/resolve/` ?\r\n\r\ne.g.\r\n\r\n```\r\nFileNotFoundError: Unable to find 'hf://datasets/HuggingFaceTB/eval_data/resolve/main/eval_data_context_and_answers.json'\r\nIt looks like you used parts of the URL of the file from the Hugging Face website, but you should remove the \"/resolve/<revision>\" part to have a valid `hf://` path.\r\nPlease try again using this path instead:\r\n hf://datasets/HuggingFaceTB/eval_data/eval_data_context_and_answers.json\r\n```\r\n\r\nand suggest `f\"hf://datasets/HuggingFaceTB/eval_data@{revision}/eval_data_context_and_answers.json\"` if revision != \"main\"\r\n\r\nEDIT: I think this message should also be raised from the `huggingface_hub`'s `HfFileSystem` implementation" ]
### Feature request Shouldn't this kinda work ? ``` Dataset.from_json("hf://datasets/HuggingFaceTB/eval_data/resolve/main/eval_data_context_and_answers.json") ``` I got an error ``` File ~/miniconda3/envs/datatrove/lib/python3.10/site-packages/datasets/data_files.py:380, in resolve_pattern(pattern, base_path, allowed_extensions, download_config) 378 if allowed_extensions is not None: 379 error_msg += f" with any supported extension {list(allowed_extensions)}" --> 380 raise FileNotFoundError(error_msg) 381 return out FileNotFoundError: Unable to find 'hf://datasets/HuggingFaceTB/eval_data/resolve/main/eval_data_context_and_answers.json' (I'm logged in) ``` Fix: the correct path is ``` hf://datasets/HuggingFaceTB/eval_data/eval_data_context_and_answers.json ``` Proposal: raise a clearer error ### Motivation Clearer error message ### Your contribution Can open a PR
6,549
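A rough sketch of the suggested hint: detect the `/resolve/<revision>/` segment and rewrite the copied website URL into a valid `hf://` path (the function name and output format are assumptions, not the library's API):

```python
import re

def suggest_hf_path(path: str) -> str:
    # Rewrite hf://datasets/<repo>/resolve/<rev>/<file> into a valid hf:// path.
    m = re.match(r"hf://datasets/([^/]+/[^/]+)/resolve/([^/]+)/(.+)", path)
    if m is None:
        return path
    repo, revision, filename = m.groups()
    if revision == "main":
        return f"hf://datasets/{repo}/{filename}"
    return f"hf://datasets/{repo}@{revision}/{filename}"

print(suggest_hf_path("hf://datasets/HuggingFaceTB/eval_data/resolve/main/eval_data_context_and_answers.json"))
# -> hf://datasets/HuggingFaceTB/eval_data/eval_data_context_and_answers.json
```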
https://github.com/huggingface/datasets/issues/6548
Skip if a dataset has issues
[ "It looks like a transient DNS issue. It should work fine now if you try again.\r\n\r\nThere is no parameter in load_dataset to skip failed downloads. In your case it would have skipped every single subsequent download until the DNS issue was resolved anyway." ]
### Describe the bug Hello everyone, I'm using **load_dataset** from **Hugging Face** to download datasets and I'm facing an issue: the download starts, but at some point it fails with the following error: Couldn't reach https://huggingface.co/datasets/wikimedia/wikipedia/resolve/4cb9b0d719291f1a10f96f67d609c5d442980dc9/20231101.ext/train-00000-of-00001.parquet Failed to resolve \'huggingface.co\' ([Errno -3] Temporary failure in name resolution)"))'))) ![image](https://github.com/huggingface/datasets/assets/143214684/8847d9cb-529e-4eda-9c76-282713dfa3af) So I was wondering: is there a parameter to be passed to load_dataset() to skip files that can't be downloaded? ### Steps to reproduce the bug Is there a parameter to be passed to load_dataset() to skip files that can't be downloaded? ### Expected behavior load_dataset() finishes without error ### Environment info None
6,548
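Since there is no `load_dataset` parameter to skip failed downloads, a simple retry loop is the usual way to ride out transient DNS failures; a sketch (the config name is illustrative):

```python
import time
from datasets import load_dataset

for attempt in range(5):
    try:
        ds = load_dataset("wikimedia/wikipedia", "20231101.en", split="train")
        break
    except ConnectionError:
        time.sleep(2 ** attempt)  # back off and retry once name resolution recovers
else:
    raise RuntimeError("download kept failing after 5 attempts")
```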
https://github.com/huggingface/datasets/issues/6545
`image` column not automatically inferred if image dataset only contains 1 image
[]
### Describe the bug By default, the standard Image Dataset maps out `file_name` to `image` when loading an Image Dataset. However, if the dataset contains only 1 image, this does not take place ### Steps to reproduce the bug Input (dataset with one image `multimodalart/repro_1_image`) ```py from datasets import load_dataset dataset = load_dataset("multimodalart/repro_1_image") dataset ``` Output: ```py DatasetDict({ train: Dataset({ features: ['file_name', 'prompt'], num_rows: 1 }) }) ``` Input (dataset with 2+ images `multimodalart/repro_2_image`) ```py from datasets import load_dataset dataset = load_dataset("multimodalart/repro_2_image") dataset ``` Output: ```py DatasetDict({ train: Dataset({ features: ['image', 'prompt'], num_rows: 2 }) }) ``` ### Expected behavior Expected to map `file_name` → `image` for all dataset sizes, including 1. ### Environment info Both latest main and 2.16.0
6,545
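Until the inference heuristic handles 1-row datasets, the cast can be done by hand; a hedged sketch of the API involved (the path is a placeholder and must exist locally for decoding to succeed):

```python
from datasets import Dataset, Image

# A string path column can be cast to an Image feature explicitly, which is
# what the imagefolder builder does automatically for larger datasets.
ds = Dataset.from_dict({"image": ["folder/0001.png"], "prompt": ["a prompt"]})
ds = ds.cast_column("image", Image())  # ds[0]["image"] now decodes to a PIL image
```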
https://github.com/huggingface/datasets/issues/6542
Datasets : wikipedia 20220301.en error
[ "Hi ! We now recommend using the `wikimedia/wikipedia` dataset, can you try loading this one instead ?\r\n\r\n```python\r\nwiki_dataset = load_dataset(\"wikimedia/wikipedia\", \"20231101.en\")\r\n```", "This bug has been fixed in `2.16.1` thanks to https://github.com/huggingface/datasets/pull/6544, feel free to update `datasets` and re-run your code :)\r\n\r\n```\r\npip install -U datasets\r\n```" ]
### Describe the bug When I used load_dataset to download this dataset, the following error occurred. The main problem was that the target data did not exist. ### Steps to reproduce the bug 1. I tried downloading directly. ```python wiki_dataset = load_dataset("wikipedia", "20220301.en") ``` An exception occurred: ``` MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/ If you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory). Example of usage: `load_dataset('wikipedia', '20220301.en', beam_runner='DirectRunner')` ``` 2. I modified the code as prompted. ```python wiki_dataset = load_dataset('wikipedia', '20220301.en', beam_runner='DirectRunner') ``` An exception occurred: ``` FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/enwiki/20220301/dumpstatus.json ``` ### Expected behavior I searched in the parent directory of the corresponding URL, but there was no corresponding "20220301" directory. I really need this dataset and hope you can provide a download method. ### Environment info python 3.8 datasets 2.16.0 apache-beam 2.52.0 dill 0.3.7
6,542
https://github.com/huggingface/datasets/issues/6541
Dataset not loading successfully.
[ "This is a problem with your environment. You should be able to fix it by upgrading `numpy` based on [this](https://github.com/numpy/numpy/issues/23570) issue.", "Bro I already update numpy package.", "Then, this shouldn't throw an error on your machine:\r\n```python\r\nimport numpy\r\nnumpy._no_nep50_warning\r\n```\r\n\r\nIf it does, run `python -m pip install numpy` to ensure the correct `pip` is used for the package installation.", "Your suggestion to run `python -m pip install numpy` proved to be successful, and my issue has been resolved. I am grateful for your assistance, @mariosasko" ]
### Describe the bug When I run down the below code shows this error: AttributeError: module 'numpy' has no attribute '_no_nep50_warning' I also added this issue in transformers library please check out: [link](https://github.com/huggingface/transformers/issues/28099) ### Steps to reproduce the bug ## Reproduction Hi, please check this line of code, when I run Show attribute error. ``` from datasets import load_dataset from transformers import WhisperProcessor, WhisperForConditionalGeneration # Select an audio file and read it: ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") audio_sample = ds[0]["audio"] waveform = audio_sample["array"] sampling_rate = audio_sample["sampling_rate"] # Load the Whisper model in Hugging Face format: processor = WhisperProcessor.from_pretrained("openai/whisper-tiny.en") model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en") # Use the model and processor to transcribe the audio: input_features = processor( waveform, sampling_rate=sampling_rate, return_tensors="pt" ).input_features # Generate token ids predicted_ids = model.generate(input_features) # Decode token ids to text transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True) transcription[0] ``` **Attribute Error** ``` AttributeError Traceback (most recent call last) Cell In[9], line 6 4 # Select an audio file and read it: 5 ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") ----> 6 audio_sample = ds[0]["audio"] 7 waveform = audio_sample["array"] 8 sampling_rate = audio_sample["sampling_rate"] File /opt/pytorch/lib/python3.8/site-packages/datasets/arrow_dataset.py:2795, in Dataset.__getitem__(self, key) 2793 def __getitem__(self, key): # noqa: F811 2794 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools).""" -> 2795 return self._getitem(key) File /opt/pytorch/lib/python3.8/site-packages/datasets/arrow_dataset.py:2780, in Dataset._getitem(self, key, **kwargs) 2778 formatter = get_formatter(format_type, features=self._info.features, **format_kwargs) 2779 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None) -> 2780 formatted_output = format_table( 2781 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns 2782 ) 2783 return formatted_output File /opt/pytorch/lib/python3.8/site-packages/datasets/formatting/formatting.py:629, in format_table(table, key, formatter, format_columns, output_all_columns) 627 python_formatter = PythonFormatter(features=formatter.features) 628 if format_columns is None: --> 629 return formatter(pa_table, query_type=query_type) 630 elif query_type == "column": 631 if key in format_columns: File /opt/pytorch/lib/python3.8/site-packages/datasets/formatting/formatting.py:396, in Formatter.__call__(self, pa_table, query_type) 394 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]: 395 if query_type == "row": --> 396 return self.format_row(pa_table) 397 elif query_type == "column": 398 return self.format_column(pa_table) File /opt/pytorch/lib/python3.8/site-packages/datasets/formatting/formatting.py:437, in PythonFormatter.format_row(self, pa_table) 435 return LazyRow(pa_table, self) 436 row = self.python_arrow_extractor().extract_row(pa_table) --> 437 row = self.python_features_decoder.decode_row(row) 438 return row File 
/opt/pytorch/lib/python3.8/site-packages/datasets/formatting/formatting.py:215, in PythonFeaturesDecoder.decode_row(self, row) 214 def decode_row(self, row: dict) -> dict: --> 215 return self.features.decode_example(row) if self.features else row File /opt/pytorch/lib/python3.8/site-packages/datasets/features/features.py:1917, in Features.decode_example(self, example, token_per_repo_id) 1903 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None): 1904 """Decode example with custom feature decoding. 1905 1906 Args: (...) 1914 `dict[str, Any]` 1915 """ -> 1917 return { 1918 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id) 1919 if self._column_requires_decoding[column_name] 1920 else value 1921 for column_name, (feature, value) in zip_dict( 1922 {key: value for key, value in self.items() if key in example}, example 1923 ) 1924 } File /opt/pytorch/lib/python3.8/site-packages/datasets/features/features.py:1918, in <dictcomp>(.0) 1903 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None): 1904 """Decode example with custom feature decoding. 1905 1906 Args: (...) 1914 `dict[str, Any]` 1915 """ 1917 return { -> 1918 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id) 1919 if self._column_requires_decoding[column_name] 1920 else value 1921 for column_name, (feature, value) in zip_dict( 1922 {key: value for key, value in self.items() if key in example}, example 1923 ) 1924 } File /opt/pytorch/lib/python3.8/site-packages/datasets/features/features.py:1339, in decode_nested_example(schema, obj, token_per_repo_id) 1336 elif isinstance(schema, (Audio, Image)): 1337 # we pass the token to read and decode files from private repositories in streaming mode 1338 if obj is not None and schema.decode: -> 1339 return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) 1340 return obj File /opt/pytorch/lib/python3.8/site-packages/datasets/features/audio.py:191, in Audio.decode_example(self, value, token_per_repo_id) 189 array = array.T 190 if self.mono: --> 191 array = librosa.to_mono(array) 192 if self.sampling_rate and self.sampling_rate != sampling_rate: 193 array = librosa.resample(array, orig_sr=sampling_rate, target_sr=self.sampling_rate) File /opt/pytorch/lib/python3.8/site-packages/lazy_loader/__init__.py:78, in attach.<locals>.__getattr__(name) 76 submod_path = f"{package_name}.{attr_to_modules[name]}" 77 submod = importlib.import_module(submod_path) ---> 78 attr = getattr(submod, name) 80 # If the attribute lives in a file (module) with the same 81 # name as the attribute, ensure that the attribute and *not* 82 # the module is accessible on the package. 83 if name == attr_to_modules[name]: File /opt/pytorch/lib/python3.8/site-packages/lazy_loader/__init__.py:77, in attach.<locals>.__getattr__(name) 75 elif name in attr_to_modules: 76 submod_path = f"{package_name}.{attr_to_modules[name]}" ---> 77 submod = importlib.import_module(submod_path) 78 attr = getattr(submod, name) 80 # If the attribute lives in a file (module) with the same 81 # name as the attribute, ensure that the attribute and *not* 82 # the module is accessible on the package. 
File /usr/lib/python3.8/importlib/__init__.py:127, in import_module(name, package) 125 break 126 level += 1 --> 127 return _bootstrap._gcd_import(name[level:], package, level) File <frozen importlib._bootstrap>:1014, in _gcd_import(name, package, level) File <frozen importlib._bootstrap>:991, in _find_and_load(name, import_) File <frozen importlib._bootstrap>:975, in _find_and_load_unlocked(name, import_) File <frozen importlib._bootstrap>:671, in _load_unlocked(spec) File <frozen importlib._bootstrap_external>:848, in exec_module(self, module) File <frozen importlib._bootstrap>:219, in _call_with_frames_removed(f, *args, **kwds) File /opt/pytorch/lib/python3.8/site-packages/librosa/core/audio.py:13 11 import audioread 12 import numpy as np ---> 13 import scipy.signal 14 import soxr 15 import lazy_loader as lazy File /opt/pytorch/lib/python3.8/site-packages/scipy/signal/__init__.py:323 314 from ._spline import ( # noqa: F401 315 cspline2d, 316 qspline2d, (...) 319 symiirorder2, 320 ) 322 from ._bsplines import * --> 323 from ._filter_design import * 324 from ._fir_filter_design import * 325 from ._ltisys import * File /opt/pytorch/lib/python3.8/site-packages/scipy/signal/_filter_design.py:16 13 from numpy.polynomial.polynomial import polyval as npp_polyval 14 from numpy.polynomial.polynomial import polyvalfromroots ---> 16 from scipy import special, optimize, fft as sp_fft 17 from scipy.special import comb 18 from scipy._lib._util import float_factorial File /opt/pytorch/lib/python3.8/site-packages/scipy/optimize/__init__.py:405 1 """ 2 ===================================================== 3 Optimization and root finding (:mod:`scipy.optimize`) (...) 401 402 """ 404 from ._optimize import * --> 405 from ._minimize import * 406 from ._root import * 407 from ._root_scalar import * File /opt/pytorch/lib/python3.8/site-packages/scipy/optimize/_minimize.py:26 24 from ._trustregion_krylov import _minimize_trust_krylov 25 from ._trustregion_exact import _minimize_trustregion_exact ---> 26 from ._trustregion_constr import _minimize_trustregion_constr 28 # constrained minimization 29 from ._lbfgsb_py import _minimize_lbfgsb File /opt/pytorch/lib/python3.8/site-packages/scipy/optimize/_trustregion_constr/__init__.py:4 1 """This module contains the equality constrained SQP solver.""" ----> 4 from .minimize_trustregion_constr import _minimize_trustregion_constr 6 __all__ = ['_minimize_trustregion_constr'] File /opt/pytorch/lib/python3.8/site-packages/scipy/optimize/_trustregion_constr/minimize_trustregion_constr.py:5 3 from scipy.sparse.linalg import LinearOperator 4 from .._differentiable_functions import VectorFunction ----> 5 from .._constraints import ( 6 NonlinearConstraint, LinearConstraint, PreparedConstraint, strict_bounds) 7 from .._hessian_update_strategy import BFGS 8 from .._optimize import OptimizeResult File /opt/pytorch/lib/python3.8/site-packages/scipy/optimize/_constraints.py:8 6 from ._optimize import OptimizeWarning 7 from warnings import warn, catch_warnings, simplefilter ----> 8 from numpy.testing import suppress_warnings 9 from scipy.sparse import issparse 12 def _arr_to_scalar(x): 13 # If x is a numpy array, return x.item(). This will 14 # fail if the array has more than one element. File /opt/pytorch/lib/python3.8/site-packages/numpy/testing/__init__.py:11 8 from unittest import TestCase 10 from . 
import _private ---> 11 from ._private.utils import * 12 from ._private.utils import (_assert_valid_refcount, _gen_alignment_data) 13 from ._private import extbuild, decorators as dec File /opt/pytorch/lib/python3.8/site-packages/numpy/testing/_private/utils.py:480 476 pprint.pprint(desired, msg) 477 raise AssertionError(msg.getvalue()) --> 480 @np._no_nep50_warning() 481 def assert_almost_equal(actual,desired,decimal=7,err_msg='',verbose=True): 482 """ 483 Raises an AssertionError if two items are not equal up to desired 484 precision. (...) 548 549 """ 550 __tracebackhide__ = True # Hide traceback for py.test File /opt/pytorch/lib/python3.8/site-packages/numpy/__init__.py:313, in __getattr__(attr) 305 raise AttributeError(__former_attrs__[attr]) 307 # Importing Tester requires importing all of UnitTest which is not a 308 # cheap import Since it is mainly used in test suits, we lazy import it 309 # here to save on the order of 10 ms of import time for most users 310 # 311 # The previous way Tester was imported also had a side effect of adding 312 # the full `numpy.testing` namespace --> 313 if attr == 'testing': 314 import numpy.testing as testing 315 return testing AttributeError: module 'numpy' has no attribute '_no_nep50_warning' ``` ### Expected behavior ``` ' Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.' ``` Also, make sure this script is provided for your official website so please update: [script](https://huggingface.co/docs/transformers/model_doc/whisper) ### Environment info **System Info** * transformers -> 4.36.1 * datasets -> 2.15.0 * huggingface_hub -> 0.19.4 * python -> 3.8.10 * accelerate -> 0.25.0 * pytorch -> 2.0.1+cpu * Using GPU in Script -> No
6,541
https://github.com/huggingface/datasets/issues/6540
Extreme inefficiency for `save_to_disk` when merging datasets
[ "Concatenating datasets doesn't create any indices mapping - so flattening indices is not needed (unless you shuffle the dataset).\r\nCan you share the snippet of code you are using to merge your datasets and save them to disk ?" ]
### Describe the bug Hi, I tried to merge a total of 22M sequences of data, where each sequence has a maximum length of 2000. I found that merging these datasets and then calling `save_to_disk` is extremely slow because of flattening the indices. Wondering if you have any suggestions or guidance on this. Thank you very much! ### Steps to reproduce the bug The source data is too big to demonstrate ### Expected behavior The source data is too big to demonstrate ### Environment info python 3.9.0 datasets 2.7.0 pytorch 2.0.0 tokenizers 0.13.1 transformers 4.31.0
6,540
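As the comment notes, plain concatenation creates no indices mapping, so flattening should only kick in if the merged dataset was shuffled, filtered, or selected first; a minimal sketch (tiny stand-in shards instead of the real 22M sequences):

```python
from datasets import Dataset, concatenate_datasets

ds_a = Dataset.from_dict({"seq": ["a", "b"]})  # stand-ins for the real shards
ds_b = Dataset.from_dict({"seq": ["c", "d"]})
merged = concatenate_datasets([ds_a, ds_b])    # no indices mapping is created
merged.save_to_disk("merged")                  # num_proc=... can parallelize writes on large data
```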
https://github.com/huggingface/datasets/issues/6539
'Repo card metadata block was not found' when loading a pragmeval dataset
[]
### Describe the bug I can't load dataset subsets of 'pragmeval'. The funny thing is I ran the dataset author's [colab notebook](https://colab.research.google.com/drive/1sg--LF4z7XR1wxAOfp0-3d4J6kQ9nj_A?usp=sharing) and it works just fine. I tried to install exactly the same packages that are installed on colab using poetry, so my environment info only differs from the one from colab in linux version - I still get the same bug outside colab. ### Steps to reproduce the bug Install dependencies with poetry pyproject.toml ``` [tool.poetry] name = "project" version = "0.1.0" description = "" authors = [] [tool.poetry.dependencies] python = "^3.10" datasets = "2.16.0" pandas = "1.5.3" pyarrow = "10.0.1" huggingface-hub = "0.19.4" fsspec = "2023.6.0" [build-system] requires = ["poetry-core"] build-backend = "poetry.core.masonry.api" ``` `poetry run python -c "import datasets; print(datasets.get_dataset_config_names('pragmeval'))` prints ['default'] ### Expected behavior The command should print ``` ['emergent', 'emobank-arousal', 'emobank-dominance', 'emobank-valence', 'gum', 'mrda', 'pdtb', 'persuasiveness-claimtype', 'persuasiveness-eloquence', 'persuasiveness-premisetype', 'persuasiveness-relevance', 'persuasiveness-specificity', 'persuasiveness-strength', 'sarcasm', 'squinky-formality', 'squinky-implicature', 'squinky-informativeness', 'stac', 'switchboard', 'verifiability'] ``` ### Environment info - `datasets` version: 2.16.0 - Platform: Linux-6.2.0-37-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.19.4 - PyArrow version: 10.0.1 - Pandas version: 1.5.3 - `fsspec` version: 2023.6.0
6,539
https://github.com/huggingface/datasets/issues/6538
ImportError: cannot import name 'SchemaInferenceError' from 'datasets.arrow_writer' (/opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py)
[ "Hi ! Are you sure you have `datasets` 2.16 ? I just checked and on 2.16 I can run `from datasets.arrow_writer import SchemaInferenceError` without error", "I have the same issue - using with datasets version 2.16.1. Also this is on a kaggle notebook - other people with the same issue also seem to be having it on kaggle?", "I have the same issue now and didn't have this problem around 2 weeks ago.", "> Hi ! Are you sure you have `datasets` 2.16 ? I just checked and on 2.16 I can run `from datasets.arrow_writer import SchemaInferenceError` without error\r\n\r\nYes, I am sure\r\n\r\n```\r\n!pip show datasets\r\nName: datasets\r\nVersion: 2.16.1\r\nSummary: HuggingFace community-driven open-source library of datasets\r\nHome-page: https://github.com/huggingface/datasets\r\nAuthor: HuggingFace Inc.\r\nAuthor-email: [email protected]\r\nLicense: Apache 2.0\r\nLocation: /opt/conda/lib/python3.10/site-packages\r\nRequires: aiohttp, dill, filelock, fsspec, huggingface-hub, multiprocess, numpy, packaging, pandas, pyarrow, pyarrow-hotfix, pyyaml, requests, tqdm, xxhash\r\nRequired-by: trl\r\n```", "> I have the same issue - using with datasets version 2.16.1. Also this is on a kaggle notebook - other people with the same issue also seem to be having it on kaggle?\r\n\r\nDon't know about other people. But I am having this issue whose solution I can't find anywhere. And this issue still persists. ", "> I have the same issue now and didn't have this problem around 2 weeks ago.\r\n\r\nSame here", "I was having the same issue but the datasets version was 2.6.1, after I updated it to latest(2.16), error is gone while importing.\r\n", "> I was having the same issue but the datasets version was 2.6.1, after I updated it to latest(2.16), error is gone while importing.\r\n\r\nI also have datasets version 2.16, but the error is still there.", "Can you try re-installing `datasets` ?", "> Can you try re-installing `datasets` ?\r\n\r\nI tried re-installing. Still getting the same error. \r\n", "> > Can you try re-installing `datasets` ?\r\n> \r\n> I tried re-installing. Still getting the same error.\r\n\r\nIn kaggle I used:\r\n- `%pip install -U datasets`\r\nand then restarted runtime and then everything works fine.", "> > > Can you try re-installing `datasets` ?\r\n> > \r\n> > \r\n> > I tried re-installing. Still getting the same error.\r\n> \r\n> In kaggle I used:\r\n> \r\n> * `%pip install -U datasets`\r\n> and then restarted runtime and then everything works fine.\r\n\r\nYes, this is working. When I restart the runtime after installing packages, it's working perfectly. Thank you so much. But why do we need to restart runtime every time after installing packages?", "> > > > Can you try re-installing `datasets` ?\r\n> > > \r\n> > > \r\n> > > I tried re-installing. Still getting the same error.\r\n> > \r\n> > \r\n> > In kaggle I used:\r\n> > \r\n> > * `%pip install -U datasets`\r\n> > and then restarted runtime and then everything works fine.\r\n> \r\n> Yes, this is working. When I restart the runtime after installing packages, it's working perfectly. Thank you so much. But why do we need to restart runtime every time after installing packages?\r\nFor some packages it is required.\r\nhttps://stackoverflow.com/questions/57831187/need-to-restart-runtime-before-import-an-installed-package-in-colab\r\n", "> > > > > Can you try re-installing `datasets` ?\r\n> > > > \r\n> > > > \r\n> > > > I tried re-installing. 
Still getting the same error.\r\n> > > \r\n> > > \r\n> > > In kaggle I used:\r\n> > > \r\n> > > * `%pip install -U datasets`\r\n> > > and then restarted runtime and then everything works fine.\r\n> > \r\n> > \r\n> > Yes, this is working. When I restart the runtime after installing packages, it's working perfectly. Thank you so much. But why do we need to restart runtime every time after installing packages?\r\n> > For some packages it is required.\r\n> > https://stackoverflow.com/questions/57831187/need-to-restart-runtime-before-import-an-installed-package-in-colab\r\n\r\nThank you for your assistance. I dedicated the past 2-3 weeks to resolving this issue. Interestingly, it runs flawlessly in Colab without requiring a runtime restart. However, the problem persisted exclusively in Kaggle. I appreciate your help once again. Thank you.", "Closing this issue as it is not related to the datasets library; rather, it's linked to platform-related issues." ]
### Describe the bug While importing from packages getting the error Code: ``` import os import torch from datasets import load_dataset, Dataset from transformers import ( AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, HfArgumentParser, TrainingArguments, pipeline, logging ) from peft import LoraConfig, PeftModel from trl import SFTTrainer from huggingface_hub import login import pandas as pd ``` Error: ```` --------------------------------------------------------------------------- ImportError Traceback (most recent call last) Cell In[5], line 14 4 from transformers import ( 5 AutoModelForCausalLM, 6 AutoTokenizer, (...) 11 logging 12 ) 13 from peft import LoraConfig, PeftModel ---> 14 from trl import SFTTrainer 15 from huggingface_hub import login 16 import pandas as pd File /opt/conda/lib/python3.10/site-packages/trl/__init__.py:21 8 from .import_utils import ( 9 is_diffusers_available, 10 is_npu_available, (...) 13 is_xpu_available, 14 ) 15 from .models import ( 16 AutoModelForCausalLMWithValueHead, 17 AutoModelForSeq2SeqLMWithValueHead, 18 PreTrainedModelWrapper, 19 create_reference_model, 20 ) ---> 21 from .trainer import ( 22 DataCollatorForCompletionOnlyLM, 23 DPOTrainer, 24 IterativeSFTTrainer, 25 PPOConfig, 26 PPOTrainer, 27 RewardConfig, 28 RewardTrainer, 29 SFTTrainer, 30 ) 33 if is_diffusers_available(): 34 from .models import ( 35 DDPOPipelineOutput, 36 DDPOSchedulerOutput, 37 DDPOStableDiffusionPipeline, 38 DefaultDDPOStableDiffusionPipeline, 39 ) File /opt/conda/lib/python3.10/site-packages/trl/trainer/__init__.py:44 42 from .ppo_trainer import PPOTrainer 43 from .reward_trainer import RewardTrainer, compute_accuracy ---> 44 from .sft_trainer import SFTTrainer 45 from .training_configs import RewardConfig File /opt/conda/lib/python3.10/site-packages/trl/trainer/sft_trainer.py:23 21 import torch.nn as nn 22 from datasets import Dataset ---> 23 from datasets.arrow_writer import SchemaInferenceError 24 from datasets.builder import DatasetGenerationError 25 from transformers import ( 26 AutoModelForCausalLM, 27 AutoTokenizer, (...) 33 TrainingArguments, 34 ) ImportError: cannot import name 'SchemaInferenceError' from 'datasets.arrow_writer' (/opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py ```` transformers version: 4.36.2 python version: 3.10.12 datasets version: 2.16.1 ### Steps to reproduce the bug 1. Install packages ``` !pip install -U datasets trl accelerate peft bitsandbytes transformers trl huggingface_hub ``` 2. import packages ``` import os import torch from datasets import load_dataset, Dataset from transformers import ( AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, HfArgumentParser, TrainingArguments, pipeline, logging ) from peft import LoraConfig, PeftModel from trl import SFTTrainer from huggingface_hub import login import pandas as pd ``` ### Expected behavior No error while importing ### Environment info - `datasets` version: 2.16.0 - Platform: Linux-5.15.133+-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.20.1 - PyArrow version: 11.0.0 - Pandas version: 2.1.4 - `fsspec` version: 2023.10.0
6,538
https://github.com/huggingface/datasets/issues/6537
Adding support for netCDF (*.nc) files
[ "Related to #3113 ", "Conceptually, we can use xarray to load the netCDF file, then xarray -> pandas -> pyarrow.", "I'd still need to verify that such a conversion would be lossless, especially for multi-dimensional data." ]
### Feature request netCDF (*.nc) is a file format for storing multidimensional scientific data, which is used by packages like `xarray` (labelled multi-dimensional arrays in Python). It would be nice to have native support for netCDF in `datasets`. ### Motivation When uploading *.nc files onto Huggingface Hub through the `datasets` API, I would like to be able to preview the dataset without converting it to another format. ### Your contribution I can submit a PR, provided I have the time.
6,537
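A sketch of the conversion route mentioned in the comments (xarray -> pandas -> Arrow), assuming the netCDF variables flatten to a 2-D table without loss; the file name is a placeholder:

```python
import xarray as xr
from datasets import Dataset

nc = xr.open_dataset("example.nc")    # hypothetical *.nc file
df = nc.to_dataframe().reset_index()  # dimensions/coordinates become ordinary columns
ds = Dataset.from_pandas(df)
```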
https://github.com/huggingface/datasets/issues/6536
datasets.load_dataset raises FileNotFoundError for datasets==2.16.0
[ "Hi ! Thanks for reporting\r\n\r\nThis is a bug in 2.16.0 for some datasets when `cache_dir` is a relative path. I opened https://github.com/huggingface/datasets/pull/6543 to fix this", "We just released 2.16.1 with a fix:\r\n\r\n```\r\npip install -U datasets\r\n```" ]
### Describe the bug Seems `datasets.load_dataset` raises FileNotFoundError for some hub datasets with the latest `datasets==2.16.0` ### Steps to reproduce the bug For example `pip install datasets==2.16.0` then ```python import datasets datasets.load_dataset("wentingzhao/anthropic-hh-first-prompt", cache_dir='cache1')["train"] ``` This will raise: ```bash Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/load.py", line 2545, in load_dataset builder_instance.download_and_prepare( File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/builder.py", line 1003, in download_and_prepare self._download_and_prepare( File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/builder.py", line 1076, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py", line 43, in _split_generators data_files = dl_manager.download_and_extract(self.config.data_files) File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/download/download_manager.py", line 566, in download_and_extract return self.extract(self.download(url_or_urls)) File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/download/download_manager.py", line 539, in extract extracted_paths = map_nested( File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 466, in map_nested mapped = [ File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 467, in <listcomp> _single_map_nested((function, obj, types, None, True, None)) File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 387, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar] File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 387, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar] File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 370, in _single_map_nested return function(data_struct) File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/download/download_manager.py", line 451, in _download out = cached_path(url_or_filename, download_config=download_config) File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 188, in cached_path output_path = get_from_cache( File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 570, in get_from_cache raise FileNotFoundError(f"Couldn't find file at {url}") FileNotFoundError: Couldn't find file at https://huggingface.co/datasets/wentingzhao/anthropic-hh-first-prompt/resolve/11b393a5545f706a357ebcd4a5285d93db176715/cache1/downloads/87d66c365626feca116cba323c4856c9aae056e4503f09f23e34aa085eb9de15 ``` However, seems it works fine for some datasets, for example, if works fine for `datasets.load_dataset("ag_news", cache_dir='cache2')["test"]` But the dataset works fine for datasets==2.15.0, for example `pip install datasets==2.15.0`, then ```python import datasets datasets.load_dataset("wentingzhao/anthropic-hh-first-prompt", cache_dir='cache3')["train"] Dataset({ features: ['user', 'system', 'source'], 
num_rows: 8552 }) ``` ### Expected behavior 2.16.0 should work as same as 2.15.0 for all datasets ### Environment info python3.9 conda env tested on MacOS and Linux
6,536
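A hedged workaround sketch while on 2.16.0: the regression appears to be triggered by relative `cache_dir` values, so passing an absolute path (or upgrading to 2.16.1, which contains the fix) should sidestep it:

```python
import os
import datasets

ds = datasets.load_dataset(
    "wentingzhao/anthropic-hh-first-prompt",
    cache_dir=os.path.abspath("cache1"),  # absolute path avoids the relative-path bug
)["train"]
```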
https://github.com/huggingface/datasets/issues/6535
IndexError: Invalid key: 47682 is out of bounds for size 0 while using PEFT
[ "@sabman @pvl @kashif @vigsterkr ", "This is surely the same issue as /static-proxy?url=https%3A%2F%2Fdiscuss.huggingface.co%2Ft%2Findexerror-invalid-key-16-is-out-of-bounds-for-size-0%2F14298%2F25 that comes from the `transformers` `Trainer`. You should add `remove_unused_columns=False` to `TrainingArguments`\r\n\r\nAlso check your logs: the `Trainer` should log the length of your dataset before training starts and it surely showed length=0.", "the same error \r\nIndexError: Invalid key: 22330 is out of bounds for size 0" ]
### Describe the bug I am trying to fine-tune the t5 model on the paraphrasing task. While running the same code without- model = get_peft_model(model, config) the model trains without any issues. However, using the model returned from get_peft_model raises the following error due to datasets- IndexError: Invalid key: 47682 is out of bounds for size 0. I had raised this in https://github.com/huggingface/peft/issues/1299#issue-2056173386 and they suggested that I raise it here. Here is the complete error- IndexError Traceback (most recent call last) in <cell line: 1>() ----> 1 trainer.train() 11 frames [/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs) 1553 hf_hub_utils.enable_progress_bars() 1554 else: -> 1555 return inner_training_loop( 1556 args=args, 1557 resume_from_checkpoint=resume_from_checkpoint, [/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in _inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval) 1836 1837 step = -1 -> 1838 for step, inputs in enumerate(epoch_iterator): 1839 total_batched_samples += 1 1840 if rng_to_sync: [/usr/local/lib/python3.10/dist-packages/accelerate/data_loader.py](https://localhost:8080/#) in iter(self) 446 # We iterate one batch ahead to check when we are at the end 447 try: --> 448 current_batch = next(dataloader_iter) 449 except StopIteration: 450 yield [/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py](https://localhost:8080/#) in next(self) 628 # TODO(https://github.com/pytorch/pytorch/issues/76750) 629 self._reset() # type: ignore[call-arg] --> 630 data = self._next_data() 631 self._num_yielded += 1 632 if self._dataset_kind == _DatasetKind.Iterable and \ [/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py](https://localhost:8080/#) in _next_data(self) 672 def _next_data(self): 673 index = self._next_index() # may raise StopIteration --> 674 data = self._dataset_fetcher.fetch(index) # may raise StopIteration 675 if self._pin_memory: 676 data = _utils.pin_memory.pin_memory(data, self._pin_memory_device) [/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/fetch.py](https://localhost:8080/#) in fetch(self, possibly_batched_index) 47 if self.auto_collation: 48 if hasattr(self.dataset, "getitems") and self.dataset.getitems: ---> 49 data = self.dataset.getitems(possibly_batched_index) 50 else: 51 data = [self.dataset[idx] for idx in possibly_batched_index] [/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in getitems(self, keys) 2802 def getitems(self, keys: List) -> List: 2803 """Can be used to get a batch using a list of integers indices.""" -> 2804 batch = self.getitem(keys) 2805 n_examples = len(batch[next(iter(batch))]) 2806 return [{col: array[i] for col, array in batch.items()} for i in range(n_examples)] [/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in getitem(self, key) 2798 def getitem(self, key): # noqa: F811 2799 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools).""" -> 2800 return self._getitem(key) 2801 2802 def getitems(self, keys: List) -> List: [/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in _getitem(self, key, **kwargs) 2782 format_kwargs = format_kwargs if format_kwargs is not None 
else {} 2783 formatter = get_formatter(format_type, features=self._info.features, **format_kwargs) -> 2784 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None) 2785 formatted_output = format_table( 2786 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns [/usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py](https://localhost:8080/#) in query_table(table, key, indices) 581 else: 582 size = indices.num_rows if indices is not None else table.num_rows --> 583 _check_valid_index_key(key, size) 584 # Query the main table 585 if indices is None: [/usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py](https://localhost:8080/#) in _check_valid_index_key(key, size) 534 elif isinstance(key, Iterable): 535 if len(key) > 0: --> 536 _check_valid_index_key(int(max(key)), size=size) 537 _check_valid_index_key(int(min(key)), size=size) 538 else: [/usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py](https://localhost:8080/#) in _check_valid_index_key(key, size) 524 if isinstance(key, int): 525 if (key < 0 and key + size < 0) or (key >= size): --> 526 raise IndexError(f"Invalid key: {key} is out of bounds for size {size}") 527 return 528 elif isinstance(key, slice): IndexError: Invalid key: 47682 is out of bounds for size 0 ### Steps to reproduce the bug device = "cuda:0" if torch.cuda.is_available() else "cpu" #defining model name for tokenizer and model loading model_name= "t5-small" #loading the tokenizer tokenizer = AutoTokenizer.from_pretrained(model_name) def preprocess_function(data, tokenizer): inputs = [f"Paraphrase this sentence: {doc}" for doc in data["text"]] model_inputs = tokenizer(inputs, max_length=150, truncation=True) labels = [ast.literal_eval(i)[0] for i in data['paraphrases']] labels = tokenizer(labels, max_length=150, truncation=True) model_inputs["labels"] = labels["input_ids"] return model_inputs train_dataset = load_dataset("humarin/chatgpt-paraphrases", split="train").shuffle(seed=42).select(range(50000)) val_dataset = load_dataset("humarin/chatgpt-paraphrases", split="train").shuffle(seed=42).select(range(50000,55000)) tokenized_train = train_dataset.map(lambda batch: preprocess_function(batch, tokenizer), batched=True) tokenized_val = val_dataset.map(lambda batch: preprocess_function(batch, tokenizer), batched=True) def print_trainable_parameters(model): """ Prints the number of trainable parameters in the model. 
""" trainable_params = 0 all_param = 0 for _, param in model.named_parameters(): all_param += param.numel() if param.requires_grad: trainable_params += param.numel() print( f"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param}" ) config = LoraConfig( r=16, #attention heads lora_alpha=32, #alpha scaling lora_dropout=0.05, bias="none", task_type="Seq2Seq" ) #loading the model model = AutoModelForSeq2SeqLM.from_pretrained(model_name).to(device) model = get_peft_model(model, config) print_trainable_parameters(model) #loading the data collator data_collator = DataCollatorForSeq2Seq( tokenizer=tokenizer, model=model, label_pad_token_id=-100, padding="longest" ) #defining the training arguments training_args = Seq2SeqTrainingArguments( output_dir=os.getcwd(), evaluation_strategy="epoch", save_strategy="epoch", learning_rate=2e-5, per_device_train_batch_size=16, per_device_eval_batch_size=16, weight_decay=1e-3, save_total_limit=3, load_best_model_at_end=True, num_train_epochs=1, predict_with_generate=True ) def compute_metric_with_extra(tokenizer): def compute_metrics(eval_preds): metric = evaluate.load('rouge') preds, labels = eval_preds # decode preds and labels labels = np.where(labels != -100, labels, tokenizer.pad_token_id) decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True) decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True) # rougeLSum expects newline after each sentence decoded_preds = ["\n".join(nltk.sent_tokenize(pred.strip())) for pred in decoded_preds] decoded_labels = ["\n".join(nltk.sent_tokenize(label.strip())) for label in decoded_labels] result = metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True) return result return compute_metrics trainer = Seq2SeqTrainer( model=model, args=training_args, train_dataset=tokenized_train, eval_dataset=tokenized_val, tokenizer=tokenizer, data_collator=data_collator, compute_metrics= compute_metric_with_extra(tokenizer) ) trainer.train() ### Expected behavior I would want the trainer to train normally as it was before I used- model = get_peft_model(model, config) ### Environment info datasets version- 2.16.0 peft version- 0.7.1 transformers version- 4.35.2 accelerate version- 0.25.0 python- 3.10.12 enviroment- google colab
6,535
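A sketch of the fix suggested in the comments: the `Trainer` prunes dataset columns based on the model's forward signature, which a PEFT wrapper can confuse, leaving an empty dataset; disabling the pruning keeps the tokenized columns (only the relevant argument is shown):

```python
import os
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir=os.getcwd(),
    remove_unused_columns=False,  # keep input_ids/labels for the PEFT-wrapped model
)
```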
https://github.com/huggingface/datasets/issues/6534
How to configure multiple folders in the same zip package
[ "@albertvillanova" ]
How should I write "config" in readme when all the data, such as train test, is in a zip file train floder and test floder in data.zip
6,534
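A hedged sketch of the README YAML metadata for this layout, assuming the train and test folders are packaged as separate archives (pointing `data_files` at folders inside a single zip is not clearly supported):

```yaml
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train.zip
  - split: test
    path: data/test.zip
```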
https://github.com/huggingface/datasets/issues/6533
ted_talks_iwslt | Error: Config name is missing
[ "Hi ! Thanks for reporting. I opened https://github.com/huggingface/datasets/pull/6544 to fix this", "We just released 2.16.1 with a fix:\r\n\r\n```\r\npip install -U datasets\r\n```" ]
### Describe the bug Running load_dataset using the newest `datasets` library like below on the ted_talks_iwslt using year pair data will throw an error "Config name is missing" see also: https://huggingface.co/datasets/ted_talks_iwslt/discussions/3 likely caused by #6493, where the `and not config_kwargs` part in the if logic was removed https://github.com/huggingface/datasets/blob/ef3b5dd3633995c95d77f35fb17f89ff44990bc4/src/datasets/builder.py#L512 ### Steps to reproduce the bug run: ```python load_dataset("ted_talks_iwslt", language_pair=("ja", "en"), year="2015") ``` ### Expected behavior Load the data without error ### Environment info datasets 2.16.0
6,533
https://github.com/huggingface/datasets/issues/6532
[Feature request] Indexing datasets by a custom-defined id field to enable random access to dataset items via the id
[ "You can simply use a python dict as index:\r\n\r\n```python\r\n>>> from datasets import load_dataset\r\n>>> ds = load_dataset(\"BeIR/dbpedia-entity\", \"corpus\", split=\"corpus\")\r\n>>> index = {key: idx for idx, key in enumerate(ds[\"_id\"])}\r\n>>> ds[index[\"<dbpedia:Pikachu>\"]]\r\n{'_id': '<dbpedia:Pikachu>',\r\n 'title': 'Pikachu',\r\n 'text': 'Pikachu (Japanese: ピカチュウ) are a fictional species of Pokémon. Pokémon are fictional creatures that appear in an assortment of comic books, animated movies and television shows, video games, and trading card games licensed by The Pokémon Company, a Japanese corporation. The Pikachu design was conceived by Ken Sugimori.'}\r\n```", "Thanks for your reply. Yes, I can do that, but it is time-consuming to do that every time I launch the program (some datasets are extremely big). HF Datasets has a nice feature to support instant data loading and efficient random access via row ids. I'm curious if this beneficial feature could be further extended to custom data columns.\r\n", "+1 on the issue I think it would be extremely useful", "+1. This could be very useful.", "+1 - currently having to manually implement this" ]
### Feature request Some datasets may contain an id-like field, for example the `id` field in [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) and the `_id` field in [BeIR/dbpedia-entity](https://huggingface.co/datasets/BeIR/dbpedia-entity). HF datasets support efficient random access via row, but not via this kinds of id fields. I wonder if it is possible to add support for indexing by a custom "id-like" field to enable random access via such ids. The ids may be numbers or strings. ### Motivation In some cases, especially during inference/evaluation, I may want to find out the item that has a specified id, defined by the dataset itself. For example, in a typical re-ranking setting in information retrieval, the user may want to re-rank the set of candidate documents of each query. The input is usually presented in a TREC-style run file, with the following format: ``` <qid> Q0 <docno> <rank> <score> <tag> ``` The re-ranking program should be able to fetch the queries and documents according to the `<qid>` and `<docno>`, which are the original id defined in the query/document datasets. To accomplish this, I have to iterate over the whole HF dataset to get the mapping from real ids to row ids every time I start the program, which is time-consuming. Thus I want HF dataset to provide options for users to index by a custom id column, not by row. ### Your contribution I'm not an expert in this project and I'm afraid that I'm not able to make contributions on the code.
6,532
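Building on the dict-index suggestion in the comments, the mapping can be persisted so it is built once rather than on every launch; a sketch (the index file name is arbitrary):

```python
import json
import os
from datasets import load_dataset

ds = load_dataset("BeIR/dbpedia-entity", "corpus", split="corpus")
if os.path.exists("id_index.json"):
    with open("id_index.json") as f:
        index = json.load(f)          # reuse the cached id -> row mapping
else:
    index = {key: i for i, key in enumerate(ds["_id"])}
    with open("id_index.json", "w") as f:
        json.dump(index, f)           # one-time scan over the dataset

row = ds[index["<dbpedia:Pikachu>"]]  # random access by the dataset's own id
```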
https://github.com/huggingface/datasets/issues/6530
Impossible to save a mapped dataset to disk
[ "I solved it with `train_dataset.with_format(None)`\r\nBut then faced some more issues (which i later solved too).\r\n\r\nHuggingface does not seem to care, so I do. Here is an updated training script which saves a pre-processed (mapped) dataset to your local directory if you specify `--save_precomputed_data_dir=DIR_NAME`. Then use `--train_precomputed_data_dir` with the same dir to load it instead of `--dataset_name`.\r\n\r\n[Proper SDXL trainer code](https://github.com/kopyl/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py)\r\n[Notebook for pre-computing a dataset and saving locally](https://colab.research.google.com/drive/17Yo08hePx-NlHs99RecdeiNc8CQg4O7l?usp=sharing)\r\n\r\nExample:\r\n\r\n1st run (nothing is pre-computed yet):\r\n```\r\naccelerate launch train_text_to_image_sdxl.py \\\r\n --pretrained_model_name_or_path=stabilityai/stable-diffusion-xl-base-1.0 \\\r\n --pretrained_vae_model_name_or_path=madebyollin/sdxl-vae-fp16-fix \\\r\n --dataset_name=lambdalabs/pokemon-blip-captions \\\r\n --save_precomputed_data_dir=\"test-5\"\r\n```\r\n\r\n2nd run (the pre-computed dataset is saved to `test-5` directory):\r\n```\r\naccelerate launch train_text_to_image_sdxl.py \\\r\n --pretrained_model_name_or_path=stabilityai/stable-diffusion-xl-base-1.0 \\\r\n --pretrained_vae_model_name_or_path=madebyollin/sdxl-vae-fp16-fix \\\r\n --train_precomputed_data_dir test-5\r\n```\r\n\r\nThis way when you're gonna be using your pre-computed dataset you don't need to worry about re-mapping your dataset when you change an argument for your trainer script" ]
### Describe the bug I want to play around with different hyperparameters when training but don't want to re-map my dataset with 3 million samples each time for tens of hours when I [fully fine-tune SDXL](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py). After I do the mapping like this: ``` train_dataset = train_dataset.map(compute_embeddings_fn, batched=True) train_dataset = train_dataset.map( compute_vae_encodings_fn, batched=True, batch_size=16, ) ``` and try to save it like this: `train_dataset.save_to_disk("test")` i get this error ([full traceback](https://pastebin.com/kq3vt739)): ``` TypeError: Object of type function is not JSON serializable The format kwargs must be JSON serializable, but key 'transform' isn't. ``` But what is interesting is that pushing to hub works like that: `train_dataset.push_to_hub("kopyl/mapped-833-icons-sdxl-1024-dataset", token=True)` Here is the link of the pushed dataset: https://huggingface.co/datasets/kopyl/mapped-833-icons-sdxl-1024-dataset ### Steps to reproduce the bug Here is the self-contained notebook: https://colab.research.google.com/drive/1RtCsEMVcwWcMwlWURk_cj_9xUBHz065M?usp=sharing ### Expected behavior It should be easily saved to disk ### Environment info NVIDIA A100, Linux (NC24ads A100 v4 from Azure), CUDA 12.2. [pip freeze](https://pastebin.com/QTNb6iru)
6,530
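A sketch of the fix from the comments: the error comes from the non-serializable `transform` left in the format kwargs by the training pipeline, and clearing the format before saving drops it without touching the mapped data (a tiny stand-in dataset replaces the real mapped one):

```python
from datasets import Dataset, load_from_disk

train_dataset = Dataset.from_dict({"x": [1, 2]})  # stand-in for the mapped dataset
train_dataset = train_dataset.with_format(None)   # drop the non-serializable transform
train_dataset.save_to_disk("test")
reloaded = load_from_disk("test")
```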
https://github.com/huggingface/datasets/issues/6529
Impossible to only download a test split
[ "The only way right now is to load with streaming=True", "This feature has been proposed for a long time. I'm looking forward to the implementation. On clusters `streaming=True` is not an option since we do not have Internet on compute nodes. See: https://github.com/huggingface/datasets/discussions/1896#discussioncomment-2359593" ]
I've spent a significant amount of time trying to locate the split object inside my _split_generators() custom function. Then, after diving [into the code](https://github.com/huggingface/datasets/blob/5ff3670c18ed34fa8ddfa70a9aa403ae6cc9ad54/src/datasets/load.py#L2558), I realized that `download_and_prepare` is executed before the split is passed to the dataset builder in `as_dataset`. If I'm not missing something, this seems like bad design for the following use case: > Imagine there is a huge dataset that has an evaluation test set and you want to download just that split to compare your method. Is there a current workaround that can help me achieve the same result? Thank you,
6,529
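The streaming workaround mentioned in the comments, as a sketch (the dataset name is a placeholder): only the requested split is iterated, so nothing else is downloaded up front:

```python
from datasets import load_dataset

test_stream = load_dataset("user/huge_dataset", split="test", streaming=True)
for example in test_stream:
    ...  # evaluate without ever materializing the train split
```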
https://github.com/huggingface/datasets/issues/6524
Streaming the Pile: Missing Files
[ "Hello @FelixLabelle,\r\n\r\nAs you can see in the Community tab of the corresponding dataset, it is a known issue: https://huggingface.co/datasets/EleutherAI/pile/discussions/15\r\n\r\nThe data has been taken down due to reported copyright infringement.\r\n\r\nFeel free to continue the discussion there." ]
### Describe the bug The Pile does not stream; a "File Not Found" error is returned instead. It looks like the Pile's files have been moved. ### Steps to reproduce the bug To reproduce, run the following code: ``` from datasets import load_dataset dataset = load_dataset('EleutherAI/pile', 'en', split='train', streaming=True) next(iter(dataset)) ``` I get the following error: `FileNotFoundError: https://the-eye.eu/public/AI/pile/train/00.jsonl.zst` ### Expected behavior Return the data in a stream. ### Environment info - `datasets` version: 2.12.0 - Platform: Windows-10-10.0.22621-SP0 - Python version: 3.11.5 - Huggingface_hub version: 0.15.1 - PyArrow version: 11.0.0 - Pandas version: 2.0.3
6,524
https://github.com/huggingface/datasets/issues/6522
Loading HF Hub Dataset (private org repo) fails to load all features
[]
### Describe the bug When pushing a `Dataset` with multiple `Features` (`input`, `output`, `tags`) to Huggingface Hub (private org repo), and later downloading the `Dataset`, only `input` and `output` load - I believe the expected behavior is for all `Features` to be loaded by default? ### Steps to reproduce the bug Pushing the data. `data_concat` is a `list` of `dict`s. ```python for datum in data_concat: datum_tags = {d["key"]: d["value"] for d in datum["tags"]} split_fraction = ...  # some logic that generates a train/test split number if split_fraction < test_fraction: data_test.append(datum) else: data_train.append(datum) dataset = DatasetDict( { "train": Dataset.from_list(data_train), "test": Dataset.from_list(data_test), "full": Dataset.from_list(data_concat), }, ) dataset_shuffled = dataset.shuffle(seed=shuffle_seed) dataset_shuffled.push_to_hub( repo_id=hf_repo_id, private=True, config_name=m, revision=revision, token=hf_token, ) ``` Loading it later: ```python dataset = datasets.load_dataset( path=hf_repo_id, name=name, token=hf_token, ) ``` Produces: ``` DatasetDict({ train: Dataset({ features: ['input', 'output'], num_rows: <obfuscated> }) test: Dataset({ features: ['input', 'output'], num_rows: <obfuscated> }) full: Dataset({ features: ['input', 'output'], num_rows: <obfuscated> }) }) ``` ### Expected behavior The expected result is below: ``` DatasetDict({ train: Dataset({ features: ['input', 'output', 'tags'], num_rows: <obfuscated> }) test: Dataset({ features: ['input', 'output', 'tags'], num_rows: <obfuscated> }) full: Dataset({ features: ['input', 'output', 'tags'], num_rows: <obfuscated> }) }) ``` My workaround is as follows: ```python dsinfo = datasets.get_dataset_config_info( path=data_files, config_name=data_config, token=hf_token, ) allfeatures = dsinfo.features.copy() if "tags" not in allfeatures: allfeatures["tags"] = [{"key": Value(dtype="string", id=None), "value": Value(dtype="string", id=None)}] dataset = datasets.load_dataset( path=data_files, name=data_config, features=allfeatures, token=hf_token, ) ``` Interestingly enough (and perhaps a related bug?), if I don't add the `tags` to `allfeatures` above (i.e. 
only loading `input` and `output`), it throws an error when executing `load_dataset`: ``` ValueError: Couldn't cast tags: list<element: struct<key: string, value: string>> child 0, element: struct<key: string, value: string> child 0, key: string child 1, value: string input: <obfuscated> output: <obfuscated> -- schema metadata -- huggingface: '{"info": {"features": {"tags": [{"key": {"dtype": "string",' + 532 to {'input': <obfuscated>, 'output': <obfuscated> because column names don't match ``` Traceback for this: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/bt/github/core/.venv/lib/python3.11/site-packages/datasets/load.py", line 2152, in load_dataset builder_instance.download_and_prepare( File "/Users/bt/github/core/.venv/lib/python3.11/site-packages/datasets/builder.py", line 948, in download_and_prepare self._download_and_prepare( File "/Users/bt/github/core/.venv/lib/python3.11/site-packages/datasets/builder.py", line 1043, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/Users/bt/github/core/.venv/lib/python3.11/site-packages/datasets/builder.py", line 1805, in _prepare_split for job_id, done, content in self._prepare_split_single( File "/Users/bt/github/core/.venv/lib/python3.11/site-packages/datasets/builder.py", line 1950, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.builder.DatasetGenerationError: An error occurred while generating the dataset ``` ### Environment info - `datasets` version: 2.15.0 - Platform: macOS-14.0-arm64-arm-64bit - Python version: 3.11.5 - `huggingface_hub` version: 0.19.4 - PyArrow version: 14.0.1 - Pandas version: 2.1.4 - `fsspec` version: 2023.10.0
6,522
https://github.com/huggingface/datasets/issues/6521
The order of the splits is not preserved
[ "After investigation, I think the issue was introduced by the use of the Parquet export:\r\n- #6448\r\n\r\nI am proposing a fix.\r\n\r\nCC: @lhoestq " ]
We had a regression and the order of the splits is not preserved. They are alphabetically sorted, instead of preserving the original "train", "validation", "test" order. Check: In branch "main" ```python In [9]: dataset = load_dataset("adversarial_qa", "adversarialQA") In [10]: dataset Out[10]: DatasetDict({ test: Dataset({ features: ['id', 'title', 'context', 'question', 'answers', 'metadata'], num_rows: 3000 }) train: Dataset({ features: ['id', 'title', 'context', 'question', 'answers', 'metadata'], num_rows: 30000 }) validation: Dataset({ features: ['id', 'title', 'context', 'question', 'answers', 'metadata'], num_rows: 3000 }) }) ``` Before (2.15.0) it was: ```python DatasetDict({ train: Dataset({ features: ['id', 'title', 'context', 'question', 'answers', 'metadata'], num_rows: 30000 }) validation: Dataset({ features: ['id', 'title', 'context', 'question', 'answers', 'metadata'], num_rows: 3000 }) test: Dataset({ features: ['id', 'title', 'context', 'question', 'answers', 'metadata'], num_rows: 3000 }) }) ``` See issues: - https://huggingface.co/datasets/adversarial_qa/discussions/3 - https://huggingface.co/datasets/beans/discussions/4 This is a regression because it was previously fixed. See: - #6196 - #5728
6,521
https://github.com/huggingface/datasets/issues/6517
Bug get_metadata_patterns arg error
[]
See https://github.com/huggingface/datasets/blob/3f149204a2a5948287adcade5e90707aa5207a92/src/datasets/load.py#L1240C1-L1240C69, where the call `metadata_patterns = get_metadata_patterns(base_path, download_config=self.download_config)` raises an argument error.
6,517
https://github.com/huggingface/datasets/issues/6515
Why call http_head() when fsspec_head() succeeds
[]
See https://github.com/huggingface/datasets/blob/a91582de288d98e94bcb5ab634ca1cfeeff544c5/src/datasets/utils/file_utils.py#L510C1-L523C14, where `http_head()` is still called even when `fsspec_head()` has already succeeded.
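A minimal sketch of the control flow the title suggests. The helper names come from the linked `file_utils.py`, but the signatures and surrounding logic here are simplified assumptions, not the actual implementation:

```python
from datasets.utils.file_utils import fsspec_head, http_head

def head_with_fallback(url, storage_options=None, headers=None):
    # Assumed fix: fall back to an HTTP HEAD request only when the
    # fsspec-based check fails, instead of issuing both unconditionally.
    try:
        return fsspec_head(url, storage_options=storage_options)
    except Exception:
        return http_head(url, headers=headers)
```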
6,515
https://github.com/huggingface/datasets/issues/6513
Support huggingface-hub 0.20.0
[]
CI to test the support of `huggingface-hub` 0.20.0: https://github.com/huggingface/datasets/compare/main...ci-test-huggingface-hub-v0.20.0.rc1 We need to merge: - #6510 - #6512 - #6516
6,513
https://github.com/huggingface/datasets/issues/6507
Where is glue_metric.py? (@Frankie123421, what was the resolution to this?)
[]
> @Frankie123421 what was the resolution to this? use glue_metric.py instead of glue.py in load_metric _Originally posted by @Frankie123421 in https://github.com/huggingface/datasets/issues/2117#issuecomment-905093763_
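To make the quoted resolution concrete, a sketch of the two calls being contrasted. The local paths are placeholders, and `load_metric` has since been deprecated in favor of the `evaluate` library:

```python
from datasets import load_metric

# What reportedly failed: pointing load_metric at the dataset script.
metric = load_metric("path/to/glue.py", "mrpc")  # hypothetical local path

# The resolution quoted above: point it at the metric script instead.
metric = load_metric("path/to/glue_metric.py", "mrpc")  # hypothetical local path
```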
6,507
https://github.com/huggingface/datasets/issues/6506
Incorrect test set labels for RTE and CoLA datasets via load_dataset
[ "As this is a specific issue of the \"glue\" dataset, I have transferred it to the dataset Discussion page: https://huggingface.co/datasets/glue/discussions/15\r\n\r\nLet's continue the discussion there!" ]
### Describe the bug The test set labels for the RTE and CoLA datasets when loading via `load_dataset` are all -1. Edit: It appears this is also the case for every other dataset except for MRPC (stsb, sst2, qqp, mnli (both matched and mismatched), qnli, wnli, ax). Is this intended behavior to safeguard the test set for evaluation purposes? ### Steps to reproduce the bug !pip install datasets from datasets import load_dataset rte_data = load_dataset('glue', 'rte') cola_data = load_dataset('glue', 'cola') print(rte_data['test'][0:30]['label']) print(cola_data['test'][0:30]['label']) Output: [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1] [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1] The non-label test data seems to be fine: e.g. rte_data['test'][1] is: {'sentence1': "Authorities in Brazil say that more than 200 people are being held hostage in a prison in the country's remote, Amazonian-jungle state of Rondonia.", 'sentence2': 'Authorities in Brazil hold 200 people as hostage.', 'label': -1, 'idx': 1} Training and validation data are also fine: e.g. rte_data['train'][0] is: {'sentence1': 'No Weapons of Mass Destruction Found in Iraq Yet.', 'sentence2': 'Weapons of Mass Destruction Found in Iraq.', 'label': 1, 'idx': 0} ### Expected behavior Expected the labels to be binary 0/1 values; got all -1s instead. ### Environment info - `datasets` version: 2.15.0 - Platform: Linux-6.1.58+-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.19.4 - PyArrow version: 10.0.1 - Pandas version: 1.5.3 - `fsspec` version: 2023.6.0
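For context, GLUE test labels are withheld, so the -1 values are placeholders rather than a loading bug. A small sketch of evaluating on the labeled validation split instead:

```python
from datasets import load_dataset

rte_data = load_dataset("glue", "rte")

# Test labels are hidden (-1); the validation split carries real labels.
print(rte_data["validation"][0:5]["label"])

# Defensive check: rows with withheld labels can be filtered out.
labeled_test = rte_data["test"].filter(lambda ex: ex["label"] != -1)
print(len(labeled_test))  # 0 for GLUE test splits
```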
6,506
https://github.com/huggingface/datasets/issues/6505
Got stuck when trying to load a dataset
[ "I ran into the same problem when I used a server cluster (Slurm system managed) that couldn't load any of the huggingface datasets or models, but it worked on my laptop. I suspected some system configuration-related problem, but I had no idea. \r\nMy problems are consistent with [issue #2618](https://github.com/huggingface/datasets/issues/2618). All the huggingface-related libraries I use are the latest versions.\r\n\r\n", "> I ran into the same problem when I used a server cluster (Slurm system managed) that couldn't load any of the huggingface datasets or models, but it worked on my laptop. I suspected some system configuration-related problem, but I had no idea. My problems are consistent with [issue #2618](https://github.com/huggingface/datasets/issues/2618). All the huggingface-related libraries I use are the latest versions.\r\n\r\nhave you solved this issue yet? i met the same problem on server but everything works on laptop. I think maybe the filelock repo is contradictory with file system.", "I am having the same issue on a computing cluster but this works on my laptop as well. I instead have this error:\r\n`/home/.conda/envs/py10/lib/python3.10/site-packages/filelock/_unix.py\", line 43, in _acquire\r\n fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)\r\nOSError: [Errno 5] Input/output error`\r\n\r\nthe load_dataset command does not work on server for local or hosted hugging-face datasets, and I have tried for several files", "Same here. Is there any solution?", "In my case, `.cahce` was in a shared folder. Moving it into the user's home folder fixed the problem. #2618 for more details", "> In my case, `.cahce` was in a shared folder. Moving it into the user's home folder fixed the problem. #2618 for more details在我的情况下, `.cahce` 在一个共享文件夹中。将其移动到用户的主文件夹中解决了问题。 #2618 获取更多详细信息。\r\n\r\nCan you be more specific? thank." ]
### Describe the bug Hello, everyone. I met a problem when I am trying to load a data file using load_dataset method on a Debian 10 system. The data file is not very large, only 1.63MB with 600 records. Here is my code: from datasets import load_dataset dataset = load_dataset('json', data_files='mypath/oaast_rm_zh.json') I waited it for 20 minutes. It still no response. I cannot using Ctrl+C to cancel the command. I have to use Ctrl+Z to kill it. I also try it with a txt file, it still no response in a long time. I can load the same file successfully using my laptop (windows 10, python 3.8.5, datasets==2.14.5). I can also make it on another computer (Ubuntu 20.04.5 LTS, python 3.10.13, datasets 2.14.7). It only takes me 1-2 miniutes. Could you give me some suggestions? Thank you. ### Steps to reproduce the bug from datasets import load_dataset dataset = load_dataset('json', data_files='mypath/oaast_rm_zh.json') ### Expected behavior I hope it can load the file successfully. ### Environment info OS: Debian GNU/Linux 10 Python: Python 3.10.13 Pip list: Package Version ------------------------- ------------ accelerate 0.25.0 addict 2.4.0 aiofiles 23.2.1 aiohttp 3.9.1 aiosignal 1.3.1 aliyun-python-sdk-core 2.14.0 aliyun-python-sdk-kms 2.16.2 altair 5.2.0 annotated-types 0.6.0 anyio 3.7.1 async-timeout 4.0.3 attrs 23.1.0 certifi 2023.11.17 cffi 1.16.0 charset-normalizer 3.3.2 click 8.1.7 contourpy 1.2.0 crcmod 1.7 cryptography 41.0.7 cycler 0.12.1 datasets 2.14.7 dill 0.3.7 docstring-parser 0.15 einops 0.7.0 exceptiongroup 1.2.0 fastapi 0.105.0 ffmpy 0.3.1 filelock 3.13.1 fonttools 4.46.0 frozenlist 1.4.1 fsspec 2023.10.0 gast 0.5.4 gradio 3.50.2 gradio_client 0.6.1 h11 0.14.0 httpcore 1.0.2 httpx 0.25.2 huggingface-hub 0.19.4 idna 3.6 importlib-metadata 7.0.0 importlib-resources 6.1.1 jieba 0.42.1 Jinja2 3.1.2 jmespath 0.10.0 joblib 1.3.2 jsonschema 4.20.0 jsonschema-specifications 2023.11.2 kiwisolver 1.4.5 markdown-it-py 3.0.0 MarkupSafe 2.1.3 matplotlib 3.8.2 mdurl 0.1.2 modelscope 1.10.0 mpmath 1.3.0 multidict 6.0.4 multiprocess 0.70.15 networkx 3.2.1 nltk 3.8.1 numpy 1.26.2 nvidia-cublas-cu12 12.1.3.1 nvidia-cuda-cupti-cu12 12.1.105 nvidia-cuda-nvrtc-cu12 12.1.105 nvidia-cuda-runtime-cu12 12.1.105 nvidia-cudnn-cu12 8.9.2.26 nvidia-cufft-cu12 11.0.2.54 nvidia-curand-cu12 10.3.2.106 nvidia-cusolver-cu12 11.4.5.107 nvidia-cusparse-cu12 12.1.0.106 nvidia-nccl-cu12 2.18.1 nvidia-nvjitlink-cu12 12.3.101 nvidia-nvtx-cu12 12.1.105 orjson 3.9.10 oss2 2.18.3 packaging 23.2 pandas 2.1.4 peft 0.7.1 Pillow 10.1.0 pip 23.3.1 platformdirs 4.1.0 protobuf 4.25.1 psutil 5.9.6 pyarrow 14.0.1 pyarrow-hotfix 0.6 pycparser 2.21 pycryptodome 3.19.0 pydantic 2.5.2 pydantic_core 2.14.5 pydub 0.25.1 Pygments 2.17.2 pyparsing 3.1.1 python-dateutil 2.8.2 python-multipart 0.0.6 pytz 2023.3.post1 PyYAML 6.0.1 referencing 0.32.0 regex 2023.10.3 requests 2.31.0 rich 13.7.0 rouge-chinese 1.0.3 rpds-py 0.13.2 safetensors 0.4.1 scipy 1.11.4 semantic-version 2.10.0 sentencepiece 0.1.99 setuptools 68.2.2 shtab 1.6.5 simplejson 3.19.2 six 1.16.0 sniffio 1.3.0 sortedcontainers 2.4.0 sse-starlette 1.8.2 starlette 0.27.0 sympy 1.12 tiktoken 0.5.2 tokenizers 0.15.0 tomli 2.0.1 toolz 0.12.0 torch 2.1.2 tqdm 4.66.1 transformers 4.36.1 triton 2.1.0 trl 0.7.4 typing_extensions 4.9.0 tyro 0.6.0 tzdata 2023.3 urllib3 2.1.0 uvicorn 0.24.0.post1 websockets 11.0.3 wheel 0.41.2 xxhash 3.4.1 yapf 0.40.2 yarl 1.9.4 zipp 3.17.0
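One workaround from the comments above: the hang appears to come from file locking on a shared cache directory, so pointing the datasets cache at local storage before importing `datasets` may help (the path is a placeholder):

```python
import os

# Must be set before `datasets` is imported for it to take effect.
os.environ["HF_DATASETS_CACHE"] = "/local/scratch/hf_cache"  # placeholder path

from datasets import load_dataset

dataset = load_dataset("json", data_files="mypath/oaast_rm_zh.json")
```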
6,505
https://github.com/huggingface/datasets/issues/6504
Error Pushing to Hub
[]
### Describe the bug Error when trying to push a dataset in a special format to hub ### Steps to reproduce the bug ``` import datasets from datasets import Dataset dataset_dict = { "filename": ["apple", "banana"], "token": [[[1,2],[3,4]],[[1,2],[3,4]]], "label": [0, 1], } dataset = Dataset.from_dict(dataset_dict) dataset = dataset.cast_column("token", datasets.features.features.Array2D(shape=(2, 2),dtype="int16")) dataset.push_to_hub("SequenceModel/imagenet_val_256") ``` Error: ``` ... ConstructorError: could not determine a constructor for the tag 'tag:yaml.org,2002:python/tuple' in "<unicode string>", line 8, column 16: shape: !!python/tuple ^ ``` ### Expected behavior Dataset being pushed to hub ### Environment info - `datasets` version: 2.15.0 - Platform: Linux-5.19.0-1022-gcp-x86_64-with-glibc2.35 - Python version: 3.11.5 - `huggingface_hub` version: 0.19.4 - PyArrow version: 14.0.1 - Pandas version: 2.1.4 - `fsspec` version: 2023.10.0
6,504
https://github.com/huggingface/datasets/issues/6501
OverflowError: value too large to convert to int32_t
[]
### Describe the bug ![image](https://github.com/huggingface/datasets/assets/47747764/f58044fb-ddda-48b6-ba68-7bbfef781630) ### Steps to reproduce the bug The error occurs just from loading the dataset. ### Expected behavior Loading should succeed; how can I fix this? ### Environment info pip install /mnt/cluster/zhangfan/study_info/LLaMA-Factory/peft-0.6.0-py3-none-any.whl pip install huggingface_hub-0.19.4-py3-none-any.whl tokenizers-0.15.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl transformers-4.36.1-py3-none-any.whl pyarrow_hotfix-0.6-py3-none-any.whl datasets-2.15.0-py3-none-any.whl tyro-0.5.18-py3-none-any.whl trl-0.7.4-py3-none-any.whl done
6,501
https://github.com/huggingface/datasets/issues/6497
Support setting a default config name in push_to_hub
[]
In order to convert script-datasets to no-script datasets, we need to support setting a default config name for those scripts that set one.
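For illustration, one possible shape such an API could take from the user side. The `set_default` parameter name here is a hypothetical sketch of the proposal, not an existing signature:

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3]})

# Hypothetical: mark the pushed config as the default, so that
# load_dataset("user/repo") resolves to it without naming a config.
ds.push_to_hub("user/repo", config_name="main_config", set_default=True)
```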
6,497
https://github.com/huggingface/datasets/issues/6496
Error when writing a dataset to HF Hub: A commit has happened since. Please refresh and try again.
[ "I transferred from datasets-server, since the issue is more about `datasets` and the integration with `huggingface_hub`." ]
**Describe the bug** Getting a `412 Client Error: Precondition Failed` when trying to write a dataset to the HF hub. ``` huggingface_hub.utils._errors.HfHubHTTPError: 412 Client Error: Precondition Failed for url: https://huggingface.co/api/datasets/GLorr/test-dask/commit/main (Request ID: Root=1-657ae26f-3bd92bf861bb254b2cc0826c;50a09ab7-9347-406a-ba49-69f98abee9cc) A commit has happened since. Please refresh and try again. ``` **Steps to reproduce the bug** This is a minimal reproducer: ``` import dask.dataframe as dd import pandas as pd import random import os import huggingface_hub import datasets huggingface_hub.login(token=os.getenv("HF_TOKEN")) data = {"number": [random.randint(0,10) for _ in range(1000)]} df = pd.DataFrame.from_dict(data) dataframe = dd.from_pandas(df, npartitions=1) dataframe = dataframe.repartition(npartitions=3) schema = datasets.Features({"number": datasets.Value("int64")}).arrow_schema repo_id = "GLorr/test-dask" repo_path = f"hf://datasets/{repo_id}" huggingface_hub.create_repo(repo_id=repo_id, repo_type="dataset", exist_ok=True) dd.to_parquet(dataframe, path=f"{repo_path}/data", schema=schema) ``` **Expected behavior** Would expect to write to the hub without any problem. **Environment info** ``` datasets==2.15.0 huggingface-hub==0.19.4 ```
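The 412 suggests that dask's parallel partition uploads race each other on commits. One workaround sketch, assuming the contention comes from concurrent commits, is to serialize the writes via dask's `compute_kwargs`:

```python
# Continues the reproducer above (dataframe, repo_path, schema); a
# single-threaded scheduler makes the partitions commit one at a time
# instead of racing each other.
dd.to_parquet(
    dataframe,
    path=f"{repo_path}/data",
    schema=schema,
    compute_kwargs={"scheduler": "synchronous"},
)
```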
6,496
https://github.com/huggingface/datasets/issues/6494
Image Data loaded Twice
[]
### Describe the bug ![1702472610561](https://github.com/huggingface/datasets/assets/28867010/4b7ef5e7-32c3-4b73-84cb-5de059caa0b6) While following https://huggingface.co/docs/datasets/image_load and trying to load image data from a folder, I noticed that each image was read twice in the returned data. As you can see in the attached image, there are only four images in the train folder, but loading produces eight images. ### Steps to reproduce the bug from datasets import Dataset, load_dataset dataset = load_dataset("imagefolder", data_dir="data/", drop_labels=False) # print(dataset["train"][0]["image"] == dataset["train"][1]["image"]) print(dataset) print(dataset["train"]["image"]) print(len(dataset["train"]["image"])) ### Expected behavior DatasetDict({ train: Dataset({ features: ['image', 'label'], num_rows: 8 }) }) [<PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2877x2129 at 0x1BD1D1CA8B0>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2877x2129 at 0x1BD1D2452E0>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=4208x3120 at 0x1BD1D245310>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=4208x3120 at 0x1BD1D2453A0>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2877x2129 at 0x1BD1D245460>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2877x2129 at 0x1BD1D245430>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=4208x3120 at 0x1BD1D2454F0>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=4208x3120 at 0x1BD1D245550>] 8 ### Environment info - `datasets` version: 2.14.5 - Platform: Windows-10-10.0.22621-SP0 - Python version: 3.9.17 - Huggingface_hub version: 0.19.4 - PyArrow version: 13.0.0 - Pandas version: 2.0.3
6,494
https://github.com/huggingface/datasets/issues/6495
Newline characters don't behave as expected when calling dataset.info
[]
### System Info - `transformers` version: 4.32.1 - Platform: Windows-10-10.0.19045-SP0 - Python version: 3.11.5 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.2 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.1.1+cpu (False) - Tensorflow version (GPU?): 2.15.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @marios ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction [Source](https://huggingface.co/docs/datasets/v2.2.1/en/access) ``` from datasets import load_dataset dataset = load_dataset('glue', 'mrpc', split='train') dataset.info ``` DatasetInfo(description='GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\n', citation='@inproceedings{dolan2005automatically,\n title={Automatically constructing a corpus of sentential paraphrases},\n author={Dolan, William B and Brockett, Chris},\n booktitle={Proceedings of the Third International Workshop on Paraphrasing (IWP2005)},\n year={2005}\n}\n@inproceedings{wang2019glue,\n title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},\n author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},\n note={In the Proceedings of ICLR.},\n year={2019}\n}\n', homepage='https://www.microsoft.com/en-us/download/details.aspx?id=52398', license='', features={'sentence1': Value(dtype='string', id=None), 'sentence2': Value(dtype='string', id=None), 'label': ClassLabel(names=['not_equivalent', 'equivalent'], id=None), 'idx': Value(dtype='int32', id=None)}, post_processed=None, supervised_keys=None, task_templates=None, builder_name='glue', dataset_name=None, config_name='mrpc', version=1.0.0, splits={'train': SplitInfo(name='train', num_bytes=943843, num_examples=3668, shard_lengths=None, dataset_name='glue'), 'validation': SplitInfo(name='validation', num_bytes=105879, num_examples=408, shard_lengths=None, dataset_name='glue'), 'test': SplitInfo(name='test', num_bytes=442410, num_examples=1725, shard_lengths=None, dataset_name='glue')}, download_checksums={'https://dl.fbaipublicfiles.com/glue/data/mrpc_dev_ids.tsv': {'num_bytes': 6222, 'checksum': None}, 'https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_train.txt': {'num_bytes': 1047044, 'checksum': None}, 'https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_test.txt': {'num_bytes': 441275, 'checksum': None}}, download_size=1494541, post_processing_size=None, dataset_size=1492132, size_in_bytes=2986673) ### Expected behavior ``` from datasets import load_dataset dataset = load_dataset('glue', 'mrpc', split='train') dataset.info ``` DatasetInfo( description='GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\n', citation='@inproceedings{dolan2005automatically,\n title={Automatically constructing a corpus of sentential paraphrases},\n author={Dolan, William B and 
Brockett, Chris},\n booktitle={Proceedings of the Third International Workshop on Paraphrasing (IWP2005)},\n year={2005}\n}\n@inproceedings{wang2019glue,\n title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},\n author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},\n note={In the Proceedings of ICLR.},\n year={2019}\n}\n', homepage='https://www.microsoft.com/en-us/download/details.aspx?id=52398', license='', features={'sentence1': Value(dtype='string', id=None), 'sentence2': Value(dtype='string', id=None), 'label': ClassLabel(num_classes=2, names=['not_equivalent', 'equivalent'], names_file=None, id=None), 'idx': Value(dtype='int32', id=None)}, post_processed=None, supervised_keys=None, builder_name='glue', config_name='mrpc', version=1.0.0, splits={'train': SplitInfo(name='train', num_bytes=943851, num_examples=3668, dataset_name='glue'), 'validation': SplitInfo(name='validation', num_bytes=105887, num_examples=408, dataset_name='glue'), 'test': SplitInfo(name='test', num_bytes=442418, num_examples=1725, dataset_name='glue')}, download_checksums={'https://dl.fbaipublicfiles.com/glue/data/mrpc_dev_ids.tsv': {'num_bytes': 6222, 'checksum': '971d7767d81b997fd9060ade0ec23c4fc31cbb226a55d1bd4a1bac474eb81dc7'}, 'https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_train.txt': {'num_bytes': 1047044, 'checksum': '60a9b09084528f0673eedee2b69cb941920f0b8cd0eeccefc464a98768457f89'}, 'https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_test.txt': {'num_bytes': 441275, 'checksum': 'a04e271090879aaba6423d65b94950c089298587d9c084bf9cd7439bd785f784'}}, download_size=1494541, post_processing_size=None, dataset_size=1492156, size_in_bytes=2986697 )
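As a stopgap until `DatasetInfo` gets a multi-line repr, a small sketch for readable output, relying on `DatasetInfo` being a dataclass:

```python
import dataclasses
from pprint import pprint

from datasets import load_dataset

dataset = load_dataset("glue", "mrpc", split="train")

# Pretty-print the info fields one per line instead of one long repr.
pprint(dataclasses.asdict(dataset.info), width=100)
```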
6,495
https://github.com/huggingface/datasets/issues/6490
`load_dataset(...,save_infos=True)` not working without loading script
[ "Also, once the README.md exists in the python environment it is used when loading another dataset in the same format (e.g. json) since it always resolves the path to the same directory.\r\nThe consequence here is any other dataset won't load because of infos mismatch.\r\nTo reproduce this aspect:\r\n1. Do a `load_datasets(...,save_infos=True)` with one dataset without a loading script\r\n2. Try to load another dataset without a loading script in the same format (e.g. json) but with a different schema " ]
### Describe the bug It seems that saving a dataset's infos back into the card file is not working for datasets without a loading script. After tracking the problem a bit, it looks like saving the infos uses `Builder.get_imported_module_dir()` as its destination directory. Internally this is a call to `inspect.getfile()`, but since the actual builder class used is dynamically created (cf. `datasets.load.configure_builder_class`), this method actually returns the path to the parent builder class (e.g. `datasets.packaged_modules.json.JSON`). ### Steps to reproduce the bug 1. Have a local dataset without any loading script 2. Make sure there are no dataset infos in the README.md 3. Load with `save_infos=True` 4. No change in the dataset README.md 5. A new README.md file is created in the directory of the parent builder class (e.g. for json in `.../site-packages/datasets/packaged_modules/json/README.md`) ### Expected behavior The dataset README.md should be updated and no file should be created in the Python environment. ### Environment info - `datasets` version: 2.15.0 - Platform: Linux-6.2.0-37-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.19.4 - PyArrow version: 14.0.1 - Pandas version: 2.1.3 - `fsspec` version: 2023.6.0
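A small self-contained sketch of the `inspect.getfile()` behavior described above: resolving the file of the parent builder class lands inside the installed package, not in the dataset directory (the printed path is illustrative):

```python
import inspect

from datasets.packaged_modules.json.json import Json

# The dynamically configured builder subclasses Json, and inspect.getfile()
# resolves through the class's defining module, so any file written next to
# it ends up inside site-packages rather than next to the dataset.
print(inspect.getfile(Json))
# e.g. .../site-packages/datasets/packaged_modules/json/json.py
```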
6,490
https://github.com/huggingface/datasets/issues/6489
load_dataset imagefolder for aws s3 path
[]
### Feature request I would like to load a dataset from S3 using the imagefolder option, something like `dataset = datasets.load_dataset('imagefolder', data_dir='s3://.../lsun/train/bedroom', fs=S3FileSystem(), streaming=True)` ### Motivation No need for data_files. ### Your contribution No experience with this.
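For what it's worth, `load_dataset` already accepts a `storage_options` mapping for remote filesystems; whether imagefolder-over-S3 works end to end may depend on the version, so treat this as a sketch (credentials and path are placeholders):

```python
from datasets import load_dataset

storage_options = {
    "key": "AWS_ACCESS_KEY_ID",        # placeholder credentials
    "secret": "AWS_SECRET_ACCESS_KEY",
}

dataset = load_dataset(
    "imagefolder",
    data_dir="s3://my-bucket/lsun/train/bedroom",  # placeholder path
    storage_options=storage_options,
    streaming=True,
)
```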
6,489
https://github.com/huggingface/datasets/issues/6488
429 Client Error
[ "Transferring repos as this is a datasets issue ", "I'm getting a similar issue even though I've already downloaded the dataset 😅 \r\n\r\n```\r\nhuggingface_hub.utils._errors.HfHubHTTPError: 429 Client Error: Too Many Requests for url: https://huggingface.co/api/datasets/HuggingFaceM4/WebSight\r\n```" ]
Hello, I was downloading the following dataset, and after 20% of the data had downloaded, I started getting error 429. It has not been resolved for a few days. How should I resolve it? Thanks Dataset: https://huggingface.co/datasets/cerebras/SlimPajama-627B Error: `requests.exceptions.HTTPError: 429 Client Error: Too Many Requests for url: https://huggingface.co/datasets/cerebras/SlimPajama-627B/resolve/2d0accdd58c5d5511943ca1f5ff0e3eb5e293543/train/chunk1/example_train_3300.jsonl.zst`
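Since 429 is rate limiting, backing off and retrying usually gets the download moving again; completed files stay in the cache, so a retry resumes where it left off. A minimal sketch:

```python
import time

from datasets import load_dataset

for attempt in range(5):
    try:
        ds = load_dataset("cerebras/SlimPajama-627B", split="train")
        break
    except Exception as err:  # e.g. HTTP 429 Too Many Requests
        wait = 60 * 2 ** attempt
        print(f"Download failed ({err}); retrying in {wait}s")
        time.sleep(wait)
```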
6,488
https://github.com/huggingface/datasets/issues/6485
FileNotFoundError: [Errno 2] No such file or directory: 'nul'
[ "Hi! It seems like the problem is your environment. Maybe this issue can help: https://github.com/pytest-dev/pytest/issues/9519. " ]
### Describe the bug Something seems to be wrong with my environment. When I run `import datasets`, I get this error: FileNotFoundError: [Errno 2] No such file or directory: 'nul' ![image](https://github.com/huggingface/datasets/assets/73683903/3973c120-ebb1-42b7-bede-b9de053e861d) ![image](https://github.com/huggingface/datasets/assets/73683903/0496adff-a7a7-4dcb-929e-ec11ede71f04) ### Steps to reproduce the bug 1. import datasets ### Expected behavior I run a single line of code and get stuck on this bug. ### Environment info OS: Windows10 Datasets==2.15.0 python=3.10
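Per the pytest issue linked in the comments, 'nul' is `os.devnull` on Windows, so the failure points at the environment rather than `datasets` itself. A quick diagnostic sketch:

```python
import os

print(os.devnull)  # "nul" on Windows, "/dev/null" elsewhere

# If this raises, the null device is broken in this environment, which
# would explain the FileNotFoundError during import.
with open(os.devnull, "w") as f:
    f.write("test")
```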
6,485
https://github.com/huggingface/datasets/issues/6483
Iterable Dataset: rename column clashes with remove column
[ "Column \"text\" doesn't exist anymore so you can't remove it", "You can get the expected result by fixing typos in the snippet :)\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n# load LS in streaming mode\r\ndataset = load_dataset(\"librispeech_asr\", \"clean\", split=\"validation\", streaming=True)\r\n\r\n# check original features\r\ndataset_features = dataset.features.keys()\r\nprint(\"Original features: \", dataset_features)\r\n\r\n# rename \"text\" -> \"sentence\"\r\ndataset = dataset.rename_column(\"text\", \"sentence\")\r\n\r\n# remove unwanted columns\r\nCOLUMNS_TO_KEEP = {\"audio\", \"sentence\"}\r\ndataset = dataset.remove_columns(set(dataset.features) - COLUMNS_TO_KEEP)\r\n\r\n# stream first sample, should return \"audio\" and \"sentence\" columns\r\nprint(next(iter(dataset)))\r\n```", "Fixed code:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n# load LS in streaming mode\r\ndataset = load_dataset(\"librispeech_asr\", \"clean\", split=\"validation\", streaming=True)\r\n\r\n# check original features\r\ndataset_features = dataset.features.keys()\r\nprint(\"Original features: \", dataset_features)\r\n\r\n# rename \"text\" -> \"sentence\"\r\ndataset = dataset.rename_column(\"text\", \"sentence\")\r\ndataset_features = dataset.features.keys()\r\n\r\n# remove unwanted columns\r\nCOLUMNS_TO_KEEP = {\"audio\", \"sentence\"}\r\ndataset = dataset.remove_columns(set(dataset_features - COLUMNS_TO_KEEP))\r\n\r\n# stream first sample, should return \"audio\" and \"sentence\" columns\r\nprint(next(iter(dataset)))\r\n```", "Whoops 😅 Thanks for the swift reply both! Works like a charm!" ]
### Describe the bug Suppose I have a two iterable datasets, one with the features: * `{"audio", "text", "column_a"}` And the other with the features: * `{"audio", "sentence", "column_b"}` I want to combine both datasets using `interleave_datasets`, which requires me to unify the column names. I would typically do this by: 1. Renaming the common columns to the same name (e.g. `"text"` -> `"sentence"`) 2. Removing the unwanted columns (e.g. `"column_a"`, `"column_b"`) However, the process of renaming and removing columns in an iterable dataset doesn't work, since we need to preserve the original text column, meaning we can't combine the datasets. ### Steps to reproduce the bug ```python from datasets import load_dataset # load LS in streaming mode dataset = load_dataset("librispeech_asr", "clean", split="validation", streaming=True) # check original features dataset_features = dataset.features.keys() print("Original features: ", dataset_features) # rename "text" -> "sentence" dataset = dataset.rename_column("text", "sentence") # remove unwanted columns COLUMNS_TO_KEEP = {"audio", "sentence"} dataset = dataset.remove_columns(set(dataset_features - COLUMNS_TO_KEEP)) # stream first sample, should return "audio" and "sentence" columns print(next(iter(dataset))) ``` Traceback: ```python --------------------------------------------------------------------------- KeyError Traceback (most recent call last) Cell In[5], line 17 14 COLUMNS_TO_KEEP = {"audio", "sentence"} 15 dataset = dataset.remove_columns(set(dataset_features - COLUMNS_TO_KEEP)) ---> 17 print(next(iter(dataset))) File ~/datasets/src/datasets/iterable_dataset.py:1353, in IterableDataset.__iter__(self) 1350 yield formatter.format_row(pa_table) 1351 return -> 1353 for key, example in ex_iterable: 1354 if self.features: 1355 # `IterableDataset` automatically fills missing columns with None. 1356 # This is done with `_apply_feature_types_on_example`. 1357 example = _apply_feature_types_on_example( 1358 example, self.features, token_per_repo_id=self._token_per_repo_id 1359 ) File ~/datasets/src/datasets/iterable_dataset.py:652, in MappedExamplesIterable.__iter__(self) 650 yield from ArrowExamplesIterable(self._iter_arrow, {}) 651 else: --> 652 yield from self._iter() File ~/datasets/src/datasets/iterable_dataset.py:729, in MappedExamplesIterable._iter(self) 727 if self.remove_columns: 728 for c in self.remove_columns: --> 729 del transformed_example[c] 730 yield key, transformed_example 731 current_idx += 1 KeyError: 'text' ``` => we see that `datasets` is looking for the column "text", even though we've renamed this to "sentence" and then removed the un-wanted "text" column from our dataset. ### Expected behavior Should be able to rename and remove columns from iterable dataset. ### Environment info - `datasets` version: 2.15.1.dev0 - Platform: macOS-13.5.1-arm64-arm-64bit - Python version: 3.11.6 - `huggingface_hub` version: 0.19.4 - PyArrow version: 14.0.1 - Pandas version: 2.1.2 - `fsspec` version: 2023.9.2
6,483
https://github.com/huggingface/datasets/issues/6484
[Feature Request] Dataset versioning
[ "Hello @kenfus, this is meant to be possible to do yes. Let me ping @lhoestq or @mariosasko from the `datasets` team (`huggingface_hub` is only the underlying library to download files from the Hub but here it looks more like a `datasets` problem). ", "Hi! https://github.com/huggingface/datasets/pull/6459 will fix this." ]
**Is your feature request related to a problem? Please describe.** I am working on a project where I would like to test different preprocessing methods for my ML data. Thus, I would like to work a lot with revisions and compare them. Currently, I was not able to make it work with the revision keyword: it was not redownloading the data but reading in cached data until I set `download_mode="force_redownload"`, even though the revision was different. Of course, I may have done something wrong or missed a setting somewhere! **Describe the solution you'd like** The solution would allow me to easily work with revisions: - create a new dataset (by combining things, different preprocessing, ..) and give it a new revision (v.1.2.3), maybe like this: `dataset_audio.push_to_hub('kenfus/xy', revision='v1.0.2')` - then, get the current revision as follows: ``` dataset = load_dataset( 'kenfus/xy', revision='v1.0.2', ) ``` this downloads the new version and does not load in a different revision, and all future map, filter, .. operations are done on this dataset and not loaded from a cache produced from a different revision. - if I rerun the script, the caching should be smart enough at every step not to reuse a mapping operation from a different revision. **Describe alternatives you've considered** I created my own caching, putting `download_mode="force_redownload"` and `load_from_cache_file=False,` everywhere. **Additional context** Thanks a lot for your great work! Creating NLP datasets and training a model with them is really easy and straightforward with huggingface. This is the data loading in my script: ``` ## CREATE PATHS prepared_dataset_path = os.path.join( DATA_FOLDER, str(DATA_VERSION), "prepared_dataset" ) os.makedirs(os.path.join(DATA_FOLDER, str(DATA_VERSION)), exist_ok=True) ## LOAD DATASET if os.path.exists(prepared_dataset_path): print("Loading prepared dataset from disk...") dataset_prepared = load_from_disk(prepared_dataset_path) else: print("Loading dataset from HuggingFace Datasets...") dataset = load_dataset( PATH_TO_DATASET, revision=DATA_VERSION, download_mode="force_redownload" ) print("Preparing dataset...") dataset_prepared = dataset.map( prepare_dataset, remove_columns=["audio", "transcription"], num_proc=os.cpu_count(), load_from_cache_file=False, ) dataset_prepared.save_to_disk(prepared_dataset_path) del dataset if CHECK_DATASET: ## CHECK DATASET dataset_prepared = dataset_prepared.map( check_dimensions, num_proc=os.cpu_count(), load_from_cache_file=False ) dataset_filtered = dataset_prepared.filter( lambda example: not example["incorrect_dimension"], load_from_cache_file=False, ) for example in dataset_prepared.filter( lambda example: example["incorrect_dimension"], load_from_cache_file=False ): print(example["path"]) print( f"Number of examples with incorrect dimension: {len(dataset_prepared) - len(dataset_filtered)}" ) print("Number of examples train: ", len(dataset_filtered["train"])) print("Number of examples test: ", len(dataset_filtered["test"])) ```
6,484
https://github.com/huggingface/datasets/issues/6481
using torchrun, save_to_disk suddenly shows SIGTERM
[]
### Describe the bug When I run my code with the "torchrun" command and it reaches the "save_to_disk" step, I suddenly get the following warning and error messages: Because the dataset is too large, the "save_to_disk" function splits it into 70 parts for saving. However, an error occurs suddenly when it reaches the 14th shard. WARNING: torch.distributed.elastic.multiprocessing.api: Sending process 2224968 closing signal SIGTERM ERROR: torch.distributed.elastic.multiprocessing.api: failed (exitcode: -7). traceback: Signal 7 (SIGBUS) received by PID 2224967. ### Steps to reproduce the bug ds_shard = ds_shard.map(map_fn, *args, **kwargs) ds_shard.save_to_disk(ds_shard_filepaths[rank]) Saving the dataset (14/70 shards): 20%|██ | 875350/4376702 [00:19<01:53, 30863.15 examples/s] WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2224968 closing signal SIGTERM ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -7) local_rank: 0 (pid: 2224967) of binary: /home/bingxing2/home/scx6964/.conda/envs/ariya235/bin/python Traceback (most recent call last): File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/bin/torchrun", line 8, in <module> sys.exit(main()) File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper return f(*args, **kwargs) File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/run.py", line 794, in main run(args) File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/run.py", line 785, in run elastic_launch( File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 134, in __call__ return launch_agent(self._config, self._entrypoint, list(args)) File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent raise ChildFailedError( torch.distributed.elastic.multiprocessing.errors.ChildFailedError: ========================================================== run.py FAILED ---------------------------------------------------------- Failures: <NO_OTHER_FAILURES> ---------------------------------------------------------- Root Cause (first observed failure): [0]: time : 2023-12-08_20:09:04 rank : 0 (local_rank: 0) exitcode : -7 (pid: 2224967) error_file: <N/A> traceback : Signal 7 (SIGBUS) received by PID 2224967 ### Expected behavior I hope it can save successfully without any issues, but it seems there is a problem. ### Environment info `datasets` version: 2.14.6 - Platform: Linux-4.19.90-24.4.v2101.ky10.aarch64-aarch64-with-glibc2.28 - Python version: 3.10.11 - Huggingface_hub version: 0.17.3 - PyArrow version: 14.0.0 - Pandas version: 2.1.2
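Signal 7 (SIGBUS) during a memory-mapped write often points at exhausted shared memory or a shaky network filesystem rather than at `datasets` itself. One diagnostic sketch, under the unconfirmed assumption that concurrent writes from all ranks are the trigger, is to serialize the per-rank saves:

```python
import torch.distributed as dist

# ds_shard and ds_shard_filepaths come from the snippet above.
world_size = dist.get_world_size()
rank = dist.get_rank()

# Each rank writes only on its turn, ruling out contention between
# simultaneous memory-mapped writes on the shared filesystem.
for turn in range(world_size):
    if rank == turn:
        ds_shard.save_to_disk(ds_shard_filepaths[rank])
    dist.barrier()
```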
6,481
https://github.com/huggingface/datasets/issues/6478
How to load data from lakefs
[ "You can create a `pandas` DataFrame following [this](https://lakefs.io/data-version-control/dvc-using-python/) tutorial, and then convert this DataFrame to a `Dataset` with `datasets.Dataset.from_pandas`. For larger datasets (to memory map them), you can use `Dataset.from_generator` with a generator function that reads lakeFS files with `s3fs`.", "@mariosasko hello,\r\nThis can achieve and https://huggingface.co/datasets Does the same effect apply to the dataset? For example, downloading while using", "There is a blogspot from lakes on this topic: https://lakefs.io/blog/data-version-control-hugging-face-datasets/" ]
My dataset is stored on the company's lakeFS server. How can I write code to load the dataset? It would be great if you could provide code examples or some references.
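Building on the first suggestion in the comments, a hedged sketch that reads files from lakeFS's S3-compatible endpoint with `s3fs` and feeds them to `Dataset.from_generator`. The endpoint, repository, branch, credentials, and record schema are all placeholders:

```python
import s3fs
from datasets import Dataset

fs = s3fs.S3FileSystem(
    key="LAKEFS_ACCESS_KEY_ID",          # placeholder credentials
    secret="LAKEFS_SECRET_ACCESS_KEY",
    client_kwargs={"endpoint_url": "https://lakefs.example.com"},  # placeholder
)

def generate_rows():
    # lakeFS object paths look like <repo>/<branch>/<path>.
    for path in fs.ls("my-repo/main/data"):
        with fs.open(path, "r") as f:
            for line in f:
                yield {"text": line.strip()}  # placeholder schema

ds = Dataset.from_generator(generate_rows)
```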
6,478
https://github.com/huggingface/datasets/issues/6476
CI on windows is broken: PermissionError
[]
See: https://github.com/huggingface/datasets/actions/runs/7104781624/job/19340572394 ``` FAILED tests/test_load.py::test_loading_from_the_datasets_hub - NotADirectoryError: [WinError 267] The directory name is invalid: 'C:\\Users\\RUNNER~1\\AppData\\Local\\Temp\\tmpfcnps56i\\hf-internal-testing___dataset_with_script\\default\\0.0.0\\c240e2be3370bdbd\\dataset_with_script-train.arrow' ```
6,476
https://github.com/huggingface/datasets/issues/6475
laion2B-en failed to load on Windows with PrefetchVirtualMemory failed
[ "~~You will see this error if the cache dir filepath contains relative `..` paths. Use `os.path.realpath(_CACHE_DIR)` before passing it to the `load_dataset` function.~~", "This is a real issue and not related to paths.", "Based on the StackOverflow answer, this causes the error to go away:\r\n```diff\r\ndiff --git a/table.py b/table.py\r\n--- a/table.py\t\r\n+++ b/table.py\t(date 1701824849806)\r\n@@ -47,7 +47,7 @@\r\n \r\n \r\n def _memory_mapped_record_batch_reader_from_file(filename: str) -> pa.RecordBatchStreamReader:\r\n- memory_mapped_stream = pa.memory_map(filename)\r\n+ memory_mapped_stream = pa.memory_map(filename, \"r+\")\r\n return pa.ipc.open_stream(memory_mapped_stream)\r\n```\r\nBut now loading the dataset goes very, very slowly, which is unexpected.", "I don't really comprehend what it is that `datasets` gave me when it downloaded the laion2B-en dataset, because nothing can seemingly read these 1024 .arrow files it is retrieving. Not `polars`, not `pyarrow`, it's not an `ipc` file, it's not a `parquet` file...", "Hi! \r\n\r\nInstead of generating one (potentially large) Arrow file, we shard the generated data into 500 MB shards because memory-mapping large Arrow files can be problematic on some systems. Maybe deleting the dataset's cache and increasing the shard size (controlled with the `datasets.config.MAX_SHARD_SIZE` variable; e.g. to \"4GB\") can fix the issue for you.\r\n\r\n> I don't really comprehend what it is that `datasets` gave me when it downloaded the laion2B-en dataset, because nothing can seemingly read these 1024 .arrow files it is retrieving. Not `polars`, not `pyarrow`, it's not an `ipc` file, it's not a `parquet` file...\r\n\r\nOur `.arrow` files are in the [Arrow streaming format](https://arrow.apache.org/docs/python/ipc.html#using-streams). To load them as a `polars` DataFrame, do the following:\r\n```python\r\ndf = pl.from_arrow(Dataset.from_from(path_to_arrow_file).data.table)\r\n```\r\n\r\nWe plan to switch to the IPC version eventually.\r\n", "Hmm, I have a feeling this works fine on Linux, and is a real bug for however `datasets` is doing the sharding on Windows. I will follow up, but I think this is a real bug." ]
### Describe the bug I have downloaded laion2B-en, and I'm receiving the following error trying to load it: ``` Resolving data files: 100%|██████████| 128/128 [00:00<00:00, 1173.79it/s] Traceback (most recent call last): File "D:\Art-Workspace\src\artworkspace\tokeneval\compute_frequencies.py", line 31, in <module> count = compute_frequencies() ^^^^^^^^^^^^^^^^^^^^^ File "D:\Art-Workspace\src\artworkspace\tokeneval\compute_frequencies.py", line 17, in compute_frequencies laion2b_dataset = load_dataset("laion/laion2B-en", split="train", cache_dir=_CACHE_DIR, keep_in_memory=False) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\load.py", line 2165, in load_dataset ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\builder.py", line 1187, in as_dataset datasets = map_nested( ^^^^^^^^^^^ File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\utils\py_utils.py", line 456, in map_nested return function(data_struct) ^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\builder.py", line 1217, in _build_single_dataset ds = self._as_dataset( ^^^^^^^^^^^^^^^^^ File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\builder.py", line 1291, in _as_dataset dataset_kwargs = ArrowReader(cache_dir, self.info).read( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\arrow_reader.py", line 244, in read return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\arrow_reader.py", line 265, in read_files pa_table = self._read_files(files, in_memory=in_memory) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\arrow_reader.py", line 200, in _read_files pa_table: Table = self._get_table_from_filename(f_dict, in_memory=in_memory) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\arrow_reader.py", line 336, in _get_table_from_filename table = ArrowReader.read_table(filename, in_memory=in_memory) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\arrow_reader.py", line 357, in read_table return table_cls.from_file(filename) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\table.py", line 1059, in from_file table = _memory_mapped_arrow_table_from_file(filename) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\table.py", line 66, in _memory_mapped_arrow_table_from_file pa_table = opened_stream.read_all() ^^^^^^^^^^^^^^^^^^^^^^^^ File "pyarrow\ipc.pxi", line 757, in pyarrow.lib.RecordBatchReader.read_all File "pyarrow\error.pxi", line 91, in pyarrow.lib.check_status OSError: [WinError 8] PrefetchVirtualMemory failed. 
Detail: [Windows error 8] Not enough memory resources are available to process this command. ``` This error is probably a red herring: https://stackoverflow.com/questions/50263929/numpy-memmap-returns-not-enough-memory-while-there-are-plenty-available In other words, the issue is related to requesting a memory mapping whose length N exceeds the file's length M on Windows. This gracefully succeeds on Linux. I have 1024 Arrow files in my cache instead of the 128 in the repository. Probably related. I don't know why `datasets` reorganized/rewrote the dataset in my cache to be 1024 slices instead of the original 128. ### Steps to reproduce the bug ``` # as a huggingface developer, you may already have laion2B-en somewhere _CACHE_DIR = "." from datasets import load_dataset load_dataset("laion/laion2B-en", split="train", cache_dir=_CACHE_DIR, keep_in_memory=False) ``` ### Expected behavior This should correctly load as a memory-mapped Arrow dataset. ### Environment info - `datasets` version: 2.15.0 - Platform: Windows-10-10.0.20348-SP0 (this is windows 2022) - Python version: 3.11.4 - `huggingface_hub` version: 0.19.4 - PyArrow version: 14.0.1 - Pandas version: 2.1.2 - `fsspec` version: 2023.10.0
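Following the suggestion in the comments, the shard size can be raised before loading so that fewer, larger Arrow files are produced (delete the dataset's cache first). `MAX_SHARD_SIZE` is an internal config knob, so treat this as a workaround sketch:

```python
import datasets

# Default is "500MB"; larger shards mean fewer memory maps on Windows.
datasets.config.MAX_SHARD_SIZE = "4GB"

ds = datasets.load_dataset("laion/laion2B-en", split="train", keep_in_memory=False)
```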
6,475
https://github.com/huggingface/datasets/issues/6472
CI quality is broken
[]
See: https://github.com/huggingface/datasets/actions/runs/7100835633/job/19327734359 ``` Would reformat: src/datasets/features/image.py 1 file would be reformatted, 253 files left unchanged ```
6,472
https://github.com/huggingface/datasets/issues/6470
If an image in a dataset is corrupted, we get an inescapable error
[]
### Describe the bug Example discussed in detail here: https://huggingface.co/datasets/sasha/birdsnap/discussions/1 ### Steps to reproduce the bug ``` from datasets import load_dataset, VerificationMode dataset = load_dataset( 'sasha/birdsnap', split="train", verification_mode=VerificationMode.ALL_CHECKS, streaming=True # I recommend using streaming=True when reproducing, as this dataset is large ) for idx, row in enumerate(dataset): # Iterating to 9287 took 7 minutes for me # If you already have the data locally cached and set streaming=False, you see the same error just by with dataset[9287] pass # error at 9287 OSError: image file is truncated (45 bytes not processed) # note that we can't avoid the error using a try/except + continue inside the loop ``` ### Expected behavior Able to escape errors in casting to Image() without killing the whole loop ### Environment info - `datasets` version: 2.15.0 - Platform: Linux-5.15.0-84-generic-x86_64-with-glibc2.31 - Python version: 3.11.5 - `huggingface_hub` version: 0.19.4 - PyArrow version: 14.0.1 - Pandas version: 2.1.3 - `fsspec` version: 2023.10.0
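One application-level escape hatch (not a `datasets` fix): Pillow can be told to tolerate truncated files, at the cost of silently dropping the corrupted tail of the image:

```python
from PIL import ImageFile

# Allow truncated image files to decode instead of raising OSError.
ImageFile.LOAD_TRUNCATED_IMAGES = True

from datasets import load_dataset

dataset = load_dataset("sasha/birdsnap", split="train", streaming=True)
for idx, row in enumerate(dataset):
    pass  # the previously fatal row now decodes, with missing pixel data
```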
6,470
https://github.com/huggingface/datasets/issues/6467
New version release request
[ "We will publish it soon (we usually do it in intervals of 1-2 months, so probably next week)", "Thanks!" ]
### Feature request Hi! I am using `datasets` in the `xtuner` library and am highly interested in the features introduced since v2.15.0. To avoid installing from source in our PyPI wheels, we are eagerly waiting for the new release. So, does your team have a release plan for v2.15.1, and could you please share it with us? Thanks very much! ### Motivation . ### Your contribution .
6,467
https://github.com/huggingface/datasets/issues/6466
Can't align optional features of struct
[ "Friendly bump, I would be happy to work on this issue once I get the go-ahead from the dev team. ", "Thanks for the PR!\r\n\r\nI'm struggling with this as well and would love to see this PR merged. My case is slightly different, with keys completely missing rather than being `None`:\r\n\r\n```\r\nds = Dataset.from_dict({'speaker': [{'name': 'Ben'}]})\r\nds2 = Dataset.from_dict({'speaker': [{'name': 'Fred', 'email': '[email protected]'}]})\r\nprint(concatenate_datasets([ds, ds2]).features)\r\nprint(concatenate_datasets([ds, ds2]).to_dict())\r\n```\r\n\r\nI would expect this to work as well because other Dataset functions already handle this situation well. For example, this works just as expected:\r\n\r\n```\r\nds = Dataset.from_dict({'n': [1,2]})\r\nds_mapped = ds.map(lambda x: {\r\n 'speaker': {'name': 'Ben'} if x['n'] == 1 else {'name': 'Fred', 'email': '[email protected]'}\r\n})\r\nprint(ds_mapped)\r\n```", "@vova-cyberhaven can you check with the new release if it fixes your issue? " ]
### Describe the bug Hello! I'm currently experiencing an issue where I can't concatenate datasets if an inner field of a Feature is Optional. I have a column named `speaker`, and this holds some information about a speaker. ```python @dataclass class Speaker: name: str email: Optional[str] ``` If I have two datasets, one happens to have `email` always None, then I get `The features can't be aligned because the key email of features` ### Steps to reproduce the bug You can run the following script: ```python ds = Dataset.from_dict({'speaker': [{'name': 'Ben', 'email': None}]}) ds2 = Dataset.from_dict({'speaker': [{'name': 'Fred', 'email': '[email protected]'}]}) concatenate_datasets([ds, ds2]) >>>The features can't be aligned because the key speaker of features {'speaker': {'email': Value(dtype='string', id=None), 'name': Value(dtype='string', id=None)}} has unexpected type - {'email': Value(dtype='string', id=None), 'name': Value(dtype='string', id=None)} (expected either {'email': Value(dtype='null', id=None), 'name': Value(dtype='string', id=None)} or Value("null"). ``` ### Expected behavior I think this should work; if two top-level columns were in the same situation it would properly cast to `string`. ```python ds = Dataset.from_dict({'email': [None, None]}) ds2 = Dataset.from_dict({'email': ['[email protected]', '[email protected]']}) concatenate_datasets([ds, ds2]) >>> # Works! ``` ### Environment info - `datasets` version: 2.15.1.dev0 - Platform: Linux-5.15.0-89-generic-x86_64-with-glibc2.35 - Python version: 3.9.13 - `huggingface_hub` version: 0.19.4 - PyArrow version: 9.0.0 - Pandas version: 1.4.4 - `fsspec` version: 2023.6.0 I would be happy to fix this issue.
6,466
https://github.com/huggingface/datasets/issues/6465
`load_dataset` uses out-of-date cache instead of re-downloading a changed dataset
[ "Hi, thanks for reporting! https://github.com/huggingface/datasets/pull/6459 will fix this." ]
### Describe the bug When a dataset is updated on the hub, using `load_dataset` will load the locally cached dataset instead of re-downloading the updated dataset ### Steps to reproduce the bug Here is a minimal example script to 1. create an initial dataset and upload 2. download it so it is stored in cache 3. change the dataset and re-upload 4. redownload ```python import time from datasets import Dataset, DatasetDict, DownloadMode, load_dataset username = "YOUR_USERNAME_HERE" initial = Dataset.from_dict({"foo": [1, 2, 3]}) print(f"Intial {initial['foo']}") initial_ds = DatasetDict({"train": initial}) initial_ds.push_to_hub("test") time.sleep(1) download = load_dataset(f"{username}/test", split="train") changed = download.map(lambda x: {"foo": x["foo"] + 1}) print(f"Changed {changed['foo']}") changed.push_to_hub("test") time.sleep(1) download_again = load_dataset(f"{username}/test", split="train") print(f"Download Changed {download_again['foo']}") # >>> gives the out-dated [1,2,3] when it should be changed [2,3,4] ``` The redownloaded dataset should be the changed dataset but it is actually the cached, initial dataset. Force-redownloading gives the correct dataset ```python download_again_force = load_dataset(f"{username}/test", split="train", download_mode=DownloadMode.FORCE_REDOWNLOAD) print(f"Force Download Changed {download_again_force['foo']}") # >>> [2,3,4] ``` ### Expected behavior I assumed there should be some sort of hashing that should check for changes in the dataset and re-download if the hashes don't match ### Environment info - `datasets` version: 2.15.0 │ - Platform: Linux-5.15.0-1028-nvidia-x86_64-with-glibc2.17 │ - Python version: 3.8.17 │ - `huggingface_hub` version: 0.19.4 │ - PyArrow version: 13.0.0 │ - Pandas version: 2.0.3 │ - `fsspec` version: 2023.6.0
6,465
https://github.com/huggingface/datasets/issues/6460
jsonlines files don't load with `load_dataset`
[ "Hi @serenalotreck,\r\n\r\nWe use Apache Arrow `pyarrow` to read jsonlines and it throws an error when trying to load your data files:\r\n```python\r\nIn [1]: import pyarrow as pa\r\n\r\nIn [2]: data = pa.json.read_json(\"train.jsonl\")\r\n---------------------------------------------------------------------------\r\nArrowInvalid Traceback (most recent call last)\r\n<ipython-input-14-e9b104832528> in <module>\r\n----> 1 data = pa.json.read_json(\"train.jsonl\")\r\n\r\n.../huggingface/datasets/venv/lib/python3.9/site-packages/pyarrow/_json.pyx in pyarrow._json.read_json()\r\n\r\n.../huggingface/datasets/venv/lib/python3.9/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()\r\n\r\n.../huggingface/datasets/venv/lib/python3.9/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()\r\n\r\nArrowInvalid: JSON parse error: Column(/ner/[]/[]/[]) changed from number to string in row 0\r\n```\r\n\r\nI think it has to do with the data structure of the fields \"ner\" (and also \"relations\"):\r\n```json\r\n\"ner\": [\r\n [\r\n [0, 4, \"Biochemical_process\"], \r\n [15, 16, \"Protein\"]\r\n ], \r\n```\r\nArrow interprets this data structure as an array, an arrays contain just a single data type: \r\n- when reading sequentially, it finds first the `0` and infers that the data is of type `number`;\r\n- when it finds the string `\"Biochemical_process\"`, it cannot cast it to number and throws the `ArrowInvalid` error\r\n\r\nOne solution could be to change the data structure of your data files. Any other ideas, @huggingface/datasets ?", "Hi @albertvillanova, \r\n\r\nThanks for the explanation! To the best of my knowledge, arrays in a json [can contain multiple data types](https://docs.actian.com/ingres/11.2/index.html#page/SQLRef/Data_Types.htm), and I'm able to read these files with the `jsonlines` package. Is the requirement for arrays to only have one data type specific to PyArrow?\r\n\r\nI'd prefer to keep the data structure as is, since it's a specific input requirement for the models this data was generated for. Any thoughts on how to enable the use of `load_dataset` with this dataset would be great!", "Hi again @serenalotreck,\r\n\r\nYes, it is specific to PyArrow: as far as I know, Arrow does not support arrays with multiple data types.\r\n\r\nAs this is related specifically to your dataset structure (and not the `datasets` library), I have created a dedicated issue in your dataset page: https://huggingface.co/datasets/slotreck/pickle/discussions/1\r\n\r\nLet's continue the discussion there! :hugs: ", "> Hi again @serenalotreck,\r\n> \r\n> Yes, it is specific to PyArrow: as far as I know, Arrow does not support arrays with multiple data types.\r\n> \r\n> As this is related specifically to your dataset structure (and not the `datasets` library), I have created a dedicated issue in your dataset page: https://huggingface.co/datasets/slotreck/pickle/discussions/1\r\n> \r\n> Let's continue the discussion there! 🤗\r\n\r\nThis is really terrible. 
My JSONL-format data is very simple, but it still raises this error\r\n![image](https://github.com/huggingface/datasets/assets/58240629/e3fed922-ced4-406c-b5bc-90a4b891c4ee)\r\nThe error message is as follows:\r\n File \"pyarrow/_json.pyx\", line 290, in pyarrow._json.read_json\r\n File \"pyarrow/error.pxi\", line 144, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 100, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: JSON parse error: Column(/inputs) changed from string to number in row 208\r\n" ]
### Describe the bug While [the docs](https://huggingface.co/docs/datasets/upload_dataset#upload-dataset) seem to state that `.jsonl` is a supported extension for `datasets`, loading the dataset results in a `JSONDecodeError`. ### Steps to reproduce the bug Code: ``` from datasets import load_dataset dset = load_dataset('slotreck/pickle') ``` Traceback: ``` Downloading readme: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 925/925 [00:00<00:00, 3.11MB/s] Downloading and preparing dataset json/slotreck--pickle to /mnt/home/lotrecks/.cache/huggingface/datasets/slotreck___json/slotreck--pickle-0c311f36ed032b04/0.0.0/8bb11242116d547c741b2e8a1f18598ffdd40a1d4f2a2872c7a28b697434bc96... Downloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 589k/589k [00:00<00:00, 18.9MB/s] Downloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 104k/104k [00:00<00:00, 4.61MB/s] Downloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 170k/170k [00:00<00:00, 7.71MB/s] Downloading data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 3.77it/s] Extracting data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 523.92it/s] Generating train split: 0 examples [00:00, ? examples/s]Failed to read file '/mnt/home/lotrecks/.cache/huggingface/datasets/downloads/6ec07bb2f279c9377036af6948532513fa8f48244c672d2644a2d7018ee5c9cb' with error <class 'pyarrow.lib.ArrowInvalid'>: JSON parse error: Column(/ner/[]/[]/[]) changed from number to string in row 0 Traceback (most recent call last): File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/packaged_modules/json/json.py", line 144, in _generate_tables dataset = json.load(f) File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/json/__init__.py", line 296, in load parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw) File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/json/__init__.py", line 348, in loads return _default_decoder.decode(s) File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/json/decoder.py", line 340, in decode raise JSONDecodeError("Extra data", s, end) json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 3086) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/builder.py", line 1879, in _prepare_split_single for _, table in generator: File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/packaged_modules/json/json.py", line 147, in _generate_tables raise e File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/packaged_modules/json/json.py", line 122, in _generate_tables io.BytesIO(batch), read_options=paj.ReadOptions(block_size=block_size) File "pyarrow/_json.pyx", line 259, in pyarrow._json.read_json File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status 
pyarrow.lib.ArrowInvalid: JSON parse error: Column(/ner/[]/[]/[]) changed from number to string in row 0 The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/load.py", line 1815, in load_dataset storage_options=storage_options, File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/builder.py", line 913, in download_and_prepare **download_and_prepare_kwargs, File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/builder.py", line 1004, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/builder.py", line 1768, in _prepare_split gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/builder.py", line 1912, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.builder.DatasetGenerationError: An error occurred while generating the dataset ``` ### Expected behavior For the dataset to be loaded without error. ### Environment info - `datasets` version: 2.13.1 - Platform: Linux-3.10.0-1160.80.1.el7.x86_64-x86_64-with-centos-7.9.2009-Core - Python version: 3.7.12 - Huggingface_hub version: 0.15.1 - PyArrow version: 8.0.0 - Pandas version: 1.3.5
6,460
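A workaround sketch for the mixed-type `ner` arrays discussed above: restructure each `[start, end, label]` triple into a dict so every Arrow field has a single type, then build the dataset in memory. The field names `start`/`end`/`label` are illustrative choices, not part of the original schema, and the `relations` field would need the same treatment:

```python
import json

from datasets import Dataset

rows = []
with open("train.jsonl") as f:
    for line in f:
        row = json.loads(line)
        # Turn [start, end, label] triples into typed dicts so Arrow infers
        # one type per field instead of one type per (mixed) array.
        row["ner"] = [
            [{"start": s, "end": e, "label": lab} for s, e, lab in sentence]
            for sentence in row["ner"]
        ]
        rows.append(row)

ds = Dataset.from_list(rows)
```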
https://github.com/huggingface/datasets/issues/6457
`TypeError`: huggingface_hub.hf_file_system.HfFileSystem.find() got multiple values for keyword argument 'maxdepth'
[ "Updating `fsspec>=2023.10.0` did solve the issue.", "May be it should be pinned somewhere?", "> Maybe this should go in datasets directly... anyways you can easily fix this error by updating datasets>=2.15.1.dev0.\r\n\r\n@lhoestq @mariosasko for what I understand this is a bug fixed in `datasets` already, right? No need to do anything in `huggingface_hub`?", "I've opened a PR with a fix in `huggingface_hub`: https://github.com/huggingface/huggingface_hub/pull/1875", "Thanks! PR is merged and will be shipped in next release of `huggingface_hub`." ]
### Describe the bug Please see https://github.com/huggingface/huggingface_hub/issues/1872 ### Steps to reproduce the bug Please see https://github.com/huggingface/huggingface_hub/issues/1872 ### Expected behavior Please see https://github.com/huggingface/huggingface_hub/issues/1872 ### Environment info Please see https://github.com/huggingface/huggingface_hub/issues/1872
6,457
https://github.com/huggingface/datasets/issues/6451
Unable to read "marsyas/gtzan" data
[ "Hi! We've merged a [PR](https://huggingface.co/datasets/marsyas/gtzan/discussions/1) that fixes the script's path logic on Windows.", "I have transferred the discussion to the corresponding dataset: https://huggingface.co/datasets/marsyas/gtzan/discussions/2\r\n\r\nLet's continue there.", "@mariosasko @albertvillanova \r\n\r\nThank you both very much for the speedy resolution :)" ]
Hi, this is my code and the error: ``` from datasets import load_dataset gtzan = load_dataset("marsyas/gtzan", "all") ``` [error_trace.txt](https://github.com/huggingface/datasets/files/13464397/error_trace.txt) [audio_yml.txt](https://github.com/huggingface/datasets/files/13464410/audio_yml.txt) Python 3.11.5 Jupyter Notebook 6.5.4 Windows 10 I'm able to download and work with other datasets, but not this one. For example, both these below work fine: ``` from datasets import load_dataset dataset = load_dataset("facebook/voxpopuli", "pl", split="train", streaming=True) minds = load_dataset("PolyAI/minds14", name="en-US", split="train") ``` Thanks for your help https://huggingface.co/datasets/marsyas/gtzan/tree/main
6,451
https://github.com/huggingface/datasets/issues/6450
Support multiple image/audio columns in ImageFolder/AudioFolder
[ "A duplicate of https://github.com/huggingface/datasets/issues/5760" ]
### Feature request Have a metadata.csv file with multiple columns that point to relative image or audio files. ### Motivation Currently, ImageFolder allows one column, called `file_name`, pointing to relative image files. Similarly, AudioFolder allows one column, called `file_name`, pointing to relative audio files. But it's not possible to have two image columns, two audio columns, or one audio column and one image column. ### Your contribution no specific contribution
6,450
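A possible workaround sketch for this limitation, pending native support: load the metadata CSV with the generic `csv` builder and cast two path columns to `Image` features. The column names `image_a`/`image_b` and the repository layout are hypothetical:

```python
import os

from datasets import Image, load_dataset

root = "my_dataset_repository"
ds = load_dataset("csv", data_files=os.path.join(root, "metadata.csv"), split="train")

# Resolve the relative paths, then decode both columns as images.
ds = ds.map(
    lambda x: {
        "image_a": os.path.join(root, x["image_a"]),
        "image_b": os.path.join(root, x["image_b"]),
    }
)
ds = ds.cast_column("image_a", Image()).cast_column("image_b", Image())
```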
https://github.com/huggingface/datasets/issues/6447
Support one dataset loader per config when using YAML
[]
### Feature request See https://huggingface.co/datasets/datasets-examples/doc-unsupported-1 I would like to use CSV loader for the "csv" config, JSONL loader for the "jsonl" config, etc. ### Motivation It would be more flexible for the users ### Your contribution No specific contribution
6,447
https://github.com/huggingface/datasets/issues/6446
Speech Commands v2 dataset doesn't match AST-v2 config
[ "You can use `.align_labels_with_mapping` on the dataset to align the labels with the model config.\r\n\r\nRegarding the number of labels, only the special `_silence_` label corresponding to noise is missing, which is consistent with the model paper (reports training on 35 labels). You can run a `.filter` to drop it.\r\n\r\nPS: You should create a discussion on a model/dataset repo (on the Hub) for these kinds of questions", "Thanks, will keep that in mind. But I tried running `dataset_aligned = dataset.align_labels_with_mapping(model.config.id2label, 'label')`, and received this error: \r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/Users/victor/anaconda3/envs/transformers-v2/lib/python3.9/site-packages/datasets/arrow_dataset.py\", line 5928, in align_labels_with_mapping\r\n label2id = {k.lower(): v for k, v in label2id.items()}\r\n File \"/Users/victor/anaconda3/envs/transformers-v2/lib/python3.9/site-packages/datasets/arrow_dataset.py\", line 5928, in <dictcomp>\r\n label2id = {k.lower(): v for k, v in label2id.items()}\r\nAttributeError: 'int' object has no attribute 'lower'\r\n```\r\nMy guess is that the dataset `label` column is purely an int ID, and I'm not sure there's a way to identify which class label the ID belongs to in the dataset easily.", "Replacing `model.config.id2label` with `model.config.label2id` should fix the issue.\r\n\r\nSo, the full code to align the labels with the model config is as follows:\r\n```python\r\nfrom datasets import load_dataset\r\nfrom transformers import AutoFeatureExtractor, AutoModelForAudioClassification\r\n\r\n# extractor = AutoFeatureExtractor.from_pretrained(\"MIT/ast-finetuned-speech-commands-v2\")\r\nmodel = AutoModelForAudioClassification.from_pretrained(\"MIT/ast-finetuned-speech-commands-v2\")\r\n\r\nds = load_dataset(\"speech_commands\", \"v0.02\")\r\nds = ds.filter(lambda label: label != ds[\"train\"].features[\"label\"].str2int(\"_silence_\"), input_columns=\"label\")\r\nds = ds.align_labels_with_mapping(model.config.label2id, \"label\")\r\n```" ]
### Describe the bug [According](https://huggingface.co/MIT/ast-finetuned-speech-commands-v2) to `MIT/ast-finetuned-speech-commands-v2`, the model was trained on the Speech Commands v2 dataset. However, while the model config says the model should have 35 class labels, the dataset itself has 36 class labels. Moreover, the class labels themselves don't match between the model config and the dataset. It is difficult to reproduce the data used to fine tune `MIT/ast-finetuned-speech-commands-v2`. ### Steps to reproduce the bug ``` >>> model = ASTForAudioClassification.from_pretrained("MIT/ast-finetuned-speech-commands-v2") >>> model.config.id2label {0: 'backward', 1: 'follow', 2: 'five', 3: 'bed', 4: 'zero', 5: 'on', 6: 'learn', 7: 'two', 8: 'house', 9: 'tree', 10: 'dog', 11: 'stop', 12: 'seven', 13: 'eight', 14: 'down', 15: 'six', 16: 'forward', 17: 'cat', 18: 'right', 19: 'visual', 20: 'four', 21: 'wow', 22: 'no', 23: 'nine', 24: 'off', 25: 'three', 26: 'left', 27: 'marvin', 28: 'yes', 29: 'up', 30: 'sheila', 31: 'happy', 32: 'bird', 33: 'go', 34: 'one'} >>> dataset = load_dataset("speech_commands", "v0.02", split="test") >>> torch.unique(torch.Tensor(dataset['label'])) tensor([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12., 13., 14., 15., 16., 17., 18., 19., 20., 21., 22., 23., 24., 25., 26., 27., 28., 29., 30., 31., 32., 33., 34., 35.]) ``` If you try to explore the [dataset itself](https://huggingface.co/datasets/speech_commands/viewer/v0.02/test), you can see that the id to label does not match what is provided by `model.config.id2label`. ### Expected behavior The labels should match completely and there should be the same number of label classes between the model config and the dataset itself. ### Environment info datasets = 2.14.6, transformers = 4.33.3
6,446
https://github.com/huggingface/datasets/issues/6443
Trouble loading files defined in YAML explicitly
[ "There is a typo in one of the file names - `data/edf.csv` should be renamed to `data/def.csv` 🙂. ", "wow, I reviewed it twice to avoid being ashamed like that, but... I didn't notice the typo.\r\n\r\n---\r\n\r\nBesides this: do you think we would be able to improve the error message to make this clearer?" ]
Look at https://huggingface.co/datasets/severo/doc-yaml-2 It's a reproduction of the example given in the docs at https://huggingface.co/docs/hub/datasets-manual-configuration ``` You can select multiple files per split using a list of paths: my_dataset_repository/ ├── README.md ├── data/ │ ├── abc.csv │ └── def.csv └── holdout/ └── ghi.csv --- configs: - config_name: default data_files: - split: train path: - "data/abc.csv" - "data/def.csv" - split: test path: "holdout/ghi.csv" --- ``` It raises the following error: ``` Error code: ConfigNamesError Exception: FileNotFoundError Message: Couldn't find a dataset script at /src/services/worker/severo/doc-yaml-2/doc-yaml-2.py or any data file in the same directory. Couldn't find 'severo/doc-yaml-2' on the Hugging Face Hub either: FileNotFoundError: Unable to find 'hf://datasets/severo/doc-yaml-2@938a0578fb4c6bc9da7d80b06a3ba39c2834b0c2/data/def.csv' with any supported extension ['.csv', '.tsv', '.json', '.jsonl', '.parquet', '.arrow', '.txt', '.blp', '.bmp', '.dib', '.bufr', '.cur', '.pcx', '.dcx', '.dds', '.ps', '.eps', '.fit', '.fits', '.fli', '.flc', '.ftc', '.ftu', '.gbr', '.gif', '.grib', '.h5', '.hdf', '.png', '.apng', '.jp2', '.j2k', '.jpc', '.jpf', '.jpx', '.j2c', '.icns', '.ico', '.im', '.iim', '.tif', '.tiff', '.jfif', '.jpe', '.jpg', '.jpeg', '.mpg', '.mpeg', '.msp', '.pcd', '.pxr', '.pbm', '.pgm', '.ppm', '.pnm', '.psd', '.bw', '.rgb', '.rgba', '.sgi', '.ras', '.tga', '.icb', '.vda', '.vst', '.webp', '.wmf', '.emf', '.xbm', '.xpm', '.BLP', '.BMP', '.DIB', '.BUFR', '.CUR', '.PCX', '.DCX', '.DDS', '.PS', '.EPS', '.FIT', '.FITS', '.FLI', '.FLC', '.FTC', '.FTU', '.GBR', '.GIF', '.GRIB', '.H5', '.HDF', '.PNG', '.APNG', '.JP2', '.J2K', '.JPC', '.JPF', '.JPX', '.J2C', '.ICNS', '.ICO', '.IM', '.IIM', '.TIF', '.TIFF', '.JFIF', '.JPE', '.JPG', '.JPEG', '.MPG', '.MPEG', '.MSP', '.PCD', '.PXR', '.PBM', '.PGM', '.PPM', '.PNM', '.PSD', '.BW', '.RGB', '.RGBA', '.SGI', '.RAS', '.TGA', '.ICB', '.VDA', '.VST', '.WEBP', '.WMF', '.EMF', '.XBM', '.XPM', '.aiff', '.au', '.avr', '.caf', '.flac', '.htk', '.svx', '.mat4', '.mat5', '.mpc2k', '.ogg', '.paf', '.pvf', '.raw', '.rf64', '.sd2', '.sds', '.ircam', '.voc', '.w64', '.wav', '.nist', '.wavex', '.wve', '.xi', '.mp3', '.opus', '.AIFF', '.AU', '.AVR', '.CAF', '.FLAC', '.HTK', '.SVX', '.MAT4', '.MAT5', '.MPC2K', '.OGG', '.PAF', '.PVF', '.RAW', '.RF64', '.SD2', '.SDS', '.IRCAM', '.VOC', '.W64', '.WAV', '.NIST', '.WAVEX', '.WVE', '.XI', '.MP3', '.OPUS', '.zip'] Traceback: Traceback (most recent call last): File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 65, in compute_config_names_response for config in sorted(get_dataset_config_names(path=dataset, token=hf_token)) File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 351, in get_dataset_config_names dataset_module = dataset_module_factory( File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1507, in dataset_module_factory raise FileNotFoundError( FileNotFoundError: Couldn't find a dataset script at /src/services/worker/severo/doc-yaml-2/doc-yaml-2.py or any data file in the same directory. 
Couldn't find 'severo/doc-yaml-2' on the Hugging Face Hub either: FileNotFoundError: Unable to find 'hf://datasets/severo/doc-yaml-2@938a0578fb4c6bc9da7d80b06a3ba39c2834b0c2/data/def.csv' with any supported extension ['.csv', '.tsv', '.json', '.jsonl', '.parquet', '.arrow', '.txt', '.blp', '.bmp', '.dib', '.bufr', '.cur', '.pcx', '.dcx', '.dds', '.ps', '.eps', '.fit', '.fits', '.fli', '.flc', '.ftc', '.ftu', '.gbr', '.gif', '.grib', '.h5', '.hdf', '.png', '.apng', '.jp2', '.j2k', '.jpc', '.jpf', '.jpx', '.j2c', '.icns', '.ico', '.im', '.iim', '.tif', '.tiff', '.jfif', '.jpe', '.jpg', '.jpeg', '.mpg', '.mpeg', '.msp', '.pcd', '.pxr', '.pbm', '.pgm', '.ppm', '.pnm', '.psd', '.bw', '.rgb', '.rgba', '.sgi', '.ras', '.tga', '.icb', '.vda', '.vst', '.webp', '.wmf', '.emf', '.xbm', '.xpm', '.BLP', '.BMP', '.DIB', '.BUFR', '.CUR', '.PCX', '.DCX', '.DDS', '.PS', '.EPS', '.FIT', '.FITS', '.FLI', '.FLC', '.FTC', '.FTU', '.GBR', '.GIF', '.GRIB', '.H5', '.HDF', '.PNG', '.APNG', '.JP2', '.J2K', '.JPC', '.JPF', '.JPX', '.J2C', '.ICNS', '.ICO', '.IM', '.IIM', '.TIF', '.TIFF', '.JFIF', '.JPE', '.JPG', '.JPEG', '.MPG', '.MPEG', '.MSP', '.PCD', '.PXR', '.PBM', '.PGM', '.PPM', '.PNM', '.PSD', '.BW', '.RGB', '.RGBA', '.SGI', '.RAS', '.TGA', '.ICB', '.VDA', '.VST', '.WEBP', '.WMF', '.EMF', '.XBM', '.XPM', '.aiff', '.au', '.avr', '.caf', '.flac', '.htk', '.svx', '.mat4', '.mat5', '.mpc2k', '.ogg', '.paf', '.pvf', '.raw', '.rf64', '.sd2', '.sds', '.ircam', '.voc', '.w64', '.wav', '.nist', '.wavex', '.wve', '.xi', '.mp3', '.opus', '.AIFF', '.AU', '.AVR', '.CAF', '.FLAC', '.HTK', '.SVX', '.MAT4', '.MAT5', '.MPC2K', '.OGG', '.PAF', '.PVF', '.RAW', '.RF64', '.SD2', '.SDS', '.IRCAM', '.VOC', '.W64', '.WAV', '.NIST', '.WAVEX', '.WVE', '.XI', '.MP3', '.OPUS', '.zip'] ```
6,443
https://github.com/huggingface/datasets/issues/6442
Trouble loading image folder with additional features - metadata file ignored
[ "I reproduced too:\r\n- root: metadata file is ignored (https://huggingface.co/datasets/severo/doc-image-3)\r\n- data/ dir: metadata file is ignored (https://huggingface.co/datasets/severo/doc-image-4)\r\n- train/ dir: works (https://huggingface.co/datasets/severo/doc-image-5)" ]
### Describe the bug Loading image folder with a caption column using `load_dataset(<image_folder_path>)` doesn't load the captions. When loading a local image folder with captions using `datasets==2.13.0` ``` from datasets import load_dataset data = load_dataset(<image_folder_path>) data.column_names ``` yields `{'train': ['image', 'prompt']}` but when using `datasets==2.15.0` yields `{'train': ['image']}` Putting the images and `metadata.jsonl` file into a nested `train` folder **or** loading with `load_dataset("imagefolder", data_dir=<image_folder_path>)` solves the issue and yields `{'train': ['image', 'prompt']}` ### Steps to reproduce the bug 1. create a folder `<image_folder_path>` that contains images and a metadata file with additional features, e.g. "prompt" 2. run: ``` from datasets import load_dataset data = load_dataset("<image_folder_path>") data.column_names ``` ### Expected behavior `{'train': ['image', 'prompt']}` ### Environment info - `datasets` version: 2.15.0 - Platform: Linux-5.15.120+-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.19.4 - PyArrow version: 9.0.0 - Pandas version: 1.5.3 - `fsspec` version: 2023.6.0
6,442
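The two workarounds named in the report above, spelled out as a sketch (paths are placeholders):

```python
from datasets import load_dataset

# Workaround 1: point the imagefolder builder at the directory explicitly.
data = load_dataset("imagefolder", data_dir="/path/to/image_folder")

# Workaround 2: keep the bare-path call, but nest the images and
# metadata.jsonl under a `train/` subdirectory first:
#   /path/to/image_folder/train/{*.jpg, metadata.jsonl}
data = load_dataset("/path/to/image_folder")

print(data.column_names)  # expected: {'train': ['image', 'prompt']}
```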
https://github.com/huggingface/datasets/issues/6441
Trouble Loading a Gated Dataset For User with Granted Permission
[ "> Also when they try to click the url link for the dataset they get a 404 error.\r\n\r\nThis seems to be a Hub error then (cc @SBrandeis)", "Could you report this to /static-proxy?url=https%3A%2F%2Fdiscuss.huggingface.co%2Fc%2Fhub%2F23%2C providing the URL of the dataset, or at least if the dataset is public or private?", "Thanks for the reply! I've created an issue on the hub's board here: /static-proxy?url=https%3A%2F%2Fdiscuss.huggingface.co%2Ft%2Ftrouble-loading-a-gated-dataset-for-user-with-granted-permission%2F65565" ]
### Describe the bug I have granted permissions to several users to access a gated Hugging Face dataset. The users accepted the invite and when trying to load the dataset using their access token they get `FileNotFoundError: Couldn't find a dataset script at .....` . Also when they try to click the URL link for the dataset they get a 404 error. ### Steps to reproduce the bug 1. Grant access to gated dataset for specific users 2. Users accept invitation 3. Users log in to the Hugging Face Hub using `huggingface-cli login` 4. Users run load_dataset ### Expected behavior Dataset is loaded normally for users who were granted access to the gated dataset. ### Environment info datasets==2.15.0
6,441
https://github.com/huggingface/datasets/issues/6440
`.map` not hashing under python 3.9
[ "Tried to upgrade Python to 3.11 - still get this message. A partial solution is to NOT use `num_proc` at all. It will be considerably longer to finish the job.", "Hi! The `model = torch.compile(model)` line is problematic for our hashing logic. We would have to merge https://github.com/huggingface/datasets/pull/5867 to support hashing `torch.compile`-ed models/functions. \r\n\r\nI've started refactoring the hashing logic and plan to incorporate a fix for `torch.compile` as part of it, so this should be addressed soon (probably this or next week). " ]
### Describe the bug The `.map` function cannot hash under Python 3.9. Tried to use [the solution here](https://github.com/huggingface/datasets/issues/4521#issuecomment-1205166653), but still get the same message: `Parameter 'function'=<function map_to_pred at 0x7fa0b49ead30> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.` ### Steps to reproduce the bug ```python def map_to_pred(batch): """ Perform inference on an audio batch Parameters: batch (dict): A dictionary containing audio data and other related information. Returns: dict: The input batch dictionary with added prediction and transcription fields. """ audio = batch['audio'] input_features = processor( audio['array'], sampling_rate=audio['sampling_rate'], return_tensors="pt").input_features input_features = input_features.to('cuda') with torch.no_grad(): predicted_ids = model.generate(input_features) preds = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0] batch['prediction'] = processor.tokenizer._normalize(preds) batch["transcription"] = processor.tokenizer._normalize(batch['transcription']) return batch MODEL_CARD = "openai/whisper-small" MODEL_NAME = MODEL_CARD.rsplit('/', maxsplit=1)[-1] model = WhisperForConditionalGeneration.from_pretrained(MODEL_CARD) processor = AutoProcessor.from_pretrained( MODEL_CARD, language="english", task="transcribe") model = torch.compile(model) dt = load_dataset("audiofolder", data_dir=config['DATA']['dataset'], split="test") dt = dt.cast_column("audio", Audio(sampling_rate=16000)) result = dt.map(map_to_pred, num_proc=16) ``` ### Expected behavior The dataset is hashed and cached, and inference starts ### Environment info - `transformers` version: 4.35.0 - Platform: Linux-5.14.0-284.30.1.el9_2.x86_64-x86_64-with-glibc2.34 - Python version: 3.9.18 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.4.0 - Accelerate version: 0.24.1 - Accelerate config: not found - PyTorch version (GPU?): 2.1.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no
6,440
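Until `torch.compile`-ed models hash cleanly (see the PR linked in the comments), two existing `datasets` knobs can sidestep the warning; `dt` and `map_to_pred` refer to the snippet in the report, and the fingerprint string is an arbitrary illustrative value:

```python
import datasets

# Option 1: opt out of caching entirely for this session.
datasets.disable_caching()
result = dt.map(map_to_pred, num_proc=16)

# Option 2: supply a fingerprint yourself, so the unhashable compiled model
# does not force a random hash (and a cache miss) on every run.
result = dt.map(map_to_pred, num_proc=16, new_fingerprint="whisper-small-run1")
```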
https://github.com/huggingface/datasets/issues/6439
Download + preparation speed of datasets.load_dataset is 20x slower than huggingface hub snapshot and manual loding
[]
### Describe the bug I am working with a dataset I am trying to publish. The path is Antreas/TALI. It's a fairly large dataset, and contains images, video, audio and text. I have been having multiple problems when the dataset is being downloaded using the load_dataset function -- even with 64 workers taking more than 7 days to process. With snapshot download it takes 12 hours, and that includes the dataset preparation done using load_dataset and passing the dataset parquet file paths. Find the script I am using below: ```python import multiprocessing as mp import pathlib from typing import Optional import datasets from rich import print from tqdm import tqdm def download_dataset_via_hub( dataset_name: str, dataset_download_path: pathlib.Path, num_download_workers: int = mp.cpu_count(), ): import huggingface_hub as hf_hub download_folder = hf_hub.snapshot_download( repo_id=dataset_name, repo_type="dataset", cache_dir=dataset_download_path, resume_download=True, max_workers=num_download_workers, ignore_patterns=[], ) return pathlib.Path(download_folder) / "data" def load_dataset_via_hub( dataset_download_path: pathlib.Path, num_download_workers: int = mp.cpu_count(), dataset_name: Optional[str] = None, ): from dataclasses import dataclass, field from datasets import ClassLabel, Features, Image, Sequence, Value dataset_path = download_dataset_via_hub( dataset_download_path=dataset_download_path, num_download_workers=num_download_workers, dataset_name=dataset_name, ) # Building a list of file paths for validation set train_files = [ file.as_posix() for file in pathlib.Path(dataset_path).glob("*.parquet") if "train" in file.as_posix() ] val_files = [ file.as_posix() for file in pathlib.Path(dataset_path).glob("*.parquet") if "val" in file.as_posix() ] test_files = [ file.as_posix() for file in pathlib.Path(dataset_path).glob("*.parquet") if "test" in file.as_posix() ] print( f"Found {len(test_files)} files for testing set, {len(train_files)} for training set and {len(val_files)} for validation set" ) data_files = { "test": test_files, "val": val_files, "train": train_files, } features = Features( { "image": Image( decode=True ), # Set `decode=True` if you want to decode the images, otherwise `decode=False` "image_url": Value("string"), "item_idx": Value("int64"), "wit_features": Sequence( { "attribution_passes_lang_id": Value("bool"), "caption_alt_text_description": Value("string"), "caption_reference_description": Value("string"), "caption_title_and_reference_description": Value("string"), "context_page_description": Value("string"), "context_section_description": Value("string"), "hierarchical_section_title": Value("string"), "is_main_image": Value("bool"), "language": Value("string"), "page_changed_recently": Value("bool"), "page_title": Value("string"), "page_url": Value("string"), "section_title": Value("string"), } ), "wit_idx": Value("int64"), "youtube_title_text": Value("string"), "youtube_description_text": Value("string"), "youtube_video_content": Value("binary"), "youtube_video_starting_time": Value("string"), "youtube_subtitle_text": Value("string"), "youtube_video_size": Value("int64"), "youtube_video_file_path": Value("string"), } ) dataset = datasets.load_dataset( "parquet" if dataset_name is None else dataset_name, data_files=data_files, features=features, num_proc=1, cache_dir=dataset_download_path / "cache", ) return dataset if __name__ == "__main__": dataset_cache = pathlib.Path("/disk/scratch_fast0/tali/") dataset = load_dataset_via_hub(dataset_cache, 
dataset_name="Antreas/TALI")[ "test" ] for sample in tqdm(dataset): print(list(sample.keys())) ``` Also, streaming this dataset has been a very painfully slow process. Streaming the train set takes 15m to start, and streaming the test and val sets takes 3 hours to start! ### Steps to reproduce the bug 1. Run the code I provided to get a sense of how fast snapshot + manual is 2. Run datasets.load_dataset("Antreas/TALI") to get a sense of the speed of that OP. 3. You should now have an appreciation of how long these things take. ### Expected behavior The load dataset function should be at least as fast as the huggingface snapshot download function in terms of downloading dataset files. Not 20 times slower. ### Environment info - `datasets` version: 2.14.5 - Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35 - Python version: 3.10.13 - Huggingface_hub version: 0.17.3 - PyArrow version: 13.0.0 - Pandas version: 2.1.1
6,439
https://github.com/huggingface/datasets/issues/6438
Support GeoParquet
[ "Thank you, @severo ! I would be more than happy to help in any way I can. I am not familiar with this repo's codebase, but I would be eager to contribute. :)\r\n\r\nFor the preview in Datasets Hub, I think it makes sense to just display the geospatial column as text. If there were a dataset loader, though, I think it should be able to support the geospatial components. Geopandas is probably the most user-friendly interface for that. I'm not sure if it's currently relevant in the context of geoparquet, but I think the pyogrio driver is faster than fiona.\r\n\r\nBut the whole gdal dependency thing can be a real pain. If anything, it would need to be an optional dependency. Maybe it would be best if the loader tries importing relevant geospatial libraries, and in the event of an ImportError, falls back to text for the geometry column.\r\n\r\nPlease let me know if I can be of assistance, and thanks again for creating this Issue. :)", "Just hitting into this same issue too showing GeoParquet files in Datasets Viewer. I tried to implement a custom reader for GeoParquet in https://huggingface.co/datasets/weiji14/clay_vector_embeddings/discussions/1, but it seems like HuggingFace has disabled datasets with custom loading scripts from using the dataset viewer according to /static-proxy?url=https%3A%2F%2Fdiscuss.huggingface.co%2Ft%2Fdataset-repo-requires-arbitrary-python-code-execution%2F59346 :frowning_face: \r\n\r\n![image](https://github.com/huggingface/datasets/assets/23487320/2f84d8ce-91c2-48cb-b72c-547ea8583892)\r\n\r\nI'm thinking now if there's a way to simply map files with GeoParquet extensions (*.gpq, *.geoparquet, etc) to use the Parquet reader. Maybe we could allowlist these geoparquet file extensions at https://github.com/huggingface/datasets/blame/0caf91285116ec910f409e82cc6e1f4cff7496e3/src/datasets/packaged_modules/__init__.py#L30-L51? Having the table columns show up would be a quick win.\r\n\r\nLonger term though, it would certainly be nice if the WKB geometry columns could be displayed in a nicer form. Geopandas' [read_parquet](https://geopandas.org/en/v0.14.1/docs/reference/api/geopandas.read_parquet.html) function is supposedly faster than `pyogrio.read_dataframe` according to https://github.com/geopandas/geopandas/discussions/2724#discussioncomment-4606048, but there's also [`pyogrio.raw.read_arrow`](https://pyogrio.readthedocs.io/en/latest/api.html#pyogrio.raw.read_arrow) now that can read into a `pyarrow.Table` directly.", "Update: It looks like renaming the GeoParquet file to have a file extension of `*.parquet` works (see https://huggingface.co/datasets/weiji14/clay_vector_embeddings). HuggingFace's default parquet reader is able to read the GeoParquet file, though the geometry column is of an unknown type:\r\n\r\n![image](https://github.com/huggingface/datasets/assets/23487320/9060c300-d595-4409-9ccb-5e0207396883)\r\n\r\nI've opened a quick PR at #6508 to allow files with a `*.geoparquet` or `*.gpq` extension to be read using the default Parquet reader. Let's see how that goes :smile:", "@joshuasundance-swca, @weiji14, If I'm understanding this correctly, the code below wouldn't be recommended to due to dependency headaches? If that's the case, what solution would there be to see the geometry features for .gpq files in huggingfaceHub? \r\n\r\ncode for dataset_loader.py\r\n```\r\nimport geopandas as gpd\r\n# ... (other imports remain the same)\r\n\r\nclass ClayVectorEmbeddings(datasets.ArrowBasedBuilder):\r\n # ... 
(other parts of the class remain the same)\r\n\r\n def _info(self):\r\n # Read the GeoParquet file to get the schema for the 'geometry' feature\r\n gdf = gpd.read_file(\"path/to/your/geoparquet/file.gpq\") # Replace with your file path\r\n geometry_schema = str(gdf.geometry.dtype)\r\n\r\n return datasets.DatasetInfo(\r\n # This is the description that will appear on the datasets page.\r\n description=\"Clay Vector Embeddings in GeoParquet format.\",\r\n # This defines the different columns of the dataset and their types\r\n features=datasets.Features(\r\n {\r\n \"source_url\": datasets.Value(dtype=\"string\"),\r\n \"date\": datasets.Value(dtype=\"date32\"),\r\n \"embeddings\": datasets.Value(\"string\"),\r\n \"geometry\": datasets.Value(dtype=geometry_schema), # Use the schema read by GeoPandas\r\n # ... (other features)\r\n }\r\n ),\r\n )\r\n\r\n# ... (rest of the script remains the same)\r\n\r\n```", "Hi @mehrdad-es, I'm not sure if HuggingFace would be keen to add `geopandas` to HuggingFace Hub (maybe a question for @severo?). Having a geometry viewer would be an even bigger task, and if you're thinking of a map-viewer, it might involve some redesign of the website UI. Some of my colleagues are working on streamlining GeoParquet visualization from cloud-hosted instances like HuggingFace (see e.g. https://github.com/developmentseed/lonboard/issues/314), and we could definitely come up with something if there's interest.", "I've created https://github.com/huggingface/datasets-server/issues/2416 to discuss the possibility of supporting (vectorial) geospatial columns in the dataset viewer, or in the converted parquet files.\r\n\r\nAt the same time, it would be super interesting to see what is already possible to do with a Hugging Face dataset that hosts geospatial data. \r\n\r\n> Some of my colleagues are working on streamlining GeoParquet visualization from cloud-hosted instances like HuggingFace (see e.g. https://github.com/developmentseed/lonboard/issues/314), and we could definitely come up with something if there's interest.\r\n\r\nIt would be awesome to show this inside a [Space](https://huggingface.co/docs/hub/spaces)." ]
### Feature request Support the GeoParquet format ### Motivation GeoParquet (https://geoparquet.org/) is a common format for sharing vectorial geospatial data on the cloud, along with "traditional" data columns. It would be nice to be able to load this format with datasets, and more generally, in the Datasets Hub (see https://huggingface.co/datasets/joshuasundance/govgis_nov2023-slim-spatial/discussions/1). ### Your contribution I would be happy to help work on a PR (but I don't think I can do one on my own). Also, we have to define what we want to support: - load all the columns, but get the "geospatial" column in text-only mode for now - or, fully support the spatial features, maybe taking inspiration from (or depending upon) https://geopandas.org/en/stable/index.html (which itself depends on https://fiona.readthedocs.io/en/stable/, which requires a local install of https://gdal.org/)
6,438
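For reference, a sketch of what already works without any loader changes: plain `pyarrow` reads a GeoParquet file (the geometry column arrives as WKB bytes), and the resulting table can be wrapped in a `Dataset` directly. Decoding the WKB requires an optional geo library such as `shapely`; the file name is a placeholder:

```python
import pyarrow.parquet as pq

from datasets import Dataset

table = pq.read_table("file.geoparquet")  # geometry column stays as WKB binary
ds = Dataset(table)

# Optional decoding, if shapely is installed:
# from shapely import wkb
# geom = wkb.loads(ds[0]["geometry"])
```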
https://github.com/huggingface/datasets/issues/6437
Problem in training iterable dataset
[ "Has anyone ever encountered this problem before?", "`split_dataset_by_node` doesn't give the exact same number of examples to each node in the case of iterable datasets, though it tries to be as equal as possible. In particular if your dataset is sharded and you have a number of shards that is a factor of the number of workers, then the shards will be evenly distributed among workers. If the shards don't contain the same number of examples, then some workers might end up with more examples than others.\r\n\r\nHowever if you use a Dataset you'll end up with the same amount of data, because we know the length of the dataset we can split it exactly where we want. Also Dataset objects don't load the full dataset in memory; instead it memory maps Arrow files from disk.", "> `split_dataset_by_node` doesn't give the exact same number of examples to each node in the case of iterable datasets, though it tries to be as equal as possible. In particular if your dataset is sharded and you have a number of shards that is a factor of the number of workers, then the shards will be evenly distributed among workers. If the shards don't contain the same number of examples, then some workers might end up with more examples than others.\r\n> \r\n> However if you use a Dataset you'll end up with the same amount of data, because we know the length of the dataset we can split it exactly where we want. Also Dataset objects don't load the full dataset in memory; instead it memory maps Arrow files from disk.\r\n\r\nThanks for your answer! I finally solve it by using the torch.distributed.algorithms.join.Join. I think maybe some rookie like me would face the same question the day after tomorrow hh.", "Great ! Maybe it can be worth having an example that we can include in the docs for other people, did you need anything else than the Join context manager used with the model and optimizer ?", "> Great ! Maybe it can be worth having an example that we can include in the docs for other people, did you need anything else than the Join context manager used with the model and optimizer ?\r\n\r\nI think it's none. I have tried barrier() to solve the problem but I failed. Maybe it's a tool for other situation." ]
### Describe the bug I am using PyTorch DDP (Distributed Data Parallel) to train my model. Since the data is too large to load into memory at once, I am using load_dataset to read the data as an iterable dataset. I have used datasets.distributed.split_dataset_by_node to distribute the dataset. However, I have noticed that this distribution results in different processes having different amounts of data to train on. As a result, when the earliest process finishes training and starts predicting on the test set, other processes are still training, causing the overall training speed to be very slow. ### Steps to reproduce the bug ``` def train(args, model, device, train_loader, optimizer, criterion, epoch, length): model.train() idx_length = 0 for batch_idx, data in enumerate(train_loader): s_time = time.time() X = data['X'] target = data['y'].reshape(-1, 28) X, target = X.to(device), target.to(device) optimizer.zero_grad() output = model(X) loss = criterion(output, target) loss.backward() optimizer.step() idx_length += 1 if batch_idx % args.log_interval == 0: # print('Train Epoch: {} Batch_idx: {} Process: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format( # epoch, batch_idx, torch.distributed.get_rank(), batch_idx * len(X), length / torch.distributed.get_world_size(), # 100. * batch_idx * len( # X) * torch.distributed.get_world_size() / length, loss.item())) print('Train Epoch: {} Batch_idx: {} Process: {} [{}/{} ({:.0f}%)]\t'.format( epoch, batch_idx, torch.distributed.get_rank(), batch_idx * len(X), length / torch.distributed.get_world_size(), 100. * batch_idx * len( X) * torch.distributed.get_world_size() / length)) if args.dry_run: break print('Process %s length: %s time: %s' % (torch.distributed.get_rank(), idx_length, datetime.datetime.now())) train_iterable_dataset = load_dataset("parquet", data_files=data_files, split="train", streaming=True) test_iterable_dataset = load_dataset("parquet", data_files=data_files, split="test", streaming=True) train_iterable_dataset = train_iterable_dataset.map(process_fn) test_iterable_dataset = test_iterable_dataset.map(process_fn) train_iterable_dataset = train_iterable_dataset.map(scale) test_iterable_dataset = test_iterable_dataset.map(scale) train_iterable_dataset = datasets.distributed.split_dataset_by_node(train_iterable_dataset, world_size=world_size, rank=local_rank).shuffle(seed=1234) test_iterable_dataset = datasets.distributed.split_dataset_by_node(test_iterable_dataset, world_size=world_size, rank=local_rank).shuffle(seed=1234) print(torch.distributed.get_rank(), train_iterable_dataset.n_shards, test_iterable_dataset.n_shards) train_kwargs = {'batch_size': args.batch_size} test_kwargs = {'batch_size': args.test_batch_size} if use_cuda: cuda_kwargs = {'num_workers': 3,#ngpus_per_node, 'pin_memory': True, 'shuffle': False} train_kwargs.update(cuda_kwargs) test_kwargs.update(cuda_kwargs) train_loader = torch.utils.data.DataLoader(train_iterable_dataset, **train_kwargs, # sampler=torch.utils.data.distributed.DistributedSampler( # train_iterable_dataset, # num_replicas=ngpus_per_node, # rank=0) ) test_loader = torch.utils.data.DataLoader(test_iterable_dataset, **test_kwargs, # sampler=torch.utils.data.distributed.DistributedSampler( # test_iterable_dataset, # num_replicas=ngpus_per_node, # rank=0) ) for epoch in range(1, args.epochs + 1): start_time = time.time() train_iterable_dataset.set_epoch(epoch) test_iterable_dataset.set_epoch(epoch) train(args, model, device, train_loader, optimizer, criterion, epoch, train_len) test(args, model, device, 
criterion2, test_loader) ``` And here's part of the output: ``` Train Epoch: 1 Batch_idx: 5000 Process: 0 [320000/4710975.0 (7%)] Train Epoch: 1 Batch_idx: 5000 Process: 1 [320000/4710975.0 (7%)] Train Epoch: 1 Batch_idx: 5000 Process: 2 [320000/4710975.0 (7%)] Train Epoch: 1 Batch_idx: 5862 Process: 3 Data_length: 12 coststime: 0.04095172882080078 Train Epoch: 1 Batch_idx: 5862 Process: 0 Data_length: 3 coststime: 0.0751960277557373 Train Epoch: 1 Batch_idx: 5867 Process: 3 Data_length: 49 coststime: 0.0032558441162109375 Train Epoch: 1 Batch_idx: 5872 Process: 1 Data_length: 2 coststime: 0.022842884063720703 Train Epoch: 1 Batch_idx: 5876 Process: 3 Data_length: 63 coststime: 0.002694845199584961 Process 3 length: 5877 time: 2023-11-17 17:03:26.582317 Train epoch 1 costTime: 241.72063446044922s . Process 3 Start to test. 3 0 tensor(45508.8516, device='cuda:3') 3 100 tensor(45309.0469, device='cuda:3') 3 200 tensor(45675.3047, device='cuda:3') 3 300 tensor(45263.0273, device='cuda:3') Process 3 Reduce metrics. Train Epoch: 2 Batch_idx: 0 Process: 3 [0/4710975.0 (0%)] Train Epoch: 1 Batch_idx: 5882 Process: 1 Data_length: 63 coststime: 0.05185818672180176 Train Epoch: 1 Batch_idx: 5887 Process: 1 Data_length: 12 coststime: 0.006895303726196289 Process 1 length: 5888 time: 2023-11-17 17:20:48.578204 Train epoch 1 costTime: 1285.7279663085938s . Process 1 Start to test. 1 0 tensor(45265.9141, device='cuda:1') ``` ### Expected behavior I'd like to know how to fix this problem. ### Environment info ``` torch==2.0 datasets==2.14.0 ```
6,437
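The `Join` fix mentioned in the last comments, as a minimal sketch: wrapping the per-epoch loop lets ranks whose (unevenly split) shards run out early keep participating in collectives until every rank finishes. This assumes `model` is already wrapped in `DistributedDataParallel`; variable names follow the report:

```python
from torch.distributed.algorithms.join import Join

for epoch in range(1, args.epochs + 1):
    train_iterable_dataset.set_epoch(epoch)
    with Join([model]):  # model: torch.nn.parallel.DistributedDataParallel
        for data in train_loader:
            X = data["X"].to(device)
            target = data["y"].reshape(-1, 28).to(device)
            optimizer.zero_grad()
            loss = criterion(model(X), target)
            loss.backward()
            optimizer.step()
```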
https://github.com/huggingface/datasets/issues/6436
TypeError: <lambda>() takes 0 positional arguments but 1 was given
[ "This looks like a problem with your environment rather than `datasets`.", "I meet the same problem,\r\nand originally use\r\n```python\r\nlocale.getpreferredencoding = lambda : \"UTF-8\"\r\n```\r\nand change to\r\n```\r\nlocale.getpreferredencoding = lambda x: \"UTF-8\"\r\n```\r\nand it works." ]
### Describe the bug ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) [<ipython-input-35-7b6becee3685>](https://localhost:8080/#) in <cell line: 1>() ----> 1 from datasets import Dataset 9 frames [/usr/local/lib/python3.10/dist-packages/datasets/__init__.py](https://localhost:8080/#) in <module> 20 __version__ = "2.15.0" 21 ---> 22 from .arrow_dataset import Dataset 23 from .arrow_reader import ReadInstruction 24 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder [/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in <module> 61 import pyarrow.compute as pc 62 from huggingface_hub import CommitOperationAdd, CommitOperationDelete, DatasetCard, DatasetCardData, HfApi ---> 63 from multiprocess import Pool 64 from requests import HTTPError 65 [/usr/local/lib/python3.10/dist-packages/multiprocess/__init__.py](https://localhost:8080/#) in <module> 31 32 import sys ---> 33 from . import context 34 35 # [/usr/local/lib/python3.10/dist-packages/multiprocess/context.py](https://localhost:8080/#) in <module> 4 5 from . import process ----> 6 from . import reduction 7 8 __all__ = () [/usr/local/lib/python3.10/dist-packages/multiprocess/reduction.py](https://localhost:8080/#) in <module> 14 import os 15 try: ---> 16 import dill as pickle 17 except ImportError: 18 import pickle [/usr/local/lib/python3.10/dist-packages/dill/__init__.py](https://localhost:8080/#) in <module> 24 25 ---> 26 from ._dill import ( 27 dump, dumps, load, loads, copy, 28 Pickler, Unpickler, register, pickle, pickles, check, [/usr/local/lib/python3.10/dist-packages/dill/_dill.py](https://localhost:8080/#) in <module> 166 try: 167 from _pyio import open as _open --> 168 PyTextWrapperType = get_file_type('r', buffering=-1, open=_open) 169 PyBufferedRandomType = get_file_type('r+b', buffering=-1, open=_open) 170 PyBufferedReaderType = get_file_type('rb', buffering=-1, open=_open) [/usr/local/lib/python3.10/dist-packages/dill/_dill.py](https://localhost:8080/#) in get_file_type(*args, **kwargs) 154 def get_file_type(*args, **kwargs): 155 open = kwargs.pop("open", __builtin__.open) --> 156 f = open(os.devnull, *args, **kwargs) 157 t = type(f) 158 f.close() [/usr/lib/python3.10/_pyio.py](https://localhost:8080/#) in open(file, mode, buffering, encoding, errors, newline, closefd, opener) 280 return result 281 encoding = text_encoding(encoding) --> 282 text = TextIOWrapper(buffer, encoding, errors, newline, line_buffering) 283 result = text 284 text.mode = mode [/usr/lib/python3.10/_pyio.py](https://localhost:8080/#) in __init__(self, buffer, encoding, errors, newline, line_buffering, write_through) 2043 encoding = "utf-8" 2044 else: -> 2045 encoding = locale.getpreferredencoding(False) 2046 2047 if not isinstance(encoding, str): TypeError: <lambda>() takes 0 positional arguments but 1 was given ``` or ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) [<ipython-input-36-652e886d387f>](https://localhost:8080/#) in <cell line: 1>() ----> 1 import datasets 9 frames [/usr/local/lib/python3.10/dist-packages/datasets/__init__.py](https://localhost:8080/#) in <module> 20 __version__ = "2.15.0" 21 ---> 22 from .arrow_dataset import Dataset 23 from .arrow_reader import ReadInstruction 24 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder 
[/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in <module> 61 import pyarrow.compute as pc 62 from huggingface_hub import CommitOperationAdd, CommitOperationDelete, DatasetCard, DatasetCardData, HfApi ---> 63 from multiprocess import Pool 64 from requests import HTTPError 65 [/usr/local/lib/python3.10/dist-packages/multiprocess/__init__.py](https://localhost:8080/#) in <module> 31 32 import sys ---> 33 from . import context 34 35 # [/usr/local/lib/python3.10/dist-packages/multiprocess/context.py](https://localhost:8080/#) in <module> 4 5 from . import process ----> 6 from . import reduction 7 8 __all__ = () [/usr/local/lib/python3.10/dist-packages/multiprocess/reduction.py](https://localhost:8080/#) in <module> 14 import os 15 try: ---> 16 import dill as pickle 17 except ImportError: 18 import pickle [/usr/local/lib/python3.10/dist-packages/dill/__init__.py](https://localhost:8080/#) in <module> 24 25 ---> 26 from ._dill import ( 27 dump, dumps, load, loads, copy, 28 Pickler, Unpickler, register, pickle, pickles, check, [/usr/local/lib/python3.10/dist-packages/dill/_dill.py](https://localhost:8080/#) in <module> 166 try: 167 from _pyio import open as _open --> 168 PyTextWrapperType = get_file_type('r', buffering=-1, open=_open) 169 PyBufferedRandomType = get_file_type('r+b', buffering=-1, open=_open) 170 PyBufferedReaderType = get_file_type('rb', buffering=-1, open=_open) [/usr/local/lib/python3.10/dist-packages/dill/_dill.py](https://localhost:8080/#) in get_file_type(*args, **kwargs) 154 def get_file_type(*args, **kwargs): 155 open = kwargs.pop("open", __builtin__.open) --> 156 f = open(os.devnull, *args, **kwargs) 157 t = type(f) 158 f.close() [/usr/lib/python3.10/_pyio.py](https://localhost:8080/#) in open(file, mode, buffering, encoding, errors, newline, closefd, opener) 280 return result 281 encoding = text_encoding(encoding) --> 282 text = TextIOWrapper(buffer, encoding, errors, newline, line_buffering) 283 result = text 284 text.mode = mode [/usr/lib/python3.10/_pyio.py](https://localhost:8080/#) in __init__(self, buffer, encoding, errors, newline, line_buffering, write_through) 2043 encoding = "utf-8" 2044 else: -> 2045 encoding = locale.getpreferredencoding(False) 2046 2047 if not isinstance(encoding, str): TypeError: <lambda>() takes 0 positional arguments but 1 was given ``` ### Steps to reproduce the bug `import datasets` on colab ### Expected behavior work fine ### Environment info colab `!pip install datasets`
6,436
https://github.com/huggingface/datasets/issues/6435
Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
[ "[This doc section](https://huggingface.co/docs/datasets/main/en/process#multiprocessing) explains how to modify the script to avoid this error.", "@mariosasko thank you very much, i'll check it", "@mariosasko no it does not\r\n\r\n`Dataset.filter() got an unexpected keyword argument 'with_rank'`" ]
### Describe the bug 1. I ran dataset mapping with `num_proc=6` in it and got this error: `RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method` I can't actually find a way to run multi-GPU dataset mapping. Can you help? ### Steps to reproduce the bug 1. Run SDXL training with `num_proc=6`: https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py ### Expected behavior Should work well ### Environment info 6x A100 SXM, Linux
6,435
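The documented pattern the first comment points to, sketched: switch the start method to "spawn" and let each worker pick a GPU via `with_rank=True`. The function body and data path are illustrative:

```python
import torch
from multiprocess import set_start_method

from datasets import load_dataset

def gpu_map_fn(batch, rank):
    device = f"cuda:{rank % torch.cuda.device_count()}"
    # move the model/tensors to `device` and run the GPU work here (illustrative)
    return batch

if __name__ == "__main__":
    set_start_method("spawn")
    ds = load_dataset("imagefolder", data_dir="/path/to/data", split="train")
    ds = ds.map(
        gpu_map_fn,
        with_rank=True,
        num_proc=torch.cuda.device_count(),
        batched=True,
    )
```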
https://github.com/huggingface/datasets/issues/6432
load_dataset does not load all of the data in my input file
[ "You should use `datasets.load_dataset` instead of `nlp.load_dataset`, as the `nlp` package is outdated.\r\n\r\nIf switching to `datasets.load_dataset` doesn't fix the issue, sharing the JSON file (feel free to replace the data with dummy data) would be nice so that we can reproduce it ourselves." ]
### Describe the bug I have 127 elements in my input dataset. When I call len on the dataset after loading, it only has 124 elements. ### Steps to reproduce the bug train_dataset = nlp.load_dataset(data_args.dataset_path, name=data_args.qg_format, split=nlp.Split.TRAIN) valid_dataset = nlp.load_dataset(data_args.dataset_path, name=data_args.qg_format, split=nlp.Split.VALIDATION) logger.info(len(train_dataset)) logger.info(len(valid_dataset)) Both train and valid input are 127 items. However, they both only load 124 items. The input format is JSON. At the end of the day, I am trying to create .pt files. ### Expected behavior I see all 127 elements in my dataset when calling len ### Environment info Python 3.10. CentOS operating system. nlp==0.40, datasets==2.14.5, transformers==4.26.1
6,432
https://github.com/huggingface/datasets/issues/6422
Allow to choose the `writer_batch_size` when using `save_to_disk`
[ "We have a config variable that controls the batch size in `save_to_disk`:\r\n```python\r\nimport datasets\r\ndatasets.config.DEFAULT_MAX_BATCH_SIZE = <smaller_batch_size>\r\n...\r\nds.save_to_disk(...)\r\n```", "Thank you for your answer!\r\n\r\nFrom what I am reading in `https://github.com/huggingface/datasets/blob/2.14.5/src/datasets/arrow_dataset.py`, every function involved (`select`, `shard`, ...) has a default hardcoded batch size of 1000, as such:\r\n```python\r\ndef select(\r\n self,\r\n indices: Iterable,\r\n keep_in_memory: bool = False,\r\n indices_cache_file_name: Optional[str] = None,\r\n writer_batch_size: Optional[int] = 1000,\r\n new_fingerprint: Optional[str] = None,\r\n ) -> \"Dataset\":\r\n...\r\n```\r\nThen, `ArrowWriter` is instantiated with the specified `writer_batch_size`. In `ArrowWriter`, `writer_batch_size` is set to `datasets.config.DEFAULT_MAX_BATCH_SIZE` if it is `None`(https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_writer.py#L345C14-L345C31). However, in our case, it is already set to 1000 by \"parent\" methods, so it won't happen.\r\n\r\nNevertheless, due to this: \r\n```python\r\ndef _save_to_disk_single(job_id: int, shard: \"Dataset\", fpath: str, storage_options: Optional[dict]):\r\n batch_size = config.DEFAULT_MAX_BATCH_SIZE\r\n...\r\n```\r\nit seems to work. I will use it as such, but it should maybe be added to documentation? And maybe improved in next versions?" ]
### Feature request Add an argument in `save_to_disk` regarding batch size, which would be passed to `shard` and other methods. ### Motivation The `Dataset.save_to_disk` method currently calls `shard` without passing a `writer_batch_size` argument, thus implicitly using the default value (1000). This can result in RAM saturation when using a lot of processes on long text sequences or other modalities, or for specific IO configs. ### Your contribution I would be glad to submit a PR, as long as it does not require extensive test refactoring.
6,422
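The workaround from the comment above, sketched end-to-end; the batch size, shard size, and output path are illustrative values:

```python
import datasets

# save_to_disk's internal writer reads this config value
# (the per-method writer_batch_size=1000 defaults are hardcoded,
# but _save_to_disk_single uses config.DEFAULT_MAX_BATCH_SIZE).
datasets.config.DEFAULT_MAX_BATCH_SIZE = 100

ds.save_to_disk("out_dir", num_proc=8, max_shard_size="500MB")
```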
https://github.com/huggingface/datasets/issues/6417
Bug: LayoutLMv3 finetuning on FUNSD Notebook; Arrow Error
[ "Very strange: `datasets-cli env`\r\n> \r\n> Copy-and-paste the text below in your GitHub issue.\r\n> \r\n> - `datasets` version: 2.9.0\r\n> - Platform: macOS-14.0-arm64-arm-64bit\r\n> - Python version: 3.9.13\r\n> - PyArrow version: 8.0.0\r\n> - Pandas version: 1.3.5\r\n\r\nAfter updating datasets and pyarrow on base environment, although I am using a different one called layoutLM\r\n\r\n> Copy-and-paste the text below in your GitHub issue.\r\n> \r\n> - `datasets` version: 2.14.6\r\n> - Platform: macOS-14.0-arm64-arm-64bit\r\n> - Python version: 3.9.18\r\n> - Huggingface_hub version: 0.17.3\r\n> - PyArrow version: 14.0.1\r\n> - Pandas version: 2.1.3", "Hi! The latest (patch) release (published a few hours ago) includes a fix for this [PyArrow security issue](https://github.com/advisories/GHSA-5wvp-7f3h-6wmm). To install it, run `pip install -U datasets`.", "> Hi! The latest (patch) release (published a few hours ago) includes a fix for this [PyArrow security issue](https://github.com/advisories/GHSA-5wvp-7f3h-6wmm). To install it, run `pip install -U datasets`.\r\n\r\nThanks for the info and the latest release, it seems this has also solved my issue. First run after the update worked and I am training right now :D\r\nWill close the Issu" ]
### Describe the bug Arrow issues when running the example notebook locally on a Mac with M1. Works on Google Colab. **Notebook**: https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv3/Fine_tune_LayoutLMv3_on_FUNSD_(HuggingFace_Trainer).ipynb **Error**: `ValueError: Arrow type extension<arrow.py_extension_type<pyarrow.lib.UnknownExtensionType>> does not have a datasets dtype equivalent.` **Caused by**: ``` # we need to define custom features for `set_format` (used later on) to work properly features = Features({ 'pixel_values': Array3D(dtype="float32", shape=(3, 224, 224)), 'input_ids': Sequence(feature=Value(dtype='int64')), 'attention_mask': Sequence(Value(dtype='int64')), 'bbox': Array2D(dtype="int64", shape=(512, 4)), 'labels': Sequence(feature=Value(dtype='int64')), }) ``` ### Steps to reproduce the bug Run the notebook provided, locally. If possible also on M1. ### Expected behavior The cell where features are mapped to Array2D and Array3D should work without any issues. ### Environment info Tried with Python 3.9 and 3.10 conda envs. Running Mac M1. `pip show datasets` > Name: datasets Version: 2.14.6 Summary: HuggingFace community-driven open-source library of datasets `pip list` > Package Version > ------------------------- ------------ > accelerate 0.24.1 > aiohttp 3.8.6 > aiosignal 1.3.1 > anyio 3.5.0 > appnope 0.1.2 > argon2-cffi 21.3.0 > argon2-cffi-bindings 21.2.0 > asttokens 2.0.5 > async-timeout 4.0.3 > attrs 23.1.0 > backcall 0.2.0 > beautifulsoup4 4.12.2 > bleach 4.1.0 > certifi 2023.7.22 > cffi 1.15.1 > charset-normalizer 3.3.2 > comm 0.1.2 > datasets 2.14.6 > debugpy 1.6.7 > decorator 5.1.1 > defusedxml 0.7.1 > dill 0.3.7 > entrypoints 0.4 > exceptiongroup 1.0.4 > executing 0.8.3 > fastjsonschema 2.16.2 > filelock 3.13.1 > frozenlist 1.4.0 > fsspec 2023.10.0 > huggingface-hub 0.17.3 > idna 3.4 > importlib-metadata 6.0.0 > IProgress 0.4 > ipykernel 6.25.0 > ipython 8.15.0 > ipython-genutils 0.2.0 > jedi 0.18.1 > Jinja2 3.1.2 > joblib 1.3.2 > jsonschema 4.19.2 > jsonschema-specifications 2023.7.1 > jupyter_client 7.4.9 > jupyter_core 5.5.0 > jupyter-server 1.23.4 > jupyterlab-pygments 0.1.2 > MarkupSafe 2.1.1 > matplotlib-inline 0.1.6 > mistune 2.0.4 > mpmath 1.3.0 > multidict 6.0.4 > multiprocess 0.70.15 > nbclassic 1.0.0 > nbclient 0.8.0 > nbconvert 7.10.0 > nbformat 5.9.2 > nest-asyncio 1.5.6 > networkx 3.2.1 > notebook 6.5.4 > notebook_shim 0.2.3 > numpy 1.26.1 > packaging 23.1 > pandas 2.1.3 > pandocfilters 1.5.0 > parso 0.8.3 > pexpect 4.8.0 > pickleshare 0.7.5 > Pillow 10.1.0 > pip 23.3 > platformdirs 3.10.0 > prometheus-client 0.14.1 > prompt-toolkit 3.0.36 > psutil 5.9.0 > ptyprocess 0.7.0 > pure-eval 0.2.2 > pyarrow 14.0.1 > pycparser 2.21 > Pygments 2.15.1 > python-dateutil 2.8.2 > pytz 2023.3.post1 > PyYAML 6.0.1 > pyzmq 23.2.0 > referencing 0.30.2 > regex 2023.10.3 > requests 2.31.0 > rpds-py 0.10.6 > safetensors 0.4.0 > scikit-learn 1.3.2 > scipy 1.11.3 > Send2Trash 1.8.2 > seqeval 1.2.2 > setuptools 68.0.0 > six 1.16.0 > sniffio 1.2.0 > soupsieve 2.5 > stack-data 0.2.0 > sympy 1.12 > terminado 0.17.1 > threadpoolctl 3.2.0 > tinycss2 1.2.1 > tokenizers 0.14.1 > torch 2.1.0 > tornado 6.3.3 > tqdm 4.66.1 > traitlets 5.7.1 > transformers 4.36.0.dev0 > typing_extensions 4.7.1 > tzdata 2023.3 > urllib3 2.0.7 > wcwidth 0.2.5 > webencodings 0.5.1 > websocket-client 0.58.0 > wheel 0.41.2 > xxhash 3.4.1 > yarl 1.9.2 > zipp 3.11.0
6,417
https://github.com/huggingface/datasets/issues/6412
User token is printed out!
[ "Indeed, this is not a good practice. I've opened a PR that removes the token value from the (deprecation) warning." ]
This line prints user token on command line! Is it safe? https://github.com/huggingface/datasets/blob/12ebe695b4748c5a26e08b44ed51955f74f5801d/src/datasets/load.py#L2091
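For context, the linked line interpolates the raw token into a deprecation warning. A safer pattern — a hedged sketch only; the actual PR may simply drop the value entirely, and `redact` here is a hypothetical helper — is to mask the secret before it reaches any log:

```python
import warnings

def redact(secret, keep=4):
    # Keep only a short prefix so the warning stays actionable without
    # leaking the full credential to the console or to log files.
    if not secret:
        return "None"
    return secret[:keep] + "*" * max(0, len(secret) - keep)

token = "hf_abcdefghijklmnop"  # hypothetical placeholder value
warnings.warn(
    f"'use_auth_token' is deprecated; pass 'token={redact(token)}' instead.",
    FutureWarning,
)
```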
6,412
https://github.com/huggingface/datasets/issues/6410
Datasets does not load HuggingFace Repository properly
[ "Hi! You can avoid the error by requesting only the `jsonl` files. `dataset = load_dataset(\"ai4privacy/pii-masking-200k\", data_files=[\"*.jsonl\"])`.\r\n\r\nOur data file inference does not filter out (incompatible) `json` files because `json` and `jsonl` use the same builder. Still, I think the inference should differentiate these extensions because it's safe to assume that loading them together will lead to an error. WDYT @lhoestq? ", "Raising an error if there is a mix of json and jsonl in the builder makes sense yea" ]
### Describe the bug Dear Datasets team, We just have published a dataset on Huggingface: https://huggingface.co/ai4privacy However, when trying to read it using the Dataset library we get an error. As I understand jsonl files are compatible, could you please clarify how we can solve the issue? Please let me know and we would be more than happy to adapt the structure of the repository or meta data so it works easier: ```python from datasets import load_dataset dataset = load_dataset("ai4privacy/pii-masking-200k") ``` ``` Downloading readme: 100% 11.8k/11.8k [00:00<00:00, 512kB/s] Downloading data files: 100% 1/1 [00:11<00:00, 11.16s/it] Downloading data: 100% 64.3M/64.3M [00:02<00:00, 32.9MB/s] Downloading data: 100% 113M/113M [00:03<00:00, 35.0MB/s] Downloading data: 100% 97.7M/97.7M [00:02<00:00, 46.1MB/s] Downloading data: 100% 90.8M/90.8M [00:02<00:00, 44.9MB/s] Downloading data: 100% 7.63k/7.63k [00:00<00:00, 41.0kB/s] Downloading data: 100% 1.03k/1.03k [00:00<00:00, 9.44kB/s] Extracting data files: 100% 1/1 [00:00<00:00, 29.26it/s] Generating train split: 209261/0 [00:05<00:00, 41201.25 examples/s] --------------------------------------------------------------------------- ValueError Traceback (most recent call last) [/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id) 1939 ) -> 1940 writer.write_table(table) 1941 num_examples_progress_update += len(table) 8 frames [/usr/local/lib/python3.10/dist-packages/datasets/arrow_writer.py](https://localhost:8080/#) in write_table(self, pa_table, writer_batch_size) 571 pa_table = pa_table.combine_chunks() --> 572 pa_table = table_cast(pa_table, self._schema) 573 if self.embed_local_files: [/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in table_cast(table, schema) 2327 if table.schema != schema: -> 2328 return cast_table_to_schema(table, schema) 2329 elif table.schema.metadata != schema.metadata: [/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in cast_table_to_schema(table, schema) 2285 if sorted(table.column_names) != sorted(features): -> 2286 raise ValueError(f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match") 2287 arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()] ValueError: Couldn't cast JOBTYPE: int64 PHONEIMEI: int64 ACCOUNTNAME: int64 VEHICLEVIN: int64 GENDER: int64 CURRENCYCODE: int64 CREDITCARDISSUER: int64 JOBTITLE: int64 SEX: int64 CURRENCYSYMBOL: int64 IP: int64 EYECOLOR: int64 MASKEDNUMBER: int64 SECONDARYADDRESS: int64 JOBAREA: int64 ACCOUNTNUMBER: int64 language: string BITCOINADDRESS: int64 MAC: int64 SSN: int64 EMAIL: int64 ETHEREUMADDRESS: int64 DOB: int64 VEHICLEVRM: int64 IPV6: int64 AMOUNT: int64 URL: int64 PHONENUMBER: int64 PIN: int64 TIME: int64 CREDITCARDNUMBER: int64 FIRSTNAME: int64 IBAN: int64 BIC: int64 COUNTY: int64 STATE: int64 LASTNAME: int64 ZIPCODE: int64 HEIGHT: int64 ORDINALDIRECTION: int64 MIDDLENAME: int64 STREET: int64 USERNAME: int64 CURRENCY: int64 PREFIX: int64 USERAGENT: int64 CURRENCYNAME: int64 LITECOINADDRESS: int64 CREDITCARDCVV: int64 AGE: int64 CITY: int64 PASSWORD: int64 BUILDINGNUMBER: int64 IPV4: int64 NEARBYGPSCOORDINATE: int64 DATE: int64 COMPANYNAME: int64 to {'masked_text': Value(dtype='string', id=None), 'unmasked_text': Value(dtype='string', id=None), 'privacy_mask': Value(dtype='string', id=None), 'span_labels': 
Value(dtype='string', id=None), 'bio_labels': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'tokenised_text': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)} because column names don't match The above exception was the direct cause of the following exception: DatasetGenerationError Traceback (most recent call last) [<ipython-input-2-f1c6811e9c83>](https://localhost:8080/#) in <cell line: 3>() 1 from datasets import load_dataset 2 ----> 3 dataset = load_dataset("ai4privacy/pii-masking-200k") [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs) 2151 2152 # Download and prepare data -> 2153 builder_instance.download_and_prepare( 2154 download_config=download_config, 2155 download_mode=download_mode, [/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs) 952 if num_proc is not None: 953 prepare_split_kwargs["num_proc"] = num_proc --> 954 self._download_and_prepare( 955 dl_manager=dl_manager, 956 verification_mode=verification_mode, [/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs) 1047 try: 1048 # Prepare split will record examples associated to the split -> 1049 self._prepare_split(split_generator, **prepare_split_kwargs) 1050 except OSError as e: 1051 raise OSError( [/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split(self, split_generator, file_format, num_proc, max_shard_size) 1811 job_id = 0 1812 with pbar: -> 1813 for job_id, done, content in self._prepare_split_single( 1814 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args 1815 ): [/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id) 1956 if isinstance(e, SchemaInferenceError) and e.__context__ is not None: 1957 e = e.__context__ -> 1958 raise DatasetGenerationError("An error occurred while generating the dataset") from e 1959 1960 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths) DatasetGenerationError: An error occurred while generating the dataset ``` Thank you and have a great day ahead ### Steps to reproduce the bug Open Google Colab Notebook: Run command: !pip3 install datasets Run code: from datasets import load_dataset dataset = load_dataset("ai4privacy/pii-masking-200k") ### Expected behavior Download the dataset successfully from HuggingFace to the notebook so that we can start working with it ### Environment info - `datasets` version: 2.14.6 - Platform: Linux-5.15.120+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.19.1 - PyArrow version: 9.0.0 - Pandas version: 1.5.3
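As the comments above suggest, the loader mixes the `jsonl` shards with incompatible `json` sidecar files that have a different schema; restricting `data_files` keeps them out of the builder. Spelled out as a runnable sketch:

```python
from datasets import load_dataset

# Requesting only the jsonl shards means the stray json files (whose
# columns don't match) are never fed into the same json builder.
dataset = load_dataset("ai4privacy/pii-masking-200k", data_files=["*.jsonl"])
```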
6,410
https://github.com/huggingface/datasets/issues/6409
using DownloadManager to download from local filesystem and disable_progress_bar, there will be an exception
[]
### Describe the bug I'm using datasets.download.download_manager.DownloadManager to download files like "file:///a/b/c.txt", and I call disable_progress_bar() to disable the progress bar. This raises the following exception: `AttributeError: 'function' object has no attribute 'close' Exception ignored in: <function TqdmCallback.__del__ at 0x7fa8683d84c0> Traceback (most recent call last): File "/home/protoss.gao/.local/lib/python3.9/site-packages/fsspec/callbacks.py", line 233, in __del__ self.tqdm.close()` I checked your source code in datasets/utils/file_utils.py:348, where you define TqdmCallback as deriving from fsspec.callbacks.TqdmCallback. But in the newest fsspec code [https://github.com/fsspec/filesystem_spec/blob/master/fsspec/callbacks.py](url), line 146, _DEFAULT_CALLBACK takes effect in this case, yet line 234 calls a "close()" function that _DEFAULT_CALLBACK does not have. So I think the class "TqdmCallback" in datasets/utils/file_utils.py should override the "__del__" function, or this bug should be reported to fsspec. ### Steps to reproduce the bug As described above. ### Expected behavior No exception. ### Environment info datasets: 2.14.4 python: 3.9 platform: x86_64
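A minimal sketch of the defensive override the reporter proposes (the subclass name is hypothetical; in `datasets` it would live on the `TqdmCallback` in `datasets/utils/file_utils.py`):

```python
import fsspec.callbacks

class SafeTqdmCallback(fsspec.callbacks.TqdmCallback):
    # Hypothetical name for illustration only.
    def __del__(self):
        # When progress bars are disabled, self.tqdm may be a plain
        # function (or absent) with no close() method - which is what
        # triggers the AttributeError above - so guard the call.
        bar = getattr(self, "tqdm", None)
        if bar is not None and hasattr(bar, "close"):
            bar.close()
```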
6,409
https://github.com/huggingface/datasets/issues/6408
`IterableDataset` loses instead of keeping columns when the map function adds columns with names in `remove_columns`
[]
### Describe the bug IterableDataset loses (instead of keeping) columns when the map function adds columns whose names appear in remove_columns; Dataset does not. This may be related to the code below: https://github.com/huggingface/datasets/blob/06c3ffb8d068b6307b247164b10f7c7311cefed4/src/datasets/iterable_dataset.py#L750-L756 ### Steps to reproduce the bug ```python dataset: IterableDataset = load_dataset("Anthropic/hh-rlhf", streaming=True, split="train") column_names = list(next(iter(dataset)).keys()) # ['chosen', 'rejected'] # map_fn will return dict {"chosen": xxx, "rejected": xxx, "prompt": xxx, "history": xxxx} dataset = dataset.map(map_fn, batched=True, remove_columns=column_names) next(iter(dataset)) # output # {'prompt': 'xxx, 'history': xxx} ``` ```python # when load_dataset with streaming=False, the column_names are kept: dataset: Dataset = load_dataset("Anthropic/hh-rlhf", streaming=False, split="train") column_names = list(next(iter(dataset)).keys()) # ['chosen', 'rejected'] # map_fn will return dict {"chosen": xxx, "rejected": xxx, "prompt": xxx, "history": xxxx} dataset = dataset.map(map_fn, batched=True, remove_columns=column_names) next(iter(dataset)) # output # {'prompt': 'xxx, 'history': xxx, "chosen": xxx, "rejected": xxx} ``` ### Expected behavior IterableDataset should keep columns that the map function adds back, even when their names appear in remove_columns ### Environment info datasets==2.14.6
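Until the two behaviors match, one way to sidestep the inconsistency is to list in `remove_columns` only the columns the map function does not re-add — a sketch with a hypothetical `map_fn`:

```python
from datasets import load_dataset

def map_fn(batch):
    # Hypothetical transform: re-emits 'chosen'/'rejected' plus a new column.
    return {
        "chosen": batch["chosen"],
        "rejected": batch["rejected"],
        "prompt": [text.split("Assistant:")[0] for text in batch["chosen"]],
    }

dataset = load_dataset("Anthropic/hh-rlhf", streaming=True, split="train")
column_names = ["chosen", "rejected"]
re_added = {"chosen", "rejected"}
# Only remove columns the map function does NOT re-add, so the streaming
# pipeline never drops a column that map_fn just produced.
dataset = dataset.map(
    map_fn,
    batched=True,
    remove_columns=[c for c in column_names if c not in re_added],
)
print(next(iter(dataset)).keys())
```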
6,408
https://github.com/huggingface/datasets/issues/6407
Loading the dataset from private S3 bucket gives "TypeError: cannot pickle '_contextvars.Context' object"
[]
### Describe the bug I'm trying to read the parquet file from the private s3 bucket using the `load_dataset` function, but I receive `TypeError: cannot pickle '_contextvars.Context' object` error I'm working on a machine with `~/.aws/credentials` file. I can't give credentials and the path to a file in a private bucket for obvious reasons, but I'll try to give all possible outputs. ### Steps to reproduce the bug ```python import s3fs from datasets import load_dataset from aiobotocore.session import get_session DATA_PATH = "s3://bucket_name/path/validation.parquet" fs = s3fs.S3FileSystem(session=get_session()) ``` `fs.stat` returns the data, so we can say that fs is working and we have all permissions ```python fs.stat(DATA_PATH) # Returns: # {'ETag': '"123123a-19"', # 'LastModified': datetime.datetime(2023, 11, 1, 10, 16, 57, tzinfo=tzutc()), # 'size': 312237170, # 'name': 'bucket_name/path/validation.parquet', # 'type': 'file', # 'StorageClass': 'STANDARD', # 'VersionId': 'Abc.HtmsC9h.as', # 'ContentType': 'binary/octet-stream'} ``` ```python fs.storage_options # Returns: # {'session': <aiobotocore.session.AioSession at 0x7f9193fa53c0>} ``` ```python ds = load_dataset("parquet", data_files={"train": DATA_PATH}, storage_options=fs.storage_options) ``` <details> <summary>Returns such error (expandable)</summary> ```python --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[88], line 1 ----> 1 ds = load_dataset("parquet", data_files={"train": DATA_PATH}, storage_options=fs.storage_options) File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/load.py:2153, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs) 2150 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES 2152 # Download and prepare data -> 2153 builder_instance.download_and_prepare( 2154 download_config=download_config, 2155 download_mode=download_mode, 2156 verification_mode=verification_mode, 2157 try_from_hf_gcs=try_from_hf_gcs, 2158 num_proc=num_proc, 2159 storage_options=storage_options, 2160 ) 2162 # Build dataset for splits 2163 keep_in_memory = ( 2164 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) 2165 ) File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/builder.py:954, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs) 952 if num_proc is not None: 953 prepare_split_kwargs["num_proc"] = num_proc --> 954 self._download_and_prepare( 955 dl_manager=dl_manager, 956 verification_mode=verification_mode, 957 **prepare_split_kwargs, 958 **download_and_prepare_kwargs, 959 ) 960 # Sync info 961 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values()) File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/builder.py:1027, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs) 1025 split_dict = SplitDict(dataset_name=self.dataset_name) 1026 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) -> 1027 split_generators 
= self._split_generators(dl_manager, **split_generators_kwargs) 1029 # Checksums verification 1030 if verification_mode == VerificationMode.ALL_CHECKS and dl_manager.record_checksums: File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py:34, in Parquet._split_generators(self, dl_manager) 32 if not self.config.data_files: 33 raise ValueError(f"At least one data file must be specified, but got data_files={self.config.data_files}") ---> 34 data_files = dl_manager.download_and_extract(self.config.data_files) 35 if isinstance(data_files, (str, list, tuple)): 36 files = data_files File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/download/download_manager.py:565, in DownloadManager.download_and_extract(self, url_or_urls) 549 def download_and_extract(self, url_or_urls): 550 """Download and extract given `url_or_urls`. 551 552 Is roughly equivalent to: (...) 563 extracted_path(s): `str`, extracted paths of given URL(s). 564 """ --> 565 return self.extract(self.download(url_or_urls)) File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/download/download_manager.py:420, in DownloadManager.download(self, url_or_urls) 401 def download(self, url_or_urls): 402 """Download given URL(s). 403 404 By default, only one process is used for download. Pass customized `download_config.num_proc` to change this behavior. (...) 418 ``` 419 """ --> 420 download_config = self.download_config.copy() 421 download_config.extract_compressed_file = False 422 if download_config.download_desc is None: File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/download/download_config.py:94, in DownloadConfig.copy(self) 93 def copy(self) -> "DownloadConfig": ---> 94 return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()}) File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/download/download_config.py:94, in <dictcomp>(.0) 93 def copy(self) -> "DownloadConfig": ---> 94 return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()}) File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil) 144 copier = _deepcopy_dispatch.get(cls) 145 if copier is not None: --> 146 y = copier(x, memo) 147 else: 148 if issubclass(cls, type): File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy) 229 memo[id(x)] = y 230 for key, value in x.items(): --> 231 y[deepcopy(key, memo)] = deepcopy(value, memo) 232 return y File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil) 170 y = x 171 else: --> 172 y = _reconstruct(x, memo, *rv) 174 # If is its own copy, don't memoize. 
175 if y is not x: File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy) 269 if state is not None: 270 if deep: --> 271 state = deepcopy(state, memo) 272 if hasattr(y, '__setstate__'): 273 y.__setstate__(state) File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil) 144 copier = _deepcopy_dispatch.get(cls) 145 if copier is not None: --> 146 y = copier(x, memo) 147 else: 148 if issubclass(cls, type): File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy) 229 memo[id(x)] = y 230 for key, value in x.items(): --> 231 y[deepcopy(key, memo)] = deepcopy(value, memo) 232 return y File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil) 170 y = x 171 else: --> 172 y = _reconstruct(x, memo, *rv) 174 # If is its own copy, don't memoize. 175 if y is not x: File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy) 269 if state is not None: 270 if deep: --> 271 state = deepcopy(state, memo) 272 if hasattr(y, '__setstate__'): 273 y.__setstate__(state) [... skipping similar frames: _deepcopy_dict at line 231 (2 times), deepcopy at line 146 (2 times)] File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil) 170 y = x 171 else: --> 172 y = _reconstruct(x, memo, *rv) 174 # If is its own copy, don't memoize. 175 if y is not x: File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy) 269 if state is not None: 270 if deep: --> 271 state = deepcopy(state, memo) 272 if hasattr(y, '__setstate__'): 273 y.__setstate__(state) [... skipping similar frames: deepcopy at line 146 (1 times)] File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy) 229 memo[id(x)] = y 230 for key, value in x.items(): --> 231 y[deepcopy(key, memo)] = deepcopy(value, memo) 232 return y File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil) 144 copier = _deepcopy_dispatch.get(cls) 145 if copier is not None: --> 146 y = copier(x, memo) 147 else: 148 if issubclass(cls, type): File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:206, in _deepcopy_list(x, memo, deepcopy) 204 append = y.append 205 for a in x: --> 206 append(deepcopy(a, memo)) 207 return y File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil) 170 y = x 171 else: --> 172 y = _reconstruct(x, memo, *rv) 174 # If is its own copy, don't memoize. 
175 if y is not x: File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy) 269 if state is not None: 270 if deep: --> 271 state = deepcopy(state, memo) 272 if hasattr(y, '__setstate__'): 273 y.__setstate__(state) File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil) 144 copier = _deepcopy_dispatch.get(cls) 145 if copier is not None: --> 146 y = copier(x, memo) 147 else: 148 if issubclass(cls, type): File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy) 229 memo[id(x)] = y 230 for key, value in x.items(): --> 231 y[deepcopy(key, memo)] = deepcopy(value, memo) 232 return y File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil) 144 copier = _deepcopy_dispatch.get(cls) 145 if copier is not None: --> 146 y = copier(x, memo) 147 else: 148 if issubclass(cls, type): File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:238, in _deepcopy_method(x, memo) 237 def _deepcopy_method(x, memo): # Copy instance methods --> 238 return type(x)(x.__func__, deepcopy(x.__self__, memo)) File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil) 170 y = x 171 else: --> 172 y = _reconstruct(x, memo, *rv) 174 # If is its own copy, don't memoize. 175 if y is not x: File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy) 269 if state is not None: 270 if deep: --> 271 state = deepcopy(state, memo) 272 if hasattr(y, '__setstate__'): 273 y.__setstate__(state) File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil) 144 copier = _deepcopy_dispatch.get(cls) 145 if copier is not None: --> 146 y = copier(x, memo) 147 else: 148 if issubclass(cls, type): File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy) 229 memo[id(x)] = y 230 for key, value in x.items(): --> 231 y[deepcopy(key, memo)] = deepcopy(value, memo) 232 return y File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil) 144 copier = _deepcopy_dispatch.get(cls) 145 if copier is not None: --> 146 y = copier(x, memo) 147 else: 148 if issubclass(cls, type): File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy) 229 memo[id(x)] = y 230 for key, value in x.items(): --> 231 y[deepcopy(key, memo)] = deepcopy(value, memo) 232 return y File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil) 170 y = x 171 else: --> 172 y = _reconstruct(x, memo, *rv) 174 # If is its own copy, don't memoize. 175 if y is not x: File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy) 269 if state is not None: 270 if deep: --> 271 state = deepcopy(state, memo) 272 if hasattr(y, '__setstate__'): 273 y.__setstate__(state) [... skipping similar frames: _deepcopy_dict at line 231 (3 times), deepcopy at line 146 (3 times), deepcopy at line 172 (3 times), _reconstruct at line 271 (2 times)] File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy) 269 if state is not None: 270 if deep: --> 271 state = deepcopy(state, memo) 272 if hasattr(y, '__setstate__'): 273 y.__setstate__(state) [... 
skipping similar frames: _deepcopy_dict at line 231 (1 times), deepcopy at line 146 (1 times)] File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil) 144 copier = _deepcopy_dispatch.get(cls) 145 if copier is not None: --> 146 y = copier(x, memo) 147 else: 148 if issubclass(cls, type): File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy) 229 memo[id(x)] = y 230 for key, value in x.items(): --> 231 y[deepcopy(key, memo)] = deepcopy(value, memo) 232 return y File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil) 170 y = x 171 else: --> 172 y = _reconstruct(x, memo, *rv) 174 # If is its own copy, don't memoize. 175 if y is not x: File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:265, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy) 263 if deep and args: 264 args = (deepcopy(arg, memo) for arg in args) --> 265 y = func(*args) 266 if deep: 267 memo[id(x)] = y File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:264, in <genexpr>(.0) 262 deep = memo is not None 263 if deep and args: --> 264 args = (deepcopy(arg, memo) for arg in args) 265 y = func(*args) 266 if deep: File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil) 144 copier = _deepcopy_dispatch.get(cls) 145 if copier is not None: --> 146 y = copier(x, memo) 147 else: 148 if issubclass(cls, type): File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:211, in _deepcopy_tuple(x, memo, deepcopy) 210 def _deepcopy_tuple(x, memo, deepcopy=deepcopy): --> 211 y = [deepcopy(a, memo) for a in x] 212 # We're not going to put the tuple in the memo, but it's still important we 213 # check for it, in case the tuple contains recursive mutable structures. 214 try: File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:211, in <listcomp>(.0) 210 def _deepcopy_tuple(x, memo, deepcopy=deepcopy): --> 211 y = [deepcopy(a, memo) for a in x] 212 # We're not going to put the tuple in the memo, but it's still important we 213 # check for it, in case the tuple contains recursive mutable structures. 214 try: File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil) 170 y = x 171 else: --> 172 y = _reconstruct(x, memo, *rv) 174 # If is its own copy, don't memoize. 175 if y is not x: File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy) 269 if state is not None: 270 if deep: --> 271 state = deepcopy(state, memo) 272 if hasattr(y, '__setstate__'): 273 y.__setstate__(state) File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil) 144 copier = _deepcopy_dispatch.get(cls) 145 if copier is not None: --> 146 y = copier(x, memo) 147 else: 148 if issubclass(cls, type): File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:211, in _deepcopy_tuple(x, memo, deepcopy) 210 def _deepcopy_tuple(x, memo, deepcopy=deepcopy): --> 211 y = [deepcopy(a, memo) for a in x] 212 # We're not going to put the tuple in the memo, but it's still important we 213 # check for it, in case the tuple contains recursive mutable structures. 214 try: File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:211, in <listcomp>(.0) 210 def _deepcopy_tuple(x, memo, deepcopy=deepcopy): --> 211 y = [deepcopy(a, memo) for a in x] 212 # We're not going to put the tuple in the memo, but it's still important we 213 # check for it, in case the tuple contains recursive mutable structures. 
214 try: File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil) 144 copier = _deepcopy_dispatch.get(cls) 145 if copier is not None: --> 146 y = copier(x, memo) 147 else: 148 if issubclass(cls, type): File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy) 229 memo[id(x)] = y 230 for key, value in x.items(): --> 231 y[deepcopy(key, memo)] = deepcopy(value, memo) 232 return y File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:161, in deepcopy(x, memo, _nil) 159 reductor = getattr(x, "__reduce_ex__", None) 160 if reductor is not None: --> 161 rv = reductor(4) 162 else: 163 reductor = getattr(x, "__reduce__", None) TypeError: cannot pickle '_contextvars.Context' object ``` </details> ### Expected behavior If I choose to load the file from the public bucket with `anon=True` passed - everything works, so I expected loading from the private bucket to work as well ### Environment info - `datasets` version: 2.14.6 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.10.13 - Huggingface_hub version: 0.19.1 - PyArrow version: 14.0.1 - Pandas version: 1.5.3 - s3fs version: 2023.10.0 - fsspec version: 2023.10.0 - aiobotocore version: 2.7.0
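A workaround that usually avoids the failing deepcopy is to pass plain, picklable credentials in `storage_options` instead of a live `aiobotocore` session — a sketch, assuming `~/.aws/credentials` holds a usable profile:

```python
from datasets import load_dataset

DATA_PATH = "s3://bucket_name/path/validation.parquet"  # placeholder

# Plain strings are picklable, unlike an AioSession that carries a
# _contextvars.Context; s3fs builds its own session from these options.
storage_options = {"profile": "default"}  # or {"key": "...", "secret": "..."}

ds = load_dataset(
    "parquet",
    data_files={"train": DATA_PATH},
    storage_options=storage_options,
)
```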
6,407
https://github.com/huggingface/datasets/issues/6406
CI Build PR Documentation is broken: ImportError: cannot import name 'TypeAliasType' from 'typing_extensions'
[]
Our CI Build PR Documentation is broken. See: https://github.com/huggingface/datasets/actions/runs/6799554060/job/18486828777?pr=6390 ``` ImportError: cannot import name 'TypeAliasType' from 'typing_extensions' ```
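A quick way to confirm the version mismatch in the docs-build environment — hedged: the 4.6.0 floor comes from the `typing_extensions` changelog, not from this CI log:

```python
from importlib.metadata import version

# TypeAliasType first appeared in typing_extensions 4.6.0, so an older
# pin in the docs-build environment reproduces the ImportError above.
print("typing_extensions", version("typing_extensions"))
try:
    from typing_extensions import TypeAliasType  # noqa: F401
    print("TypeAliasType is importable")
except ImportError:
    print("too old - fix with: pip install -U typing_extensions")
```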
6,406
https://github.com/huggingface/datasets/issues/6405
ConfigNamesError on a simple CSV file
[ "The viewer is working now. \r\n\r\nBased on the repo commit history, the bug was due to the incorrect format of the `features` field in the README YAML (`Value` requires `dtype`, e.g., `Value(\"string\")`, but it was not specified)", "Feel free to close the issue", "Oh, OK! Thanks. So, there was no reason to open an issue" ]
See https://huggingface.co/datasets/Nguyendo1999/mmath/discussions/1 ``` Error code: ConfigNamesError Exception: TypeError Message: __init__() missing 1 required positional argument: 'dtype' Traceback: Traceback (most recent call last): File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 65, in compute_config_names_response for config in sorted(get_dataset_config_names(path=dataset, token=hf_token)) File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 351, in get_dataset_config_names dataset_module = dataset_module_factory( File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1512, in dataset_module_factory raise e1 from None File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1489, in dataset_module_factory return HubDatasetModuleFactoryWithoutScript( File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1039, in get_module dataset_infos = DatasetInfosDict.from_dataset_card_data(dataset_card_data) File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/info.py", line 468, in from_dataset_card_data dataset_info = DatasetInfo._from_yaml_dict(dataset_card_data["dataset_info"]) File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/info.py", line 399, in _from_yaml_dict yaml_data["features"] = Features._from_yaml_list(yaml_data["features"]) File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1838, in _from_yaml_list return cls.from_dict(from_yaml_inner(yaml_data)) File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1690, in from_dict obj = generate_from_dict(dic) File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1345, in generate_from_dict return {key: generate_from_dict(value) for key, value in obj.items()} File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1345, in <dictcomp> return {key: generate_from_dict(value) for key, value in obj.items()} File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1353, in generate_from_dict return class_type(**{k: v for k, v in obj.items() if k in field_names}) TypeError: __init__() missing 1 required positional argument: 'dtype' ``` This is the CSV file: https://huggingface.co/datasets/Nguyendo1999/mmath/blob/dbcdd7c2c6fc447f852ec136a7532292802bb46f/math_train.csv
6,405
https://github.com/huggingface/datasets/issues/6403
Cannot import datasets on google colab (python 3.10.12)
[ "You are most likely using an outdated version of `datasets` in the notebook, which can be verified with the `!datasets-cli env` command. You can run `!pip install -U datasets` to update the installation.", "okay, it works! thank you so much! 😄 " ]
### Describe the bug I'm trying a full Colab demo notebook of zero-shot-distillation from https://github.com/huggingface/transformers/tree/main/examples/research_projects/zero-shot-distillation but I got this type of error when importing datasets on my Google Colab (Python version is 3.10.12) ![image](https://github.com/huggingface/datasets/assets/15389235/6f7758a2-681d-4436-87d0-5e557838e368) I found the same problem that was solved in [#3326 ] but it seems to still error on Google Colab. I can't try locally in a Jupyter notebook because my laptop doesn't fulfill the resource requirements. Can anyone please help me solve this problem? Thank you 😅 ### Steps to reproduce the bug Error: ``` --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) [<ipython-input-8-b6e092f83978>](https://localhost:8080/#) in <cell line: 1>() ----> 1 from datasets import load_dataset 2 3 # Print all the available datasets 4 from huggingface_hub import list_datasets 5 print([dataset.id for dataset in list_datasets()]) 6 frames [/usr/lib/python3.10/functools.py](https://localhost:8080/#) in update_wrapper(wrapper, wrapped, assigned, updated) 59 # Issue #17482: set __wrapped__ last so we don't inadvertently copy it 60 # from the wrapped function when updating __dict__ ---> 61 wrapper.__wrapped__ = wrapped 62 # Return the wrapper so this can be used as a decorator via partial() 63 return wrapper AttributeError: readonly attribute ``` ### Expected behavior Run successfully on Google Colab (free) ### Environment info Windows 11 x64, Google Colab free
6,403
https://github.com/huggingface/datasets/issues/6401
dataset = load_dataset("Hyperspace-Technologies/scp-wiki-text") not working
[ "Seems like it's a problem with the dataset, since in the [README](https://huggingface.co/datasets/Hyperspace-Technologies/scp-wiki-text/blob/main/README.md) the validation is not specified. Try cloning the dataset, removing the README (or validation split), and loading it locally/ ", "@VarunNSrivastava thanks brother, working beautiful now\r\n\r\n```\r\nC:\\_Work\\_datasets>py dataset.py\r\nDownloading data files: 100%|████████████████████████████████████████████████████████████████████| 3/3 [00:00<?, ?it/s]\r\nExtracting data files: 100%|████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 599.90it/s]\r\nGenerating train split: 314294 examples [00:00, 1293222.03 examples/s]\r\nGenerating validation split: 120 examples [00:00, 59053.91 examples/s]\r\nGenerating test split: 34922 examples [00:00, 1343275.84 examples/s]\r\n```" ]
### Describe the bug ``` (datasets) mruserbox@guru-X99:/media/10TB_HHD/_LLM_DATASETS$ python dataset.py Downloading readme: 100%|███████████████████████████████████| 360/360 [00:00<00:00, 2.16MB/s] Downloading data: 100%|█████████████████████████████████| 65.1M/65.1M [00:19<00:00, 3.38MB/s] Downloading data: 100%|█████████████████████████████████| 6.35k/6.35k [00:00<00:00, 20.7kB/s] Downloading data: 100%|█████████████████████████████████| 7.29M/7.29M [00:01<00:00, 3.99MB/s] Downloading data files: 100%|██████████████████████████████████| 3/3 [00:21<00:00, 7.14s/it] Extracting data files: 100%|█████████████████████████████████| 3/3 [00:00<00:00, 1624.23it/s] Generating train split: 100%|█████████████| 314294/314294 [00:00<00:00, 668186.58 examples/s] Generating validation split: 120 examples [00:00, 100422.28 examples/s] Generating test split: 100%|████████████████| 34922/34922 [00:00<00:00, 754683.41 examples/s] Traceback (most recent call last): File "/media/10TB_HHD/_LLM_DATASETS/dataset.py", line 3, in <module> dataset = load_dataset("Hyperspace-Technologies/scp-wiki-text") File "/home/mruserbox/miniconda3/envs/datasets/lib/python3.10/site-packages/datasets/load.py", line 2153, in load_dataset builder_instance.download_and_prepare( File "/home/mruserbox/miniconda3/envs/datasets/lib/python3.10/site-packages/datasets/builder.py", line 954, in download_and_prepare self._download_and_prepare( File "/home/mruserbox/miniconda3/envs/datasets/lib/python3.10/site-packages/datasets/builder.py", line 1067, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/home/mruserbox/miniconda3/envs/datasets/lib/python3.10/site-packages/datasets/utils/info_utils.py", line 93, in verify_splits raise UnexpectedSplits(str(set(recorded_splits) - set(expected_splits))) datasets.utils.info_utils.UnexpectedSplits: {'validation'} ``` ### Steps to reproduce the bug Name: `dataset.py` Code: ``` from datasets import load_dataset dataset = load_dataset("Hyperspace-Technologies/scp-wiki-text") ``` ### Expected behavior Run without errors ### Environment info ``` name: datasets channels: - defaults dependencies: - _libgcc_mutex=0.1=main - _openmp_mutex=5.1=1_gnu - bzip2=1.0.8=h7b6447c_0 - ca-certificates=2023.08.22=h06a4308_0 - ld_impl_linux-64=2.38=h1181459_1 - libffi=3.4.4=h6a678d5_0 - libgcc-ng=11.2.0=h1234567_1 - libgomp=11.2.0=h1234567_1 - libstdcxx-ng=11.2.0=h1234567_1 - libuuid=1.41.5=h5eee18b_0 - ncurses=6.4=h6a678d5_0 - openssl=3.0.12=h7f8727e_0 - python=3.10.13=h955ad1f_0 - readline=8.2=h5eee18b_0 - setuptools=68.0.0=py310h06a4308_0 - sqlite=3.41.2=h5eee18b_0 - tk=8.6.12=h1ccaba5_0 - wheel=0.41.2=py310h06a4308_0 - xz=5.4.2=h5eee18b_0 - zlib=1.2.13=h5eee18b_0 - pip: - aiohttp==3.8.6 - aiosignal==1.3.1 - async-timeout==4.0.3 - attrs==23.1.0 - certifi==2023.7.22 - charset-normalizer==3.3.2 - click==8.1.7 - datasets==2.14.6 - dill==0.3.7 - filelock==3.13.1 - frozenlist==1.4.0 - fsspec==2023.10.0 - huggingface-hub==0.19.0 - idna==3.4 - multidict==6.0.4 - multiprocess==0.70.15 - numpy==1.26.1 - openai==0.27.8 - packaging==23.2 - pandas==2.1.3 - pip==23.3.1 - platformdirs==4.0.0 - pyarrow==14.0.1 - python-dateutil==2.8.2 - pytz==2023.3.post1 - pyyaml==6.0.1 - requests==2.31.0 - six==1.16.0 - tomli==2.0.1 - tqdm==4.66.1 - typer==0.9.0 - typing-extensions==4.8.0 - tzdata==2023.3 - urllib3==2.0.7 - xxhash==3.4.1 - yarl==1.9.2 prefix: /home/mruserbox/miniconda3/envs/datasets ```
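Besides fixing the dataset's README (as the comments suggest), a client-side escape hatch exists — a sketch that skips the safety check rather than repairing the metadata:

```python
from datasets import load_dataset

# 'no_checks' skips comparing the splits recorded in the dataset's
# metadata against the splits actually generated, so the unexpected
# 'validation' split no longer raises UnexpectedSplits.
dataset = load_dataset(
    "Hyperspace-Technologies/scp-wiki-text",
    verification_mode="no_checks",
)
```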
6,401
https://github.com/huggingface/datasets/issues/6400
Safely load datasets by disabling execution of dataset loading script
[ "great idea IMO\r\n\r\nthis could be a `trust_remote_code=True` flag like in transformers. We could also default to loading the Parquet conversion rather than executing code (for dataset repos that have both)", "@julien-c that would be great!", "We added the `trust_remote_code` argument to `load_dataset()` in `datasets` 2.16:\r\n- in the future users will have to pass trust_remote_code=True to use datasets with a script\r\n- for now we just show a warning when a dataset script is used\r\n- we fallback on the Hugging Face Parquet exports when possible (to keep compatibility with old datasets with scripts)\r\n\r\nSo feel free to use `trust_remote_code=False` in the meantime to disable loading from dataset loading scripts :)", "Passing `trust_remote_code=True` explicitly is now mandatory to load a dataset with a script since https://github.com/huggingface/datasets/pull/6954" ]
### Feature request Is there a way to disable execution of a dataset loading script when using `load_dataset`? This is a security vulnerability that could lead to arbitrary code execution. Any suggested workarounds are welcome as well. ### Motivation This is a security vulnerability that could lead to arbitrary code execution. ### Your contribution n/a
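Based on the maintainers' replies above, with `datasets` >= 2.16 this can be opted into today — a minimal usage sketch (the repo id is a placeholder):

```python
from datasets import load_dataset

# Refuses to execute any loading script shipped in the dataset repo;
# only packaged builders (csv, json, parquet, ...) or the Hub's Parquet
# export are used, otherwise loading fails instead of running code.
ds = load_dataset("some_org/some_dataset", trust_remote_code=False)  # hypothetical repo id
```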
6,400
https://github.com/huggingface/datasets/issues/6399
TypeError: Cannot convert pyarrow.lib.ChunkedArray to pyarrow.lib.Array
[ "Seconding encountering this issue." ]
### Describe the bug Hi, I am preprocessing a large custom dataset with numpy arrays. I am running into this TypeError during writing in a dataset.map() function. I've tried decreasing writer batch size, but this error persists. This error does not occur for smaller datasets. Thank you! ### Steps to reproduce the bug Traceback (most recent call last): File "/n/home12/yhwang/.conda/envs/lib/python3.10/site-packages/multiprocess/pool.py", line 125, in worker result = (True, func(*args, **kwds)) File "/n/home12/yhwang/.conda/envs/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1354, in _write_generator_to_queue for i, result in enumerate(func(**kwargs)): File "/n/home12/yhwang/.conda/envs/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 3493, in _map_single writer.write_batch(batch) File "/n/home12/yhwang/.conda/envs/lib/python3.10/site-packages/datasets/arrow_writer.py", line 555, in write_batch arrays.append(pa.array(typed_sequence)) File "pyarrow/array.pxi", line 243, in pyarrow.lib.array File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol File "/n/home12/yhwang/.conda/envs/lib/python3.10/site-packages/datasets/arrow_writer.py", line 184, in __arrow_array__ out = numpy_to_pyarrow_listarray(data) File "/n/home12/yhwang/.conda/envs/lib/python3.10/site-packages/datasets/features/features.py", line 1394, in numpy_to_pyarrow_listarray values = pa.ListArray.from_arrays(offsets, values) File "pyarrow/array.pxi", line 2004, in pyarrow.lib.ListArray.from_arrays TypeError: Cannot convert pyarrow.lib.ChunkedArray to pyarrow.lib.Array ### Expected behavior Type should not be a ChunkedArray ### Environment info datasets v2.14.5 arrow v1.2.3 pyarrow v12.0.1
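The failing call is `pa.ListArray.from_arrays(offsets, values)` receiving a `ChunkedArray` where an `Array` is expected. A hedged sketch of the kind of fix this points at — flattening the chunks first, not the actual patch:

```python
import pyarrow as pa

# A values buffer that arrives as a ChunkedArray, as in the traceback.
values = pa.chunked_array([pa.array([1.0, 2.0]), pa.array([3.0, 4.0])])
offsets = pa.array([0, 2, 4], type=pa.int32())

# pa.ListArray.from_arrays(offsets, values)  # -> the TypeError above

# Concatenating the chunks yields one contiguous Array, which
# from_arrays accepts.
lists = pa.ListArray.from_arrays(offsets, pa.concat_arrays(values.chunks))
print(lists)
```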
6,399
https://github.com/huggingface/datasets/issues/6397
Raise a different exception for a nonexistent dataset vs files without a known extension
[]
See https://github.com/huggingface/datasets-server/issues/2082#issuecomment-1805716557 We have the same error for: - https://huggingface.co/datasets/severo/a_dataset_that_does_not_exist: a dataset that does not exist - https://huggingface.co/datasets/severo/test_files_without_extension: a dataset with files without a known extension ``` >>> import datasets >>> datasets.get_dataset_config_names('severo/a_dataset_that_does_not_exist') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 351, in get_dataset_config_names dataset_module = dataset_module_factory( File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1508, in dataset_module_factory raise FileNotFoundError( FileNotFoundError: Couldn't find a dataset script at /home/slesage/hf/datasets-server/services/worker/severo/a_dataset_that_does_not_exist/a_dataset_that_does_not_exist.py or any data file in the same directory. Couldn't find 'severo/a_dataset_that_does_not_exist' on the Hugging Face Hub either: FileNotFoundError: Dataset 'severo/a_dataset_that_does_not_exist' doesn't exist on the Hub. If the repo is private or gated, make sure to log in with `huggingface-cli login`. >>> datasets.get_dataset_config_names('severo/test_files_without_extension') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 351, in get_dataset_config_names dataset_module = dataset_module_factory( File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1508, in dataset_module_factory raise FileNotFoundError( FileNotFoundError: Couldn't find a dataset script at /home/slesage/hf/datasets-server/services/worker/severo/test_files_without_extension/test_files_without_extension.py or any data file in the same directory. Couldn't find 'severo/test_files_without_extension' on the Hugging Face Hub either: FileNotFoundError: No (supported) data files or dataset script found in severo/test_files_without_extension. ``` To differentiate, we must parse the error message (only the end is different). We should have a different exception for these two errors.
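Until the exceptions diverge, callers are stuck with string matching — a sketch of exactly the parsing this issue wants to make unnecessary (the matched substrings come from the two tracebacks above):

```python
from datasets import get_dataset_config_names

def classify_missing(repo_id):
    try:
        get_dataset_config_names(repo_id)
        return "ok"
    except FileNotFoundError as err:
        message = str(err)
        # Only the tail of the message differs between the two cases today.
        if "doesn't exist on the Hub" in message:
            return "dataset does not exist"
        if "No (supported) data files" in message:
            return "files without a known extension"
        return "other FileNotFoundError"

print(classify_missing("severo/a_dataset_that_does_not_exist"))
print(classify_missing("severo/test_files_without_extension"))
```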
6,397