Datasets:
url
stringlengths 61
61
| repository_url
stringclasses 1
value | labels_url
stringlengths 75
75
| comments_url
stringlengths 70
70
| events_url
stringlengths 68
68
| html_url
stringlengths 49
51
| id
int64 1.24B
2.76B
| node_id
stringlengths 18
19
| number
int64 4.35k
7.35k
| title
stringlengths 1
290
| user
dict | labels
listlengths 0
4
| state
stringclasses 2
values | locked
bool 1
class | assignee
dict | assignees
listlengths 0
3
| milestone
dict | comments
int64 0
49
| created_at
timestamp[ms] | updated_at
timestamp[ms] | closed_at
timestamp[ms] | author_association
stringclasses 4
values | active_lock_reason
null | body
stringlengths 1
47.9k
⌀ | closed_by
dict | reactions
dict | timeline_url
stringlengths 70
70
| performed_via_github_app
null | state_reason
stringclasses 3
values | draft
bool 2
classes | pull_request
dict | is_pull_request
bool 2
classes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/7347 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7347/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7347/comments | https://api.github.com/repos/huggingface/datasets/issues/7347/events | https://github.com/huggingface/datasets/issues/7347 | 2,760,282,339 | I_kwDODunzps6khpDj | 7,347 | Converting Arrow to WebDataset TAR Format for Offline Use | {
"login": "katie312",
"id": 91370128,
"node_id": "MDQ6VXNlcjkxMzcwMTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/91370128?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/katie312",
"html_url": "https://github.com/katie312",
"followers_url": "https://api.github.com/users/katie312/followers",
"following_url": "https://api.github.com/users/katie312/following{/other_user}",
"gists_url": "https://api.github.com/users/katie312/gists{/gist_id}",
"starred_url": "https://api.github.com/users/katie312/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/katie312/subscriptions",
"organizations_url": "https://api.github.com/users/katie312/orgs",
"repos_url": "https://api.github.com/users/katie312/repos",
"events_url": "https://api.github.com/users/katie312/events{/privacy}",
"received_events_url": "https://api.github.com/users/katie312/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 4 | 2024-12-27T01:40:44 | 2024-12-31T17:38:00 | 2024-12-28T15:38:03 | NONE | null | ### Feature request
Hi,
I've downloaded an Arrow-formatted dataset offline using the hugggingface's datasets library by:
```
import json
from datasets import load_dataset
dataset = load_dataset("pixparse/cc3m-wds")
dataset.save_to_disk("./cc3m_1")
```
now I need to convert it to WebDataset's TAR format for offline data ingestion.
Is there a straightforward method to achieve this conversion without an internet connection? Can I simply convert it by
```
tar -cvf
```
btw, when I tried:
```
import webdataset as wds
from huggingface_hub import get_token
from torch.utils.data import DataLoader
hf_token = get_token()
url = "https://huggingface.co/datasets/timm/imagenet-12k-wds/resolve/main/imagenet12k-train-{{0000..1023}}.tar"
url = f"pipe:curl -s -L {url} -H 'Authorization:Bearer {hf_token}'"
dataset = wds.WebDataset(url).decode()
dataset.save_to_disk("./cc3m_webdataset")
```
error occured:
```
AttributeError: 'WebDataset' object has no attribute 'save_to_disk'
```
Thanks a lot!
### Motivation
Converting Arrow to WebDataset TAR Format
### Your contribution
No clue yet | {
"login": "katie312",
"id": 91370128,
"node_id": "MDQ6VXNlcjkxMzcwMTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/91370128?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/katie312",
"html_url": "https://github.com/katie312",
"followers_url": "https://api.github.com/users/katie312/followers",
"following_url": "https://api.github.com/users/katie312/following{/other_user}",
"gists_url": "https://api.github.com/users/katie312/gists{/gist_id}",
"starred_url": "https://api.github.com/users/katie312/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/katie312/subscriptions",
"organizations_url": "https://api.github.com/users/katie312/orgs",
"repos_url": "https://api.github.com/users/katie312/repos",
"events_url": "https://api.github.com/users/katie312/events{/privacy}",
"received_events_url": "https://api.github.com/users/katie312/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7347/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7347/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7346 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7346/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7346/comments | https://api.github.com/repos/huggingface/datasets/issues/7346/events | https://github.com/huggingface/datasets/issues/7346 | 2,758,752,118 | I_kwDODunzps6kbzd2 | 7,346 | OSError: Invalid flatbuffers message. | {
"login": "antecede",
"id": 46232487,
"node_id": "MDQ6VXNlcjQ2MjMyNDg3",
"avatar_url": "https://avatars.githubusercontent.com/u/46232487?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/antecede",
"html_url": "https://github.com/antecede",
"followers_url": "https://api.github.com/users/antecede/followers",
"following_url": "https://api.github.com/users/antecede/following{/other_user}",
"gists_url": "https://api.github.com/users/antecede/gists{/gist_id}",
"starred_url": "https://api.github.com/users/antecede/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/antecede/subscriptions",
"organizations_url": "https://api.github.com/users/antecede/orgs",
"repos_url": "https://api.github.com/users/antecede/repos",
"events_url": "https://api.github.com/users/antecede/events{/privacy}",
"received_events_url": "https://api.github.com/users/antecede/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-12-25T11:38:52 | 2024-12-25T12:03:13 | null | NONE | null | ### Describe the bug
When loading a large 2D data (1000 × 1152) with a large number of (2,000 data in this case) in `load_dataset`, the error message `OSError: Invalid flatbuffers message` is reported.
When only 300 pieces of data of this size (1000 × 1152) are stored, they can be loaded correctly.
When 2,000 2D arrays are stored in each file, about 100 files are generated, each with a file size of about 5-6GB. But when 300 2D arrays are stored in each file, **about 600 files are generated, which is too many files**.
### Steps to reproduce the bug
error:
```python
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
Cell In[2], line 4
1 from datasets import Dataset
2 from datasets import load_dataset
----> 4 real_dataset = load_dataset("arrow", data_files='tensorData/real_ResidueTensor/*', split="train")#.with_format("torch") # , split="train"
5 # sim_dataset = load_dataset("arrow", data_files='tensorData/sim_ResidueTensor/*', split="train").with_format("torch")
6 real_dataset
File [~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/datasets/load.py:2151](http://localhost:8899/lab/tree/RTC%3Anew_world/esm3/~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/datasets/load.py#line=2150), in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)
2148 return builder_instance.as_streaming_dataset(split=split)
2150 # Download and prepare data
-> 2151 builder_instance.download_and_prepare(
2152 download_config=download_config,
2153 download_mode=download_mode,
2154 verification_mode=verification_mode,
2155 num_proc=num_proc,
2156 storage_options=storage_options,
2157 )
2159 # Build dataset for splits
2160 keep_in_memory = (
2161 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
2162 )
File [~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/datasets/builder.py:924](http://localhost:8899/lab/tree/RTC%3Anew_world/esm3/~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/datasets/builder.py#line=923), in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, dl_manager, base_path, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
922 if num_proc is not None:
923 prepare_split_kwargs["num_proc"] = num_proc
--> 924 self._download_and_prepare(
925 dl_manager=dl_manager,
926 verification_mode=verification_mode,
927 **prepare_split_kwargs,
928 **download_and_prepare_kwargs,
929 )
930 # Sync info
931 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File [~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/datasets/builder.py:978](http://localhost:8899/lab/tree/RTC%3Anew_world/esm3/~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/datasets/builder.py#line=977), in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
976 split_dict = SplitDict(dataset_name=self.dataset_name)
977 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 978 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
980 # Checksums verification
981 if verification_mode == VerificationMode.ALL_CHECKS and dl_manager.record_checksums:
File [~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/datasets/packaged_modules/arrow/arrow.py:47](http://localhost:8899/lab/tree/RTC%3Anew_world/esm3/~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/datasets/packaged_modules/arrow/arrow.py#line=46), in Arrow._split_generators(self, dl_manager)
45 with open(file, "rb") as f:
46 try:
---> 47 reader = pa.ipc.open_stream(f)
48 except pa.lib.ArrowInvalid:
49 reader = pa.ipc.open_file(f)
File [~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/pyarrow/ipc.py:190](http://localhost:8899/lab/tree/RTC%3Anew_world/esm3/~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/pyarrow/ipc.py#line=189), in open_stream(source, options, memory_pool)
171 def open_stream(source, *, options=None, memory_pool=None):
172 """
173 Create reader for Arrow streaming format.
174
(...)
188 A reader for the given source
189 """
--> 190 return RecordBatchStreamReader(source, options=options,
191 memory_pool=memory_pool)
File [~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/pyarrow/ipc.py:52](http://localhost:8899/lab/tree/RTC%3Anew_world/esm3/~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/pyarrow/ipc.py#line=51), in RecordBatchStreamReader.__init__(self, source, options, memory_pool)
50 def __init__(self, source, *, options=None, memory_pool=None):
51 options = _ensure_default_ipc_read_options(options)
---> 52 self._open(source, options=options, memory_pool=memory_pool)
File [~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/pyarrow/ipc.pxi:1006](http://localhost:8899/lab/tree/RTC%3Anew_world/esm3/~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/pyarrow/ipc.pxi#line=1005), in pyarrow.lib._RecordBatchStreamReader._open()
File [~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/pyarrow/error.pxi:155](http://localhost:8899/lab/tree/RTC%3Anew_world/esm3/~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/pyarrow/error.pxi#line=154), in pyarrow.lib.pyarrow_internal_check_status()
File [~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/pyarrow/error.pxi:92](http://localhost:8899/lab/tree/RTC%3Anew_world/esm3/~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/pyarrow/error.pxi#line=91), in pyarrow.lib.check_status()
OSError: Invalid flatbuffers message.
```
reproduce:Here is just an example result, the real 2D matrix is the output of the ESM large model, and the matrix size is approximate
```python
import numpy as np
import pyarrow as pa
random_arrays_list = [np.random.rand(1000, 1152) for _ in range(2000)]
table = pa.Table.from_pydict({
'tensor': [tensor.tolist() for tensor in random_arrays_list]
})
import pyarrow.feather as feather
feather.write_feather(table, 'test.arrow')
from datasets import load_dataset
dataset = load_dataset("arrow", data_files='test.arrow', split="train")
```
### Expected behavior
`load_dataset` load the dataset as normal as `feather.read_feather`
```python
import pyarrow.feather as feather
feather.read_feather('tensorData/real_ResidueTensor/real_tensor_1.arrow')
```
Plus `load_dataset("parquet", data_files='test.arrow', split="train")` works fine
### Environment info
- `datasets` version: 3.2.0
- Platform: Linux-6.8.0-49-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- `huggingface_hub` version: 0.26.5
- PyArrow version: 18.1.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0
| null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7346/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7346/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7345 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7345/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7345/comments | https://api.github.com/repos/huggingface/datasets/issues/7345/events | https://github.com/huggingface/datasets/issues/7345 | 2,758,585,709 | I_kwDODunzps6kbK1t | 7,345 | Different behaviour of IterableDataset.map vs Dataset.map with remove_columns | {
"login": "vttrifonov",
"id": 12157034,
"node_id": "MDQ6VXNlcjEyMTU3MDM0",
"avatar_url": "https://avatars.githubusercontent.com/u/12157034?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vttrifonov",
"html_url": "https://github.com/vttrifonov",
"followers_url": "https://api.github.com/users/vttrifonov/followers",
"following_url": "https://api.github.com/users/vttrifonov/following{/other_user}",
"gists_url": "https://api.github.com/users/vttrifonov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vttrifonov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vttrifonov/subscriptions",
"organizations_url": "https://api.github.com/users/vttrifonov/orgs",
"repos_url": "https://api.github.com/users/vttrifonov/repos",
"events_url": "https://api.github.com/users/vttrifonov/events{/privacy}",
"received_events_url": "https://api.github.com/users/vttrifonov/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-12-25T07:36:48 | 2024-12-25T07:36:48 | null | NONE | null | ### Describe the bug
The following code
```python
import datasets as hf
ds1 = hf.Dataset.from_list([{'i': i} for i in [0,1]])
#ds1 = ds1.to_iterable_dataset()
ds2 = ds1.map(
lambda i: {'i': i+1},
input_columns = ['i'],
remove_columns = ['i']
)
list(ds2)
```
produces
```python
[{'i': 1}, {'i': 2}]
```
as expected. If the line that converts `ds1` to iterable is uncommented so that the `ds2` is a map of an `IterableDataset`, the result is
```python
[{},{}]
```
I expected the output to be the same as before. It seems that in the second case the removed column is not added back into the output.
The issue seems to be [here](https://github.com/huggingface/datasets/blob/6c6a82a573f946c4a81069f56446caed15cee9c2/src/datasets/iterable_dataset.py#L1093): the columns are removed after the mapping which is not what we want (or what the [documentation says](https://github.com/huggingface/datasets/blob/6c6a82a573f946c4a81069f56446caed15cee9c2/src/datasets/iterable_dataset.py#L2370)) because we want the columns removed from the transformed example but then added if the map produced them.
This is `datasets==3.2.0` and `python==3.10`
### Steps to reproduce the bug
see above
### Expected behavior
see above
### Environment info
see above | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7345/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7345/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7344 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7344/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7344/comments | https://api.github.com/repos/huggingface/datasets/issues/7344/events | https://github.com/huggingface/datasets/issues/7344 | 2,754,735,951 | I_kwDODunzps6kMe9P | 7,344 | HfHubHTTPError: 429 Client Error: Too Many Requests for URL when trying to access SlimPajama-627B or c4 on TPUs | {
"login": "clankur",
"id": 9397233,
"node_id": "MDQ6VXNlcjkzOTcyMzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9397233?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clankur",
"html_url": "https://github.com/clankur",
"followers_url": "https://api.github.com/users/clankur/followers",
"following_url": "https://api.github.com/users/clankur/following{/other_user}",
"gists_url": "https://api.github.com/users/clankur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/clankur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clankur/subscriptions",
"organizations_url": "https://api.github.com/users/clankur/orgs",
"repos_url": "https://api.github.com/users/clankur/repos",
"events_url": "https://api.github.com/users/clankur/events{/privacy}",
"received_events_url": "https://api.github.com/users/clankur/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-12-22T16:30:07 | 2024-12-22T16:30:07 | null | NONE | null | ### Describe the bug
I am trying to run some trainings on Google's TPUs using Huggingface's DataLoader on [SlimPajama-627B](https://huggingface.co/datasets/cerebras/SlimPajama-627B) and [c4](https://huggingface.co/datasets/allenai/c4), but I end up running into `429 Client Error: Too Many Requests for URL` error when I call `load_dataset`. The even odder part is that I am able to sucessfully run trainings with the [wikitext dataset](https://huggingface.co/datasets/Salesforce/wikitext). Is there something I need to setup to specifically train with SlimPajama or C4 with TPUs because I am not clear why I am getting these errors.
### Steps to reproduce the bug
These are the commands you could run to produce the error below but you will require a ClearML account (you can create one [here](https://app.clear.ml/login?redirect=%2Fdashboard)) with a queue setup to run on Google TPUs
```bash
git clone https://github.com/clankur/muGPT.git
cd muGPT
python -m train --config-name=slim_v4-32_84m.yaml +training.queue={NAME_OF_CLEARML_QUEUE}
```
The error I see:
```
Traceback (most recent call last):
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/clearml/binding/hydra_bind.py", line 230, in _patched_task_function
return task_function(a_config, *a_args, **a_kwargs)
File "/home/clankur/.clearml/venvs-builds/3.10/task_repository/muGPT.git/train.py", line 1037, in main
main_contained(config, logger)
File "/home/clankur/.clearml/venvs-builds/3.10/task_repository/muGPT.git/train.py", line 840, in main_contained
loader = get_loader("train", config.training_data, config.training.tokens)
File "/home/clankur/.clearml/venvs-builds/3.10/task_repository/muGPT.git/input_loader.py", line 549, in get_loader
return HuggingFaceDataLoader(split, config, token_batch_params)
File "/home/clankur/.clearml/venvs-builds/3.10/task_repository/muGPT.git/input_loader.py", line 395, in __init__
self.dataset = load_dataset(
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/load.py", line 2112, in load_dataset
builder_instance = load_dataset_builder(
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/load.py", line 1798, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/load.py", line 1495, in dataset_module_factory
raise e1 from None
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/load.py", line 1479, in dataset_module_factory
).get_module()
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/load.py", line 1034, in get_module
else get_data_patterns(base_path, download_config=self.download_config)
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/data_files.py", line 457, in get_data_patterns
return _get_data_files_patterns(resolver)
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/data_files.py", line 248, in _get_data_files_patterns
data_files = pattern_resolver(pattern)
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/data_files.py", line 340, in resolve_pattern
for filepath, info in fs.glob(pattern, detail=True).items()
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 409, in glob
return super().glob(path, **kwargs)
File "/home/clankur/.clearml/venvs-builds/3.10/lib/python3.10/site-packages/fsspec/spec.py", line 602, in glob
allpaths = self.find(root, maxdepth=depth, withdirs=True, detail=True, **kwargs)
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 429, in find
out = self._ls_tree(path, recursive=True, refresh=refresh, revision=resolved_path.revision, **kwargs)
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 358, in _ls_tree
self._ls_tree(
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 375, in _ls_tree
for path_info in tree:
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 3080, in list_repo_tree
for path_info in paginate(path=tree_url, headers=headers, params={"recursive": recursive, "expand": expand}):
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/huggingface_hub/utils/_pagination.py", line 46, in paginate
hf_raise_for_status(r)
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/huggingface_hub/utils/_http.py", line 477, in hf_raise_for_status
raise _format(HfHubHTTPError, str(e), response) from e
huggingface_hub.errors.HfHubHTTPError: 429 Client Error: Too Many Requests for url: https://huggingface.co/api/datasets/cerebras/SlimPajama-627B/tree/2d0accdd58c5d5511943ca1f5ff0e3eb5e293543?recursive=True&expand=True&cursor=ZXlKbWFXeGxYMjVoYldVaU9pSjBaWE4wTDJOb2RXNXJNUzlsZUdGdGNHeGxYMmh2YkdSdmRYUmZPVFEzTG1wemIyNXNMbnB6ZENKOTo2MjUw (Request ID: Root=1-67673de9-1413900606ede7712b08ef2c;1304c09c-3e69-4222-be14-f10ee709d49c)
maximum queue size reached
Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
```
### Expected behavior
I'd expect the DataLoader to load from the SlimPajama-627B and c4 dataset without issue.
### Environment info
- `datasets` version: 2.14.4
- Platform: Linux-5.8.0-1035-gcp-x86_64-with-glibc2.31
- Python version: 3.10.16
- Huggingface_hub version: 0.26.5
- PyArrow version: 18.1.0
- Pandas version: 2.2.3 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7344/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7344/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7343 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7343/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7343/comments | https://api.github.com/repos/huggingface/datasets/issues/7343/events | https://github.com/huggingface/datasets/issues/7343 | 2,750,525,823 | I_kwDODunzps6j8bF_ | 7,343 | [Bug] Inconsistent behavior of data_files and data_dir in load_dataset method. | {
"login": "JasonCZH4",
"id": 74161960,
"node_id": "MDQ6VXNlcjc0MTYxOTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/74161960?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JasonCZH4",
"html_url": "https://github.com/JasonCZH4",
"followers_url": "https://api.github.com/users/JasonCZH4/followers",
"following_url": "https://api.github.com/users/JasonCZH4/following{/other_user}",
"gists_url": "https://api.github.com/users/JasonCZH4/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JasonCZH4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JasonCZH4/subscriptions",
"organizations_url": "https://api.github.com/users/JasonCZH4/orgs",
"repos_url": "https://api.github.com/users/JasonCZH4/repos",
"events_url": "https://api.github.com/users/JasonCZH4/events{/privacy}",
"received_events_url": "https://api.github.com/users/JasonCZH4/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-12-19T14:31:27 | 2024-12-19T14:31:27 | null | NONE | null | ### Describe the bug
Inconsistent operation of data_files and data_dir in load_dataset method.
### Steps to reproduce the bug
# First
I have three files, named 'train.json', 'val.json', 'test.json'.
Each one has a simple dict `{text:'aaa'}`.
Their path are `/data/train.json`, `/data/val.json`, `/data/test.json`
I load dataset with `data_files` argument:
```py
files = [os.path.join('./data',file) for file in os.listdir('./data')]
ds = load_dataset(
path='json',
data_files=files,)
```
And I get:
```py
DatasetDict({
train: Dataset({
features: ['text'],
num_rows: 3
})
})
```
However, If I load dataset with `data_dir` argument:
```py
ds = load_dataset(
path='json',
data_dir='./data',)
```
And I get:
```py
DatasetDict({
train: Dataset({
features: ['text'],
num_rows: 1
})
validation: Dataset({
features: ['text'],
num_rows: 1
})
test: Dataset({
features: ['text'],
num_rows: 1
})
})
```
Two results are not the same. Their behaviors are not equal, even if the statement [here](https://github.com/huggingface/datasets/blob/d0c152a979d91cc34b605c0298aebc650ab7dd27/src/datasets/load.py#L1790) said that their behaviors are equal.
# Second
If some filename include 'test' while others do not, `load_dataset` only return `test` dataset and others files are **abandoned**.
Given two files named `test.json` and `1.json`
Each one has a simple dict `{text:'aaa'}`.
I load the dataset using:
```py
ds = load_dataset(
path='json',
data_dir='./data',)
```
Only `test` is returned, `1.json` is missing:
```py
DatasetDict({
test: Dataset({
features: ['text'],
num_rows: 1
})
})
```
Things do not change even I manually set `split='train'`
### Expected behavior
1. Fix the above bugs.
2. Although the document says that load_dataset method will `Find which file goes into which split (e.g. train/test) based on file and directory names or on the YAML configuration`, I hope I can manually decide whether to do so. Sometimes users may accidentally put a `test` string in the filename but they just want a single `train` dataset. If the number of files in `data_dir` is huge, it's not easy to find out what cause the second situation metioned above.
### Environment info
datasets==3.2.0
Ubuntu18.84 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7343/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7343/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7342 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7342/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7342/comments | https://api.github.com/repos/huggingface/datasets/issues/7342/events | https://github.com/huggingface/datasets/pull/7342 | 2,749,572,310 | PR_kwDODunzps6FvgcK | 7,342 | Update LICENSE | {
"login": "eliebak",
"id": 97572401,
"node_id": "U_kgDOBdDWMQ",
"avatar_url": "https://avatars.githubusercontent.com/u/97572401?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eliebak",
"html_url": "https://github.com/eliebak",
"followers_url": "https://api.github.com/users/eliebak/followers",
"following_url": "https://api.github.com/users/eliebak/following{/other_user}",
"gists_url": "https://api.github.com/users/eliebak/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eliebak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eliebak/subscriptions",
"organizations_url": "https://api.github.com/users/eliebak/orgs",
"repos_url": "https://api.github.com/users/eliebak/repos",
"events_url": "https://api.github.com/users/eliebak/events{/privacy}",
"received_events_url": "https://api.github.com/users/eliebak/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-12-19T08:17:50 | 2024-12-19T08:44:08 | 2024-12-19T08:44:08 | NONE | null | null | {
"login": "eliebak",
"id": 97572401,
"node_id": "U_kgDOBdDWMQ",
"avatar_url": "https://avatars.githubusercontent.com/u/97572401?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eliebak",
"html_url": "https://github.com/eliebak",
"followers_url": "https://api.github.com/users/eliebak/followers",
"following_url": "https://api.github.com/users/eliebak/following{/other_user}",
"gists_url": "https://api.github.com/users/eliebak/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eliebak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eliebak/subscriptions",
"organizations_url": "https://api.github.com/users/eliebak/orgs",
"repos_url": "https://api.github.com/users/eliebak/repos",
"events_url": "https://api.github.com/users/eliebak/events{/privacy}",
"received_events_url": "https://api.github.com/users/eliebak/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7342/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7342/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7342",
"html_url": "https://github.com/huggingface/datasets/pull/7342",
"diff_url": "https://github.com/huggingface/datasets/pull/7342.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7342.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7341 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7341/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7341/comments | https://api.github.com/repos/huggingface/datasets/issues/7341/events | https://github.com/huggingface/datasets/pull/7341 | 2,745,658,561 | PR_kwDODunzps6FiGlt | 7,341 | minor video docs on how to install | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-12-17T18:06:17 | 2024-12-17T18:11:17 | 2024-12-17T18:11:15 | MEMBER | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7341/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7341/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7341",
"html_url": "https://github.com/huggingface/datasets/pull/7341",
"diff_url": "https://github.com/huggingface/datasets/pull/7341.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7341.patch",
"merged_at": "2024-12-17T18:11:14"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7340 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7340/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7340/comments | https://api.github.com/repos/huggingface/datasets/issues/7340/events | https://github.com/huggingface/datasets/pull/7340 | 2,745,473,274 | PR_kwDODunzps6FhdR2 | 7,340 | don't import soundfile in tests | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-12-17T16:49:55 | 2024-12-17T16:54:04 | 2024-12-17T16:50:24 | MEMBER | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7340/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7340/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7340",
"html_url": "https://github.com/huggingface/datasets/pull/7340",
"diff_url": "https://github.com/huggingface/datasets/pull/7340.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7340.patch",
"merged_at": "2024-12-17T16:50:24"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7339 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7339/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7339/comments | https://api.github.com/repos/huggingface/datasets/issues/7339/events | https://github.com/huggingface/datasets/pull/7339 | 2,745,460,060 | PR_kwDODunzps6FhaTl | 7,339 | Update CONTRIBUTING.md | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-12-17T16:45:25 | 2024-12-17T16:51:36 | 2024-12-17T16:46:30 | MEMBER | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7339/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7339/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7339",
"html_url": "https://github.com/huggingface/datasets/pull/7339",
"diff_url": "https://github.com/huggingface/datasets/pull/7339.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7339.patch",
"merged_at": "2024-12-17T16:46:30"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7337 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7337/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7337/comments | https://api.github.com/repos/huggingface/datasets/issues/7337/events | https://github.com/huggingface/datasets/issues/7337 | 2,744,877,569 | I_kwDODunzps6jm4IB | 7,337 | One or several metadata.jsonl were found, but not in the same directory or in a parent directory of | {
"login": "mst272",
"id": 67250532,
"node_id": "MDQ6VXNlcjY3MjUwNTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/67250532?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mst272",
"html_url": "https://github.com/mst272",
"followers_url": "https://api.github.com/users/mst272/followers",
"following_url": "https://api.github.com/users/mst272/following{/other_user}",
"gists_url": "https://api.github.com/users/mst272/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mst272/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mst272/subscriptions",
"organizations_url": "https://api.github.com/users/mst272/orgs",
"repos_url": "https://api.github.com/users/mst272/repos",
"events_url": "https://api.github.com/users/mst272/events{/privacy}",
"received_events_url": "https://api.github.com/users/mst272/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-12-17T12:58:43 | 2024-12-17T12:58:43 | null | NONE | null | ### Describe the bug
ImageFolder with metadata.jsonl error. I downloaded liuhaotian/LLaVA-CC3M-Pretrain-595K locally from Hugging Face. According to the tutorial in https://huggingface.co/docs/datasets/image_dataset#image-captioning, only put images.zip and metadata.jsonl containing information in the same folder. However, after loading, an error was reported: One or several metadata.jsonl were found, but not in the same directory or in a parent directory of.
The data in my jsonl file is as follows:
> {"id": "GCC_train_002448550", "file_name": "GCC_train_002448550.jpg", "conversations": [{"from": "human", "value": "<image>\nProvide a brief description of the given image."}, {"from": "gpt", "value": "a view of a city , where the flyover was proposed to reduce the increasing traffic on thursday ."}]}
### Steps to reproduce the bug
from datasets import load_dataset
image = load_dataset("imagefolder",data_dir='data/opensource_data')
### Expected behavior
success
### Environment info
datasets==3.2.0 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7337/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7337/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7336 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7336/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7336/comments | https://api.github.com/repos/huggingface/datasets/issues/7336/events | https://github.com/huggingface/datasets/issues/7336 | 2,744,746,456 | I_kwDODunzps6jmYHY | 7,336 | Clarify documentation or Create DatasetCard | {
"login": "August-murr",
"id": 145011209,
"node_id": "U_kgDOCKSyCQ",
"avatar_url": "https://avatars.githubusercontent.com/u/145011209?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/August-murr",
"html_url": "https://github.com/August-murr",
"followers_url": "https://api.github.com/users/August-murr/followers",
"following_url": "https://api.github.com/users/August-murr/following{/other_user}",
"gists_url": "https://api.github.com/users/August-murr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/August-murr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/August-murr/subscriptions",
"organizations_url": "https://api.github.com/users/August-murr/orgs",
"repos_url": "https://api.github.com/users/August-murr/repos",
"events_url": "https://api.github.com/users/August-murr/events{/privacy}",
"received_events_url": "https://api.github.com/users/August-murr/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2024-12-17T12:01:00 | 2024-12-17T12:01:00 | null | NONE | null | ### Feature request
I noticed that you can use a Model Card instead of a Dataset Card when pushing a dataset to the Hub, but this isn’t clearly mentioned in [the docs.](https://huggingface.co/docs/datasets/dataset_card)
- Update the docs to clarify that a Model Card can work for datasets too.
- It might be worth creating a dedicated DatasetCard module, similar to the ModelCard module, for consistency and better support.
Not sure if this belongs here or on the [Hub repo](https://github.com/huggingface/huggingface_hub), but thought I’d bring it up!
### Motivation
I just spent an hour like on [this issue](https://github.com/huggingface/trl/pull/2491) trying to create a `DatasetCard` for a script.
### Your contribution
might later | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7336/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7336/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7335 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7335/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7335/comments | https://api.github.com/repos/huggingface/datasets/issues/7335/events | https://github.com/huggingface/datasets/issues/7335 | 2,743,437,260 | I_kwDODunzps6jhYfM | 7,335 | Too many open files: '/root/.cache/huggingface/token' | {
"login": "kopyl",
"id": 17604849,
"node_id": "MDQ6VXNlcjE3NjA0ODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/17604849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kopyl",
"html_url": "https://github.com/kopyl",
"followers_url": "https://api.github.com/users/kopyl/followers",
"following_url": "https://api.github.com/users/kopyl/following{/other_user}",
"gists_url": "https://api.github.com/users/kopyl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kopyl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kopyl/subscriptions",
"organizations_url": "https://api.github.com/users/kopyl/orgs",
"repos_url": "https://api.github.com/users/kopyl/repos",
"events_url": "https://api.github.com/users/kopyl/events{/privacy}",
"received_events_url": "https://api.github.com/users/kopyl/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-12-16T21:30:24 | 2024-12-16T21:30:24 | null | NONE | null | ### Describe the bug
I ran this code:
```
from datasets import load_dataset
dataset = load_dataset("common-canvas/commoncatalog-cc-by", cache_dir="/datadrive/datasets/cc", num_proc=1000)
```
And got this error.
Before it was some other file though (lie something...incomplete)
runnting
```
ulimit -n 8192
```
did not help at all.
### Steps to reproduce the bug
Run the code i sent
### Expected behavior
Should be no errors
### Environment info
linux, jupyter lab. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7335/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7335/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7334 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7334/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7334/comments | https://api.github.com/repos/huggingface/datasets/issues/7334/events | https://github.com/huggingface/datasets/issues/7334 | 2,740,266,503 | I_kwDODunzps6jVSYH | 7,334 | TypeError: Value.__init__() missing 1 required positional argument: 'dtype' | {
"login": "kakamond",
"id": 185799756,
"node_id": "U_kgDOCxMUTA",
"avatar_url": "https://avatars.githubusercontent.com/u/185799756?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kakamond",
"html_url": "https://github.com/kakamond",
"followers_url": "https://api.github.com/users/kakamond/followers",
"following_url": "https://api.github.com/users/kakamond/following{/other_user}",
"gists_url": "https://api.github.com/users/kakamond/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kakamond/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kakamond/subscriptions",
"organizations_url": "https://api.github.com/users/kakamond/orgs",
"repos_url": "https://api.github.com/users/kakamond/repos",
"events_url": "https://api.github.com/users/kakamond/events{/privacy}",
"received_events_url": "https://api.github.com/users/kakamond/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-12-15T04:08:46 | 2024-12-15T04:08:46 | null | NONE | null | ### Describe the bug
ds = load_dataset(
"./xxx.py",
name="default",
split="train",
)
The datasets does not support debugging locally anymore...
### Steps to reproduce the bug
```
from datasets import load_dataset
ds = load_dataset(
"./repo.py",
name="default",
split="train",
)
for item in ds:
print(item)
```
It works fine for "username/repo", but it does not work for "./repo.py" when debugging locally...
Running above code template will report TypeError: Value.__init__() missing 1 required positional argument: 'dtype'
### Expected behavior
fix this bug
### Environment info
python 3.10 datasets==2.21 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7334/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7334/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7328 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7328/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7328/comments | https://api.github.com/repos/huggingface/datasets/issues/7328/events | https://github.com/huggingface/datasets/pull/7328 | 2,738,626,593 | PR_kwDODunzps6FKK13 | 7,328 | Fix typo in arrow_dataset | {
"login": "AndreaFrancis",
"id": 5564745,
"node_id": "MDQ6VXNlcjU1NjQ3NDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5564745?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AndreaFrancis",
"html_url": "https://github.com/AndreaFrancis",
"followers_url": "https://api.github.com/users/AndreaFrancis/followers",
"following_url": "https://api.github.com/users/AndreaFrancis/following{/other_user}",
"gists_url": "https://api.github.com/users/AndreaFrancis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AndreaFrancis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AndreaFrancis/subscriptions",
"organizations_url": "https://api.github.com/users/AndreaFrancis/orgs",
"repos_url": "https://api.github.com/users/AndreaFrancis/repos",
"events_url": "https://api.github.com/users/AndreaFrancis/events{/privacy}",
"received_events_url": "https://api.github.com/users/AndreaFrancis/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-12-13T15:17:09 | 2024-12-19T17:10:27 | 2024-12-19T17:10:25 | CONTRIBUTOR | null | null | {
"login": "AndreaFrancis",
"id": 5564745,
"node_id": "MDQ6VXNlcjU1NjQ3NDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5564745?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AndreaFrancis",
"html_url": "https://github.com/AndreaFrancis",
"followers_url": "https://api.github.com/users/AndreaFrancis/followers",
"following_url": "https://api.github.com/users/AndreaFrancis/following{/other_user}",
"gists_url": "https://api.github.com/users/AndreaFrancis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AndreaFrancis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AndreaFrancis/subscriptions",
"organizations_url": "https://api.github.com/users/AndreaFrancis/orgs",
"repos_url": "https://api.github.com/users/AndreaFrancis/repos",
"events_url": "https://api.github.com/users/AndreaFrancis/events{/privacy}",
"received_events_url": "https://api.github.com/users/AndreaFrancis/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7328/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7328/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7328",
"html_url": "https://github.com/huggingface/datasets/pull/7328",
"diff_url": "https://github.com/huggingface/datasets/pull/7328.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7328.patch",
"merged_at": "2024-12-19T17:10:25"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7327 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7327/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7327/comments | https://api.github.com/repos/huggingface/datasets/issues/7327/events | https://github.com/huggingface/datasets/issues/7327 | 2,738,514,909 | I_kwDODunzps6jOmvd | 7,327 | .map() is not caching and ram goes OOM | {
"login": "simeneide",
"id": 7136076,
"node_id": "MDQ6VXNlcjcxMzYwNzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7136076?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/simeneide",
"html_url": "https://github.com/simeneide",
"followers_url": "https://api.github.com/users/simeneide/followers",
"following_url": "https://api.github.com/users/simeneide/following{/other_user}",
"gists_url": "https://api.github.com/users/simeneide/gists{/gist_id}",
"starred_url": "https://api.github.com/users/simeneide/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/simeneide/subscriptions",
"organizations_url": "https://api.github.com/users/simeneide/orgs",
"repos_url": "https://api.github.com/users/simeneide/repos",
"events_url": "https://api.github.com/users/simeneide/events{/privacy}",
"received_events_url": "https://api.github.com/users/simeneide/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-12-13T14:22:56 | 2024-12-13T14:22:56 | null | NONE | null | ### Describe the bug
Im trying to run a fairly simple map that is converting a dataset into numpy arrays. however, it just piles up on memory and doesnt write to disk. Ive tried multiple cache techniques such as specifying the cache dir, setting max mem, +++ but none seem to work. What am I missing here?
### Steps to reproduce the bug
```
from pydub import AudioSegment
import io
import base64
import numpy as np
import os
CACHE_PATH = "/mnt/extdisk/cache" # "/root/.cache/huggingface/"#
os.environ["HF_HOME"] = CACHE_PATH
import datasets
import logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)
# Create a handler for Jupyter notebook
handler = logging.StreamHandler()
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
#datasets.config.IN_MEMORY_MAX_SIZE= 1000#*(2**30) #50 gb
print(datasets.config.HF_CACHE_HOME)
print(datasets.config.HF_DATASETS_CACHE)
# Decode the base64 string into bytes
def convert_mp3_to_audio_segment(example):
"""
example = ds['train'][0]
"""
try:
audio_data_bytes = base64.b64decode(example['audio'])
# Use pydub to load the MP3 audio from the decoded bytes
audio_segment = AudioSegment.from_file(io.BytesIO(audio_data_bytes), format="mp3")
# Resample to 24_000
audio_segment = audio_segment.set_frame_rate(24_000)
audio = {'sampling_rate' : audio_segment.frame_rate,
'array' : np.array(audio_segment.get_array_of_samples(), dtype="float")}
del audio_segment
duration = len(audio['array']) / audio['sampling_rate']
except Exception as e:
logger.warning(f"Failed to convert audio for {example['id']}. Error: {e}")
audio = {'sampling_rate' : 0,
'array' : np.array([]), duration : 0}
return {'audio' : audio, 'duration' : duration}
ds = datasets.load_dataset("NbAiLab/nb_distil_speech_noconcat_stortinget", cache_dir=CACHE_PATH, keep_in_memory=False)
#%%
num_proc=32
ds_processed = (
ds
#.select(range(10))
.map(convert_mp3_to_audio_segment, num_proc=num_proc, desc="Converting mp3 to audio segment") #, cache_file_name=f"{CACHE_PATH}/stortinget_audio" # , cache_file_name="test"
)
```
### Expected behavior
the map should write to disk
### Environment info
- `datasets` version: 3.2.0
- Platform: Linux-6.8.0-45-generic-x86_64-with-glibc2.39
- Python version: 3.12.7
- `huggingface_hub` version: 0.26.3
- PyArrow version: 18.1.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7327/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7327/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7326 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7326/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7326/comments | https://api.github.com/repos/huggingface/datasets/issues/7326/events | https://github.com/huggingface/datasets/issues/7326 | 2,738,188,902 | I_kwDODunzps6jNXJm | 7,326 | Remove upper bound for fsspec | {
"login": "fellhorn",
"id": 26092524,
"node_id": "MDQ6VXNlcjI2MDkyNTI0",
"avatar_url": "https://avatars.githubusercontent.com/u/26092524?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fellhorn",
"html_url": "https://github.com/fellhorn",
"followers_url": "https://api.github.com/users/fellhorn/followers",
"following_url": "https://api.github.com/users/fellhorn/following{/other_user}",
"gists_url": "https://api.github.com/users/fellhorn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fellhorn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fellhorn/subscriptions",
"organizations_url": "https://api.github.com/users/fellhorn/orgs",
"repos_url": "https://api.github.com/users/fellhorn/repos",
"events_url": "https://api.github.com/users/fellhorn/events{/privacy}",
"received_events_url": "https://api.github.com/users/fellhorn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-12-13T11:35:12 | 2024-12-16T11:08:10 | null | NONE | null | ### Describe the bug
As also raised by @cyyever in https://github.com/huggingface/datasets/pull/7296 and @NeilGirdhar in https://github.com/huggingface/datasets/commit/d5468836fe94e8be1ae093397dd43d4a2503b926#commitcomment-140952162 , `datasets` has a problematic version constraint on `fsspec`.
In our case this causes (unnecessary?) troubles due to a race condition bug in that version of the corresponding `gcsfs` plugin, that causes deadlocks: https://github.com/fsspec/gcsfs/pull/643
We just use a version override to ignore the constraint from `datasets`, but imho the version constraint could just be removed in the first place?
The last few PRs bumping the upper bound were basically uneventful:
* https://github.com/huggingface/datasets/pull/7219
* https://github.com/huggingface/datasets/pull/6921
* https://github.com/huggingface/datasets/pull/6747
### Steps to reproduce the bug
-
### Expected behavior
Installing `fsspec>=2024.10.0` along `datasets` should be possible without overwriting constraints.
### Environment info
All recent datasets versions | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7326/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7326/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7325 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7325/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7325/comments | https://api.github.com/repos/huggingface/datasets/issues/7325/events | https://github.com/huggingface/datasets/pull/7325 | 2,736,618,054 | PR_kwDODunzps6FDpMp | 7,325 | Introduce pdf support (#7318) | {
"login": "yabramuvdi",
"id": 4812761,
"node_id": "MDQ6VXNlcjQ4MTI3NjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/4812761?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yabramuvdi",
"html_url": "https://github.com/yabramuvdi",
"followers_url": "https://api.github.com/users/yabramuvdi/followers",
"following_url": "https://api.github.com/users/yabramuvdi/following{/other_user}",
"gists_url": "https://api.github.com/users/yabramuvdi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yabramuvdi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yabramuvdi/subscriptions",
"organizations_url": "https://api.github.com/users/yabramuvdi/orgs",
"repos_url": "https://api.github.com/users/yabramuvdi/repos",
"events_url": "https://api.github.com/users/yabramuvdi/events{/privacy}",
"received_events_url": "https://api.github.com/users/yabramuvdi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-12-12T18:31:18 | 2024-12-19T17:22:51 | null | NONE | null | First implementation of the Pdf feature to support pdfs (#7318) . Using [pdfplumber](https://github.com/jsvine/pdfplumber?tab=readme-ov-file#python-library) as the default library to work with pdfs.
@lhoestq and @AndreaFrancis | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7325/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7325/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7325",
"html_url": "https://github.com/huggingface/datasets/pull/7325",
"diff_url": "https://github.com/huggingface/datasets/pull/7325.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7325.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7323 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7323/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7323/comments | https://api.github.com/repos/huggingface/datasets/issues/7323/events | https://github.com/huggingface/datasets/issues/7323 | 2,736,008,698 | I_kwDODunzps6jFC36 | 7,323 | Unexpected cache behaviour using load_dataset | {
"login": "Moritz-Wirth",
"id": 74349080,
"node_id": "MDQ6VXNlcjc0MzQ5MDgw",
"avatar_url": "https://avatars.githubusercontent.com/u/74349080?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Moritz-Wirth",
"html_url": "https://github.com/Moritz-Wirth",
"followers_url": "https://api.github.com/users/Moritz-Wirth/followers",
"following_url": "https://api.github.com/users/Moritz-Wirth/following{/other_user}",
"gists_url": "https://api.github.com/users/Moritz-Wirth/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Moritz-Wirth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Moritz-Wirth/subscriptions",
"organizations_url": "https://api.github.com/users/Moritz-Wirth/orgs",
"repos_url": "https://api.github.com/users/Moritz-Wirth/repos",
"events_url": "https://api.github.com/users/Moritz-Wirth/events{/privacy}",
"received_events_url": "https://api.github.com/users/Moritz-Wirth/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-12-12T14:03:00 | 2024-12-12T14:18:17 | null | NONE | null | ### Describe the bug
Following the (Cache management)[https://huggingface.co/docs/datasets/en/cache] docu and previous behaviour from datasets version 2.18.0, one is able to change the cache directory. Previously, all downloaded/extracted/etc files were found in this folder. As i have recently update to the latest version this is not the case anymore. Downloaded files are stored in `~/.cache/huggingface/hub`.
Providing the `cache_dir` argument in `load_dataset` the cache directory is created and there are some files but the bulk is still in `~/.cache/huggingface/hub`.
I believe this could be solved by adding the cache_dir argument [here](https://github.com/huggingface/datasets/blob/fdda5585ab18ea1292547f36c969d12c408ab842/src/datasets/utils/file_utils.py#L188)
### Steps to reproduce the bug
For example using https://huggingface.co/datasets/ashraq/esc50:
```python
from datasets import load_dataset
ds = load_dataset("ashraq/esc50", "default", cache_dir="~/custom/cache/path/esc50")
```
### Expected behavior
I would expect the bulk of files related to the dataset to be stored somewhere in `~/custom/cache/path/esc50`, but it seems they are in `~/.cache/huggingface/hub/datasets--ashraq--esc50`.
### Environment info
- `datasets` version: 3.2.0
- Platform: Linux-5.14.0-503.15.1.el9_5.x86_64-x86_64-with-glibc2.34
- Python version: 3.10.14
- `huggingface_hub` version: 0.26.5
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.6.1 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7323/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7323/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7322 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7322/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7322/comments | https://api.github.com/repos/huggingface/datasets/issues/7322/events | https://github.com/huggingface/datasets/issues/7322 | 2,732,254,868 | I_kwDODunzps6i2uaU | 7,322 | ArrowInvalid: JSON parse error: Column() changed from object to array in row 0 | {
"login": "CLL112",
"id": 41767521,
"node_id": "MDQ6VXNlcjQxNzY3NTIx",
"avatar_url": "https://avatars.githubusercontent.com/u/41767521?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CLL112",
"html_url": "https://github.com/CLL112",
"followers_url": "https://api.github.com/users/CLL112/followers",
"following_url": "https://api.github.com/users/CLL112/following{/other_user}",
"gists_url": "https://api.github.com/users/CLL112/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CLL112/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CLL112/subscriptions",
"organizations_url": "https://api.github.com/users/CLL112/orgs",
"repos_url": "https://api.github.com/users/CLL112/repos",
"events_url": "https://api.github.com/users/CLL112/events{/privacy}",
"received_events_url": "https://api.github.com/users/CLL112/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-12-11T08:41:39 | 2024-12-11T08:42:54 | null | NONE | null | ### Describe the bug
Encountering an error while loading the ```liuhaotian/LLaVA-Instruct-150K dataset```.
### Steps to reproduce the bug
```
from datasets import load_dataset
fw =load_dataset("liuhaotian/LLaVA-Instruct-150K")
```
Error:
```
ArrowInvalid Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/datasets/packaged_modules/json/json.py](https://localhost:8080/#) in _generate_tables(self, files)
136 try:
--> 137 pa_table = paj.read_json(
138 io.BytesIO(batch), read_options=paj.ReadOptions(block_size=block_size)
20 frames
ArrowInvalid: JSON parse error: Column() changed from object to array in row 0
During handling of the above exception, another exception occurred:
ArrowTypeError Traceback (most recent call last)
ArrowTypeError: ("Expected bytes, got a 'int' object", 'Conversion failed for column id with type object')
The above exception was the direct cause of the following exception:
DatasetGenerationError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1895 if isinstance(e, DatasetGenerationError):
1896 raise
-> 1897 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1898
1899 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset
```
### Expected behavior
I have tried loading the dataset both on my own server and on Colab, and encountered errors in both instances.
### Environment info
```
- `datasets` version: 3.2.0
- Platform: Linux-6.1.85+-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.26.3
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.9.0
```
| null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7322/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7322/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7321 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7321/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7321/comments | https://api.github.com/repos/huggingface/datasets/issues/7321/events | https://github.com/huggingface/datasets/issues/7321 | 2,731,626,760 | I_kwDODunzps6i0VEI | 7,321 | ImportError: cannot import name 'set_caching_enabled' from 'datasets' | {
"login": "sankexin",
"id": 33318353,
"node_id": "MDQ6VXNlcjMzMzE4MzUz",
"avatar_url": "https://avatars.githubusercontent.com/u/33318353?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sankexin",
"html_url": "https://github.com/sankexin",
"followers_url": "https://api.github.com/users/sankexin/followers",
"following_url": "https://api.github.com/users/sankexin/following{/other_user}",
"gists_url": "https://api.github.com/users/sankexin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sankexin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sankexin/subscriptions",
"organizations_url": "https://api.github.com/users/sankexin/orgs",
"repos_url": "https://api.github.com/users/sankexin/repos",
"events_url": "https://api.github.com/users/sankexin/events{/privacy}",
"received_events_url": "https://api.github.com/users/sankexin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-12-11T01:58:46 | 2024-12-11T13:32:15 | null | NONE | null | ### Describe the bug
Traceback (most recent call last):
File "/usr/local/lib/python3.10/runpy.py", line 187, in _run_module_as_main
mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
File "/usr/local/lib/python3.10/runpy.py", line 110, in _get_module_details
__import__(pkg_name)
File "/home/Medusa/axolotl/src/axolotl/cli/__init__.py", line 23, in <module>
from axolotl.train import TrainDatasetMeta
File "/home/Medusa/axolotl/src/axolotl/train.py", line 23, in <module>
from axolotl.utils.trainer import setup_trainer
File "/home/Medusa/axolotl/src/axolotl/utils/trainer.py", line 13, in <module>
from datasets import set_caching_enabled
ImportError: cannot import name 'set_caching_enabled' from 'datasets' (/usr/local/lib/python3.10/site-packages/datasets/__init__.py)
### Steps to reproduce the bug
1、axolotl
2、accelerate launch -m axolotl.cli.train examples/medusa/qwen_lora_stage1.yml
### Expected behavior
enable datasets
### Environment info
python3.10 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7321/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7321/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7320 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7320/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7320/comments | https://api.github.com/repos/huggingface/datasets/issues/7320/events | https://github.com/huggingface/datasets/issues/7320 | 2,731,112,100 | I_kwDODunzps6iyXak | 7,320 | ValueError: You should supply an encoding or a list of encodings to this method that includes input_ids, but you provided ['label'] | {
"login": "atrompeterog",
"id": 38381084,
"node_id": "MDQ6VXNlcjM4MzgxMDg0",
"avatar_url": "https://avatars.githubusercontent.com/u/38381084?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/atrompeterog",
"html_url": "https://github.com/atrompeterog",
"followers_url": "https://api.github.com/users/atrompeterog/followers",
"following_url": "https://api.github.com/users/atrompeterog/following{/other_user}",
"gists_url": "https://api.github.com/users/atrompeterog/gists{/gist_id}",
"starred_url": "https://api.github.com/users/atrompeterog/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/atrompeterog/subscriptions",
"organizations_url": "https://api.github.com/users/atrompeterog/orgs",
"repos_url": "https://api.github.com/users/atrompeterog/repos",
"events_url": "https://api.github.com/users/atrompeterog/events{/privacy}",
"received_events_url": "https://api.github.com/users/atrompeterog/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-12-10T20:23:11 | 2024-12-10T23:22:23 | 2024-12-10T23:22:23 | NONE | null | ### Describe the bug
I am trying to create a PEFT model from DISTILBERT model, and run a training loop. However, the trainer.train() is giving me this error: ValueError: You should supply an encoding or a list of encodings to this method that includes input_ids, but you provided ['label']
Here is my code:
### Steps to reproduce the bug
#Creating a PEFT Config
from peft import LoraConfig
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import get_peft_model
lora_config = LoraConfig(
task_type="SEQ_CLASS",
r=8,
lora_alpha=32,
target_modules=["q_lin", "k_lin", "v_lin"],
lora_dropout=0.01,
)
#Converting a Transformers Model into a PEFT Model
model = AutoModelForSequenceClassification.from_pretrained(
"distilbert-base-uncased",
num_labels=2, #Binary classification, 1 = positive, 0 = negative
)
lora_model = get_peft_model(model, lora_config)
print(lora_model)
Tokenize data set
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
# Load the train and test splits dataset
dataset = load_dataset("fancyzhx/amazon_polarity")
#create a smaller subset for train and test
subset_size = 5000
small_train_dataset = dataset["train"].shuffle(seed=42).select(range(subset_size))
small_test_dataset = dataset["test"].shuffle(seed=42).select(range(subset_size))
#Tokenize data
def tokenize_function(example):
return tokenizer(example["content"], padding="max_length", truncation=True)
tokenized_train_dataset = small_train_dataset.map(tokenize_function, batched=True)
tokenized_test_dataset = small_test_dataset.map(tokenize_function, batched=True)
train_lora = tokenized_train_dataset.rename_column('label', 'labels')
test_lora = tokenized_test_dataset.rename_column('label', 'labels')
print(tokenized_train_dataset.column_names)
print(tokenized_test_dataset.column_names)
#Train the PEFT model
import numpy as np
from transformers import Trainer, TrainingArguments, default_data_collator, DataCollatorWithPadding
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification
def compute_metrics(eval_pred):
predictions, labels = eval_pred
predictions = np.argmax(predictions, axis=1)
return {"accuracy": (predictions == labels).mean()}
trainer = Trainer(
model=lora_model,
args=TrainingArguments(
output_dir=".",
learning_rate=2e-3,
# Reduce the batch size if you don't have enough memory
per_device_train_batch_size=1,
per_device_eval_batch_size=1,
num_train_epochs=3,
weight_decay=0.01,
evaluation_strategy="epoch",
save_strategy="epoch",
load_best_model_at_end=True,
),
train_dataset=tokenized_train_dataset,
eval_dataset=tokenized_test_dataset,
tokenizer=tokenizer,
data_collator=DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="pt"),
compute_metrics=compute_metrics,
)
trainer.train()
### Expected behavior
Example of output:
[558/558 01:04, Epoch XX]
Epoch | Training Loss | Validation Loss | Accuracy
-- | -- | -- | --
1 | No log | 0.046478 | 0.988341
2 | 0.052800 | 0.048840 | 0.988341
### Environment info
Using python and jupyter notbook | {
"login": "atrompeterog",
"id": 38381084,
"node_id": "MDQ6VXNlcjM4MzgxMDg0",
"avatar_url": "https://avatars.githubusercontent.com/u/38381084?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/atrompeterog",
"html_url": "https://github.com/atrompeterog",
"followers_url": "https://api.github.com/users/atrompeterog/followers",
"following_url": "https://api.github.com/users/atrompeterog/following{/other_user}",
"gists_url": "https://api.github.com/users/atrompeterog/gists{/gist_id}",
"starred_url": "https://api.github.com/users/atrompeterog/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/atrompeterog/subscriptions",
"organizations_url": "https://api.github.com/users/atrompeterog/orgs",
"repos_url": "https://api.github.com/users/atrompeterog/repos",
"events_url": "https://api.github.com/users/atrompeterog/events{/privacy}",
"received_events_url": "https://api.github.com/users/atrompeterog/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7320/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7320/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7319 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7319/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7319/comments | https://api.github.com/repos/huggingface/datasets/issues/7319/events | https://github.com/huggingface/datasets/pull/7319 | 2,730,679,980 | PR_kwDODunzps6EvHBp | 7,319 | set dev version | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-12-10T17:01:34 | 2024-12-10T17:04:04 | 2024-12-10T17:01:45 | MEMBER | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7319/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7319/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7319",
"html_url": "https://github.com/huggingface/datasets/pull/7319",
"diff_url": "https://github.com/huggingface/datasets/pull/7319.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7319.patch",
"merged_at": "2024-12-10T17:01:45"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7318 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7318/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7318/comments | https://api.github.com/repos/huggingface/datasets/issues/7318/events | https://github.com/huggingface/datasets/issues/7318 | 2,730,676,278 | I_kwDODunzps6iwtA2 | 7,318 | Introduce support for PDFs | {
"login": "yabramuvdi",
"id": 4812761,
"node_id": "MDQ6VXNlcjQ4MTI3NjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/4812761?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yabramuvdi",
"html_url": "https://github.com/yabramuvdi",
"followers_url": "https://api.github.com/users/yabramuvdi/followers",
"following_url": "https://api.github.com/users/yabramuvdi/following{/other_user}",
"gists_url": "https://api.github.com/users/yabramuvdi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yabramuvdi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yabramuvdi/subscriptions",
"organizations_url": "https://api.github.com/users/yabramuvdi/orgs",
"repos_url": "https://api.github.com/users/yabramuvdi/repos",
"events_url": "https://api.github.com/users/yabramuvdi/events{/privacy}",
"received_events_url": "https://api.github.com/users/yabramuvdi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 6 | 2024-12-10T16:59:48 | 2024-12-12T18:38:13 | null | NONE | null | ### Feature request
The idea (discussed in the Discord server with @lhoestq ) is to have a Pdf type like Image/Audio/Video. For example [Video](https://github.com/huggingface/datasets/blob/main/src/datasets/features/video.py) was recently added and contains how to decode a video file encoded in a dictionary like {"path": ..., "bytes": ...} as a VideoReader using decord. We want to do the same with pdf and get a [pypdfium2.PdfDocument](https://pypdfium2.readthedocs.io/en/stable/_modules/pypdfium2/_helpers/document.html#PdfDocument).
### Motivation
In many cases PDFs contain very valuable information beyond text (e.g. images, figures). Support for PDFs would help create datasets where all the information is preserved.
### Your contribution
I can start the implementation of the Pdf type :) | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7318/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7318/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7317 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7317/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7317/comments | https://api.github.com/repos/huggingface/datasets/issues/7317/events | https://github.com/huggingface/datasets/pull/7317 | 2,730,661,237 | PR_kwDODunzps6EvC5Q | 7,317 | Release: 3.2.0 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-12-10T16:53:20 | 2024-12-10T16:56:58 | 2024-12-10T16:56:56 | MEMBER | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7317/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7317/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7317",
"html_url": "https://github.com/huggingface/datasets/pull/7317",
"diff_url": "https://github.com/huggingface/datasets/pull/7317.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7317.patch",
"merged_at": "2024-12-10T16:56:56"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7316 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7316/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7316/comments | https://api.github.com/repos/huggingface/datasets/issues/7316/events | https://github.com/huggingface/datasets/pull/7316 | 2,730,196,085 | PR_kwDODunzps6Etc0U | 7,316 | More docs to from_dict to mention that the result lives in RAM | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-12-10T13:56:01 | 2024-12-10T13:58:32 | 2024-12-10T13:57:02 | MEMBER | null | following discussions at /static-proxy?url=https%3A%2F%2Fdiscuss.huggingface.co%2Ft%2Fhow-to-load-this-simple-audio-data-set-and-use-dataset-map-without-memory-issues%2F17722%2F14%3C%2Fdiv%3E%3C%2Fdiv%3E
| {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7316/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7316/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7316",
"html_url": "https://github.com/huggingface/datasets/pull/7316",
"diff_url": "https://github.com/huggingface/datasets/pull/7316.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7316.patch",
"merged_at": "2024-12-10T13:57:02"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7314 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7314/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7314/comments | https://api.github.com/repos/huggingface/datasets/issues/7314/events | https://github.com/huggingface/datasets/pull/7314 | 2,727,502,630 | PR_kwDODunzps6EkCi5 | 7,314 | Resolved for empty datafiles | {
"login": "sahillihas",
"id": 20582290,
"node_id": "MDQ6VXNlcjIwNTgyMjkw",
"avatar_url": "https://avatars.githubusercontent.com/u/20582290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sahillihas",
"html_url": "https://github.com/sahillihas",
"followers_url": "https://api.github.com/users/sahillihas/followers",
"following_url": "https://api.github.com/users/sahillihas/following{/other_user}",
"gists_url": "https://api.github.com/users/sahillihas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sahillihas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sahillihas/subscriptions",
"organizations_url": "https://api.github.com/users/sahillihas/orgs",
"repos_url": "https://api.github.com/users/sahillihas/repos",
"events_url": "https://api.github.com/users/sahillihas/events{/privacy}",
"received_events_url": "https://api.github.com/users/sahillihas/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-12-09T15:47:22 | 2024-12-27T18:20:21 | null | NONE | null | Resolved for Issue#6152 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7314/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7314/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7314",
"html_url": "https://github.com/huggingface/datasets/pull/7314",
"diff_url": "https://github.com/huggingface/datasets/pull/7314.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7314.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7313 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7313/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7313/comments | https://api.github.com/repos/huggingface/datasets/issues/7313/events | https://github.com/huggingface/datasets/issues/7313 | 2,726,240,634 | I_kwDODunzps6ifyF6 | 7,313 | Cannot create a dataset with relative audio path | {
"login": "sedol1339",
"id": 5188731,
"node_id": "MDQ6VXNlcjUxODg3MzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5188731?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sedol1339",
"html_url": "https://github.com/sedol1339",
"followers_url": "https://api.github.com/users/sedol1339/followers",
"following_url": "https://api.github.com/users/sedol1339/following{/other_user}",
"gists_url": "https://api.github.com/users/sedol1339/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sedol1339/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sedol1339/subscriptions",
"organizations_url": "https://api.github.com/users/sedol1339/orgs",
"repos_url": "https://api.github.com/users/sedol1339/repos",
"events_url": "https://api.github.com/users/sedol1339/events{/privacy}",
"received_events_url": "https://api.github.com/users/sedol1339/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 3 | 2024-12-09T07:34:20 | 2024-12-12T13:46:38 | null | NONE | null | ### Describe the bug
Hello! I want to create a dataset of parquet files, with audios stored as separate .mp3 files. However, it says "No such file or directory" (see the reproducing code).
### Steps to reproduce the bug
Creating a dataset
```
from pathlib import Path
from datasets import Dataset, load_dataset, Audio
Path('my_dataset/audio').mkdir(parents=True, exist_ok=True)
Path('my_dataset/audio/file.mp3').touch(exist_ok=True)
Dataset.from_list(
[{'audio': {'path': 'audio/file.mp3'}}]
).to_parquet('my_dataset/data.parquet')
```
Result:
```
# my_dataset
# ├── audio
# │ └── file.mp3
# └── data.parquet
```
Trying to load the dataset
```
dataset = (
load_dataset('my_dataset', split='train')
.cast_column('audio', Audio(sampling_rate=16_000))
)
dataset[0]
>>> FileNotFoundError: [Errno 2] No such file or directory: 'audio/file.mp3'
```
### Expected behavior
I expect the dataset to load correctly.
I've found 2 workarounds, but they are not very good:
1. I can specify an absolute path to the audio, however, when I move the folder or upload to HF it will stop working.
2. I can set `'path': 'file.mp3'`, and load with `load_dataset('my_dataset', data_dir='audio')` - it seems to work, but does this mean that anyone from Hugging Face who wants to use this dataset should also pass the `data_dir` argument, otherwise it won't work?
### Environment info
datasets 3.1.0, Ubuntu 24.04.1 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7313/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7313/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7312 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7312/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7312/comments | https://api.github.com/repos/huggingface/datasets/issues/7312/events | https://github.com/huggingface/datasets/pull/7312 | 2,725,103,094 | PR_kwDODunzps6EbwNN | 7,312 | [Audio Features - DO NOT MERGE] PoC for adding an offset+sliced reading to audio file. | {
"login": "TParcollet",
"id": 11910731,
"node_id": "MDQ6VXNlcjExOTEwNzMx",
"avatar_url": "https://avatars.githubusercontent.com/u/11910731?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TParcollet",
"html_url": "https://github.com/TParcollet",
"followers_url": "https://api.github.com/users/TParcollet/followers",
"following_url": "https://api.github.com/users/TParcollet/following{/other_user}",
"gists_url": "https://api.github.com/users/TParcollet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TParcollet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TParcollet/subscriptions",
"organizations_url": "https://api.github.com/users/TParcollet/orgs",
"repos_url": "https://api.github.com/users/TParcollet/repos",
"events_url": "https://api.github.com/users/TParcollet/events{/privacy}",
"received_events_url": "https://api.github.com/users/TParcollet/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-12-08T10:27:31 | 2024-12-08T10:27:31 | null | NONE | null | This is a proof of concept for #7310 . The idea is to enable the access to others column of the dataset row when loading an audio file into a table. This is to allow sliced reading. As stated in the issue, many people have very long audio files and use start and stop slicing in this audio file.
Right now, this code work as a PoC on my dataset. However, this is **just to illustrate** the idea. Many things are messed up, the first being that the shards have wildly varying sizes.
Could be of interest to @lhoestq and @sanchit-gandhi ?
Happy to test better ideas locally. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7312/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7312/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7312",
"html_url": "https://github.com/huggingface/datasets/pull/7312",
"diff_url": "https://github.com/huggingface/datasets/pull/7312.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7312.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7311 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7311/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7311/comments | https://api.github.com/repos/huggingface/datasets/issues/7311/events | https://github.com/huggingface/datasets/issues/7311 | 2,725,002,630 | I_kwDODunzps6ibD2G | 7,311 | How to get the original dataset name with username? | {
"login": "npuichigo",
"id": 11533479,
"node_id": "MDQ6VXNlcjExNTMzNDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/npuichigo",
"html_url": "https://github.com/npuichigo",
"followers_url": "https://api.github.com/users/npuichigo/followers",
"following_url": "https://api.github.com/users/npuichigo/following{/other_user}",
"gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions",
"organizations_url": "https://api.github.com/users/npuichigo/orgs",
"repos_url": "https://api.github.com/users/npuichigo/repos",
"events_url": "https://api.github.com/users/npuichigo/events{/privacy}",
"received_events_url": "https://api.github.com/users/npuichigo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2024-12-08T07:18:14 | 2024-12-08T07:19:41 | null | CONTRIBUTOR | null | ### Feature request
The issue is related to ray data https://github.com/ray-project/ray/issues/49008 which it requires to check if the dataset is the original one just after `load_dataset` and parquet files are already available on hf hub.
The solution used now is to get the dataset name, config and split, then `load_dataset` again and check the fingerprint. But it's unable to get the correct dataset name if it contains username. So how to get the dataset name with username prefix, or is there another way to query if a dataset is the original one with parquet available?
@lhoestq
### Motivation
https://github.com/ray-project/ray/issues/49008
### Your contribution
Would like to fix that. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7311/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7311/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7310 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7310/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7310/comments | https://api.github.com/repos/huggingface/datasets/issues/7310/events | https://github.com/huggingface/datasets/issues/7310 | 2,724,830,603 | I_kwDODunzps6iaZ2L | 7,310 | Enable the Audio Feature to decode / read with an offset + duration | {
"login": "TParcollet",
"id": 11910731,
"node_id": "MDQ6VXNlcjExOTEwNzMx",
"avatar_url": "https://avatars.githubusercontent.com/u/11910731?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TParcollet",
"html_url": "https://github.com/TParcollet",
"followers_url": "https://api.github.com/users/TParcollet/followers",
"following_url": "https://api.github.com/users/TParcollet/following{/other_user}",
"gists_url": "https://api.github.com/users/TParcollet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TParcollet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TParcollet/subscriptions",
"organizations_url": "https://api.github.com/users/TParcollet/orgs",
"repos_url": "https://api.github.com/users/TParcollet/repos",
"events_url": "https://api.github.com/users/TParcollet/events{/privacy}",
"received_events_url": "https://api.github.com/users/TParcollet/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 2 | 2024-12-07T22:01:44 | 2024-12-09T21:09:46 | null | NONE | null | ### Feature request
For most large speech dataset, we do not wish to generate hundreds of millions of small audio samples. Instead, it is quite common to provide larger audio files with frame offset (soundfile start and stop arguments). We should be able to pass these arguments to Audio() (column ID corresponding in the dataset row).
### Motivation
I am currently generating a fairly big dataset to .parquet(). Unfortunately, it does not work because all existing functions load the whole .wav file corresponding to the row. All my attempts at bypassing this failed. We should be able to put in the Table only the bytes corresponding to what soundfile reads with an offset (and subset of the audio file).
### Your contribution
I can totally test whatever code on my large dataset creation script. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7310/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7310/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7315 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7315/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7315/comments | https://api.github.com/repos/huggingface/datasets/issues/7315/events | https://github.com/huggingface/datasets/issues/7315 | 2,729,738,963 | I_kwDODunzps6itILT | 7,315 | Allow manual configuration of Dataset Viewer for datasets not created with the `datasets` library | {
"login": "diarray-hub",
"id": 114512099,
"node_id": "U_kgDOBtNQ4w",
"avatar_url": "https://avatars.githubusercontent.com/u/114512099?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/diarray-hub",
"html_url": "https://github.com/diarray-hub",
"followers_url": "https://api.github.com/users/diarray-hub/followers",
"following_url": "https://api.github.com/users/diarray-hub/following{/other_user}",
"gists_url": "https://api.github.com/users/diarray-hub/gists{/gist_id}",
"starred_url": "https://api.github.com/users/diarray-hub/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/diarray-hub/subscriptions",
"organizations_url": "https://api.github.com/users/diarray-hub/orgs",
"repos_url": "https://api.github.com/users/diarray-hub/repos",
"events_url": "https://api.github.com/users/diarray-hub/events{/privacy}",
"received_events_url": "https://api.github.com/users/diarray-hub/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 13 | 2024-12-07T16:37:12 | 2024-12-11T11:05:22 | null | NONE | null | #### **Problem Description**
Currently, the Hugging Face Dataset Viewer automatically interprets dataset fields for datasets created with the `datasets` library. However, for datasets pushed directly via `git`, the Viewer:
- Defaults to generic columns like `label` with `null` values if no explicit mapping is provided.
- Does not allow dataset creators to configure field mappings or suppress default fields unless the dataset is recreated and pushed using the `datasets` library.
This creates a limitation for creators who:
- Use custom workflows to prepare datasets (e.g., manifest files with audio-transcription mappings).
- Push large datasets directly via `git` and cannot easily restructure them to conform to the `datasets` library format.
#### **Proposed Solution**
Introduce a feature that allows dataset creators to manually configure the Dataset Viewer behavior for datasets not created with the `datasets` library. This could be achieved by:
1. **Using the YAML Metadata in `README.md`:**
- Add support for defining the dataset's field mappings directly in the `README.md` YAML section.
- Example:
```yaml
viewer:
fields:
- name: "audio"
type: "audio_path" / "text"
source: "manifest['audio']"
- name: "bambara_transcription"
type: "text"
source: "manifest['bambara']"
- name: "french_translation"
type: "text"
source: "manifest['french']"
```
With manifest being a csv or json like format file in the repository so that the viewer understands that it should look for the values of each field in that file.
#### **Benefits**
- Improves flexibility for dataset creators who push datasets via `git`.
- Enhances dataset discoverability and usability on the Hugging Face Hub by allowing creators to present meaningful field mappings without restructuring their data.
- Reduces overhead for creators of large or complex datasets.
#### **Examples of Use Case**
- An audio dataset with transcriptions in multiple languages stored in a `manifest.json` file, where the user wants the Viewer to:
- Display the `audio` column and Explicitly map features that he defined such as `bambara_transcription` and `french_translation` from the manifest. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7315/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7315/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7309 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7309/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7309/comments | https://api.github.com/repos/huggingface/datasets/issues/7309/events | https://github.com/huggingface/datasets/pull/7309 | 2,723,636,931 | PR_kwDODunzps6EW77b | 7,309 | Faster parquet streaming + filters with predicate pushdown | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-12-06T18:01:54 | 2024-12-07T23:32:30 | 2024-12-07T23:32:28 | MEMBER | null | ParquetFragment.to_batches uses a buffered stream to read parquet data, which makes streaming faster (x2 on my laptop).
I also added the `filters` config parameter to support filtering with predicate pushdown, e.g.
```python
from datasets import load_dataset
filters = [('problem_source', '==', 'math')]
ds = load_dataset("nvidia/OpenMathInstruct-2", streaming=True, filters=filters)
first_example = next(iter(ds["train"]))
print(first_example["problem_source"])
# 'math'
```
cc @allisonwang-db this is a nice plus for usage in spark | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7309/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7309/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7309",
"html_url": "https://github.com/huggingface/datasets/pull/7309",
"diff_url": "https://github.com/huggingface/datasets/pull/7309.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7309.patch",
"merged_at": "2024-12-07T23:32:28"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7307 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7307/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7307/comments | https://api.github.com/repos/huggingface/datasets/issues/7307/events | https://github.com/huggingface/datasets/pull/7307 | 2,720,244,889 | PR_kwDODunzps6ELKcR | 7,307 | refactor: remove unnecessary else | {
"login": "HarikrishnanBalagopal",
"id": 20921177,
"node_id": "MDQ6VXNlcjIwOTIxMTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/20921177?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HarikrishnanBalagopal",
"html_url": "https://github.com/HarikrishnanBalagopal",
"followers_url": "https://api.github.com/users/HarikrishnanBalagopal/followers",
"following_url": "https://api.github.com/users/HarikrishnanBalagopal/following{/other_user}",
"gists_url": "https://api.github.com/users/HarikrishnanBalagopal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HarikrishnanBalagopal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HarikrishnanBalagopal/subscriptions",
"organizations_url": "https://api.github.com/users/HarikrishnanBalagopal/orgs",
"repos_url": "https://api.github.com/users/HarikrishnanBalagopal/repos",
"events_url": "https://api.github.com/users/HarikrishnanBalagopal/events{/privacy}",
"received_events_url": "https://api.github.com/users/HarikrishnanBalagopal/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-12-05T12:11:09 | 2024-12-06T15:11:33 | null | NONE | null | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7307/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7307/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7307",
"html_url": "https://github.com/huggingface/datasets/pull/7307",
"diff_url": "https://github.com/huggingface/datasets/pull/7307.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7307.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7306 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7306/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7306/comments | https://api.github.com/repos/huggingface/datasets/issues/7306/events | https://github.com/huggingface/datasets/issues/7306 | 2,719,807,464 | I_kwDODunzps6iHPfo | 7,306 | Creating new dataset from list loses information. (Audio Information Lost - either Datatype or Values). | {
"login": "ai-nikolai",
"id": 9797804,
"node_id": "MDQ6VXNlcjk3OTc4MDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9797804?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ai-nikolai",
"html_url": "https://github.com/ai-nikolai",
"followers_url": "https://api.github.com/users/ai-nikolai/followers",
"following_url": "https://api.github.com/users/ai-nikolai/following{/other_user}",
"gists_url": "https://api.github.com/users/ai-nikolai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ai-nikolai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ai-nikolai/subscriptions",
"organizations_url": "https://api.github.com/users/ai-nikolai/orgs",
"repos_url": "https://api.github.com/users/ai-nikolai/repos",
"events_url": "https://api.github.com/users/ai-nikolai/events{/privacy}",
"received_events_url": "https://api.github.com/users/ai-nikolai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-12-05T09:07:53 | 2024-12-05T09:09:38 | null | NONE | null | ### Describe the bug
When creating a dataset from a list of datapoints, information is lost of the individual items.
Specifically, when creating a dataset from a list of datapoints (from another dataset). Either the datatype is lost or the values are lost. See examples below.
-> What is the best way to create a dataset from a list of datapoints?
---
e.g.:
**When running this code:**
```python
from datasets import load_dataset, Dataset
commonvoice_data = load_dataset("mozilla-foundation/common_voice_17_0", "it", split="test", streaming=True)
datapoint = next(iter(commonvoice_data))
out = [datapoint]
new_data = Dataset.from_list(out) #this loses datatype information
new_data2= Dataset.from_list(out,features=commonvoice_data.features) #this loses value information
```
**We get the following**:
---
1. `datapoint`: (the original datapoint)
```
'audio': {'path': 'it_test_0/common_voice_it_23606167.mp3', 'array': array([0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ...,
2.21619011e-05, 2.72628222e-05, 0.00000000e+00]), 'sampling_rate': 48000}
```
Original Dataset Features:
```
>>> commonvoice_data.features
'audio': Audio(sampling_rate=48000, mono=True, decode=True, id=None)
```
- Here we see column "audio", has the proper values (both `path` & and `array`) and has the correct datatype (Audio).
----
2. new_data[0]:
```
# Cannot be printed (as it prints the entire array).
```
New Dataset 1 Features:
```
>>> new_data.features
'audio': {'array': Sequence(feature=Value(dtype='float64', id=None), length=-1, id=None), 'path': Value(dtype='string', id=None), 'sampling_rate': Value(dtype='int64', id=None)}
```
- Here we see that the column "audio", has the correct values, but is not the Audio datatype anymore.
---
3. new_data2[0]:
```
'audio': {'path': None, 'array': array([0., 0., 0., ..., 0., 0., 0.]), 'sampling_rate': 48000},
```
New Dataset 2 Features:
```
>>> new_data2.features
'audio': Audio(sampling_rate=48000, mono=True, decode=True, id=None),
```
- Here we see that the column "audio", has the correct datatype, but all the array & path values were lost!
### Steps to reproduce the bug
## Run:
```python
from datasets import load_dataset, Dataset
commonvoice_data = load_dataset("mozilla-foundation/common_voice_17_0", "it", split="test", streaming=True)
datapoint = next(iter(commonvoice_data))
out = [datapoint]
new_data = Dataset.from_list(out) #this loses datatype information
new_data2= Dataset.from_list(out,features=commonvoice_data.features) #this loses value information
```
### Expected behavior
## Expected:
```datapoint == new_data[0]```
AND
```datapoint == new_data2[0]```
### Environment info
- `datasets` version: 3.1.0
- Platform: Linux-6.2.0-37-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.26.2
- PyArrow version: 15.0.2
- Pandas version: 2.2.2
- `fsspec` version: 2024.3.1 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7306/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7306/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7305 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7305/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7305/comments | https://api.github.com/repos/huggingface/datasets/issues/7305/events | https://github.com/huggingface/datasets/issues/7305 | 2,715,907,267 | I_kwDODunzps6h4XTD | 7,305 | Build Documentation Test Fails Due to "Bad Credentials" Error | {
"login": "ruidazeng",
"id": 31152346,
"node_id": "MDQ6VXNlcjMxMTUyMzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/31152346?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ruidazeng",
"html_url": "https://github.com/ruidazeng",
"followers_url": "https://api.github.com/users/ruidazeng/followers",
"following_url": "https://api.github.com/users/ruidazeng/following{/other_user}",
"gists_url": "https://api.github.com/users/ruidazeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ruidazeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ruidazeng/subscriptions",
"organizations_url": "https://api.github.com/users/ruidazeng/orgs",
"repos_url": "https://api.github.com/users/ruidazeng/repos",
"events_url": "https://api.github.com/users/ruidazeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/ruidazeng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-12-03T20:22:54 | 2024-12-03T20:22:54 | null | CONTRIBUTOR | null | ### Describe the bug
The `Build documentation / build / build_main_documentation (push)` job is consistently failing during the "Syncing repository" step. The error occurs when attempting to determine the default branch name, resulting in "Bad credentials" errors.
### Steps to reproduce the bug
1. Trigger the `build_main_documentation` job.
2. Observe the logs during the "Syncing repository" step.
### Expected behavior
The workflow should be able to retrieve the default branch name without encountering credential issues.
### Environment info
```plaintext
Syncing repository: huggingface/notebooks
Getting Git version info
Temporarily overriding HOME='/home/runner/work/_temp/00e62748-9940-4a4f-bbbc-eb2cda6d7ed6' before making global git config changes
Adding repository directory to the temporary git global config as a safe directory
/usr/bin/git config --global --add safe.directory /home/runner/work/datasets/datasets/notebooks
Initializing the repository
Disabling automatic garbage collection
Setting up auth
Determining the default branch
Retrieving the default branch name
Bad credentials - https://docs.github.com/rest
Waiting 20 seconds before trying again
Retrieving the default branch name
Bad credentials - https://docs.github.com/rest
Waiting 19 seconds before trying again
Retrieving the default branch name
Error: Bad credentials - https://docs.github.com/rest
``` | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7305/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7305/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7304 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7304/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7304/comments | https://api.github.com/repos/huggingface/datasets/issues/7304/events | https://github.com/huggingface/datasets/pull/7304 | 2,715,179,811 | PR_kwDODunzps6D5saw | 7,304 | Update iterable_dataset.py | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-12-03T14:25:42 | 2024-12-03T14:28:10 | 2024-12-03T14:27:02 | MEMBER | null | close https://github.com/huggingface/datasets/issues/7297 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7304/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7304/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7304",
"html_url": "https://github.com/huggingface/datasets/pull/7304",
"diff_url": "https://github.com/huggingface/datasets/pull/7304.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7304.patch",
"merged_at": "2024-12-03T14:27:02"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7303 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7303/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7303/comments | https://api.github.com/repos/huggingface/datasets/issues/7303/events | https://github.com/huggingface/datasets/issues/7303 | 2,705,729,696 | I_kwDODunzps6hRiig | 7,303 | DataFilesNotFoundError for datasets LM1B | {
"login": "hml1996-fight",
"id": 72264324,
"node_id": "MDQ6VXNlcjcyMjY0MzI0",
"avatar_url": "https://avatars.githubusercontent.com/u/72264324?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hml1996-fight",
"html_url": "https://github.com/hml1996-fight",
"followers_url": "https://api.github.com/users/hml1996-fight/followers",
"following_url": "https://api.github.com/users/hml1996-fight/following{/other_user}",
"gists_url": "https://api.github.com/users/hml1996-fight/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hml1996-fight/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hml1996-fight/subscriptions",
"organizations_url": "https://api.github.com/users/hml1996-fight/orgs",
"repos_url": "https://api.github.com/users/hml1996-fight/repos",
"events_url": "https://api.github.com/users/hml1996-fight/events{/privacy}",
"received_events_url": "https://api.github.com/users/hml1996-fight/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-11-29T17:27:45 | 2024-12-11T13:22:47 | 2024-12-11T13:22:47 | NONE | null | ### Describe the bug
Cannot load the dataset https://huggingface.co/datasets/billion-word-benchmark/lm1b
### Steps to reproduce the bug
`dataset = datasets.load_dataset('lm1b', split=split)`
### Expected behavior
`Traceback (most recent call last):
File "/home/hml/projects/DeepLearning/Generative_model/Diffusion-BERT/word_freq.py", line 13, in <module>
train_data = DiffusionLoader(tokenizer=tokenizer).my_load(task_name='lm1b', splits=['train'])[0]
File "/home/hml/projects/DeepLearning/Generative_model/Diffusion-BERT/dataloader.py", line 20, in my_load
return [self._load(task_name, name) for name in splits]
File "/home/hml/projects/DeepLearning/Generative_model/Diffusion-BERT/dataloader.py", line 20, in <listcomp>
return [self._load(task_name, name) for name in splits]
File "/home/hml/projects/DeepLearning/Generative_model/Diffusion-BERT/dataloader.py", line 13, in _load
dataset = datasets.load_dataset('lm1b', split=split)
File "/home/hml/.conda/envs/DB/lib/python3.10/site-packages/datasets/load.py", line 2594, in load_dataset
builder_instance = load_dataset_builder(
File "/home/hml/.conda/envs/DB/lib/python3.10/site-packages/datasets/load.py", line 2266, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/home/hml/.conda/envs/DB/lib/python3.10/site-packages/datasets/load.py", line 1827, in dataset_module_factory
).get_module()
File "/home/hml/.conda/envs/DB/lib/python3.10/site-packages/datasets/load.py", line 1040, in get_module
module_name, default_builder_kwargs = infer_module_for_data_files(
File "/home/hml/.conda/envs/DB/lib/python3.10/site-packages/datasets/load.py", line 598, in infer_module_for_data_files
raise DataFilesNotFoundError("No (supported) data files found" + (f" in {path}" if path else ""))
datasets.exceptions.DataFilesNotFoundError: No (supported) data files found in lm1b`
### Environment info
datasets: 2.20.0 | {
"login": "hml1996-fight",
"id": 72264324,
"node_id": "MDQ6VXNlcjcyMjY0MzI0",
"avatar_url": "https://avatars.githubusercontent.com/u/72264324?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hml1996-fight",
"html_url": "https://github.com/hml1996-fight",
"followers_url": "https://api.github.com/users/hml1996-fight/followers",
"following_url": "https://api.github.com/users/hml1996-fight/following{/other_user}",
"gists_url": "https://api.github.com/users/hml1996-fight/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hml1996-fight/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hml1996-fight/subscriptions",
"organizations_url": "https://api.github.com/users/hml1996-fight/orgs",
"repos_url": "https://api.github.com/users/hml1996-fight/repos",
"events_url": "https://api.github.com/users/hml1996-fight/events{/privacy}",
"received_events_url": "https://api.github.com/users/hml1996-fight/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7303/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7303/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7302 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7302/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7302/comments | https://api.github.com/repos/huggingface/datasets/issues/7302/events | https://github.com/huggingface/datasets/pull/7302 | 2,702,626,386 | PR_kwDODunzps6DfY8G | 7,302 | Let server decide default repo visibility | {
"login": "Wauplin",
"id": 11801849,
"node_id": "MDQ6VXNlcjExODAxODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Wauplin",
"html_url": "https://github.com/Wauplin",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-11-28T16:01:13 | 2024-11-29T17:00:40 | 2024-11-29T17:00:38 | CONTRIBUTOR | null | Until now, all repos were public by default when created without passing the `private` argument. This meant that passing `private=False` or `private=None` was strictly the same. This is not the case anymore. Enterprise Hub offers organizations to set a default visibility setting for new repos. This is useful for organizations forbidding public repos for security matters. This PR mostly updates docstrings + default values so that `private=None` is always passed when users don't set it manually.
This PR doesn't create any breaking change. The real update has been done server-side when introducing the new Enterprise Hub feature. Related to https://github.com/huggingface/huggingface_hub/pull/2679. | {
"login": "Wauplin",
"id": 11801849,
"node_id": "MDQ6VXNlcjExODAxODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Wauplin",
"html_url": "https://github.com/Wauplin",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7302/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7302/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7302",
"html_url": "https://github.com/huggingface/datasets/pull/7302",
"diff_url": "https://github.com/huggingface/datasets/pull/7302.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7302.patch",
"merged_at": "2024-11-29T17:00:38"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7301 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7301/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7301/comments | https://api.github.com/repos/huggingface/datasets/issues/7301/events | https://github.com/huggingface/datasets/pull/7301 | 2,701,813,922 | PR_kwDODunzps6DdYLZ | 7,301 | update load_dataset doctring | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-11-28T11:19:20 | 2024-11-29T10:31:43 | 2024-11-29T10:31:40 | MEMBER | null | - remove canonical dataset name
- remove dataset script logic
- add streaming info
- clearer download and prepare steps | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7301/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7301/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7301",
"html_url": "https://github.com/huggingface/datasets/pull/7301",
"diff_url": "https://github.com/huggingface/datasets/pull/7301.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7301.patch",
"merged_at": "2024-11-29T10:31:40"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7300 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7300/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7300/comments | https://api.github.com/repos/huggingface/datasets/issues/7300/events | https://github.com/huggingface/datasets/pull/7300 | 2,701,424,320 | PR_kwDODunzps6Dcba8 | 7,300 | fix: update elasticsearch version | {
"login": "ruidazeng",
"id": 31152346,
"node_id": "MDQ6VXNlcjMxMTUyMzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/31152346?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ruidazeng",
"html_url": "https://github.com/ruidazeng",
"followers_url": "https://api.github.com/users/ruidazeng/followers",
"following_url": "https://api.github.com/users/ruidazeng/following{/other_user}",
"gists_url": "https://api.github.com/users/ruidazeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ruidazeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ruidazeng/subscriptions",
"organizations_url": "https://api.github.com/users/ruidazeng/orgs",
"repos_url": "https://api.github.com/users/ruidazeng/repos",
"events_url": "https://api.github.com/users/ruidazeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/ruidazeng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-11-28T09:14:21 | 2024-12-03T14:36:56 | 2024-12-03T14:24:42 | CONTRIBUTOR | null | This should fix the `test_py311 (windows latest, deps-latest` errors.
```
=========================== short test summary info ===========================
ERROR tests/test_search.py - AttributeError: `np.float_` was removed in the NumPy 2.0 release. Use `np.float64` instead.
ERROR tests/test_search.py - AttributeError: `np.float_` was removed in the NumPy 2.0 release. Use `np.float64` instead.
===== 2822 passed, 54 skipped, 10 warnings, 2 errors in 373.36s (0:06:13) =====
Error: Process completed with exit code 1.
```
The elasticsearch version used is `elasticsearch==7.9.1`, which is 4 years old and uses the removed `numpy.float_`.
elasticsearch fixed this in [https://github.com/elastic/elasticsearch-py/pull/2551](https://github.com/elastic/elasticsearch-py/pull/2551) and released in 8.15.0 (August 2024) and 7.17.12 (September 2024).
| {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7300/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7300/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7300",
"html_url": "https://github.com/huggingface/datasets/pull/7300",
"diff_url": "https://github.com/huggingface/datasets/pull/7300.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7300.patch",
"merged_at": "2024-12-03T14:24:42"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7299 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7299/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7299/comments | https://api.github.com/repos/huggingface/datasets/issues/7299/events | https://github.com/huggingface/datasets/issues/7299 | 2,695,378,251 | I_kwDODunzps6gqDVL | 7,299 | Efficient Image Augmentation in Hugging Face Datasets | {
"login": "fabiozappo",
"id": 46443190,
"node_id": "MDQ6VXNlcjQ2NDQzMTkw",
"avatar_url": "https://avatars.githubusercontent.com/u/46443190?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fabiozappo",
"html_url": "https://github.com/fabiozappo",
"followers_url": "https://api.github.com/users/fabiozappo/followers",
"following_url": "https://api.github.com/users/fabiozappo/following{/other_user}",
"gists_url": "https://api.github.com/users/fabiozappo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fabiozappo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fabiozappo/subscriptions",
"organizations_url": "https://api.github.com/users/fabiozappo/orgs",
"repos_url": "https://api.github.com/users/fabiozappo/repos",
"events_url": "https://api.github.com/users/fabiozappo/events{/privacy}",
"received_events_url": "https://api.github.com/users/fabiozappo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-11-26T16:50:32 | 2024-11-26T16:53:53 | null | NONE | null | ### Describe the bug
I'm using the Hugging Face datasets library to load images in batch and would like to apply a torchvision transform to solve the inconsistent image sizes in the dataset and apply some on the fly image augmentation. I can just think about using the collate_fn, but seems quite inefficient.
I'm new to the Hugging Face datasets library, I didn't find nothing in the documentation or the issues here on github.
Is there an existing way to add image transformations directly to the dataset loading pipeline?
### Steps to reproduce the bug
from datasets import load_dataset
from torch.utils.data import DataLoader
```python
def collate_fn(batch):
images = [item['image'] for item in batch]
texts = [item['text'] for item in batch]
return {
'images': images,
'texts': texts
}
dataset = load_dataset("Yuki20/pokemon_caption", split="train")
dataloader = DataLoader(dataset, batch_size=4, collate_fn=collate_fn)
# Output shows varying image sizes:
# [(1280, 1280), (431, 431), (789, 789), (769, 769)]
```
### Expected behavior
I'm looking for a way to resize images on-the-fly when loading the dataset, similar to PyTorch's Dataset.__getitem__ functionality. This would be more efficient than handling resizing in the collate_fn.
### Environment info
- `datasets` version: 3.1.0
- Platform: Linux-6.5.0-41-generic-x86_64-with-glibc2.35
- Python version: 3.11.10
- `huggingface_hub` version: 0.26.2
- PyArrow version: 18.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0
| null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7299/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7299/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7298 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7298/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7298/comments | https://api.github.com/repos/huggingface/datasets/issues/7298/events | https://github.com/huggingface/datasets/issues/7298 | 2,694,196,968 | I_kwDODunzps6gli7o | 7,298 | loading dataset issue with load_dataset() when training controlnet | {
"login": "bigbraindump",
"id": 81594044,
"node_id": "MDQ6VXNlcjgxNTk0MDQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/81594044?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bigbraindump",
"html_url": "https://github.com/bigbraindump",
"followers_url": "https://api.github.com/users/bigbraindump/followers",
"following_url": "https://api.github.com/users/bigbraindump/following{/other_user}",
"gists_url": "https://api.github.com/users/bigbraindump/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bigbraindump/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bigbraindump/subscriptions",
"organizations_url": "https://api.github.com/users/bigbraindump/orgs",
"repos_url": "https://api.github.com/users/bigbraindump/repos",
"events_url": "https://api.github.com/users/bigbraindump/events{/privacy}",
"received_events_url": "https://api.github.com/users/bigbraindump/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-11-26T10:50:18 | 2024-11-26T10:50:18 | null | NONE | null | ### Describe the bug
i'm unable to load my dataset for [controlnet training](https://github.com/huggingface/diffusers/blob/074e12358bc17e7dbe111ea4f62f05dbae8a49d5/examples/controlnet/train_controlnet.py#L606) using load_dataset(). however, load_from_disk() seems to work?
would appreciate if someone can explain why that's the case here
1. for reference here's the structure of the original training files _before_ dataset creation -
```
- dir train
- dir A (illustrations)
- dir B (SignWriting)
- prompt.json containing:
{"source": "B/file.png", "target": "A/file.png", "prompt": "..."}
```
2. here are features _after_ dataset creation -
```
"features": {
"control_image": {
"_type": "Image"
},
"image": {
"_type": "Image"
},
"caption": {
"dtype": "string",
"_type": "Value"
}
```
3. I've also attempted to upload the dataset to huggingface with the same error output
### Steps to reproduce the bug
1. [dataset creation script](https://github.com/sign-language-processing/signwriting-illustration/blob/main/signwriting_illustration/controlnet_huggingface/dataset.py)
2. controlnet [training script](examples/controlnet/train_controlnet.py) used
3. training parameters -
! accelerate launch diffusers/examples/controlnet/train_controlnet.py \
--pretrained_model_name_or_path="stable-diffusion-v1-5/stable-diffusion-v1-5" \
--output_dir="$OUTPUT_DIR" \
--train_data_dir="$HF_DATASET_DIR" \
--conditioning_image_column=control_image \
--image_column=image \
--caption_column=caption \
--resolution=512\
--learning_rate=1e-5 \
--validation_image "./validation/0a4b3c71265bb3a726457837428dda78.png" "./validation/0a5922fe2c638e6776bd62f623145004.png" "./validation/1c9f1a53106f64c682cf5d009ee7156f.png" \
--validation_prompt "An illustration of a man with short hair" "An illustration of a woman with short hair" "An illustration of Barack Obama" \
--train_batch_size=4 \
--num_train_epochs=500 \
--tracker_project_name="sd-controlnet-signwriting-test" \
--hub_model_id="sarahahtee/signwriting-illustration-test" \
--checkpointing_steps=5000 \
--validation_steps=1000 \
--report_to wandb \
--push_to_hub
4. command -
` sbatch --export=HUGGINGFACE_TOKEN=hf_token,WANDB_API_KEY=api_key script.sh`
### Expected behavior
```
11/25/2024 17:12:18 - INFO - __main__ - Initializing controlnet weights from unet
Generating train split: 1 examples [00:00, 334.85 examples/s]
Traceback (most recent call last):
File "/data/user/user/signwriting_illustration/controlnet_huggingface/diffusers/examples/controlnet/train_controlnet.py", line 1189, in <module>
main(args)
File "/data/user/user/signwriting_illustration/controlnet_huggingface/diffusers/examples/controlnet/train_controlnet.py", line 923, in main
train_dataset = make_train_dataset(args, tokenizer, accelerator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/user/user/signwriting_illustration/controlnet_huggingface/diffusers/examples/controlnet/train_controlnet.py", line 639, in make_train_dataset
raise ValueError(
ValueError: `--image_column` value 'image' not found in dataset columns. Dataset columns are: _data_files, _fingerprint, _format_columns, _format_kwargs, _format_type, _output_all_columns, _split
```
### Environment info
accelerate 1.1.1
huggingface-hub 0.26.2
python 3.11
torch 2.5.1
transformers 4.46.2 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7298/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7298/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7297 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7297/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7297/comments | https://api.github.com/repos/huggingface/datasets/issues/7297/events | https://github.com/huggingface/datasets/issues/7297 | 2,683,977,430 | I_kwDODunzps6f-j7W | 7,297 | wrong return type for `IterableDataset.shard()` | {
"login": "ysngshn",
"id": 47225236,
"node_id": "MDQ6VXNlcjQ3MjI1MjM2",
"avatar_url": "https://avatars.githubusercontent.com/u/47225236?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ysngshn",
"html_url": "https://github.com/ysngshn",
"followers_url": "https://api.github.com/users/ysngshn/followers",
"following_url": "https://api.github.com/users/ysngshn/following{/other_user}",
"gists_url": "https://api.github.com/users/ysngshn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ysngshn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ysngshn/subscriptions",
"organizations_url": "https://api.github.com/users/ysngshn/orgs",
"repos_url": "https://api.github.com/users/ysngshn/repos",
"events_url": "https://api.github.com/users/ysngshn/events{/privacy}",
"received_events_url": "https://api.github.com/users/ysngshn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-11-22T17:25:46 | 2024-12-03T14:27:27 | 2024-12-03T14:27:03 | NONE | null | ### Describe the bug
`IterableDataset.shard()` has the wrong typing for its return as `"Dataset"`. It should be `"IterableDataset"`. Makes my IDE unhappy.
### Steps to reproduce the bug
look at [the source code](https://github.com/huggingface/datasets/blob/main/src/datasets/iterable_dataset.py#L2668)?
### Expected behavior
Correct return type as `"IterableDataset"`
### Environment info
datasets==3.1.0 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7297/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7297/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7296 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7296/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7296/comments | https://api.github.com/repos/huggingface/datasets/issues/7296/events | https://github.com/huggingface/datasets/pull/7296 | 2,675,573,974 | PR_kwDODunzps6ChJIJ | 7,296 | Remove upper version limit of fsspec[http] | {
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-11-20T11:29:16 | 2024-11-20T11:29:16 | null | NONE | null | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7296/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7296/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7296",
"html_url": "https://github.com/huggingface/datasets/pull/7296",
"diff_url": "https://github.com/huggingface/datasets/pull/7296.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7296.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7295 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7295/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7295/comments | https://api.github.com/repos/huggingface/datasets/issues/7295/events | https://github.com/huggingface/datasets/issues/7295 | 2,672,003,384 | I_kwDODunzps6fQ4k4 | 7,295 | [BUG]: Streaming from S3 triggers `unexpected keyword argument 'requote_redirect_url'` | {
"login": "casper-hansen",
"id": 27340033,
"node_id": "MDQ6VXNlcjI3MzQwMDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/27340033?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/casper-hansen",
"html_url": "https://github.com/casper-hansen",
"followers_url": "https://api.github.com/users/casper-hansen/followers",
"following_url": "https://api.github.com/users/casper-hansen/following{/other_user}",
"gists_url": "https://api.github.com/users/casper-hansen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/casper-hansen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/casper-hansen/subscriptions",
"organizations_url": "https://api.github.com/users/casper-hansen/orgs",
"repos_url": "https://api.github.com/users/casper-hansen/repos",
"events_url": "https://api.github.com/users/casper-hansen/events{/privacy}",
"received_events_url": "https://api.github.com/users/casper-hansen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-11-19T12:23:36 | 2024-11-19T13:01:53 | null | NONE | null | ### Describe the bug
Note that this bug is only triggered when `streaming=True`. #5459 introduced always calling fsspec with `client_kwargs={"requote_redirect_url": False}`, which seems to have incompatibility issues even in the newest versions.
Analysis of what's happening:
1. `datasets` passes the `client_kwargs` through `fsspec`
2. `fsspec` passes the `client_kwargs` through `s3fs`
3. `s3fs` passes the `client_kwargs` to `aiobotocore` which uses `aiohttp`
```
s3creator = self.session.create_client(
"s3", config=conf, **init_kwargs, **client_kwargs
)
```
4. The `session` tries to create an `aiohttp` session but the `**kwargs` are not just kept as unfolded `**kwargs` but passed in as individual variables (`requote_redirect_url` and `trust_env`).
Error:
```
Traceback (most recent call last):
File "/Users/cxrh/Documents/GitHub/nlp_foundation/nlp_train/test.py", line 14, in <module>
batch = next(iter(ds))
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1353, in __iter__
for key, example in ex_iterable:
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 255, in __iter__
for key, pa_table in self.generate_tables_fn(**self.kwargs):
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/packaged_modules/json/json.py", line 78, in _generate_tables
for file_idx, file in enumerate(itertools.chain.from_iterable(files)):
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py", line 840, in __iter__
yield from self.generator(*self.args, **self.kwargs)
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py", line 921, in _iter_from_urlpaths
elif xisdir(urlpath, download_config=download_config):
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py", line 305, in xisdir
return fs.isdir(inner_path)
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/fsspec/spec.py", line 721, in isdir
return self.info(path)["type"] == "directory"
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/fsspec/archive.py", line 38, in info
self._get_dirs()
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/filesystems/compression.py", line 64, in _get_dirs
f = {**self.file.fs.info(self.file.path), "name": self.uncompressed_name}
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/fsspec/asyn.py", line 118, in wrapper
return sync(self.loop, func, *args, **kwargs)
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/fsspec/asyn.py", line 103, in sync
raise return_result
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/fsspec/asyn.py", line 56, in _runner
result[0] = await coro
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/s3fs/core.py", line 1302, in _info
out = await self._call_s3(
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/s3fs/core.py", line 341, in _call_s3
await self.set_session()
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/s3fs/core.py", line 524, in set_session
s3creator = self.session.create_client(
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/aiobotocore/session.py", line 114, in create_client
return ClientCreatorContext(self._create_client(*args, **kwargs))
TypeError: AioSession._create_client() got an unexpected keyword argument 'requote_redirect_url'
```
### Steps to reproduce the bug
1. Install the necessary libraries, datasets having a requirement for being at least 2.19.0:
```
pip install s3fs fsspec aiohttp aiobotocore botocore 'datasets>=2.19.0'
```
2. Run this code:
```
from datasets import load_dataset
ds = load_dataset(
"json",
data_files="s3://your_path/*.jsonl.gz",
streaming=True,
split="train",
)
batch = next(iter(ds))
print(batch)
```
3. You get the `unexpected keyword argument 'requote_redirect_url'` error.
### Expected behavior
The datasets is able to load a batch from the dataset stored on S3, without triggering this `requote_redirect_url` error.
Fix: I could fix this by directly removing the `requote_redirect_url` and `trust_env` - then it loads properly.
<img width="1127" alt="image" src="https://github.com/user-attachments/assets/4c40efa9-8787-4919-b613-e4908c3d1ab2">
### Environment info
- `datasets` version: 3.1.0
- Platform: macOS-15.1-arm64-arm-64bit
- Python version: 3.10.15
- `huggingface_hub` version: 0.26.2
- PyArrow version: 18.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7295/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7295/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7294 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7294/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7294/comments | https://api.github.com/repos/huggingface/datasets/issues/7294/events | https://github.com/huggingface/datasets/pull/7294 | 2,668,663,130 | PR_kwDODunzps6CQKTy | 7,294 | Remove `aiohttp` from direct dependencies | {
"login": "akx",
"id": 58669,
"node_id": "MDQ6VXNlcjU4NjY5",
"avatar_url": "https://avatars.githubusercontent.com/u/58669?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akx",
"html_url": "https://github.com/akx",
"followers_url": "https://api.github.com/users/akx/followers",
"following_url": "https://api.github.com/users/akx/following{/other_user}",
"gists_url": "https://api.github.com/users/akx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akx/subscriptions",
"organizations_url": "https://api.github.com/users/akx/orgs",
"repos_url": "https://api.github.com/users/akx/repos",
"events_url": "https://api.github.com/users/akx/events{/privacy}",
"received_events_url": "https://api.github.com/users/akx/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-11-18T14:00:59 | 2024-11-18T14:00:59 | null | NONE | null | The dependency is only used for catching an exception from other code. That can be done with an import guard.
| null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7294/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7294/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7294",
"html_url": "https://github.com/huggingface/datasets/pull/7294",
"diff_url": "https://github.com/huggingface/datasets/pull/7294.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7294.patch",
"merged_at": null
} | true |
Dataset Card for GitHub Issues
Dataset Summary
GitHub Issues is a dataset consisting of GitHub issues and pull requests associated with the 🤗 Datasets repository. It is intended for educational purposes and can be used for semantic search or multilabel text classification. The contents of each GitHub issue are in English and concern the domain of datasets for NLP, computer vision, and beyond.
Supported Tasks and Leaderboards
For each of the tasks tagged for this dataset, give a brief description of the tag, metrics, and suggested models (with a link to their HuggingFace implementation if available). Give a similar description of tasks that were not covered by the structured tag set (repace the task-category-tag
with an appropriate other:other-task-name
).
task-category-tag
: The dataset can be used to train a model for [TASK NAME], which consists in [TASK DESCRIPTION]. Success on this task is typically measured by achieving a high/low metric name. The (model name or model class) model currently achieves the following score. [IF A LEADERBOARD IS AVAILABLE]: This task has an active leaderboard which can be found at leaderboard url and ranks models based on metric name while also reporting other metric name.
Languages
Provide a brief overview of the languages represented in the dataset. Describe relevant details about specifics of the language such as whether it is social media text, African American English,...
When relevant, please provide BCP-47 codes, which consist of a primary language subtag, with a script subtag and/or region subtag if available.
Dataset Structure
Data Instances
Provide an JSON-formatted example and brief description of a typical instance in the dataset. If available, provide a link to further examples.
{
'example_field': ...,
...
}
Provide any additional information that is not covered in the other sections about the data here. In particular describe any relationships between data points and if these relationships are made explicit.
Data Fields
List and describe the fields present in the dataset. Mention their data type, and whether they are used as input or output in any of the tasks the dataset currently supports. If the data has span indices, describe their attributes, such as whether they are at the character level or word level, whether they are contiguous or not, etc. If the datasets contains example IDs, state whether they have an inherent meaning, such as a mapping to other datasets or pointing to relationships between data points.
example_field
: description ofexample_field
Note that the descriptions can be initialized with the Show Markdown Data Fields output of the tagging app, you will then only need to refine the generated descriptions.
Data Splits
Describe and name the splits in the dataset if there are more than one.
Describe any criteria for splitting the data, if used. If their are differences between the splits (e.g. if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here.
Provide the sizes of each split. As appropriate, provide any descriptive statistics for the features, such as average length. For example:
Tain | Valid | Test | |
---|---|---|---|
Input Sentences | |||
Average Sentence Length |
Dataset Creation
Curation Rationale
What need motivated the creation of this dataset? What are some of the reasons underlying the major choices involved in putting it together?
Source Data
This section describes the source data (e.g. news text and headlines, social media posts, translated sentences,...)
Initial Data Collection and Normalization
Describe the data collection process. Describe any criteria for data selection or filtering. List any key words or search terms used. If possible, include runtime information for the collection process.
If data was collected from other pre-existing datasets, link to source here and to their Hugging Face version.
If the data was modified or normalized after being collected (e.g. if the data is word-tokenized), describe the process and the tools used.
Who are the source language producers?
State whether the data was produced by humans or machine generated. Describe the people or systems who originally created the data.
If available, include self-reported demographic or identity information for the source data creators, but avoid inferring this information. Instead state that this information is unknown. See Larson 2017 for using identity categories as a variables, particularly gender.
Describe the conditions under which the data was created (for example, if the producers were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.
Describe other people represented or mentioned in the data. Where possible, link to references for the information.
Annotations
If the dataset contains annotations which are not part of the initial data collection, describe them in the following paragraphs.
Annotation process
If applicable, describe the annotation process and any tools used, or state otherwise. Describe the amount of data annotated, if not all. Describe or reference annotation guidelines provided to the annotators. If available, provide interannotator statistics. Describe any annotation validation processes.
Who are the annotators?
If annotations were collected for the source data (such as class labels or syntactic parses), state whether the annotations were produced by humans or machine generated.
Describe the people or systems who originally created the annotations and their selection criteria if applicable.
If available, include self-reported demographic or identity information for the annotators, but avoid inferring this information. Instead state that this information is unknown. See Larson 2017 for using identity categories as a variables, particularly gender.
Describe the conditions under which the data was annotated (for example, if the annotators were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.
Personal and Sensitive Information
State whether the dataset uses identity categories and, if so, how the information is used. Describe where this information comes from (i.e. self-reporting, collecting from profiles, inferring, etc.). See Larson 2017 for using identity categories as a variables, particularly gender. State whether the data is linked to individuals and whether those individuals can be identified in the dataset, either directly or indirectly (i.e., in combination with other data).
State whether the dataset contains other data that might be considered sensitive (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history).
If efforts were made to anonymize the data, describe the anonymization process.
Considerations for Using the Data
Social Impact of Dataset
Please discuss some of the ways you believe the use of this dataset will impact society.
The statement should include both positive outlooks, such as outlining how technologies developed through its use may improve people's lives, and discuss the accompanying risks. These risks may range from making important decisions more opaque to people who are affected by the technology, to reinforcing existing harmful biases (whose specifics should be discussed in the next section), among other considerations.
Also describe in this section if the proposed dataset contains a low-resource or under-represented language. If this is the case or if this task has any impact on underserved communities, please elaborate here.
Discussion of Biases
Provide descriptions of specific biases that are likely to be reflected in the data, and state whether any steps were taken to reduce their impact.
For Wikipedia text, see for example Dinan et al 2020 on biases in Wikipedia (esp. Table 1), or Blodgett et al 2020 for a more general discussion of the topic.
If analyses have been run quantifying these biases, please add brief summaries and links to the studies here.
Other Known Limitations
If studies of the datasets have outlined other limitations of the dataset, such as annotation artifacts, please outline and cite them here.
Additional Information
Dataset Curators
List the people involved in collecting the dataset and their affiliation(s). If funding information is known, include it here.
Licensing Information
Provide the license and link to the license webpage if available.
Citation Information
Provide the BibTex-formatted reference for the dataset. For example:
@article{article_id,
author = {Author List},
title = {Dataset Paper Title},
journal = {Publication Venue},
year = {2525}
}
If the dataset has a DOI, please provide it here.
Contributions
Thanks to @lewtun for adding this dataset.
- Downloads last month
- 3