{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6030","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6030\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6030\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6030\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6030","id":1803864744,"node_id":"PR_kwDODunzps5Vd0ZG","number":6030,"title":"fixed typo in comment","user":{"login":"NightMachinery","id":36224762,"node_id":"MDQ6VXNlcjM2MjI0NzYy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36224762?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/NightMachinery","html_url":"https:\/\/github.com\/NightMachinery","followers_url":"https:\/\/api.github.com\/users\/NightMachinery\/followers","following_url":"https:\/\/api.github.com\/users\/NightMachinery\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/NightMachinery\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/NightMachinery\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/NightMachinery\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/NightMachinery\/orgs","repos_url":"https:\/\/api.github.com\/users\/NightMachinery\/repos","events_url":"https:\/\/api.github.com\/users\/NightMachinery\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/NightMachinery\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_6030). 
All of your documentation changes will be reflected on that endpoint."],"created_at":1689288597000,"updated_at":1689288925000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/6030","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6030","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6030.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6030.patch","merged_at":null},"body":"This mistake was a bit confusing, so I thought it was worth sending a PR over.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6030\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6030\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6029","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6029\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6029\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6029\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6029","id":1803460046,"node_id":"PR_kwDODunzps5VcbPW","number":6029,"title":"[docs] Fix link","user":{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007039 \/ 0.011353 (-0.004314) | 0.004175 \/ 0.011008 (-0.006833) | 0.085426 \/ 0.038508 (0.046918) | 0.079818 \/ 0.023109 (0.056709) | 0.321924 \/ 0.275898 (0.046026) | 0.345482 \/ 0.323480 (0.022002) | 0.005510 \/ 0.007986 (-0.002475) | 0.003452 \/ 0.004328 (-0.000877) | 0.065158 \/ 0.004250 (0.060907) | 0.058843 \/ 0.037052 (0.021791) | 0.316280 \/ 0.258489 (0.057791) | 0.351666 \/ 0.293841 (0.057825) | 0.031190 \/ 0.128546 (-0.097357) | 0.008500 \/ 0.075646 (-0.067147) | 0.289595 \/ 0.419271 (-0.129676) | 0.053798 \/ 0.043533 (0.010265) | 0.315804 \/ 0.255139 (0.060665) | 0.334957 \/ 0.283200 (0.051757) | 0.024350 \/ 0.141683 (-0.117332) | 1.515753 \/ 1.452155 (0.063599) | 1.556215 \/ 1.492716 (0.063499) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.210378 \/ 0.018006 (0.192372) | 0.469309 \/ 0.000490 (0.468820) | 0.002890 \/ 0.000200 (0.002690) | 0.000086 \/ 0.000054 (0.000031) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030214 \/ 0.037411 (-0.007197) | 0.088492 \/ 0.014526 (0.073966) | 0.098684 \/ 0.176557 (-0.077873) | 0.156077 \/ 0.737135 (-0.581058) | 0.098814 \/ 0.296338 (-0.197525) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.404548 \/ 0.215209 (0.189339) | 4.026173 \/ 2.077655 (1.948518) | 2.043216 \/ 1.504120 (0.539096) | 1.880997 \/ 1.541195 (0.339802) | 1.975205 \/ 1.468490 (0.506715) | 0.489395 \/ 4.584777 (-4.095382) | 
3.684097 \/ 3.745712 (-0.061615) | 5.126934 \/ 5.269862 (-0.142928) | 3.092153 \/ 4.565676 (-1.473524) | 0.057668 \/ 0.424275 (-0.366607) | 0.007372 \/ 0.007607 (-0.000235) | 0.479647 \/ 0.226044 (0.253603) | 4.780207 \/ 2.268929 (2.511278) | 2.533457 \/ 55.444624 (-52.911168) | 2.182126 \/ 6.876477 (-4.694351) | 2.431834 \/ 2.142072 (0.289761) | 0.591760 \/ 4.805227 (-4.213467) | 0.135450 \/ 6.500664 (-6.365214) | 0.063218 \/ 0.075469 (-0.012251) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.262053 \/ 1.841788 (-0.579734) | 20.246992 \/ 8.074308 (12.172684) | 14.638222 \/ 10.191392 (4.446830) | 0.150021 \/ 0.680424 (-0.530403) | 0.018680 \/ 0.534201 (-0.515521) | 0.395215 \/ 0.579283 (-0.184068) | 0.421270 \/ 0.434364 (-0.013094) | 0.458845 \/ 0.540337 (-0.081492) | 0.634488 \/ 1.386936 (-0.752448) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007080 \/ 0.011353 (-0.004273) | 0.004112 \/ 0.011008 (-0.006896) | 0.066426 \/ 0.038508 (0.027918) | 0.090088 \/ 0.023109 (0.066978) | 0.400191 \/ 0.275898 (0.124293) | 0.429614 \/ 0.323480 (0.106134) | 0.005428 \/ 0.007986 (-0.002558) | 0.003501 \/ 0.004328 (-0.000827) | 0.065056 \/ 0.004250 (0.060806) | 0.061643 \/ 0.037052 (0.024590) | 0.398619 \/ 0.258489 (0.140130) | 0.445497 \/ 0.293841 (0.151657) | 0.031703 \/ 0.128546 (-0.096843) | 0.008708 \/ 0.075646 (-0.066938) | 0.071561 \/ 0.419271 (-0.347711) | 0.050684 \/ 0.043533 (0.007151) | 0.385361 \/ 0.255139 (0.130222) | 0.409349 \/ 0.283200 (0.126149) | 0.027388 \/ 0.141683 (-0.114295) | 1.473021 \/ 1.452155 (0.020866) | 1.525246 \/ 1.492716 (0.032529) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.237710 \/ 0.018006 (0.219704) | 0.468719 \/ 0.000490 (0.468230) | 0.000385 \/ 0.000200 (0.000185) | 0.000054 \/ 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.032539 \/ 0.037411 (-0.004872) | 0.095324 \/ 0.014526 (0.080798) | 0.102248 \/ 0.176557 (-0.074308) | 0.156096 \/ 0.737135 (-0.581039) | 0.103458 \/ 0.296338 (-0.192881) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.416226 \/ 0.215209 (0.201017) | 4.141044 \/ 2.077655 (2.063389) | 2.143732 \/ 1.504120 (0.639612) | 2.001020 \/ 1.541195 (0.459825) | 2.091194 \/ 1.468490 (0.622704) | 0.489977 \/ 4.584777 (-4.094800) | 
3.579615 \/ 3.745712 (-0.166097) | 3.438082 \/ 5.269862 (-1.831780) | 2.069031 \/ 4.565676 (-2.496645) | 0.056994 \/ 0.424275 (-0.367281) | 0.007362 \/ 0.007607 (-0.000245) | 0.493077 \/ 0.226044 (0.267033) | 4.922622 \/ 2.268929 (2.653694) | 2.627083 \/ 55.444624 (-52.817541) | 2.301141 \/ 6.876477 (-4.575336) | 2.356794 \/ 2.142072 (0.214722) | 0.583792 \/ 4.805227 (-4.221436) | 0.133707 \/ 6.500664 (-6.366958) | 0.062892 \/ 0.075469 (-0.012577) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.364908 \/ 1.841788 (-0.476880) | 20.641219 \/ 8.074308 (12.566911) | 14.848528 \/ 10.191392 (4.657136) | 0.174207 \/ 0.680424 (-0.506217) | 0.018206 \/ 0.534201 (-0.515995) | 0.413742 \/ 0.579283 (-0.165541) | 0.419940 \/ 0.434364 (-0.014424) | 0.458543 \/ 0.540337 (-0.081794) | 0.616518 \/ 1.386936 (-0.770418) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#18b2202c3e7cdde05920078f01864964556427da \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006875 \/ 0.011353 (-0.004478) | 0.003489 \/ 0.011008 (-0.007519) | 0.082077 \/ 0.038508 (0.043569) | 0.103011 \/ 0.023109 (0.079902) | 0.370572 \/ 0.275898 (0.094674) | 0.416400 \/ 0.323480 (0.092920) | 0.004048 \/ 0.007986 (-0.003938) | 0.003563 \/ 0.004328 (-0.000765) | 0.062666 \/ 0.004250 (0.058416) | 0.063664 \/ 0.037052 (0.026612) | 0.374206 \/ 0.258489 (0.115717) | 0.425590 \/ 0.293841 (0.131749) | 0.028174 \/ 0.128546 (-0.100373) | 0.007906 \/ 0.075646 (-0.067741) | 0.266251 \/ 0.419271 (-0.153020) | 0.045923 \/ 0.043533 (0.002390) | 0.376746 \/ 0.255139 (0.121607) | 0.401950 \/ 0.283200 (0.118750) | 0.024628 \/ 0.141683 (-0.117054) | 1.441903 \/ 1.452155 (-0.010252) | 1.537494 \/ 1.492716 (0.044777) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.214696 \/ 0.018006 (0.196690) | 0.425626 \/ 0.000490 (0.425137) | 0.003370 \/ 0.000200 (0.003170) | 0.000071 \/ 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.023133 \/ 0.037411 (-0.014279) | 0.072374 \/ 0.014526 (0.057848) | 0.081255 \/ 0.176557 (-0.095301) | 0.146960 \/ 0.737135 (-0.590175) | 0.081748 \/ 0.296338 (-0.214590) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.390683 \/ 0.215209 (0.175473) | 3.893166 \/ 2.077655 (1.815511) | 1.884321 \/ 1.504120 (0.380201) | 1.701899 \/ 1.541195 (0.160704) | 1.737839 \/ 1.468490 (0.269349) | 0.497008 \/ 4.584777 (-4.087769) | 
3.041211 \/ 3.745712 (-0.704501) | 3.519947 \/ 5.269862 (-1.749915) | 2.015085 \/ 4.565676 (-2.550592) | 0.057685 \/ 0.424275 (-0.366590) | 0.006415 \/ 0.007607 (-0.001192) | 0.465565 \/ 0.226044 (0.239520) | 4.635224 \/ 2.268929 (2.366295) | 2.297941 \/ 55.444624 (-53.146683) | 1.946670 \/ 6.876477 (-4.929807) | 2.078527 \/ 2.142072 (-0.063546) | 0.584101 \/ 4.805227 (-4.221126) | 0.126488 \/ 6.500664 (-6.374176) | 0.060819 \/ 0.075469 (-0.014650) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.223400 \/ 1.841788 (-0.618388) | 17.960923 \/ 8.074308 (9.886615) | 13.187683 \/ 10.191392 (2.996291) | 0.129258 \/ 0.680424 (-0.551166) | 0.016601 \/ 0.534201 (-0.517600) | 0.330028 \/ 0.579283 (-0.249255) | 0.353861 \/ 0.434364 (-0.080503) | 0.376022 \/ 0.540337 (-0.164315) | 0.518145 \/ 1.386936 (-0.868791) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006015 \/ 0.011353 (-0.005338) | 0.003605 \/ 0.011008 (-0.007403) | 0.062169 \/ 0.038508 (0.023661) | 0.056094 \/ 0.023109 (0.032985) | 0.353085 \/ 0.275898 (0.077187) | 0.393744 \/ 0.323480 (0.070265) | 0.004672 \/ 0.007986 (-0.003313) | 0.002859 \/ 0.004328 (-0.001469) | 0.062992 \/ 0.004250 (0.058742) | 0.049767 \/ 0.037052 (0.012714) | 0.356850 \/ 0.258489 (0.098361) | 0.403731 \/ 0.293841 (0.109890) | 0.026664 \/ 0.128546 (-0.101882) | 0.008026 \/ 0.075646 (-0.067621) | 0.067944 \/ 0.419271 (-0.351327) | 0.042133 \/ 0.043533 (-0.001400) | 0.353865 \/ 0.255139 (0.098726) | 0.383461 \/ 0.283200 (0.100261) | 0.021250 \/ 0.141683 (-0.120433) | 1.428102 \/ 1.452155 (-0.024053) | 1.481061 \/ 1.492716 (-0.011655) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.223552 \/ 0.018006 (0.205546) | 0.402390 \/ 0.000490 (0.401900) | 0.000721 \/ 0.000200 (0.000521) | 0.000059 \/ 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025065 \/ 0.037411 (-0.012347) | 0.075537 \/ 0.014526 (0.061011) | 0.083519 \/ 0.176557 (-0.093037) | 0.137068 \/ 0.737135 (-0.600068) | 0.084165 \/ 0.296338 (-0.212173) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.420176 \/ 0.215209 (0.204967) | 4.206226 \/ 2.077655 (2.128571) | 2.168089 \/ 1.504120 (0.663969) | 1.987299 \/ 1.541195 (0.446104) | 2.029489 \/ 1.468490 (0.560999) | 0.495822 \/ 4.584777 (-4.088955) 
| 3.106580 \/ 3.745712 (-0.639132) | 3.833215 \/ 5.269862 (-1.436647) | 2.450450 \/ 4.565676 (-2.115226) | 0.056979 \/ 0.424275 (-0.367296) | 0.006514 \/ 0.007607 (-0.001093) | 0.503646 \/ 0.226044 (0.277601) | 5.035035 \/ 2.268929 (2.766106) | 2.608245 \/ 55.444624 (-52.836379) | 2.245492 \/ 6.876477 (-4.630985) | 2.262868 \/ 2.142072 (0.120795) | 0.590736 \/ 4.805227 (-4.214491) | 0.124637 \/ 6.500664 (-6.376027) | 0.061442 \/ 0.075469 (-0.014027) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.316736 \/ 1.841788 (-0.525052) | 17.948635 \/ 8.074308 (9.874327) | 13.752442 \/ 10.191392 (3.561050) | 0.144107 \/ 0.680424 (-0.536317) | 0.017112 \/ 0.534201 (-0.517089) | 0.336537 \/ 0.579283 (-0.242746) | 0.347832 \/ 0.434364 (-0.086532) | 0.392944 \/ 0.540337 (-0.147393) | 0.534455 \/ 1.386936 (-0.852481) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#406b2212263c0d33f267e35b917f410ff6b3bc00 \"CML watermark\")\n"],"created_at":1689269052000,"updated_at":1689270461000,"closed_at":1689269939000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/6029","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6029","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6029.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6029.patch","merged_at":1689269939000},"body":"Fixes link to the builder classes :)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6029\/reactions","total_count":1,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":1,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6029\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6028","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6028\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6028\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6028\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6028","id":1803294981,"node_id":"PR_kwDODunzps5Vb3LJ","number":6028,"title":"Use new 
hffs","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_6028). All of your documentation changes will be reflected on that endpoint.","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006665 \/ 0.011353 (-0.004688) | 0.004376 \/ 0.011008 (-0.006633) | 0.085529 \/ 0.038508 (0.047021) | 0.076372 \/ 0.023109 (0.053263) | 0.310019 \/ 0.275898 (0.034121) | 0.341404 \/ 0.323480 (0.017924) | 0.005666 \/ 0.007986 (-0.002320) | 0.003763 \/ 0.004328 (-0.000566) | 0.064678 \/ 0.004250 (0.060427) | 0.059283 \/ 0.037052 (0.022231) | 0.316194 \/ 0.258489 (0.057704) | 0.349397 \/ 0.293841 (0.055557) | 0.031199 \/ 0.128546 (-0.097347) | 0.008724 \/ 0.075646 (-0.066923) | 0.300236 \/ 0.419271 (-0.119035) | 0.068872 \/ 0.043533 (0.025339) | 0.308521 \/ 0.255139 (0.053382) | 0.331292 \/ 0.283200 (0.048092) | 0.028236 \/ 0.141683 (-0.113447) | 1.501365 \/ 1.452155 (0.049211) | 1.554334 \/ 1.492716 (0.061618) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.238291 \/ 0.018006 (0.220285) | 0.565069 \/ 0.000490 (0.564580) | 0.001626 \/ 0.000200 (0.001426) | 0.000070 \/ 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029777 \/ 0.037411 (-0.007634) | 0.082873 \/ 0.014526 (0.068347) | 0.099619 \/ 0.176557 (-0.076937) | 0.156572 \/ 0.737135 (-0.580563) | 0.099887 \/ 0.296338 (-0.196452) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.401017 \/ 0.215209 (0.185808) | 3.827192 \/ 2.077655 (1.749537) | 1.861554 \/ 1.504120 (0.357434) | 1.699869 \/ 1.541195 (0.158674) | 1.720043 \/ 1.468490 (0.251553) | 0.486757 \/ 4.584777 (-4.098020) | 
3.638125 \/ 3.745712 (-0.107587) | 5.844959 \/ 5.269862 (0.575097) | 3.454901 \/ 4.565676 (-1.110775) | 0.057650 \/ 0.424275 (-0.366625) | 0.007341 \/ 0.007607 (-0.000266) | 0.462698 \/ 0.226044 (0.236654) | 4.633472 \/ 2.268929 (2.364544) | 2.287607 \/ 55.444624 (-53.157017) | 2.057318 \/ 6.876477 (-4.819159) | 2.203657 \/ 2.142072 (0.061584) | 0.598136 \/ 4.805227 (-4.207091) | 0.134012 \/ 6.500664 (-6.366653) | 0.060824 \/ 0.075469 (-0.014645) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.277752 \/ 1.841788 (-0.564036) | 20.013398 \/ 8.074308 (11.939089) | 14.372993 \/ 10.191392 (4.181601) | 0.169991 \/ 0.680424 (-0.510433) | 0.018344 \/ 0.534201 (-0.515857) | 0.396985 \/ 0.579283 (-0.182299) | 0.416289 \/ 0.434364 (-0.018075) | 0.458658 \/ 0.540337 (-0.081680) | 0.692980 \/ 1.386936 (-0.693956) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006689 \/ 0.011353 (-0.004664) | 0.004393 \/ 0.011008 (-0.006615) | 0.064069 \/ 0.038508 (0.025561) | 0.080717 \/ 0.023109 (0.057607) | 0.370090 \/ 0.275898 (0.094191) | 0.400432 \/ 0.323480 (0.076952) | 0.005613 \/ 0.007986 (-0.002372) | 0.003641 \/ 0.004328 (-0.000687) | 0.064771 \/ 0.004250 (0.060520) | 0.057555 \/ 0.037052 (0.020502) | 0.392156 \/ 0.258489 (0.133667) | 0.409842 \/ 0.293841 (0.116001) | 0.031500 \/ 0.128546 (-0.097047) | 0.008786 \/ 0.075646 (-0.066860) | 0.070342 \/ 0.419271 (-0.348929) | 0.048646 \/ 0.043533 (0.005113) | 0.360914 \/ 0.255139 (0.105775) | 0.387626 \/ 0.283200 (0.104426) | 0.022787 \/ 0.141683 (-0.118896) | 1.508915 \/ 1.452155 (0.056761) | 1.539719 \/ 1.492716 (0.047002) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.257985 \/ 0.018006 (0.239979) | 0.550990 \/ 0.000490 (0.550501) | 0.000407 \/ 0.000200 (0.000207) | 0.000057 \/ 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030183 \/ 0.037411 (-0.007228) | 0.086882 \/ 0.014526 (0.072356) | 0.102382 \/ 0.176557 (-0.074175) | 0.154745 \/ 0.737135 (-0.582390) | 0.104008 \/ 0.296338 (-0.192331) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.426284 \/ 0.215209 (0.211075) | 4.240812 \/ 2.077655 (2.163158) | 2.261240 \/ 1.504120 (0.757120) | 2.085905 \/ 1.541195 (0.544710) | 2.160374 \/ 1.468490 (0.691883) | 0.481126 \/ 4.584777 (-4.103651) | 
3.516234 \/ 3.745712 (-0.229478) | 3.325322 \/ 5.269862 (-1.944539) | 2.043307 \/ 4.565676 (-2.522369) | 0.056663 \/ 0.424275 (-0.367612) | 0.007786 \/ 0.007607 (0.000179) | 0.497614 \/ 0.226044 (0.271570) | 4.974529 \/ 2.268929 (2.705600) | 2.700018 \/ 55.444624 (-52.744606) | 2.393778 \/ 6.876477 (-4.482699) | 2.628202 \/ 2.142072 (0.486130) | 0.594316 \/ 4.805227 (-4.210911) | 0.147092 \/ 6.500664 (-6.353572) | 0.062207 \/ 0.075469 (-0.013262) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.315676 \/ 1.841788 (-0.526112) | 20.749251 \/ 8.074308 (12.674943) | 14.371553 \/ 10.191392 (4.180160) | 0.170249 \/ 0.680424 (-0.510175) | 0.018478 \/ 0.534201 (-0.515722) | 0.395710 \/ 0.579283 (-0.183573) | 0.409706 \/ 0.434364 (-0.024658) | 0.463454 \/ 0.540337 (-0.076884) | 0.615657 \/ 1.386936 (-0.771279) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#c5a752d8e8ca0a6ed118b024ba03c1b4a2881177 \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007224 \/ 0.011353 (-0.004129) | 0.004506 \/ 0.011008 (-0.006503) | 0.096729 \/ 0.038508 (0.058221) | 0.082394 \/ 0.023109 (0.059284) | 0.390954 \/ 0.275898 (0.115056) | 0.416647 \/ 0.323480 (0.093167) | 0.005894 \/ 0.007986 (-0.002092) | 0.003756 \/ 0.004328 (-0.000572) | 0.075800 \/ 0.004250 (0.071549) | 0.062683 \/ 0.037052 (0.025631) | 0.398959 \/ 0.258489 (0.140470) | 0.436624 \/ 0.293841 (0.142783) | 0.034650 \/ 0.128546 (-0.093896) | 0.009655 \/ 0.075646 (-0.065991) | 0.315761 \/ 0.419271 (-0.103511) | 0.060957 \/ 0.043533 (0.017424) | 0.385649 \/ 0.255139 (0.130510) | 0.394022 \/ 0.283200 (0.110822) | 0.024601 \/ 0.141683 (-0.117082) | 1.729586 \/ 1.452155 (0.277431) | 1.724153 \/ 1.492716 (0.231437) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.207070 \/ 0.018006 (0.189063) | 0.466502 \/ 0.000490 (0.466012) | 0.010739 \/ 0.000200 (0.010540) | 0.000214 \/ 0.000054 (0.000160) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.031633 \/ 0.037411 (-0.005779) | 0.095345 \/ 0.014526 (0.080819) | 0.105399 \/ 0.176557 (-0.071157) | 0.174173 \/ 0.737135 (-0.562962) | 0.104207 \/ 0.296338 (-0.192132) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.435312 \/ 0.215209 (0.220103) | 4.265600 \/ 2.077655 (2.187946) | 2.056500 \/ 1.504120 (0.552380) | 1.848023 \/ 1.541195 (0.306828) | 1.946156 \/ 1.468490 (0.477666) | 0.557788 \/ 4.584777 (-4.026989) | 
4.070289 \/ 3.745712 (0.324577) | 3.608027 \/ 5.269862 (-1.661835) | 2.214556 \/ 4.565676 (-2.351121) | 0.062623 \/ 0.424275 (-0.361652) | 0.008083 \/ 0.007607 (0.000476) | 0.491782 \/ 0.226044 (0.265738) | 4.989963 \/ 2.268929 (2.721035) | 2.575867 \/ 55.444624 (-52.868757) | 2.208045 \/ 6.876477 (-4.668431) | 2.364184 \/ 2.142072 (0.222112) | 0.633925 \/ 4.805227 (-4.171302) | 0.144323 \/ 6.500664 (-6.356341) | 0.067505 \/ 0.075469 (-0.007965) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.467219 \/ 1.841788 (-0.374569) | 22.334967 \/ 8.074308 (14.260659) | 15.715747 \/ 10.191392 (5.524355) | 0.175443 \/ 0.680424 (-0.504980) | 0.026165 \/ 0.534201 (-0.508036) | 0.490675 \/ 0.579283 (-0.088608) | 0.509211 \/ 0.434364 (0.074847) | 0.586303 \/ 0.540337 (0.045965) | 0.785052 \/ 1.386936 (-0.601884) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007893 \/ 0.011353 (-0.003460) | 0.004577 \/ 0.011008 (-0.006431) | 0.075781 \/ 0.038508 (0.037273) | 0.095492 \/ 0.023109 (0.072382) | 0.433259 \/ 0.275898 (0.157361) | 0.469386 \/ 0.323480 (0.145906) | 0.006317 \/ 0.007986 (-0.001669) | 0.003708 \/ 0.004328 (-0.000621) | 0.074417 \/ 0.004250 (0.070167) | 0.068605 \/ 0.037052 (0.031552) | 0.448701 \/ 0.258489 (0.190212) | 0.469131 \/ 0.293841 (0.175290) | 0.036647 \/ 0.128546 (-0.091899) | 0.010077 \/ 0.075646 (-0.065570) | 0.082457 \/ 0.419271 (-0.336815) | 0.063255 \/ 0.043533 (0.019722) | 0.428144 \/ 0.255139 (0.173005) | 0.451872 \/ 0.283200 (0.168672) | 0.033953 \/ 0.141683 (-0.107730) | 1.781752 \/ 1.452155 (0.329597) | 1.869014 \/ 1.492716 (0.376297) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.223596 \/ 0.018006 (0.205590) | 0.470307 \/ 0.000490 (0.469818) | 0.005059 \/ 0.000200 (0.004859) | 0.000104 \/ 0.000054 (0.000049) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.038804 \/ 0.037411 (0.001393) | 0.117879 \/ 0.014526 (0.103353) | 0.140701 \/ 0.176557 (-0.035855) | 0.194672 \/ 0.737135 (-0.542463) | 0.132806 \/ 0.296338 (-0.163533) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.510109 \/ 0.215209 (0.294900) | 4.729457 \/ 2.077655 (2.651803) | 2.512113 \/ 1.504120 (1.007993) | 2.302553 \/ 1.541195 (0.761358) | 2.420462 \/ 1.468490 (0.951972) | 0.531682 \/ 4.584777 (-4.053095) | 
4.061208 \/ 3.745712 (0.315496) | 3.588542 \/ 5.269862 (-1.681320) | 2.203187 \/ 4.565676 (-2.362489) | 0.065791 \/ 0.424275 (-0.358484) | 0.008839 \/ 0.007607 (0.001232) | 0.562041 \/ 0.226044 (0.335997) | 5.702340 \/ 2.268929 (3.433412) | 3.127609 \/ 55.444624 (-52.317015) | 2.823060 \/ 6.876477 (-4.053417) | 2.898675 \/ 2.142072 (0.756603) | 0.659589 \/ 4.805227 (-4.145638) | 0.148798 \/ 6.500664 (-6.351866) | 0.070787 \/ 0.075469 (-0.004682) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.478317 \/ 1.841788 (-0.363471) | 21.995400 \/ 8.074308 (13.921092) | 16.770729 \/ 10.191392 (6.579337) | 0.226333 \/ 0.680424 (-0.454091) | 0.021835 \/ 0.534201 (-0.512366) | 0.460373 \/ 0.579283 (-0.118910) | 0.479494 \/ 0.434364 (0.045130) | 0.529470 \/ 0.540337 (-0.010868) | 0.718066 \/ 1.386936 (-0.668870) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#9a717b8eb80b0e50b25818127f79a35e0866fb14 \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007824 \/ 0.011353 (-0.003529) | 0.004601 \/ 0.011008 (-0.006407) | 0.100025 \/ 0.038508 (0.061517) | 0.096046 \/ 0.023109 (0.072936) | 0.376226 \/ 0.275898 (0.100328) | 0.410905 \/ 0.323480 (0.087425) | 0.006048 \/ 0.007986 (-0.001938) | 0.003817 \/ 0.004328 (-0.000511) | 0.076624 \/ 0.004250 (0.072374) | 0.066390 \/ 0.037052 (0.029338) | 0.380098 \/ 0.258489 (0.121609) | 0.413603 \/ 0.293841 (0.119762) | 0.036546 \/ 0.128546 (-0.092001) | 0.009881 \/ 0.075646 (-0.065765) | 0.344338 \/ 0.419271 (-0.074934) | 0.061882 \/ 0.043533 (0.018350) | 0.368568 \/ 0.255139 (0.113429) | 0.397133 \/ 0.283200 (0.113934) | 0.027255 \/ 0.141683 (-0.114428) | 1.795099 \/ 1.452155 (0.342945) | 1.852443 \/ 1.492716 (0.359727) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.247436 \/ 0.018006 (0.229430) | 0.494119 \/ 0.000490 (0.493629) | 0.004359 \/ 0.000200 (0.004159) | 0.000089 \/ 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.034765 \/ 0.037411 (-0.002647) | 0.104541 \/ 0.014526 (0.090015) | 0.113898 \/ 0.176557 (-0.062659) | 0.183634 \/ 0.737135 (-0.553501) | 0.116423 \/ 0.296338 (-0.179916) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.458747 \/ 0.215209 (0.243538) | 4.555740 \/ 2.077655 (2.478085) | 2.217240 \/ 1.504120 (0.713121) | 2.039879 \/ 1.541195 (0.498684) | 2.088581 \/ 1.468490 (0.620091) | 0.588063 \/ 4.584777 (-3.996714) | 
4.238226 \/ 3.745712 (0.492514) | 4.768060 \/ 5.269862 (-0.501802) | 2.857117 \/ 4.565676 (-1.708560) | 0.068742 \/ 0.424275 (-0.355533) | 0.008667 \/ 0.007607 (0.001059) | 0.549294 \/ 0.226044 (0.323249) | 5.464635 \/ 2.268929 (3.195706) | 2.744435 \/ 55.444624 (-52.700189) | 2.347660 \/ 6.876477 (-4.528816) | 2.616816 \/ 2.142072 (0.474743) | 0.703701 \/ 4.805227 (-4.101526) | 0.159749 \/ 6.500664 (-6.340915) | 0.071990 \/ 0.075469 (-0.003479) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.486599 \/ 1.841788 (-0.355188) | 22.745438 \/ 8.074308 (14.671130) | 16.822332 \/ 10.191392 (6.630940) | 0.184730 \/ 0.680424 (-0.495694) | 0.021267 \/ 0.534201 (-0.512934) | 0.467108 \/ 0.579283 (-0.112176) | 0.472674 \/ 0.434364 (0.038311) | 0.548094 \/ 0.540337 (0.007756) | 0.735885 \/ 1.386936 (-0.651051) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007746 \/ 0.011353 (-0.003607) | 0.004585 \/ 0.011008 (-0.006423) | 0.076943 \/ 0.038508 (0.038435) | 0.087473 \/ 0.023109 (0.064363) | 0.480099 \/ 0.275898 (0.204201) | 0.495271 \/ 0.323480 (0.171791) | 0.006348 \/ 0.007986 (-0.001638) | 0.003902 \/ 0.004328 (-0.000426) | 0.077586 \/ 0.004250 (0.073335) | 0.066467 \/ 0.037052 (0.029415) | 0.468741 \/ 0.258489 (0.210252) | 0.506778 \/ 0.293841 (0.212937) | 0.036877 \/ 0.128546 (-0.091669) | 0.010102 \/ 0.075646 (-0.065545) | 0.084419 \/ 0.419271 (-0.334852) | 0.058721 \/ 0.043533 (0.015188) | 0.453633 \/ 0.255139 (0.198494) | 0.481171 \/ 0.283200 (0.197971) | 0.028716 \/ 0.141683 (-0.112967) | 1.853048 \/ 1.452155 (0.400893) | 1.885847 \/ 1.492716 (0.393130) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.192136 \/ 0.018006 (0.174130) | 0.484481 \/ 0.000490 (0.483991) | 0.002951 \/ 0.000200 (0.002751) | 0.000098 \/ 0.000054 (0.000044) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.037949 \/ 0.037411 (0.000538) | 0.108364 \/ 0.014526 (0.093838) | 0.119542 \/ 0.176557 (-0.057014) | 0.188542 \/ 0.737135 (-0.548593) | 0.122011 \/ 0.296338 (-0.174327) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.483135 \/ 0.215209 (0.267926) | 4.849715 \/ 2.077655 (2.772060) | 2.497736 \/ 1.504120 (0.993616) | 2.314243 \/ 1.541195 (0.773048) | 2.412739 \/ 1.468490 (0.944249) | 0.564137 \/ 4.584777 (-4.020639) | 
4.242273 \/ 3.745712 (0.496561) | 6.337843 \/ 5.269862 (1.067982) | 3.923250 \/ 4.565676 (-0.642426) | 0.066464 \/ 0.424275 (-0.357811) | 0.009217 \/ 0.007607 (0.001610) | 0.575667 \/ 0.226044 (0.349623) | 5.746187 \/ 2.268929 (3.477258) | 3.069655 \/ 55.444624 (-52.374969) | 2.674798 \/ 6.876477 (-4.201679) | 2.956535 \/ 2.142072 (0.814463) | 0.701043 \/ 4.805227 (-4.104185) | 0.157241 \/ 6.500664 (-6.343423) | 0.073175 \/ 0.075469 (-0.002294) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.609943 \/ 1.841788 (-0.231844) | 23.478594 \/ 8.074308 (15.404286) | 17.454437 \/ 10.191392 (7.263045) | 0.186422 \/ 0.680424 (-0.494002) | 0.021703 \/ 0.534201 (-0.512498) | 0.471704 \/ 0.579283 (-0.107579) | 0.480553 \/ 0.434364 (0.046189) | 0.552881 \/ 0.540337 (0.012544) | 0.722515 \/ 1.386936 (-0.664421) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#84645f80049cd00d9e0d4908faf3c3203fdcf21d \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007542 \/ 0.011353 (-0.003811) | 0.004692 \/ 0.011008 (-0.006316) | 0.099155 \/ 0.038508 (0.060647) | 0.089365 \/ 0.023109 (0.066256) | 0.370870 \/ 0.275898 (0.094972) | 0.422152 \/ 0.323480 (0.098673) | 0.006223 \/ 0.007986 (-0.001763) | 0.003852 \/ 0.004328 (-0.000476) | 0.075438 \/ 0.004250 (0.071188) | 0.065973 \/ 0.037052 (0.028921) | 0.381513 \/ 0.258489 (0.123024) | 0.416196 \/ 0.293841 (0.122355) | 0.035483 \/ 0.128546 (-0.093063) | 0.009884 \/ 0.075646 (-0.065762) | 0.341290 \/ 0.419271 (-0.077982) | 0.060546 \/ 0.043533 (0.017014) | 0.365101 \/ 0.255139 (0.109962) | 0.391058 \/ 0.283200 (0.107859) | 0.026325 \/ 0.141683 (-0.115358) | 1.815168 \/ 1.452155 (0.363013) | 1.834711 \/ 1.492716 (0.341994) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.222177 \/ 0.018006 (0.204171) | 0.501151 \/ 0.000490 (0.500662) | 0.010202 \/ 0.000200 (0.010002) | 0.000102 \/ 0.000054 (0.000048) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.034043 \/ 0.037411 (-0.003368) | 0.097884 \/ 0.014526 (0.083358) | 0.114022 \/ 0.176557 (-0.062534) | 0.186200 \/ 0.737135 (-0.550935) | 0.115555 \/ 0.296338 (-0.180783) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.485857 \/ 0.215209 (0.270648) | 4.959263 \/ 2.077655 (2.881608) | 2.501085 \/ 1.504120 (0.996965) | 2.234660 \/ 1.541195 (0.693465) | 2.238585 \/ 1.468490 (0.770095) | 0.645431 \/ 4.584777 (-3.939345) | 
4.434311 \/ 3.745712 (0.688599) | 4.771491 \/ 5.269862 (-0.498371) | 2.778963 \/ 4.565676 (-1.786714) | 0.075615 \/ 0.424275 (-0.348660) | 0.009502 \/ 0.007607 (0.001895) | 0.546539 \/ 0.226044 (0.320495) | 5.464242 \/ 2.268929 (3.195314) | 2.894101 \/ 55.444624 (-52.550524) | 2.513761 \/ 6.876477 (-4.362715) | 2.719843 \/ 2.142072 (0.577770) | 0.678828 \/ 4.805227 (-4.126399) | 0.157839 \/ 6.500664 (-6.342825) | 0.071305 \/ 0.075469 (-0.004164) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.496879 \/ 1.841788 (-0.344909) | 22.214452 \/ 8.074308 (14.140144) | 17.707541 \/ 10.191392 (7.516149) | 0.197008 \/ 0.680424 (-0.483416) | 0.024883 \/ 0.534201 (-0.509318) | 0.493611 \/ 0.579283 (-0.085672) | 0.500677 \/ 0.434364 (0.066313) | 0.569381 \/ 0.540337 (0.029044) | 0.773950 \/ 1.386936 (-0.612986) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007337 \/ 0.011353 (-0.004015) | 0.004572 \/ 0.011008 (-0.006436) | 0.091123 \/ 0.038508 (0.052615) | 0.079762 \/ 0.023109 (0.056652) | 0.450527 \/ 0.275898 (0.174629) | 0.525097 \/ 0.323480 (0.201617) | 0.005873 \/ 0.007986 (-0.002112) | 0.003797 \/ 0.004328 (-0.000532) | 0.076259 \/ 0.004250 (0.072009) | 0.062745 \/ 0.037052 (0.025692) | 0.465553 \/ 0.258489 (0.207064) | 0.546026 \/ 0.293841 (0.252186) | 0.035638 \/ 0.128546 (-0.092909) | 0.010086 \/ 0.075646 (-0.065560) | 0.109269 \/ 0.419271 (-0.310002) | 0.056765 \/ 0.043533 (0.013233) | 0.440887 \/ 0.255139 (0.185748) | 0.513325 \/ 0.283200 (0.230125) | 0.027206 \/ 0.141683 (-0.114476) | 1.863564 \/ 1.452155 (0.411409) | 1.918206 \/ 1.492716 (0.425490) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.266479 \/ 0.018006 (0.248473) | 0.487971 \/ 0.000490 (0.487481) | 0.012246 \/ 0.000200 (0.012046) | 0.000119 \/ 0.000054 (0.000065) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.035281 \/ 0.037411 (-0.002130) | 0.102991 \/ 0.014526 (0.088465) | 0.114638 \/ 0.176557 (-0.061919) | 0.184117 \/ 0.737135 (-0.553018) | 0.117943 \/ 0.296338 (-0.178396) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.497897 \/ 0.215209 (0.282688) | 4.973806 \/ 2.077655 (2.896151) | 2.596146 \/ 1.504120 (1.092026) | 2.419694 \/ 1.541195 (0.878499) | 2.525784 \/ 1.468490 (1.057294) | 0.568021 \/ 4.584777 (-4.016756) | 
4.296431 \/ 3.745712 (0.550719) | 3.690682 \/ 5.269862 (-1.579179) | 2.345965 \/ 4.565676 (-2.219712) | 0.066859 \/ 0.424275 (-0.357416) | 0.009093 \/ 0.007607 (0.001486) | 0.582616 \/ 0.226044 (0.356571) | 5.826528 \/ 2.268929 (3.557600) | 3.253222 \/ 55.444624 (-52.191403) | 2.798447 \/ 6.876477 (-4.078030) | 3.054609 \/ 2.142072 (0.912537) | 0.678816 \/ 4.805227 (-4.126411) | 0.157966 \/ 6.500664 (-6.342698) | 0.073797 \/ 0.075469 (-0.001672) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.599480 \/ 1.841788 (-0.242308) | 23.249738 \/ 8.074308 (15.175430) | 16.965406 \/ 10.191392 (6.774014) | 0.171390 \/ 0.680424 (-0.509034) | 0.021810 \/ 0.534201 (-0.512391) | 0.483339 \/ 0.579283 (-0.095944) | 0.496615 \/ 0.434364 (0.062251) | 0.583786 \/ 0.540337 (0.043448) | 0.741699 \/ 1.386936 (-0.645237) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#7935cd2e564f5d1c66ed1acf731703724ba7a287 \"CML watermark\")\n","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006054 \/ 0.011353 (-0.005299) | 0.003706 \/ 0.011008 (-0.007302) | 0.080060 \/ 0.038508 (0.041552) | 0.061479 \/ 0.023109 (0.038370) | 0.327981 \/ 0.275898 (0.052083) | 0.356930 \/ 0.323480 (0.033450) | 0.004671 \/ 0.007986 (-0.003315) | 0.002901 \/ 0.004328 (-0.001428) | 0.062425 \/ 0.004250 (0.058174) | 0.046310 \/ 0.037052 (0.009258) | 0.323657 \/ 0.258489 (0.065168) | 0.370130 \/ 0.293841 (0.076289) | 0.027151 \/ 0.128546 (-0.101395) | 0.007850 \/ 0.075646 (-0.067797) | 0.262300 \/ 0.419271 (-0.156971) | 0.045456 \/ 0.043533 (0.001923) | 0.325569 \/ 0.255139 (0.070430) | 0.352962 \/ 0.283200 (0.069762) | 0.020156 \/ 0.141683 (-0.121527) | 1.429404 \/ 1.452155 (-0.022750) | 1.615032 \/ 1.492716 (0.122316) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.187309 \/ 0.018006 (0.169303) | 0.428848 \/ 0.000490 (0.428358) | 0.003599 \/ 0.000200 (0.003399) | 0.000069 \/ 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.023260 \/ 0.037411 (-0.014151) | 0.072467 \/ 0.014526 (0.057941) | 0.082398 \/ 0.176557 (-0.094159) | 0.142573 \/ 0.737135 (-0.594562) | 0.082570 \/ 0.296338 (-0.213768) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.426503 \/ 0.215209 (0.211294) | 4.267875 \/ 2.077655 (2.190220) | 2.189762 \/ 1.504120 (0.685642) | 2.027992 \/ 1.541195 (0.486798) | 2.053211 \/ 1.468490 (0.584721) | 0.503850 \/ 4.584777 (-4.080927) | 
3.086444 \/ 3.745712 (-0.659268) | 3.319492 \/ 5.269862 (-1.950370) | 2.070714 \/ 4.565676 (-2.494962) | 0.057591 \/ 0.424275 (-0.366684) | 0.006407 \/ 0.007607 (-0.001200) | 0.501145 \/ 0.226044 (0.275100) | 5.017753 \/ 2.268929 (2.748825) | 2.643145 \/ 55.444624 (-52.801479) | 2.327440 \/ 6.876477 (-4.549037) | 2.460250 \/ 2.142072 (0.318178) | 0.589397 \/ 4.805227 (-4.215830) | 0.124948 \/ 6.500664 (-6.375716) | 0.060450 \/ 0.075469 (-0.015020) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.279870 \/ 1.841788 (-0.561918) | 18.115908 \/ 8.074308 (10.041600) | 13.570032 \/ 10.191392 (3.378640) | 0.132981 \/ 0.680424 (-0.547442) | 0.016942 \/ 0.534201 (-0.517259) | 0.333591 \/ 0.579283 (-0.245692) | 0.358844 \/ 0.434364 (-0.075520) | 0.395748 \/ 0.540337 (-0.144590) | 0.546213 \/ 1.386936 (-0.840723) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006062 \/ 0.011353 (-0.005291) | 0.003673 \/ 0.011008 (-0.007336) | 0.064726 \/ 0.038508 (0.026218) | 0.061854 \/ 0.023109 (0.038745) | 0.385343 \/ 0.275898 (0.109445) | 0.441284 \/ 0.323480 (0.117805) | 0.004830 \/ 0.007986 (-0.003156) | 0.002909 \/ 0.004328 (-0.001420) | 0.063874 \/ 0.004250 (0.059624) | 0.049331 \/ 0.037052 (0.012278) | 0.418484 \/ 0.258489 (0.159995) | 0.451397 \/ 0.293841 (0.157556) | 0.027665 \/ 0.128546 (-0.100881) | 0.008088 \/ 0.075646 (-0.067558) | 0.069625 \/ 0.419271 (-0.349646) | 0.043437 \/ 0.043533 (-0.000095) | 0.359789 \/ 0.255139 (0.104650) | 0.430206 \/ 0.283200 (0.147007) | 0.022308 \/ 0.141683 (-0.119375) | 1.461030 \/ 1.452155 (0.008875) | 1.513683 \/ 1.492716 (0.020966) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.230958 \/ 0.018006 (0.212952) | 0.417553 \/ 0.000490 (0.417063) | 0.000802 \/ 0.000200 (0.000602) | 0.000066 \/ 0.000054 (0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025421 \/ 0.037411 (-0.011991) | 0.077156 \/ 0.014526 (0.062630) | 0.087533 \/ 0.176557 (-0.089024) | 0.138048 \/ 0.737135 (-0.599087) | 0.089358 \/ 0.296338 (-0.206981) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.439172 \/ 0.215209 (0.223963) | 4.409509 \/ 2.077655 (2.331854) | 2.491270 \/ 1.504120 (0.987150) | 2.308446 \/ 1.541195 (0.767252) | 2.378440 \/ 1.468490 (0.909950) | 0.499834 \/ 4.584777 (-4.084943) | 
3.083168 \/ 3.745712 (-0.662544) | 2.867543 \/ 5.269862 (-2.402318) | 1.876354 \/ 4.565676 (-2.689323) | 0.057092 \/ 0.424275 (-0.367183) | 0.006955 \/ 0.007607 (-0.000653) | 0.513799 \/ 0.226044 (0.287754) | 5.126660 \/ 2.268929 (2.857731) | 2.917348 \/ 55.444624 (-52.527277) | 2.508035 \/ 6.876477 (-4.368441) | 2.698089 \/ 2.142072 (0.556016) | 0.586828 \/ 4.805227 (-4.218399) | 0.124740 \/ 6.500664 (-6.375924) | 0.062276 \/ 0.075469 (-0.013193) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.291624 \/ 1.841788 (-0.550164) | 18.199968 \/ 8.074308 (10.125660) | 13.888139 \/ 10.191392 (3.696747) | 0.162955 \/ 0.680424 (-0.517469) | 0.017343 \/ 0.534201 (-0.516858) | 0.334683 \/ 0.579283 (-0.244600) | 0.352708 \/ 0.434364 (-0.081656) | 0.400629 \/ 0.540337 (-0.139708) | 0.539497 \/ 1.386936 (-0.847439) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#e7976db7fe22c6b93a869488d07b8137ea6a0db4 \"CML watermark\")\n","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007500 \/ 0.011353 (-0.003853) | 0.004498 \/ 0.011008 (-0.006510) | 0.100239 \/ 0.038508 (0.061731) | 0.083424 \/ 0.023109 (0.060315) | 0.366664 \/ 0.275898 (0.090766) | 0.406641 \/ 0.323480 (0.083161) | 0.004577 \/ 0.007986 (-0.003409) | 0.004809 \/ 0.004328 (0.000480) | 0.076898 \/ 0.004250 (0.072647) | 0.064021 \/ 0.037052 (0.026969) | 0.375836 \/ 0.258489 (0.117347) | 0.413008 \/ 0.293841 (0.119167) | 0.036010 \/ 0.128546 (-0.092537) | 0.009655 \/ 0.075646 (-0.065991) | 0.342595 \/ 0.419271 (-0.076677) | 0.061846 \/ 0.043533 (0.018313) | 0.376543 \/ 0.255139 (0.121404) | 0.395858 \/ 0.283200 (0.112659) | 0.026792 \/ 0.141683 (-0.114891) | 1.775569 \/ 1.452155 (0.323414) | 1.865077 \/ 1.492716 (0.372360) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.221521 \/ 0.018006 (0.203514) | 0.474604 \/ 0.000490 (0.474114) | 0.004354 \/ 0.000200 (0.004154) | 0.000090 \/ 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.032947 \/ 0.037411 (-0.004464) | 0.100454 \/ 0.014526 (0.085928) | 0.111955 \/ 0.176557 (-0.064602) | 0.179752 \/ 0.737135 (-0.557383) | 0.114282 \/ 0.296338 (-0.182056) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.458261 \/ 0.215209 (0.243052) | 4.563536 \/ 2.077655 (2.485881) | 2.231928 \/ 1.504120 (0.727808) | 2.036751 \/ 1.541195 (0.495556) | 2.170413 \/ 1.468490 (0.701923) | 0.570825 \/ 4.584777 (-4.013952) | 
4.505762 \/ 3.745712 (0.760050) | 5.033461 \/ 5.269862 (-0.236401) | 2.704989 \/ 4.565676 (-1.860687) | 0.067011 \/ 0.424275 (-0.357264) | 0.008568 \/ 0.007607 (0.000961) | 0.545151 \/ 0.226044 (0.319106) | 5.438984 \/ 2.268929 (3.170055) | 2.771818 \/ 55.444624 (-52.672806) | 2.393082 \/ 6.876477 (-4.483395) | 2.467173 \/ 2.142072 (0.325101) | 0.678849 \/ 4.805227 (-4.126379) | 0.160480 \/ 6.500664 (-6.340184) | 0.073681 \/ 0.075469 (-0.001788) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.532272 \/ 1.841788 (-0.309516) | 22.548741 \/ 8.074308 (14.474433) | 17.091044 \/ 10.191392 (6.899652) | 0.172100 \/ 0.680424 (-0.508324) | 0.022220 \/ 0.534201 (-0.511981) | 0.467871 \/ 0.579283 (-0.111412) | 0.491135 \/ 0.434364 (0.056771) | 0.548433 \/ 0.540337 (0.008096) | 0.733340 \/ 1.386936 (-0.653596) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007593 \/ 0.011353 (-0.003760) | 0.004656 \/ 0.011008 (-0.006352) | 0.076940 \/ 0.038508 (0.038431) | 0.085183 \/ 0.023109 (0.062073) | 0.447178 \/ 0.275898 (0.171280) | 0.469545 \/ 0.323480 (0.146065) | 0.006023 \/ 0.007986 (-0.001962) | 0.003808 \/ 0.004328 (-0.000520) | 0.076767 \/ 0.004250 (0.072517) | 0.065713 \/ 0.037052 (0.028661) | 0.445573 \/ 0.258489 (0.187084) | 0.481689 \/ 0.293841 (0.187848) | 0.036893 \/ 0.128546 (-0.091654) | 0.009976 \/ 0.075646 (-0.065670) | 0.084443 \/ 0.419271 (-0.334829) | 0.058829 \/ 0.043533 (0.015297) | 0.429291 \/ 0.255139 (0.174152) | 0.454016 \/ 0.283200 (0.170816) | 0.027289 \/ 0.141683 (-0.114394) | 1.806786 \/ 1.452155 (0.354632) | 1.887680 \/ 1.492716 (0.394964) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.241012 \/ 0.018006 (0.223006) | 0.470629 \/ 0.000490 (0.470139) | 0.003213 \/ 0.000200 (0.003013) | 0.000107 \/ 0.000054 (0.000052) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.036896 \/ 0.037411 (-0.000515) | 0.106932 \/ 0.014526 (0.092406) | 0.120333 \/ 0.176557 (-0.056223) | 0.186271 \/ 0.737135 (-0.550865) | 0.121581 \/ 0.296338 (-0.174758) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.507782 \/ 0.215209 (0.292573) | 5.062932 \/ 2.077655 (2.985278) | 2.689539 \/ 1.504120 (1.185419) | 2.482978 \/ 1.541195 (0.941784) | 2.561320 \/ 1.468490 (1.092830) | 0.570664 \/ 4.584777 (-4.014113) | 
4.346051 \/ 3.745712 (0.600339) | 6.479374 \/ 5.269862 (1.209513) | 4.096483 \/ 4.565676 (-0.469194) | 0.067564 \/ 0.424275 (-0.356711) | 0.009147 \/ 0.007607 (0.001540) | 0.596059 \/ 0.226044 (0.370015) | 5.963223 \/ 2.268929 (3.694295) | 3.201039 \/ 55.444624 (-52.243585) | 2.816581 \/ 6.876477 (-4.059896) | 3.047821 \/ 2.142072 (0.905748) | 0.687749 \/ 4.805227 (-4.117478) | 0.158174 \/ 6.500664 (-6.342490) | 0.073329 \/ 0.075469 (-0.002140) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.601346 \/ 1.841788 (-0.240441) | 23.712210 \/ 8.074308 (15.637902) | 16.567272 \/ 10.191392 (6.375880) | 0.224745 \/ 0.680424 (-0.455679) | 0.021662 \/ 0.534201 (-0.512539) | 0.471427 \/ 0.579283 (-0.107856) | 0.498751 \/ 0.434364 (0.064387) | 0.572047 \/ 0.540337 (0.031710) | 0.821868 \/ 1.386936 (-0.565068) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#34d0c9027c750adc89f3d04a6bf2e9cb95915da4 \"CML watermark\")\n"],"created_at":1689262904000,"updated_at":1689272945000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"draft":true,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/6028","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6028","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6028.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6028.patch","merged_at":null},"body":"Thanks to @janineguo 's work in https:\/\/github.com\/huggingface\/datasets\/pull\/5919 which was needed to support HfFileSystem.\r\n\r\n## Implementation details\r\n\r\nI replaced all the from_hf_repo and from_local_or_remote in data_files.py to only use a new `from_patterns` which works for any fsspec path, including hf:\/\/ paths, https:\/\/ URLs and local paths. This simplifies the codebase since there is no logic duplication anymore when it comes to data files resolution.\r\n\r\nI added `_prepare_path_and_storage_options` which returns the right storage_options to use given a path and a `DownloadConfig`. 
This is the only place where the logic depends on the filesystem type that must be used.\r\n\r\nI also removed the `get_metadata_data_files_list` and `get_patterns_and_data_files` functions added recently, since data files resolution is now handled using a common interface.\r\n\r\n## Breaking changes\r\n\r\nDataFilesList and DataFilesDict:\r\n- use `str` paths instead of `Union[Path, Url]`\r\n- support hf:\/\/ paths\r\n\r\nclose https:\/\/github.com\/huggingface\/datasets\/issues\/6017","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6028\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6028\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6027","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6027\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6027\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6027\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6027","id":1803008486,"node_id":"PR_kwDODunzps5Va4g3","number":6027,"title":"Delete `task_templates` in `IterableDataset` when they are no longer valid","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008698 \/ 0.011353 (-0.002655) | 0.005250 \/ 0.011008 (-0.005758) | 0.104101 \/ 0.038508 (0.065593) | 0.085021 \/ 0.023109 (0.061912) | 0.426653 \/ 0.275898 (0.150755) | 0.460449 \/ 0.323480 (0.136969) | 0.005222 \/ 0.007986 (-0.002763) | 0.006280 \/ 0.004328 (0.001951) | 0.083458 \/ 0.004250 (0.079207) | 0.066132 \/ 0.037052 (0.029079) | 0.433416 \/ 0.258489 (0.174927) | 0.482718 \/ 0.293841 (0.188877) | 0.048872 \/ 0.128546 (-0.079675) | 0.013699 \/ 0.075646 (-0.061948) | 0.365660 \/ 0.419271 (-0.053611) | 0.071008 \/ 0.043533 (0.027475) | 0.428688 \/ 0.255139 (0.173549) | 0.443554 \/ 0.283200 (0.160354) | 0.035901 \/ 0.141683 (-0.105782) | 1.829296 \/ 1.452155 (0.377141) | 1.862351 \/ 1.492716 (0.369635) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.236284 \/ 0.018006 (0.218278) | 0.584075 \/ 0.000490 (0.583585) | 0.004634 \/ 0.000200 (0.004434) | 0.000125 \/ 0.000054 (0.000070) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.034723 \/ 0.037411 (-0.002688) | 0.100989 \/ 0.014526 (0.086464) | 0.113722 \/ 0.176557 (-0.062834) | 0.187659 \/ 0.737135 (-0.549477) | 0.113937 \/ 0.296338 (-0.182401) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.587500 \/ 0.215209 (0.372291) | 5.847371 \/ 2.077655 (3.769716) | 2.599691 \/ 1.504120 (1.095571) | 2.246187 \/ 1.541195 (0.704992) | 2.419126 \/ 1.468490 (0.950636) | 0.847327 \/ 4.584777 (-3.737450) | 
5.230438 \/ 3.745712 (1.484726) | 7.539021 \/ 5.269862 (2.269160) | 4.617473 \/ 4.565676 (0.051797) | 0.103620 \/ 0.424275 (-0.320655) | 0.009195 \/ 0.007607 (0.001588) | 0.714247 \/ 0.226044 (0.488203) | 7.331621 \/ 2.268929 (5.062693) | 3.416575 \/ 55.444624 (-52.028049) | 2.649467 \/ 6.876477 (-4.227009) | 2.928091 \/ 2.142072 (0.786018) | 1.002155 \/ 4.805227 (-3.803072) | 0.210790 \/ 6.500664 (-6.289874) | 0.081303 \/ 0.075469 (0.005834) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.655431 \/ 1.841788 (-0.186357) | 24.069595 \/ 8.074308 (15.995287) | 20.923766 \/ 10.191392 (10.732374) | 0.232021 \/ 0.680424 (-0.448403) | 0.026355 \/ 0.534201 (-0.507846) | 0.496830 \/ 0.579283 (-0.082453) | 0.582620 \/ 0.434364 (0.148257) | 0.551227 \/ 0.540337 (0.010890) | 0.756389 \/ 1.386936 (-0.630547) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009329 \/ 0.011353 (-0.002024) | 0.005045 \/ 0.011008 (-0.005964) | 0.082116 \/ 0.038508 (0.043608) | 0.082420 \/ 0.023109 (0.059311) | 0.502513 \/ 0.275898 (0.226615) | 0.526098 \/ 0.323480 (0.202618) | 0.007468 \/ 0.007986 (-0.000517) | 0.005477 \/ 0.004328 (0.001148) | 0.082617 \/ 0.004250 (0.078367) | 0.070292 \/ 0.037052 (0.033239) | 0.503290 \/ 0.258489 (0.244801) | 0.541631 \/ 0.293841 (0.247790) | 0.050826 \/ 0.128546 (-0.077721) | 0.014699 \/ 0.075646 (-0.060948) | 0.094441 \/ 0.419271 (-0.324830) | 0.065034 \/ 0.043533 (0.021501) | 0.486778 \/ 0.255139 (0.231639) | 0.516907 \/ 0.283200 (0.233707) | 0.045140 \/ 0.141683 (-0.096543) | 1.831676 \/ 1.452155 (0.379521) | 1.910865 \/ 1.492716 (0.418149) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.286818 \/ 0.018006 (0.268812) | 0.558621 \/ 0.000490 (0.558131) | 0.002830 \/ 0.000200 (0.002630) | 0.000148 \/ 0.000054 (0.000094) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.036716 \/ 0.037411 (-0.000696) | 0.107830 \/ 0.014526 (0.093305) | 0.116368 \/ 0.176557 (-0.060188) | 0.178401 \/ 0.737135 (-0.558734) | 0.124729 \/ 0.296338 (-0.171609) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.633557 \/ 0.215209 (0.418348) | 6.423135 \/ 2.077655 (4.345480) | 2.981883 \/ 1.504120 (1.477763) | 2.755592 \/ 1.541195 (1.214398) | 2.769337 \/ 1.468490 (1.300847) | 0.836219 \/ 4.584777 (-3.748558) | 
5.302030 \/ 3.745712 (1.556318) | 7.463960 \/ 5.269862 (2.194098) | 4.427254 \/ 4.565676 (-0.138422) | 0.095990 \/ 0.424275 (-0.328285) | 0.009264 \/ 0.007607 (0.001657) | 0.770642 \/ 0.226044 (0.544597) | 7.779667 \/ 2.268929 (5.510739) | 3.799115 \/ 55.444624 (-51.645509) | 3.212560 \/ 6.876477 (-3.663917) | 3.281657 \/ 2.142072 (1.139584) | 1.044981 \/ 4.805227 (-3.760246) | 0.210693 \/ 6.500664 (-6.289971) | 0.079466 \/ 0.075469 (0.003997) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.793155 \/ 1.841788 (-0.048632) | 24.691127 \/ 8.074308 (16.616819) | 22.083150 \/ 10.191392 (11.891758) | 0.242246 \/ 0.680424 (-0.438178) | 0.028001 \/ 0.534201 (-0.506200) | 0.494061 \/ 0.579283 (-0.085222) | 0.599288 \/ 0.434364 (0.164924) | 0.552101 \/ 0.540337 (0.011764) | 0.784093 \/ 1.386936 (-0.602843) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#cd429c39604af34bc3a3ba1f463329b23fcbc1e3 \"CML watermark\")\n","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006658 \/ 0.011353 (-0.004695) | 0.004044 \/ 0.011008 (-0.006965) | 0.085844 \/ 0.038508 (0.047336) | 0.077147 \/ 0.023109 (0.054038) | 0.344387 \/ 0.275898 (0.068489) | 0.376718 \/ 0.323480 (0.053238) | 0.005537 \/ 0.007986 (-0.002448) | 0.003452 \/ 0.004328 (-0.000876) | 0.065326 \/ 0.004250 (0.061076) | 0.057639 \/ 0.037052 (0.020587) | 0.352363 \/ 0.258489 (0.093873) | 0.378939 \/ 0.293841 (0.085098) | 0.031259 \/ 0.128546 (-0.097287) | 0.008464 \/ 0.075646 (-0.067183) | 0.289076 \/ 0.419271 (-0.130195) | 0.052991 \/ 0.043533 (0.009459) | 0.346053 \/ 0.255139 (0.090914) | 0.362761 \/ 0.283200 (0.079561) | 0.023501 \/ 0.141683 (-0.118182) | 1.478312 \/ 1.452155 (0.026157) | 1.545437 \/ 1.492716 (0.052721) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.202964 \/ 0.018006 (0.184957) | 0.534793 \/ 0.000490 (0.534303) | 0.006025 \/ 0.000200 (0.005825) | 0.000225 \/ 0.000054 (0.000171) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029418 \/ 0.037411 (-0.007993) | 0.084297 \/ 0.014526 (0.069771) | 0.096702 \/ 0.176557 (-0.079855) | 0.157355 \/ 0.737135 (-0.579781) | 0.097858 \/ 0.296338 (-0.198480) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.380728 \/ 0.215209 (0.165519) | 3.787712 \/ 2.077655 (1.710057) | 1.836393 \/ 1.504120 (0.332273) | 1.678415 \/ 1.541195 (0.137220) | 1.781800 \/ 1.468490 (0.313310) | 0.478677 \/ 4.584777 (-4.106100) | 
3.614080 \/ 3.745712 (-0.131632) | 3.255637 \/ 5.269862 (-2.014225) | 2.063642 \/ 4.565676 (-2.502035) | 0.056470 \/ 0.424275 (-0.367805) | 0.007408 \/ 0.007607 (-0.000199) | 0.459155 \/ 0.226044 (0.233111) | 4.586679 \/ 2.268929 (2.317750) | 2.305737 \/ 55.444624 (-53.138888) | 1.954755 \/ 6.876477 (-4.921721) | 2.190809 \/ 2.142072 (0.048737) | 0.572426 \/ 4.805227 (-4.232802) | 0.130349 \/ 6.500664 (-6.370315) | 0.059346 \/ 0.075469 (-0.016124) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.253671 \/ 1.841788 (-0.588117) | 19.509015 \/ 8.074308 (11.434706) | 13.951349 \/ 10.191392 (3.759957) | 0.171038 \/ 0.680424 (-0.509386) | 0.018826 \/ 0.534201 (-0.515375) | 0.394642 \/ 0.579283 (-0.184642) | 0.419614 \/ 0.434364 (-0.014750) | 0.470931 \/ 0.540337 (-0.069406) | 0.643858 \/ 1.386936 (-0.743078) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006765 \/ 0.011353 (-0.004587) | 0.003955 \/ 0.011008 (-0.007053) | 0.064377 \/ 0.038508 (0.025869) | 0.076980 \/ 0.023109 (0.053871) | 0.368675 \/ 0.275898 (0.092777) | 0.403746 \/ 0.323480 (0.080267) | 0.005303 \/ 0.007986 (-0.002683) | 0.003257 \/ 0.004328 (-0.001072) | 0.064154 \/ 0.004250 (0.059903) | 0.056975 \/ 0.037052 (0.019923) | 0.376718 \/ 0.258489 (0.118229) | 0.416291 \/ 0.293841 (0.122450) | 0.031444 \/ 0.128546 (-0.097102) | 0.008532 \/ 0.075646 (-0.067115) | 0.070455 \/ 0.419271 (-0.348816) | 0.049032 \/ 0.043533 (0.005499) | 0.361413 \/ 0.255139 (0.106274) | 0.384648 \/ 0.283200 (0.101448) | 0.024050 \/ 0.141683 (-0.117633) | 1.514330 \/ 1.452155 (0.062176) | 1.585424 \/ 1.492716 (0.092708) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.214701 \/ 0.018006 (0.196695) | 0.447706 \/ 0.000490 (0.447216) | 0.000373 \/ 0.000200 (0.000173) | 0.000058 \/ 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.031007 \/ 0.037411 (-0.006404) | 0.090545 \/ 0.014526 (0.076019) | 0.100611 \/ 0.176557 (-0.075945) | 0.154847 \/ 0.737135 (-0.582289) | 0.102864 \/ 0.296338 (-0.193475) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.427740 \/ 0.215209 (0.212531) | 4.273143 \/ 2.077655 (2.195488) | 2.294906 \/ 1.504120 (0.790786) | 2.138460 \/ 1.541195 (0.597265) | 2.274126 \/ 1.468490 (0.805636) | 0.486559 \/ 4.584777 (-4.098218) | 
3.565554 \/ 3.745712 (-0.180158) | 3.377659 \/ 5.269862 (-1.892202) | 2.029883 \/ 4.565676 (-2.535793) | 0.057303 \/ 0.424275 (-0.366972) | 0.007314 \/ 0.007607 (-0.000293) | 0.504263 \/ 0.226044 (0.278219) | 5.041196 \/ 2.268929 (2.772268) | 2.819273 \/ 55.444624 (-52.625351) | 2.421479 \/ 6.876477 (-4.454998) | 2.503063 \/ 2.142072 (0.360991) | 0.581467 \/ 4.805227 (-4.223760) | 0.133532 \/ 6.500664 (-6.367132) | 0.062504 \/ 0.075469 (-0.012965) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.328765 \/ 1.841788 (-0.513022) | 20.131672 \/ 8.074308 (12.057363) | 14.312895 \/ 10.191392 (4.121503) | 0.191199 \/ 0.680424 (-0.489225) | 0.018522 \/ 0.534201 (-0.515679) | 0.393121 \/ 0.579283 (-0.186162) | 0.413122 \/ 0.434364 (-0.021242) | 0.469312 \/ 0.540337 (-0.071026) | 0.633140 \/ 1.386936 (-0.753796) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#dbf6c103f5844de40431478e7e4a64fbf2c2c067 \"CML watermark\")\n"],"created_at":1689254177000,"updated_at":1689257180000,"closed_at":1689256655000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/6027","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6027","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6027.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6027.patch","merged_at":1689256655000},"body":"Fix #6025 ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6027\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6027\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6026","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6026\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6026\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6026\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6026","id":1802929222,"node_id":"PR_kwDODunzps5VanI8","number":6026,"title":"Fix style with ruff 
0.0.278","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_6026). All of your documentation changes will be reflected on that endpoint.","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006444 \/ 0.011353 (-0.004909) | 0.003768 \/ 0.011008 (-0.007240) | 0.079625 \/ 0.038508 (0.041117) | 0.064490 \/ 0.023109 (0.041381) | 0.313858 \/ 0.275898 (0.037960) | 0.350810 \/ 0.323480 (0.027330) | 0.004804 \/ 0.007986 (-0.003182) | 0.002904 \/ 0.004328 (-0.001425) | 0.061728 \/ 0.004250 (0.057477) | 0.052265 \/ 0.037052 (0.015213) | 0.321246 \/ 0.258489 (0.062757) | 0.353873 \/ 0.293841 (0.060032) | 0.027510 \/ 0.128546 (-0.101036) | 0.007942 \/ 0.075646 (-0.067704) | 0.260518 \/ 0.419271 (-0.158754) | 0.045686 \/ 0.043533 (0.002153) | 0.316821 \/ 0.255139 (0.061682) | 0.337086 \/ 0.283200 (0.053886) | 0.022188 \/ 0.141683 (-0.119495) | 1.427345 \/ 1.452155 (-0.024810) | 1.476059 \/ 1.492716 (-0.016657) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.189640 \/ 0.018006 (0.171634) | 0.429724 \/ 0.000490 (0.429235) | 0.005314 \/ 0.000200 (0.005114) | 0.000076 \/ 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.024412 \/ 0.037411 (-0.013000) | 0.073488 \/ 0.014526 (0.058962) | 0.083843 \/ 0.176557 (-0.092714) | 0.147849 \/ 0.737135 (-0.589286) | 0.085465 \/ 0.296338 (-0.210873) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.405314 \/ 0.215209 (0.190105) | 4.071471 \/ 2.077655 (1.993816) | 1.916252 \/ 1.504120 (0.412132) | 1.721616 \/ 1.541195 (0.180422) | 1.807187 \/ 1.468490 (0.338697) | 0.498045 \/ 4.584777 (-4.086732) | 
3.057526 \/ 3.745712 (-0.688187) | 4.451424 \/ 5.269862 (-0.818437) | 2.764020 \/ 4.565676 (-1.801656) | 0.057665 \/ 0.424275 (-0.366610) | 0.006679 \/ 0.007607 (-0.000928) | 0.485733 \/ 0.226044 (0.259688) | 4.844367 \/ 2.268929 (2.575438) | 2.435359 \/ 55.444624 (-53.009265) | 2.111478 \/ 6.876477 (-4.764999) | 2.377448 \/ 2.142072 (0.235375) | 0.587997 \/ 4.805227 (-4.217230) | 0.125545 \/ 6.500664 (-6.375120) | 0.061509 \/ 0.075469 (-0.013960) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.229210 \/ 1.841788 (-0.612577) | 18.553994 \/ 8.074308 (10.479686) | 14.037877 \/ 10.191392 (3.846485) | 0.144230 \/ 0.680424 (-0.536194) | 0.016891 \/ 0.534201 (-0.517310) | 0.329039 \/ 0.579283 (-0.250244) | 0.357269 \/ 0.434364 (-0.077095) | 0.384222 \/ 0.540337 (-0.156115) | 0.521292 \/ 1.386936 (-0.865644) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006359 \/ 0.011353 (-0.004994) | 0.003721 \/ 0.011008 (-0.007287) | 0.062047 \/ 0.038508 (0.023539) | 0.065267 \/ 0.023109 (0.042158) | 0.360164 \/ 0.275898 (0.084266) | 0.402292 \/ 0.323480 (0.078812) | 0.005603 \/ 0.007986 (-0.002382) | 0.002966 \/ 0.004328 (-0.001363) | 0.062580 \/ 0.004250 (0.058330) | 0.053634 \/ 0.037052 (0.016582) | 0.362210 \/ 0.258489 (0.103721) | 0.404285 \/ 0.293841 (0.110444) | 0.027567 \/ 0.128546 (-0.100979) | 0.008119 \/ 0.075646 (-0.067528) | 0.067577 \/ 0.419271 (-0.351694) | 0.042867 \/ 0.043533 (-0.000666) | 0.361576 \/ 0.255139 (0.106437) | 0.389061 \/ 0.283200 (0.105862) | 0.021923 \/ 0.141683 (-0.119760) | 1.446259 \/ 1.452155 (-0.005895) | 1.490724 \/ 1.492716 (-0.001992) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.206433 \/ 0.018006 (0.188427) | 0.424178 \/ 0.000490 (0.423688) | 0.002340 \/ 0.000200 (0.002140) | 0.000069 \/ 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.024955 \/ 0.037411 (-0.012456) | 0.077446 \/ 0.014526 (0.062920) | 0.088540 \/ 0.176557 (-0.088017) | 0.141225 \/ 0.737135 (-0.595910) | 0.089747 \/ 0.296338 (-0.206592) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.443738 \/ 0.215209 (0.228529) | 4.208887 \/ 2.077655 (2.131233) | 2.155127 \/ 1.504120 (0.651007) | 2.028178 \/ 1.541195 (0.486983) | 2.084903 \/ 1.468490 (0.616413) | 0.497530 \/ 4.584777 (-4.087247) 
| 3.069012 \/ 3.745712 (-0.676700) | 3.025184 \/ 5.269862 (-2.244678) | 1.904687 \/ 4.565676 (-2.660990) | 0.057526 \/ 0.424275 (-0.366749) | 0.006482 \/ 0.007607 (-0.001125) | 0.494692 \/ 0.226044 (0.268647) | 4.944437 \/ 2.268929 (2.675508) | 2.655989 \/ 55.444624 (-52.788635) | 2.331677 \/ 6.876477 (-4.544800) | 2.382396 \/ 2.142072 (0.240324) | 0.582019 \/ 4.805227 (-4.223209) | 0.125866 \/ 6.500664 (-6.374799) | 0.062908 \/ 0.075469 (-0.012561) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.294612 \/ 1.841788 (-0.547176) | 19.016152 \/ 8.074308 (10.941844) | 14.088828 \/ 10.191392 (3.897436) | 0.160842 \/ 0.680424 (-0.519582) | 0.017054 \/ 0.534201 (-0.517146) | 0.333647 \/ 0.579283 (-0.245636) | 0.348094 \/ 0.434364 (-0.086270) | 0.394970 \/ 0.540337 (-0.145367) | 0.551141 \/ 1.386936 (-0.835795) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#9e9cfe886792b30b5000808072a0f91ec8536749 \"CML watermark\")\n","
<details><summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details><summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007442 \/ 0.011353 (-0.003911) | 0.004302 \/ 0.011008 (-0.006707) | 0.087159 \/ 0.038508 (0.048651) | 0.095094 \/ 0.023109 (0.071985) | 0.315422 \/ 0.275898 (0.039524) | 0.346672 \/ 0.323480 (0.023192) | 0.005811 \/ 0.007986 (-0.002174) | 0.003597 \/ 0.004328 (-0.000731) | 0.066400 \/ 0.004250 (0.062150) | 0.065947 \/ 0.037052 (0.028894) | 0.323269 \/ 0.258489 (0.064780) | 0.353309 \/ 0.293841 (0.059468) | 0.032268 \/ 0.128546 (-0.096278) | 0.008696 \/ 0.075646 (-0.066950) | 0.291486 \/ 0.419271 (-0.127786) | 0.054609 \/ 0.043533 (0.011076) | 0.321061 \/ 0.255139 (0.065922) | 0.336907 \/ 0.283200 (0.053707) | 0.027338 \/ 0.141683 (-0.114345) | 1.496442 \/ 1.452155 (0.044287) | 1.576946 \/ 1.492716 (0.084229) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.229140 \/ 0.018006 (0.211134) | 0.487500 \/ 0.000490 (0.487010) | 0.002425 \/ 0.000200 (0.002225) | 0.000089 \/ 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029351 \/ 0.037411 (-0.008060) | 0.089610 \/ 0.014526 (0.075084) | 0.097880 \/ 0.176557 (-0.078676) | 0.155947 \/ 0.737135 (-0.581189) | 0.098593 \/ 0.296338 (-0.197745) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.382911 \/ 0.215209 (0.167702) | 3.820363 \/ 2.077655 (1.742708) | 1.866385 \/ 1.504120 (0.362265) | 1.712910 \/ 1.541195 (0.171716) | 1.813863 \/ 1.468490 (0.345373) | 0.484884 \/ 4.584777 (-4.099893) | 
3.678911 \/ 3.745712 (-0.066801) | 5.249908 \/ 5.269862 (-0.019953) | 3.099614 \/ 4.565676 (-1.466063) | 0.057449 \/ 0.424275 (-0.366826) | 0.007728 \/ 0.007607 (0.000120) | 0.462123 \/ 0.226044 (0.236078) | 4.603942 \/ 2.268929 (2.335014) | 2.380957 \/ 55.444624 (-53.063668) | 2.059621 \/ 6.876477 (-4.816856) | 2.293764 \/ 2.142072 (0.151691) | 0.636471 \/ 4.805227 (-4.168756) | 0.150112 \/ 6.500664 (-6.350552) | 0.063705 \/ 0.075469 (-0.011764) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.358099 \/ 1.841788 (-0.483689) | 20.193750 \/ 8.074308 (12.119442) | 14.297350 \/ 10.191392 (4.105958) | 0.164477 \/ 0.680424 (-0.515947) | 0.018259 \/ 0.534201 (-0.515942) | 0.399010 \/ 0.579283 (-0.180273) | 0.417306 \/ 0.434364 (-0.017058) | 0.456961 \/ 0.540337 (-0.083377) | 0.631068 \/ 1.386936 (-0.755868) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007324 \/ 0.011353 (-0.004028) | 0.004463 \/ 0.011008 (-0.006545) | 0.066148 \/ 0.038508 (0.027640) | 0.093909 \/ 0.023109 (0.070799) | 0.399122 \/ 0.275898 (0.123224) | 0.430226 \/ 0.323480 (0.106746) | 0.005505 \/ 0.007986 (-0.002481) | 0.003579 \/ 0.004328 (-0.000749) | 0.066529 \/ 0.004250 (0.062278) | 0.063471 \/ 0.037052 (0.026418) | 0.406351 \/ 0.258489 (0.147862) | 0.439987 \/ 0.293841 (0.146146) | 0.032640 \/ 0.128546 (-0.095906) | 0.008770 \/ 0.075646 (-0.066877) | 0.072592 \/ 0.419271 (-0.346680) | 0.050429 \/ 0.043533 (0.006896) | 0.390873 \/ 0.255139 (0.135734) | 0.412438 \/ 0.283200 (0.129239) | 0.027113 \/ 0.141683 (-0.114570) | 1.458281 \/ 1.452155 (0.006126) | 1.536819 \/ 1.492716 (0.044103) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.228309 \/ 0.018006 (0.210303) | 0.454042 \/ 0.000490 (0.453552) | 0.000387 \/ 0.000200 (0.000187) | 0.000055 \/ 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029573 \/ 0.037411 (-0.007838) | 0.086433 \/ 0.014526 (0.071907) | 0.097992 \/ 0.176557 (-0.078565) | 0.152464 \/ 0.737135 (-0.584671) | 0.099901 \/ 0.296338 (-0.196437) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.413807 \/ 0.215209 (0.198598) | 4.126395 \/ 2.077655 (2.048740) | 2.113544 \/ 1.504120 (0.609424) | 1.967829 \/ 1.541195 (0.426635) | 2.037123 \/ 1.468490 (0.568633) | 0.489403 \/ 4.584777 (-4.095374) | 
3.689508 \/ 3.745712 (-0.056204) | 3.503909 \/ 5.269862 (-1.765952) | 2.113812 \/ 4.565676 (-2.451864) | 0.057988 \/ 0.424275 (-0.366287) | 0.007336 \/ 0.007607 (-0.000271) | 0.490840 \/ 0.226044 (0.264795) | 4.885040 \/ 2.268929 (2.616112) | 2.627864 \/ 55.444624 (-52.816760) | 2.231467 \/ 6.876477 (-4.645010) | 2.251307 \/ 2.142072 (0.109235) | 0.577370 \/ 4.805227 (-4.227857) | 0.131770 \/ 6.500664 (-6.368895) | 0.061313 \/ 0.075469 (-0.014156) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.362052 \/ 1.841788 (-0.479735) | 21.332694 \/ 8.074308 (13.258386) | 15.562019 \/ 10.191392 (5.370627) | 0.170874 \/ 0.680424 (-0.509550) | 0.019226 \/ 0.534201 (-0.514975) | 0.400311 \/ 0.579283 (-0.178972) | 0.423060 \/ 0.434364 (-0.011304) | 0.469946 \/ 0.540337 (-0.070391) | 0.647745 \/ 1.386936 (-0.739191) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#aec567c2f224f192e6e1f9799e3afc755eb517b2 \"CML watermark\")\n"],"created_at":1689251664000,"updated_at":1689252386000,"closed_at":1689251821000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/6026","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6026","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6026.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6026.patch","merged_at":1689251821000},"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6026\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6026\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6025","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6025\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6025\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6025\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/6025","id":1801852601,"node_id":"I_kwDODunzps5rZha5","number":6025,"title":"Using a dataset for a use other than it was intended 
for.","user":{"login":"surya-narayanan","id":17240858,"node_id":"MDQ6VXNlcjE3MjQwODU4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17240858?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/surya-narayanan","html_url":"https:\/\/github.com\/surya-narayanan","followers_url":"https:\/\/api.github.com\/users\/surya-narayanan\/followers","following_url":"https:\/\/api.github.com\/users\/surya-narayanan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/surya-narayanan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/surya-narayanan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/surya-narayanan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/surya-narayanan\/orgs","repos_url":"https:\/\/api.github.com\/users\/surya-narayanan\/repos","events_url":"https:\/\/api.github.com\/users\/surya-narayanan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/surya-narayanan\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I've opened a PR with a fix. In the meantime, you can avoid the error by deleting `task_templates` with `dataset.info.task_templates = None` before the `interleave_datasets` call.\r\n` "],"created_at":1689201197000,"updated_at":1689256656000,"closed_at":1689256656000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nHi, I want to use the rotten tomatoes dataset but for a task other than classification, but when I interleave the dataset, it throws ```'ValueError: Column label is not present in features.'```. It seems that the label_col must be there in the dataset for some reason? 
\r\n\r\nHere is the full stacktrace\r\n\r\n```\r\n File \"\/home\/suryahari\/Vornoi\/tryage-handoff-other-datasets.py\", line 276, in create_dataloaders \r\n dataset = interleave_datasets(dsfold, stopping_strategy=\"all_exhausted\") \r\n File \"\/home\/suryahari\/miniconda3\/envs\/vornoi\/lib\/python3.10\/site-packages\/datasets\/combine.py\", line 134, in interleave_datasets \r\n return _interleave_iterable_datasets( \r\n File \"\/home\/suryahari\/miniconda3\/envs\/vornoi\/lib\/python3.10\/site-packages\/datasets\/iterable_dataset.py\", line 1833, in _interleave_iterable_datasets \r\n info = DatasetInfo.from_merge([d.info for d in datasets]) \r\n File \"\/home\/suryahari\/miniconda3\/envs\/vornoi\/lib\/python3.10\/site-packages\/datasets\/info.py\", line 275, in from_merge \r\n dataset_infos = [dset_info.copy() for dset_info in dataset_infos if dset_info is not None] \r\n File \"\/home\/suryahari\/miniconda3\/envs\/vornoi\/lib\/python3.10\/site-packages\/datasets\/info.py\", line 275, in <listcomp> \r\n dataset_infos = [dset_info.copy() for dset_info in dataset_infos if dset_info is not None] \r\n File \"\/home\/suryahari\/miniconda3\/envs\/vornoi\/lib\/python3.10\/site-packages\/datasets\/info.py\", line 378, in copy \r\n return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()}) \r\n File \"<string>\", line 20, in __init__ \r\n File \"\/home\/suryahari\/miniconda3\/envs\/vornoi\/lib\/python3.10\/site-packages\/datasets\/info.py\", line 208, in __post_init__ \r\n self.task_templates = [ \r\n File \"\/home\/suryahari\/miniconda3\/envs\/vornoi\/lib\/python3.10\/site-packages\/datasets\/info.py\", line 209, in <listcomp> \r\n template.align_with_features(self.features) for template in (self.task_templates) \r\n File \"\/home\/suryahari\/miniconda3\/envs\/vornoi\/lib\/python3.10\/site-packages\/datasets\/tasks\/text_classification.py\", line 20, in align_with_features \r\n raise ValueError(f\"Column {self.label_column} is not present in features.\") \r\nValueError: Column label is not present in features. \r\n```\n\n### Steps to reproduce the bug\n\nDelete the column `labels` from the `rotten_tomatoes` dataset. Try to interleave it with other datasets.\n\n### Expected behavior\n\nShould let me use the dataset with just the `text` field\n\n### Environment info\n\nlatest datasets library? 
I don't think this was an issue in earlier versions.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6025\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6025\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6024","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6024\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6024\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6024\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6024","id":1801708808,"node_id":"PR_kwDODunzps5VWbGe","number":6024,"title":"Don't reference self in Spark._validate_cache_dir","user":{"login":"maddiedawson","id":106995444,"node_id":"U_kgDOBmCe9A","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/106995444?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/maddiedawson","html_url":"https:\/\/github.com\/maddiedawson","followers_url":"https:\/\/api.github.com\/users\/maddiedawson\/followers","following_url":"https:\/\/api.github.com\/users\/maddiedawson\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/maddiedawson\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/maddiedawson\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/maddiedawson\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/maddiedawson\/orgs","repos_url":"https:\/\/api.github.com\/users\/maddiedawson\/repos","events_url":"https:\/\/api.github.com\/users\/maddiedawson\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/maddiedawson\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Ptal @lhoestq :) I tested this manually on a multi-node Databricks cluster","Hm looks like the check_code_quality failures are unrelated to me change... https:\/\/github.com\/huggingface\/datasets\/actions\/runs\/5536162850\/jobs\/10103451883?pr=6024","_The documentation is not available anymore as the PR was closed or merged._","
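The workaround quoted in the comment on issue 6025 above ("delete `task_templates` with `dataset.info.task_templates = None` before the `interleave_datasets` call") can be sketched end to end. This is a minimal sketch, assuming map-style `rotten_tomatoes` splits rather than the iterable datasets in the original traceback; the variable names are illustrative only:

```python
# Minimal sketch of the issue 6025 workaround: drop the stale task templates
# before interleaving, so DatasetInfo.copy() never tries to align a
# classification template with the removed "label" column.
from datasets import interleave_datasets, load_dataset

train = load_dataset("rotten_tomatoes", split="train").remove_columns("label")
valid = load_dataset("rotten_tomatoes", split="validation").remove_columns("label")

for split in (train, valid):
    split.info.task_templates = None  # the workaround suggested in the comment

mixed = interleave_datasets([train, valid], stopping_strategy="all_exhausted")
print(mixed)  # only the "text" column remains
```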
<details><summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details><summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.005952 \/ 0.011353 (-0.005400) | 0.003585 \/ 0.011008 (-0.007424) | 0.079163 \/ 0.038508 (0.040655) | 0.057926 \/ 0.023109 (0.034817) | 0.326647 \/ 0.275898 (0.050749) | 0.383485 \/ 0.323480 (0.060005) | 0.004530 \/ 0.007986 (-0.003456) | 0.002821 \/ 0.004328 (-0.001508) | 0.062071 \/ 0.004250 (0.057820) | 0.048023 \/ 0.037052 (0.010971) | 0.329368 \/ 0.258489 (0.070879) | 0.390877 \/ 0.293841 (0.097036) | 0.026959 \/ 0.128546 (-0.101588) | 0.007911 \/ 0.075646 (-0.067735) | 0.259956 \/ 0.419271 (-0.159315) | 0.044582 \/ 0.043533 (0.001049) | 0.320537 \/ 0.255139 (0.065398) | 0.373814 \/ 0.283200 (0.090614) | 0.020275 \/ 0.141683 (-0.121408) | 1.532128 \/ 1.452155 (0.079973) | 1.595031 \/ 1.492716 (0.102315) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.186127 \/ 0.018006 (0.168120) | 0.428586 \/ 0.000490 (0.428097) | 0.005180 \/ 0.000200 (0.004980) | 0.000069 \/ 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.024876 \/ 0.037411 (-0.012536) | 0.072169 \/ 0.014526 (0.057643) | 0.082015 \/ 0.176557 (-0.094542) | 0.147467 \/ 0.737135 (-0.589668) | 0.082769 \/ 0.296338 (-0.213570) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.410625 \/ 0.215209 (0.195416) | 4.116742 \/ 2.077655 (2.039088) | 2.172291 \/ 1.504120 (0.668171) | 2.022462 \/ 1.541195 (0.481268) | 2.048142 \/ 1.468490 (0.579651) | 0.503152 \/ 4.584777 (-4.081625) | 
3.019135 \/ 3.745712 (-0.726577) | 3.589451 \/ 5.269862 (-1.680410) | 2.206876 \/ 4.565676 (-2.358801) | 0.057687 \/ 0.424275 (-0.366588) | 0.006560 \/ 0.007607 (-0.001047) | 0.475585 \/ 0.226044 (0.249541) | 4.784344 \/ 2.268929 (2.515416) | 2.506322 \/ 55.444624 (-52.938302) | 2.168251 \/ 6.876477 (-4.708225) | 2.324453 \/ 2.142072 (0.182381) | 0.590609 \/ 4.805227 (-4.214618) | 0.124178 \/ 6.500664 (-6.376486) | 0.059197 \/ 0.075469 (-0.016272) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.212359 \/ 1.841788 (-0.629429) | 17.915843 \/ 8.074308 (9.841535) | 13.128330 \/ 10.191392 (2.936938) | 0.144805 \/ 0.680424 (-0.535618) | 0.016889 \/ 0.534201 (-0.517312) | 0.344056 \/ 0.579283 (-0.235227) | 0.359370 \/ 0.434364 (-0.074994) | 0.404199 \/ 0.540337 (-0.136138) | 0.549117 \/ 1.386936 (-0.837819) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.005914 \/ 0.011353 (-0.005439) | 0.003565 \/ 0.011008 (-0.007443) | 0.061575 \/ 0.038508 (0.023067) | 0.057677 \/ 0.023109 (0.034568) | 0.359753 \/ 0.275898 (0.083855) | 0.394135 \/ 0.323480 (0.070655) | 0.004648 \/ 0.007986 (-0.003338) | 0.002795 \/ 0.004328 (-0.001534) | 0.061877 \/ 0.004250 (0.057626) | 0.049673 \/ 0.037052 (0.012621) | 0.363120 \/ 0.258489 (0.104631) | 0.402685 \/ 0.293841 (0.108844) | 0.027021 \/ 0.128546 (-0.101525) | 0.008006 \/ 0.075646 (-0.067641) | 0.067398 \/ 0.419271 (-0.351874) | 0.044442 \/ 0.043533 (0.000909) | 0.364851 \/ 0.255139 (0.109712) | 0.387219 \/ 0.283200 (0.104019) | 0.027267 \/ 0.141683 (-0.114416) | 1.466675 \/ 1.452155 (0.014520) | 1.512607 \/ 1.492716 (0.019891) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.206156 \/ 0.018006 (0.188150) | 0.410877 \/ 0.000490 (0.410387) | 0.003061 \/ 0.000200 (0.002861) | 0.000068 \/ 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.024869 \/ 0.037411 (-0.012542) | 0.075736 \/ 0.014526 (0.061210) | 0.083922 \/ 0.176557 (-0.092634) | 0.139510 \/ 0.737135 (-0.597626) | 0.087685 \/ 0.296338 (-0.208654) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.414473 \/ 0.215209 (0.199264) | 4.150633 \/ 2.077655 (2.072979) | 2.132892 \/ 1.504120 (0.628773) | 1.964072 \/ 1.541195 (0.422878) | 2.003353 \/ 1.468490 (0.534863) | 0.498012 \/ 4.584777 (-4.086765) | 
3.010135 \/ 3.745712 (-0.735577) | 2.841130 \/ 5.269862 (-2.428732) | 1.826013 \/ 4.565676 (-2.739664) | 0.057443 \/ 0.424275 (-0.366832) | 0.006374 \/ 0.007607 (-0.001234) | 0.490337 \/ 0.226044 (0.264292) | 4.889628 \/ 2.268929 (2.620700) | 2.575626 \/ 55.444624 (-52.868998) | 2.246522 \/ 6.876477 (-4.629955) | 2.276183 \/ 2.142072 (0.134110) | 0.581465 \/ 4.805227 (-4.223763) | 0.123877 \/ 6.500664 (-6.376787) | 0.060339 \/ 0.075469 (-0.015130) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.333202 \/ 1.841788 (-0.508585) | 18.363558 \/ 8.074308 (10.289250) | 14.109356 \/ 10.191392 (3.917964) | 0.147358 \/ 0.680424 (-0.533066) | 0.016813 \/ 0.534201 (-0.517388) | 0.334815 \/ 0.579283 (-0.244468) | 0.366576 \/ 0.434364 (-0.067788) | 0.397223 \/ 0.540337 (-0.143115) | 0.547893 \/ 1.386936 (-0.839043) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#67ac60bcbebe9ddac70264951b1d584c93003cdf \"CML watermark\")\n"],"created_at":1689193876000,"updated_at":1689267512000,"closed_at":1689251829000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/6024","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6024","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6024.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6024.patch","merged_at":1689251829000},"body":"Fix for https:\/\/github.com\/huggingface\/datasets\/issues\/5963","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6024\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6024\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6023","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6023\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6023\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6023\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6023","id":1801272420,"node_id":"PR_kwDODunzps5VU7EG","number":6023,"title":"Fix `ClassLabel` min max check for `None` 
values","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
<details><summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details><summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007108 \/ 0.011353 (-0.004245) | 0.004446 \/ 0.011008 (-0.006562) | 0.084013 \/ 0.038508 (0.045505) | 0.084271 \/ 0.023109 (0.061162) | 0.324496 \/ 0.275898 (0.048598) | 0.347783 \/ 0.323480 (0.024303) | 0.004382 \/ 0.007986 (-0.003604) | 0.005200 \/ 0.004328 (0.000872) | 0.065117 \/ 0.004250 (0.060866) | 0.063368 \/ 0.037052 (0.026316) | 0.328731 \/ 0.258489 (0.070242) | 0.356676 \/ 0.293841 (0.062835) | 0.031155 \/ 0.128546 (-0.097392) | 0.008672 \/ 0.075646 (-0.066975) | 0.287573 \/ 0.419271 (-0.131698) | 0.053692 \/ 0.043533 (0.010160) | 0.308796 \/ 0.255139 (0.053657) | 0.330521 \/ 0.283200 (0.047321) | 0.025010 \/ 0.141683 (-0.116672) | 1.498968 \/ 1.452155 (0.046813) | 1.552096 \/ 1.492716 (0.059380) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.263580 \/ 0.018006 (0.245574) | 0.559765 \/ 0.000490 (0.559275) | 0.003450 \/ 0.000200 (0.003250) | 0.000079 \/ 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029403 \/ 0.037411 (-0.008008) | 0.088154 \/ 0.014526 (0.073628) | 0.100372 \/ 0.176557 (-0.076185) | 0.157777 \/ 0.737135 (-0.579359) | 0.102273 \/ 0.296338 (-0.194066) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.387027 \/ 0.215209 (0.171818) | 3.854260 \/ 2.077655 (1.776605) | 1.875159 \/ 1.504120 (0.371039) | 1.703734 \/ 1.541195 (0.162539) | 1.814305 \/ 1.468490 (0.345815) | 0.482524 \/ 4.584777 (-4.102253) | 
3.463602 \/ 3.745712 (-0.282110) | 4.004766 \/ 5.269862 (-1.265095) | 2.406751 \/ 4.565676 (-2.158925) | 0.057069 \/ 0.424275 (-0.367206) | 0.007448 \/ 0.007607 (-0.000159) | 0.465801 \/ 0.226044 (0.239757) | 4.636700 \/ 2.268929 (2.367771) | 2.329475 \/ 55.444624 (-53.115150) | 1.998330 \/ 6.876477 (-4.878146) | 2.264617 \/ 2.142072 (0.122544) | 0.577998 \/ 4.805227 (-4.227230) | 0.130846 \/ 6.500664 (-6.369818) | 0.059713 \/ 0.075469 (-0.015756) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.275931 \/ 1.841788 (-0.565857) | 20.396288 \/ 8.074308 (12.321980) | 13.875242 \/ 10.191392 (3.683850) | 0.164367 \/ 0.680424 (-0.516057) | 0.018573 \/ 0.534201 (-0.515628) | 0.397516 \/ 0.579283 (-0.181767) | 0.398977 \/ 0.434364 (-0.035387) | 0.462386 \/ 0.540337 (-0.077951) | 0.610129 \/ 1.386936 (-0.776807) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006912 \/ 0.011353 (-0.004441) | 0.004212 \/ 0.011008 (-0.006797) | 0.065707 \/ 0.038508 (0.027199) | 0.090435 \/ 0.023109 (0.067325) | 0.380539 \/ 0.275898 (0.104641) | 0.412692 \/ 0.323480 (0.089212) | 0.005545 \/ 0.007986 (-0.002441) | 0.003657 \/ 0.004328 (-0.000672) | 0.065380 \/ 0.004250 (0.061130) | 0.062901 \/ 0.037052 (0.025848) | 0.385931 \/ 0.258489 (0.127442) | 0.416272 \/ 0.293841 (0.122431) | 0.031974 \/ 0.128546 (-0.096572) | 0.008783 \/ 0.075646 (-0.066863) | 0.071424 \/ 0.419271 (-0.347847) | 0.049454 \/ 0.043533 (0.005921) | 0.374231 \/ 0.255139 (0.119092) | 0.386530 \/ 0.283200 (0.103331) | 0.025404 \/ 0.141683 (-0.116279) | 1.469869 \/ 1.452155 (0.017715) | 1.548629 \/ 1.492716 (0.055913) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.218413 \/ 0.018006 (0.200406) | 0.573863 \/ 0.000490 (0.573373) | 0.004156 \/ 0.000200 (0.003956) | 0.000097 \/ 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.032610 \/ 0.037411 (-0.004801) | 0.088270 \/ 0.014526 (0.073744) | 0.106821 \/ 0.176557 (-0.069735) | 0.164498 \/ 0.737135 (-0.572638) | 0.106881 \/ 0.296338 (-0.189457) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.433730 \/ 0.215209 (0.218520) | 4.323902 \/ 2.077655 (2.246247) | 2.308607 \/ 1.504120 (0.804487) | 2.138888 \/ 1.541195 (0.597693) | 2.246760 \/ 1.468490 (0.778269) | 0.486863 \/ 4.584777 (-4.097914) | 
3.561826 \/ 3.745712 (-0.183886) | 5.592685 \/ 5.269862 (0.322824) | 3.318560 \/ 4.565676 (-1.247116) | 0.057348 \/ 0.424275 (-0.366927) | 0.007434 \/ 0.007607 (-0.000174) | 0.506767 \/ 0.226044 (0.280723) | 5.083097 \/ 2.268929 (2.814168) | 2.780618 \/ 55.444624 (-52.664006) | 2.456924 \/ 6.876477 (-4.419553) | 2.564184 \/ 2.142072 (0.422112) | 0.580693 \/ 4.805227 (-4.224534) | 0.134471 \/ 6.500664 (-6.366194) | 0.062883 \/ 0.075469 (-0.012586) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.346618 \/ 1.841788 (-0.495169) | 20.547998 \/ 8.074308 (12.473690) | 14.404159 \/ 10.191392 (4.212767) | 0.176612 \/ 0.680424 (-0.503812) | 0.018372 \/ 0.534201 (-0.515829) | 0.395636 \/ 0.579283 (-0.183647) | 0.410661 \/ 0.434364 (-0.023703) | 0.468782 \/ 0.540337 (-0.071555) | 0.637476 \/ 1.386936 (-0.749460) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#0172d4dac0ca823e8bd293cfd4d28e78d92efe42 \"CML watermark\")\n","
<details><summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details><summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009896 \/ 0.011353 (-0.001457) | 0.004658 \/ 0.011008 (-0.006351) | 0.101185 \/ 0.038508 (0.062677) | 0.075480 \/ 0.023109 (0.052371) | 0.410620 \/ 0.275898 (0.134722) | 0.470639 \/ 0.323480 (0.147159) | 0.007042 \/ 0.007986 (-0.000943) | 0.003909 \/ 0.004328 (-0.000419) | 0.079676 \/ 0.004250 (0.075425) | 0.066921 \/ 0.037052 (0.029869) | 0.423624 \/ 0.258489 (0.165135) | 0.473008 \/ 0.293841 (0.179167) | 0.048492 \/ 0.128546 (-0.080054) | 0.012833 \/ 0.075646 (-0.062813) | 0.335286 \/ 0.419271 (-0.083985) | 0.083506 \/ 0.043533 (0.039973) | 0.401918 \/ 0.255139 (0.146779) | 0.467975 \/ 0.283200 (0.184775) | 0.050025 \/ 0.141683 (-0.091658) | 1.679392 \/ 1.452155 (0.227237) | 1.852812 \/ 1.492716 (0.360095) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.248067 \/ 0.018006 (0.230061) | 0.584818 \/ 0.000490 (0.584328) | 0.021558 \/ 0.000200 (0.021358) | 0.000104 \/ 0.000054 (0.000050) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.028572 \/ 0.037411 (-0.008839) | 0.097212 \/ 0.014526 (0.082686) | 0.121675 \/ 0.176557 (-0.054881) | 0.186597 \/ 0.737135 (-0.550538) | 0.122285 \/ 0.296338 (-0.174053) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.586279 \/ 0.215209 (0.371070) | 5.634402 \/ 2.077655 (3.556747) | 2.560648 \/ 1.504120 (1.056528) | 2.288796 \/ 1.541195 (0.747601) | 2.402580 \/ 1.468490 (0.934090) | 0.801453 \/ 4.584777 (-3.783324) | 
5.036654 \/ 3.745712 (1.290942) | 8.319972 \/ 5.269862 (3.050110) | 4.665620 \/ 4.565676 (0.099944) | 0.107292 \/ 0.424275 (-0.316983) | 0.009206 \/ 0.007607 (0.001599) | 0.766505 \/ 0.226044 (0.540461) | 7.333784 \/ 2.268929 (5.064856) | 3.601875 \/ 55.444624 (-51.842749) | 2.886388 \/ 6.876477 (-3.990089) | 3.231797 \/ 2.142072 (1.089725) | 1.179509 \/ 4.805227 (-3.625718) | 0.224656 \/ 6.500664 (-6.276008) | 0.084749 \/ 0.075469 (0.009280) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.772345 \/ 1.841788 (-0.069443) | 24.138788 \/ 8.074308 (16.064480) | 20.712416 \/ 10.191392 (10.521024) | 0.254655 \/ 0.680424 (-0.425769) | 0.028858 \/ 0.534201 (-0.505343) | 0.499314 \/ 0.579283 (-0.079969) | 0.605797 \/ 0.434364 (0.171433) | 0.567628 \/ 0.540337 (0.027290) | 0.752288 \/ 1.386936 (-0.634648) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.010134 \/ 0.011353 (-0.001219) | 0.004630 \/ 0.011008 (-0.006378) | 0.082282 \/ 0.038508 (0.043774) | 0.081722 \/ 0.023109 (0.058613) | 0.465018 \/ 0.275898 (0.189120) | 0.516392 \/ 0.323480 (0.192912) | 0.006618 \/ 0.007986 (-0.001368) | 0.004310 \/ 0.004328 (-0.000018) | 0.078990 \/ 0.004250 (0.074739) | 0.077729 \/ 0.037052 (0.040677) | 0.464892 \/ 0.258489 (0.206403) | 0.510551 \/ 0.293841 (0.216710) | 0.050750 \/ 0.128546 (-0.077796) | 0.014402 \/ 0.075646 (-0.061244) | 0.092587 \/ 0.419271 (-0.326685) | 0.074769 \/ 0.043533 (0.031237) | 0.468591 \/ 0.255139 (0.213452) | 0.508138 \/ 0.283200 (0.224938) | 0.047774 \/ 0.141683 (-0.093909) | 1.798354 \/ 1.452155 (0.346199) | 1.851431 \/ 1.492716 (0.358714) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.282528 \/ 0.018006 (0.264522) | 0.588286 \/ 0.000490 (0.587797) | 0.004892 \/ 0.000200 (0.004692) | 0.000136 \/ 0.000054 (0.000082) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.037048 \/ 0.037411 (-0.000364) | 0.101513 \/ 0.014526 (0.086987) | 0.133238 \/ 0.176557 (-0.043319) | 0.234799 \/ 0.737135 (-0.502336) | 0.120636 \/ 0.296338 (-0.175703) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.615377 \/ 0.215209 (0.400168) | 6.225717 \/ 2.077655 (4.148062) | 2.974137 \/ 1.504120 (1.470018) | 2.642168 \/ 1.541195 (1.100973) | 2.706051 \/ 1.468490 (1.237561) | 0.837171 \/ 4.584777 (-3.747606) | 
5.143368 \/ 3.745712 (1.397656) | 4.560241 \/ 5.269862 (-0.709621) | 2.838375 \/ 4.565676 (-1.727301) | 0.092505 \/ 0.424275 (-0.331770) | 0.008962 \/ 0.007607 (0.001355) | 0.726361 \/ 0.226044 (0.500317) | 7.323998 \/ 2.268929 (5.055070) | 3.650531 \/ 55.444624 (-51.794094) | 2.960886 \/ 6.876477 (-3.915591) | 3.003889 \/ 2.142072 (0.861816) | 0.979264 \/ 4.805227 (-3.825963) | 0.204531 \/ 6.500664 (-6.296133) | 0.078285 \/ 0.075469 (0.002816) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.774225 \/ 1.841788 (-0.067563) | 26.399536 \/ 8.074308 (18.325228) | 22.312890 \/ 10.191392 (12.121498) | 0.244651 \/ 0.680424 (-0.435773) | 0.026950 \/ 0.534201 (-0.507251) | 0.493037 \/ 0.579283 (-0.086246) | 0.620399 \/ 0.434364 (0.186036) | 0.748985 \/ 0.540337 (0.208648) | 0.799766 \/ 1.386936 (-0.587170) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#a49ac2864177ec4fb34c43b59a6e49de1f21f973 \"CML watermark\")\n"],"created_at":1689176772000,"updated_at":1689179366000,"closed_at":1689178684000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/6023","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6023","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6023.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6023.patch","merged_at":1689178684000},"body":"Fix #6022 ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6023\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6023\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6022","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6022\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6022\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6022\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/6022","id":1800092589,"node_id":"I_kwDODunzps5rSzut","number":6022,"title":"Batch map raises TypeError: '>=' not supported between instances of 'NoneType' and 
'int'","user":{"login":"codingl2k1","id":138426806,"node_id":"U_kgDOCEA5tg","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/138426806?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/codingl2k1","html_url":"https:\/\/github.com\/codingl2k1","followers_url":"https:\/\/api.github.com\/users\/codingl2k1\/followers","following_url":"https:\/\/api.github.com\/users\/codingl2k1\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/codingl2k1\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/codingl2k1\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/codingl2k1\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/codingl2k1\/orgs","repos_url":"https:\/\/api.github.com\/users\/codingl2k1\/repos","events_url":"https:\/\/api.github.com\/users\/codingl2k1\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/codingl2k1\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting! I've opened a PR with a fix."],"created_at":1689132017000,"updated_at":1689178686000,"closed_at":1689178685000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nWhen mapping some datasets with `batched=True`, datasets may raise an exeception:\r\n\r\n```python\r\nTraceback (most recent call last):\r\n File \"\/Users\/codingl2k1\/Work\/datasets\/venv\/lib\/python3.11\/site-packages\/multiprocess\/pool.py\", line 125, in worker\r\n result = (True, func(*args, **kwds))\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"\/Users\/codingl2k1\/Work\/datasets\/src\/datasets\/utils\/py_utils.py\", line 1328, in _write_generator_to_queue\r\n for i, result in enumerate(func(**kwargs)):\r\n File \"\/Users\/codingl2k1\/Work\/datasets\/src\/datasets\/arrow_dataset.py\", line 3483, in _map_single\r\n writer.write_batch(batch)\r\n File \"\/Users\/codingl2k1\/Work\/datasets\/src\/datasets\/arrow_writer.py\", line 549, in write_batch\r\n array = cast_array_to_feature(col_values, col_type) if col_type is not None else col_values\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"\/Users\/codingl2k1\/Work\/datasets\/src\/datasets\/table.py\", line 1831, in wrapper\r\n return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"\/Users\/codingl2k1\/Work\/datasets\/src\/datasets\/table.py\", line 1831, in \r\n return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"\/Users\/codingl2k1\/Work\/datasets\/src\/datasets\/table.py\", line 2063, in cast_array_to_feature\r\n return feature.cast_storage(array)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"\/Users\/codingl2k1\/Work\/datasets\/src\/datasets\/features\/features.py\", line 1098, in cast_storage\r\n if min_max[\"max\"] >= self.num_classes:\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nTypeError: '>=' not supported between instances of 'NoneType' and 'int'\r\nThe above exception was the direct cause of the following exception:\r\nTraceback (most recent call last):\r\n File \"\/Users\/codingl2k1\/Work\/datasets\/t1.py\", line 33, in \r\n ds = ds.map(transforms, num_proc=14, batched=True, batch_size=5)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File 
\"\/Users\/codingl2k1\/Work\/datasets\/src\/datasets\/dataset_dict.py\", line 850, in map\r\n {\r\n File \"\/Users\/codingl2k1\/Work\/datasets\/src\/datasets\/dataset_dict.py\", line 851, in \r\n k: dataset.map(\r\n ^^^^^^^^^^^^\r\n File \"\/Users\/codingl2k1\/Work\/datasets\/src\/datasets\/arrow_dataset.py\", line 577, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"\/Users\/codingl2k1\/Work\/datasets\/src\/datasets\/arrow_dataset.py\", line 542, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"\/Users\/codingl2k1\/Work\/datasets\/src\/datasets\/arrow_dataset.py\", line 3179, in map\r\n for rank, done, content in iflatmap_unordered(\r\n File \"\/Users\/codingl2k1\/Work\/datasets\/src\/datasets\/utils\/py_utils.py\", line 1368, in iflatmap_unordered\r\n [async_result.get(timeout=0.05) for async_result in async_results]\r\n File \"\/Users\/codingl2k1\/Work\/datasets\/src\/datasets\/utils\/py_utils.py\", line 1368, in \r\n [async_result.get(timeout=0.05) for async_result in async_results]\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"\/Users\/codingl2k1\/Work\/datasets\/venv\/lib\/python3.11\/site-packages\/multiprocess\/pool.py\", line 774, in get\r\n raise self._value\r\nTypeError: '>=' not supported between instances of 'NoneType' and 'int'\r\n```\n\n### Steps to reproduce the bug\n\n1. Checkout the latest main of datasets.\r\n2. Run the code:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndef transforms(examples):\r\n # examples[\"pixel_values\"] = [image.convert(\"RGB\").resize((100, 100)) for image in examples[\"image\"]]\r\n return examples\r\n\r\nds = load_dataset(\"scene_parse_150\")\r\nds = ds.map(transforms, num_proc=14, batched=True, batch_size=5)\r\nprint(ds)\r\n```\n\n### Expected behavior\n\nmap without exception.\n\n### Environment info\n\nDatasets: https:\/\/github.com\/huggingface\/datasets\/commit\/b8067c0262073891180869f700ebef5ac3dc5cce\r\nPython: 3.11.4\r\nSystem: Macos","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6022\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6022\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6021","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6021\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6021\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6021\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6021","id":1799785904,"node_id":"PR_kwDODunzps5VP11Q","number":6021,"title":"[docs] Update return statement of index 
search","user":{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007697 \/ 0.011353 (-0.003656) | 0.004233 \/ 0.011008 (-0.006776) | 0.087890 \/ 0.038508 (0.049382) | 0.065305 \/ 0.023109 (0.042196) | 0.366919 \/ 0.275898 (0.091020) | 0.399656 \/ 0.323480 (0.076176) | 0.006753 \/ 0.007986 (-0.001232) | 0.003428 \/ 0.004328 (-0.000900) | 0.070180 \/ 0.004250 (0.065930) | 0.054164 \/ 0.037052 (0.017112) | 0.377130 \/ 0.258489 (0.118641) | 0.403456 \/ 0.293841 (0.109615) | 0.042639 \/ 0.128546 (-0.085907) | 0.012396 \/ 0.075646 (-0.063250) | 0.314235 \/ 0.419271 (-0.105036) | 0.061976 \/ 0.043533 (0.018443) | 0.376959 \/ 0.255139 (0.121820) | 0.433313 \/ 0.283200 (0.150113) | 0.031253 \/ 0.141683 (-0.110430) | 1.555749 \/ 1.452155 (0.103594) | 1.643905 \/ 1.492716 (0.151189) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.208630 \/ 0.018006 (0.190624) | 0.519532 \/ 0.000490 (0.519042) | 0.003719 \/ 0.000200 (0.003519) | 0.000099 \/ 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.027403 \/ 0.037411 (-0.010008) | 0.080990 \/ 0.014526 (0.066464) | 0.090424 \/ 0.176557 (-0.086133) | 0.153922 \/ 0.737135 (-0.583213) | 0.098156 \/ 0.296338 (-0.198183) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.519453 \/ 0.215209 (0.304244) | 5.100089 \/ 2.077655 (3.022434) | 2.212165 \/ 1.504120 (0.708045) | 1.894405 \/ 1.541195 (0.353210) | 1.922914 \/ 1.468490 (0.454424) | 0.762443 \/ 4.584777 (-3.822334) | 
4.669214 \/ 3.745712 (0.923502) | 5.016066 \/ 5.269862 (-0.253796) | 3.128821 \/ 4.565676 (-1.436856) | 0.091541 \/ 0.424275 (-0.332734) | 0.007582 \/ 0.007607 (-0.000026) | 0.652753 \/ 0.226044 (0.426709) | 6.601375 \/ 2.268929 (4.332446) | 3.076948 \/ 55.444624 (-52.367677) | 2.250544 \/ 6.876477 (-4.625933) | 2.404059 \/ 2.142072 (0.261987) | 0.994917 \/ 4.805227 (-3.810311) | 0.200318 \/ 6.500664 (-6.300346) | 0.069354 \/ 0.075469 (-0.006115) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.482559 \/ 1.841788 (-0.359229) | 20.722092 \/ 8.074308 (12.647784) | 17.703217 \/ 10.191392 (7.511825) | 0.215370 \/ 0.680424 (-0.465053) | 0.028208 \/ 0.534201 (-0.505993) | 0.425992 \/ 0.579283 (-0.153291) | 0.492785 \/ 0.434364 (0.058421) | 0.474154 \/ 0.540337 (-0.066183) | 0.644599 \/ 1.386936 (-0.742337) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008372 \/ 0.011353 (-0.002981) | 0.004543 \/ 0.011008 (-0.006465) | 0.070564 \/ 0.038508 (0.032056) | 0.066855 \/ 0.023109 (0.043746) | 0.386724 \/ 0.275898 (0.110826) | 0.432184 \/ 0.323480 (0.108704) | 0.005250 \/ 0.007986 (-0.002736) | 0.003630 \/ 0.004328 (-0.000698) | 0.069310 \/ 0.004250 (0.065060) | 0.055759 \/ 0.037052 (0.018707) | 0.375789 \/ 0.258489 (0.117299) | 0.417335 \/ 0.293841 (0.123494) | 0.043424 \/ 0.128546 (-0.085122) | 0.013106 \/ 0.075646 (-0.062541) | 0.087836 \/ 0.419271 (-0.331436) | 0.057770 \/ 0.043533 (0.014237) | 0.396694 \/ 0.255139 (0.141555) | 0.439350 \/ 0.283200 (0.156150) | 0.031660 \/ 0.141683 (-0.110023) | 1.571339 \/ 1.452155 (0.119185) | 1.667169 \/ 1.492716 (0.174452) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.180534 \/ 0.018006 (0.162528) | 0.540027 \/ 0.000490 (0.539537) | 0.003573 \/ 0.000200 (0.003373) | 0.000141 \/ 0.000054 (0.000086) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.031380 \/ 0.037411 (-0.006032) | 0.083762 \/ 0.014526 (0.069236) | 0.098166 \/ 0.176557 (-0.078390) | 0.160761 \/ 0.737135 (-0.576374) | 0.097683 \/ 0.296338 (-0.198656) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.568074 \/ 0.215209 (0.352865) | 5.660544 \/ 2.077655 (3.582889) | 2.416698 \/ 1.504120 (0.912578) | 2.177096 \/ 1.541195 (0.635901) | 2.206178 \/ 1.468490 (0.737688) | 0.844864 \/ 4.584777 (-3.739912) | 
4.793636 \/ 3.745712 (1.047923) | 7.062387 \/ 5.269862 (1.792525) | 4.201228 \/ 4.565676 (-0.364449) | 0.091997 \/ 0.424275 (-0.332279) | 0.007881 \/ 0.007607 (0.000274) | 0.679466 \/ 0.226044 (0.453422) | 6.580268 \/ 2.268929 (4.311340) | 3.229907 \/ 55.444624 (-52.214717) | 2.524877 \/ 6.876477 (-4.351600) | 2.463796 \/ 2.142072 (0.321723) | 0.975627 \/ 4.805227 (-3.829600) | 0.186670 \/ 6.500664 (-6.313994) | 0.065307 \/ 0.075469 (-0.010163) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.501447 \/ 1.841788 (-0.340340) | 21.231037 \/ 8.074308 (13.156729) | 17.591671 \/ 10.191392 (7.400279) | 0.212745 \/ 0.680424 (-0.467679) | 0.026100 \/ 0.534201 (-0.508101) | 0.428391 \/ 0.579283 (-0.150892) | 0.535268 \/ 0.434364 (0.100904) | 0.506733 \/ 0.540337 (-0.033604) | 0.660832 \/ 1.386936 (-0.726104) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#962537d7ee9191438ef47a4185d0ba626b2ee949 \"CML watermark\")\n"],"created_at":1689111212000,"updated_at":1689181982000,"closed_at":1689181380000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/6021","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6021","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6021.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6021.patch","merged_at":1689181380000},"body":"Clarifies in the return statement of the docstring that the retrieval score is `IndexFlatL2` by default (see [PR](https:\/\/github.com\/huggingface\/transformers\/issues\/24739) and internal Slack [convo](https:\/\/huggingface.slack.com\/archives\/C01229B19EX\/p1689105179711689)), and fixes the formatting because multiple return values are not supported.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6021\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6021\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6020","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6020\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6020\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6020\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/6020","id":1799720536,"node_id":"I_kwDODunzps5rRY5Y","number":6020,"title":"Inconsistent \"The features can't be aligned\" error when combining map, multiprocessing, and variable length 
outputs","user":{"login":"kheyer","id":38166299,"node_id":"MDQ6VXNlcjM4MTY2Mjk5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38166299?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/kheyer","html_url":"https:\/\/github.com\/kheyer","followers_url":"https:\/\/api.github.com\/users\/kheyer\/followers","following_url":"https:\/\/api.github.com\/users\/kheyer\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/kheyer\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/kheyer\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/kheyer\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/kheyer\/orgs","repos_url":"https:\/\/api.github.com\/users\/kheyer\/repos","events_url":"https:\/\/api.github.com\/users\/kheyer\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/kheyer\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This scenario currently requires explicitly passing the target features (to avoid the error): \r\n```python\r\nimport datasets\r\n\r\n...\r\n\r\nfeatures = dataset.features\r\nfeatures[\"output\"] = = [{\"test\": datasets.Value(\"int64\")}]\r\ntest2 = dataset.map(lambda row, idx: test_func(row, idx), with_indices=True, num_proc=32, features=features)\r\n```"],"created_at":1689108038000,"updated_at":1689177504000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nI'm using a dataset with map and multiprocessing to run a function that returned a variable length list of outputs. This output list may be empty. Normally this is handled fine, but there is an edge case that crops up when using multiprocessing. In some cases, an empty list result ends up in a dataset shard consisting of a single item. This results in a `The features can't be aligned` error that is difficult to debug because it depends on the number of processes\/shards used.\r\n\r\nI've reproduced a minimal example below. 
My current workaround is to fill empty results with a dummy value that I filter after, but this was a weird error that took a while to track down.\n\n### Steps to reproduce the bug\n\n```python\r\nimport datasets\r\n\r\ndataset = datasets.Dataset.from_list([{'idx':i} for i in range(60)])\r\n\r\ndef test_func(row, idx):\r\n    if idx==58:\r\n        return {'output': []}\r\n    else:\r\n        return {'output' : [{'test':1}, {'test':2}]}\r\n\r\n# this works fine\r\ntest1 = dataset.map(lambda row, idx: test_func(row, idx), with_indices=True, num_proc=4)\r\n\r\n# this fails\r\ntest2 = dataset.map(lambda row, idx: test_func(row, idx), with_indices=True, num_proc=32)\r\n>ValueError: The features can't be aligned because the key output of features {'idx': Value(dtype='int64', id=None), 'output': Sequence(feature=Value(dtype='null', id=None), length=-1, id=None)} has unexpected type - Sequence(feature=Value(dtype='null', id=None), length=-1, id=None) (expected either [{'test': Value(dtype='int64', id=None)}] or Value(\"null\").\r\n```\r\n\r\nThe error occurs during the check\r\n\r\n```python\r\n_check_if_features_can_be_aligned([dset.features for dset in dsets])\r\n```\r\n\r\nWhen the multiprocessing splitting lines up just right with the empty return value, one of the `dset` in `dsets` will have a single item with an empty list value, causing the error.\n\n### Expected behavior\n\nExpected behavior is that the result would be the same regardless of the `num_proc` value used.\n\n### Environment info\n\nDatasets version 2.11.0\r\nPython 3.9.16","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6020\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6020\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6019","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6019\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6019\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6019\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6019","id":1799532822,"node_id":"PR_kwDODunzps5VPAlD","number":6019,"title":"Improve 
logging","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007782 \/ 0.011353 (-0.003571) | 0.004451 \/ 0.011008 (-0.006557) | 0.099928 \/ 0.038508 (0.061420) | 0.081534 \/ 0.023109 (0.058425) | 0.379382 \/ 0.275898 (0.103484) | 0.410652 \/ 0.323480 (0.087172) | 0.005967 \/ 0.007986 (-0.002019) | 0.003702 \/ 0.004328 (-0.000627) | 0.076359 \/ 0.004250 (0.072109) | 0.066721 \/ 0.037052 (0.029669) | 0.383595 \/ 0.258489 (0.125106) | 0.423854 \/ 0.293841 (0.130013) | 0.032796 \/ 0.128546 (-0.095750) | 0.009728 \/ 0.075646 (-0.065918) | 0.344347 \/ 0.419271 (-0.074925) | 0.056320 \/ 0.043533 (0.012788) | 0.379974 \/ 0.255139 (0.124835) | 0.401294 \/ 0.283200 (0.118094) | 0.024110 \/ 0.141683 (-0.117572) | 1.804194 \/ 1.452155 (0.352039) | 1.860240 \/ 1.492716 (0.367523) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.233803 \/ 0.018006 (0.215797) | 0.506893 \/ 0.000490 (0.506404) | 0.003894 \/ 0.000200 (0.003694) | 0.000090 \/ 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.033328 \/ 0.037411 (-0.004083) | 0.098661 \/ 0.014526 (0.084136) | 0.114971 \/ 0.176557 (-0.061586) | 0.186815 \/ 0.737135 (-0.550321) | 0.115490 \/ 0.296338 (-0.180848) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.422590 \/ 0.215209 (0.207381) | 4.277189 \/ 2.077655 (2.199535) | 2.095565 \/ 1.504120 (0.591445) | 2.040825 \/ 1.541195 (0.499630) | 2.162562 \/ 1.468490 (0.694072) | 0.578602 \/ 4.584777 (-4.006175) | 
4.203474 \/ 3.745712 (0.457762) | 6.674595 \/ 5.269862 (1.404734) | 3.913251 \/ 4.565676 (-0.652426) | 0.067777 \/ 0.424275 (-0.356498) | 0.008716 \/ 0.007607 (0.001109) | 0.548704 \/ 0.226044 (0.322660) | 5.162120 \/ 2.268929 (2.893192) | 2.600250 \/ 55.444624 (-52.844374) | 2.232730 \/ 6.876477 (-4.643747) | 2.485617 \/ 2.142072 (0.343544) | 0.650872 \/ 4.805227 (-4.154355) | 0.148022 \/ 6.500664 (-6.352642) | 0.064795 \/ 0.075469 (-0.010674) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.399439 \/ 1.841788 (-0.442349) | 22.438959 \/ 8.074308 (14.364651) | 16.447831 \/ 10.191392 (6.256439) | 0.202003 \/ 0.680424 (-0.478421) | 0.026200 \/ 0.534201 (-0.508001) | 0.472966 \/ 0.579283 (-0.106317) | 0.491621 \/ 0.434364 (0.057257) | 0.551580 \/ 0.540337 (0.011242) | 0.751420 \/ 1.386936 (-0.635516) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007241 \/ 0.011353 (-0.004112) | 0.004434 \/ 0.011008 (-0.006574) | 0.075872 \/ 0.038508 (0.037364) | 0.080094 \/ 0.023109 (0.056985) | 0.459244 \/ 0.275898 (0.183346) | 0.492482 \/ 0.323480 (0.169002) | 0.005791 \/ 0.007986 (-0.002194) | 0.003657 \/ 0.004328 (-0.000671) | 0.075214 \/ 0.004250 (0.070964) | 0.064208 \/ 0.037052 (0.027156) | 0.464195 \/ 0.258489 (0.205706) | 0.497809 \/ 0.293841 (0.203968) | 0.036301 \/ 0.128546 (-0.092245) | 0.009855 \/ 0.075646 (-0.065791) | 0.080826 \/ 0.419271 (-0.338445) | 0.056700 \/ 0.043533 (0.013167) | 0.452850 \/ 0.255139 (0.197711) | 0.490738 \/ 0.283200 (0.207538) | 0.024145 \/ 0.141683 (-0.117538) | 1.689911 \/ 1.452155 (0.237757) | 1.789803 \/ 1.492716 (0.297087) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.247741 \/ 0.018006 (0.229735) | 0.486769 \/ 0.000490 (0.486279) | 0.000418 \/ 0.000200 (0.000218) | 0.000060 \/ 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.036317 \/ 0.037411 (-0.001094) | 0.104943 \/ 0.014526 (0.090417) | 0.120972 \/ 0.176557 (-0.055585) | 0.188461 \/ 0.737135 (-0.548674) | 0.120926 \/ 0.296338 (-0.175412) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.465788 \/ 0.215209 (0.250579) | 4.662369 \/ 2.077655 (2.584714) | 2.442241 \/ 1.504120 (0.938121) | 2.266328 \/ 1.541195 (0.725133) | 2.438998 \/ 1.468490 (0.970508) | 0.531384 \/ 4.584777 (-4.053393) | 
4.125286 \/ 3.745712 (0.379574) | 3.920912 \/ 5.269862 (-1.348950) | 2.292149 \/ 4.565676 (-2.273528) | 0.070146 \/ 0.424275 (-0.354129) | 0.008887 \/ 0.007607 (0.001280) | 0.598181 \/ 0.226044 (0.372137) | 5.726454 \/ 2.268929 (3.457526) | 3.081836 \/ 55.444624 (-52.362788) | 2.683508 \/ 6.876477 (-4.192969) | 2.587350 \/ 2.142072 (0.445278) | 0.604736 \/ 4.805227 (-4.200491) | 0.141303 \/ 6.500664 (-6.359362) | 0.065020 \/ 0.075469 (-0.010449) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.481850 \/ 1.841788 (-0.359938) | 22.259592 \/ 8.074308 (14.185284) | 16.304290 \/ 10.191392 (6.112898) | 0.173514 \/ 0.680424 (-0.506909) | 0.021590 \/ 0.534201 (-0.512611) | 0.471753 \/ 0.579283 (-0.107531) | 0.472132 \/ 0.434364 (0.037768) | 0.563344 \/ 0.540337 (0.023007) | 0.738509 \/ 1.386936 (-0.648427) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#1cb7ae56dbd814945a4982c63bf0e50859a7b93a \"CML watermark\")\n","
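The diff for this "Improve logging" PR isn't part of this dump, so as background, a hedged sketch of the public logging controls in `datasets` that such a change would surface through (the PR's actual changes may differ):

```python
from datasets.utils.logging import (
    disable_progress_bar,
    enable_progress_bar,
    get_verbosity,
    set_verbosity_info,
)

set_verbosity_info()    # library log level: debug < info < warning < error < critical
print(get_verbosity())  # numeric level, e.g. 20 for INFO

disable_progress_bar()  # progress bars (map, download, ...) are toggled separately
enable_progress_bar()
```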
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.005910 \/ 0.011353 (-0.005443) | 0.004372 \/ 0.011008 (-0.006636) | 0.081583 \/ 0.038508 (0.043075) | 0.069598 \/ 0.023109 (0.046488) | 0.346360 \/ 0.275898 (0.070462) | 0.360733 \/ 0.323480 (0.037254) | 0.004725 \/ 0.007986 (-0.003261) | 0.003106 \/ 0.004328 (-0.001222) | 0.059916 \/ 0.004250 (0.055666) | 0.053242 \/ 0.037052 (0.016189) | 0.353551 \/ 0.258489 (0.095062) | 0.373052 \/ 0.293841 (0.079211) | 0.029036 \/ 0.128546 (-0.099510) | 0.007894 \/ 0.075646 (-0.067753) | 0.284131 \/ 0.419271 (-0.135140) | 0.049348 \/ 0.043533 (0.005815) | 0.347409 \/ 0.255139 (0.092270) | 0.355029 \/ 0.283200 (0.071830) | 0.022511 \/ 0.141683 (-0.119171) | 1.454495 \/ 1.452155 (0.002340) | 1.439551 \/ 1.492716 (-0.053166) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.218889 \/ 0.018006 (0.200883) | 0.478734 \/ 0.000490 (0.478244) | 0.003758 \/ 0.000200 (0.003558) | 0.000083 \/ 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025759 \/ 0.037411 (-0.011653) | 0.082511 \/ 0.014526 (0.067985) | 0.087578 \/ 0.176557 (-0.088979) | 0.137760 \/ 0.737135 (-0.599375) | 0.093312 \/ 0.296338 (-0.203027) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.378963 \/ 0.215209 (0.163754) | 3.645846 \/ 2.077655 (1.568191) | 1.741135 \/ 1.504120 (0.237015) | 1.599166 \/ 1.541195 (0.057972) | 1.610817 \/ 1.468490 (0.142327) | 0.459209 \/ 4.584777 (-4.125568) | 
3.484857 \/ 3.745712 (-0.260855) | 3.928109 \/ 5.269862 (-1.341752) | 2.419784 \/ 4.565676 (-2.145892) | 0.051987 \/ 0.424275 (-0.372288) | 0.006495 \/ 0.007607 (-0.001112) | 0.427311 \/ 0.226044 (0.201267) | 4.226378 \/ 2.268929 (1.957450) | 2.212331 \/ 55.444624 (-53.232293) | 1.916213 \/ 6.876477 (-4.960264) | 1.978809 \/ 2.142072 (-0.163263) | 0.547351 \/ 4.805227 (-4.257876) | 0.121110 \/ 6.500664 (-6.379554) | 0.054163 \/ 0.075469 (-0.021306) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.228594 \/ 1.841788 (-0.613193) | 19.410901 \/ 8.074308 (11.336593) | 13.014722 \/ 10.191392 (2.823330) | 0.156449 \/ 0.680424 (-0.523975) | 0.021032 \/ 0.534201 (-0.513169) | 0.403976 \/ 0.579283 (-0.175307) | 0.413885 \/ 0.434364 (-0.020479) | 0.470465 \/ 0.540337 (-0.069873) | 0.641322 \/ 1.386936 (-0.745614) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007210 \/ 0.011353 (-0.004143) | 0.003824 \/ 0.011008 (-0.007185) | 0.058227 \/ 0.038508 (0.019719) | 0.076211 \/ 0.023109 (0.053102) | 0.336626 \/ 0.275898 (0.060728) | 0.420542 \/ 0.323480 (0.097062) | 0.006178 \/ 0.007986 (-0.001808) | 0.003332 \/ 0.004328 (-0.000997) | 0.058073 \/ 0.004250 (0.053823) | 0.062485 \/ 0.037052 (0.025432) | 0.386175 \/ 0.258489 (0.127686) | 0.415659 \/ 0.293841 (0.121818) | 0.031264 \/ 0.128546 (-0.097282) | 0.007502 \/ 0.075646 (-0.068144) | 0.072079 \/ 0.419271 (-0.347192) | 0.055860 \/ 0.043533 (0.012327) | 0.343508 \/ 0.255139 (0.088369) | 0.437844 \/ 0.283200 (0.154645) | 0.032852 \/ 0.141683 (-0.108831) | 1.409241 \/ 1.452155 (-0.042913) | 1.623949 \/ 1.492716 (0.131233) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.207511 \/ 0.018006 (0.189504) | 0.464149 \/ 0.000490 (0.463660) | 0.003248 \/ 0.000200 (0.003048) | 0.000226 \/ 0.000054 (0.000172) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030767 \/ 0.037411 (-0.006645) | 0.079169 \/ 0.014526 (0.064643) | 0.093111 \/ 0.176557 (-0.083445) | 0.153369 \/ 0.737135 (-0.583767) | 0.092939 \/ 0.296338 (-0.203400) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.375602 \/ 0.215209 (0.160392) | 3.968612 \/ 2.077655 (1.890957) | 2.081749 \/ 1.504120 (0.577629) | 1.899772 \/ 1.541195 (0.358577) | 1.847923 \/ 1.468490 (0.379433) | 0.442867 \/ 4.584777 (-4.141910) | 
3.646664 \/ 3.745712 (-0.099048) | 5.870600 \/ 5.269862 (0.600739) | 3.356698 \/ 4.565676 (-1.208979) | 0.051422 \/ 0.424275 (-0.372853) | 0.006006 \/ 0.007607 (-0.001601) | 0.442439 \/ 0.226044 (0.216395) | 4.466256 \/ 2.268929 (2.197328) | 2.483832 \/ 55.444624 (-52.960792) | 2.105612 \/ 6.876477 (-4.770865) | 2.060650 \/ 2.142072 (-0.081422) | 0.531119 \/ 4.805227 (-4.274108) | 0.123436 \/ 6.500664 (-6.377228) | 0.059838 \/ 0.075469 (-0.015632) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.283042 \/ 1.841788 (-0.558746) | 19.688251 \/ 8.074308 (11.613943) | 13.346386 \/ 10.191392 (3.154994) | 0.197463 \/ 0.680424 (-0.482961) | 0.018484 \/ 0.534201 (-0.515717) | 0.391727 \/ 0.579283 (-0.187556) | 0.425061 \/ 0.434364 (-0.009303) | 0.448177 \/ 0.540337 (-0.092160) | 0.653694 \/ 1.386936 (-0.733242) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#01604752fe89d290479fa406b1a24ac1f346826e \"CML watermark\")\n","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008966 \/ 0.011353 (-0.002387) | 0.005195 \/ 0.011008 (-0.005813) | 0.102879 \/ 0.038508 (0.064371) | 0.090902 \/ 0.023109 (0.067792) | 0.434397 \/ 0.275898 (0.158498) | 0.454013 \/ 0.323480 (0.130534) | 0.008507 \/ 0.007986 (0.000521) | 0.005000 \/ 0.004328 (0.000671) | 0.075789 \/ 0.004250 (0.071538) | 0.067608 \/ 0.037052 (0.030555) | 0.435091 \/ 0.258489 (0.176602) | 0.469411 \/ 0.293841 (0.175570) | 0.050859 \/ 0.128546 (-0.077687) | 0.013560 \/ 0.075646 (-0.062086) | 0.345473 \/ 0.419271 (-0.073799) | 0.094974 \/ 0.043533 (0.051441) | 0.429626 \/ 0.255139 (0.174487) | 0.434290 \/ 0.283200 (0.151090) | 0.052269 \/ 0.141683 (-0.089413) | 1.700549 \/ 1.452155 (0.248395) | 1.890693 \/ 1.492716 (0.397976) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.296618 \/ 0.018006 (0.278612) | 0.613908 \/ 0.000490 (0.613419) | 0.000484 \/ 0.000200 (0.000284) | 0.000086 \/ 0.000054 (0.000032) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.034346 \/ 0.037411 (-0.003065) | 0.096836 \/ 0.014526 (0.082310) | 0.113332 \/ 0.176557 (-0.063224) | 0.194464 \/ 0.737135 (-0.542671) | 0.111732 \/ 0.296338 (-0.184606) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.624954 \/ 0.215209 (0.409745) | 6.442193 \/ 2.077655 (4.364538) | 2.818331 \/ 1.504120 (1.314211) | 2.529607 \/ 1.541195 (0.988413) | 2.549026 \/ 1.468490 (1.080536) | 0.967367 \/ 4.584777 (-3.617410) | 
5.446885 \/ 3.745712 (1.701173) | 6.259099 \/ 5.269862 (0.989237) | 3.652936 \/ 4.565676 (-0.912740) | 0.106420 \/ 0.424275 (-0.317855) | 0.011293 \/ 0.007607 (0.003686) | 0.772026 \/ 0.226044 (0.545982) | 7.823986 \/ 2.268929 (5.555057) | 3.725328 \/ 55.444624 (-51.719297) | 2.851489 \/ 6.876477 (-4.024988) | 3.013722 \/ 2.142072 (0.871649) | 1.045090 \/ 4.805227 (-3.760137) | 0.213174 \/ 6.500664 (-6.287490) | 0.077104 \/ 0.075469 (0.001635) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.657135 \/ 1.841788 (-0.184652) | 24.547604 \/ 8.074308 (16.473296) | 19.989533 \/ 10.191392 (9.798141) | 0.257139 \/ 0.680424 (-0.423285) | 0.028448 \/ 0.534201 (-0.505753) | 0.490801 \/ 0.579283 (-0.088482) | 0.628072 \/ 0.434364 (0.193708) | 0.584873 \/ 0.540337 (0.044536) | 0.825258 \/ 1.386936 (-0.561678) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009258 \/ 0.011353 (-0.002095) | 0.005660 \/ 0.011008 (-0.005348) | 0.080577 \/ 0.038508 (0.042069) | 0.095786 \/ 0.023109 (0.072676) | 0.473334 \/ 0.275898 (0.197436) | 0.527962 \/ 0.323480 (0.204482) | 0.006537 \/ 0.007986 (-0.001449) | 0.004411 \/ 0.004328 (0.000083) | 0.080702 \/ 0.004250 (0.076452) | 0.077020 \/ 0.037052 (0.039968) | 0.483205 \/ 0.258489 (0.224716) | 0.556916 \/ 0.293841 (0.263076) | 0.047670 \/ 0.128546 (-0.080877) | 0.016647 \/ 0.075646 (-0.058999) | 0.090653 \/ 0.419271 (-0.328619) | 0.062122 \/ 0.043533 (0.018589) | 0.498326 \/ 0.255139 (0.243187) | 0.546572 \/ 0.283200 (0.263372) | 0.037525 \/ 0.141683 (-0.104157) | 1.869520 \/ 1.452155 (0.417365) | 1.915335 \/ 1.492716 (0.422619) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.248287 \/ 0.018006 (0.230281) | 0.611440 \/ 0.000490 (0.610950) | 0.004102 \/ 0.000200 (0.003902) | 0.000132 \/ 0.000054 (0.000078) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.038228 \/ 0.037411 (0.000817) | 0.103510 \/ 0.014526 (0.088984) | 0.114337 \/ 0.176557 (-0.062219) | 0.189662 \/ 0.737135 (-0.547473) | 0.119078 \/ 0.296338 (-0.177260) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.606622 \/ 0.215209 (0.391413) | 6.053900 \/ 2.077655 (3.976246) | 2.857972 \/ 1.504120 (1.353852) | 2.549756 \/ 1.541195 (1.008561) | 2.584557 \/ 1.468490 (1.116067) | 0.930431 \/ 4.584777 (-3.654346) | 
5.524077 \/ 3.745712 (1.778365) | 7.858406 \/ 5.269862 (2.588545) | 4.890697 \/ 4.565676 (0.325020) | 0.095356 \/ 0.424275 (-0.328919) | 0.008614 \/ 0.007607 (0.001007) | 0.774227 \/ 0.226044 (0.548182) | 7.470215 \/ 2.268929 (5.201287) | 3.784820 \/ 55.444624 (-51.659805) | 3.199364 \/ 6.876477 (-3.677113) | 3.212002 \/ 2.142072 (1.069929) | 1.054104 \/ 4.805227 (-3.751123) | 0.226044 \/ 6.500664 (-6.274620) | 0.092237 \/ 0.075469 (0.016768) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.801054 \/ 1.841788 (-0.040734) | 24.220404 \/ 8.074308 (16.146096) | 21.652936 \/ 10.191392 (11.461544) | 0.247004 \/ 0.680424 (-0.433420) | 0.029651 \/ 0.534201 (-0.504550) | 0.475702 \/ 0.579283 (-0.103581) | 0.621121 \/ 0.434364 (0.186757) | 0.570489 \/ 0.540337 (0.030151) | 0.768840 \/ 1.386936 (-0.618096) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#b2fc21eda345643fb57d1d1167ebed9043310911 \"CML watermark\")\n","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009223 \/ 0.011353 (-0.002130) | 0.005750 \/ 0.011008 (-0.005258) | 0.105264 \/ 0.038508 (0.066756) | 0.088478 \/ 0.023109 (0.065369) | 0.461119 \/ 0.275898 (0.185221) | 0.481115 \/ 0.323480 (0.157636) | 0.006366 \/ 0.007986 (-0.001619) | 0.004515 \/ 0.004328 (0.000186) | 0.079296 \/ 0.004250 (0.075045) | 0.063483 \/ 0.037052 (0.026430) | 0.444490 \/ 0.258489 (0.186001) | 0.496474 \/ 0.293841 (0.202634) | 0.048568 \/ 0.128546 (-0.079978) | 0.013574 \/ 0.075646 (-0.062073) | 0.379213 \/ 0.419271 (-0.040059) | 0.086464 \/ 0.043533 (0.042932) | 0.437526 \/ 0.255139 (0.182387) | 0.447117 \/ 0.283200 (0.163917) | 0.049502 \/ 0.141683 (-0.092180) | 1.749146 \/ 1.452155 (0.296992) | 1.831082 \/ 1.492716 (0.338365) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.268205 \/ 0.018006 (0.250199) | 0.627406 \/ 0.000490 (0.626917) | 0.005439 \/ 0.000200 (0.005239) | 0.000128 \/ 0.000054 (0.000074) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030564 \/ 0.037411 (-0.006848) | 0.096365 \/ 0.014526 (0.081840) | 0.117484 \/ 0.176557 (-0.059072) | 0.189104 \/ 0.737135 (-0.548032) | 0.118073 \/ 0.296338 (-0.178266) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.618229 \/ 0.215209 (0.403019) | 6.437853 \/ 2.077655 (4.360199) | 2.789946 \/ 1.504120 (1.285826) | 2.339245 \/ 1.541195 (0.798050) | 2.588779 \/ 1.468490 (1.120289) | 0.921008 \/ 4.584777 (-3.663769) | 
5.402940 \/ 3.745712 (1.657227) | 4.818783 \/ 5.269862 (-0.451078) | 3.162259 \/ 4.565676 (-1.403417) | 0.108501 \/ 0.424275 (-0.315774) | 0.009384 \/ 0.007607 (0.001777) | 0.766811 \/ 0.226044 (0.540766) | 7.624629 \/ 2.268929 (5.355701) | 3.442420 \/ 55.444624 (-52.002204) | 2.759967 \/ 6.876477 (-4.116510) | 3.049644 \/ 2.142072 (0.907572) | 1.113308 \/ 4.805227 (-3.691919) | 0.223923 \/ 6.500664 (-6.276741) | 0.079156 \/ 0.075469 (0.003687) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.683318 \/ 1.841788 (-0.158470) | 25.062141 \/ 8.074308 (16.987833) | 21.777131 \/ 10.191392 (11.585739) | 0.266939 \/ 0.680424 (-0.413485) | 0.029670 \/ 0.534201 (-0.504531) | 0.476761 \/ 0.579283 (-0.102522) | 0.622080 \/ 0.434364 (0.187716) | 0.601781 \/ 0.540337 (0.061443) | 0.785126 \/ 1.386936 (-0.601811) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.010198 \/ 0.011353 (-0.001155) | 0.005777 \/ 0.011008 (-0.005231) | 0.083003 \/ 0.038508 (0.044495) | 0.093411 \/ 0.023109 (0.070302) | 0.496178 \/ 0.275898 (0.220280) | 0.554670 \/ 0.323480 (0.231190) | 0.008351 \/ 0.007986 (0.000365) | 0.004678 \/ 0.004328 (0.000350) | 0.083631 \/ 0.004250 (0.079381) | 0.075538 \/ 0.037052 (0.038485) | 0.492410 \/ 0.258489 (0.233921) | 0.545209 \/ 0.293841 (0.251368) | 0.048365 \/ 0.128546 (-0.080181) | 0.014219 \/ 0.075646 (-0.061427) | 0.100749 \/ 0.419271 (-0.318523) | 0.063431 \/ 0.043533 (0.019898) | 0.511115 \/ 0.255139 (0.255976) | 0.532965 \/ 0.283200 (0.249765) | 0.037968 \/ 0.141683 (-0.103715) | 1.940268 \/ 1.452155 (0.488113) | 2.032934 \/ 1.492716 (0.540217) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.238179 \/ 0.018006 (0.220172) | 0.605767 \/ 0.000490 (0.605277) | 0.004033 \/ 0.000200 (0.003833) | 0.000125 \/ 0.000054 (0.000071) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.036436 \/ 0.037411 (-0.000975) | 0.108034 \/ 0.014526 (0.093509) | 0.118624 \/ 0.176557 (-0.057933) | 0.183079 \/ 0.737135 (-0.554056) | 0.121739 \/ 0.296338 (-0.174600) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.630538 \/ 0.215209 (0.415329) | 6.552184 \/ 2.077655 (4.474529) | 3.003412 \/ 1.504120 (1.499292) | 2.669026 \/ 1.541195 (1.127832) | 2.791109 \/ 1.468490 (1.322619) | 0.884003 \/ 4.584777 (-3.700774) | 
5.538660 \/ 3.745712 (1.792947) | 5.126708 \/ 5.269862 (-0.143154) | 3.120825 \/ 4.565676 (-1.444852) | 0.101178 \/ 0.424275 (-0.323097) | 0.009027 \/ 0.007607 (0.001420) | 0.785914 \/ 0.226044 (0.559869) | 7.994720 \/ 2.268929 (5.725792) | 4.061996 \/ 55.444624 (-51.382629) | 3.263230 \/ 6.876477 (-3.613247) | 3.288622 \/ 2.142072 (1.146550) | 1.141867 \/ 4.805227 (-3.663360) | 0.255287 \/ 6.500664 (-6.245378) | 0.100637 \/ 0.075469 (0.025168) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.769821 \/ 1.841788 (-0.071967) | 24.994008 \/ 8.074308 (16.919700) | 21.765971 \/ 10.191392 (11.574579) | 0.268493 \/ 0.680424 (-0.411931) | 0.028047 \/ 0.534201 (-0.506154) | 0.489472 \/ 0.579283 (-0.089811) | 0.594809 \/ 0.434364 (0.160445) | 0.613578 \/ 0.540337 (0.073241) | 0.879360 \/ 1.386936 (-0.507576) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#b85b1154aef2a9ab4d558f60d91623f2cc1583c4 \"CML watermark\")\n","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006003 \/ 0.011353 (-0.005350) | 0.003590 \/ 0.011008 (-0.007418) | 0.084657 \/ 0.038508 (0.046149) | 0.057884 \/ 0.023109 (0.034775) | 0.318347 \/ 0.275898 (0.042449) | 0.345976 \/ 0.323480 (0.022496) | 0.004706 \/ 0.007986 (-0.003279) | 0.002921 \/ 0.004328 (-0.001407) | 0.061850 \/ 0.004250 (0.057600) | 0.050558 \/ 0.037052 (0.013505) | 0.320877 \/ 0.258489 (0.062388) | 0.356062 \/ 0.293841 (0.062222) | 0.027511 \/ 0.128546 (-0.101035) | 0.007954 \/ 0.075646 (-0.067693) | 0.260290 \/ 0.419271 (-0.158981) | 0.051207 \/ 0.043533 (0.007674) | 0.334423 \/ 0.255139 (0.079284) | 0.338575 \/ 0.283200 (0.055375) | 0.022330 \/ 0.141683 (-0.119353) | 1.445446 \/ 1.452155 (-0.006709) | 1.500626 \/ 1.492716 (0.007910) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.192440 \/ 0.018006 (0.174433) | 0.428455 \/ 0.000490 (0.427965) | 0.000318 \/ 0.000200 (0.000118) | 0.000056 \/ 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.022933 \/ 0.037411 (-0.014478) | 0.072795 \/ 0.014526 (0.058269) | 0.081149 \/ 0.176557 (-0.095407) | 0.142941 \/ 0.737135 (-0.594195) | 0.082410 \/ 0.296338 (-0.213928) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.405220 \/ 0.215209 (0.190011) | 4.048585 \/ 2.077655 (1.970931) | 2.027908 \/ 1.504120 (0.523788) | 1.887828 \/ 1.541195 (0.346633) | 2.131780 \/ 1.468490 (0.663290) | 0.502847 \/ 4.584777 (-4.081930) | 
3.069498 \/ 3.745712 (-0.676215) | 4.094774 \/ 5.269862 (-1.175088) | 2.544004 \/ 4.565676 (-2.021673) | 0.059540 \/ 0.424275 (-0.364735) | 0.006501 \/ 0.007607 (-0.001106) | 0.477218 \/ 0.226044 (0.251173) | 4.764961 \/ 2.268929 (2.496032) | 2.434594 \/ 55.444624 (-53.010030) | 2.104833 \/ 6.876477 (-4.771644) | 2.263059 \/ 2.142072 (0.120987) | 0.591755 \/ 4.805227 (-4.213472) | 0.131167 \/ 6.500664 (-6.369497) | 0.061808 \/ 0.075469 (-0.013661) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.345364 \/ 1.841788 (-0.496424) | 18.122584 \/ 8.074308 (10.048276) | 13.318689 \/ 10.191392 (3.127297) | 0.144526 \/ 0.680424 (-0.535898) | 0.016997 \/ 0.534201 (-0.517204) | 0.336036 \/ 0.579283 (-0.243247) | 0.359532 \/ 0.434364 (-0.074832) | 0.386945 \/ 0.540337 (-0.153392) | 0.538659 \/ 1.386936 (-0.848277) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006088 \/ 0.011353 (-0.005265) | 0.003684 \/ 0.011008 (-0.007324) | 0.062340 \/ 0.038508 (0.023832) | 0.058461 \/ 0.023109 (0.035352) | 0.360134 \/ 0.275898 (0.084236) | 0.393298 \/ 0.323480 (0.069818) | 0.004664 \/ 0.007986 (-0.003322) | 0.002909 \/ 0.004328 (-0.001420) | 0.062668 \/ 0.004250 (0.058418) | 0.050145 \/ 0.037052 (0.013092) | 0.361897 \/ 0.258489 (0.103408) | 0.402008 \/ 0.293841 (0.108167) | 0.027491 \/ 0.128546 (-0.101055) | 0.008113 \/ 0.075646 (-0.067534) | 0.068114 \/ 0.419271 (-0.351157) | 0.043303 \/ 0.043533 (-0.000230) | 0.360569 \/ 0.255139 (0.105430) | 0.387144 \/ 0.283200 (0.103944) | 0.020194 \/ 0.141683 (-0.121489) | 1.418066 \/ 1.452155 (-0.034089) | 1.475640 \/ 1.492716 (-0.017076) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.200291 \/ 0.018006 (0.182285) | 0.432298 \/ 0.000490 (0.431809) | 0.003303 \/ 0.000200 (0.003103) | 0.000075 \/ 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.027749 \/ 0.037411 (-0.009662) | 0.081890 \/ 0.014526 (0.067364) | 0.094319 \/ 0.176557 (-0.082238) | 0.148646 \/ 0.737135 (-0.588490) | 0.091830 \/ 0.296338 (-0.204509) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.433546 \/ 0.215209 (0.218337) | 4.326855 \/ 2.077655 (2.249200) | 2.230186 \/ 1.504120 (0.726066) | 2.052524 \/ 1.541195 (0.511329) | 2.117270 \/ 1.468490 (0.648779) | 0.500331 \/ 4.584777 (-4.084446) 
| 3.113662 \/ 3.745712 (-0.632050) | 2.931540 \/ 5.269862 (-2.338322) | 1.853615 \/ 4.565676 (-2.712062) | 0.058250 \/ 0.424275 (-0.366025) | 0.006546 \/ 0.007607 (-0.001061) | 0.508850 \/ 0.226044 (0.282806) | 5.081809 \/ 2.268929 (2.812880) | 2.687037 \/ 55.444624 (-52.757588) | 2.369317 \/ 6.876477 (-4.507160) | 2.383549 \/ 2.142072 (0.241477) | 0.587039 \/ 4.805227 (-4.218188) | 0.125858 \/ 6.500664 (-6.374806) | 0.062522 \/ 0.075469 (-0.012947) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.294929 \/ 1.841788 (-0.546858) | 18.056312 \/ 8.074308 (9.982004) | 13.755117 \/ 10.191392 (3.563725) | 0.132037 \/ 0.680424 (-0.548387) | 0.016866 \/ 0.534201 (-0.517335) | 0.339040 \/ 0.579283 (-0.240243) | 0.364371 \/ 0.434364 (-0.069993) | 0.399533 \/ 0.540337 (-0.140804) | 0.564524 \/ 1.386936 (-0.822412) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#64b811c13a7982015d7e078e3d693ce5359a05a2 \"CML watermark\")\n","@lhoestq This bar comes from: https:\/\/github.com\/huggingface\/datasets\/blob\/b8067c0262073891180869f700ebef5ac3dc5cce\/src\/datasets\/builder.py#L1156-L1166\r\n\r\nDo you prefer not showing it or, e.g., having `desc=\"Generating splits\"`?","No strong opinion. Since there is a \"Generating\" progress bar already, maybe it can be \"Preparing splits\" (ref to download_and_prepare)","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006348 \/ 0.011353 (-0.005005) | 0.003721 \/ 0.011008 (-0.007287) | 0.084039 \/ 0.038508 (0.045531) | 0.067627 \/ 0.023109 (0.044517) | 0.308372 \/ 0.275898 (0.032474) | 0.335131 \/ 0.323480 (0.011652) | 0.005157 \/ 0.007986 (-0.002829) | 0.003266 \/ 0.004328 (-0.001062) | 0.065374 \/ 0.004250 (0.061124) | 0.055550 \/ 0.037052 (0.018498) | 0.314001 \/ 0.258489 (0.055512) | 0.350510 \/ 0.293841 (0.056669) | 0.030859 \/ 0.128546 (-0.097688) | 0.008286 \/ 0.075646 (-0.067361) | 0.287122 \/ 0.419271 (-0.132149) | 0.051494 \/ 0.043533 (0.007961) | 0.309868 \/ 0.255139 (0.054729) | 0.325845 \/ 0.283200 (0.042645) | 0.022622 \/ 0.141683 (-0.119061) | 1.468730 \/ 1.452155 (0.016575) | 1.547871 \/ 1.492716 (0.055155) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.202763 \/ 0.018006 (0.184757) | 0.456403 \/ 0.000490 (0.455914) | 0.003116 \/ 0.000200 (0.002916) | 0.000079 \/ 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.027297 \/ 0.037411 (-0.010114) | 0.081204 \/ 0.014526 (0.066678) | 0.094274 \/ 0.176557 (-0.082282) | 0.154391 \/ 0.737135 (-0.582744) | 0.094312 \/ 0.296338 (-0.202026) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.387382 \/ 0.215209 (0.172173) | 3.865597 \/ 2.077655 (1.787943) | 1.855959 \/ 1.504120 (0.351839) | 1.685411 \/ 1.541195 (0.144216) | 1.732127 \/ 1.468490 (0.263637) | 0.482230 \/ 4.584777 (-4.102547) | 
3.664947 \/ 3.745712 (-0.080765) | 5.114379 \/ 5.269862 (-0.155482) | 3.102803 \/ 4.565676 (-1.462873) | 0.056509 \/ 0.424275 (-0.367766) | 0.007230 \/ 0.007607 (-0.000377) | 0.456788 \/ 0.226044 (0.230744) | 4.575831 \/ 2.268929 (2.306902) | 2.335249 \/ 55.444624 (-53.109375) | 2.003805 \/ 6.876477 (-4.872672) | 2.141788 \/ 2.142072 (-0.000285) | 0.577501 \/ 4.805227 (-4.227726) | 0.130264 \/ 6.500664 (-6.370400) | 0.058889 \/ 0.075469 (-0.016580) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.252673 \/ 1.841788 (-0.589115) | 18.676897 \/ 8.074308 (10.602589) | 13.988101 \/ 10.191392 (3.796709) | 0.151376 \/ 0.680424 (-0.529048) | 0.018104 \/ 0.534201 (-0.516097) | 0.388413 \/ 0.579283 (-0.190870) | 0.414841 \/ 0.434364 (-0.019523) | 0.456078 \/ 0.540337 (-0.084259) | 0.641715 \/ 1.386936 (-0.745221) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006315 \/ 0.011353 (-0.005038) | 0.003847 \/ 0.011008 (-0.007162) | 0.063989 \/ 0.038508 (0.025481) | 0.068244 \/ 0.023109 (0.045135) | 0.416201 \/ 0.275898 (0.140303) | 0.438446 \/ 0.323480 (0.114966) | 0.005820 \/ 0.007986 (-0.002166) | 0.003165 \/ 0.004328 (-0.001163) | 0.064143 \/ 0.004250 (0.059892) | 0.056529 \/ 0.037052 (0.019477) | 0.414916 \/ 0.258489 (0.156427) | 0.450771 \/ 0.293841 (0.156930) | 0.030611 \/ 0.128546 (-0.097935) | 0.008289 \/ 0.075646 (-0.067357) | 0.070725 \/ 0.419271 (-0.348546) | 0.047998 \/ 0.043533 (0.004465) | 0.405609 \/ 0.255139 (0.150470) | 0.421895 \/ 0.283200 (0.138696) | 0.022135 \/ 0.141683 (-0.119548) | 1.444238 \/ 1.452155 (-0.007916) | 1.515823 \/ 1.492716 (0.023107) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.227043 \/ 0.018006 (0.209037) | 0.439732 \/ 0.000490 (0.439242) | 0.001267 \/ 0.000200 (0.001067) | 0.000070 \/ 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029082 \/ 0.037411 (-0.008329) | 0.086201 \/ 0.014526 (0.071675) | 0.098653 \/ 0.176557 (-0.077903) | 0.152574 \/ 0.737135 (-0.584561) | 0.100696 \/ 0.296338 (-0.195642) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.411243 \/ 0.215209 (0.196034) | 4.100170 \/ 2.077655 (2.022515) | 2.118310 \/ 1.504120 (0.614190) | 1.935646 \/ 1.541195 (0.394451) | 1.970798 \/ 1.468490 (0.502307) | 0.478635 \/ 4.584777 (-4.106142) | 
3.589396 \/ 3.745712 (-0.156316) | 3.312462 \/ 5.269862 (-1.957399) | 1.963081 \/ 4.565676 (-2.602595) | 0.056392 \/ 0.424275 (-0.367883) | 0.007134 \/ 0.007607 (-0.000473) | 0.485131 \/ 0.226044 (0.259086) | 4.838946 \/ 2.268929 (2.570017) | 2.624550 \/ 55.444624 (-52.820075) | 2.223046 \/ 6.876477 (-4.653431) | 2.230642 \/ 2.142072 (0.088570) | 0.594892 \/ 4.805227 (-4.210335) | 0.130523 \/ 6.500664 (-6.370141) | 0.059585 \/ 0.075469 (-0.015884) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.329941 \/ 1.841788 (-0.511847) | 19.199057 \/ 8.074308 (11.124748) | 14.166009 \/ 10.191392 (3.974617) | 0.190595 \/ 0.680424 (-0.489829) | 0.018419 \/ 0.534201 (-0.515782) | 0.392031 \/ 0.579283 (-0.187252) | 0.409395 \/ 0.434364 (-0.024969) | 0.475930 \/ 0.540337 (-0.064408) | 0.654412 \/ 1.386936 (-0.732524) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#42fdfbd567674d075c3a9148ec3c95221eb62cfe \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007500 \/ 0.011353 (-0.003853) | 0.004328 \/ 0.011008 (-0.006681) | 0.086718 \/ 0.038508 (0.048209) | 0.098638 \/ 0.023109 (0.075529) | 0.335308 \/ 0.275898 (0.059409) | 0.369163 \/ 0.323480 (0.045683) | 0.005733 \/ 0.007986 (-0.002253) | 0.003738 \/ 0.004328 (-0.000590) | 0.066452 \/ 0.004250 (0.062202) | 0.066245 \/ 0.037052 (0.029192) | 0.337609 \/ 0.258489 (0.079120) | 0.388584 \/ 0.293841 (0.094744) | 0.031742 \/ 0.128546 (-0.096804) | 0.008721 \/ 0.075646 (-0.066925) | 0.290820 \/ 0.419271 (-0.128452) | 0.053323 \/ 0.043533 (0.009790) | 0.329192 \/ 0.255139 (0.074053) | 0.350560 \/ 0.283200 (0.067360) | 0.025402 \/ 0.141683 (-0.116281) | 1.476174 \/ 1.452155 (0.024020) | 1.578194 \/ 1.492716 (0.085478) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.256160 \/ 0.018006 (0.238154) | 0.560315 \/ 0.000490 (0.559825) | 0.005287 \/ 0.000200 (0.005088) | 0.000094 \/ 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029164 \/ 0.037411 (-0.008247) | 0.084881 \/ 0.014526 (0.070356) | 0.100979 \/ 0.176557 (-0.075577) | 0.156539 \/ 0.737135 (-0.580597) | 0.101510 \/ 0.296338 (-0.194828) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.381138 \/ 0.215209 (0.165929) | 3.791573 \/ 2.077655 (1.713918) | 1.841954 \/ 1.504120 (0.337834) | 1.672463 \/ 1.541195 (0.131268) | 1.785769 \/ 1.468490 (0.317279) | 0.483263 \/ 4.584777 (-4.101514) | 
3.617391 \/ 3.745712 (-0.128322) | 5.607794 \/ 5.269862 (0.337933) | 3.359530 \/ 4.565676 (-1.206147) | 0.056826 \/ 0.424275 (-0.367449) | 0.007375 \/ 0.007607 (-0.000232) | 0.455853 \/ 0.226044 (0.229809) | 4.548965 \/ 2.268929 (2.280037) | 2.412716 \/ 55.444624 (-53.031908) | 1.991456 \/ 6.876477 (-4.885021) | 2.242851 \/ 2.142072 (0.100778) | 0.573070 \/ 4.805227 (-4.232157) | 0.134658 \/ 6.500664 (-6.366006) | 0.061539 \/ 0.075469 (-0.013930) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.278306 \/ 1.841788 (-0.563481) | 20.634317 \/ 8.074308 (12.560009) | 15.164246 \/ 10.191392 (4.972854) | 0.167487 \/ 0.680424 (-0.512937) | 0.019006 \/ 0.534201 (-0.515195) | 0.394617 \/ 0.579283 (-0.184666) | 0.423385 \/ 0.434364 (-0.010979) | 0.469968 \/ 0.540337 (-0.070370) | 0.630058 \/ 1.386936 (-0.756878) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006793 \/ 0.011353 (-0.004559) | 0.004260 \/ 0.011008 (-0.006748) | 0.065398 \/ 0.038508 (0.026890) | 0.077850 \/ 0.023109 (0.054741) | 0.371754 \/ 0.275898 (0.095855) | 0.400652 \/ 0.323480 (0.077172) | 0.005729 \/ 0.007986 (-0.002256) | 0.003660 \/ 0.004328 (-0.000669) | 0.065119 \/ 0.004250 (0.060869) | 0.060714 \/ 0.037052 (0.023661) | 0.384592 \/ 0.258489 (0.126103) | 0.412806 \/ 0.293841 (0.118965) | 0.031865 \/ 0.128546 (-0.096681) | 0.008807 \/ 0.075646 (-0.066839) | 0.071156 \/ 0.419271 (-0.348115) | 0.049571 \/ 0.043533 (0.006038) | 0.367381 \/ 0.255139 (0.112242) | 0.386713 \/ 0.283200 (0.103513) | 0.024838 \/ 0.141683 (-0.116845) | 1.492986 \/ 1.452155 (0.040831) | 1.559243 \/ 1.492716 (0.066526) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.269737 \/ 0.018006 (0.251730) | 0.565177 \/ 0.000490 (0.564687) | 0.000404 \/ 0.000200 (0.000204) | 0.000060 \/ 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.031631 \/ 0.037411 (-0.005780) | 0.087289 \/ 0.014526 (0.072764) | 0.102798 \/ 0.176557 (-0.073759) | 0.158977 \/ 0.737135 (-0.578158) | 0.105495 \/ 0.296338 (-0.190843) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.425067 \/ 0.215209 (0.209858) | 4.243121 \/ 2.077655 (2.165466) | 2.234567 \/ 1.504120 (0.730447) | 2.070810 \/ 1.541195 (0.529615) | 2.176802 \/ 1.468490 (0.708312) | 0.484987 \/ 4.584777 (-4.099790) | 
3.647000 \/ 3.745712 (-0.098712) | 3.574843 \/ 5.269862 (-1.695019) | 2.092581 \/ 4.565676 (-2.473095) | 0.057299 \/ 0.424275 (-0.366976) | 0.007480 \/ 0.007607 (-0.000128) | 0.507838 \/ 0.226044 (0.281794) | 5.076594 \/ 2.268929 (2.807666) | 2.718858 \/ 55.444624 (-52.725766) | 2.362793 \/ 6.876477 (-4.513684) | 2.451962 \/ 2.142072 (0.309890) | 0.581355 \/ 4.805227 (-4.223872) | 0.133723 \/ 6.500664 (-6.366941) | 0.061896 \/ 0.075469 (-0.013573) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.325814 \/ 1.841788 (-0.515974) | 20.614502 \/ 8.074308 (12.540194) | 14.769422 \/ 10.191392 (4.578029) | 0.193797 \/ 0.680424 (-0.486627) | 0.018379 \/ 0.534201 (-0.515822) | 0.394153 \/ 0.579283 (-0.185130) | 0.409585 \/ 0.434364 (-0.024779) | 0.479107 \/ 0.540337 (-0.061231) | 0.668397 \/ 1.386936 (-0.718539) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#b2d892237169bad5512c91cae453d257ebefc201 \"CML watermark\")\n","In the end, I decided to remove the progress bar to avoid having it displayed when loading a cached dataset.","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006673 \/ 0.011353 (-0.004680) | 0.004162 \/ 0.011008 (-0.006846) | 0.084017 \/ 0.038508 (0.045509) | 0.079536 \/ 0.023109 (0.056426) | 0.313594 \/ 0.275898 (0.037695) | 0.349200 \/ 0.323480 (0.025720) | 0.005544 \/ 0.007986 (-0.002441) | 0.003472 \/ 0.004328 (-0.000857) | 0.064742 \/ 0.004250 (0.060491) | 0.056857 \/ 0.037052 (0.019805) | 0.318635 \/ 0.258489 (0.060146) | 0.354378 \/ 0.293841 (0.060537) | 0.030856 \/ 0.128546 (-0.097690) | 0.008759 \/ 0.075646 (-0.066887) | 0.287760 \/ 0.419271 (-0.131511) | 0.052307 \/ 0.043533 (0.008775) | 0.316396 \/ 0.255139 (0.061257) | 0.351408 \/ 0.283200 (0.068208) | 0.024914 \/ 0.141683 (-0.116769) | 1.484592 \/ 1.452155 (0.032437) | 1.560662 \/ 1.492716 (0.067945) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.280938 \/ 0.018006 (0.262932) | 0.580236 \/ 0.000490 (0.579747) | 0.003369 \/ 0.000200 (0.003169) | 0.000090 \/ 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.028736 \/ 0.037411 (-0.008675) | 0.082916 \/ 0.014526 (0.068390) | 0.097761 \/ 0.176557 (-0.078796) | 0.153515 \/ 0.737135 (-0.583620) | 0.099282 \/ 0.296338 (-0.197057) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.401244 \/ 0.215209 (0.186035) | 4.019866 \/ 2.077655 (1.942211) | 2.029642 \/ 1.504120 (0.525522) | 1.849591 \/ 1.541195 (0.308396) | 1.946829 \/ 1.468490 (0.478339) | 0.479750 \/ 4.584777 (-4.105027) | 
3.482822 \/ 3.745712 (-0.262890) | 3.955859 \/ 5.269862 (-1.314003) | 2.370747 \/ 4.565676 (-2.194930) | 0.056905 \/ 0.424275 (-0.367370) | 0.007319 \/ 0.007607 (-0.000288) | 0.485310 \/ 0.226044 (0.259266) | 4.858228 \/ 2.268929 (2.589299) | 2.500476 \/ 55.444624 (-52.944148) | 2.171156 \/ 6.876477 (-4.705320) | 2.427266 \/ 2.142072 (0.285194) | 0.570199 \/ 4.805227 (-4.235029) | 0.130855 \/ 6.500664 (-6.369809) | 0.060269 \/ 0.075469 (-0.015200) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.258044 \/ 1.841788 (-0.583743) | 20.218657 \/ 8.074308 (12.144349) | 13.597970 \/ 10.191392 (3.406578) | 0.167656 \/ 0.680424 (-0.512768) | 0.018137 \/ 0.534201 (-0.516064) | 0.395309 \/ 0.579283 (-0.183975) | 0.406325 \/ 0.434364 (-0.028039) | 0.467457 \/ 0.540337 (-0.072880) | 0.613636 \/ 1.386936 (-0.773300) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006846 \/ 0.011353 (-0.004507) | 0.004207 \/ 0.011008 (-0.006802) | 0.064525 \/ 0.038508 (0.026017) | 0.081329 \/ 0.023109 (0.058220) | 0.399838 \/ 0.275898 (0.123940) | 0.431305 \/ 0.323480 (0.107825) | 0.005859 \/ 0.007986 (-0.002127) | 0.003568 \/ 0.004328 (-0.000760) | 0.065262 \/ 0.004250 (0.061011) | 0.064796 \/ 0.037052 (0.027744) | 0.406858 \/ 0.258489 (0.148369) | 0.440971 \/ 0.293841 (0.147130) | 0.031421 \/ 0.128546 (-0.097125) | 0.008777 \/ 0.075646 (-0.066870) | 0.071418 \/ 0.419271 (-0.347853) | 0.049263 \/ 0.043533 (0.005730) | 0.384279 \/ 0.255139 (0.129140) | 0.410745 \/ 0.283200 (0.127546) | 0.024467 \/ 0.141683 (-0.117216) | 1.522379 \/ 1.452155 (0.070224) | 1.581636 \/ 1.492716 (0.088920) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.276161 \/ 0.018006 (0.258155) | 0.548842 \/ 0.000490 (0.548352) | 0.004523 \/ 0.000200 (0.004324) | 0.000098 \/ 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030747 \/ 0.037411 (-0.006664) | 0.087493 \/ 0.014526 (0.072967) | 0.106563 \/ 0.176557 (-0.069993) | 0.162949 \/ 0.737135 (-0.574186) | 0.105303 \/ 0.296338 (-0.191036) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.425854 \/ 0.215209 (0.210645) | 4.244797 \/ 2.077655 (2.167142) | 2.269006 \/ 1.504120 (0.764886) | 2.097428 \/ 1.541195 (0.556234) | 2.181038 \/ 1.468490 (0.712548) | 0.477286 \/ 4.584777 (-4.107491) | 
3.591452 \/ 3.745712 (-0.154260) | 3.481281 \/ 5.269862 (-1.788580) | 2.066895 \/ 4.565676 (-2.498782) | 0.056576 \/ 0.424275 (-0.367699) | 0.007409 \/ 0.007607 (-0.000199) | 0.498411 \/ 0.226044 (0.272367) | 4.994873 \/ 2.268929 (2.725945) | 2.749148 \/ 55.444624 (-52.695476) | 2.378544 \/ 6.876477 (-4.497932) | 2.452859 \/ 2.142072 (0.310786) | 0.571340 \/ 4.805227 (-4.233887) | 0.132174 \/ 6.500664 (-6.368490) | 0.061507 \/ 0.075469 (-0.013962) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.370773 \/ 1.841788 (-0.471015) | 20.493342 \/ 8.074308 (12.419034) | 14.809886 \/ 10.191392 (4.618494) | 0.175730 \/ 0.680424 (-0.504693) | 0.018617 \/ 0.534201 (-0.515583) | 0.393808 \/ 0.579283 (-0.185476) | 0.416419 \/ 0.434364 (-0.017945) | 0.477183 \/ 0.540337 (-0.063155) | 0.668060 \/ 1.386936 (-0.718876) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#2de7a2a4af5d94b0f98a7a6db94e78984af40602 \"CML watermark\")\n","Nice one :)"],"created_at":1689100223000,"updated_at":1689190454000,"closed_at":1689182368000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/6019","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6019","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6019.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6019.patch","merged_at":1689182368000},"body":"Adds the StreamHandler (as `hfh` and `transformers` do) to the library's logger to log INFO messages and logs the messages about \"loading a cached result\" (and some other warnings) as INFO\r\n\r\n(Also removes the `leave=False` arg in the progress bars to be consistent with `hfh` and `transformers` - progress bars serve as an indicator that a result is not cached, so it makes more sense not to delete them)\r\n\r\nFix #2832, fix https:\/\/github.com\/huggingface\/datasets\/issues\/1948, fix https:\/\/github.com\/huggingface\/datasets\/issues\/5444","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6019\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6019\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} 
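The PR 6019 thread above debates whether the split-preparation bar in `builder.py` should be hidden or given a clearer label via tqdm's `desc` argument. As a minimal sketch of what such a labelled bar looks like (the split names and loop body below are hypothetical placeholders, not the actual `builder.py` code):

```python
# Minimal sketch of a labelled tqdm progress bar, as discussed in the thread
# above. The split names and the loop body are placeholders, not the real
# datasets/builder.py loop.
from tqdm.auto import tqdm

splits = ["train", "validation", "test"]  # hypothetical split names
for split in tqdm(splits, desc="Generating splits"):
    pass  # each split would be prepared here
```

As the thread notes, the maintainers ultimately removed this bar entirely so it would not appear when a cached dataset is loaded.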
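The body of PR 6019 describes attaching a `StreamHandler` to the library's logger, mirroring `hfh` and `transformers`, so that INFO-level messages such as "loading a cached result" become visible. A hedged sketch of that general pattern using the standard-library `logging` module (the format string is an illustration, not copied from the PR):

```python
# Sketch: attach a StreamHandler to a library's top-level logger so INFO
# messages (e.g. about reusing a cached result) reach the console.
# The format string is illustrative, not taken from the PR.
import logging

logger = logging.getLogger("datasets")  # the library's top-level logger
handler = logging.StreamHandler()       # writes to sys.stderr by default
handler.setFormatter(logging.Formatter("%(levelname)s:%(name)s:%(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("Loading cached processed dataset")  # now visible to the user
```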
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6018","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6018\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6018\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6018\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6018","id":1799411999,"node_id":"PR_kwDODunzps5VOmKY","number":6018,"title":"test1","user":{"login":"ognjenovicj","id":139256323,"node_id":"U_kgDOCEziAw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/139256323?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ognjenovicj","html_url":"https:\/\/github.com\/ognjenovicj","followers_url":"https:\/\/api.github.com\/users\/ognjenovicj\/followers","following_url":"https:\/\/api.github.com\/users\/ognjenovicj\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ognjenovicj\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ognjenovicj\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ognjenovicj\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ognjenovicj\/orgs","repos_url":"https:\/\/api.github.com\/users\/ognjenovicj\/repos","events_url":"https:\/\/api.github.com\/users\/ognjenovicj\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ognjenovicj\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["We no longer host datasets in this repo. You should use the HF Hub instead."],"created_at":1689096349000,"updated_at":1689185815000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/6018","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6018","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6018.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6018.patch","merged_at":null},"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6018\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6018\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6017","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6017\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6017\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6017\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/6017","id":1799309132,"node_id":"I_kwDODunzps5rP0dM","number":6017,"title":"Switch to huggingface_hub's 
HfFileSystem","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":{"login":"lhoestq","id":42851186.0,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1689092680000,"updated_at":1689119113000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"draft":null,"pull_request":null,"body":"instead of the current datasets.filesystems.hffilesystem.HfFileSystem which can be slow in some cases\r\n\r\nrelated to 
https:\/\/github.com\/huggingface\/datasets\/issues\/5846 and https:\/\/github.com\/huggingface\/datasets\/pull\/5919","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6017\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6017\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6016","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6016\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6016\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6016\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6016","id":1798968033,"node_id":"PR_kwDODunzps5VNEvn","number":6016,"title":"Dataset string representation enhancement","user":{"login":"Ganryuu","id":63643948,"node_id":"MDQ6VXNlcjYzNjQzOTQ4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/63643948?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Ganryuu","html_url":"https:\/\/github.com\/Ganryuu","followers_url":"https:\/\/api.github.com\/users\/Ganryuu\/followers","following_url":"https:\/\/api.github.com\/users\/Ganryuu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Ganryuu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Ganryuu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Ganryuu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Ganryuu\/orgs","repos_url":"https:\/\/api.github.com\/users\/Ganryuu\/repos","events_url":"https:\/\/api.github.com\/users\/Ganryuu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Ganryuu\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_6016). 
All of your documentation changes will be reflected on that endpoint.","It we could have something similar to Polars, that would be great.\r\n\r\nThis is what Polars outputs: \r\n* `__repr__`\/`__str__` :\r\n```\r\nshape: (67_349, 3)\r\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\r\n\u2502 idx \u2506 sentence \u2506 label \u2502\r\n\u2502 --- \u2506 --- \u2506 --- \u2502\r\n\u2502 i32 \u2506 str \u2506 i64 \u2502\r\n\u255e\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2561\r\n\u2502 0 \u2506 hide new secretions from the par\u2026 \u2506 0 \u2502\r\n\u2502 1 \u2506 contains no wit , only labored g\u2026 \u2506 0 \u2502\r\n\u2502 2 \u2506 that loves its characters and co\u2026 \u2506 1 \u2502\r\n\u2502 3 \u2506 remains utterly satisfied to rem\u2026 \u2506 0 \u2502\r\n\u2502 \u2026 \u2506 \u2026 \u2506 \u2026 \u2502\r\n\u2502 67345 \u2506 anguish , anger and frustration \u2506 0 \u2502\r\n\u2502 67346 \u2506 at achieving the modest , crowd-\u2026 \u2506 1 \u2502\r\n\u2502 67347 \u2506 a patient viewer \u2506 1 \u2502\r\n\u2502 67348 \u2506 this new jangle of noise , mayhe\u2026 \u2506 0 \u2502\r\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\r\n```\r\n\r\n* `_repr_html_`:\r\n\"Screenshot\r\n\r\n"],"created_at":1689082705000,"updated_at":1689203761000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/6016","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6016","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6016.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6016.patch","merged_at":null},"body":"my attempt at #6010 \r\nnot sure if this is the right way to go about it, I will wait for your feedback ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6016\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6016\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} 
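PR 6016 above explores a Polars-style string representation for datasets. A toy sketch of the idea (a shape line, a column header, and a truncated view of the rows); `TinyDataset` is a made-up stand-in, not the PR's implementation:

```python
# Toy illustration of a Polars-style repr: a shape line, a column header, and
# a truncated view of the rows. TinyDataset is a made-up stand-in, not the
# datasets.Dataset change proposed in the PR.
class TinyDataset:
    def __init__(self, columns, rows):
        self.columns = columns
        self.rows = rows

    def __repr__(self):
        shown = self.rows[:2] + [["..."] * len(self.columns)] + self.rows[-2:]
        header = " | ".join(self.columns)
        body = "\n".join(" | ".join(str(v)[:32] for v in row) for row in shown)
        return f"shape: ({len(self.rows)}, {len(self.columns)})\n{header}\n{body}"

ds = TinyDataset(["idx", "sentence"], [[i, f"sentence {i}"] for i in range(67_349)])
print(ds)  # prints a 5-row preview with the full shape on top
```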
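Issue 6017 earlier in this list proposes switching to `huggingface_hub`'s `HfFileSystem`, which exposes Hub repositories through the fsspec interface. A small usage sketch (the repo id below is a placeholder, not one taken from the issue):

```python
# Usage sketch of huggingface_hub's fsspec-compatible HfFileSystem
# (available in recent huggingface_hub releases).
# "datasets/username/my_dataset" is a placeholder repo id.
from huggingface_hub import HfFileSystem

fs = HfFileSystem()  # pass token=... for private repositories
print(fs.ls("datasets/username/my_dataset", detail=False))
```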
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6015","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6015\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6015\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6015\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6015","id":1798807893,"node_id":"PR_kwDODunzps5VMhgB","number":6015,"title":"Add metadata ui screenshot in docs","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007633 \/ 0.011353 (-0.003720) | 0.004666 \/ 0.011008 (-0.006343) | 0.097768 \/ 0.038508 (0.059260) | 0.085153 \/ 0.023109 (0.062044) | 0.400315 \/ 0.275898 (0.124417) | 0.452903 \/ 0.323480 (0.129423) | 0.006227 \/ 0.007986 (-0.001759) | 0.003814 \/ 0.004328 (-0.000515) | 0.074586 \/ 0.004250 (0.070336) | 0.064295 \/ 0.037052 (0.027242) | 0.408082 \/ 0.258489 (0.149593) | 0.446921 \/ 0.293841 (0.153080) | 0.034593 \/ 0.128546 (-0.093953) | 0.009191 \/ 0.075646 (-0.066456) | 0.337099 \/ 0.419271 (-0.082173) | 0.075320 \/ 0.043533 (0.031787) | 0.403488 \/ 0.255139 (0.148349) | 0.435309 \/ 0.283200 (0.152109) | 0.035675 \/ 0.141683 (-0.106008) | 1.732642 \/ 1.452155 (0.280487) | 1.770238 \/ 1.492716 (0.277522) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.235879 \/ 0.018006 (0.217873) | 0.500330 \/ 0.000490 (0.499841) | 0.005221 \/ 0.000200 (0.005021) | 0.000150 \/ 0.000054 (0.000096) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.032479 \/ 0.037411 (-0.004933) | 0.095873 \/ 0.014526 (0.081348) | 0.107118 \/ 0.176557 (-0.069438) | 0.173809 \/ 0.737135 (-0.563326) | 0.109832 \/ 0.296338 (-0.186507) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.444342 \/ 0.215209 (0.229133) | 4.459010 \/ 2.077655 (2.381355) | 2.209687 \/ 1.504120 (0.705567) | 2.007556 \/ 1.541195 (0.466362) | 2.113683 \/ 1.468490 (0.645193) | 0.544281 \/ 4.584777 (-4.040496) | 
4.037151 \/ 3.745712 (0.291439) | 4.852644 \/ 5.269862 (-0.417217) | 3.134126 \/ 4.565676 (-1.431550) | 0.066815 \/ 0.424275 (-0.357460) | 0.008836 \/ 0.007607 (0.001229) | 0.560904 \/ 0.226044 (0.334859) | 5.302760 \/ 2.268929 (3.033832) | 2.750182 \/ 55.444624 (-52.694442) | 2.322595 \/ 6.876477 (-4.553882) | 2.547486 \/ 2.142072 (0.405414) | 0.665766 \/ 4.805227 (-4.139461) | 0.151613 \/ 6.500664 (-6.349051) | 0.071155 \/ 0.075469 (-0.004314) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.473717 \/ 1.841788 (-0.368071) | 22.584179 \/ 8.074308 (14.509871) | 15.888001 \/ 10.191392 (5.696609) | 0.181073 \/ 0.680424 (-0.499351) | 0.021395 \/ 0.534201 (-0.512806) | 0.452693 \/ 0.579283 (-0.126590) | 0.447709 \/ 0.434364 (0.013345) | 0.529599 \/ 0.540337 (-0.010738) | 0.699241 \/ 1.386936 (-0.687695) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007917 \/ 0.011353 (-0.003436) | 0.004544 \/ 0.011008 (-0.006464) | 0.074566 \/ 0.038508 (0.036058) | 0.087530 \/ 0.023109 (0.064421) | 0.419753 \/ 0.275898 (0.143854) | 0.452352 \/ 0.323480 (0.128872) | 0.005882 \/ 0.007986 (-0.002104) | 0.003904 \/ 0.004328 (-0.000425) | 0.073539 \/ 0.004250 (0.069289) | 0.071320 \/ 0.037052 (0.034267) | 0.432899 \/ 0.258489 (0.174409) | 0.470365 \/ 0.293841 (0.176524) | 0.036198 \/ 0.128546 (-0.092348) | 0.009342 \/ 0.075646 (-0.066304) | 0.080970 \/ 0.419271 (-0.338301) | 0.058769 \/ 0.043533 (0.015236) | 0.413397 \/ 0.255139 (0.158258) | 0.448362 \/ 0.283200 (0.165162) | 0.034177 \/ 0.141683 (-0.107506) | 1.706217 \/ 1.452155 (0.254063) | 1.776743 \/ 1.492716 (0.284026) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.198779 \/ 0.018006 (0.180773) | 0.499862 \/ 0.000490 (0.499372) | 0.003891 \/ 0.000200 (0.003692) | 0.000108 \/ 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.034671 \/ 0.037411 (-0.002740) | 0.103165 \/ 0.014526 (0.088639) | 0.115813 \/ 0.176557 (-0.060744) | 0.177407 \/ 0.737135 (-0.559728) | 0.117733 \/ 0.296338 (-0.178606) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.476859 \/ 0.215209 (0.261650) | 4.823063 \/ 2.077655 (2.745409) | 2.524133 \/ 1.504120 (1.020013) | 2.374482 \/ 1.541195 (0.833288) | 2.518047 \/ 1.468490 (1.049557) | 0.559131 \/ 4.584777 (-4.025646) | 
4.126213 \/ 3.745712 (0.380501) | 6.488570 \/ 5.269862 (1.218708) | 3.816540 \/ 4.565676 (-0.749137) | 0.064742 \/ 0.424275 (-0.359533) | 0.008476 \/ 0.007607 (0.000869) | 0.576432 \/ 0.226044 (0.350387) | 5.835133 \/ 2.268929 (3.566205) | 3.237833 \/ 55.444624 (-52.206791) | 2.726596 \/ 6.876477 (-4.149880) | 2.799212 \/ 2.142072 (0.657139) | 0.661628 \/ 4.805227 (-4.143599) | 0.153997 \/ 6.500664 (-6.346667) | 0.070621 \/ 0.075469 (-0.004848) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.648505 \/ 1.841788 (-0.193282) | 22.454019 \/ 8.074308 (14.379711) | 16.077098 \/ 10.191392 (5.885706) | 0.217875 \/ 0.680424 (-0.462549) | 0.021285 \/ 0.534201 (-0.512916) | 0.459837 \/ 0.579283 (-0.119446) | 0.476211 \/ 0.434364 (0.041847) | 0.525903 \/ 0.540337 (-0.014435) | 0.717224 \/ 1.386936 (-0.669712) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#b767e9c3ef30f9da30d47cfcaccf9a7ac2500c43 \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008929 \/ 0.011353 (-0.002424) | 0.004188 \/ 0.011008 (-0.006820) | 0.097030 \/ 0.038508 (0.058522) | 0.071363 \/ 0.023109 (0.048254) | 0.333116 \/ 0.275898 (0.057218) | 0.371272 \/ 0.323480 (0.047792) | 0.006430 \/ 0.007986 (-0.001555) | 0.003689 \/ 0.004328 (-0.000639) | 0.068666 \/ 0.004250 (0.064416) | 0.057562 \/ 0.037052 (0.020510) | 0.347208 \/ 0.258489 (0.088719) | 0.390514 \/ 0.293841 (0.096673) | 0.050560 \/ 0.128546 (-0.077987) | 0.013372 \/ 0.075646 (-0.062275) | 0.311345 \/ 0.419271 (-0.107927) | 0.068990 \/ 0.043533 (0.025457) | 0.363026 \/ 0.255139 (0.107887) | 0.379793 \/ 0.283200 (0.096593) | 0.036891 \/ 0.141683 (-0.104792) | 1.583481 \/ 1.452155 (0.131327) | 1.688727 \/ 1.492716 (0.196011) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.209777 \/ 0.018006 (0.191771) | 0.507267 \/ 0.000490 (0.506777) | 0.003637 \/ 0.000200 (0.003438) | 0.000105 \/ 0.000054 (0.000051) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029309 \/ 0.037411 (-0.008102) | 0.088386 \/ 0.014526 (0.073861) | 0.104974 \/ 0.176557 (-0.071582) | 0.171999 \/ 0.737135 (-0.565137) | 0.110797 \/ 0.296338 (-0.185542) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.543465 \/ 0.215209 (0.328256) | 5.361491 \/ 2.077655 (3.283836) | 2.348712 \/ 1.504120 (0.844592) | 2.012527 \/ 1.541195 (0.471332) | 2.069776 \/ 1.468490 (0.601286) | 0.874262 \/ 4.584777 (-3.710515) | 
4.877317 \/ 3.745712 (1.131605) | 5.327459 \/ 5.269862 (0.057597) | 3.336823 \/ 4.565676 (-1.228854) | 0.100456 \/ 0.424275 (-0.323819) | 0.008503 \/ 0.007607 (0.000895) | 0.692009 \/ 0.226044 (0.465965) | 6.912731 \/ 2.268929 (4.643802) | 3.110548 \/ 55.444624 (-52.334076) | 2.443665 \/ 6.876477 (-4.432811) | 2.528713 \/ 2.142072 (0.386641) | 1.076358 \/ 4.805227 (-3.728869) | 0.220352 \/ 6.500664 (-6.280312) | 0.080293 \/ 0.075469 (0.004824) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.538444 \/ 1.841788 (-0.303344) | 21.121221 \/ 8.074308 (13.046913) | 19.810609 \/ 10.191392 (9.619216) | 0.225406 \/ 0.680424 (-0.455018) | 0.026652 \/ 0.534201 (-0.507549) | 0.430372 \/ 0.579283 (-0.148911) | 0.510722 \/ 0.434364 (0.076358) | 0.514347 \/ 0.540337 (-0.025991) | 0.686050 \/ 1.386936 (-0.700886) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007675 \/ 0.011353 (-0.003678) | 0.004542 \/ 0.011008 (-0.006466) | 0.069655 \/ 0.038508 (0.031147) | 0.069338 \/ 0.023109 (0.046229) | 0.436505 \/ 0.275898 (0.160607) | 0.481806 \/ 0.323480 (0.158326) | 0.005315 \/ 0.007986 (-0.002670) | 0.004455 \/ 0.004328 (0.000127) | 0.072674 \/ 0.004250 (0.068424) | 0.058088 \/ 0.037052 (0.021035) | 0.445825 \/ 0.258489 (0.187336) | 0.501706 \/ 0.293841 (0.207865) | 0.047123 \/ 0.128546 (-0.081424) | 0.012943 \/ 0.075646 (-0.062703) | 0.093491 \/ 0.419271 (-0.325780) | 0.060169 \/ 0.043533 (0.016637) | 0.436530 \/ 0.255139 (0.181391) | 0.466873 \/ 0.283200 (0.183674) | 0.040453 \/ 0.141683 (-0.101230) | 1.586438 \/ 1.452155 (0.134283) | 1.671081 \/ 1.492716 (0.178365) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.180607 \/ 0.018006 (0.162601) | 0.520145 \/ 0.000490 (0.519655) | 0.004824 \/ 0.000200 (0.004624) | 0.000116 \/ 0.000054 (0.000061) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029308 \/ 0.037411 (-0.008103) | 0.093652 \/ 0.014526 (0.079126) | 0.102332 \/ 0.176557 (-0.074224) | 0.162414 \/ 0.737135 (-0.574721) | 0.098017 \/ 0.296338 (-0.198321) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.583949 \/ 0.215209 (0.368740) | 6.035191 \/ 2.077655 (3.957536) | 2.801274 \/ 1.504120 (1.297155) | 2.566150 \/ 1.541195 (1.024955) | 2.437122 \/ 1.468490 (0.968632) | 0.865038 \/ 4.584777 (-3.719739) | 
4.841727 \/ 3.745712 (1.096015) | 4.683919 \/ 5.269862 (-0.585943) | 2.941240 \/ 4.565676 (-1.624437) | 0.104888 \/ 0.424275 (-0.319387) | 0.007747 \/ 0.007607 (0.000140) | 0.780041 \/ 0.226044 (0.553997) | 7.771314 \/ 2.268929 (5.502385) | 3.680814 \/ 55.444624 (-51.763811) | 2.938472 \/ 6.876477 (-3.938004) | 2.981740 \/ 2.142072 (0.839668) | 1.065411 \/ 4.805227 (-3.739816) | 0.222265 \/ 6.500664 (-6.278399) | 0.082428 \/ 0.075469 (0.006959) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.626774 \/ 1.841788 (-0.215014) | 21.618284 \/ 8.074308 (13.543976) | 20.596743 \/ 10.191392 (10.405351) | 0.240969 \/ 0.680424 (-0.439454) | 0.025630 \/ 0.534201 (-0.508570) | 0.481981 \/ 0.579283 (-0.097302) | 0.547914 \/ 0.434364 (0.113550) | 0.522296 \/ 0.540337 (-0.018041) | 0.729174 \/ 1.386936 (-0.657762) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#b8067c0262073891180869f700ebef5ac3dc5cce \"CML watermark\")\n"],"created_at":1689077789000,"updated_at":1689091648000,"closed_at":1689091006000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/6015","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6015","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6015.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6015.patch","merged_at":1689091006000},"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6015\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6015\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6014","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6014\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6014\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6014\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/6014","id":1798213816,"node_id":"I_kwDODunzps5rLpC4","number":6014,"title":"Request to Share\/Update Dataset Viewer 
Code","user":{"login":"lilyorlilypad","id":105081034,"node_id":"U_kgDOBkNoyg","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/105081034?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lilyorlilypad","html_url":"https:\/\/github.com\/lilyorlilypad","followers_url":"https:\/\/api.github.com\/users\/lilyorlilypad\/followers","following_url":"https:\/\/api.github.com\/users\/lilyorlilypad\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lilyorlilypad\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lilyorlilypad\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lilyorlilypad\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lilyorlilypad\/orgs","repos_url":"https:\/\/api.github.com\/users\/lilyorlilypad\/repos","events_url":"https:\/\/api.github.com\/users\/lilyorlilypad\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lilyorlilypad\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! The huggingface\/dataset-viewer code was not maintained anymore because we switched to a new dataset viewer that is deployed available for each dataset the Hugging Face website.\r\n\r\nWhat are you using this old repository for ?","I think these parts are outdated:\r\n\r\n* https:\/\/github.com\/huggingface\/datasets-viewer\/blob\/8efad8eae313a891f713469983bf4c744786f26e\/run.py#L126-L131\r\n* https:\/\/github.com\/huggingface\/datasets-viewer\/blob\/8efad8eae313a891f713469983bf4c744786f26e\/run.py#L145-L150\r\n\r\nTo make the viewer work, the first one should be replaced with the following:\r\n```python\r\ndataset_module = datasets.load.dataset_module_factory(path)\r\nbuilder_cls = datasets.load.import_main_class(dataset_module.module_path)\r\nconfs = builder_cls.BUILDER_CONFIGS\r\n```\r\nAnd the second one:\r\n```python\r\ndataset_module = datasets.load.dataset_module_factory(path)\r\nbuilder_cls = datasets.load.import_main_class(dataset_module.module_path)\r\nif conf:\r\n builder_instance = builder_cls(name=conf, cache_dir=path if path_to_datasets is not None else None)\r\nelse:\r\n builder_instance = builder_cls(cache_dir=path if path_to_datasets is not None else None)\r\n```\r\n\r\nBut as @lhoestq suggested, it's better to use the `datasets-server` API nowadays to [fetch the rows](https:\/\/huggingface.co\/docs\/datasets-server\/rows).","> The dataset viewer on the Hugging Face website is incredibly useful\r\n\r\n@mariosasko i think @lilyorlilypad wants to run the new dataset-viewer, not the old one","> wants to run the new dataset-viewer, not the old one\r\n\r\nThanks for the clarification for me. I do want to run the new dataset-viewer. ","It should be possible to run it locally using the HF datasets-server API (docs [here](https:\/\/huggingface.co\/docs\/datasets-server)) but the front end part is not open source (yet ?)\r\n\r\nThe back-end is open source though if you're interested: https:\/\/github.com\/huggingface\/datasets-server\r\nIt automatically converts datasets on HF to Parquet, which is the format we use to power the viewer.","the new frontend would probably be hard to open source, as is, as it's quite intertwined with the Hub's code.\r\n\r\nHowever, at some point it would be amazing to have a community-driven open source implementation of a frontend to datasets-server! 
"],"created_at":1689057369000,"updated_at":1689171529000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"\r\nOverview:\r\nThe repository (huggingface\/datasets-viewer) was recently archived and when I tried to run the code, there was the error message \"AttributeError: module 'datasets.load' has no attribute 'prepare_module'\". I could not resolve the issue myself due to lack of documentation of that attribute. \r\n\r\nRequest:\r\nI kindly request the sharing of the code responsible for the dataset preview functionality or help with resolving the error. The dataset viewer on the Hugging Face website is incredibly useful since it is compatible with different types of inputs. It allows users to find datasets that meet their needs more efficiently. If needed, I am willing to contribute to the project by testing, documenting, and providing feedback on the dataset viewer code. \r\n\r\nThank you for considering this request, and I look forward to your response.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6014\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6014\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6013","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6013\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6013\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6013\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/6013","id":1796083437,"node_id":"I_kwDODunzps5rDg7t","number":6013,"title":"[FR] `map` should reuse unchanged columns from the previous dataset to avoid disk usage","user":{"login":"NightMachinery","id":36224762,"node_id":"MDQ6VXNlcjM2MjI0NzYy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36224762?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/NightMachinery","html_url":"https:\/\/github.com\/NightMachinery","followers_url":"https:\/\/api.github.com\/users\/NightMachinery\/followers","following_url":"https:\/\/api.github.com\/users\/NightMachinery\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/NightMachinery\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/NightMachinery\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/NightMachinery\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/NightMachinery\/orgs","repos_url":"https:\/\/api.github.com\/users\/NightMachinery\/repos","events_url":"https:\/\/api.github.com\/users\/NightMachinery\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/NightMachinery\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"},{"id":3761482852,"node_id":"LA_kwDODunzps7gM6xk","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/good%20second%20issue","name":"good second 
issue","color":"BDE59C","default":false,"description":"Issues a bit more difficult than \"Good First\" issues"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["You can use the `remove_columns` parameter in `map` to avoid duplicating the columns (and save disk space) and then concatenate the original dataset with the map result:\r\n```python\r\nfrom datasets import concatenate_datasets\r\n# dummy example\r\nds_new = ds.map(lambda x: {\"new_col\": x[\"col\"] + 2}, remove_columns=ds.column_names)\r\nds_combined = concatenate_datasets([ds, ds_new], axis=1)\r\n```\r\n\r\nDoing this automatically is hard to implement efficiently unless we know ahead of time which existing columns will be modified by a `map` transform. We have this info when `input_columns` are specified, so I think this is the only case we can optimize."],"created_at":1688971340000,"updated_at":1689003472000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Feature request\n\nCurrently adding a new column with `map` will cause all the data in the dataset to be duplicated and stored\/cached on the disk again. It should reuse unchanged columns. \n\n### Motivation\n\nThis allows having datasets with different columns but sharing some basic columns. Currently, these datasets would become too expensive to store and one would need some kind of on-the-fly join; which also doesn't seem implemented.\n\n### Your contribution\n\n_","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6013\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6013\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6012","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6012\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6012\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6012\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/6012","id":1795575432,"node_id":"I_kwDODunzps5rBk6I","number":6012,"title":"[FR] Transform Chaining, Lazy 
Mapping","user":{"login":"NightMachinery","id":36224762,"node_id":"MDQ6VXNlcjM2MjI0NzYy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36224762?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/NightMachinery","html_url":"https:\/\/github.com\/NightMachinery","followers_url":"https:\/\/api.github.com\/users\/NightMachinery\/followers","following_url":"https:\/\/api.github.com\/users\/NightMachinery\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/NightMachinery\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/NightMachinery\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/NightMachinery\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/NightMachinery\/orgs","repos_url":"https:\/\/api.github.com\/users\/NightMachinery\/repos","events_url":"https:\/\/api.github.com\/users\/NightMachinery\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/NightMachinery\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["You can use `with_transform` to get a new dataset object.\r\n\r\nSupport for lazy `map` has already been discussed [here](https:\/\/github.com\/huggingface\/datasets\/issues\/3385) a little bit. Personally, I'm not a fan, as this would make `map` even more complex. ","> You can use `with_transform` to get a new dataset object.\r\n> \r\n> Support for lazy `map` has already been discussed [here](https:\/\/github.com\/huggingface\/datasets\/issues\/3385) a little bit. Personally, I'm not a fan, as this would make `map` even more complex.\r\n\r\nI read about IterableDataset, and it seems to have lazy mapping. But I can't figure out how to convert an IterableDataset into a normal one when needed.\r\n\r\n`with_transform` still does not chain AFAIU.","> I read about IterableDataset, and it seems to have lazy mapping. But I can't figure out how to convert an IterableDataset into a normal one when needed.\r\n\r\nYou must cache an `IterableDataset` to disk to load it as a `Dataset`. One way to do this is with `Dataset.from_generator`:\r\n```python\r\nfrom functools import partial\r\nfrom datasets import Dataset\r\n\r\ndef gen_from_iterable_dataset(iterable_ds)\r\n yield from iterable_ds\r\n\r\nds = Dataset.from_generator(partial(gen_from_iterable_dataset, iterable_ds), features=iterable_ds.features})\r\n```\r\n\r\n> with_transform still does not chain AFAIU.\r\n\r\nYes, not supported yet - the solution is to combine the transforms into a single one.","I wonder if it would be beneficial to have a dedicated method to do that ? 
Maybe a `.save_to_disk()` so that the user can reload the resulting dataset later?","> ```python\r\n> from functools import partial\r\n> from datasets import Dataset\r\n> \r\n> def gen_from_iterable_dataset(iterable_ds):\r\n>     yield from iterable_ds\r\n> \r\n> ds = Dataset.from_generator(partial(gen_from_iterable_dataset, iterable_ds), features=iterable_ds.features)\r\n> ```\r\n\r\n@mariosasko With these complex mapping functions, what hash will be used to cache this dataset?\r\n"],"created_at":1688938821000,"updated_at":1689277941000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Feature request\n\nCurrently, a `map` call processes and duplicates the whole dataset, which takes both time and disk space.\r\n\r\nThe solution is to allow lazy mapping, which is essentially a saved chain of transforms that are applied on the fly whenever a slice of the dataset is requested.\r\n\r\nThe API should look like `map`, as `set_transform` changes the current dataset while `map` returns another dataset.\n\n### Motivation\n\nLazy processing allows lower disk usage and faster experimentation.\n\n### Your contribution\n\n_","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6012\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6012\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6011","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6011\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6011\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6011\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/6011","id":1795296568,"node_id":"I_kwDODunzps5rAg04","number":6011,"title":"Documentation: wiki_dpr Dataset has no metric_type for Faiss Index","user":{"login":"YichiRockyZhang","id":29335344,"node_id":"MDQ6VXNlcjI5MzM1MzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29335344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/YichiRockyZhang","html_url":"https:\/\/github.com\/YichiRockyZhang","followers_url":"https:\/\/api.github.com\/users\/YichiRockyZhang\/followers","following_url":"https:\/\/api.github.com\/users\/YichiRockyZhang\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/YichiRockyZhang\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/YichiRockyZhang\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/YichiRockyZhang\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/YichiRockyZhang\/orgs","repos_url":"https:\/\/api.github.com\/users\/YichiRockyZhang\/repos","events_url":"https:\/\/api.github.com\/users\/YichiRockyZhang\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/YichiRockyZhang\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! 
You can do `ds.get_index(\"embeddings\").faiss_index.metric_type` to get the metric type and then match the result with the FAISS metric [enum](https:\/\/github.com\/facebookresearch\/faiss\/blob\/43d86e30736ede853c384b24667fc3ab897d6ba9\/faiss\/MetricType.h#L22-L36) (should be L2).","Ah! Thank you for pointing this out. FYI: the enum indicates it's using the inner product. Using `torch.inner` or `torch.dot` still produces a discrepancy compared to the built-in score. I think this is because of the compression\/quantization that occurs with the FAISS index."],"created_at":1688891419000,"updated_at":1689044556000,"closed_at":1689044556000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nAfter loading `wiki_dpr` using:\r\n```py\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(path='wiki_dpr', name='psgs_w100.multiset.compressed', split='train')\r\nprint(ds.get_index(\"embeddings\").metric_type) # prints nothing because the value is None\r\n```\r\nthe index does not have a defined `metric_type`. This is an issue because I do not know how the `scores` are being computed for `get_nearest_examples()`.\n\n### Steps to reproduce the bug\n\nSystem: Python 3.9.16, Transformers 4.30.2, WSL\r\n\r\nAfter loading `wiki_dpr` using:\r\n```py\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(path='wiki_dpr', name='psgs_w100.multiset.compressed', split='train')\r\nprint(ds.get_index(\"embeddings\").metric_type) # prints nothing because the value is None\r\n```\r\nthe index does not have a defined `metric_type`. This is an issue because I do not know how the `scores` are being computed for `get_nearest_examples()`.\r\n\r\n```py\r\nimport torch\r\nfrom transformers import DPRQuestionEncoder, DPRContextEncoder, DPRQuestionEncoderTokenizer, DPRContextEncoderTokenizer\r\n\r\ntokenizer = DPRQuestionEncoderTokenizer.from_pretrained(\"facebook\/dpr-question_encoder-multiset-base\")\r\nencoder = DPRQuestionEncoder.from_pretrained(\"facebook\/dpr-question_encoder-multiset-base\")\r\n\r\ndef encode_question(query, tokenizer=tokenizer, encoder=encoder):\r\n    inputs = tokenizer(query, return_tensors='pt')\r\n    question_embedding = encoder(**inputs)[0].detach().numpy()\r\n    return question_embedding\r\n\r\ndef get_knn(query, k=5, tokenizer=tokenizer, encoder=encoder, verbose=False):\r\n    enc_question = encode_question(query, tokenizer, encoder)\r\n    topk_results = ds.get_nearest_examples(index_name='embeddings', query=enc_question, k=k)\r\n\r\n    a = torch.tensor(enc_question[0]).reshape(768)\r\n    b = torch.tensor(topk_results.examples['embeddings'][0])\r\n    print(a.shape, b.shape)\r\n    print(torch.dot(a, b))\r\n    print((a-b).pow(2).sum())\r\n\r\n    return topk_results\r\n```\r\n\r\nThe [FAISS documentation](https:\/\/github.com\/facebookresearch\/faiss\/wiki\/MetricType-and-distances) suggests the metric is usually L2 distance (without the square root) or the inner product. 
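A short sketch of the enum-matching check suggested in the first comment above, assuming `faiss` is installed and `ds` is the wiki_dpr dataset with its "embeddings" index loaded, as in the surrounding snippets:

```python
# Compare the index's metric_type against the FAISS metric enum.
import faiss

metric = ds.get_index("embeddings").faiss_index.metric_type
print(metric == faiss.METRIC_INNER_PRODUCT)  # True, per the discussion above
print(metric == faiss.METRIC_L2)             # False
```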
I compute both for the sample query:\r\n```py\r\nquery = \"\"\" it catapulted into popular culture along with a line of action figures and other toys by Bandai.[2] By 2001, the media franchise had generated over $6 billion in toy sales.\r\nDespite initial criticism that its action violence targeted child audiences, the franchise has been commercially successful.\"\"\"\r\nget_knn(query,k=5)\r\n```\r\n\r\nHere, I get dot product of 80.6020 and L2 distance of 77.6616 and \r\n```py\r\nNearestExamplesResults(scores=array([76.20431 , 75.312416, 74.945404, 74.866394, 74.68506 ],\r\n dtype=float32), examples={'id': ['3081096', '2004811', '8908258', '9594124', '286575'], 'text': ['actors, resulting in the \"Power Rangers\" franchise which has continued since then into sequel TV series (with \"Power Rangers Beast Morphers\" set to premiere in 2019), comic books, video games, and three feature films, with a further cinematic universe planned. Following from the success of \"Power Rangers\", Saban acquired the rights to more of Toei\\'s library, creating \"VR Troopers\" and \"Big Bad Beetleborgs\" from several Metal Hero Series shows and \"Masked Rider\" from Kamen Rider Series footage. DIC Entertainment joined this boom by acquiring the rights to \"Gridman the Hyper Agent\" and turning it into \"Superhuman Samurai Syber-Squad\". In 2002,', \r\n```\r\n\r\nDoing `k=1` indicates the higher the outputted number, the better the match, so the metric should not be L2 distance. However, my manually computed inner product (80.6) has a discrepancy with the reported (76.2). Perhaps, this has to do with me using the `compressed` embeddings?\n\n### Expected behavior\n\n```py\r\nds = load_dataset(path='wiki_dpr', name='psgs_w100.multiset.compressed', split='train')\r\nprint(ds.get_index(\"embeddings\").metric_type) # METRIC_INNER_PRODUCT\r\n```\n\n### Environment info\n\n- `datasets` version: 2.12.0\r\n- Platform: Linux-4.18.0-477.13.1.el8_8.x86_64-x86_64-with-glibc2.28\r\n- Python version: 3.9.16\r\n- Huggingface_hub version: 0.14.1\r\n- PyArrow version: 12.0.0\r\n- Pandas version: 2.0.1","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6011\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6011\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6010","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6010\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6010\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6010\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/6010","id":1793838152,"node_id":"I_kwDODunzps5q68xI","number":6010,"title":"Improve `Dataset`'s string 
representation","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I want to take a shot at this if possible ","Yes, feel free to work on this.\r\n\r\nYou can check the PyArrow Table `__repr__` and Polars DataFrame `__repr__`\/`_repr_html_` implementations for some pointers\/ideas."],"created_at":1688747883000,"updated_at":1688999574000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":null,"pull_request":null,"body":"Currently, `Dataset.__repr__` outputs a dataset's column names and the number of rows. 
We could improve it by printing its features and the first few rows.\r\n\r\nWe should also implement `_repr_html_` to have a rich HTML representation in notebooks\/Streamlit.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6010\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6010\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6009","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6009\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6009\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6009\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6009","id":1792059808,"node_id":"PR_kwDODunzps5U1mus","number":6009,"title":"Fix cast for dictionaries with no keys","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
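Returning to the `__repr__` proposal above: a hedged sketch of the kind of output being discussed, with illustrative names and layout rather than the library's actual implementation:

```python
# Illustrative only: a richer representation that prints features
# plus the first few rows, roughly as proposed in the issue above.
from datasets import Dataset

def rich_repr(ds: Dataset, num_rows: int = 3) -> str:
    lines = [f"Dataset(num_rows={ds.num_rows})", "features:"]
    for name, feature in ds.features.items():
        lines.append(f"  {name}: {feature}")
    lines.append("first rows:")
    for row in ds.select(range(min(num_rows, ds.num_rows))):
        lines.append(f"  {row}")
    return "\n".join(lines)

ds = Dataset.from_dict({"text": ["a", "b", "c"], "label": [0, 1, 0]})
print(rich_repr(ds))
```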
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006961 \/ 0.011353 (-0.004392) | 0.004390 \/ 0.011008 (-0.006618) | 0.103249 \/ 0.038508 (0.064741) | 0.048084 \/ 0.023109 (0.024975) | 0.351213 \/ 0.275898 (0.075315) | 0.416918 \/ 0.323480 (0.093439) | 0.005539 \/ 0.007986 (-0.002446) | 0.003555 \/ 0.004328 (-0.000774) | 0.079306 \/ 0.004250 (0.075055) | 0.066937 \/ 0.037052 (0.029884) | 0.382601 \/ 0.258489 (0.124112) | 0.406125 \/ 0.293841 (0.112284) | 0.032269 \/ 0.128546 (-0.096277) | 0.009133 \/ 0.075646 (-0.066514) | 0.354449 \/ 0.419271 (-0.064822) | 0.068978 \/ 0.043533 (0.025445) | 0.352314 \/ 0.255139 (0.097175) | 0.390398 \/ 0.283200 (0.107199) | 0.025640 \/ 0.141683 (-0.116043) | 1.553865 \/ 1.452155 (0.101710) | 1.601292 \/ 1.492716 (0.108576) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.208310 \/ 0.018006 (0.190303) | 0.440076 \/ 0.000490 (0.439586) | 0.000363 \/ 0.000200 (0.000163) | 0.000059 \/ 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029173 \/ 0.037411 (-0.008238) | 0.111323 \/ 0.014526 (0.096797) | 0.123001 \/ 0.176557 (-0.053556) | 0.180180 \/ 0.737135 (-0.556955) | 0.125804 \/ 0.296338 (-0.170534) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.419919 \/ 0.215209 (0.204710) | 4.194515 \/ 2.077655 (2.116860) | 1.881234 \/ 1.504120 (0.377114) | 1.672914 \/ 1.541195 (0.131720) | 1.723102 \/ 1.468490 (0.254612) | 0.543584 \/ 4.584777 (-4.041193) | 
3.822477 \/ 3.745712 (0.076765) | 1.837946 \/ 5.269862 (-3.431915) | 1.094975 \/ 4.565676 (-3.470701) | 0.066788 \/ 0.424275 (-0.357487) | 0.011689 \/ 0.007607 (0.004082) | 0.520983 \/ 0.226044 (0.294938) | 5.209245 \/ 2.268929 (2.940316) | 2.392916 \/ 55.444624 (-53.051708) | 2.060042 \/ 6.876477 (-4.816434) | 2.162291 \/ 2.142072 (0.020219) | 0.668472 \/ 4.805227 (-4.136755) | 0.144373 \/ 6.500664 (-6.356291) | 0.066152 \/ 0.075469 (-0.009318) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.251256 \/ 1.841788 (-0.590532) | 15.161338 \/ 8.074308 (7.087030) | 14.416133 \/ 10.191392 (4.224741) | 0.166145 \/ 0.680424 (-0.514279) | 0.018168 \/ 0.534201 (-0.516033) | 0.433364 \/ 0.579283 (-0.145919) | 0.417484 \/ 0.434364 (-0.016880) | 0.502543 \/ 0.540337 (-0.037794) | 0.602904 \/ 1.386936 (-0.784032) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006946 \/ 0.011353 (-0.004407) | 0.004248 \/ 0.011008 (-0.006761) | 0.079707 \/ 0.038508 (0.041199) | 0.046226 \/ 0.023109 (0.023117) | 0.375864 \/ 0.275898 (0.099966) | 0.430740 \/ 0.323480 (0.107260) | 0.006222 \/ 0.007986 (-0.001764) | 0.003474 \/ 0.004328 (-0.000854) | 0.079622 \/ 0.004250 (0.075372) | 0.066666 \/ 0.037052 (0.029613) | 0.379487 \/ 0.258489 (0.120998) | 0.423002 \/ 0.293841 (0.129161) | 0.032836 \/ 0.128546 (-0.095710) | 0.008976 \/ 0.075646 (-0.066670) | 0.086578 \/ 0.419271 (-0.332693) | 0.055651 \/ 0.043533 (0.012118) | 0.360787 \/ 0.255139 (0.105648) | 0.384265 \/ 0.283200 (0.101065) | 0.025350 \/ 0.141683 (-0.116333) | 1.547880 \/ 1.452155 (0.095725) | 1.605850 \/ 1.492716 (0.113134) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.184227 \/ 0.018006 (0.166220) | 0.442071 \/ 0.000490 (0.441582) | 0.002887 \/ 0.000200 (0.002687) | 0.000088 \/ 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.031923 \/ 0.037411 (-0.005488) | 0.119093 \/ 0.014526 (0.104568) | 0.128704 \/ 0.176557 (-0.047853) | 0.187065 \/ 0.737135 (-0.550070) | 0.134135 \/ 0.296338 (-0.162204) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.455731 \/ 0.215209 (0.240522) | 4.562911 \/ 2.077655 (2.485256) | 2.247431 \/ 1.504120 (0.743311) | 2.053346 \/ 1.541195 (0.512151) | 2.049611 \/ 1.468490 (0.581121) | 0.546069 \/ 4.584777 (-4.038708) | 
3.821852 \/ 3.745712 (0.076140) | 3.358497 \/ 5.269862 (-1.911364) | 1.667697 \/ 4.565676 (-2.897979) | 0.067968 \/ 0.424275 (-0.356307) | 0.012344 \/ 0.007607 (0.004737) | 0.550864 \/ 0.226044 (0.324820) | 5.496867 \/ 2.268929 (3.227939) | 2.680031 \/ 55.444624 (-52.764594) | 2.328673 \/ 6.876477 (-4.547804) | 2.436754 \/ 2.142072 (0.294682) | 0.681195 \/ 4.805227 (-4.124033) | 0.148761 \/ 6.500664 (-6.351904) | 0.067716 \/ 0.075469 (-0.007753) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.353798 \/ 1.841788 (-0.487990) | 15.992965 \/ 8.074308 (7.918657) | 14.051539 \/ 10.191392 (3.860147) | 0.181087 \/ 0.680424 (-0.499337) | 0.018653 \/ 0.534201 (-0.515548) | 0.433499 \/ 0.579283 (-0.145784) | 0.428845 \/ 0.434364 (-0.005519) | 0.501100 \/ 0.540337 (-0.039238) | 0.603666 \/ 1.386936 (-0.783270) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#10cfa871a2f387fe9c6360e1873ea74c6d69ff67 \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.010983 \/ 0.011353 (-0.000370) | 0.005630 \/ 0.011008 (-0.005378) | 0.109967 \/ 0.038508 (0.071458) | 0.101580 \/ 0.023109 (0.078471) | 0.490205 \/ 0.275898 (0.214307) | 0.534653 \/ 0.323480 (0.211173) | 0.008365 \/ 0.007986 (0.000379) | 0.004317 \/ 0.004328 (-0.000012) | 0.082429 \/ 0.004250 (0.078179) | 0.080556 \/ 0.037052 (0.043504) | 0.494627 \/ 0.258489 (0.236138) | 0.544189 \/ 0.293841 (0.250348) | 0.049419 \/ 0.128546 (-0.079127) | 0.014033 \/ 0.075646 (-0.061613) | 0.370406 \/ 0.419271 (-0.048866) | 0.083468 \/ 0.043533 (0.039935) | 0.463829 \/ 0.255139 (0.208690) | 0.507516 \/ 0.283200 (0.224316) | 0.053266 \/ 0.141683 (-0.088417) | 1.778680 \/ 1.452155 (0.326525) | 1.916616 \/ 1.492716 (0.423900) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.267646 \/ 0.018006 (0.249640) | 0.617824 \/ 0.000490 (0.617334) | 0.007720 \/ 0.000200 (0.007520) | 0.000139 \/ 0.000054 (0.000085) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.034464 \/ 0.037411 (-0.002948) | 0.113626 \/ 0.014526 (0.099100) | 0.118911 \/ 0.176557 (-0.057646) | 0.194701 \/ 0.737135 (-0.542434) | 0.123431 \/ 0.296338 (-0.172907) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.606073 \/ 0.215209 (0.390863) | 6.086393 \/ 2.077655 (4.008738) | 2.568712 \/ 1.504120 (1.064593) | 2.260801 \/ 1.541195 (0.719606) | 2.411798 \/ 1.468490 (0.943307) | 0.876433 \/ 4.584777 (-3.708344) | 
5.521280 \/ 3.745712 (1.775568) | 5.969722 \/ 5.269862 (0.699861) | 3.671028 \/ 4.565676 (-0.894649) | 0.097082 \/ 0.424275 (-0.327193) | 0.011354 \/ 0.007607 (0.003747) | 0.713842 \/ 0.226044 (0.487798) | 7.291172 \/ 2.268929 (5.022244) | 3.315272 \/ 55.444624 (-52.129352) | 2.777487 \/ 6.876477 (-4.098990) | 3.025449 \/ 2.142072 (0.883377) | 1.014115 \/ 4.805227 (-3.791112) | 0.217928 \/ 6.500664 (-6.282736) | 0.083097 \/ 0.075469 (0.007627) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.640060 \/ 1.841788 (-0.201728) | 25.342172 \/ 8.074308 (17.267864) | 22.776510 \/ 10.191392 (12.585118) | 0.227300 \/ 0.680424 (-0.453124) | 0.032233 \/ 0.534201 (-0.501968) | 0.507547 \/ 0.579283 (-0.071736) | 0.647044 \/ 0.434364 (0.212680) | 0.607019 \/ 0.540337 (0.066682) | 0.823548 \/ 1.386936 (-0.563388) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009576 \/ 0.011353 (-0.001777) | 0.009322 \/ 0.011008 (-0.001687) | 0.087184 \/ 0.038508 (0.048676) | 0.100795 \/ 0.023109 (0.077685) | 0.492138 \/ 0.275898 (0.216240) | 0.528386 \/ 0.323480 (0.204906) | 0.006689 \/ 0.007986 (-0.001296) | 0.004735 \/ 0.004328 (0.000406) | 0.085519 \/ 0.004250 (0.081269) | 0.072648 \/ 0.037052 (0.035595) | 0.496068 \/ 0.258489 (0.237579) | 0.549634 \/ 0.293841 (0.255793) | 0.049709 \/ 0.128546 (-0.078837) | 0.015077 \/ 0.075646 (-0.060569) | 0.099445 \/ 0.419271 (-0.319826) | 0.068080 \/ 0.043533 (0.024547) | 0.500426 \/ 0.255139 (0.245287) | 0.531437 \/ 0.283200 (0.248238) | 0.053176 \/ 0.141683 (-0.088507) | 1.827942 \/ 1.452155 (0.375787) | 1.914286 \/ 1.492716 (0.421570) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.247658 \/ 0.018006 (0.229652) | 0.590805 \/ 0.000490 (0.590315) | 0.005319 \/ 0.000200 (0.005119) | 0.000165 \/ 0.000054 (0.000110) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.036993 \/ 0.037411 (-0.000418) | 0.112944 \/ 0.014526 (0.098419) | 0.118964 \/ 0.176557 (-0.057593) | 0.194867 \/ 0.737135 (-0.542269) | 0.120816 \/ 0.296338 (-0.175523) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.638062 \/ 0.215209 (0.422853) | 6.246785 \/ 2.077655 (4.169130) | 2.957779 \/ 1.504120 (1.453659) | 2.739118 \/ 1.541195 (1.197924) | 2.795362 \/ 1.468490 (1.326872) | 0.890532 \/ 4.584777 (-3.694245) | 
5.508198 \/ 3.745712 (1.762486) | 5.222315 \/ 5.269862 (-0.047547) | 3.152731 \/ 4.565676 (-1.412946) | 0.098344 \/ 0.424275 (-0.325931) | 0.008800 \/ 0.007607 (0.001193) | 0.757889 \/ 0.226044 (0.531845) | 7.545715 \/ 2.268929 (5.276787) | 3.694536 \/ 55.444624 (-51.750088) | 3.112872 \/ 6.876477 (-3.763605) | 3.182358 \/ 2.142072 (1.040285) | 1.028171 \/ 4.805227 (-3.777056) | 0.215223 \/ 6.500664 (-6.285441) | 0.085856 \/ 0.075469 (0.010387) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.853138 \/ 1.841788 (0.011350) | 25.939672 \/ 8.074308 (17.865364) | 23.118029 \/ 10.191392 (12.926637) | 0.250599 \/ 0.680424 (-0.429825) | 0.029942 \/ 0.534201 (-0.504259) | 0.508748 \/ 0.579283 (-0.070535) | 0.593966 \/ 0.434364 (0.159602) | 0.605499 \/ 0.540337 (0.065162) | 0.863827 \/ 1.386936 (-0.523109) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#5d15950d99677e9473cdcd31cfd83aa17e313e28 \"CML watermark\")\n"],"created_at":1688669294000,"updated_at":1688739180000,"closed_at":1688738473000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/6009","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6009","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6009.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6009.patch","merged_at":1688738473000},"body":"Fix #5677 ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6009\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6009\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6008","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6008\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6008\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6008\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/6008","id":1789869344,"node_id":"I_kwDODunzps5qrz0g","number":6008,"title":"Dataset.from_generator consistently freezes at ~1000 
rows","user":{"login":"andreemic","id":27695722,"node_id":"MDQ6VXNlcjI3Njk1NzIy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27695722?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/andreemic","html_url":"https:\/\/github.com\/andreemic","followers_url":"https:\/\/api.github.com\/users\/andreemic\/followers","following_url":"https:\/\/api.github.com\/users\/andreemic\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/andreemic\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/andreemic\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/andreemic\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/andreemic\/orgs","repos_url":"https:\/\/api.github.com\/users\/andreemic\/repos","events_url":"https:\/\/api.github.com\/users\/andreemic\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/andreemic\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["By default, we write data to disk (so it can be memory-mapped) every 1000 rows\/samples. You can control this with the `writer_batch_size` parameter. Also, when working with fixed-size arrays, the `ArrayXD` feature types yield better performance (e.g., in your case, `features=datasets.Features({\"i\": datasets.Array3D(shape=(512,512,3), dtype=\"float32\")})` should be faster).\r\n\r\nOur support for multi-dim arrays could be better, and we plan to improve it as part of https:\/\/github.com\/huggingface\/datasets\/issues\/5272.","> By default, we write data to disk (so it can be memory-mapped) every 1000 rows\/samples. You can control this with the `writer_batch_size` parameter. Also, when working with fixed-size arrays, the `ArrayXD` feature types yield better performance (e.g., in your case, `features=datasets.Features({\"i\": datasets.Array3D(shape=(512,512,3), dtype=\"float32\")})` should be faster).\r\n> \r\n> Our support for multi-dim arrays could be better, and we plan to improve it as part of #5272.\r\n\r\nThanks for the explanation! The Image array was just for demonstration, I use PIL Images in practice. Does that make a difference? What's the best approach for a dataset with PIL Images as rows?","It's best to use the `datasets.Image()` feature type for PIL images (to save space) :)"],"created_at":1688573208000,"updated_at":1688996799000,"closed_at":1688996799000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nWhenever I try to create a dataset which contains images using `Dataset.from_generator`, it freezes around 996 rows. I suppose it has something to do with memory consumption, but there's more memory available. 
\r\n\r\nSomehow it worked a few times, but mostly this makes the datasets library much more cumbersome to work with, because generators are the easiest way to turn an existing dataset into a Hugging Face dataset.\r\n\r\nI've let it run in the frozen state for way longer than it could possibly take to load the actual dataset.\r\n\r\nLet me know if you have ideas on how to resolve it!\n\n### Steps to reproduce the bug\n\n```python\r\nfrom datasets import Dataset\r\nimport numpy as np\r\n\r\ndef gen():\r\n    for row in range(10000):\r\n        yield {\"i\": np.random.rand(512, 512, 3)}\r\n\r\nDataset.from_generator(gen)\r\n# -> 90% of the time gets stuck around 1000 rows\r\n```\n\n### Expected behavior\n\nShould continue through all the examples yielded by the generator, or at least throw an error or otherwise communicate what's going on.\n\n### Environment info\n\n- `datasets` version: 2.8.0\r\n- Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.29\r\n- Python version: 3.8.10\r\n- PyArrow version: 12.0.1\r\n- Pandas version: 1.5.1\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6008\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6008\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6007","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6007\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6007\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6007\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/6007","id":1789782693,"node_id":"I_kwDODunzps5qreql","number":6007,"title":"Get an error \"OverflowError: Python int too large to convert to C long\" when loading a large dataset","user":{"login":"silverriver","id":2529049,"node_id":"MDQ6VXNlcjI1MjkwNDk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2529049?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/silverriver","html_url":"https:\/\/github.com\/silverriver","followers_url":"https:\/\/api.github.com\/users\/silverriver\/followers","following_url":"https:\/\/api.github.com\/users\/silverriver\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/silverriver\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/silverriver\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/silverriver\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/silverriver\/orgs","repos_url":"https:\/\/api.github.com\/users\/silverriver\/repos","events_url":"https:\/\/api.github.com\/users\/silverriver\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/silverriver\/received_events","type":"User","site_admin":false},"labels":[{"id":5705560427,"node_id":"LA_kwDODunzps8AAAABVBPxaw","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/arrow","name":"arrow","color":"c2e0c6","default":false,"description":"Related to Apache Arrow"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This error means that one of the int32 (`Value(\"int32\")`) columns in the dataset has a value that is out of the 
valid (int32) range.\r\n\r\nI'll open a PR to print the name of a problematic column to make debugging such errors easier.","I am afraid int32 is not the reason for this error.\r\n\r\nI have submitted a commit to use int64 for all ints in the dataset:\r\nhttps:\/\/huggingface.co\/datasets\/liwu\/MNBVC\/commit\/857ac00d9eab96a6708ad6a82bd9001686042a9e\r\n\r\nand I have updated my env to the latest datasets release:\r\nCopy-and-paste the text below in your GitHub issue.\r\n\r\n- `datasets` version: 2.13.1\r\n- Platform: macOS-13.2.1-arm64-arm-64bit\r\n- Python version: 3.11.2\r\n- Huggingface_hub version: 0.13.4\r\n- PyArrow version: 11.0.0\r\n- Pandas version: 1.5.3\r\n\r\nBut the error still exist\r\n\r\n```\r\nDownloading and preparing dataset mnbvc\/news_peoples_daily to \/Users\/silver\/.cache\/huggingface\/datasets\/liwu___mnbvc\/news_peoples_daily\/0.0.1\/ee380f6309fe9b8b0d1fb14d77118f132444f22c8c4b28bf5c1645312688e051...\r\nDownloading data files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 12\/12 [00:00<00:00, 9070.40it\/s]\r\nExtracting data files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 12\/12 [00:00<00:00, 2697.16it\/s]\r\n---------------------------------------------------------------------------\r\nOverflowError Traceback (most recent call last)\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/datasets\/builder.py:1647, in GeneratorBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id)\r\n 1646 example = self.info.features.encode_example(record) if self.info.features is not None else record\r\n-> 1647 writer.write(example, key)\r\n 1648 num_examples_progress_update += 1\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/datasets\/arrow_writer.py:490, in ArrowWriter.write(self, 
example, key, writer_batch_size)\r\n 488 self.hkey_record = []\r\n--> 490 self.write_examples_on_file()\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/datasets\/arrow_writer.py:448, in ArrowWriter.write_examples_on_file(self)\r\n 444 batch_examples[col] = [\r\n 445 row[0][col].to_pylist()[0] if isinstance(row[0][col], (pa.Array, pa.ChunkedArray)) else row[0][col]\r\n 446 for row in self.current_examples\r\n 447 ]\r\n--> 448 self.write_batch(batch_examples=batch_examples)\r\n 449 self.current_examples = []\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/datasets\/arrow_writer.py:553, in ArrowWriter.write_batch(self, batch_examples, writer_batch_size)\r\n 552 typed_sequence = OptimizedTypedSequence(col_values, type=col_type, try_type=col_try_type, col=col)\r\n--> 553 arrays.append(pa.array(typed_sequence))\r\n 554 inferred_features[col] = typed_sequence.get_inferred_type()\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/pyarrow\/array.pxi:236, in pyarrow.lib.array()\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/pyarrow\/array.pxi:110, in pyarrow.lib._handle_arrow_array_protocol()\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/datasets\/arrow_writer.py:189, in TypedSequence.__arrow_array__(self, type)\r\n 188 trying_cast_to_python_objects = True\r\n--> 189 out = pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))\r\n 190 # use smaller integer precisions if possible\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/pyarrow\/array.pxi:320, in pyarrow.lib.array()\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/pyarrow\/array.pxi:39, in pyarrow.lib._sequence_to_array()\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/pyarrow\/error.pxi:144, in pyarrow.lib.pyarrow_internal_check_status()\r\n\r\nOverflowError: Python int too large to convert to C long\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nOverflowError Traceback (most recent call last)\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/datasets\/builder.py:1656, in GeneratorBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id)\r\n 1655 num_shards = shard_id + 1\r\n-> 1656 num_examples, num_bytes = writer.finalize()\r\n 1657 writer.close()\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/datasets\/arrow_writer.py:584, in ArrowWriter.finalize(self, close_stream)\r\n 583 self.hkey_record = []\r\n--> 584 self.write_examples_on_file()\r\n 585 # If schema is known, infer features even if no examples were written\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/datasets\/arrow_writer.py:448, in ArrowWriter.write_examples_on_file(self)\r\n 444 batch_examples[col] = [\r\n 445 row[0][col].to_pylist()[0] if isinstance(row[0][col], (pa.Array, pa.ChunkedArray)) else row[0][col]\r\n 446 for row in self.current_examples\r\n 447 ]\r\n--> 448 self.write_batch(batch_examples=batch_examples)\r\n 449 self.current_examples = []\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/datasets\/arrow_writer.py:553, in ArrowWriter.write_batch(self, batch_examples, writer_batch_size)\r\n 552 typed_sequence = OptimizedTypedSequence(col_values, type=col_type, try_type=col_try_type, col=col)\r\n--> 553 arrays.append(pa.array(typed_sequence))\r\n 554 inferred_features[col] = typed_sequence.get_inferred_type()\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/pyarrow\/array.pxi:236, in pyarrow.lib.array()\r\n\r\nFile 
~\/git\/venv\/lib\/python3.11\/site-packages\/pyarrow\/array.pxi:110, in pyarrow.lib._handle_arrow_array_protocol()\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/datasets\/arrow_writer.py:189, in TypedSequence.__arrow_array__(self, type)\r\n 188 trying_cast_to_python_objects = True\r\n--> 189 out = pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))\r\n 190 # use smaller integer precisions if possible\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/pyarrow\/array.pxi:320, in pyarrow.lib.array()\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/pyarrow\/array.pxi:39, in pyarrow.lib._sequence_to_array()\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/pyarrow\/error.pxi:144, in pyarrow.lib.pyarrow_internal_check_status()\r\n\r\nOverflowError: Python int too large to convert to C long\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nDatasetGenerationError Traceback (most recent call last)\r\nCell In[2], line 1\r\n----> 1 dataset = load_dataset(\"liwu\/MNBVC\", 'news_peoples_daily', split='train')\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/datasets\/load.py:1809, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)\r\n 1806 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES\r\n 1808 # Download and prepare data\r\n-> 1809 builder_instance.download_and_prepare(\r\n 1810 download_config=download_config,\r\n 1811 download_mode=download_mode,\r\n 1812 verification_mode=verification_mode,\r\n 1813 try_from_hf_gcs=try_from_hf_gcs,\r\n 1814 num_proc=num_proc,\r\n 1815 storage_options=storage_options,\r\n 1816 )\r\n 1818 # Build dataset for splits\r\n 1819 keep_in_memory = (\r\n 1820 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)\r\n 1821 )\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/datasets\/builder.py:909, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)\r\n 907 if num_proc is not None:\r\n 908 prepare_split_kwargs[\"num_proc\"] = num_proc\r\n--> 909 self._download_and_prepare(\r\n 910 dl_manager=dl_manager,\r\n 911 verification_mode=verification_mode,\r\n 912 **prepare_split_kwargs,\r\n 913 **download_and_prepare_kwargs,\r\n 914 )\r\n 915 # Sync info\r\n 916 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/datasets\/builder.py:1670, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs)\r\n 1669 def _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs):\r\n-> 1670 super()._download_and_prepare(\r\n 1671 dl_manager,\r\n 1672 verification_mode,\r\n 1673 check_duplicate_keys=verification_mode == VerificationMode.BASIC_CHECKS\r\n 1674 or verification_mode == VerificationMode.ALL_CHECKS,\r\n 1675 **prepare_splits_kwargs,\r\n 1676 )\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/datasets\/builder.py:1004, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)\r\n 1000 
split_dict.add(split_generator.split_info)\r\n 1002 try:\r\n 1003 # Prepare split will record examples associated to the split\r\n-> 1004 self._prepare_split(split_generator, **prepare_split_kwargs)\r\n 1005 except OSError as e:\r\n 1006 raise OSError(\r\n 1007 \"Cannot find data file. \"\r\n 1008 + (self.manual_download_instructions or \"\")\r\n 1009 + \"\\nOriginal error:\\n\"\r\n 1010 + str(e)\r\n 1011 ) from None\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/datasets\/builder.py:1508, in GeneratorBasedBuilder._prepare_split(self, split_generator, check_duplicate_keys, file_format, num_proc, max_shard_size)\r\n 1506 job_id = 0\r\n 1507 with pbar:\r\n-> 1508 for job_id, done, content in self._prepare_split_single(\r\n 1509 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args\r\n 1510 ):\r\n 1511 if done:\r\n 1512 result = content\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/datasets\/builder.py:1665, in GeneratorBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id)\r\n 1663 if isinstance(e, SchemaInferenceError) and e.__context__ is not None:\r\n 1664 e = e.__context__\r\n-> 1665 raise DatasetGenerationError(\"An error occurred while generating the dataset\") from e\r\n 1667 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)\r\n\r\nDatasetGenerationError: An error occurred while generating the dataset\r\n```\r\n\r\nBesides, it works fine when I am using the streamed dataset.","`simhash` is the problematic column - it has values such as `18329103420363166823` that are out of the int64 range. You can fix this by setting the feature type to `Value(\"string\")` (it's advised to use this type for hash values in general).\r\n\r\n> Besides, it works fine when I am using the streamed dataset.\r\n\r\nStreaming yields Python dictionaries from the script without converting them to the Arrow representation, as this conversion step is not that cheap performance-wise.","I am using uint64 for simhash.\r\n\r\nuint64 ranges up to about 1.84E19 (2^64 - 1).\r\n\r\n18329103420363166823 is less than this value.\r\n\r\nMoreover, our simhash algorithm uses 64 bits, so it should fit in uint64.\r\n","You are right. I overlooked the feature type.\r\n\r\nThis is a reproducer:\r\n```python\r\nimport pyarrow as pa\r\nfrom datasets import Value\r\nfrom datasets.arrow_writer import TypedSequence\r\n\r\npa.array(TypedSequence([18329103420363166823], type=Value(\"uint64\")))\r\n```\r\n\r\n`pa.array([18329103420363166823])` also fails with the same error, so it seems PyArrow does not always infer the correct type as NumPy does (`uint64` in this case).\r\n\r\nI'll report this issue in the Arrow repo.\r\n\r\n`pa.array([18329103420363166823], pa.uint64())` works, so maybe we can implement a temporary fix (supporting complex input such as `[{\"image\": pil_image, \"num\": uint64_value}]` would be hard though).\r\n\r\nIn the meantime, you should be able to bypass this error by returning the `simhash` values as NumPy scalars in the script:\r\n```python\r\ndef _generate_examples(self, ...):\r\n    ...\r\n    yield {..., \"simhash\": np.uint64(simhash), ...}\r\n```","Thank you for checking this issue in detail.\r\n\r\nHowever, it seems that using `np.uint64(simhash)` does not work. The same issue still exists.\r\n\r\nhttps:\/\/huggingface.co\/datasets\/liwu\/MNBVC\/commit\/1e44f1e400b7e61052647d44c99cdae3bae9c830\r\n\r\nAnyway, we decided to use the string type for these simhash values. 
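\r\n\r\nFor reference, a minimal sketch of that string-typed workaround (assuming a single `simhash` column; the value is the problematic one from this thread):\r\n\r\n```python\r\nfrom datasets import Dataset, Features, Value\r\n\r\n# Declaring the hash column as a string sidesteps PyArrow's integer\r\n# inference, which overflows on values above the int64 maximum.\r\nfeatures = Features({\"simhash\": Value(\"string\")})\r\n\r\ndef gen():\r\n    yield {\"simhash\": str(18329103420363166823)}\r\n\r\nds = Dataset.from_generator(gen, features=features)\r\nprint(ds[0])  # {'simhash': '18329103420363166823'}\r\n```\r\n\r\n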
Hopefully pyarrow can fix this bug soon.","Arrow issue: https:\/\/github.com\/apache\/arrow\/issues\/36520"],"created_at":1688570210000,"updated_at":1689016277000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\r\n\r\nWhen loading a large dataset with the following code\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"liwu\/MNBVC\", 'news_peoples_daily', split='train')\r\n```\r\n\r\nWe encountered the error: \"OverflowError: Python int too large to convert to C long\"\r\nThe error looks something like:\r\n\r\n```\r\nOverflowError: Python int too large to convert to C long\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nOverflowError Traceback (most recent call last)\r\n in \r\n----> 1 dataset = load_dataset(\"liwu\/MNBVC\", 'news_peoples_daily', split='train', cache_dir='\/sfs\/MNBVC\/.cache\/')\r\n\r\n\/sfs\/MNBVC\/venv\/lib64\/python3.6\/site-packages\/datasets\/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)\r\n 1749 ignore_verifications=ignore_verifications,\r\n 1750 try_from_hf_gcs=try_from_hf_gcs,\r\n-> 1751 use_auth_token=use_auth_token,\r\n 1752 )\r\n 1753 \r\n\r\n\/sfs\/MNBVC\/venv\/lib64\/python3.6\/site-packages\/datasets\/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)\r\n 703 if not downloaded_from_gcs:\r\n 704 self._download_and_prepare(\r\n--> 705 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 706 )\r\n 707 # Sync info\r\n\r\n\/sfs\/MNBVC\/venv\/lib64\/python3.6\/site-packages\/datasets\/builder.py in _download_and_prepare(self, dl_manager, verify_infos)\r\n 1225 \r\n 1226 def _download_and_prepare(self, dl_manager, verify_infos):\r\n-> 1227 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)\r\n 1228 \r\n 1229 def _get_examples_iterable_for_split(self, split_generator: SplitGenerator) -> ExamplesIterable:\r\n\r\n\/sfs\/MNBVC\/venv\/lib64\/python3.6\/site-packages\/datasets\/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 791 try:\r\n 792 # Prepare split will record examples associated to the split\r\n--> 793 self._prepare_split(split_generator, **prepare_split_kwargs)\r\n 794 except OSError as e:\r\n 795 raise OSError(\r\n\r\n\/sfs\/MNBVC\/venv\/lib64\/python3.6\/site-packages\/datasets\/builder.py in _prepare_split(self, split_generator, check_duplicate_keys)\r\n 1219 writer.write(example, key)\r\n 1220 finally:\r\n-> 1221 num_examples, num_bytes = writer.finalize()\r\n 1222 \r\n 1223 split_generator.split_info.num_examples = num_examples\r\n\r\n\/sfs\/MNBVC\/venv\/lib64\/python3.6\/site-packages\/datasets\/arrow_writer.py in finalize(self, close_stream)\r\n 536 # Re-intializing to empty list for next batch\r\n 537 self.hkey_record = []\r\n--> 538 self.write_examples_on_file()\r\n 539 if self.pa_writer is None:\r\n 540 if self.schema:\r\n\r\n\/sfs\/MNBVC\/venv\/lib64\/python3.6\/site-packages\/datasets\/arrow_writer.py in write_examples_on_file(self)\r\n 407 # Since current_examples contains (example, key) tuples\r\n 408 batch_examples[col] = [row[0][col] for row in self.current_examples]\r\n--> 
409 self.write_batch(batch_examples=batch_examples)\r\n 410 self.current_examples = []\r\n 411 \r\n\r\n\/sfs\/MNBVC\/venv\/lib64\/python3.6\/site-packages\/datasets\/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size)\r\n 506 col_try_type = try_features[col] if try_features is not None and col in try_features else None\r\n 507 typed_sequence = OptimizedTypedSequence(batch_examples[col], type=col_type, try_type=col_try_type, col=col)\r\n--> 508 arrays.append(pa.array(typed_sequence))\r\n 509 inferred_features[col] = typed_sequence.get_inferred_type()\r\n 510 schema = inferred_features.arrow_schema if self.pa_writer is None else self.schema\r\n\r\n\/sfs\/MNBVC\/venv\/lib64\/python3.6\/site-packages\/pyarrow\/array.pxi in pyarrow.lib.array()\r\n\r\n\/sfs\/MNBVC\/venv\/lib64\/python3.6\/site-packages\/pyarrow\/array.pxi in pyarrow.lib._handle_arrow_array_protocol()\r\n\r\n\/sfs\/MNBVC\/venv\/lib64\/python3.6\/site-packages\/datasets\/arrow_writer.py in __arrow_array__(self, type)\r\n 180 else:\r\n 181 trying_cast_to_python_objects = True\r\n--> 182 out = pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))\r\n 183 # use smaller integer precisions if possible\r\n 184 if self.trying_int_optimization:\r\n\r\n\/sfs\/MNBVC\/venv\/lib64\/python3.6\/site-packages\/pyarrow\/array.pxi in pyarrow.lib.array()\r\n\r\n\/sfs\/MNBVC\/venv\/lib64\/python3.6\/site-packages\/pyarrow\/array.pxi in pyarrow.lib._sequence_to_array()\r\n\r\n\/sfs\/MNBVC\/venv\/lib64\/python3.6\/site-packages\/pyarrow\/error.pxi in pyarrow.lib.pyarrow_internal_check_status()\r\n\r\nOverflowError: Python int too large to convert to C long\r\n```\r\n\r\nHowever, that dataset can be loaded in a streaming manner:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"liwu\/MNBVC\", 'news_peoples_daily', split='train', streaming=True)\r\n\r\nfor i in dataset:\r\n    pass  # it works well\r\n```\r\n\r\nAnother issue is reported in our dataset hub:\r\nhttps:\/\/huggingface.co\/datasets\/liwu\/MNBVC\/discussions\/2\r\n\r\n### Steps to reproduce the bug\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"liwu\/MNBVC\", 'news_peoples_daily', split='train')\r\n```\r\n\r\n### Expected behavior\r\n\r\nThe dataset can be loaded safely.\r\n\r\n### Environment info\r\n\r\n- `datasets` version: 2.4.0\r\n- Platform: Linux-3.10.0-1160.an7.x86_64-x86_64-with-centos-7.9\r\n- Python version: 3.6.8\r\n- PyArrow version: 6.0.1\r\n- Pandas version: 1.1.5","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6007\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6007\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6006","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6006\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6006\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6006\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/6006","id":1788855582,"node_id":"I_kwDODunzps5qn8Ue","number":6006,"title":"NotADirectoryError when loading 
gigawords","user":{"login":"xipq","id":115634163,"node_id":"U_kgDOBuRv8w","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/115634163?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/xipq","html_url":"https:\/\/github.com\/xipq","followers_url":"https:\/\/api.github.com\/users\/xipq\/followers","following_url":"https:\/\/api.github.com\/users\/xipq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/xipq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/xipq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/xipq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/xipq\/orgs","repos_url":"https:\/\/api.github.com\/users\/xipq\/repos","events_url":"https:\/\/api.github.com\/users\/xipq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/xipq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["issue due to corrupted download files. resolved after cleaning download cache. sorry for any inconvinence."],"created_at":1688538221000,"updated_at":1688538662000,"closed_at":1688538661000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\ngot `NotADirectoryError` whtn loading gigawords dataset\n\n### Steps to reproduce the bug\n\nWhen running\r\n```\r\nimport datasets\r\ndatasets.load_dataset('gigaword')\r\n```\r\n\r\nGot the following exception:\r\n```bash\r\nTraceback (most recent call last): [0\/1862]\r\n File \"\/home\/x\/.conda\/envs\/dataproc\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 1629, in _prepare_split_single \r\n for key, record in generator: \r\n File \"\/home\/x\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/gigaword\/ea83a8b819190acac5f2dae011fad51dccf269a0604ec5dd24795b\r\n64efb424b6\/gigaword.py\", line 115, in _generate_examples \r\n with open(src_path, encoding=\"utf-8\") as f_d, open(tgt_path, encoding=\"utf-8\") as f_s:\r\n File \"\/home\/x\/.conda\/envs\/dataproc\/lib\/python3.8\/site-packages\/datasets\/streaming.py\", line 71, in wrapper\r\n return function(*args, use_auth_token=use_auth_token, **kwargs)\r\n File \"\/home\/x\/.conda\/envs\/dataproc\/lib\/python3.8\/site-packages\/datasets\/download\/streaming_download_manager.py\", line 493, in xope\r\nn \r\n return open(main_hop, mode, *args, **kwargs) \r\nNotADirectoryError: [Errno 20] Not a directory: '\/home\/x\/.cache\/huggingface\/datasets\/downloads\/6da52431bb5124d90cf51a0187d2dbee9046e\r\n89780c4be7599794a4f559048ec\/org_data\/train.src.txt'\r\n \r\nThe above exception was the direct cause of the following exception:\r\n \r\nTraceback (most recent call last): \r\n File \"gigaword.py\", line 38, in \r\n main() \r\n File \"gigaword.py\", line 35, in main \r\n train, dev, test = dataset.generate_k_shot_data(k=32, seed=seed, path=\"..\/data\/\")\r\n File \"\/home\/x\/MICL\/preprocess\/fewshot_gym_dataset.py\", line 199, in generate_k_shot_data \r\n dataset = self.load_dataset() \r\n File \"gigaword.py\", line 29, in load_dataset \r\n return datasets.load_dataset('gigaword') \r\n File \"\/home\/x\/.conda\/envs\/dataproc\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 1809, in load_dataset \r\n builder_instance.download_and_prepare( \r\n File \"\/home\/x\/.conda\/envs\/dataproc\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 909, in download_and_prepare\r\n 
self._download_and_prepare( \r\n File \"\/home\/x\/.conda\/envs\/dataproc\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 1670, in _download_and_prepare\r\n super()._download_and_prepare( \r\n File \"\/home\/x\/.conda\/envs\/dataproc\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 1004, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs) \r\n File \"\/home\/x\/.conda\/envs\/dataproc\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 1508, in _prepare_split\r\n for job_id, done, content in self._prepare_split_single( \r\n File \"\/home\/x\/.conda\/envs\/dataproc\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 1665, in _prepare_split_single \r\n raise DatasetGenerationError(\"An error occurred while generating the dataset\") from e\r\ndatasets.builder.DatasetGenerationError: An error occurred while generating the dataset\r\n```\r\n\n\n### Expected behavior\n\nDownload and process the dataset successfully\n\n### Environment info\n\n- `datasets` version: 2.13.1\r\n- Platform: Linux-5.0.0-1032-azure-x86_64-with-glibc2.10\r\n- Python version: 3.8.0\r\n- Huggingface_hub version: 0.15.1\r\n- PyArrow version: 12.0.1\r\n- Pandas version: 2.0.3\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6006\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6006\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6005","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6005\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6005\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6005\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6005","id":1788103576,"node_id":"PR_kwDODunzps5UoJ91","number":6005,"title":"Drop Python 3.7 support","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006152 \/ 0.011353 (-0.005200) | 0.003916 \/ 0.011008 (-0.007092) | 0.097355 \/ 0.038508 (0.058847) | 0.037228 \/ 0.023109 (0.014119) | 0.315753 \/ 0.275898 (0.039855) | 0.387949 \/ 0.323480 (0.064470) | 0.004804 \/ 0.007986 (-0.003181) | 0.002975 \/ 0.004328 (-0.001353) | 0.076932 \/ 0.004250 (0.072682) | 0.053497 \/ 0.037052 (0.016445) | 0.331143 \/ 0.258489 (0.072654) | 0.388347 \/ 0.293841 (0.094506) | 0.027535 \/ 0.128546 (-0.101011) | 0.008509 \/ 0.075646 (-0.067137) | 0.312639 \/ 0.419271 (-0.106632) | 0.047212 \/ 0.043533 (0.003679) | 0.316875 \/ 0.255139 (0.061736) | 0.352191 \/ 0.283200 (0.068992) | 0.021380 \/ 0.141683 (-0.120303) | 1.541401 \/ 1.452155 (0.089247) | 1.519420 \/ 1.492716 (0.026704) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.206332 \/ 0.018006 (0.188326) | 0.412252 \/ 0.000490 (0.411762) | 0.005119 \/ 0.000200 (0.004919) | 0.000077 \/ 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.023856 \/ 0.037411 (-0.013556) | 0.098216 \/ 0.014526 (0.083691) | 0.106553 \/ 0.176557 (-0.070003) | 0.168767 \/ 0.737135 (-0.568369) | 0.109244 \/ 0.296338 (-0.187094) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.457580 \/ 0.215209 (0.242371) | 4.583246 \/ 2.077655 (2.505591) | 2.296356 \/ 1.504120 (0.792236) | 2.096216 \/ 1.541195 (0.555021) | 2.159086 \/ 1.468490 (0.690596) | 0.557905 \/ 4.584777 (-4.026872) | 
3.345910 \/ 3.745712 (-0.399802) | 1.767436 \/ 5.269862 (-3.502426) | 1.021583 \/ 4.565676 (-3.544094) | 0.067265 \/ 0.424275 (-0.357011) | 0.011411 \/ 0.007607 (0.003804) | 0.559841 \/ 0.226044 (0.333797) | 5.586892 \/ 2.268929 (3.317963) | 2.735520 \/ 55.444624 (-52.709104) | 2.429393 \/ 6.876477 (-4.447084) | 2.544901 \/ 2.142072 (0.402829) | 0.667603 \/ 4.805227 (-4.137625) | 0.136244 \/ 6.500664 (-6.364421) | 0.066961 \/ 0.075469 (-0.008508) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.206529 \/ 1.841788 (-0.635259) | 13.988306 \/ 8.074308 (5.913998) | 13.481813 \/ 10.191392 (3.290421) | 0.161901 \/ 0.680424 (-0.518523) | 0.016850 \/ 0.534201 (-0.517351) | 0.367657 \/ 0.579283 (-0.211626) | 0.393343 \/ 0.434364 (-0.041021) | 0.465288 \/ 0.540337 (-0.075050) | 0.559888 \/ 1.386936 (-0.827048) |\n\n<\/details>\nPyArrow==latest\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.005956 \/ 0.011353 (-0.005397) | 0.003734 \/ 0.011008 (-0.007274) | 0.077841 \/ 0.038508 (0.039333) | 0.036532 \/ 0.023109 (0.013422) | 0.438923 \/ 0.275898 (0.163025) | 0.490133 \/ 0.323480 (0.166653) | 0.004651 \/ 0.007986 (-0.003335) | 0.002881 \/ 0.004328 (-0.001448) | 0.077868 \/ 0.004250 (0.073618) | 0.051700 \/ 0.037052 (0.014647) | 0.448018 \/ 0.258489 (0.189529) | 0.500304 \/ 0.293841 (0.206464) | 0.029051 \/ 0.128546 (-0.099496) | 0.008498 \/ 0.075646 (-0.067148) | 0.082932 \/ 0.419271 (-0.336339) | 0.043665 \/ 0.043533 (0.000132) | 0.431613 \/ 0.255139 (0.176474) | 0.458749 \/ 0.283200 (0.175549) | 0.021951 \/ 0.141683 (-0.119731) | 1.556043 \/ 1.452155 (0.103888) | 1.588391 \/ 1.492716 (0.095675) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.220674 \/ 0.018006 (0.202667) | 0.415408 \/ 0.000490 (0.414918) | 0.002613 \/ 0.000200 (0.002413) | 0.000075 \/ 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025548 \/ 0.037411 (-0.011863) | 0.103633 \/ 0.014526 (0.089107) | 0.115193 \/ 0.176557 (-0.061364) | 0.163971 \/ 0.737135 (-0.573164) | 0.114754 \/ 0.296338 (-0.181585) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.456823 \/ 0.215209 (0.241614) | 4.569950 \/ 2.077655 (2.492296) | 2.196339 \/ 1.504120 (0.692219) | 1.985822 \/ 1.541195 (0.444628) | 2.044083 \/ 1.468490 (0.575593) | 0.567919 \/ 4.584777 (-4.016858) | 
3.397515 \/ 3.745712 (-0.348197) | 1.741087 \/ 5.269862 (-3.528775) | 1.041237 \/ 4.565676 (-3.524440) | 0.068963 \/ 0.424275 (-0.355313) | 0.011677 \/ 0.007607 (0.004070) | 0.565010 \/ 0.226044 (0.338966) | 5.625886 \/ 2.268929 (3.356957) | 2.670658 \/ 55.444624 (-52.773967) | 2.300279 \/ 6.876477 (-4.576198) | 2.392178 \/ 2.142072 (0.250106) | 0.680226 \/ 4.805227 (-4.125001) | 0.139119 \/ 6.500664 (-6.361545) | 0.067953 \/ 0.075469 (-0.007516) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.303280 \/ 1.841788 (-0.538507) | 14.458686 \/ 8.074308 (6.384378) | 14.409369 \/ 10.191392 (4.217977) | 0.144581 \/ 0.680424 (-0.535843) | 0.016634 \/ 0.534201 (-0.517567) | 0.364607 \/ 0.579283 (-0.214676) | 0.394521 \/ 0.434364 (-0.039843) | 0.433417 \/ 0.540337 (-0.106921) | 0.527127 \/ 1.386936 (-0.859809) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#04a36f9546484dceadb84a133c1a460281d018f8 \"CML watermark\")\n","_The documentation is not available anymore as the PR was closed or merged._","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006245 \/ 0.011353 (-0.005108) | 0.003871 \/ 0.011008 (-0.007138) | 0.098823 \/ 0.038508 (0.060315) | 0.039853 \/ 0.023109 (0.016744) | 0.314989 \/ 0.275898 (0.039091) | 0.376733 \/ 0.323480 (0.053254) | 0.004754 \/ 0.007986 (-0.003232) | 0.002971 \/ 0.004328 (-0.001357) | 0.078451 \/ 0.004250 (0.074201) | 0.053160 \/ 0.037052 (0.016107) | 0.324443 \/ 0.258489 (0.065954) | 0.361488 \/ 0.293841 (0.067647) | 0.027942 \/ 0.128546 (-0.100604) | 0.008535 \/ 0.075646 (-0.067111) | 0.315526 \/ 0.419271 (-0.103745) | 0.045706 \/ 0.043533 (0.002174) | 0.329614 \/ 0.255139 (0.074475) | 0.336339 \/ 0.283200 (0.053139) | 0.021278 \/ 0.141683 (-0.120405) | 1.529710 \/ 1.452155 (0.077555) | 1.566833 \/ 1.492716 (0.074116) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.215263 \/ 0.018006 (0.197257) | 0.440320 \/ 0.000490 (0.439830) | 0.002627 \/ 0.000200 (0.002427) | 0.000075 \/ 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.023971 \/ 0.037411 (-0.013441) | 0.100549 \/ 0.014526 (0.086023) | 0.106995 \/ 0.176557 (-0.069561) | 0.169630 \/ 0.737135 (-0.567505) | 0.111614 \/ 0.296338 (-0.184724) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.424911 \/ 0.215209 (0.209702) | 4.246920 \/ 2.077655 (2.169266) | 1.923321 \/ 1.504120 (0.419202) | 1.714795 \/ 1.541195 (0.173600) | 1.772906 \/ 1.468490 (0.304416) | 0.554676 \/ 4.584777 (-4.030101) | 
3.478896 \/ 3.745712 (-0.266816) | 2.800494 \/ 5.269862 (-2.469368) | 1.382630 \/ 4.565676 (-3.183047) | 0.067271 \/ 0.424275 (-0.357004) | 0.010967 \/ 0.007607 (0.003360) | 0.526769 \/ 0.226044 (0.300725) | 5.288564 \/ 2.268929 (3.019636) | 2.337459 \/ 55.444624 (-53.107165) | 1.999975 \/ 6.876477 (-4.876502) | 2.102680 \/ 2.142072 (-0.039392) | 0.672181 \/ 4.805227 (-4.133046) | 0.135097 \/ 6.500664 (-6.365567) | 0.066950 \/ 0.075469 (-0.008519) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.264365 \/ 1.841788 (-0.577423) | 14.282440 \/ 8.074308 (6.208132) | 14.220200 \/ 10.191392 (4.028808) | 0.139055 \/ 0.680424 (-0.541369) | 0.016681 \/ 0.534201 (-0.517520) | 0.367936 \/ 0.579283 (-0.211348) | 0.393959 \/ 0.434364 (-0.040404) | 0.424438 \/ 0.540337 (-0.115900) | 0.508065 \/ 1.386936 (-0.878872) |\n\n<\/details>\nPyArrow==latest\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006514 \/ 0.011353 (-0.004839) | 0.003890 \/ 0.011008 (-0.007118) | 0.078871 \/ 0.038508 (0.040363) | 0.038080 \/ 0.023109 (0.014971) | 0.358282 \/ 0.275898 (0.082384) | 0.430654 \/ 0.323480 (0.107174) | 0.005712 \/ 0.007986 (-0.002273) | 0.003030 \/ 0.004328 (-0.001299) | 0.078636 \/ 0.004250 (0.074386) | 0.057771 \/ 0.037052 (0.020719) | 0.368814 \/ 0.258489 (0.110325) | 0.437047 \/ 0.293841 (0.143206) | 0.029470 \/ 0.128546 (-0.099076) | 0.008523 \/ 0.075646 (-0.067124) | 0.083334 \/ 0.419271 (-0.335938) | 0.044505 \/ 0.043533 (0.000972) | 0.357484 \/ 0.255139 (0.102345) | 0.393839 \/ 0.283200 (0.110639) | 0.023340 \/ 0.141683 (-0.118343) | 1.561033 \/ 1.452155 (0.108878) | 1.595560 \/ 1.492716 (0.102844) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.204149 \/ 0.018006 (0.186143) | 0.442747 \/ 0.000490 (0.442257) | 0.003105 \/ 0.000200 (0.002905) | 0.000085 \/ 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.027002 \/ 0.037411 (-0.010409) | 0.105595 \/ 0.014526 (0.091070) | 0.108695 \/ 0.176557 (-0.067861) | 0.163182 \/ 0.737135 (-0.573953) | 0.114999 \/ 0.296338 (-0.181339) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.483713 \/ 0.215209 (0.268504) | 4.836063 \/ 2.077655 (2.758409) | 2.488072 \/ 1.504120 (0.983952) | 2.289556 \/ 1.541195 (0.748361) | 2.342912 \/ 1.468490 (0.874422) | 0.565937 \/ 4.584777 (-4.018840) | 
3.479085 \/ 3.745712 (-0.266627) | 1.770922 \/ 5.269862 (-3.498940) | 1.046084 \/ 4.565676 (-3.519592) | 0.067857 \/ 0.424275 (-0.356418) | 0.011283 \/ 0.007607 (0.003676) | 0.592966 \/ 0.226044 (0.366921) | 5.932842 \/ 2.268929 (3.663914) | 2.956252 \/ 55.444624 (-52.488372) | 2.602704 \/ 6.876477 (-4.273772) | 2.715625 \/ 2.142072 (0.573552) | 0.674299 \/ 4.805227 (-4.130929) | 0.136039 \/ 6.500664 (-6.364625) | 0.067629 \/ 0.075469 (-0.007840) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.333734 \/ 1.841788 (-0.508054) | 14.561943 \/ 8.074308 (6.487634) | 14.455385 \/ 10.191392 (4.263993) | 0.132020 \/ 0.680424 (-0.548404) | 0.016893 \/ 0.534201 (-0.517308) | 0.367146 \/ 0.579283 (-0.212137) | 0.399623 \/ 0.434364 (-0.034741) | 0.432658 \/ 0.540337 (-0.107680) | 0.530475 \/ 1.386936 (-0.856461) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#18da5adb22b2b403b8d8ae673192746d2ed7e9f9 \"CML watermark\")\n","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006045 \/ 0.011353 (-0.005308) | 0.003906 \/ 0.011008 (-0.007103) | 0.097558 \/ 0.038508 (0.059050) | 0.038827 \/ 0.023109 (0.015718) | 0.393564 \/ 0.275898 (0.117666) | 0.442459 \/ 0.323480 (0.118980) | 0.004792 \/ 0.007986 (-0.003194) | 0.002984 \/ 0.004328 (-0.001345) | 0.076419 \/ 0.004250 (0.072169) | 0.053606 \/ 0.037052 (0.016554) | 0.409743 \/ 0.258489 (0.151254) | 0.445753 \/ 0.293841 (0.151912) | 0.027753 \/ 0.128546 (-0.100793) | 0.008428 \/ 0.075646 (-0.067219) | 0.310267 \/ 0.419271 (-0.109004) | 0.057582 \/ 0.043533 (0.014049) | 0.396624 \/ 0.255139 (0.141485) | 0.416288 \/ 0.283200 (0.133089) | 0.029048 \/ 0.141683 (-0.112635) | 1.495362 \/ 1.452155 (0.043207) | 1.546331 \/ 1.492716 (0.053615) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.203832 \/ 0.018006 (0.185826) | 0.423649 \/ 0.000490 (0.423160) | 0.004533 \/ 0.000200 (0.004333) | 0.000076 \/ 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.023084 \/ 0.037411 (-0.014328) | 0.100503 \/ 0.014526 (0.085977) | 0.105058 \/ 0.176557 (-0.071499) | 0.168506 \/ 0.737135 (-0.568629) | 0.112019 \/ 0.296338 (-0.184320) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.425877 \/ 0.215209 (0.210668) | 4.251278 \/ 2.077655 (2.173624) | 1.931339 \/ 1.504120 (0.427219) | 1.730578 \/ 1.541195 (0.189383) | 1.750637 \/ 1.468490 (0.282147) | 0.559307 \/ 4.584777 (-4.025470) | 
3.461665 \/ 3.745712 (-0.284047) | 2.826959 \/ 5.269862 (-2.442903) | 1.418448 \/ 4.565676 (-3.147229) | 0.067881 \/ 0.424275 (-0.356394) | 0.011394 \/ 0.007607 (0.003787) | 0.533226 \/ 0.226044 (0.307181) | 5.341849 \/ 2.268929 (3.072921) | 2.367832 \/ 55.444624 (-53.076792) | 2.027240 \/ 6.876477 (-4.849236) | 2.095852 \/ 2.142072 (-0.046220) | 0.673790 \/ 4.805227 (-4.131437) | 0.136044 \/ 6.500664 (-6.364620) | 0.066350 \/ 0.075469 (-0.009119) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.203740 \/ 1.841788 (-0.638048) | 13.720879 \/ 8.074308 (5.646571) | 13.405939 \/ 10.191392 (3.214547) | 0.146792 \/ 0.680424 (-0.533632) | 0.016844 \/ 0.534201 (-0.517357) | 0.373455 \/ 0.579283 (-0.205828) | 0.394596 \/ 0.434364 (-0.039768) | 0.464715 \/ 0.540337 (-0.075623) | 0.558931 \/ 1.386936 (-0.828005) |\n\n<\/details>\nPyArrow==latest\n\n
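<details>\n<summary>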
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006118 \/ 0.011353 (-0.005235) | 0.003817 \/ 0.011008 (-0.007191) | 0.077494 \/ 0.038508 (0.038985) | 0.037507 \/ 0.023109 (0.014398) | 0.387030 \/ 0.275898 (0.111132) | 0.437352 \/ 0.323480 (0.113872) | 0.004810 \/ 0.007986 (-0.003176) | 0.002935 \/ 0.004328 (-0.001394) | 0.077143 \/ 0.004250 (0.072892) | 0.053986 \/ 0.037052 (0.016933) | 0.393164 \/ 0.258489 (0.134675) | 0.449603 \/ 0.293841 (0.155762) | 0.029303 \/ 0.128546 (-0.099244) | 0.008481 \/ 0.075646 (-0.067165) | 0.083363 \/ 0.419271 (-0.335908) | 0.043877 \/ 0.043533 (0.000344) | 0.378175 \/ 0.255139 (0.123036) | 0.403996 \/ 0.283200 (0.120797) | 0.021688 \/ 0.141683 (-0.119995) | 1.541606 \/ 1.452155 (0.089452) | 1.552996 \/ 1.492716 (0.060280) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.236759 \/ 0.018006 (0.218752) | 0.416221 \/ 0.000490 (0.415732) | 0.000862 \/ 0.000200 (0.000662) | 0.000070 \/ 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025543 \/ 0.037411 (-0.011868) | 0.101731 \/ 0.014526 (0.087206) | 0.108482 \/ 0.176557 (-0.068075) | 0.160290 \/ 0.737135 (-0.576845) | 0.111392 \/ 0.296338 (-0.184946) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.457767 \/ 0.215209 (0.242558) | 4.565976 \/ 2.077655 (2.488321) | 2.245413 \/ 1.504120 (0.741294) | 2.031458 \/ 1.541195 (0.490264) | 2.073193 \/ 1.468490 (0.604702) | 0.560461 \/ 4.584777 (-4.024316) | 
3.422536 \/ 3.745712 (-0.323176) | 2.977017 \/ 5.269862 (-2.292845) | 1.377021 \/ 4.565676 (-3.188655) | 0.068444 \/ 0.424275 (-0.355831) | 0.011036 \/ 0.007607 (0.003429) | 0.571501 \/ 0.226044 (0.345456) | 5.702652 \/ 2.268929 (3.433723) | 2.727132 \/ 55.444624 (-52.717492) | 2.399269 \/ 6.876477 (-4.477208) | 2.574281 \/ 2.142072 (0.432208) | 0.682600 \/ 4.805227 (-4.122627) | 0.136943 \/ 6.500664 (-6.363722) | 0.067126 \/ 0.075469 (-0.008343) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.322196 \/ 1.841788 (-0.519592) | 14.239509 \/ 8.074308 (6.165201) | 14.235779 \/ 10.191392 (4.044387) | 0.148262 \/ 0.680424 (-0.532162) | 0.016566 \/ 0.534201 (-0.517635) | 0.364034 \/ 0.579283 (-0.215249) | 0.399157 \/ 0.434364 (-0.035207) | 0.426348 \/ 0.540337 (-0.113990) | 0.520804 \/ 1.386936 (-0.866132) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#8f57aae06bd325d76cb70cb774450f3a66f169cf \"CML watermark\")\n","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007808 \/ 0.011353 (-0.003545) | 0.004706 \/ 0.011008 (-0.006303) | 0.100530 \/ 0.038508 (0.062022) | 0.052052 \/ 0.023109 (0.028943) | 0.419300 \/ 0.275898 (0.143402) | 0.488451 \/ 0.323480 (0.164971) | 0.006350 \/ 0.007986 (-0.001636) | 0.003875 \/ 0.004328 (-0.000453) | 0.076489 \/ 0.004250 (0.072238) | 0.077554 \/ 0.037052 (0.040502) | 0.435863 \/ 0.258489 (0.177373) | 0.483241 \/ 0.293841 (0.189400) | 0.037518 \/ 0.128546 (-0.091028) | 0.009857 \/ 0.075646 (-0.065789) | 0.340933 \/ 0.419271 (-0.078339) | 0.087046 \/ 0.043533 (0.043514) | 0.410721 \/ 0.255139 (0.155582) | 0.428995 \/ 0.283200 (0.145795) | 0.041701 \/ 0.141683 (-0.099982) | 1.821017 \/ 1.452155 (0.368862) | 1.837021 \/ 1.492716 (0.344305) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.228444 \/ 0.018006 (0.210438) | 0.480446 \/ 0.000490 (0.479956) | 0.004963 \/ 0.000200 (0.004763) | 0.000101 \/ 0.000054 (0.000046) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.032485 \/ 0.037411 (-0.004926) | 0.096500 \/ 0.014526 (0.081974) | 0.111547 \/ 0.176557 (-0.065010) | 0.178842 \/ 0.737135 (-0.558294) | 0.111099 \/ 0.296338 (-0.185240) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.467159 \/ 0.215209 (0.251950) | 4.701676 \/ 2.077655 (2.624021) | 2.390560 \/ 1.504120 (0.886440) | 2.197722 \/ 1.541195 (0.656528) | 2.264705 \/ 1.468490 (0.796215) | 0.568667 \/ 4.584777 (-4.016110) | 
4.200724 \/ 3.745712 (0.455012) | 3.777625 \/ 5.269862 (-1.492236) | 2.372451 \/ 4.565676 (-2.193225) | 0.067562 \/ 0.424275 (-0.356714) | 0.008947 \/ 0.007607 (0.001340) | 0.556910 \/ 0.226044 (0.330865) | 5.528927 \/ 2.268929 (3.259998) | 2.902780 \/ 55.444624 (-52.541844) | 2.507933 \/ 6.876477 (-4.368544) | 2.734627 \/ 2.142072 (0.592554) | 0.683305 \/ 4.805227 (-4.121922) | 0.158288 \/ 6.500664 (-6.342376) | 0.071252 \/ 0.075469 (-0.004217) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.487502 \/ 1.841788 (-0.354286) | 22.193341 \/ 8.074308 (14.119033) | 15.922607 \/ 10.191392 (5.731215) | 0.172189 \/ 0.680424 (-0.508235) | 0.021502 \/ 0.534201 (-0.512699) | 0.471198 \/ 0.579283 (-0.108085) | 0.475979 \/ 0.434364 (0.041615) | 0.544675 \/ 0.540337 (0.004338) | 0.756102 \/ 1.386936 (-0.630834) |\n\n<\/details>\nPyArrow==latest\n\n
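<details>\n<summary>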
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007635 \/ 0.011353 (-0.003717) | 0.004614 \/ 0.011008 (-0.006394) | 0.075852 \/ 0.038508 (0.037344) | 0.049700 \/ 0.023109 (0.026591) | 0.425957 \/ 0.275898 (0.150059) | 0.512590 \/ 0.323480 (0.189110) | 0.006921 \/ 0.007986 (-0.001065) | 0.003714 \/ 0.004328 (-0.000615) | 0.075536 \/ 0.004250 (0.071286) | 0.070206 \/ 0.037052 (0.033153) | 0.455706 \/ 0.258489 (0.197217) | 0.512231 \/ 0.293841 (0.218390) | 0.036685 \/ 0.128546 (-0.091861) | 0.009793 \/ 0.075646 (-0.065853) | 0.084208 \/ 0.419271 (-0.335064) | 0.065262 \/ 0.043533 (0.021729) | 0.423761 \/ 0.255139 (0.168622) | 0.456791 \/ 0.283200 (0.173591) | 0.044539 \/ 0.141683 (-0.097144) | 1.797029 \/ 1.452155 (0.344874) | 1.864124 \/ 1.492716 (0.371408) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.366840 \/ 0.018006 (0.348834) | 0.479254 \/ 0.000490 (0.478765) | 0.070383 \/ 0.000200 (0.070183) | 0.000762 \/ 0.000054 (0.000707) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.034233 \/ 0.037411 (-0.003178) | 0.103140 \/ 0.014526 (0.088614) | 0.117099 \/ 0.176557 (-0.059457) | 0.178532 \/ 0.737135 (-0.558603) | 0.120092 \/ 0.296338 (-0.176247) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.492993 \/ 0.215209 (0.277784) | 4.878776 \/ 2.077655 (2.801121) | 2.566666 \/ 1.504120 (1.062547) | 2.356383 \/ 1.541195 (0.815188) | 2.454723 \/ 1.468490 (0.986233) | 0.571432 \/ 4.584777 (-4.013345) | 
4.240554 \/ 3.745712 (0.494842) | 7.509259 \/ 5.269862 (2.239398) | 4.040294 \/ 4.565676 (-0.525382) | 0.067409 \/ 0.424275 (-0.356866) | 0.008657 \/ 0.007607 (0.001050) | 0.585751 \/ 0.226044 (0.359707) | 5.967668 \/ 2.268929 (3.698739) | 3.195573 \/ 55.444624 (-52.249052) | 2.839772 \/ 6.876477 (-4.036704) | 2.806319 \/ 2.142072 (0.664246) | 0.681502 \/ 4.805227 (-4.123725) | 0.158673 \/ 6.500664 (-6.341991) | 0.073224 \/ 0.075469 (-0.002245) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.623335 \/ 1.841788 (-0.218453) | 22.490806 \/ 8.074308 (14.416498) | 16.762435 \/ 10.191392 (6.571043) | 0.180961 \/ 0.680424 (-0.499463) | 0.022716 \/ 0.534201 (-0.511485) | 0.472910 \/ 0.579283 (-0.106373) | 0.471616 \/ 0.434364 (0.037252) | 0.548192 \/ 0.540337 (0.007854) | 0.734357 \/ 1.386936 (-0.652579) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#c0498b47a00153d4730352b6595fc51ab054fb95 \"CML watermark\")\n","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.005858 \/ 0.011353 (-0.005495) | 0.003512 \/ 0.011008 (-0.007497) | 0.079739 \/ 0.038508 (0.041231) | 0.057736 \/ 0.023109 (0.034627) | 0.317640 \/ 0.275898 (0.041742) | 0.354157 \/ 0.323480 (0.030677) | 0.004772 \/ 0.007986 (-0.003214) | 0.002824 \/ 0.004328 (-0.001504) | 0.063288 \/ 0.004250 (0.059037) | 0.049542 \/ 0.037052 (0.012489) | 0.323974 \/ 0.258489 (0.065485) | 0.372149 \/ 0.293841 (0.078308) | 0.026841 \/ 0.128546 (-0.101705) | 0.007846 \/ 0.075646 (-0.067800) | 0.262546 \/ 0.419271 (-0.156725) | 0.051952 \/ 0.043533 (0.008420) | 0.319439 \/ 0.255139 (0.064300) | 0.343862 \/ 0.283200 (0.060663) | 0.027021 \/ 0.141683 (-0.114662) | 1.445211 \/ 1.452155 (-0.006944) | 1.485006 \/ 1.492716 (-0.007711) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.183174 \/ 0.018006 (0.165167) | 0.422794 \/ 0.000490 (0.422304) | 0.004148 \/ 0.000200 (0.003948) | 0.000067 \/ 0.000054 (0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.023037 \/ 0.037411 (-0.014374) | 0.071300 \/ 0.014526 (0.056775) | 0.083022 \/ 0.176557 (-0.093535) | 0.146215 \/ 0.737135 (-0.590920) | 0.082549 \/ 0.296338 (-0.213789) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.422846 \/ 0.215209 (0.207637) | 4.215280 \/ 2.077655 (2.137626) | 2.256802 \/ 1.504120 (0.752682) | 2.056867 \/ 1.541195 (0.515673) | 2.102478 \/ 1.468490 (0.633988) | 0.497552 \/ 4.584777 (-4.087225) | 
3.049716 \/ 3.745712 (-0.695996) | 4.209227 \/ 5.269862 (-1.060635) | 2.599947 \/ 4.565676 (-1.965730) | 0.059131 \/ 0.424275 (-0.365144) | 0.006459 \/ 0.007607 (-0.001148) | 0.495047 \/ 0.226044 (0.269003) | 4.952332 \/ 2.268929 (2.683404) | 2.675260 \/ 55.444624 (-52.769365) | 2.333223 \/ 6.876477 (-4.543254) | 2.449573 \/ 2.142072 (0.307500) | 0.583420 \/ 4.805227 (-4.221807) | 0.125140 \/ 6.500664 (-6.375524) | 0.060209 \/ 0.075469 (-0.015260) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.215033 \/ 1.841788 (-0.626755) | 18.101107 \/ 8.074308 (10.026799) | 13.489222 \/ 10.191392 (3.297830) | 0.147122 \/ 0.680424 (-0.533302) | 0.016567 \/ 0.534201 (-0.517634) | 0.329909 \/ 0.579283 (-0.249374) | 0.340952 \/ 0.434364 (-0.093412) | 0.379166 \/ 0.540337 (-0.161172) | 0.510767 \/ 1.386936 (-0.876169) |\n\n<\/details>\nPyArrow==latest\n\n
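<details>\n<summary>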
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.005942 \/ 0.011353 (-0.005411) | 0.003628 \/ 0.011008 (-0.007380) | 0.061975 \/ 0.038508 (0.023467) | 0.058331 \/ 0.023109 (0.035221) | 0.393277 \/ 0.275898 (0.117379) | 0.410740 \/ 0.323480 (0.087261) | 0.004546 \/ 0.007986 (-0.003440) | 0.002826 \/ 0.004328 (-0.001503) | 0.062216 \/ 0.004250 (0.057966) | 0.049801 \/ 0.037052 (0.012748) | 0.394070 \/ 0.258489 (0.135581) | 0.414407 \/ 0.293841 (0.120566) | 0.027161 \/ 0.128546 (-0.101385) | 0.007901 \/ 0.075646 (-0.067746) | 0.066778 \/ 0.419271 (-0.352493) | 0.041354 \/ 0.043533 (-0.002179) | 0.379432 \/ 0.255139 (0.124293) | 0.402966 \/ 0.283200 (0.119766) | 0.020279 \/ 0.141683 (-0.121404) | 1.416986 \/ 1.452155 (-0.035169) | 1.474335 \/ 1.492716 (-0.018382) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.226147 \/ 0.018006 (0.208140) | 0.404361 \/ 0.000490 (0.403871) | 0.000358 \/ 0.000200 (0.000158) | 0.000054 \/ 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025105 \/ 0.037411 (-0.012306) | 0.075849 \/ 0.014526 (0.061323) | 0.084781 \/ 0.176557 (-0.091775) | 0.137415 \/ 0.737135 (-0.599720) | 0.086288 \/ 0.296338 (-0.210051) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.445925 \/ 0.215209 (0.230716) | 4.453478 \/ 2.077655 (2.375823) | 2.419048 \/ 1.504120 (0.914928) | 2.246363 \/ 1.541195 (0.705168) | 2.304022 \/ 1.468490 (0.835532) | 0.499132 \/ 4.584777 (-4.085645) 
| 3.001336 \/ 3.745712 (-0.744376) | 2.902593 \/ 5.269862 (-2.367269) | 1.819843 \/ 4.565676 (-2.745834) | 0.057210 \/ 0.424275 (-0.367065) | 0.006338 \/ 0.007607 (-0.001269) | 0.523280 \/ 0.226044 (0.297236) | 5.235969 \/ 2.268929 (2.967040) | 2.897585 \/ 55.444624 (-52.547039) | 2.541586 \/ 6.876477 (-4.334891) | 2.564233 \/ 2.142072 (0.422160) | 0.584714 \/ 4.805227 (-4.220513) | 0.124611 \/ 6.500664 (-6.376053) | 0.061774 \/ 0.075469 (-0.013695) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.349799 \/ 1.841788 (-0.491988) | 18.225076 \/ 8.074308 (10.150768) | 13.781518 \/ 10.191392 (3.590126) | 0.130562 \/ 0.680424 (-0.549862) | 0.016434 \/ 0.534201 (-0.517767) | 0.331607 \/ 0.579283 (-0.247676) | 0.343456 \/ 0.434364 (-0.090908) | 0.380437 \/ 0.540337 (-0.159900) | 0.522793 \/ 1.386936 (-0.864143) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#f0a3dbbd2e7ace162346d95ec27db674e80c1e23 \"CML watermark\")\n","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.013721 \/ 0.011353 (0.002368) | 0.005715 \/ 0.011008 (-0.005293) | 0.090116 \/ 0.038508 (0.051608) | 0.087185 \/ 0.023109 (0.064075) | 0.427813 \/ 0.275898 (0.151915) | 0.390614 \/ 0.323480 (0.067135) | 0.006976 \/ 0.007986 (-0.001009) | 0.004231 \/ 0.004328 (-0.000098) | 0.078320 \/ 0.004250 (0.074070) | 0.066235 \/ 0.037052 (0.029183) | 0.439904 \/ 0.258489 (0.181415) | 0.424119 \/ 0.293841 (0.130278) | 0.050362 \/ 0.128546 (-0.078184) | 0.014992 \/ 0.075646 (-0.060654) | 0.293519 \/ 0.419271 (-0.125753) | 0.066906 \/ 0.043533 (0.023373) | 0.449657 \/ 0.255139 (0.194518) | 0.393800 \/ 0.283200 (0.110600) | 0.032258 \/ 0.141683 (-0.109425) | 1.539534 \/ 1.452155 (0.087379) | 1.675292 \/ 1.492716 (0.182576) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.210515 \/ 0.018006 (0.192508) | 0.506817 \/ 0.000490 (0.506327) | 0.001938 \/ 0.000200 (0.001738) | 0.000118 \/ 0.000054 (0.000064) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.026019 \/ 0.037411 (-0.011393) | 0.080635 \/ 0.014526 (0.066109) | 0.103050 \/ 0.176557 (-0.073507) | 0.160597 \/ 0.737135 (-0.576538) | 0.095844 \/ 0.296338 (-0.200495) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.506359 \/ 0.215209 (0.291150) | 5.041586 \/ 2.077655 (2.963931) | 2.198288 \/ 1.504120 (0.694168) | 1.987544 \/ 1.541195 (0.446349) | 1.866790 \/ 1.468490 (0.398300) | 0.681642 \/ 4.584777 (-3.903135) | 
4.719306 \/ 3.745712 (0.973593) | 7.669869 \/ 5.269862 (2.400008) | 4.466082 \/ 4.565676 (-0.099595) | 0.092974 \/ 0.424275 (-0.331301) | 0.008196 \/ 0.007607 (0.000589) | 0.707656 \/ 0.226044 (0.481612) | 6.974507 \/ 2.268929 (4.705579) | 3.254206 \/ 55.444624 (-52.190418) | 2.499019 \/ 6.876477 (-4.377457) | 2.509089 \/ 2.142072 (0.367017) | 0.915952 \/ 4.805227 (-3.889276) | 0.192119 \/ 6.500664 (-6.308545) | 0.065473 \/ 0.075469 (-0.009996) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.309078 \/ 1.841788 (-0.532710) | 19.660348 \/ 8.074308 (11.586040) | 16.659582 \/ 10.191392 (6.468190) | 0.194315 \/ 0.680424 (-0.486109) | 0.027773 \/ 0.534201 (-0.506428) | 0.401241 \/ 0.579283 (-0.178042) | 0.515799 \/ 0.434364 (0.081435) | 0.488772 \/ 0.540337 (-0.051566) | 0.604790 \/ 1.386936 (-0.782146) |\n\n<\/details>\nPyArrow==latest\n\n
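<details>\n<summary>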
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006823 \/ 0.011353 (-0.004530) | 0.003940 \/ 0.011008 (-0.007068) | 0.061533 \/ 0.038508 (0.023025) | 0.065241 \/ 0.023109 (0.042132) | 0.411790 \/ 0.275898 (0.135892) | 0.475720 \/ 0.323480 (0.152241) | 0.005376 \/ 0.007986 (-0.002609) | 0.003433 \/ 0.004328 (-0.000895) | 0.065703 \/ 0.004250 (0.061452) | 0.050736 \/ 0.037052 (0.013683) | 0.435890 \/ 0.258489 (0.177401) | 0.436698 \/ 0.293841 (0.142857) | 0.040357 \/ 0.128546 (-0.088189) | 0.011578 \/ 0.075646 (-0.064069) | 0.072831 \/ 0.419271 (-0.346440) | 0.055698 \/ 0.043533 (0.012165) | 0.408225 \/ 0.255139 (0.153086) | 0.439551 \/ 0.283200 (0.156352) | 0.030469 \/ 0.141683 (-0.111214) | 1.443866 \/ 1.452155 (-0.008289) | 1.502022 \/ 1.492716 (0.009306) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.290338 \/ 0.018006 (0.272332) | 0.540726 \/ 0.000490 (0.540236) | 0.003244 \/ 0.000200 (0.003044) | 0.000170 \/ 0.000054 (0.000116) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030865 \/ 0.037411 (-0.006547) | 0.090866 \/ 0.014526 (0.076340) | 0.106224 \/ 0.176557 (-0.070332) | 0.166583 \/ 0.737135 (-0.570553) | 0.104448 \/ 0.296338 (-0.191891) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.518025 \/ 0.215209 (0.302816) | 6.027065 \/ 2.077655 (3.949410) | 2.671840 \/ 1.504120 (1.167720) | 2.273949 \/ 1.541195 (0.732754) | 2.414892 \/ 1.468490 (0.946402) | 0.774318 \/ 4.584777 (-3.810459) | 
5.020364 \/ 3.745712 (1.274652) | 4.146927 \/ 5.269862 (-1.122934) | 2.584598 \/ 4.565676 (-1.981078) | 0.089519 \/ 0.424275 (-0.334756) | 0.009181 \/ 0.007607 (0.001574) | 0.654467 \/ 0.226044 (0.428423) | 6.421595 \/ 2.268929 (4.152666) | 3.091589 \/ 55.444624 (-52.353036) | 2.554798 \/ 6.876477 (-4.321679) | 2.441354 \/ 2.142072 (0.299282) | 0.943386 \/ 4.805227 (-3.861841) | 0.173641 \/ 6.500664 (-6.327023) | 0.072209 \/ 0.075469 (-0.003260) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.557147 \/ 1.841788 (-0.284641) | 19.980747 \/ 8.074308 (11.906439) | 17.816813 \/ 10.191392 (7.625421) | 0.212078 \/ 0.680424 (-0.468346) | 0.025435 \/ 0.534201 (-0.508766) | 0.396200 \/ 0.579283 (-0.183084) | 0.546249 \/ 0.434364 (0.111885) | 0.459632 \/ 0.540337 (-0.080705) | 0.616548 \/ 1.386936 (-0.770388) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#535e972a70a3d4f8490a7e1a77ac43d5a4ab2655 \"CML watermark\")\n"],"created_at":1688482957000,"updated_at":1688657561000,"closed_at":1688656963000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/6005","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6005","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6005.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6005.patch","merged_at":1688656963000},"body":"`hfh` and `transformers` have dropped Python 3.7 support, so we should do the same :).\r\n\r\n(Based on the stats, it seems less than 10% of the users use `datasets` with Python 3.7)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6005\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6005\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6004","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6004\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6004\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6004\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6004","id":1786636368,"node_id":"PR_kwDODunzps5UjN2h","number":6004,"title":"Misc 
improvements","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006897 \/ 0.011353 (-0.004456) | 0.004207 \/ 0.011008 (-0.006802) | 0.104828 \/ 0.038508 (0.066320) | 0.048054 \/ 0.023109 (0.024945) | 0.373991 \/ 0.275898 (0.098093) | 0.426740 \/ 0.323480 (0.103260) | 0.005540 \/ 0.007986 (-0.002446) | 0.003531 \/ 0.004328 (-0.000797) | 0.079304 \/ 0.004250 (0.075053) | 0.066996 \/ 0.037052 (0.029944) | 0.370675 \/ 0.258489 (0.112186) | 0.414154 \/ 0.293841 (0.120313) | 0.031567 \/ 0.128546 (-0.096979) | 0.008843 \/ 0.075646 (-0.066803) | 0.357426 \/ 0.419271 (-0.061845) | 0.067040 \/ 0.043533 (0.023508) | 0.362384 \/ 0.255139 (0.107245) | 0.376056 \/ 0.283200 (0.092856) | 0.032985 \/ 0.141683 (-0.108697) | 1.560603 \/ 1.452155 (0.108448) | 1.619024 \/ 1.492716 (0.126308) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.229059 \/ 0.018006 (0.211053) | 0.440513 \/ 0.000490 (0.440023) | 0.004647 \/ 0.000200 (0.004447) | 0.000085 \/ 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029517 \/ 0.037411 (-0.007894) | 0.120974 \/ 0.014526 (0.106448) | 0.125070 \/ 0.176557 (-0.051486) | 0.184695 \/ 0.737135 (-0.552441) | 0.130244 \/ 0.296338 (-0.166095) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.436930 \/ 0.215209 (0.221721) | 4.356118 \/ 2.077655 (2.278463) | 2.049169 \/ 1.504120 (0.545049) | 1.842898 \/ 1.541195 (0.301703) | 1.918948 \/ 1.468490 (0.450458) | 0.553573 \/ 4.584777 (-4.031204) | 
3.883195 \/ 3.745712 (0.137483) | 3.209780 \/ 5.269862 (-2.060081) | 1.551707 \/ 4.565676 (-3.013970) | 0.068181 \/ 0.424275 (-0.356094) | 0.012370 \/ 0.007607 (0.004762) | 0.539899 \/ 0.226044 (0.313854) | 5.380008 \/ 2.268929 (3.111079) | 2.518178 \/ 55.444624 (-52.926446) | 2.174190 \/ 6.876477 (-4.702286) | 2.317812 \/ 2.142072 (0.175740) | 0.674154 \/ 4.805227 (-4.131073) | 0.149313 \/ 6.500664 (-6.351351) | 0.068297 \/ 0.075469 (-0.007172) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.261426 \/ 1.841788 (-0.580362) | 15.316378 \/ 8.074308 (7.242070) | 13.573512 \/ 10.191392 (3.382120) | 0.190022 \/ 0.680424 (-0.490401) | 0.018697 \/ 0.534201 (-0.515504) | 0.448122 \/ 0.579283 (-0.131161) | 0.435044 \/ 0.434364 (0.000681) | 0.550065 \/ 0.540337 (0.009728) | 0.653547 \/ 1.386936 (-0.733389) |\n\n<\/details>\nPyArrow==latest\n\n
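<details>\n<summary>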
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007116 \/ 0.011353 (-0.004237) | 0.004375 \/ 0.011008 (-0.006633) | 0.081793 \/ 0.038508 (0.043285) | 0.047980 \/ 0.023109 (0.024871) | 0.392185 \/ 0.275898 (0.116287) | 0.462263 \/ 0.323480 (0.138783) | 0.005574 \/ 0.007986 (-0.002412) | 0.003552 \/ 0.004328 (-0.000776) | 0.080413 \/ 0.004250 (0.076162) | 0.065539 \/ 0.037052 (0.028487) | 0.413137 \/ 0.258489 (0.154648) | 0.467377 \/ 0.293841 (0.173536) | 0.034386 \/ 0.128546 (-0.094160) | 0.009183 \/ 0.075646 (-0.066464) | 0.087542 \/ 0.419271 (-0.331730) | 0.053954 \/ 0.043533 (0.010421) | 0.385096 \/ 0.255139 (0.129957) | 0.404900 \/ 0.283200 (0.121701) | 0.025908 \/ 0.141683 (-0.115775) | 1.550159 \/ 1.452155 (0.098005) | 1.598794 \/ 1.492716 (0.106078) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.246222 \/ 0.018006 (0.228216) | 0.441095 \/ 0.000490 (0.440605) | 0.006863 \/ 0.000200 (0.006663) | 0.000109 \/ 0.000054 (0.000055) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.032179 \/ 0.037411 (-0.005233) | 0.120112 \/ 0.014526 (0.105586) | 0.129326 \/ 0.176557 (-0.047230) | 0.184542 \/ 0.737135 (-0.552593) | 0.135038 \/ 0.296338 (-0.161300) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.459002 \/ 0.215209 (0.243793) | 4.580258 \/ 2.077655 (2.502604) | 2.296689 \/ 1.504120 (0.792569) | 2.104338 \/ 1.541195 (0.563143) | 2.182896 \/ 1.468490 (0.714406) | 0.546447 \/ 4.584777 (-4.038330) | 
3.854047 \/ 3.745712 (0.108335) | 1.873829 \/ 5.269862 (-3.396032) | 1.116484 \/ 4.565676 (-3.449193) | 0.067158 \/ 0.424275 (-0.357117) | 0.012035 \/ 0.007607 (0.004428) | 0.556642 \/ 0.226044 (0.330597) | 5.574436 \/ 2.268929 (3.305508) | 2.828223 \/ 55.444624 (-52.616402) | 2.519851 \/ 6.876477 (-4.356626) | 2.668594 \/ 2.142072 (0.526521) | 0.675989 \/ 4.805227 (-4.129238) | 0.146075 \/ 6.500664 (-6.354589) | 0.067788 \/ 0.075469 (-0.007681) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.345958 \/ 1.841788 (-0.495830) | 15.672748 \/ 8.074308 (7.598440) | 14.937583 \/ 10.191392 (4.746191) | 0.163479 \/ 0.680424 (-0.516945) | 0.018364 \/ 0.534201 (-0.515837) | 0.433296 \/ 0.579283 (-0.145987) | 0.432463 \/ 0.434364 (-0.001901) | 0.512000 \/ 0.540337 (-0.028338) | 0.619397 \/ 1.386936 (-0.767539) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#0832d48a07ed00b406271f4b4439e6d54ae38ebf \"CML watermark\")\n","_The documentation is not available anymore as the PR was closed or merged._","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.010097 \/ 0.011353 (-0.001256) | 0.005070 \/ 0.011008 (-0.005939) | 0.118638 \/ 0.038508 (0.080130) | 0.043651 \/ 0.023109 (0.020542) | 0.356074 \/ 0.275898 (0.080176) | 0.414578 \/ 0.323480 (0.091098) | 0.005939 \/ 0.007986 (-0.002046) | 0.004927 \/ 0.004328 (0.000598) | 0.089545 \/ 0.004250 (0.085294) | 0.067533 \/ 0.037052 (0.030481) | 0.371550 \/ 0.258489 (0.113061) | 0.417808 \/ 0.293841 (0.123967) | 0.045186 \/ 0.128546 (-0.083361) | 0.015763 \/ 0.075646 (-0.059883) | 0.393304 \/ 0.419271 (-0.025967) | 0.065123 \/ 0.043533 (0.021591) | 0.345057 \/ 0.255139 (0.089918) | 0.378809 \/ 0.283200 (0.095610) | 0.033243 \/ 0.141683 (-0.108440) | 1.679956 \/ 1.452155 (0.227802) | 1.775456 \/ 1.492716 (0.282739) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.229723 \/ 0.018006 (0.211717) | 0.554630 \/ 0.000490 (0.554140) | 0.008729 \/ 0.000200 (0.008529) | 0.000183 \/ 0.000054 (0.000129) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.027284 \/ 0.037411 (-0.010128) | 0.114741 \/ 0.014526 (0.100215) | 0.129188 \/ 0.176557 (-0.047369) | 0.189270 \/ 0.737135 (-0.547866) | 0.126000 \/ 0.296338 (-0.170339) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.580417 \/ 0.215209 (0.365208) | 5.829337 \/ 2.077655 (3.751683) | 2.421191 \/ 1.504120 (0.917071) | 2.063673 \/ 1.541195 (0.522479) | 2.133427 \/ 1.468490 (0.664937) | 0.830964 \/ 4.584777 (-3.753813) | 
5.107139 \/ 3.745712 (1.361427) | 4.599451 \/ 5.269862 (-0.670410) | 2.406502 \/ 4.565676 (-2.159175) | 0.100422 \/ 0.424275 (-0.323853) | 0.011850 \/ 0.007607 (0.004243) | 0.741881 \/ 0.226044 (0.515836) | 7.425689 \/ 2.268929 (5.156760) | 3.068948 \/ 55.444624 (-52.375676) | 2.496292 \/ 6.876477 (-4.380184) | 2.566420 \/ 2.142072 (0.424348) | 1.093084 \/ 4.805227 (-3.712144) | 0.224106 \/ 6.500664 (-6.276558) | 0.084549 \/ 0.075469 (0.009080) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.416315 \/ 1.841788 (-0.425473) | 16.306901 \/ 8.074308 (8.232593) | 19.792419 \/ 10.191392 (9.601027) | 0.224223 \/ 0.680424 (-0.456201) | 0.026385 \/ 0.534201 (-0.507816) | 0.463460 \/ 0.579283 (-0.115823) | 0.598385 \/ 0.434364 (0.164021) | 0.543981 \/ 0.540337 (0.003644) | 0.647454 \/ 1.386936 (-0.739482) |\n\n<\/details>\nPyArrow==latest\n\n
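<details>\n<summary>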
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009470 \/ 0.011353 (-0.001883) | 0.004800 \/ 0.011008 (-0.006208) | 0.094276 \/ 0.038508 (0.055768) | 0.045157 \/ 0.023109 (0.022048) | 0.397302 \/ 0.275898 (0.121404) | 0.474213 \/ 0.323480 (0.150733) | 0.005826 \/ 0.007986 (-0.002160) | 0.003724 \/ 0.004328 (-0.000605) | 0.090060 \/ 0.004250 (0.085809) | 0.066671 \/ 0.037052 (0.029618) | 0.439560 \/ 0.258489 (0.181071) | 0.468598 \/ 0.293841 (0.174757) | 0.044549 \/ 0.128546 (-0.083997) | 0.014000 \/ 0.075646 (-0.061646) | 0.110457 \/ 0.419271 (-0.308815) | 0.065898 \/ 0.043533 (0.022365) | 0.408101 \/ 0.255139 (0.152962) | 0.433473 \/ 0.283200 (0.150273) | 0.038438 \/ 0.141683 (-0.103245) | 1.767781 \/ 1.452155 (0.315626) | 1.791575 \/ 1.492716 (0.298859) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.230257 \/ 0.018006 (0.212251) | 0.492280 \/ 0.000490 (0.491790) | 0.005110 \/ 0.000200 (0.004910) | 0.000119 \/ 0.000054 (0.000065) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.028854 \/ 0.037411 (-0.008557) | 0.111702 \/ 0.014526 (0.097176) | 0.122040 \/ 0.176557 (-0.054517) | 0.179103 \/ 0.737135 (-0.558032) | 0.128869 \/ 0.296338 (-0.167470) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.634795 \/ 0.215209 (0.419586) | 6.204760 \/ 2.077655 (4.127105) | 2.692479 \/ 1.504120 (1.188359) | 2.324260 \/ 1.541195 (0.783066) | 2.380640 \/ 1.468490 (0.912149) | 0.887827 \/ 4.584777 (-3.696950) | 
5.251648 \/ 3.745712 (1.505935) | 2.632767 \/ 5.269862 (-2.637095) | 1.745721 \/ 4.565676 (-2.819955) | 0.108364 \/ 0.424275 (-0.315911) | 0.013409 \/ 0.007607 (0.005802) | 0.783427 \/ 0.226044 (0.557383) | 7.765144 \/ 2.268929 (5.496216) | 3.340686 \/ 55.444624 (-52.103938) | 2.715340 \/ 6.876477 (-4.161137) | 2.768604 \/ 2.142072 (0.626531) | 1.119746 \/ 4.805227 (-3.685481) | 0.210804 \/ 6.500664 (-6.289860) | 0.072600 \/ 0.075469 (-0.002869) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.517334 \/ 1.841788 (-0.324454) | 17.046837 \/ 8.074308 (8.972529) | 19.371090 \/ 10.191392 (9.179698) | 0.194275 \/ 0.680424 (-0.486148) | 0.026712 \/ 0.534201 (-0.507488) | 0.462731 \/ 0.579283 (-0.116552) | 0.568958 \/ 0.434364 (0.134595) | 0.555707 \/ 0.540337 (0.015370) | 0.663654 \/ 1.386936 (-0.723283) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#5d20476b1d4c8e11e0ffafc1570cbf4bd19011cf \"CML watermark\")\n","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006423 \/ 0.011353 (-0.004930) | 0.003882 \/ 0.011008 (-0.007126) | 0.082976 \/ 0.038508 (0.044468) | 0.071281 \/ 0.023109 (0.048171) | 0.311367 \/ 0.275898 (0.035469) | 0.348228 \/ 0.323480 (0.024748) | 0.005315 \/ 0.007986 (-0.002671) | 0.003326 \/ 0.004328 (-0.001003) | 0.064641 \/ 0.004250 (0.060391) | 0.056134 \/ 0.037052 (0.019081) | 0.314071 \/ 0.258489 (0.055582) | 0.360534 \/ 0.293841 (0.066693) | 0.030642 \/ 0.128546 (-0.097904) | 0.008301 \/ 0.075646 (-0.067345) | 0.285820 \/ 0.419271 (-0.133451) | 0.069241 \/ 0.043533 (0.025708) | 0.313995 \/ 0.255139 (0.058856) | 0.336656 \/ 0.283200 (0.053457) | 0.031686 \/ 0.141683 (-0.109997) | 1.467627 \/ 1.452155 (0.015472) | 1.536493 \/ 1.492716 (0.043777) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.196518 \/ 0.018006 (0.178512) | 0.458235 \/ 0.000490 (0.457745) | 0.005599 \/ 0.000200 (0.005399) | 0.000088 \/ 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.027371 \/ 0.037411 (-0.010040) | 0.080986 \/ 0.014526 (0.066460) | 0.093296 \/ 0.176557 (-0.083260) | 0.150592 \/ 0.737135 (-0.586543) | 0.094150 \/ 0.296338 (-0.202188) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.379412 \/ 0.215209 (0.164202) | 3.797927 \/ 2.077655 (1.720272) | 1.830654 \/ 1.504120 (0.326534) | 1.669569 \/ 1.541195 (0.128374) | 1.746738 \/ 1.468490 (0.278248) | 0.479536 \/ 4.584777 (-4.105241) | 
3.592867 \/ 3.745712 (-0.152845) | 5.468098 \/ 5.269862 (0.198237) | 3.268013 \/ 4.565676 (-1.297663) | 0.056635 \/ 0.424275 (-0.367640) | 0.007224 \/ 0.007607 (-0.000383) | 0.456681 \/ 0.226044 (0.230636) | 4.566736 \/ 2.268929 (2.297807) | 2.362831 \/ 55.444624 (-53.081793) | 1.965141 \/ 6.876477 (-4.911336) | 2.156905 \/ 2.142072 (0.014833) | 0.572543 \/ 4.805227 (-4.232684) | 0.132203 \/ 6.500664 (-6.368461) | 0.059254 \/ 0.075469 (-0.016215) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.256134 \/ 1.841788 (-0.585654) | 19.905438 \/ 8.074308 (11.831130) | 14.179556 \/ 10.191392 (3.988164) | 0.168043 \/ 0.680424 (-0.512381) | 0.018215 \/ 0.534201 (-0.515986) | 0.392740 \/ 0.579283 (-0.186543) | 0.398397 \/ 0.434364 (-0.035967) | 0.463806 \/ 0.540337 (-0.076531) | 0.616248 \/ 1.386936 (-0.770688) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006564 \/ 0.011353 (-0.004789) | 0.003923 \/ 0.011008 (-0.007085) | 0.063929 \/ 0.038508 (0.025421) | 0.073780 \/ 0.023109 (0.050671) | 0.360242 \/ 0.275898 (0.084344) | 0.395078 \/ 0.323480 (0.071598) | 0.005265 \/ 0.007986 (-0.002720) | 0.003229 \/ 0.004328 (-0.001100) | 0.064094 \/ 0.004250 (0.059843) | 0.057468 \/ 0.037052 (0.020416) | 0.369530 \/ 0.258489 (0.111041) | 0.411159 \/ 0.293841 (0.117318) | 0.031278 \/ 0.128546 (-0.097268) | 0.008424 \/ 0.075646 (-0.067222) | 0.070411 \/ 0.419271 (-0.348860) | 0.048714 \/ 0.043533 (0.005181) | 0.361280 \/ 0.255139 (0.106141) | 0.382468 \/ 0.283200 (0.099269) | 0.023059 \/ 0.141683 (-0.118624) | 1.452369 \/ 1.452155 (0.000215) | 1.519192 \/ 1.492716 (0.026475) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.223745 \/ 0.018006 (0.205739) | 0.442086 \/ 0.000490 (0.441596) | 0.000379 \/ 0.000200 (0.000179) | 0.000055 \/ 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030919 \/ 0.037411 (-0.006493) | 0.088483 \/ 0.014526 (0.073958) | 0.101165 \/ 0.176557 (-0.075391) | 0.154332 \/ 0.737135 (-0.582804) | 0.103030 \/ 0.296338 (-0.193309) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.414520 \/ 0.215209 (0.199311) | 4.126754 \/ 2.077655 (2.049099) | 2.142677 \/ 1.504120 (0.638557) | 1.995300 \/ 1.541195 (0.454106) | 2.101678 \/ 1.468490 (0.633188) | 0.481099 \/ 4.584777 (-4.103678) | 
3.562813 \/ 3.745712 (-0.182900) | 3.392463 \/ 5.269862 (-1.877399) | 1.983943 \/ 4.565676 (-2.581734) | 0.056594 \/ 0.424275 (-0.367681) | 0.007216 \/ 0.007607 (-0.000391) | 0.495085 \/ 0.226044 (0.269041) | 4.955640 \/ 2.268929 (2.686712) | 2.629434 \/ 55.444624 (-52.815191) | 2.269577 \/ 6.876477 (-4.606900) | 2.357708 \/ 2.142072 (0.215635) | 0.612370 \/ 4.805227 (-4.192857) | 0.131169 \/ 6.500664 (-6.369495) | 0.061029 \/ 0.075469 (-0.014440) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.339438 \/ 1.841788 (-0.502350) | 19.757611 \/ 8.074308 (11.683303) | 14.246254 \/ 10.191392 (4.054862) | 0.170750 \/ 0.680424 (-0.509674) | 0.018192 \/ 0.534201 (-0.516009) | 0.395693 \/ 0.579283 (-0.183590) | 0.411003 \/ 0.434364 (-0.023361) | 0.478531 \/ 0.540337 (-0.061806) | 0.650291 \/ 1.386936 (-0.736645) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#3e34d06d746688dd5d26e4c85517b7e1a2f361ca \"CML watermark\")\n"],"created_at":1688408954000,"updated_at":1688663051000,"closed_at":1688662525000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/6004","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6004","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6004.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6004.patch","merged_at":1688662525000},"body":"Contains the following improvements:\r\n\r\n* fixes a \"share dataset\" link in README and modifies the \"hosting\" part in the disclaimer section\r\n* updates `Makefile` to also run the style checks on `utils` and `setup.py`\r\n* deletes a test for GH-hosted datasets (no longer supported)\r\n* deletes `convert_dataset.sh` (outdated)\r\n* aligns `utils\/release.py` with `transformers` (the current version is outdated)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6004\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6004\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6003","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6003\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6003\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6003\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/6003","id":1786554110,"node_id":"I_kwDODunzps5qfKb-","number":6003,"title":"interleave_datasets & DataCollatorForLanguageModeling having a conflict 
?","user":{"login":"PonteIneptique","id":1929830,"node_id":"MDQ6VXNlcjE5Mjk4MzA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1929830?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/PonteIneptique","html_url":"https:\/\/github.com\/PonteIneptique","followers_url":"https:\/\/api.github.com\/users\/PonteIneptique\/followers","following_url":"https:\/\/api.github.com\/users\/PonteIneptique\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/PonteIneptique\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/PonteIneptique\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/PonteIneptique\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/PonteIneptique\/orgs","repos_url":"https:\/\/api.github.com\/users\/PonteIneptique\/repos","events_url":"https:\/\/api.github.com\/users\/PonteIneptique\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/PonteIneptique\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1688404531000,"updated_at":1688404531000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nHi everyone :)\r\n\r\nI have two local & custom datasets (1 \"sentence\" per line) which I split along the 95\/5 lines for pre-training a Bert model. I use a modified version of `run_mlm.py` in order to be able to make use of `interleave_dataset`:\r\n\r\n- `tokenize()` runs fine\r\n- `group_text()` runs fine\r\n\r\nEverytime, on step 19, I get \r\n\r\n```pytb\r\n File \"env\/lib\/python3.9\/site-packages\/transformers\/data\/data_collator.py\", line 779, in torch_mask_tokens\r\n inputs[indices_random] = random_words[indices_random]\r\nRuntimeError: Index put requires the source and destination dtypes match, got Float for the destination and Long for the source.\r\n```\r\n\r\nI tried:\r\n- training without interleave on dataset 1, it runs\r\n- training without interleave on dataset 2, it runs\r\n- training without `.to_iterable_dataset()`, it hangs then crash\r\n- training without group_text() and padding to max_length seemed to fix the issue, but who knows if this was just because it was an issue that would come much later in terms of steps.\r\n\r\nI might have coded something wrong, but I don't get what \n\n### Steps to reproduce the bug\n\nI have this function:\r\n\r\n```py\r\ndef build_dataset(path: str, percent: str):\r\n dataset = load_dataset(\r\n \"text\",\r\n data_files={\"train\": [path]},\r\n split=f\"train[{percent}]\"\r\n )\r\n dataset = dataset.map(\r\n lambda examples: tokenize(examples[\"text\"]),\r\n batched=True,\r\n num_proc=num_proc,\r\n )\r\n\r\n dataset = dataset.map(\r\n group_texts,\r\n batched=True,\r\n num_proc=num_proc,\r\n desc=f\"Grouping texts in chunks of {tokenizer.max_seq_length}\",\r\n remove_columns=[\"text\"]\r\n )\r\n\r\n print(len(dataset))\r\n return dataset.to_iterable_dataset()\r\n```\r\n\r\nI hardcoded group_text:\r\n```py\r\n def group_texts(examples):\r\n # Concatenate all texts.\r\n concatenated_examples = {k: list(chain(*examples[k])) for k in examples.keys()}\r\n total_length = len(concatenated_examples[list(examples.keys())[0]])\r\n # We drop the small remainder, and if the total_length < max_seq_length we exclude this batch and return an empty dict.\r\n # We could add padding if the model supported it instead of this drop, 
\n\n### Steps to reproduce the bug\n\nI have this function:\r\n\r\n```py\r\ndef build_dataset(path: str, percent: str):\r\n    dataset = load_dataset(\r\n        \"text\",\r\n        data_files={\"train\": [path]},\r\n        split=f\"train[{percent}]\"\r\n    )\r\n    dataset = dataset.map(\r\n        lambda examples: tokenize(examples[\"text\"]),\r\n        batched=True,\r\n        num_proc=num_proc,\r\n    )\r\n\r\n    dataset = dataset.map(\r\n        group_texts,\r\n        batched=True,\r\n        num_proc=num_proc,\r\n        desc=f\"Grouping texts in chunks of {tokenizer.max_seq_length}\",\r\n        remove_columns=[\"text\"]\r\n    )\r\n\r\n    print(len(dataset))\r\n    return dataset.to_iterable_dataset()\r\n```\r\n\r\nI hardcoded `group_texts`:\r\n```py\r\ndef group_texts(examples):\r\n    # Concatenate all texts.\r\n    concatenated_examples = {k: list(chain(*examples[k])) for k in examples.keys()}\r\n    total_length = len(concatenated_examples[list(examples.keys())[0]])\r\n    # We drop the small remainder, and if the total_length < max_seq_length we exclude this batch and return an empty dict.\r\n    # We could add padding if the model supported it instead of this drop, you can customize this part to your needs.\r\n    total_length = (total_length \/\/ 512) * 512\r\n    # Split by chunks of max_len.\r\n    result = {\r\n        k: [t[i: i + 512] for i in range(0, total_length, 512)]\r\n        for k, t in concatenated_examples.items()\r\n    }\r\n    # result = {k: [el for el in elements if el] for k, elements in result.items()}\r\n    return result\r\n```\r\n\r\nAnd then I build the datasets using the following code:\r\n\r\n```py\r\ntrain1 = build_dataset(\"d1.txt\", \":95%\")\r\ntrain2 = build_dataset(\"d2.txt\", \":95%\")\r\ndev1 = build_dataset(\"d1.txt\", \"95%:\")\r\ndev2 = build_dataset(\"d2.txt\", \"95%:\")\r\n```\r\n\r\nand finally I run:\r\n```py\r\ntrain_dataset = interleave_datasets(\r\n    [train1, train2],\r\n    probabilities=[0.8, 0.2],\r\n    seed=42\r\n)\r\neval_dataset = interleave_datasets(\r\n    [dev1, dev2],\r\n    probabilities=[0.8, 0.2],\r\n    seed=42\r\n)\r\n```\r\n\r\nThen I run the training part, which remains mostly untouched:\r\n\r\n> CUDA_VISIBLE_DEVICES=1 python custom_dataset.py --model_type bert --per_device_train_batch_size 32 --do_train --output_dir \/var\/mlm\/training-bert\/model --max_seq_length 512 --save_steps 10000 --save_total_limit 3 --auto_find_batch_size --logging_dir .\/logs-bert --learning_rate 0.0001 --do_train --num_train_epochs 25 --warmup_steps 10000 --max_step 45000 --fp16\n\n### Expected behavior\n\nThe model should then train normally, but it fails every time at the same step (19).\r\n\r\nPrinting the variables at `inputs[indices_random] = random_words[indices_random]` shows a magnificent empty tensor (, 32) [if I remember correctly].\n\n### Environment info\n\ntransformers[torch] 4.30.2\r\nUbuntu\r\nA100 0 CUDA 12\r\nDriver Version: 525.116.04","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6003\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6003\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false}
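Since the failure appears at step 19 with a per-device batch size of 32, any empty or ragged example should surface within roughly the first 640 interleaved examples. A hedged diagnostic sketch reusing the reporter's `train_dataset` (the 640 bound and the `input_ids` column name are assumptions based on the script above):

```py
from itertools import islice

# Scan the prefix of the interleaved stream that the trainer would consume
# by the failing step, flagging any chunk that is not exactly 512 tokens.
for i, example in enumerate(islice(train_dataset, 640)):
    n = len(example["input_ids"])
    if n != 512:
        print(f"example {i}: unexpected length {n}")
```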
metrics","user":{"login":"ingyuseong","id":37537248,"node_id":"MDQ6VXNlcjM3NTM3MjQ4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/37537248?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ingyuseong","html_url":"https:\/\/github.com\/ingyuseong","followers_url":"https:\/\/api.github.com\/users\/ingyuseong\/followers","following_url":"https:\/\/api.github.com\/users\/ingyuseong\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ingyuseong\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ingyuseong\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ingyuseong\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ingyuseong\/orgs","repos_url":"https:\/\/api.github.com\/users\/ingyuseong\/repos","events_url":"https:\/\/api.github.com\/users\/ingyuseong\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ingyuseong\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The metrics API in `datasets` is deprecated as of version 2.0, and `evaulate` is our new library for metrics. You can add a new metric to it by following [these steps](https:\/\/huggingface.co\/docs\/evaluate\/creating_and_sharing)."],"created_at":1688386270000,"updated_at":1688903840000,"closed_at":1688903840000,"author_association":"NONE","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/6002","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6002","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6002.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6002.patch","merged_at":null},"body":"## Metrics for KLUE-MRC (Korean Language Understanding Evaluation \u2014 Machine Reading Comprehension)\r\n\r\nAdding metrics for [KLUE-MRC](https:\/\/huggingface.co\/datasets\/klue).\r\nKLUE-MRC is very similar to SQuAD 2.0 but has a slightly different format which is why I added metrics for KLUE-MRC.\r\n\r\nSpecifically, in the case of [LM Eval Harness](https:\/\/github.com\/EleutherAI\/lm-evaluation-harness), it leverages the scoring script of SQuAD to evaluate SQuAD 2.0 and KorQuAD. But the script isn't suitable for KLUE-MRC because KLUE-MRC is a bit different from SQuAD 2.0. 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6001","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6001\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6001\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6001\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6001","id":1782516627,"node_id":"PR_kwDODunzps5UVMMh","number":6001,"title":"Align `column_names` type check with type hint in `sort`","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006038 \/ 0.011353 (-0.005315) | 0.003797 \/ 0.011008 (-0.007211) | 0.097686 \/ 0.038508 (0.059178) | 0.035235 \/ 0.023109 (0.012126) | 0.317294 \/ 0.275898 (0.041396) | 0.377682 \/ 0.323480 (0.054202) | 0.003485 \/ 0.007986 (-0.004501) | 0.003603 \/ 0.004328 (-0.000725) | 0.077268 \/ 0.004250 (0.073017) | 0.054649 \/ 0.037052 (0.017597) | 0.322293 \/ 0.258489 (0.063804) | 0.372277 \/ 0.293841 (0.078436) | 0.027927 \/ 0.128546 (-0.100619) | 0.008495 \/ 0.075646 (-0.067151) | 0.313078 \/ 0.419271 (-0.106193) | 0.046974 \/ 0.043533 (0.003441) | 0.313848 \/ 0.255139 (0.058709) | 0.338454 \/ 0.283200 (0.055255) | 0.020462 \/ 0.141683 (-0.121221) | 1.473027 \/ 1.452155 (0.020873) | 1.539468 \/ 1.492716 (0.046752) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.221429 \/ 0.018006 (0.203423) | 0.412044 \/ 0.000490 (0.411555) | 0.005866 \/ 0.000200 (0.005666) | 0.000075 \/ 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.022870 \/ 0.037411 (-0.014541) | 0.099129 \/ 0.014526 (0.084603) | 0.103463 \/ 0.176557 (-0.073094) | 0.164969 \/ 0.737135 (-0.572166) | 0.110000 \/ 0.296338 (-0.186339) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.431311 \/ 0.215209 (0.216102) | 4.293562 \/ 2.077655 (2.215907) | 1.961209 \/ 1.504120 (0.457089) | 1.733680 \/ 1.541195 (0.192485) | 1.793171 \/ 1.468490 (0.324681) | 0.568566 \/ 4.584777 (-4.016211) | 
3.401794 \/ 3.745712 (-0.343918) | 1.827949 \/ 5.269862 (-3.441913) | 1.055963 \/ 4.565676 (-3.509714) | 0.068459 \/ 0.424275 (-0.355816) | 0.011586 \/ 0.007607 (0.003979) | 0.533936 \/ 0.226044 (0.307891) | 5.347637 \/ 2.268929 (3.078708) | 2.378056 \/ 55.444624 (-53.066569) | 2.032159 \/ 6.876477 (-4.844318) | 2.159064 \/ 2.142072 (0.016991) | 0.674528 \/ 4.805227 (-4.130699) | 0.136859 \/ 6.500664 (-6.363805) | 0.066629 \/ 0.075469 (-0.008840) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.218084 \/ 1.841788 (-0.623704) | 14.141710 \/ 8.074308 (6.067402) | 13.588415 \/ 10.191392 (3.397023) | 0.155104 \/ 0.680424 (-0.525320) | 0.017160 \/ 0.534201 (-0.517041) | 0.375558 \/ 0.579283 (-0.203725) | 0.386293 \/ 0.434364 (-0.048071) | 0.459476 \/ 0.540337 (-0.080862) | 0.548561 \/ 1.386936 (-0.838375) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.005878 \/ 0.011353 (-0.005475) | 0.003750 \/ 0.011008 (-0.007259) | 0.077720 \/ 0.038508 (0.039212) | 0.034955 \/ 0.023109 (0.011846) | 0.357480 \/ 0.275898 (0.081582) | 0.418210 \/ 0.323480 (0.094730) | 0.004566 \/ 0.007986 (-0.003419) | 0.002918 \/ 0.004328 (-0.001410) | 0.076517 \/ 0.004250 (0.072266) | 0.050202 \/ 0.037052 (0.013150) | 0.368166 \/ 0.258489 (0.109677) | 0.415681 \/ 0.293841 (0.121840) | 0.029496 \/ 0.128546 (-0.099050) | 0.008547 \/ 0.075646 (-0.067099) | 0.083037 \/ 0.419271 (-0.336234) | 0.045001 \/ 0.043533 (0.001468) | 0.356503 \/ 0.255139 (0.101364) | 0.383747 \/ 0.283200 (0.100547) | 0.025071 \/ 0.141683 (-0.116612) | 1.541985 \/ 1.452155 (0.089830) | 1.594710 \/ 1.492716 (0.101994) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.204491 \/ 0.018006 (0.186484) | 0.408686 \/ 0.000490 (0.408196) | 0.002505 \/ 0.000200 (0.002305) | 0.000082 \/ 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.024446 \/ 0.037411 (-0.012965) | 0.101432 \/ 0.014526 (0.086906) | 0.108105 \/ 0.176557 (-0.068452) | 0.161195 \/ 0.737135 (-0.575940) | 0.112671 \/ 0.296338 (-0.183667) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.459697 \/ 0.215209 (0.244488) | 4.570071 \/ 2.077655 (2.492416) | 2.211547 \/ 1.504120 (0.707427) | 1.996651 \/ 1.541195 (0.455457) | 2.015621 \/ 1.468490 (0.547131) | 0.567423 \/ 4.584777 (-4.017354) | 
3.408027 \/ 3.745712 (-0.337685) | 2.913824 \/ 5.269862 (-2.356038) | 1.423223 \/ 4.565676 (-3.142453) | 0.068740 \/ 0.424275 (-0.355535) | 0.010997 \/ 0.007607 (0.003390) | 0.567340 \/ 0.226044 (0.341296) | 5.666280 \/ 2.268929 (3.397351) | 2.804934 \/ 55.444624 (-52.639690) | 2.430761 \/ 6.876477 (-4.445716) | 2.451820 \/ 2.142072 (0.309748) | 0.681926 \/ 4.805227 (-4.123301) | 0.137761 \/ 6.500664 (-6.362903) | 0.067173 \/ 0.075469 (-0.008296) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.329853 \/ 1.841788 (-0.511934) | 14.436232 \/ 8.074308 (6.361924) | 14.398645 \/ 10.191392 (4.207253) | 0.147421 \/ 0.680424 (-0.533002) | 0.016743 \/ 0.534201 (-0.517458) | 0.364964 \/ 0.579283 (-0.214319) | 0.387072 \/ 0.434364 (-0.047292) | 0.423892 \/ 0.540337 (-0.116445) | 0.521304 \/ 1.386936 (-0.865632) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#a62b6ce65f718e9ff4189da86d160ae4bb197fc2 \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006463 \/ 0.011353 (-0.004889) | 0.003923 \/ 0.011008 (-0.007086) | 0.102096 \/ 0.038508 (0.063588) | 0.040230 \/ 0.023109 (0.017121) | 0.384688 \/ 0.275898 (0.108789) | 0.445574 \/ 0.323480 (0.122094) | 0.003590 \/ 0.007986 (-0.004395) | 0.004023 \/ 0.004328 (-0.000306) | 0.080125 \/ 0.004250 (0.075875) | 0.057406 \/ 0.037052 (0.020354) | 0.395049 \/ 0.258489 (0.136560) | 0.438065 \/ 0.293841 (0.144224) | 0.028963 \/ 0.128546 (-0.099583) | 0.008693 \/ 0.075646 (-0.066954) | 0.317158 \/ 0.419271 (-0.102114) | 0.047930 \/ 0.043533 (0.004397) | 0.382442 \/ 0.255139 (0.127303) | 0.410665 \/ 0.283200 (0.127466) | 0.020127 \/ 0.141683 (-0.121555) | 1.558554 \/ 1.452155 (0.106400) | 1.590959 \/ 1.492716 (0.098242) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.208826 \/ 0.018006 (0.190820) | 0.432037 \/ 0.000490 (0.431547) | 0.006509 \/ 0.000200 (0.006309) | 0.000285 \/ 0.000054 (0.000230) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.023460 \/ 0.037411 (-0.013951) | 0.099070 \/ 0.014526 (0.084545) | 0.105771 \/ 0.176557 (-0.070785) | 0.166683 \/ 0.737135 (-0.570452) | 0.108755 \/ 0.296338 (-0.187583) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.424324 \/ 0.215209 (0.209115) | 4.225696 \/ 2.077655 (2.148042) | 1.910955 \/ 1.504120 (0.406835) | 1.704493 \/ 1.541195 (0.163298) | 1.782784 \/ 1.468490 (0.314293) | 0.562927 \/ 4.584777 (-4.021850) | 
3.380163 \/ 3.745712 (-0.365550) | 1.779641 \/ 5.269862 (-3.490221) | 1.029134 \/ 4.565676 (-3.536543) | 0.068325 \/ 0.424275 (-0.355950) | 0.011528 \/ 0.007607 (0.003921) | 0.530141 \/ 0.226044 (0.304097) | 5.323443 \/ 2.268929 (3.054514) | 2.346956 \/ 55.444624 (-53.097668) | 2.013335 \/ 6.876477 (-4.863142) | 2.118531 \/ 2.142072 (-0.023541) | 0.675206 \/ 4.805227 (-4.130021) | 0.135473 \/ 6.500664 (-6.365191) | 0.064804 \/ 0.075469 (-0.010665) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.240179 \/ 1.841788 (-0.601608) | 14.692449 \/ 8.074308 (6.618141) | 13.672223 \/ 10.191392 (3.480831) | 0.147748 \/ 0.680424 (-0.532676) | 0.017119 \/ 0.534201 (-0.517082) | 0.369481 \/ 0.579283 (-0.209802) | 0.390133 \/ 0.434364 (-0.044231) | 0.458768 \/ 0.540337 (-0.081569) | 0.548989 \/ 1.386936 (-0.837947) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006319 \/ 0.011353 (-0.005034) | 0.003975 \/ 0.011008 (-0.007033) | 0.077886 \/ 0.038508 (0.039378) | 0.038322 \/ 0.023109 (0.015213) | 0.379851 \/ 0.275898 (0.103953) | 0.456749 \/ 0.323480 (0.133269) | 0.005320 \/ 0.007986 (-0.002665) | 0.003135 \/ 0.004328 (-0.001194) | 0.078272 \/ 0.004250 (0.074022) | 0.059919 \/ 0.037052 (0.022866) | 0.430062 \/ 0.258489 (0.171573) | 0.477432 \/ 0.293841 (0.183591) | 0.029713 \/ 0.128546 (-0.098833) | 0.008704 \/ 0.075646 (-0.066942) | 0.082488 \/ 0.419271 (-0.336784) | 0.044667 \/ 0.043533 (0.001134) | 0.354910 \/ 0.255139 (0.099771) | 0.434637 \/ 0.283200 (0.151438) | 0.026402 \/ 0.141683 (-0.115281) | 1.528825 \/ 1.452155 (0.076671) | 1.548209 \/ 1.492716 (0.055493) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.237988 \/ 0.018006 (0.219982) | 0.420402 \/ 0.000490 (0.419913) | 0.003098 \/ 0.000200 (0.002898) | 0.000077 \/ 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.026253 \/ 0.037411 (-0.011159) | 0.106137 \/ 0.014526 (0.091611) | 0.110273 \/ 0.176557 (-0.066284) | 0.165316 \/ 0.737135 (-0.571819) | 0.115720 \/ 0.296338 (-0.180619) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.454244 \/ 0.215209 (0.239035) | 4.526018 \/ 2.077655 (2.448364) | 2.395985 \/ 1.504120 (0.891865) | 2.234822 \/ 1.541195 (0.693627) | 2.370235 \/ 1.468490 (0.901745) | 0.567607 \/ 4.584777 (-4.017169) | 
3.650156 \/ 3.745712 (-0.095556) | 3.360094 \/ 5.269862 (-1.909768) | 1.415252 \/ 4.565676 (-3.150424) | 0.068012 \/ 0.424275 (-0.356263) | 0.011135 \/ 0.007607 (0.003528) | 0.561967 \/ 0.226044 (0.335923) | 5.621819 \/ 2.268929 (3.352890) | 2.676912 \/ 55.444624 (-52.767712) | 2.338306 \/ 6.876477 (-4.538171) | 2.430888 \/ 2.142072 (0.288815) | 0.684576 \/ 4.805227 (-4.120651) | 0.138923 \/ 6.500664 (-6.361741) | 0.069933 \/ 0.075469 (-0.005536) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.313383 \/ 1.841788 (-0.528405) | 15.125088 \/ 8.074308 (7.050780) | 14.801501 \/ 10.191392 (4.610109) | 0.134235 \/ 0.680424 (-0.546189) | 0.017058 \/ 0.534201 (-0.517143) | 0.365166 \/ 0.579283 (-0.214117) | 0.395415 \/ 0.434364 (-0.038949) | 0.419355 \/ 0.540337 (-0.120983) | 0.513411 \/ 1.386936 (-0.873525) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#8b9649b3cfb49342e44873ce7e29e0c75eaf3efa \"CML watermark\")\n"],"created_at":1688130950000,"updated_at":1688134712000,"closed_at":1688134284000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/6001","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6001","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6001.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6001.patch","merged_at":1688134284000},"body":"Fix #5998 ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6001\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6001\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6000","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6000\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6000\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6000\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6000","id":1782456878,"node_id":"PR_kwDODunzps5UU_FB","number":6000,"title":"Pin `joblib` to avoid `joblibspark` test 
failures","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006722 \/ 0.011353 (-0.004631) | 0.004425 \/ 0.011008 (-0.006583) | 0.100850 \/ 0.038508 (0.062341) | 0.040816 \/ 0.023109 (0.017707) | 0.348823 \/ 0.275898 (0.072925) | 0.446285 \/ 0.323480 (0.122805) | 0.005738 \/ 0.007986 (-0.002247) | 0.003517 \/ 0.004328 (-0.000811) | 0.078824 \/ 0.004250 (0.074574) | 0.064695 \/ 0.037052 (0.027643) | 0.389894 \/ 0.258489 (0.131405) | 0.416107 \/ 0.293841 (0.122266) | 0.028850 \/ 0.128546 (-0.099696) | 0.009011 \/ 0.075646 (-0.066635) | 0.323117 \/ 0.419271 (-0.096154) | 0.049162 \/ 0.043533 (0.005629) | 0.340144 \/ 0.255139 (0.085005) | 0.382072 \/ 0.283200 (0.098872) | 0.023160 \/ 0.141683 (-0.118523) | 1.549218 \/ 1.452155 (0.097063) | 1.581266 \/ 1.492716 (0.088550) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.293360 \/ 0.018006 (0.275353) | 0.602189 \/ 0.000490 (0.601700) | 0.004608 \/ 0.000200 (0.004408) | 0.000082 \/ 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.028144 \/ 0.037411 (-0.009267) | 0.107088 \/ 0.014526 (0.092562) | 0.112188 \/ 0.176557 (-0.064369) | 0.174669 \/ 0.737135 (-0.562466) | 0.116359 \/ 0.296338 (-0.179980) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.422911 \/ 0.215209 (0.207702) | 4.231524 \/ 2.077655 (2.153869) | 1.906711 \/ 1.504120 (0.402591) | 1.706841 \/ 1.541195 (0.165646) | 1.792066 \/ 1.468490 (0.323576) | 0.559221 \/ 4.584777 (-4.025556) | 
3.434280 \/ 3.745712 (-0.311433) | 1.918714 \/ 5.269862 (-3.351148) | 1.073070 \/ 4.565676 (-3.492606) | 0.067891 \/ 0.424275 (-0.356384) | 0.011927 \/ 0.007607 (0.004320) | 0.530843 \/ 0.226044 (0.304799) | 5.309213 \/ 2.268929 (3.040285) | 2.439246 \/ 55.444624 (-53.005378) | 2.101245 \/ 6.876477 (-4.775231) | 2.177436 \/ 2.142072 (0.035363) | 0.672150 \/ 4.805227 (-4.133077) | 0.137571 \/ 6.500664 (-6.363093) | 0.068343 \/ 0.075469 (-0.007126) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.265262 \/ 1.841788 (-0.576525) | 14.988021 \/ 8.074308 (6.913713) | 13.611677 \/ 10.191392 (3.420285) | 0.171389 \/ 0.680424 (-0.509035) | 0.017681 \/ 0.534201 (-0.516520) | 0.377542 \/ 0.579283 (-0.201741) | 0.399475 \/ 0.434364 (-0.034889) | 0.469553 \/ 0.540337 (-0.070785) | 0.561888 \/ 1.386936 (-0.825048) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006782 \/ 0.011353 (-0.004571) | 0.004412 \/ 0.011008 (-0.006597) | 0.078594 \/ 0.038508 (0.040086) | 0.039930 \/ 0.023109 (0.016820) | 0.371879 \/ 0.275898 (0.095981) | 0.444910 \/ 0.323480 (0.121430) | 0.005707 \/ 0.007986 (-0.002279) | 0.003901 \/ 0.004328 (-0.000427) | 0.080125 \/ 0.004250 (0.075875) | 0.063977 \/ 0.037052 (0.026925) | 0.382781 \/ 0.258489 (0.124292) | 0.441791 \/ 0.293841 (0.147950) | 0.030428 \/ 0.128546 (-0.098118) | 0.009008 \/ 0.075646 (-0.066638) | 0.084447 \/ 0.419271 (-0.334824) | 0.044432 \/ 0.043533 (0.000899) | 0.365686 \/ 0.255139 (0.110547) | 0.394312 \/ 0.283200 (0.111113) | 0.024508 \/ 0.141683 (-0.117175) | 1.577020 \/ 1.452155 (0.124865) | 1.630259 \/ 1.492716 (0.137543) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.307960 \/ 0.018006 (0.289953) | 0.591473 \/ 0.000490 (0.590983) | 0.008098 \/ 0.000200 (0.007898) | 0.000110 \/ 0.000054 (0.000056) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029567 \/ 0.037411 (-0.007845) | 0.112773 \/ 0.014526 (0.098247) | 0.117362 \/ 0.176557 (-0.059194) | 0.174293 \/ 0.737135 (-0.562843) | 0.123156 \/ 0.296338 (-0.173182) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.457475 \/ 0.215209 (0.242266) | 4.599067 \/ 2.077655 (2.521412) | 2.262638 \/ 1.504120 (0.758518) | 2.124943 \/ 1.541195 (0.583748) | 2.339912 \/ 1.468490 (0.871422) | 0.566264 \/ 4.584777 (-4.018513) | 
3.489261 \/ 3.745712 (-0.256451) | 1.925151 \/ 5.269862 (-3.344711) | 1.099389 \/ 4.565676 (-3.466287) | 0.068232 \/ 0.424275 (-0.356043) | 0.011660 \/ 0.007607 (0.004052) | 0.571227 \/ 0.226044 (0.345183) | 5.702059 \/ 2.268929 (3.433130) | 2.837701 \/ 55.444624 (-52.606924) | 2.605468 \/ 6.876477 (-4.271008) | 2.818396 \/ 2.142072 (0.676323) | 0.681856 \/ 4.805227 (-4.123371) | 0.141401 \/ 6.500664 (-6.359263) | 0.069728 \/ 0.075469 (-0.005741) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.354935 \/ 1.841788 (-0.486853) | 15.437404 \/ 8.074308 (7.363095) | 15.415193 \/ 10.191392 (5.223801) | 0.153459 \/ 0.680424 (-0.526964) | 0.017190 \/ 0.534201 (-0.517011) | 0.367256 \/ 0.579283 (-0.212027) | 0.392709 \/ 0.434364 (-0.041655) | 0.426125 \/ 0.540337 (-0.114213) | 0.522612 \/ 1.386936 (-0.864324) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#25ac13d8ab23e7d99252ce083a45e8333b6bbcdc \"CML watermark\")\n","_The documentation is not available anymore as the PR was closed or merged._","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009183 \/ 0.011353 (-0.002170) | 0.005232 \/ 0.011008 (-0.005776) | 0.120349 \/ 0.038508 (0.081841) | 0.044715 \/ 0.023109 (0.021606) | 0.361519 \/ 0.275898 (0.085621) | 0.463702 \/ 0.323480 (0.140223) | 0.005842 \/ 0.007986 (-0.002144) | 0.004041 \/ 0.004328 (-0.000288) | 0.096953 \/ 0.004250 (0.092703) | 0.070593 \/ 0.037052 (0.033540) | 0.409790 \/ 0.258489 (0.151301) | 0.477452 \/ 0.293841 (0.183611) | 0.045827 \/ 0.128546 (-0.082719) | 0.014038 \/ 0.075646 (-0.061608) | 0.421317 \/ 0.419271 (0.002045) | 0.065276 \/ 0.043533 (0.021743) | 0.360074 \/ 0.255139 (0.104935) | 0.409147 \/ 0.283200 (0.125947) | 0.032444 \/ 0.141683 (-0.109238) | 1.739257 \/ 1.452155 (0.287102) | 1.831408 \/ 1.492716 (0.338692) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.274852 \/ 0.018006 (0.256846) | 0.596320 \/ 0.000490 (0.595830) | 0.006399 \/ 0.000200 (0.006199) | 0.000133 \/ 0.000054 (0.000079) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.031400 \/ 0.037411 (-0.006012) | 0.127052 \/ 0.014526 (0.112526) | 0.134269 \/ 0.176557 (-0.042288) | 0.225998 \/ 0.737135 (-0.511137) | 0.150019 \/ 0.296338 (-0.146319) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.654202 \/ 0.215209 (0.438993) | 6.216735 \/ 2.077655 (4.139081) | 2.440214 \/ 1.504120 (0.936094) | 2.150575 \/ 1.541195 (0.609380) | 2.124790 \/ 1.468490 (0.656300) | 0.923514 \/ 4.584777 (-3.661263) | 
5.556924 \/ 3.745712 (1.811212) | 2.843886 \/ 5.269862 (-2.425975) | 1.834232 \/ 4.565676 (-2.731444) | 0.111735 \/ 0.424275 (-0.312540) | 0.014823 \/ 0.007607 (0.007216) | 0.820503 \/ 0.226044 (0.594459) | 7.887737 \/ 2.268929 (5.618809) | 3.120307 \/ 55.444624 (-52.324317) | 2.405856 \/ 6.876477 (-4.470621) | 2.411239 \/ 2.142072 (0.269167) | 1.071283 \/ 4.805227 (-3.733944) | 0.227738 \/ 6.500664 (-6.272926) | 0.073516 \/ 0.075469 (-0.001953) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.531806 \/ 1.841788 (-0.309982) | 18.547661 \/ 8.074308 (10.473353) | 21.083922 \/ 10.191392 (10.892530) | 0.241706 \/ 0.680424 (-0.438718) | 0.034169 \/ 0.534201 (-0.500032) | 0.497514 \/ 0.579283 (-0.081769) | 0.599801 \/ 0.434364 (0.165437) | 0.576465 \/ 0.540337 (0.036127) | 0.673509 \/ 1.386936 (-0.713427) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007558 \/ 0.011353 (-0.003795) | 0.005001 \/ 0.011008 (-0.006008) | 0.093809 \/ 0.038508 (0.055301) | 0.039792 \/ 0.023109 (0.016683) | 0.456869 \/ 0.275898 (0.180971) | 0.493370 \/ 0.323480 (0.169891) | 0.005561 \/ 0.007986 (-0.002424) | 0.003982 \/ 0.004328 (-0.000346) | 0.085421 \/ 0.004250 (0.081170) | 0.059817 \/ 0.037052 (0.022765) | 0.468040 \/ 0.258489 (0.209550) | 0.514853 \/ 0.293841 (0.221012) | 0.044267 \/ 0.128546 (-0.084279) | 0.012674 \/ 0.075646 (-0.062972) | 0.098324 \/ 0.419271 (-0.320948) | 0.056604 \/ 0.043533 (0.013071) | 0.432200 \/ 0.255139 (0.177061) | 0.459812 \/ 0.283200 (0.176612) | 0.033872 \/ 0.141683 (-0.107811) | 1.618576 \/ 1.452155 (0.166421) | 1.676562 \/ 1.492716 (0.183846) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.230625 \/ 0.018006 (0.212619) | 0.600558 \/ 0.000490 (0.600068) | 0.003419 \/ 0.000200 (0.003219) | 0.000113 \/ 0.000054 (0.000059) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.026916 \/ 0.037411 (-0.010496) | 0.103003 \/ 0.014526 (0.088478) | 0.117078 \/ 0.176557 (-0.059478) | 0.169359 \/ 0.737135 (-0.567776) | 0.120305 \/ 0.296338 (-0.176034) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.616877 \/ 0.215209 (0.401668) | 6.157232 \/ 2.077655 (4.079577) | 2.869219 \/ 1.504120 (1.365099) | 2.381410 \/ 1.541195 (0.840216) | 2.417357 \/ 1.468490 (0.948867) | 0.914947 \/ 4.584777 (-3.669830) | 
5.718526 \/ 3.745712 (1.972814) | 2.757253 \/ 5.269862 (-2.512609) | 1.794122 \/ 4.565676 (-2.771554) | 0.108423 \/ 0.424275 (-0.315852) | 0.013378 \/ 0.007607 (0.005771) | 0.831067 \/ 0.226044 (0.605023) | 8.478946 \/ 2.268929 (6.210018) | 3.685937 \/ 55.444624 (-51.758687) | 2.867472 \/ 6.876477 (-4.009005) | 2.895975 \/ 2.142072 (0.753903) | 1.137547 \/ 4.805227 (-3.667681) | 0.213891 \/ 6.500664 (-6.286773) | 0.075825 \/ 0.075469 (0.000356) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.621193 \/ 1.841788 (-0.220594) | 17.322110 \/ 8.074308 (9.247802) | 21.804016 \/ 10.191392 (11.612624) | 0.243692 \/ 0.680424 (-0.436732) | 0.030331 \/ 0.534201 (-0.503870) | 0.492186 \/ 0.579283 (-0.087097) | 0.632583 \/ 0.434364 (0.198219) | 0.576265 \/ 0.540337 (0.035927) | 0.713165 \/ 1.386936 (-0.673771) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#a293ceb5aa41c4ae265c0e2aa9ada2d544466121 \"CML watermark\")\n","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008916 \/ 0.011353 (-0.002437) | 0.004737 \/ 0.011008 (-0.006271) | 0.134271 \/ 0.038508 (0.095763) | 0.054472 \/ 0.023109 (0.031363) | 0.380942 \/ 0.275898 (0.105044) | 0.474138 \/ 0.323480 (0.150658) | 0.007917 \/ 0.007986 (-0.000068) | 0.003748 \/ 0.004328 (-0.000580) | 0.092765 \/ 0.004250 (0.088515) | 0.077873 \/ 0.037052 (0.040821) | 0.397533 \/ 0.258489 (0.139043) | 0.454737 \/ 0.293841 (0.160896) | 0.039901 \/ 0.128546 (-0.088645) | 0.010188 \/ 0.075646 (-0.065458) | 0.447312 \/ 0.419271 (0.028040) | 0.068684 \/ 0.043533 (0.025151) | 0.371554 \/ 0.255139 (0.116415) | 0.459655 \/ 0.283200 (0.176455) | 0.027157 \/ 0.141683 (-0.114526) | 1.874643 \/ 1.452155 (0.422488) | 2.014800 \/ 1.492716 (0.522083) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.227079 \/ 0.018006 (0.209073) | 0.483241 \/ 0.000490 (0.482751) | 0.012404 \/ 0.000200 (0.012204) | 0.000409 \/ 0.000054 (0.000354) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.033135 \/ 0.037411 (-0.004277) | 0.137782 \/ 0.014526 (0.123257) | 0.142951 \/ 0.176557 (-0.033605) | 0.209825 \/ 0.737135 (-0.527311) | 0.152438 \/ 0.296338 (-0.143900) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.513066 \/ 0.215209 (0.297857) | 5.122776 \/ 2.077655 (3.045121) | 2.399270 \/ 1.504120 (0.895150) | 2.180143 \/ 1.541195 (0.638949) | 2.286395 \/ 1.468490 (0.817905) | 0.641866 \/ 4.584777 (-3.942911) | 
4.694922 \/ 3.745712 (0.949210) | 2.543390 \/ 5.269862 (-2.726472) | 1.398592 \/ 4.565676 (-3.167084) | 0.088662 \/ 0.424275 (-0.335613) | 0.015854 \/ 0.007607 (0.008247) | 0.688891 \/ 0.226044 (0.462847) | 6.370148 \/ 2.268929 (4.101220) | 2.949974 \/ 55.444624 (-52.494650) | 2.538049 \/ 6.876477 (-4.338428) | 2.699380 \/ 2.142072 (0.557308) | 0.792670 \/ 4.805227 (-4.012557) | 0.169126 \/ 6.500664 (-6.331538) | 0.078511 \/ 0.075469 (0.003042) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.609119 \/ 1.841788 (-0.232669) | 18.785069 \/ 8.074308 (10.710761) | 16.670783 \/ 10.191392 (6.479391) | 0.213081 \/ 0.680424 (-0.467343) | 0.023904 \/ 0.534201 (-0.510296) | 0.567720 \/ 0.579283 (-0.011564) | 0.505806 \/ 0.434364 (0.071442) | 0.649466 \/ 0.540337 (0.109129) | 0.773174 \/ 1.386936 (-0.613762) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008036 \/ 0.011353 (-0.003317) | 0.004808 \/ 0.011008 (-0.006201) | 0.094316 \/ 0.038508 (0.055808) | 0.056174 \/ 0.023109 (0.033065) | 0.481618 \/ 0.275898 (0.205720) | 0.565300 \/ 0.323480 (0.241820) | 0.006339 \/ 0.007986 (-0.001646) | 0.003950 \/ 0.004328 (-0.000379) | 0.093389 \/ 0.004250 (0.089139) | 0.076163 \/ 0.037052 (0.039111) | 0.489013 \/ 0.258489 (0.230524) | 0.565451 \/ 0.293841 (0.271611) | 0.039392 \/ 0.128546 (-0.089155) | 0.010553 \/ 0.075646 (-0.065093) | 0.101406 \/ 0.419271 (-0.317865) | 0.062355 \/ 0.043533 (0.018822) | 0.470461 \/ 0.255139 (0.215322) | 0.502574 \/ 0.283200 (0.219375) | 0.030196 \/ 0.141683 (-0.111486) | 1.893926 \/ 1.452155 (0.441771) | 1.958902 \/ 1.492716 (0.466185) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.198074 \/ 0.018006 (0.180068) | 0.476828 \/ 0.000490 (0.476338) | 0.003457 \/ 0.000200 (0.003257) | 0.000105 \/ 0.000054 (0.000051) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.037576 \/ 0.037411 (0.000165) | 0.146663 \/ 0.014526 (0.132138) | 0.152969 \/ 0.176557 (-0.023588) | 0.218683 \/ 0.737135 (-0.518452) | 0.161552 \/ 0.296338 (-0.134786) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.525988 \/ 0.215209 (0.310779) | 5.234673 \/ 2.077655 (3.157018) | 2.571668 \/ 1.504120 (1.067548) | 2.339760 \/ 1.541195 (0.798565) | 2.422886 \/ 1.468490 (0.954395) | 0.651537 \/ 4.584777 (-3.933240) | 
4.811148 \/ 3.745712 (1.065436) | 4.451165 \/ 5.269862 (-0.818697) | 2.016283 \/ 4.565676 (-2.549394) | 0.096393 \/ 0.424275 (-0.327882) | 0.015222 \/ 0.007607 (0.007615) | 0.739132 \/ 0.226044 (0.513087) | 6.813327 \/ 2.268929 (4.544399) | 3.169018 \/ 55.444624 (-52.275606) | 2.783120 \/ 6.876477 (-4.093356) | 2.918979 \/ 2.142072 (0.776907) | 0.797476 \/ 4.805227 (-4.007751) | 0.171038 \/ 6.500664 (-6.329626) | 0.079878 \/ 0.075469 (0.004409) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.595082 \/ 1.841788 (-0.246705) | 19.685844 \/ 8.074308 (11.611536) | 17.518989 \/ 10.191392 (7.327597) | 0.220015 \/ 0.680424 (-0.460409) | 0.026351 \/ 0.534201 (-0.507850) | 0.578977 \/ 0.579283 (-0.000306) | 0.549564 \/ 0.434364 (0.115200) | 0.667564 \/ 0.540337 (0.127227) | 0.802121 \/ 1.386936 (-0.584815) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#e9aee64766aaddfda60a735cfc93345aed64bdcf \"CML watermark\")\n"],"created_at":1688128614000,"updated_at":1688131025000,"closed_at":1688130507000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/6000","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6000","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6000.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6000.patch","merged_at":1688130507000},"body":"`joblibspark` doesn't support the latest `joblib` release.\r\n\r\nSee https:\/\/github.com\/huggingface\/datasets\/actions\/runs\/5401870932\/jobs\/9812337078 for the errors","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6000\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6000\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5999","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5999\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5999\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5999\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5999","id":1781851513,"node_id":"I_kwDODunzps5qNOV5","number":5999,"title":"Getting a 409 error while loading xglue 
dataset","user":{"login":"Praful932","id":45713796,"node_id":"MDQ6VXNlcjQ1NzEzNzk2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/45713796?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Praful932","html_url":"https:\/\/github.com\/Praful932","followers_url":"https:\/\/api.github.com\/users\/Praful932\/followers","following_url":"https:\/\/api.github.com\/users\/Praful932\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Praful932\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Praful932\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Praful932\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Praful932\/orgs","repos_url":"https:\/\/api.github.com\/users\/Praful932\/repos","events_url":"https:\/\/api.github.com\/users\/Praful932\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Praful932\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462.0,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting, @Praful932.\r\n\r\nLet's continue the conversation on the Hub: https:\/\/huggingface.co\/datasets\/xglue\/discussions\/5"],"created_at":1688098434000,"updated_at":1688104643000,"closed_at":1688104642000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nUnable to 
load xglue dataset\n\n### Steps to reproduce the bug\n\n```python\r\nimport datasets\r\n\r\ndataset = datasets.load_dataset(\"xglue\", \"ntg\")\r\n```\r\n\r\n> ConnectionError: Couldn't reach https:\/\/xglue.blob.core.windows.net\/xglue\/xglue_full_dataset.tar.gz (error 409)\n\n### Expected behavior\n\nExpected the dataset to load\n\n### Environment info\n\n- `datasets` version: 2.13.1\r\n- Platform: Linux-5.15.107+-x86_64-with-glibc2.31\r\n- Python version: 3.10.12\r\n- Huggingface_hub version: 0.15.1\r\n- PyArrow version: 9.0.0\r\n- Pandas version: 1.5.3","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5999\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5999\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5998","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5998\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5998\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5998\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5998","id":1781805018,"node_id":"I_kwDODunzps5qNC_a","number":5998,"title":"The current implementation has a potential bug in the sort method","user":{"login":"wangyuxinwhy","id":22192665,"node_id":"MDQ6VXNlcjIyMTkyNjY1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22192665?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/wangyuxinwhy","html_url":"https:\/\/github.com\/wangyuxinwhy","followers_url":"https:\/\/api.github.com\/users\/wangyuxinwhy\/followers","following_url":"https:\/\/api.github.com\/users\/wangyuxinwhy\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/wangyuxinwhy\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/wangyuxinwhy\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/wangyuxinwhy\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/wangyuxinwhy\/orgs","repos_url":"https:\/\/api.github.com\/users\/wangyuxinwhy\/repos","events_url":"https:\/\/api.github.com\/users\/wangyuxinwhy\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/wangyuxinwhy\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting, @wangyuxinwhy. 
"],"created_at":1688095017000,"updated_at":1688134863000,"closed_at":1688134285000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\r\n\r\nIn the sort method\uff0chere's a piece of code\r\n\r\n```python\r\n# column_names: Union[str, Sequence_[str]]\r\n\r\n# Check proper format of and for duplicates in column_names\r\nif not isinstance(column_names, list):\r\n column_names = [column_names]\r\n```\r\n\r\nI get an error when I pass in a tuple based on the column_names type annotation, it will raise an errror.As in the example below, while the type annotation implies that a tuple can be passed.\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset('glue', 'ax')['test']\r\ndataset.sort(column_names=('premise', 'hypothesis'))\r\n# Raise ValueError: Column '('premise', 'hypothesis')' not found in the dataset.\r\n```\r\n\r\nOf course, after I modified the tuple into a list, everything worked fine\r\n\r\nChange the code to the following so there will be no problem\r\n\r\n```python\r\n# Check proper format of and for duplicates in column_names\r\nif not isinstance(column_names, list):\r\n if isinstance(column_names, str):\r\n column_names = [column_names]\r\n else:\r\n column_names = list(column_names)\r\n```\r\n\r\n### Steps to reproduce the bug\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset('glue', 'ax')['test']\r\ndataset.sort(column_names=('premise', 'hypothesis'))\r\n# Raise ValueError: Column '('premise', 'hypothesis')' not found in the dataset.\r\n```\r\n\r\n### Expected behavior\r\n\r\nPassing tuple into column_names should be equivalent to passing list\r\n\r\n### Environment info\r\n\r\n- `datasets` version: 2.13.0\r\n- Platform: macOS-13.1-arm64-arm-64bit\r\n- Python version: 3.10.11\r\n- Huggingface_hub version: 0.15.1\r\n- PyArrow version: 12.0.1\r\n- Pandas version: 2.0.2","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5998\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5998\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5997","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5997\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5997\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5997\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5997","id":1781582818,"node_id":"I_kwDODunzps5qMMvi","number":5997,"title":"extend the map function so it can wrap around long text that does not fit in the context 
window","user":{"login":"siddhsql","id":127623723,"node_id":"U_kgDOB5tiKw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/127623723?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/siddhsql","html_url":"https:\/\/github.com\/siddhsql","followers_url":"https:\/\/api.github.com\/users\/siddhsql\/followers","following_url":"https:\/\/api.github.com\/users\/siddhsql\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/siddhsql\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/siddhsql\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/siddhsql\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/siddhsql\/orgs","repos_url":"https:\/\/api.github.com\/users\/siddhsql\/repos","events_url":"https:\/\/api.github.com\/users\/siddhsql\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/siddhsql\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I just noticed the [docs](https:\/\/github.com\/huggingface\/datasets\/blob\/main\/src\/datasets\/arrow_dataset.py#L2881C11-L2881C200) say:\r\n\r\n>If batched is `True` and `batch_size` is `n > 1`, then the function takes a batch of `n` examples as input and can return a batch with `n` examples, or with an arbitrary number of examples.\r\n\r\nso maybe this is a bug then.","All the values in a batch must be of the same length. So one solution is dropping all the input columns:\r\n```python\r\ndata = data.map(lambda samples: tokenizer(samples[\"text\"], max_length=tokenizer.model_max_length, truncation=True, stride=4, return_overflowing_tokens=True), batched=True, remove_columns=data.column_names)\r\n```\r\n\r\nAnother is padding\/transforming the input columns to the tokenizer output's length (447). "],"created_at":1688076921000,"updated_at":1688407132000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Feature request\n\nI understand `dataset` provides a [`map`](https:\/\/github.com\/huggingface\/datasets\/blob\/main\/src\/datasets\/arrow_dataset.py#L2849) function. This function in turn takes in a callable that is used to tokenize the text on which a model is trained. Frequently this text will not fit within a models's context window. In this case it would be useful to wrap around the text into multiple rows with each row fitting the model's context window. 
I tried to do it using this code as an example, which in turn I have borrowed from [here](https:\/\/stackoverflow.com\/a\/76343993\/147530):\r\n\r\n```\r\ndata = data.map(lambda samples: tokenizer(samples[\"text\"], max_length=tokenizer.model_max_length, truncation=True, stride=4, return_overflowing_tokens=True), batched=True)\r\n```\r\n\r\nbut running the code gives me this error:\r\n\r\n```\r\n  File \"\/llm\/fine-tune.py\", line 117, in <module>\r\n    data = data.map(lambda samples: tokenizer(samples[\"text\"], max_length=tokenizer.model_max_length, truncation=True, stride=4, return_overflowing_tokens=True), batched=True)\r\n  File \"\/llm\/.env\/lib\/python3.9\/site-packages\/datasets\/arrow_dataset.py\", line 580, in wrapper\r\n    out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n  File \"\/llm\/.env\/lib\/python3.9\/site-packages\/datasets\/arrow_dataset.py\", line 545, in wrapper\r\n    out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n  File \"\/llm\/.env\/lib\/python3.9\/site-packages\/datasets\/arrow_dataset.py\", line 3087, in map\r\n    for rank, done, content in Dataset._map_single(**dataset_kwargs):\r\n  File \"\/llm\/.env\/lib\/python3.9\/site-packages\/datasets\/arrow_dataset.py\", line 3480, in _map_single\r\n    writer.write_batch(batch)\r\n  File \"\/llm\/.env\/lib\/python3.9\/site-packages\/datasets\/arrow_writer.py\", line 556, in write_batch\r\n    pa_table = pa.Table.from_arrays(arrays, schema=schema)\r\n  File \"pyarrow\/table.pxi\", line 3798, in pyarrow.lib.Table.from_arrays\r\n  File \"pyarrow\/table.pxi\", line 2962, in pyarrow.lib.Table.validate\r\n  File \"pyarrow\/error.pxi\", line 100, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Column 1 named input_ids expected length 394 but got length 447\r\n```\r\n\r\nThe lambda function I have provided is correctly chopping up long text so it wraps around (and because of this, 394 samples become 447 after wrapping around), but the dataset `map` function does not like it.\n\n### Motivation\n\nPlease see above.\n\n### Your contribution\n\nI'm afraid I don't have much knowledge to help.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5997\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5997\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5996","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5996\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5996\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5996\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5996","id":1779294374,"node_id":"PR_kwDODunzps5UKP0i","number":5996,"title":"Deprecate `use_auth_token` in favor of 
`token`","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006134 \/ 0.011353 (-0.005219) | 0.003816 \/ 0.011008 (-0.007193) | 0.098226 \/ 0.038508 (0.059718) | 0.036830 \/ 0.023109 (0.013721) | 0.314551 \/ 0.275898 (0.038653) | 0.372251 \/ 0.323480 (0.048771) | 0.004762 \/ 0.007986 (-0.003224) | 0.003041 \/ 0.004328 (-0.001287) | 0.077651 \/ 0.004250 (0.073401) | 0.052445 \/ 0.037052 (0.015393) | 0.324632 \/ 0.258489 (0.066143) | 0.365724 \/ 0.293841 (0.071883) | 0.028069 \/ 0.128546 (-0.100477) | 0.008444 \/ 0.075646 (-0.067203) | 0.312767 \/ 0.419271 (-0.106505) | 0.047773 \/ 0.043533 (0.004240) | 0.305317 \/ 0.255139 (0.050178) | 0.332007 \/ 0.283200 (0.048807) | 0.018985 \/ 0.141683 (-0.122698) | 1.538022 \/ 1.452155 (0.085868) | 1.575898 \/ 1.492716 (0.083182) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.204780 \/ 0.018006 (0.186774) | 0.428125 \/ 0.000490 (0.427635) | 0.003454 \/ 0.000200 (0.003254) | 0.000078 \/ 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025064 \/ 0.037411 (-0.012348) | 0.099419 \/ 0.014526 (0.084893) | 0.111068 \/ 0.176557 (-0.065489) | 0.169775 \/ 0.737135 (-0.567361) | 0.112067 \/ 0.296338 (-0.184271) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.429642 \/ 0.215209 (0.214433) | 4.275556 \/ 2.077655 (2.197901) | 1.914658 \/ 1.504120 (0.410539) | 1.706556 \/ 1.541195 (0.165361) | 1.754228 \/ 1.468490 (0.285738) | 0.563669 \/ 4.584777 (-4.021108) | 
3.391501 \/ 3.745712 (-0.354211) | 1.791517 \/ 5.269862 (-3.478345) | 1.030704 \/ 4.565676 (-3.534973) | 0.070882 \/ 0.424275 (-0.353393) | 0.011351 \/ 0.007607 (0.003744) | 0.529438 \/ 0.226044 (0.303394) | 5.294316 \/ 2.268929 (3.025387) | 2.344653 \/ 55.444624 (-53.099972) | 1.997468 \/ 6.876477 (-4.879009) | 2.108932 \/ 2.142072 (-0.033140) | 0.676794 \/ 4.805227 (-4.128433) | 0.135058 \/ 6.500664 (-6.365607) | 0.065857 \/ 0.075469 (-0.009612) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.231864 \/ 1.841788 (-0.609924) | 13.986694 \/ 8.074308 (5.912386) | 13.306600 \/ 10.191392 (3.115208) | 0.145520 \/ 0.680424 (-0.534904) | 0.016717 \/ 0.534201 (-0.517484) | 0.366303 \/ 0.579283 (-0.212980) | 0.391637 \/ 0.434364 (-0.042727) | 0.425445 \/ 0.540337 (-0.114892) | 0.507719 \/ 1.386936 (-0.879217) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006236 \/ 0.011353 (-0.005116) | 0.003766 \/ 0.011008 (-0.007242) | 0.076794 \/ 0.038508 (0.038286) | 0.037210 \/ 0.023109 (0.014101) | 0.378387 \/ 0.275898 (0.102489) | 0.425456 \/ 0.323480 (0.101977) | 0.004694 \/ 0.007986 (-0.003291) | 0.002921 \/ 0.004328 (-0.001407) | 0.076985 \/ 0.004250 (0.072735) | 0.052188 \/ 0.037052 (0.015136) | 0.394385 \/ 0.258489 (0.135896) | 0.432527 \/ 0.293841 (0.138686) | 0.029091 \/ 0.128546 (-0.099455) | 0.008364 \/ 0.075646 (-0.067282) | 0.082583 \/ 0.419271 (-0.336689) | 0.042928 \/ 0.043533 (-0.000605) | 0.375321 \/ 0.255139 (0.120182) | 0.391719 \/ 0.283200 (0.108519) | 0.019388 \/ 0.141683 (-0.122295) | 1.550644 \/ 1.452155 (0.098489) | 1.604882 \/ 1.492716 (0.112166) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.236859 \/ 0.018006 (0.218853) | 0.418528 \/ 0.000490 (0.418039) | 0.000388 \/ 0.000200 (0.000188) | 0.000059 \/ 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025548 \/ 0.037411 (-0.011863) | 0.100644 \/ 0.014526 (0.086118) | 0.109102 \/ 0.176557 (-0.067455) | 0.161694 \/ 0.737135 (-0.575441) | 0.112088 \/ 0.296338 (-0.184250) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.484128 \/ 0.215209 (0.268919) | 4.849952 \/ 2.077655 (2.772297) | 2.512769 \/ 1.504120 (1.008649) | 2.303295 \/ 1.541195 (0.762100) | 2.356699 \/ 1.468490 (0.888209) | 0.564181 \/ 4.584777 (-4.020596) | 
3.421393 \/ 3.745712 (-0.324319) | 2.570875 \/ 5.269862 (-2.698987) | 1.474307 \/ 4.565676 (-3.091370) | 0.068035 \/ 0.424275 (-0.356240) | 0.011300 \/ 0.007607 (0.003693) | 0.587867 \/ 0.226044 (0.361823) | 5.862447 \/ 2.268929 (3.593519) | 3.004017 \/ 55.444624 (-52.440607) | 2.664989 \/ 6.876477 (-4.211488) | 2.740020 \/ 2.142072 (0.597948) | 0.680840 \/ 4.805227 (-4.124387) | 0.137001 \/ 6.500664 (-6.363663) | 0.068098 \/ 0.075469 (-0.007371) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.297362 \/ 1.841788 (-0.544426) | 14.207891 \/ 8.074308 (6.133583) | 14.087562 \/ 10.191392 (3.896170) | 0.149514 \/ 0.680424 (-0.530910) | 0.016566 \/ 0.534201 (-0.517635) | 0.367602 \/ 0.579283 (-0.211681) | 0.400692 \/ 0.434364 (-0.033671) | 0.432907 \/ 0.540337 (-0.107431) | 0.525924 \/ 1.386936 (-0.861012) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#1ec069feaaf6c28d4e4df76d344693b591a74c3f \"CML watermark\")\n","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006223 \/ 0.011353 (-0.005130) | 0.003672 \/ 0.011008 (-0.007336) | 0.097451 \/ 0.038508 (0.058943) | 0.036243 \/ 0.023109 (0.013133) | 0.375650 \/ 0.275898 (0.099752) | 0.431652 \/ 0.323480 (0.108172) | 0.004758 \/ 0.007986 (-0.003227) | 0.002941 \/ 0.004328 (-0.001387) | 0.077383 \/ 0.004250 (0.073132) | 0.055342 \/ 0.037052 (0.018289) | 0.390335 \/ 0.258489 (0.131846) | 0.427867 \/ 0.293841 (0.134026) | 0.027619 \/ 0.128546 (-0.100927) | 0.008244 \/ 0.075646 (-0.067402) | 0.313499 \/ 0.419271 (-0.105773) | 0.054987 \/ 0.043533 (0.011454) | 0.394044 \/ 0.255139 (0.138905) | 0.398784 \/ 0.283200 (0.115584) | 0.026499 \/ 0.141683 (-0.115184) | 1.496907 \/ 1.452155 (0.044753) | 1.554465 \/ 1.492716 (0.061749) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.241197 \/ 0.018006 (0.223190) | 0.427856 \/ 0.000490 (0.427366) | 0.006264 \/ 0.000200 (0.006065) | 0.000218 \/ 0.000054 (0.000164) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025550 \/ 0.037411 (-0.011862) | 0.104426 \/ 0.014526 (0.089901) | 0.110310 \/ 0.176557 (-0.066246) | 0.173813 \/ 0.737135 (-0.563322) | 0.112129 \/ 0.296338 (-0.184209) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.458806 \/ 0.215209 (0.243597) | 4.576351 \/ 2.077655 (2.498697) | 2.265670 \/ 1.504120 (0.761550) | 2.073230 \/ 1.541195 (0.532035) | 2.135283 \/ 1.468490 (0.666793) | 0.562506 \/ 4.584777 (-4.022271) | 
3.375101 \/ 3.745712 (-0.370611) | 1.734393 \/ 5.269862 (-3.535469) | 1.026622 \/ 4.565676 (-3.539054) | 0.068144 \/ 0.424275 (-0.356131) | 0.011092 \/ 0.007607 (0.003485) | 0.562779 \/ 0.226044 (0.336734) | 5.608256 \/ 2.268929 (3.339328) | 2.706468 \/ 55.444624 (-52.738157) | 2.381607 \/ 6.876477 (-4.494869) | 2.451027 \/ 2.142072 (0.308954) | 0.671590 \/ 4.805227 (-4.133637) | 0.135749 \/ 6.500664 (-6.364915) | 0.065389 \/ 0.075469 (-0.010080) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.244806 \/ 1.841788 (-0.596981) | 14.042150 \/ 8.074308 (5.967841) | 14.246612 \/ 10.191392 (4.055220) | 0.134309 \/ 0.680424 (-0.546114) | 0.017082 \/ 0.534201 (-0.517119) | 0.366043 \/ 0.579283 (-0.213240) | 0.400748 \/ 0.434364 (-0.033616) | 0.425695 \/ 0.540337 (-0.114643) | 0.509355 \/ 1.386936 (-0.877581) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006134 \/ 0.011353 (-0.005219) | 0.003980 \/ 0.011008 (-0.007028) | 0.078353 \/ 0.038508 (0.039845) | 0.038011 \/ 0.023109 (0.014902) | 0.375784 \/ 0.275898 (0.099886) | 0.433619 \/ 0.323480 (0.110139) | 0.004897 \/ 0.007986 (-0.003088) | 0.002981 \/ 0.004328 (-0.001347) | 0.077362 \/ 0.004250 (0.073112) | 0.056108 \/ 0.037052 (0.019056) | 0.395984 \/ 0.258489 (0.137495) | 0.427397 \/ 0.293841 (0.133556) | 0.029325 \/ 0.128546 (-0.099221) | 0.008498 \/ 0.075646 (-0.067148) | 0.082478 \/ 0.419271 (-0.336794) | 0.044085 \/ 0.043533 (0.000552) | 0.389923 \/ 0.255139 (0.134784) | 0.391180 \/ 0.283200 (0.107980) | 0.022452 \/ 0.141683 (-0.119231) | 1.507758 \/ 1.452155 (0.055603) | 1.530459 \/ 1.492716 (0.037743) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.230928 \/ 0.018006 (0.212922) | 0.408484 \/ 0.000490 (0.407995) | 0.000806 \/ 0.000200 (0.000606) | 0.000067 \/ 0.000054 (0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025183 \/ 0.037411 (-0.012228) | 0.102292 \/ 0.014526 (0.087766) | 0.108142 \/ 0.176557 (-0.068415) | 0.161172 \/ 0.737135 (-0.575963) | 0.114476 \/ 0.296338 (-0.181862) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.482978 \/ 0.215209 (0.267769) | 4.816103 \/ 2.077655 (2.738448) | 2.505567 \/ 1.504120 (1.001447) | 2.302598 \/ 1.541195 (0.761404) | 2.371238 \/ 1.468490 (0.902748) | 0.567467 \/ 4.584777 (-4.017310) | 
3.363407 \/ 3.745712 (-0.382306) | 1.746213 \/ 5.269862 (-3.523649) | 1.035468 \/ 4.565676 (-3.530208) | 0.068431 \/ 0.424275 (-0.355844) | 0.011069 \/ 0.007607 (0.003462) | 0.598241 \/ 0.226044 (0.372196) | 5.953927 \/ 2.268929 (3.684999) | 3.007493 \/ 55.444624 (-52.437132) | 2.629399 \/ 6.876477 (-4.247078) | 2.737201 \/ 2.142072 (0.595129) | 0.682456 \/ 4.805227 (-4.122771) | 0.137613 \/ 6.500664 (-6.363051) | 0.067941 \/ 0.075469 (-0.007528) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.306015 \/ 1.841788 (-0.535772) | 14.359240 \/ 8.074308 (6.284932) | 14.187601 \/ 10.191392 (3.996209) | 0.138612 \/ 0.680424 (-0.541812) | 0.016708 \/ 0.534201 (-0.517493) | 0.366365 \/ 0.579283 (-0.212918) | 0.396982 \/ 0.434364 (-0.037382) | 0.426939 \/ 0.540337 (-0.113398) | 0.520064 \/ 1.386936 (-0.866872) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#21d0fd041a5eca02d3ee787396216ac613c662ac \"CML watermark\")\n","They use `token` and emit a deprecation warning if `use_auth_token` is passed instead (see https:\/\/github.com\/huggingface\/transformers\/blob\/78a2b19fc84ed55c65f4bf20a901edb7ceb73c5f\/src\/transformers\/modeling_utils.py#L1933). \r\n\r\nI think we can update the `examples` scripts after merging this PR.","> I think we can update the examples scripts after merging this PR.\r\n\r\nWe should do a release before updated in the examples scripts no ? That's why it's an option to not have a deprecation warning until transformers and co are updated with the `token` arg","> We should do a release before updated in the examples scripts no ? That's why it's an option to not have a deprecation warning until transformers and co are updated with the token arg\r\n\r\nThis would avoid the warning only for the latest `datasets` release. TBH, I don't think this is worth the hassle, considering how simple it is to remove it.","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007644 \/ 0.011353 (-0.003709) | 0.004667 \/ 0.011008 (-0.006341) | 0.117347 \/ 0.038508 (0.078839) | 0.050620 \/ 0.023109 (0.027510) | 0.415402 \/ 0.275898 (0.139504) | 0.485898 \/ 0.323480 (0.162418) | 0.005848 \/ 0.007986 (-0.002138) | 0.003736 \/ 0.004328 (-0.000592) | 0.089798 \/ 0.004250 (0.085547) | 0.069344 \/ 0.037052 (0.032292) | 0.441684 \/ 0.258489 (0.183195) | 0.468972 \/ 0.293841 (0.175131) | 0.036637 \/ 0.128546 (-0.091909) | 0.010219 \/ 0.075646 (-0.065427) | 0.394293 \/ 0.419271 (-0.024978) | 0.061462 \/ 0.043533 (0.017929) | 0.409448 \/ 0.255139 (0.154309) | 0.431557 \/ 0.283200 (0.148358) | 0.027795 \/ 0.141683 (-0.113888) | 1.837844 \/ 1.452155 (0.385690) | 1.862683 \/ 1.492716 (0.369967) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.230500 \/ 0.018006 (0.212494) | 0.483139 \/ 0.000490 (0.482649) | 0.006517 \/ 0.000200 (0.006317) | 0.000143 \/ 0.000054 (0.000088) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.033152 \/ 0.037411 (-0.004259) | 0.133673 \/ 0.014526 (0.119147) | 0.143853 \/ 0.176557 (-0.032704) | 0.215254 \/ 0.737135 (-0.521882) | 0.150676 \/ 0.296338 (-0.145662) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.503796 \/ 0.215209 (0.288587) | 5.049981 \/ 2.077655 (2.972326) | 2.399427 \/ 1.504120 (0.895307) | 2.167635 \/ 1.541195 (0.626441) | 2.257448 \/ 1.468490 (0.788958) | 0.641298 \/ 4.584777 (-3.943479) | 
4.828676 \/ 3.745712 (1.082964) | 4.346069 \/ 5.269862 (-0.923793) | 2.103890 \/ 4.565676 (-2.461786) | 0.079115 \/ 0.424275 (-0.345160) | 0.013377 \/ 0.007607 (0.005770) | 0.621207 \/ 0.226044 (0.395162) | 6.190939 \/ 2.268929 (3.922011) | 2.920129 \/ 55.444624 (-52.524495) | 2.549225 \/ 6.876477 (-4.327252) | 2.719221 \/ 2.142072 (0.577149) | 0.790949 \/ 4.805227 (-4.014278) | 0.172032 \/ 6.500664 (-6.328632) | 0.077779 \/ 0.075469 (0.002310) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.432572 \/ 1.841788 (-0.409216) | 21.000031 \/ 8.074308 (12.925723) | 17.555093 \/ 10.191392 (7.363701) | 0.166646 \/ 0.680424 (-0.513778) | 0.020451 \/ 0.534201 (-0.513750) | 0.488767 \/ 0.579283 (-0.090516) | 0.737036 \/ 0.434364 (0.302672) | 0.621694 \/ 0.540337 (0.081356) | 0.732074 \/ 1.386936 (-0.654862) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008198 \/ 0.011353 (-0.003155) | 0.004987 \/ 0.011008 (-0.006021) | 0.090714 \/ 0.038508 (0.052206) | 0.053379 \/ 0.023109 (0.030270) | 0.425199 \/ 0.275898 (0.149301) | 0.514036 \/ 0.323480 (0.190556) | 0.006043 \/ 0.007986 (-0.001943) | 0.003888 \/ 0.004328 (-0.000441) | 0.088294 \/ 0.004250 (0.084043) | 0.073024 \/ 0.037052 (0.035971) | 0.435983 \/ 0.258489 (0.177494) | 0.514293 \/ 0.293841 (0.220452) | 0.039451 \/ 0.128546 (-0.089095) | 0.010439 \/ 0.075646 (-0.065207) | 0.096885 \/ 0.419271 (-0.322387) | 0.060165 \/ 0.043533 (0.016632) | 0.421053 \/ 0.255139 (0.165914) | 0.455545 \/ 0.283200 (0.172345) | 0.027234 \/ 0.141683 (-0.114449) | 1.768975 \/ 1.452155 (0.316820) | 1.842853 \/ 1.492716 (0.350137) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.278940 \/ 0.018006 (0.260933) | 0.480709 \/ 0.000490 (0.480219) | 0.000436 \/ 0.000200 (0.000236) | 0.000070 \/ 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.034900 \/ 0.037411 (-0.002511) | 0.144893 \/ 0.014526 (0.130368) | 0.149567 \/ 0.176557 (-0.026989) | 0.213200 \/ 0.737135 (-0.523935) | 0.156735 \/ 0.296338 (-0.139604) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.535897 \/ 0.215209 (0.320687) | 5.336998 \/ 2.077655 (3.259343) | 2.685854 \/ 1.504120 (1.181734) | 2.470177 \/ 1.541195 (0.928983) | 2.547495 \/ 1.468490 (1.079004) | 0.642830 \/ 4.584777 (-3.941947) | 
4.595866 \/ 3.745712 (0.850154) | 2.186696 \/ 5.269862 (-3.083165) | 1.317969 \/ 4.565676 (-3.247708) | 0.079268 \/ 0.424275 (-0.345007) | 0.013792 \/ 0.007607 (0.006185) | 0.662236 \/ 0.226044 (0.436192) | 6.604775 \/ 2.268929 (4.335847) | 3.355888 \/ 55.444624 (-52.088736) | 2.968911 \/ 6.876477 (-3.907565) | 3.121862 \/ 2.142072 (0.979790) | 0.794752 \/ 4.805227 (-4.010475) | 0.170800 \/ 6.500664 (-6.329864) | 0.078393 \/ 0.075469 (0.002924) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.601605 \/ 1.841788 (-0.240183) | 20.743553 \/ 8.074308 (12.669245) | 17.543968 \/ 10.191392 (7.352576) | 0.221884 \/ 0.680424 (-0.458540) | 0.020779 \/ 0.534201 (-0.513422) | 0.479677 \/ 0.579283 (-0.099606) | 0.516207 \/ 0.434364 (0.081843) | 0.564046 \/ 0.540337 (0.023709) | 0.711336 \/ 1.386936 (-0.675600) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#819bb4346434912eb405ce3f3e9f21dc25a2fe85 \"CML watermark\")\n","Yes, sounds great! Thanks","yup"],"created_at":1687969598000,"updated_at":1688570540000,"closed_at":1688400213000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5996","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5996","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5996.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5996.patch","merged_at":1688400213000},"body":"... to be consistent with `transformers` and `huggingface_hub`.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5996\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5996\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5995","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5995\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5995\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5995\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5995","id":1777088925,"node_id":"PR_kwDODunzps5UCvYJ","number":5995,"title":"Support returning dataframe in map 
transform","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009725 \/ 0.011353 (-0.001628) | 0.006014 \/ 0.011008 (-0.004994) | 0.136039 \/ 0.038508 (0.097531) | 0.049685 \/ 0.023109 (0.026576) | 0.492967 \/ 0.275898 (0.217068) | 0.553775 \/ 0.323480 (0.230295) | 0.007421 \/ 0.007986 (-0.000564) | 0.004686 \/ 0.004328 (0.000357) | 0.106639 \/ 0.004250 (0.102389) | 0.073483 \/ 0.037052 (0.036431) | 0.507194 \/ 0.258489 (0.248705) | 0.535760 \/ 0.293841 (0.241919) | 0.049666 \/ 0.128546 (-0.078880) | 0.014139 \/ 0.075646 (-0.061507) | 0.435459 \/ 0.419271 (0.016188) | 0.076026 \/ 0.043533 (0.032493) | 0.454542 \/ 0.255139 (0.199403) | 0.512724 \/ 0.283200 (0.229524) | 0.034969 \/ 0.141683 (-0.106713) | 1.881048 \/ 1.452155 (0.428893) | 1.959915 \/ 1.492716 (0.467199) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.265322 \/ 0.018006 (0.247316) | 0.573963 \/ 0.000490 (0.573474) | 0.017493 \/ 0.000200 (0.017293) | 0.000637 \/ 0.000054 (0.000582) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.028712 \/ 0.037411 (-0.008699) | 0.149554 \/ 0.014526 (0.135029) | 0.130013 \/ 0.176557 (-0.046544) | 0.203408 \/ 0.737135 (-0.533727) | 0.144778 \/ 0.296338 (-0.151561) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.664198 \/ 0.215209 (0.448989) | 6.418054 \/ 2.077655 (4.340399) | 2.602338 \/ 1.504120 (1.098219) | 2.212992 \/ 1.541195 (0.671797) | 2.214309 \/ 1.468490 (0.745819) | 0.914772 \/ 4.584777 (-3.670005) | 
5.824831 \/ 3.745712 (2.079119) | 2.865381 \/ 5.269862 (-2.404481) | 1.906020 \/ 4.565676 (-2.659657) | 0.106947 \/ 0.424275 (-0.317328) | 0.013467 \/ 0.007607 (0.005860) | 0.834556 \/ 0.226044 (0.608512) | 8.237078 \/ 2.268929 (5.968150) | 3.380919 \/ 55.444624 (-52.063705) | 2.656713 \/ 6.876477 (-4.219764) | 2.834941 \/ 2.142072 (0.692869) | 1.151241 \/ 4.805227 (-3.653986) | 0.220860 \/ 6.500664 (-6.279804) | 0.080781 \/ 0.075469 (0.005312) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.655128 \/ 1.841788 (-0.186660) | 18.696108 \/ 8.074308 (10.621800) | 22.882108 \/ 10.191392 (12.690716) | 0.236041 \/ 0.680424 (-0.444383) | 0.031073 \/ 0.534201 (-0.503128) | 0.525263 \/ 0.579283 (-0.054021) | 0.632933 \/ 0.434364 (0.198569) | 0.707228 \/ 0.540337 (0.166890) | 0.753508 \/ 1.386936 (-0.633428) |\n\n<\/details>\nPyArrow==latest\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009875 \/ 0.011353 (-0.001478) | 0.005135 \/ 0.011008 (-0.005873) | 0.101307 \/ 0.038508 (0.062799) | 0.044895 \/ 0.023109 (0.021786) | 0.497824 \/ 0.275898 (0.221926) | 0.573098 \/ 0.323480 (0.249618) | 0.006669 \/ 0.007986 (-0.001317) | 0.004289 \/ 0.004328 (-0.000039) | 0.105824 \/ 0.004250 (0.101573) | 0.061002 \/ 0.037052 (0.023950) | 0.510127 \/ 0.258489 (0.251638) | 0.581387 \/ 0.293841 (0.287546) | 0.052843 \/ 0.128546 (-0.075703) | 0.015506 \/ 0.075646 (-0.060140) | 0.116057 \/ 0.419271 (-0.303215) | 0.063444 \/ 0.043533 (0.019912) | 0.479366 \/ 0.255139 (0.224227) | 0.518419 \/ 0.283200 (0.235220) | 0.034876 \/ 0.141683 (-0.106806) | 2.018446 \/ 1.452155 (0.566292) | 1.960755 \/ 1.492716 (0.468039) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.269077 \/ 0.018006 (0.251070) | 0.606059 \/ 0.000490 (0.605569) | 0.000488 \/ 0.000200 (0.000288) | 0.000093 \/ 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.032465 \/ 0.037411 (-0.004946) | 0.136517 \/ 0.014526 (0.121991) | 0.147740 \/ 0.176557 (-0.028816) | 0.193802 \/ 0.737135 (-0.543334) | 0.151876 \/ 0.296338 (-0.144462) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.709866 \/ 0.215209 (0.494657) | 6.848193 \/ 2.077655 (4.770538) | 3.310853 \/ 1.504120 (1.806733) | 2.940813 \/ 1.541195 (1.399619) | 2.934934 \/ 1.468490 (1.466444) | 0.927104 \/ 4.584777 (-3.657673) | 
5.921607 \/ 3.745712 (2.175895) | 4.926558 \/ 5.269862 (-0.343303) | 2.853269 \/ 4.565676 (-1.712407) | 0.120278 \/ 0.424275 (-0.303998) | 0.015468 \/ 0.007607 (0.007861) | 0.820509 \/ 0.226044 (0.594464) | 8.263136 \/ 2.268929 (5.994208) | 3.780214 \/ 55.444624 (-51.664410) | 3.108482 \/ 6.876477 (-3.767995) | 3.101544 \/ 2.142072 (0.959471) | 1.165539 \/ 4.805227 (-3.639688) | 0.229215 \/ 6.500664 (-6.271449) | 0.079862 \/ 0.075469 (0.004393) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.775071 \/ 1.841788 (-0.066717) | 19.327621 \/ 8.074308 (11.253313) | 23.057537 \/ 10.191392 (12.866145) | 0.250649 \/ 0.680424 (-0.429775) | 0.029767 \/ 0.534201 (-0.504434) | 0.554774 \/ 0.579283 (-0.024509) | 0.651919 \/ 0.434364 (0.217555) | 0.651641 \/ 0.540337 (0.111304) | 0.762386 \/ 1.386936 (-0.624550) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#fdc3ce7060366f480621e8640903c9ab476164e7 \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.005997 \/ 0.011353 (-0.005356) | 0.003892 \/ 0.011008 (-0.007116) | 0.098020 \/ 0.038508 (0.059512) | 0.042584 \/ 0.023109 (0.019475) | 0.317909 \/ 0.275898 (0.042011) | 0.395042 \/ 0.323480 (0.071563) | 0.005358 \/ 0.007986 (-0.002628) | 0.003266 \/ 0.004328 (-0.001062) | 0.076698 \/ 0.004250 (0.072447) | 0.062331 \/ 0.037052 (0.025279) | 0.334900 \/ 0.258489 (0.076411) | 0.379355 \/ 0.293841 (0.085514) | 0.030815 \/ 0.128546 (-0.097731) | 0.008596 \/ 0.075646 (-0.067050) | 0.327739 \/ 0.419271 (-0.091533) | 0.054061 \/ 0.043533 (0.010528) | 0.311044 \/ 0.255139 (0.055905) | 0.336705 \/ 0.283200 (0.053506) | 0.022785 \/ 0.141683 (-0.118898) | 1.516793 \/ 1.452155 (0.064639) | 1.590435 \/ 1.492716 (0.097719) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.289157 \/ 0.018006 (0.271151) | 0.531074 \/ 0.000490 (0.530585) | 0.004672 \/ 0.000200 (0.004472) | 0.000095 \/ 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.026173 \/ 0.037411 (-0.011238) | 0.105723 \/ 0.014526 (0.091197) | 0.118010 \/ 0.176557 (-0.058547) | 0.178062 \/ 0.737135 (-0.559073) | 0.120059 \/ 0.296338 (-0.176279) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.410870 \/ 0.215209 (0.195661) | 4.042183 \/ 2.077655 (1.964528) | 1.830059 \/ 1.504120 (0.325939) | 1.638996 \/ 1.541195 (0.097802) | 1.701368 \/ 1.468490 (0.232878) | 0.529915 \/ 4.584777 (-4.054861) | 
3.693308 \/ 3.745712 (-0.052404) | 1.827875 \/ 5.269862 (-3.441986) | 1.063237 \/ 4.565676 (-3.502440) | 0.065368 \/ 0.424275 (-0.358907) | 0.010986 \/ 0.007607 (0.003379) | 0.509399 \/ 0.226044 (0.283354) | 5.092739 \/ 2.268929 (2.823810) | 2.293490 \/ 55.444624 (-53.151135) | 1.958742 \/ 6.876477 (-4.917735) | 2.024985 \/ 2.142072 (-0.117088) | 0.646978 \/ 4.805227 (-4.158249) | 0.138616 \/ 6.500664 (-6.362048) | 0.062101 \/ 0.075469 (-0.013368) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.202016 \/ 1.841788 (-0.639772) | 14.493204 \/ 8.074308 (6.418896) | 12.992160 \/ 10.191392 (2.800768) | 0.188922 \/ 0.680424 (-0.491502) | 0.017594 \/ 0.534201 (-0.516606) | 0.399917 \/ 0.579283 (-0.179367) | 0.429760 \/ 0.434364 (-0.004604) | 0.497906 \/ 0.540337 (-0.042431) | 0.608745 \/ 1.386936 (-0.778191) |\n\n<\/details>\nPyArrow==latest\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006164 \/ 0.011353 (-0.005189) | 0.003980 \/ 0.011008 (-0.007028) | 0.074676 \/ 0.038508 (0.036168) | 0.041337 \/ 0.023109 (0.018228) | 0.400981 \/ 0.275898 (0.125083) | 0.448791 \/ 0.323480 (0.125312) | 0.004063 \/ 0.007986 (-0.003923) | 0.004443 \/ 0.004328 (0.000114) | 0.075011 \/ 0.004250 (0.070760) | 0.056494 \/ 0.037052 (0.019441) | 0.402054 \/ 0.258489 (0.143565) | 0.446122 \/ 0.293841 (0.152281) | 0.031752 \/ 0.128546 (-0.096794) | 0.008835 \/ 0.075646 (-0.066811) | 0.081226 \/ 0.419271 (-0.338046) | 0.051501 \/ 0.043533 (0.007969) | 0.383674 \/ 0.255139 (0.128535) | 0.405524 \/ 0.283200 (0.122325) | 0.025929 \/ 0.141683 (-0.115754) | 1.492985 \/ 1.452155 (0.040830) | 1.541601 \/ 1.492716 (0.048885) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.305149 \/ 0.018006 (0.287142) | 0.497259 \/ 0.000490 (0.496770) | 0.000420 \/ 0.000200 (0.000220) | 0.000056 \/ 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.027933 \/ 0.037411 (-0.009479) | 0.111900 \/ 0.014526 (0.097374) | 0.124879 \/ 0.176557 (-0.051678) | 0.178952 \/ 0.737135 (-0.558184) | 0.127698 \/ 0.296338 (-0.168640) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.448525 \/ 0.215209 (0.233316) | 4.486791 \/ 2.077655 (2.409137) | 2.256687 \/ 1.504120 (0.752567) | 2.061078 \/ 1.541195 (0.519884) | 2.078924 \/ 1.468490 (0.610434) | 0.534412 \/ 4.584777 (-4.050365) | 
3.721098 \/ 3.745712 (-0.024614) | 1.818735 \/ 5.269862 (-3.451127) | 1.104198 \/ 4.565676 (-3.461479) | 0.066277 \/ 0.424275 (-0.357998) | 0.011441 \/ 0.007607 (0.003834) | 0.550140 \/ 0.226044 (0.324095) | 5.498079 \/ 2.268929 (3.229150) | 2.717398 \/ 55.444624 (-52.727227) | 2.410194 \/ 6.876477 (-4.466283) | 2.405304 \/ 2.142072 (0.263231) | 0.665432 \/ 4.805227 (-4.139796) | 0.141488 \/ 6.500664 (-6.359177) | 0.064051 \/ 0.075469 (-0.011419) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.272334 \/ 1.841788 (-0.569454) | 14.901608 \/ 8.074308 (6.827300) | 14.287857 \/ 10.191392 (4.096465) | 0.165337 \/ 0.680424 (-0.515086) | 0.017402 \/ 0.534201 (-0.516799) | 0.398120 \/ 0.579283 (-0.181163) | 0.416539 \/ 0.434364 (-0.017825) | 0.463890 \/ 0.540337 (-0.076447) | 0.567909 \/ 1.386936 (-0.819027) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#504ec0f2e00ee38e0993ed1e4f1e10f1eefaea0d \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009434 \/ 0.011353 (-0.001919) | 0.005567 \/ 0.011008 (-0.005441) | 0.122652 \/ 0.038508 (0.084144) | 0.050177 \/ 0.023109 (0.027067) | 0.384292 \/ 0.275898 (0.108394) | 0.446608 \/ 0.323480 (0.123128) | 0.006502 \/ 0.007986 (-0.001484) | 0.004523 \/ 0.004328 (0.000194) | 0.100581 \/ 0.004250 (0.096331) | 0.073615 \/ 0.037052 (0.036563) | 0.420179 \/ 0.258489 (0.161690) | 0.474631 \/ 0.293841 (0.180790) | 0.047942 \/ 0.128546 (-0.080604) | 0.013864 \/ 0.075646 (-0.061783) | 0.419384 \/ 0.419271 (0.000112) | 0.088317 \/ 0.043533 (0.044784) | 0.379620 \/ 0.255139 (0.124481) | 0.412639 \/ 0.283200 (0.129440) | 0.048947 \/ 0.141683 (-0.092736) | 1.823498 \/ 1.452155 (0.371343) | 1.966629 \/ 1.492716 (0.473913) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.300669 \/ 0.018006 (0.282663) | 0.593499 \/ 0.000490 (0.593009) | 0.007247 \/ 0.000200 (0.007047) | 0.000114 \/ 0.000054 (0.000059) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030556 \/ 0.037411 (-0.006856) | 0.119252 \/ 0.014526 (0.104726) | 0.131403 \/ 0.176557 (-0.045153) | 0.201845 \/ 0.737135 (-0.535291) | 0.139350 \/ 0.296338 (-0.156989) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.652400 \/ 0.215209 (0.437191) | 6.536540 \/ 2.077655 (4.458886) | 2.644565 \/ 1.504120 (1.140445) | 2.245181 \/ 1.541195 (0.703986) | 2.316030 \/ 1.468490 (0.847540) | 0.922535 \/ 4.584777 (-3.662242) | 
5.469065 \/ 3.745712 (1.723353) | 2.800489 \/ 5.269862 (-2.469373) | 1.749042 \/ 4.565676 (-2.816635) | 0.108444 \/ 0.424275 (-0.315831) | 0.015651 \/ 0.007607 (0.008044) | 0.846085 \/ 0.226044 (0.620041) | 8.018460 \/ 2.268929 (5.749531) | 3.338710 \/ 55.444624 (-52.105914) | 2.675998 \/ 6.876477 (-4.200479) | 2.918550 \/ 2.142072 (0.776478) | 1.135145 \/ 4.805227 (-3.670082) | 0.215165 \/ 6.500664 (-6.285499) | 0.082066 \/ 0.075469 (0.006597) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.561661 \/ 1.841788 (-0.280127) | 18.519035 \/ 8.074308 (10.444727) | 19.046300 \/ 10.191392 (8.854908) | 0.236890 \/ 0.680424 (-0.443534) | 0.027681 \/ 0.534201 (-0.506520) | 0.511998 \/ 0.579283 (-0.067285) | 0.591627 \/ 0.434364 (0.157264) | 0.562021 \/ 0.540337 (0.021683) | 0.679354 \/ 1.386936 (-0.707582) |\n\n<\/details>\nPyArrow==latest\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009643 \/ 0.011353 (-0.001710) | 0.005768 \/ 0.011008 (-0.005241) | 0.104430 \/ 0.038508 (0.065922) | 0.050044 \/ 0.023109 (0.026935) | 0.464117 \/ 0.275898 (0.188219) | 0.518439 \/ 0.323480 (0.194959) | 0.006935 \/ 0.007986 (-0.001051) | 0.004316 \/ 0.004328 (-0.000013) | 0.094330 \/ 0.004250 (0.090080) | 0.071451 \/ 0.037052 (0.034399) | 0.492248 \/ 0.258489 (0.233759) | 0.555740 \/ 0.293841 (0.261899) | 0.047836 \/ 0.128546 (-0.080711) | 0.014788 \/ 0.075646 (-0.060859) | 0.107590 \/ 0.419271 (-0.311682) | 0.064396 \/ 0.043533 (0.020863) | 0.451529 \/ 0.255139 (0.196390) | 0.475025 \/ 0.283200 (0.191826) | 0.040006 \/ 0.141683 (-0.101677) | 1.797107 \/ 1.452155 (0.344953) | 1.879261 \/ 1.492716 (0.386545) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.298458 \/ 0.018006 (0.280451) | 0.613022 \/ 0.000490 (0.612532) | 0.003582 \/ 0.000200 (0.003382) | 0.000106 \/ 0.000054 (0.000052) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030179 \/ 0.037411 (-0.007232) | 0.123286 \/ 0.014526 (0.108760) | 0.132070 \/ 0.176557 (-0.044486) | 0.190883 \/ 0.737135 (-0.546252) | 0.138526 \/ 0.296338 (-0.157812) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.666908 \/ 0.215209 (0.451699) | 6.489035 \/ 2.077655 (4.411381) | 2.897027 \/ 1.504120 (1.392907) | 2.565150 \/ 1.541195 (1.023956) | 2.504827 \/ 1.468490 (1.036336) | 0.916112 \/ 4.584777 (-3.668665) | 
5.651751 \/ 3.745712 (1.906039) | 2.743382 \/ 5.269862 (-2.526479) | 1.773338 \/ 4.565676 (-2.792338) | 0.128764 \/ 0.424275 (-0.295511) | 0.013140 \/ 0.007607 (0.005533) | 0.803281 \/ 0.226044 (0.577236) | 8.258874 \/ 2.268929 (5.989945) | 3.633260 \/ 55.444624 (-51.811364) | 2.878827 \/ 6.876477 (-3.997649) | 2.977178 \/ 2.142072 (0.835106) | 1.130467 \/ 4.805227 (-3.674760) | 0.226381 \/ 6.500664 (-6.274283) | 0.081550 \/ 0.075469 (0.006081) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.842927 \/ 1.841788 (0.001139) | 18.411520 \/ 8.074308 (10.337212) | 21.118228 \/ 10.191392 (10.926836) | 0.231526 \/ 0.680424 (-0.448898) | 0.029300 \/ 0.534201 (-0.504901) | 0.527450 \/ 0.579283 (-0.051834) | 0.618873 \/ 0.434364 (0.184509) | 0.593314 \/ 0.540337 (0.052976) | 0.734430 \/ 1.386936 (-0.652506) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#0d2b8854c265b4dc202e480427890f472b34ea15 \"CML watermark\")\n"],"created_at":1687875308000,"updated_at":1687960562000,"closed_at":1687959993000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5995","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5995","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5995.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5995.patch","merged_at":1687959993000},"body":"Allow returning Pandas DataFrames in `map` transforms.\r\n\r\n(Plus, raise an error in the non-batched mode if a returned PyArrow table\/Pandas DataFrame has more than one row)\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5995\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5995\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5994","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5994\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5994\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5994\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5994","id":1776829004,"node_id":"PR_kwDODunzps5UB1cA","number":5994,"title":"Fix select_columns columns 
order","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.005969 \/ 0.011353 (-0.005384) | 0.003687 \/ 0.011008 (-0.007321) | 0.100843 \/ 0.038508 (0.062335) | 0.036912 \/ 0.023109 (0.013803) | 0.312389 \/ 0.275898 (0.036491) | 0.370335 \/ 0.323480 (0.046855) | 0.003434 \/ 0.007986 (-0.004552) | 0.003710 \/ 0.004328 (-0.000619) | 0.076899 \/ 0.004250 (0.072648) | 0.053647 \/ 0.037052 (0.016594) | 0.324825 \/ 0.258489 (0.066336) | 0.367711 \/ 0.293841 (0.073870) | 0.028079 \/ 0.128546 (-0.100467) | 0.008326 \/ 0.075646 (-0.067320) | 0.312342 \/ 0.419271 (-0.106930) | 0.047423 \/ 0.043533 (0.003890) | 0.321063 \/ 0.255139 (0.065924) | 0.336508 \/ 0.283200 (0.053308) | 0.019973 \/ 0.141683 (-0.121710) | 1.529334 \/ 1.452155 (0.077179) | 1.573746 \/ 1.492716 (0.081030) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.210849 \/ 0.018006 (0.192843) | 0.418798 \/ 0.000490 (0.418309) | 0.007347 \/ 0.000200 (0.007147) | 0.000070 \/ 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.022718 \/ 0.037411 (-0.014694) | 0.098400 \/ 0.014526 (0.083874) | 0.106590 \/ 0.176557 (-0.069967) | 0.168460 \/ 0.737135 (-0.568675) | 0.108401 \/ 0.296338 (-0.187938) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.443066 \/ 0.215209 (0.227857) | 4.416658 \/ 2.077655 (2.339003) | 2.088844 \/ 1.504120 (0.584724) | 1.879564 \/ 1.541195 (0.338369) | 1.933815 \/ 1.468490 (0.465325) | 0.565085 \/ 4.584777 (-4.019692) | 
3.412440 \/ 3.745712 (-0.333273) | 1.754686 \/ 5.269862 (-3.515175) | 1.024576 \/ 4.565676 (-3.541100) | 0.067909 \/ 0.424275 (-0.356366) | 0.011054 \/ 0.007607 (0.003447) | 0.534748 \/ 0.226044 (0.308703) | 5.351457 \/ 2.268929 (3.082529) | 2.517368 \/ 55.444624 (-52.927256) | 2.182762 \/ 6.876477 (-4.693715) | 2.238205 \/ 2.142072 (0.096133) | 0.672962 \/ 4.805227 (-4.132265) | 0.136098 \/ 6.500664 (-6.364566) | 0.066534 \/ 0.075469 (-0.008935) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.281241 \/ 1.841788 (-0.560547) | 13.872881 \/ 8.074308 (5.798573) | 13.161023 \/ 10.191392 (2.969631) | 0.130011 \/ 0.680424 (-0.550412) | 0.016759 \/ 0.534201 (-0.517442) | 0.359802 \/ 0.579283 (-0.219481) | 0.392577 \/ 0.434364 (-0.041787) | 0.427742 \/ 0.540337 (-0.112595) | 0.522241 \/ 1.386936 (-0.864695) |\n\n<\/details>\nPyArrow==latest\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.005985 \/ 0.011353 (-0.005368) | 0.003705 \/ 0.011008 (-0.007304) | 0.077699 \/ 0.038508 (0.039191) | 0.035686 \/ 0.023109 (0.012577) | 0.420356 \/ 0.275898 (0.144458) | 0.476753 \/ 0.323480 (0.153273) | 0.003510 \/ 0.007986 (-0.004475) | 0.002807 \/ 0.004328 (-0.001521) | 0.077151 \/ 0.004250 (0.072901) | 0.046420 \/ 0.037052 (0.009368) | 0.391781 \/ 0.258489 (0.133292) | 0.461128 \/ 0.293841 (0.167287) | 0.027847 \/ 0.128546 (-0.100699) | 0.008322 \/ 0.075646 (-0.067324) | 0.082768 \/ 0.419271 (-0.336503) | 0.042629 \/ 0.043533 (-0.000904) | 0.405745 \/ 0.255139 (0.150606) | 0.430797 \/ 0.283200 (0.147598) | 0.019832 \/ 0.141683 (-0.121851) | 1.556208 \/ 1.452155 (0.104054) | 1.612166 \/ 1.492716 (0.119450) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.230633 \/ 0.018006 (0.212626) | 0.401667 \/ 0.000490 (0.401178) | 0.000776 \/ 0.000200 (0.000576) | 0.000069 \/ 0.000054 (0.000014) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.024959 \/ 0.037411 (-0.012452) | 0.100560 \/ 0.014526 (0.086034) | 0.109175 \/ 0.176557 (-0.067382) | 0.159919 \/ 0.737135 (-0.577217) | 0.112810 \/ 0.296338 (-0.183528) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.460601 \/ 0.215209 (0.245392) | 4.620039 \/ 2.077655 (2.542385) | 2.257900 \/ 1.504120 (0.753780) | 2.039192 \/ 1.541195 (0.497997) | 2.064451 \/ 1.468490 (0.595961) | 0.557887 \/ 4.584777 (-4.026890) | 
3.356100 \/ 3.745712 (-0.389612) | 1.703578 \/ 5.269862 (-3.566284) | 1.024984 \/ 4.565676 (-3.540693) | 0.067602 \/ 0.424275 (-0.356673) | 0.011450 \/ 0.007607 (0.003842) | 0.563230 \/ 0.226044 (0.337186) | 5.632150 \/ 2.268929 (3.363221) | 2.698701 \/ 55.444624 (-52.745924) | 2.363218 \/ 6.876477 (-4.513259) | 2.363997 \/ 2.142072 (0.221925) | 0.671260 \/ 4.805227 (-4.133967) | 0.136166 \/ 6.500664 (-6.364499) | 0.067094 \/ 0.075469 (-0.008375) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.303030 \/ 1.841788 (-0.538757) | 14.137277 \/ 8.074308 (6.062969) | 13.937631 \/ 10.191392 (3.746239) | 0.162626 \/ 0.680424 (-0.517798) | 0.016687 \/ 0.534201 (-0.517514) | 0.363657 \/ 0.579283 (-0.215626) | 0.392021 \/ 0.434364 (-0.042343) | 0.427275 \/ 0.540337 (-0.113062) | 0.512192 \/ 1.386936 (-0.874744) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#42603528d9bd8c3ab287ed0eadc7fa3d1ef4cfd8 \"CML watermark\")\n","_The documentation is not available anymore as the PR was closed or merged._","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.005974 \/ 0.011353 (-0.005378) | 0.003947 \/ 0.011008 (-0.007061) | 0.098604 \/ 0.038508 (0.060096) | 0.036947 \/ 0.023109 (0.013838) | 0.311844 \/ 0.275898 (0.035946) | 0.375243 \/ 0.323480 (0.051763) | 0.003453 \/ 0.007986 (-0.004533) | 0.003834 \/ 0.004328 (-0.000495) | 0.077943 \/ 0.004250 (0.073692) | 0.052956 \/ 0.037052 (0.015904) | 0.320812 \/ 0.258489 (0.062323) | 0.373963 \/ 0.293841 (0.080122) | 0.028382 \/ 0.128546 (-0.100164) | 0.008525 \/ 0.075646 (-0.067121) | 0.311306 \/ 0.419271 (-0.107965) | 0.047029 \/ 0.043533 (0.003496) | 0.309933 \/ 0.255139 (0.054794) | 0.335114 \/ 0.283200 (0.051915) | 0.019629 \/ 0.141683 (-0.122054) | 1.569771 \/ 1.452155 (0.117617) | 1.585899 \/ 1.492716 (0.093182) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.216565 \/ 0.018006 (0.198559) | 0.426717 \/ 0.000490 (0.426228) | 0.003609 \/ 0.000200 (0.003409) | 0.000077 \/ 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.023079 \/ 0.037411 (-0.014332) | 0.096954 \/ 0.014526 (0.082428) | 0.105398 \/ 0.176557 (-0.071158) | 0.165433 \/ 0.737135 (-0.571703) | 0.109703 \/ 0.296338 (-0.186636) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.456227 \/ 0.215209 (0.241018) | 4.529857 \/ 2.077655 (2.452202) | 2.214054 \/ 1.504120 (0.709934) | 2.029716 \/ 1.541195 (0.488521) | 2.081175 \/ 1.468490 (0.612685) | 0.563642 \/ 4.584777 (-4.021135) | 
3.355393 \/ 3.745712 (-0.390320) | 1.765938 \/ 5.269862 (-3.503924) | 1.039062 \/ 4.565676 (-3.526615) | 0.067952 \/ 0.424275 (-0.356323) | 0.011044 \/ 0.007607 (0.003437) | 0.556935 \/ 0.226044 (0.330890) | 5.588167 \/ 2.268929 (3.319239) | 2.667217 \/ 55.444624 (-52.777407) | 2.337383 \/ 6.876477 (-4.539094) | 2.429590 \/ 2.142072 (0.287517) | 0.676972 \/ 4.805227 (-4.128256) | 0.135782 \/ 6.500664 (-6.364882) | 0.066323 \/ 0.075469 (-0.009146) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.237358 \/ 1.841788 (-0.604429) | 13.910492 \/ 8.074308 (5.836184) | 13.227275 \/ 10.191392 (3.035883) | 0.146857 \/ 0.680424 (-0.533567) | 0.016991 \/ 0.534201 (-0.517210) | 0.363637 \/ 0.579283 (-0.215646) | 0.392462 \/ 0.434364 (-0.041902) | 0.450009 \/ 0.540337 (-0.090329) | 0.536077 \/ 1.386936 (-0.850859) |\n\n<\/details>\nPyArrow==latest\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006067 \/ 0.011353 (-0.005286) | 0.003851 \/ 0.011008 (-0.007158) | 0.078462 \/ 0.038508 (0.039954) | 0.036221 \/ 0.023109 (0.013112) | 0.389195 \/ 0.275898 (0.113297) | 0.428710 \/ 0.323480 (0.105230) | 0.004645 \/ 0.007986 (-0.003341) | 0.002973 \/ 0.004328 (-0.001355) | 0.078299 \/ 0.004250 (0.074048) | 0.047076 \/ 0.037052 (0.010024) | 0.375673 \/ 0.258489 (0.117184) | 0.432352 \/ 0.293841 (0.138511) | 0.028212 \/ 0.128546 (-0.100334) | 0.008475 \/ 0.075646 (-0.067172) | 0.083902 \/ 0.419271 (-0.335369) | 0.046699 \/ 0.043533 (0.003166) | 0.364502 \/ 0.255139 (0.109363) | 0.389792 \/ 0.283200 (0.106592) | 0.025266 \/ 0.141683 (-0.116417) | 1.517458 \/ 1.452155 (0.065303) | 1.543634 \/ 1.492716 (0.050918) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.236479 \/ 0.018006 (0.218472) | 0.411528 \/ 0.000490 (0.411038) | 0.005213 \/ 0.000200 (0.005013) | 0.000091 \/ 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025764 \/ 0.037411 (-0.011647) | 0.103174 \/ 0.014526 (0.088648) | 0.110609 \/ 0.176557 (-0.065948) | 0.164630 \/ 0.737135 (-0.572506) | 0.114863 \/ 0.296338 (-0.181475) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.457155 \/ 0.215209 (0.241946) | 4.550675 \/ 2.077655 (2.473021) | 2.350473 \/ 1.504120 (0.846353) | 2.204919 \/ 1.541195 (0.663724) | 2.076724 \/ 1.468490 (0.608234) | 0.563107 \/ 4.584777 (-4.021670) | 
3.390669 \/ 3.745712 (-0.355043) | 1.741111 \/ 5.269862 (-3.528751) | 1.033268 \/ 4.565676 (-3.532408) | 0.068400 \/ 0.424275 (-0.355875) | 0.011607 \/ 0.007607 (0.004000) | 0.561944 \/ 0.226044 (0.335900) | 5.620224 \/ 2.268929 (3.351296) | 2.705241 \/ 55.444624 (-52.739384) | 2.344520 \/ 6.876477 (-4.531957) | 2.386119 \/ 2.142072 (0.244046) | 0.681583 \/ 4.805227 (-4.123644) | 0.137272 \/ 6.500664 (-6.363392) | 0.069217 \/ 0.075469 (-0.006252) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.322690 \/ 1.841788 (-0.519098) | 14.464953 \/ 8.074308 (6.390645) | 14.269350 \/ 10.191392 (4.077958) | 0.158879 \/ 0.680424 (-0.521545) | 0.016722 \/ 0.534201 (-0.517479) | 0.360299 \/ 0.579283 (-0.218984) | 0.391609 \/ 0.434364 (-0.042755) | 0.420507 \/ 0.540337 (-0.119831) | 0.512822 \/ 1.386936 (-0.874114) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#ca68191900d97b29abb3c2c4ba0502fe30d137d1 \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007106 \/ 0.011353 (-0.004247) | 0.005224 \/ 0.011008 (-0.005784) | 0.127563 \/ 0.038508 (0.089055) | 0.055067 \/ 0.023109 (0.031958) | 0.418660 \/ 0.275898 (0.142761) | 0.487891 \/ 0.323480 (0.164411) | 0.005712 \/ 0.007986 (-0.002274) | 0.004585 \/ 0.004328 (0.000256) | 0.090994 \/ 0.004250 (0.086743) | 0.071837 \/ 0.037052 (0.034784) | 0.446957 \/ 0.258489 (0.188468) | 0.475966 \/ 0.293841 (0.182125) | 0.038062 \/ 0.128546 (-0.090484) | 0.010056 \/ 0.075646 (-0.065590) | 0.406796 \/ 0.419271 (-0.012475) | 0.066542 \/ 0.043533 (0.023009) | 0.413676 \/ 0.255139 (0.158537) | 0.448624 \/ 0.283200 (0.165424) | 0.030332 \/ 0.141683 (-0.111351) | 1.895307 \/ 1.452155 (0.443152) | 1.904411 \/ 1.492716 (0.411694) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.221246 \/ 0.018006 (0.203240) | 0.461288 \/ 0.000490 (0.460799) | 0.005957 \/ 0.000200 (0.005757) | 0.000112 \/ 0.000054 (0.000058) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029255 \/ 0.037411 (-0.008156) | 0.131299 \/ 0.014526 (0.116773) | 0.135814 \/ 0.176557 (-0.040742) | 0.201342 \/ 0.737135 (-0.535793) | 0.141748 \/ 0.296338 (-0.154591) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.463936 \/ 0.215209 (0.248727) | 4.709621 \/ 2.077655 (2.631966) | 2.093844 \/ 1.504120 (0.589724) | 1.897963 \/ 1.541195 (0.356768) | 1.927865 \/ 1.468490 (0.459375) | 0.610879 \/ 4.584777 (-3.973898) | 
4.481370 \/ 3.745712 (0.735658) | 2.112235 \/ 5.269862 (-3.157627) | 1.203349 \/ 4.565676 (-3.362327) | 0.074828 \/ 0.424275 (-0.349447) | 0.013121 \/ 0.007607 (0.005514) | 0.580894 \/ 0.226044 (0.354849) | 5.801872 \/ 2.268929 (3.532943) | 2.579950 \/ 55.444624 (-52.864674) | 2.251569 \/ 6.876477 (-4.624908) | 2.421305 \/ 2.142072 (0.279232) | 0.760938 \/ 4.805227 (-4.044289) | 0.169554 \/ 6.500664 (-6.331110) | 0.077499 \/ 0.075469 (0.002030) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.410419 \/ 1.841788 (-0.431368) | 17.442331 \/ 8.074308 (9.368023) | 15.782183 \/ 10.191392 (5.590791) | 0.180649 \/ 0.680424 (-0.499775) | 0.021790 \/ 0.534201 (-0.512411) | 0.511040 \/ 0.579283 (-0.068243) | 0.510472 \/ 0.434364 (0.076108) | 0.607141 \/ 0.540337 (0.066804) | 0.724794 \/ 1.386936 (-0.662142) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007280 \/ 0.011353 (-0.004073) | 0.004712 \/ 0.011008 (-0.006296) | 0.089225 \/ 0.038508 (0.050717) | 0.053157 \/ 0.023109 (0.030048) | 0.431949 \/ 0.275898 (0.156051) | 0.478128 \/ 0.323480 (0.154648) | 0.006181 \/ 0.007986 (-0.001804) | 0.003387 \/ 0.004328 (-0.000941) | 0.083741 \/ 0.004250 (0.079490) | 0.071610 \/ 0.037052 (0.034557) | 0.414698 \/ 0.258489 (0.156209) | 0.484422 \/ 0.293841 (0.190581) | 0.034988 \/ 0.128546 (-0.093558) | 0.009831 \/ 0.075646 (-0.065816) | 0.089644 \/ 0.419271 (-0.329628) | 0.057053 \/ 0.043533 (0.013520) | 0.413144 \/ 0.255139 (0.158005) | 0.445464 \/ 0.283200 (0.162264) | 0.026109 \/ 0.141683 (-0.115574) | 1.842899 \/ 1.452155 (0.390745) | 1.923774 \/ 1.492716 (0.431057) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.245051 \/ 0.018006 (0.227045) | 0.460444 \/ 0.000490 (0.459954) | 0.000444 \/ 0.000200 (0.000244) | 0.000067 \/ 0.000054 (0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.034835 \/ 0.037411 (-0.002577) | 0.130078 \/ 0.014526 (0.115553) | 0.147012 \/ 0.176557 (-0.029544) | 0.203097 \/ 0.737135 (-0.534038) | 0.149636 \/ 0.296338 (-0.146702) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.521664 \/ 0.215209 (0.306455) | 5.283865 \/ 2.077655 (3.206210) | 2.456701 \/ 1.504120 (0.952581) | 2.266059 \/ 1.541195 (0.724864) | 2.295387 \/ 1.468490 (0.826897) | 0.613200 \/ 4.584777 (-3.971577) | 
4.526107 \/ 3.745712 (0.780394) | 2.047327 \/ 5.269862 (-3.222535) | 1.261063 \/ 4.565676 (-3.304614) | 0.070402 \/ 0.424275 (-0.353873) | 0.014128 \/ 0.007607 (0.006521) | 0.620929 \/ 0.226044 (0.394884) | 6.109127 \/ 2.268929 (3.840198) | 3.081406 \/ 55.444624 (-52.363218) | 2.658224 \/ 6.876477 (-4.218253) | 2.671974 \/ 2.142072 (0.529902) | 0.744081 \/ 4.805227 (-4.061146) | 0.161498 \/ 6.500664 (-6.339166) | 0.075148 \/ 0.075469 (-0.000321) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.585640 \/ 1.841788 (-0.256148) | 17.884321 \/ 8.074308 (9.810013) | 15.938937 \/ 10.191392 (5.747545) | 0.220818 \/ 0.680424 (-0.459605) | 0.021452 \/ 0.534201 (-0.512749) | 0.499747 \/ 0.579283 (-0.079536) | 0.512318 \/ 0.434364 (0.077954) | 0.562853 \/ 0.540337 (0.022515) | 0.678512 \/ 1.386936 (-0.708424) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#aa50937d82256827aee3dbd749c7a23555e05e38 \"CML watermark\")\n"],"created_at":1687869166000,"updated_at":1687880447000,"closed_at":1687879963000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5994","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5994","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5994.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5994.patch","merged_at":1687879963000},"body":"Fix the order of the columns in dataset.features when the order changes with `dataset.select_columns()`.\r\n\r\nI also fixed the same issue for `dataset.flatten()`\r\n\r\nClose https:\/\/github.com\/huggingface\/datasets\/issues\/5993","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5994\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5994\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5993","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5993\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5993\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5993\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5993","id":1776643555,"node_id":"I_kwDODunzps5p5W3j","number":5993,"title":"ValueError: Table schema does not match schema used to create 
file","user":{"login":"exs-avianello","id":128361578,"node_id":"U_kgDOB6akag","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/128361578?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/exs-avianello","html_url":"https:\/\/github.com\/exs-avianello","followers_url":"https:\/\/api.github.com\/users\/exs-avianello\/followers","following_url":"https:\/\/api.github.com\/users\/exs-avianello\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/exs-avianello\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/exs-avianello\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/exs-avianello\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/exs-avianello\/orgs","repos_url":"https:\/\/api.github.com\/users\/exs-avianello\/repos","events_url":"https:\/\/api.github.com\/users\/exs-avianello\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/exs-avianello\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186.0,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["We'll do a new release of `datasets` soon to make the fix available :)\r\n\r\nIn the meantime you can use `datasets` from source (main)","Thank you very much @lhoestq ! 
\ud83d\ude80 "],"created_at":1687863247000,"updated_at":1687880202000,"closed_at":1687879964000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nSaving a dataset as parquet fails with a `ValueError: Table schema does not match schema used to create file` if the dataset was obtained out of a `.select_columns()` call with columns selected out of order.\n\n### Steps to reproduce the bug\n\n```python\r\nimport datasets\r\n\r\ndataset = datasets.Dataset.from_dict(\r\n {\r\n \"x1\": [1, 2, 3],\r\n \"x2\": [10, 11, 12],\r\n }\r\n)\r\n\r\nds = dataset.select_columns([\"x2\", \"x1\"])\r\n\r\nds.to_parquet(\"demo.parquet\")\r\n```\r\n\r\n```shell\r\n>>>\r\nValueError: Table schema does not match schema used to create file: \r\ntable:\r\nx2: int64\r\nx1: int64\r\n-- schema metadata --\r\nhuggingface: '{\"info\": {\"features\": {\"x2\": {\"dtype\": \"int64\", \"_type\": \"V' + 53 vs. \r\nfile:\r\nx1: int64\r\nx2: int64\r\n-- schema metadata --\r\nhuggingface: '{\"info\": {\"features\": {\"x1\": {\"dtype\": \"int64\", \"_type\": \"V' + 53\r\n```\r\n\r\n--- \r\n\r\nI think this is because after the `.select_columns()` call with out of order columns, the output dataset features' schema ends up being out of sync with the schema of the arrow table backing it. \r\n\r\n```python\r\nds.features.arrow_schema\r\n>>>\r\nx1: int64\r\nx2: int64\r\n-- schema metadata --\r\nhuggingface: '{\"info\": {\"features\": {\"x1\": {\"dtype\": \"int64\", \"_type\": \"V' + 53\r\n\r\nds.data.schema\r\n>>>\r\nx2: int64\r\nx1: int64\r\n-- schema metadata --\r\nhuggingface: '{\"info\": {\"features\": {\"x2\": {\"dtype\": \"int64\", \"_type\": \"V' + 53\r\n```\r\n\r\n\r\nSo when we call `.to_parquet()`, the call behind the scenes to `datasets.io.parquet.ParquetDatasetWriter(...).write()` which initialises the backend `pyarrow.parquet.ParquetWriter` with `schema = self.dataset.features.arrow_schema` triggers `pyarrow` on write when [it checks](https:\/\/github.com\/apache\/arrow\/blob\/11b140a734a516e436adaddaeb35d23f30dcce44\/python\/pyarrow\/parquet\/core.py#L1086-L1090) that the `ParquetWriter` schema matches the schema of the table being written \ud83d\ude4c \r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/6ed837325cb539a5deb99129e5ad181d0269e050\/src\/datasets\/io\/parquet.py#L139-L141\r\n\n\n### Expected behavior\n\nThe dataset gets successfully saved as parquet. 
\r\n\r\n*In the same way as it does if saving it as csv:\r\n\r\n```python\r\nimport datasets\r\n\r\ndataset = datasets.Dataset.from_dict(\r\n {\r\n \"x1\": [1, 2, 3],\r\n \"x2\": [10, 11, 12],\r\n }\r\n)\r\n\r\nds = dataset.select_columns([\"x2\", \"x1\"])\r\n\r\nds.to_csv(\"demo.csv\")\r\n```\n\n### Environment info\n\n`python==3.11`\r\n`datasets==2.13.1`\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5993\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5993\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5992","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5992\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5992\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5992\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5992","id":1776460964,"node_id":"PR_kwDODunzps5UAk3C","number":5992,"title":"speedup","user":{"login":"qgallouedec","id":45557362,"node_id":"MDQ6VXNlcjQ1NTU3MzYy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/45557362?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/qgallouedec","html_url":"https:\/\/github.com\/qgallouedec","followers_url":"https:\/\/api.github.com\/users\/qgallouedec\/followers","following_url":"https:\/\/api.github.com\/users\/qgallouedec\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/qgallouedec\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/qgallouedec\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/qgallouedec\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/qgallouedec\/orgs","repos_url":"https:\/\/api.github.com\/users\/qgallouedec\/repos","events_url":"https:\/\/api.github.com\/users\/qgallouedec\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/qgallouedec\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_5992). 
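Until the fix from #5994 lands in a release, one possible workaround (a sketch building on the reproduction above; `ds` is the reordered dataset and the path is just an example) is to write the backing Arrow table directly with pyarrow, so the writer takes its schema from the table itself rather than from the stale `features`:

```python
import pyarrow.parquet as pq

# Bypass the writer's schema check by writing the underlying pyarrow.Table
# directly; its schema matches the data by construction.
pq.write_table(ds.data.table, "demo.parquet")
```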
All of your documentation changes will be reflected on that endpoint."],"created_at":1687857478000,"updated_at":1687857787000,"closed_at":1687857484000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":true,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5992","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5992","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5992.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5992.patch","merged_at":null},"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5992\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5992\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5991","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5991\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5991\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5991\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5991","id":1774456518,"node_id":"I_kwDODunzps5pxA7G","number":5991,"title":"`map` with any joblib backend","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1687775622000,"updated_at":1687775622000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"draft":null,"pull_request":null,"body":"We recently enabled the (experimental) parallel backend switch for data download and extraction but not for `map` yet.\r\n\r\nRight now we're using our `iflatmap_unordered` implementation for multiprocessing that uses a shared Queue to gather progress updates from the subprocesses and show a progress bar in the main process.\r\n\r\nIf a Queue implementation that would work on any joblib backend by leveraging the filesystem that is shared among workers, we can have `iflatmap_unordered` for joblib and therefore a `map` with any joblib 
backend with a progress bar !\r\n\r\nNote that the Queue doesn't need to be that optimized though since we can choose a small frequency for progress updates (like 1 update per second).","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5991\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5991\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5989","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5989\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5989\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5989\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5989","id":1774134091,"node_id":"I_kwDODunzps5pvyNL","number":5989,"title":"Set a rule on the config and split names","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["in this case we need to decide what to do with the existing datasets with white space characters (there shouldn't be a lot of them I think)","I imagine that we should stop supporting them, and help the user fix them?"],"created_at":1687764854000,"updated_at":1687785178000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":null,"pull_request":null,"body":"> should we actually allow characters like spaces? 
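Returning to the joblib `map` idea above: a minimal sketch of what a filesystem-backed queue could look like, assuming all workers share `queue_dir` (e.g. an NFS mount). All names here are hypothetical, not datasets' API:

```python
import json
import uuid
from pathlib import Path

class FileQueue:
    """Progress-update queue built on a directory shared by all workers.

    Workers call put() to report progress; the main process drains it
    periodically (e.g. once per second) to update the progress bar.
    """

    def __init__(self, queue_dir):
        self.queue_dir = Path(queue_dir)
        self.queue_dir.mkdir(parents=True, exist_ok=True)

    def put(self, item):
        # One file per message; write-then-rename publishes it atomically.
        name = uuid.uuid4().hex
        tmp = self.queue_dir / f".{name}.tmp"
        tmp.write_text(json.dumps(item))
        tmp.rename(self.queue_dir / f"{name}.msg")

    def drain(self):
        # Yield and delete all pending messages, oldest name first.
        for msg_file in sorted(self.queue_dir.glob("*.msg")):
            yield json.loads(msg_file.read_text())
            msg_file.unlink()
```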
maybe it's better to add validation for whitespace symbols and directly in datasets and raise\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets-server\/issues\/853\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5989\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5989\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5988","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5988\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5988\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5988\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5988","id":1773257828,"node_id":"I_kwDODunzps5pscRk","number":5988,"title":"ConnectionError: Couldn't reach dataset_infos.json ","user":{"login":"yulingao","id":20674868,"node_id":"MDQ6VXNlcjIwNjc0ODY4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/20674868?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yulingao","html_url":"https:\/\/github.com\/yulingao","followers_url":"https:\/\/api.github.com\/users\/yulingao\/followers","following_url":"https:\/\/api.github.com\/users\/yulingao\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yulingao\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yulingao\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yulingao\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yulingao\/orgs","repos_url":"https:\/\/api.github.com\/users\/yulingao\/repos","events_url":"https:\/\/api.github.com\/users\/yulingao\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yulingao\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Unfortunately, I can't reproduce the error. 
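For the config/split name rule discussed just above, a hypothetical validation sketch (not datasets' actual implementation) could be as simple as:

```python
import re

# Hypothetical rule: config and split names must not contain whitespace.
_WHITESPACE_RE = re.compile(r"\s")

def validate_name(name: str) -> None:
    if _WHITESPACE_RE.search(name):
        raise ValueError(
            f"Config/split name {name!r} contains whitespace characters, "
            "which are not allowed."
        )

validate_name("train")        # ok
# validate_name("my split")   # would raise ValueError
```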
What does the following code return for you?\r\n```python\r\nimport requests\r\nfrom huggingface_hub import hf_hub_url\r\nr = requests.get(hf_hub_url(\"codeparrot\/codeparrot-clean-train\", \"dataset_infos.json\", repo_type=\"dataset\"))\r\n```\r\n\r\nAlso, can you provide more info about your network (region, proxies, etc.)?"],"created_at":1687696771000,"updated_at":1688736057000,"closed_at":1688736057000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nI'm trying to load codeparrot\/codeparrot-clean-train, but get the following error:\r\n\r\nConnectionError: Couldn't reach https:\/\/huggingface.co\/datasets\/codeparrot\/codeparrot-clean-train\/resolve\/main\/dataset_infos.json (ConnectionError(ProtocolError('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))))\r\n\r\n\n\n### Steps to reproduce the bug\n\ntrain_data = load_dataset('codeparrot\/codeparrot-clean-train', split='train')\r\n\n\n### Expected behavior\n\ndownload the dataset\n\n### Environment info\n\ncentos7","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5988\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5988\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5987","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5987\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5987\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5987\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5987","id":1773047909,"node_id":"I_kwDODunzps5prpBl","number":5987,"title":"Why max_shard_size is not supported in load_dataset and passed to download_and_prepare","user":{"login":"npuichigo","id":11533479,"node_id":"MDQ6VXNlcjExNTMzNDc5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/11533479?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/npuichigo","html_url":"https:\/\/github.com\/npuichigo","followers_url":"https:\/\/api.github.com\/users\/npuichigo\/followers","following_url":"https:\/\/api.github.com\/users\/npuichigo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/npuichigo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/npuichigo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/npuichigo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/npuichigo\/orgs","repos_url":"https:\/\/api.github.com\/users\/npuichigo\/repos","events_url":"https:\/\/api.github.com\/users\/npuichigo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/npuichigo\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Can you explain your use case for `max_shard_size`? \r\n\r\nOn some systems, there is a limit to the size of a memory-mapped file, so we could consider exposing this parameter in `load_dataset`.","In my use case, users may choose a proper size to balance the cost and benefit of using large shard size. 
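The `load_dataset_builder` + `download_and_prepare` workflow that the rest of this thread converges on looks roughly like this (a sketch; the dataset name and shard size are just examples):

```python
import datasets

builder = datasets.load_dataset_builder("codeparrot/codeparrot-clean-train")
builder.download_and_prepare(max_shard_size="500MB")  # control Arrow shard size here
ds = builder.as_dataset(split="train")  # memory-maps the prepared local Arrow files
ids = ds.to_iterable_dataset()          # stream over the prepared shards
```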
(On azure blob or hdfs which may automatically download the shard from background)","But `load_dataset` doesn't support caching (and reading) Arrow datasets from remote storage. \r\n\r\n`load_datset_builder` + `download_and_prepare` is not equal to `load_dataset`. The latter has one more step, `builder.as_dataset`, that memory-maps Arrow files, which only works for local files.","Thanks. So if I want to use `IterableDataset` and control the size of single arrow file, how should I organize the data loader? Maybe `load_dataset_build` + `download_and_prepare` + `builder.as_dataset` + `dataset.to_iterable_dataset`?","Yes, this should work.\r\n\r\nI think we can expose `max_shard_size` in `load_dataset`, so feel free to open a PR."],"created_at":1687666753000,"updated_at":1688054768000,"closed_at":1688054768000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/a8a797cc92e860c8d0df71e0aa826f4d2690713e\/src\/datasets\/load.py#L1809\r\n\r\nWhat I can to is break the `load_dataset` and use `load_datset_builder` + `download_and_prepare` instead.\n\n### Steps to reproduce the bug\n\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/a8a797cc92e860c8d0df71e0aa826f4d2690713e\/src\/datasets\/load.py#L1809\n\n### Expected behavior\n\nUsers can define the max shard size.\n\n### Environment info\n\ndatasets==2.13.1","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5987\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5987\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5986","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5986\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5986\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5986\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5986","id":1772233111,"node_id":"PR_kwDODunzps5TygOZ","number":5986,"title":"Make IterableDataset.from_spark more 
efficient","user":{"login":"mathewjacob1002","id":134338709,"node_id":"U_kgDOCAHYlQ","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/134338709?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mathewjacob1002","html_url":"https:\/\/github.com\/mathewjacob1002","followers_url":"https:\/\/api.github.com\/users\/mathewjacob1002\/followers","following_url":"https:\/\/api.github.com\/users\/mathewjacob1002\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mathewjacob1002\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mathewjacob1002\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mathewjacob1002\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mathewjacob1002\/orgs","repos_url":"https:\/\/api.github.com\/users\/mathewjacob1002\/repos","events_url":"https:\/\/api.github.com\/users\/mathewjacob1002\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mathewjacob1002\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq would you be able to review this please and also approve the workflow?","Sounds good to me :) feel free to run `make style` to apply code formatting","_The documentation is not available anymore as the PR was closed or merged._","cool ! I think we can merge once all comments have been addressed","@lhoestq I just addressed the comments and I think we can move ahead with this! \r\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007734 \/ 0.011353 (-0.003619) | 0.004608 \/ 0.011008 (-0.006400) | 0.094466 \/ 0.038508 (0.055958) | 0.086477 \/ 0.023109 (0.063368) | 0.410311 \/ 0.275898 (0.134413) | 0.455560 \/ 0.323480 (0.132080) | 0.006112 \/ 0.007986 (-0.001874) | 0.003845 \/ 0.004328 (-0.000483) | 0.072506 \/ 0.004250 (0.068256) | 0.066721 \/ 0.037052 (0.029669) | 0.409967 \/ 0.258489 (0.151478) | 0.460480 \/ 0.293841 (0.166639) | 0.036700 \/ 0.128546 (-0.091847) | 0.009854 \/ 0.075646 (-0.065792) | 0.320936 \/ 0.419271 (-0.098335) | 0.061002 \/ 0.043533 (0.017469) | 0.413963 \/ 0.255139 (0.158824) | 0.426787 \/ 0.283200 (0.143588) | 0.029182 \/ 0.141683 (-0.112501) | 1.685136 \/ 1.452155 (0.232981) | 1.754590 \/ 1.492716 (0.261873) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.222698 \/ 0.018006 (0.204692) | 0.505929 \/ 0.000490 (0.505440) | 0.005291 \/ 0.000200 (0.005091) | 0.000097 \/ 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.032527 \/ 0.037411 (-0.004884) | 0.094842 \/ 0.014526 (0.080317) | 0.110138 \/ 0.176557 (-0.066418) | 0.193786 \/ 0.737135 (-0.543349) | 0.112593 \/ 0.296338 (-0.183745) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.441671 \/ 0.215209 (0.226461) | 4.392961 \/ 2.077655 (2.315306) | 2.161111 \/ 1.504120 (0.656991) | 1.967080 \/ 1.541195 (0.425885) | 2.065411 \/ 1.468490 (0.596920) | 0.561080 \/ 4.584777 (-4.023697) | 
4.159612 \/ 3.745712 (0.413900) | 6.435248 \/ 5.269862 (1.165386) | 3.732338 \/ 4.565676 (-0.833339) | 0.066156 \/ 0.424275 (-0.358119) | 0.008030 \/ 0.007607 (0.000423) | 0.532182 \/ 0.226044 (0.306137) | 5.315142 \/ 2.268929 (3.046213) | 2.680157 \/ 55.444624 (-52.764467) | 2.303799 \/ 6.876477 (-4.572677) | 2.530911 \/ 2.142072 (0.388838) | 0.669504 \/ 4.805227 (-4.135723) | 0.151940 \/ 6.500664 (-6.348724) | 0.066999 \/ 0.075469 (-0.008470) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.424275 \/ 1.841788 (-0.417513) | 21.550742 \/ 8.074308 (13.476434) | 16.031414 \/ 10.191392 (5.840022) | 0.194681 \/ 0.680424 (-0.485743) | 0.020389 \/ 0.534201 (-0.513812) | 0.429808 \/ 0.579283 (-0.149475) | 0.457503 \/ 0.434364 (0.023139) | 0.511522 \/ 0.540337 (-0.028816) | 0.682621 \/ 1.386936 (-0.704315) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007519 \/ 0.011353 (-0.003834) | 0.004445 \/ 0.011008 (-0.006563) | 0.071946 \/ 0.038508 (0.033438) | 0.082982 \/ 0.023109 (0.059873) | 0.459938 \/ 0.275898 (0.184040) | 0.504875 \/ 0.323480 (0.181395) | 0.005805 \/ 0.007986 (-0.002181) | 0.003740 \/ 0.004328 (-0.000589) | 0.071998 \/ 0.004250 (0.067747) | 0.062580 \/ 0.037052 (0.025527) | 0.462263 \/ 0.258489 (0.203774) | 0.506355 \/ 0.293841 (0.212514) | 0.036321 \/ 0.128546 (-0.092225) | 0.009830 \/ 0.075646 (-0.065816) | 0.079810 \/ 0.419271 (-0.339461) | 0.055291 \/ 0.043533 (0.011758) | 0.464093 \/ 0.255139 (0.208954) | 0.481109 \/ 0.283200 (0.197910) | 0.026909 \/ 0.141683 (-0.114774) | 1.652538 \/ 1.452155 (0.200383) | 1.750713 \/ 1.492716 (0.257997) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.267552 \/ 0.018006 (0.249546) | 0.502021 \/ 0.000490 (0.501531) | 0.001635 \/ 0.000200 (0.001435) | 0.000099 \/ 0.000054 (0.000044) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.033747 \/ 0.037411 (-0.003665) | 0.104242 \/ 0.014526 (0.089716) | 0.113829 \/ 0.176557 (-0.062728) | 0.176242 \/ 0.737135 (-0.560893) | 0.117002 \/ 0.296338 (-0.179336) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.476731 \/ 0.215209 (0.261522) | 4.727054 \/ 2.077655 (2.649399) | 2.589396 \/ 1.504120 (1.085276) | 2.511180 \/ 1.541195 (0.969985) | 2.634122 \/ 1.468490 (1.165632) | 0.563840 \/ 4.584777 (-4.020937) | 
4.140212 \/ 3.745712 (0.394500) | 6.188789 \/ 5.269862 (0.918928) | 3.716897 \/ 4.565676 (-0.848780) | 0.065823 \/ 0.424275 (-0.358452) | 0.007705 \/ 0.007607 (0.000098) | 0.566580 \/ 0.226044 (0.340535) | 5.653306 \/ 2.268929 (3.384377) | 3.028756 \/ 55.444624 (-52.415868) | 2.592319 \/ 6.876477 (-4.284158) | 2.614250 \/ 2.142072 (0.472178) | 0.667135 \/ 4.805227 (-4.138093) | 0.153455 \/ 6.500664 (-6.347209) | 0.069321 \/ 0.075469 (-0.006148) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.541978 \/ 1.841788 (-0.299810) | 21.747360 \/ 8.074308 (13.673052) | 15.963657 \/ 10.191392 (5.772265) | 0.192843 \/ 0.680424 (-0.487581) | 0.020702 \/ 0.534201 (-0.513499) | 0.433620 \/ 0.579283 (-0.145663) | 0.467327 \/ 0.434364 (0.032963) | 0.507398 \/ 0.540337 (-0.032940) | 0.692797 \/ 1.386936 (-0.694140) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#396cf9419d12e3150e2051793b10f2c813780a90 \"CML watermark\")\n"],"created_at":1687558700000,"updated_at":1688724358000,"closed_at":1688723769000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5986","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5986","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5986.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5986.patch","merged_at":1688723769000},"body":"Moved the code from using collect() to using toLocalIterator, which allows for prefetching partitions that will be selected next, thus allowing for better performance when iterating. 
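For illustration, a sketch (not the PR's actual diff) of the difference between the two approaches in PySpark:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").getOrCreate()
df = spark.range(1_000_000)

# Before: collect() materializes every row in driver memory up front.
# rows = df.collect()

# After: toLocalIterator() pulls one partition at a time; with
# prefetchPartitions=True the next partition is fetched in the background
# while the current one is being consumed.
for row in df.toLocalIterator(prefetchPartitions=True):
    pass  # yield examples to the IterableDataset here
```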
","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5986\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5986\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5985","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5985\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5985\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5985\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5985","id":1771588158,"node_id":"I_kwDODunzps5pmEo-","number":5985,"title":"Cannot reuse tokenizer object for dataset map","user":{"login":"vikigenius","id":12724810,"node_id":"MDQ6VXNlcjEyNzI0ODEw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12724810?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vikigenius","html_url":"https:\/\/github.com\/vikigenius","followers_url":"https:\/\/api.github.com\/users\/vikigenius\/followers","following_url":"https:\/\/api.github.com\/users\/vikigenius\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vikigenius\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vikigenius\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vikigenius\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vikigenius\/orgs","repos_url":"https:\/\/api.github.com\/users\/vikigenius\/repos","events_url":"https:\/\/api.github.com\/users\/vikigenius\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vikigenius\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892865,"node_id":"MDU6TGFiZWwxOTM1ODkyODY1","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/duplicate","name":"duplicate","color":"cfd3d7","default":true,"description":"This issue or pull request already exists"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This is a known issue: https:\/\/github.com\/huggingface\/datasets\/issues\/3847.\r\n\r\nFixing this requires significant work - rewriting the `tokenizers` lib to make them immutable.\r\n\r\nThe current solution is to pass `cache_file_name` to `map` to use that file for caching or calling a tokenizer before `map` (with the same set of parameters as the ones in the map transform)"],"created_at":1687531531000,"updated_at":1687782890000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nRelated to https:\/\/github.com\/huggingface\/transformers\/issues\/24441. Not sure if this is a tokenizer issue or caching issue, so filing in both.\r\n\r\nPassing the tokenizer to the dataset map function causes the tokenizer to be fingerprinted weirdly. 
After calling the tokenizer with arguments like padding and truncation the tokenizer object changes interanally, even though the hash remains the same.\r\n\r\nBut dumps is able to detect that internal change which causes the tokenizer object's fingerprint to change.\r\n\r\n\n\n### Steps to reproduce the bug\n\n```python\r\nfrom transformers import AutoTokenizer\r\nfrom datasets.utils.py_utils import dumps # Huggingface datasets\r\n\r\nt = AutoTokenizer.from_pretrained('bert-base-uncased')\r\nt.save_pretrained(\"tok1\")\r\nth1 = hash(dumps(t))\r\ntext = \"This is an example text\"\r\nttext = t(text, max_length=512, padding=\"max_length\", truncation=True)\r\nt.save_pretrained(\"tok2\")\r\nth2 = hash(dumps(t))\r\n\r\nassert th1 == th2 # Assertion Error\r\n```\r\n\r\nBut if you use just the hash of the object without dumps, the hashes don't change\r\n\r\n```python\r\nfrom transformers import AutoTokenizer\r\nfrom datasets.utils.py_utils import dumps # Huggingface datasets\r\n\r\nt = AutoTokenizer.from_pretrained('bert-base-uncased')\r\nth1 = hash(t) # Just hash no dumps\r\ntext = \"This is an example text\"\r\nttext = t(text, max_length=512, padding=\"max_length\", truncation=True)\r\nth2 = hash(t) # Just hash no dumps\r\n\r\nassert th1 == th2 # This is OK\r\n```\r\n\r\nThis causes situations such as the following\r\n\r\n1. Create a text file like this `yes \"This is an example text\" | head -n 10000 > lines.txt`\r\n\r\n```python\r\nfrom transformers import AutoTokenizer\r\nimport datasets\r\n\r\n\r\nclass TokenizeMapper(object):\r\n \"\"\"Mapper for tokenizer.\r\n\r\n This is needed because the caching mechanism of HuggingFace does not work on\r\n lambdas. Each time a new lambda will be created by a new process which will\r\n lead to a different hash.\r\n This way we can have a universal mapper object in init and reuse it with the same\r\n hash for each process.\r\n \"\"\"\r\n\r\n def __init__(self, tokenizer):\r\n \"\"\"Initialize the tokenizer.\"\"\"\r\n self.tokenizer = tokenizer\r\n\r\n def __call__(self, examples, **kwargs):\r\n \"\"\"Run the mapper.\"\"\"\r\n texts = examples[\"text\"]\r\n tt = self.tokenizer(texts, max_length=256, padding=\"max_length\", truncation=True)\r\n batch_outputs = {\r\n \"input_ids\": tt.input_ids,\r\n \"attention_mask\": tt.attention_mask,\r\n }\r\n return batch_outputs\r\n\r\n\r\nt = AutoTokenizer.from_pretrained('bert-base-uncased')\r\nmapper = TokenizeMapper(t)\r\n\r\nds = datasets.load_dataset(\"text\", data_files=\"lines.txt\")\r\n\r\nmds1 = ds.map(\r\n mapper,\r\n batched=False,\r\n remove_columns=[\"text\"],\r\n).with_format(\"torch\")\r\n\r\nmds2 = ds.map(\r\n mapper,\r\n batched=False,\r\n remove_columns=[\"text\"],\r\n).with_format(\"torch\")\r\n```\r\n\r\nThe second call to map should reuse the cached processed dataset from mds1, but it instead it redoes the tokenization because of the behavior of dumps.\n\n### Expected behavior\n\nWe should be able to initialize a tokenizer. 
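A sketch of the two workarounds suggested in the comment above (file names are hypothetical; `TokenizeMapper` is the class from the report):

```python
from transformers import AutoTokenizer
import datasets

t = AutoTokenizer.from_pretrained("bert-base-uncased")
# Workaround 1: call the tokenizer once, with the same arguments the map
# transform uses, so its internal state is already mutated before the first
# fingerprint is computed.
t("warmup", max_length=256, padding="max_length", truncation=True)

ds = datasets.load_dataset("text", data_files="lines.txt", split="train")
mapper = TokenizeMapper(t)

# Workaround 2: pin the cache file explicitly so the changing fingerprint
# no longer matters.
mds = ds.map(
    mapper,
    batched=False,
    remove_columns=["text"],
    cache_file_name="lines.tokenized.arrow",
)
```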
And reusing it should let us reuse the same map computation for the same dataset.\r\n\r\nThe second call to map should reuse the cached processed dataset from mds1, but it instead it redoes the tokenization because of the behavior of dumps.\n\n### Environment info\n\n- `datasets` version: 2.13.0\r\n- Platform: Linux-6.1.31_1-x86_64-with-glibc2.36\r\n- Python version: 3.9.16\r\n- Huggingface_hub version: 0.15.1\r\n- PyArrow version: 12.0.1\r\n- Pandas version: 2.0.2","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5985\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5985\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5984","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5984\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5984\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5984\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5984","id":1771571458,"node_id":"I_kwDODunzps5pmAkC","number":5984,"title":"AutoSharding IterableDataset's when num_workers > 1","user":{"login":"mathephysicist","id":25594384,"node_id":"MDQ6VXNlcjI1NTk0Mzg0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/25594384?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mathephysicist","html_url":"https:\/\/github.com\/mathephysicist","followers_url":"https:\/\/api.github.com\/users\/mathephysicist\/followers","following_url":"https:\/\/api.github.com\/users\/mathephysicist\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mathephysicist\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mathephysicist\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mathephysicist\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mathephysicist\/orgs","repos_url":"https:\/\/api.github.com\/users\/mathephysicist\/repos","events_url":"https:\/\/api.github.com\/users\/mathephysicist\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mathephysicist\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["For this to be possible, we would have to switch from the \"Streaming\" Arrow format to the \"Random Access\" (IPC\/Feather) format, which allows reading arbitrary record batches (explained [here](https:\/\/arrow.apache.org\/docs\/python\/ipc.html)). We could then use these batches to construct shards.\r\n\r\n@lhoestq @albertvillanova Do you think this use case is worth the switch? Also, we currently shard files, not inner row groups\/chunks. Should we also support sharding row groups (e.g. 
if the number of input files is 1)?\r\n\r\nPS: I don't expect significant speed-up for local, uncompressed Arrow files.","Alternatively we could support multiprocessing map for iterable datasets and let the user do the CPU intensive task there ?\r\n\r\nThis way it would work on arrow data but also on any iterable dataset","> For this to be possible, we would have to switch from the \"Streaming\" Arrow format to the \"Random Access\" (IPC\/Feather) format, which allows reading arbitrary record batches (explained [here](https:\/\/arrow.apache.org\/docs\/python\/ipc.html)). We could then use these batches to construct shards.\r\n> \r\n> @lhoestq @albertvillanova Do you think this use case is worth the switch? Also, we currently shard files, not inner row groups\/chunks. Should we also support sharding row groups (e.g. if the number of input files is 1)?\r\n> \r\n> PS: I don't expect significant speed-up for local, uncompressed Arrow files.\r\n\r\nCould you explain why you'd need to change the arrow format?\r\n\r\nWhen we use streaming datasets we simply determine the number of worker shards and then add some modulo logic at the appropriate place. Worst case scenario, you'd skip streaming entries according to the number of shards.\r\n\r\nFor PyTorch, I'd be happy to provide an implementation or a sketch thereof, if you point me toward what the testing requirements would be for such a PR.","> Could you explain why you'd need to change the arrow format?\r\n\r\nThis way workers have random access to the location of the file where its dataset subset starts. Currently we're using the Arrow streaming format which doesn't include the metadata of the record batches offsets. This is needed here to efficiently split a dataset made of one single file.","> > Could you explain why you'd need to change the arrow format?\r\n> \r\n> This way workers have random access to the location of the file where its dataset subset starts. Currently we're using the Arrow streaming format which doesn't include the metadata of the record batches offsets. This is needed here to efficiently split a dataset made of one single file.\r\n\r\nI guess I don't understand why you'd need to subset the dataset in the first place. \r\nIt seems sufficient to figure out how to offset or skip rows.\r\n\r\nFor instance, using pyArrow, you could use RecordBatchStreamReader to zero-copy iterate over records with read_next_batch and then only initiate the next step for records modulo worker shard.\r\nThat's one way to do it, where of course you'd need to account for gpu sharding as well.\r\n\r\n\r\nOtherwise, how did you implement worker\/node\/GPU sharding for iterable\/streaming data where you do not have index information or prior splits (e.g. files)?","> For instance, using pyArrow, you could use RecordBatchStreamReader to zero-copy iterate over records with read_next_batch and then only initiate the next step for records modulo worker shard.\r\n\r\nThat works indeed ! And what we meant is that you can make it even faster to instantiate. Indeed using RecordBatchStreamReader you need to get the list of all the record batches in each worker, whereas you could just get the list of record batches per worker if you use the record batches locations in the Arrow IPC file footer. 
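A sketch of the two approaches discussed above (illustrative helpers, not datasets' API): modulo-skipping over the streaming format versus random access with the IPC file format, whose footer stores the record batch offsets:

```python
import pyarrow as pa

def iter_worker_batches_streaming(path, worker_id, num_workers):
    # Streaming format: every worker scans the whole stream but keeps only
    # every num_workers-th record batch, offset by its worker id.
    with pa.memory_map(path) as source:
        reader = pa.ipc.open_stream(source)
        for i, batch in enumerate(reader):
            if i % num_workers == worker_id:
                yield batch

def iter_worker_batches_random_access(path, worker_id, num_workers):
    # Random-access (IPC file) format: batch offsets live in the footer,
    # so each worker can jump straight to its own batches.
    with pa.memory_map(path) as source:
        reader = pa.ipc.open_file(source)
        for i in range(worker_id, reader.num_record_batches, num_workers):
            yield reader.get_batch(i)
```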
This would be especially appreciated to have a fast instantiation in case you have tens of thousands of Arrow files for example."],"created_at":1687530860000,"updated_at":1688490236000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Feature request\n\n\r\nMinimal Example\r\n\r\n```\r\nimport torch\r\nfrom datasets import IterableDataset\r\n\r\nd = IterableDataset.from_file()\r\ndl = torch.utils.data.dataloader.DataLoader(d,num_workers=3)\r\n\r\nfor sample in dl:\r\n print(sample)\r\n\r\n```\r\n\r\nWarning:\r\nToo many dataloader workers: 2 (max is dataset.n_shards=1). Stopping 1 dataloader workers.\r\nTo parallelize data loading, we give each process some shards (or data sources) to process. Therefore it's unnecessary to have a number of workers greater than dataset.n_shards=1. To enable more parallelism, please split the dataset in more files than 1.\r\n\r\nExpected Behavior:\r\nDataset is sharded each cpu uses subset (contiguously - so you can do checkpoint loading\/saving) \n\n### Motivation\n\nI have a lot of unused cpu's and would like to be able to shard iterable datasets with pytorch's dataloader when num_workers > 1. This is for a very large single file. I am aware that we can use the `split_dataset_by_node` to ensure that each node (for distributed) gets different shards, but we should extend it so that this also continues for multiple workers. \n\n### Your contribution\n\nIf someone points me to what needs to change, I can create a PR.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5984\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5984\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5983","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5983\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5983\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5983\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5983","id":1770578804,"node_id":"PR_kwDODunzps5TtDdy","number":5983,"title":"replaced PathLike as a variable for save_to_disk for dataset_path 
wit\u2026","user":{"login":"benjaminbrown038","id":35114142,"node_id":"MDQ6VXNlcjM1MTE0MTQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/35114142?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/benjaminbrown038","html_url":"https:\/\/github.com\/benjaminbrown038","followers_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/followers","following_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/orgs","repos_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/repos","events_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1687481825000,"updated_at":1687481825000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5983","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5983","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5983.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5983.patch","merged_at":null},"body":"\u2026h str like that of load_from_disk","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5983\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5983\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5982","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5982\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5982\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5982\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5982","id":1770333296,"node_id":"I_kwDODunzps5phSRw","number":5982,"title":"404 on Datasets Documentation 
Page","user":{"login":"kmulka-bloomberg","id":118509387,"node_id":"U_kgDOBxBPSw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/118509387?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/kmulka-bloomberg","html_url":"https:\/\/github.com\/kmulka-bloomberg","followers_url":"https:\/\/api.github.com\/users\/kmulka-bloomberg\/followers","following_url":"https:\/\/api.github.com\/users\/kmulka-bloomberg\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/kmulka-bloomberg\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/kmulka-bloomberg\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/kmulka-bloomberg\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/kmulka-bloomberg\/orgs","repos_url":"https:\/\/api.github.com\/users\/kmulka-bloomberg\/repos","events_url":"https:\/\/api.github.com\/users\/kmulka-bloomberg\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/kmulka-bloomberg\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This wasn\u2019t working for me a bit earlier, but it looks to be back up now","We had a minor issue updating the docs after the latest release. It should work now :)."],"created_at":1687464897000,"updated_at":1687794303000,"closed_at":1687794303000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nGetting a 404 from the Hugging Face Datasets docs page:\r\nhttps:\/\/huggingface.co\/docs\/datasets\/index\r\n\n\n### Steps to reproduce the bug\n\n1. Go to URL https:\/\/huggingface.co\/docs\/datasets\/index\r\n2. Notice 404 not found\n\n### Expected behavior\n\nURL should either show docs or redirect to new location\n\n### Environment info\n\nhugginface.co","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5982\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5982\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5981","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5981\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5981\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5981\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5981","id":1770310087,"node_id":"I_kwDODunzps5phMnH","number":5981,"title":"Only two cores are getting used in sagemaker with pytorch 3.10 
kernel","user":{"login":"mmr-crexi","id":107141022,"node_id":"U_kgDOBmLXng","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/107141022?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mmr-crexi","html_url":"https:\/\/github.com\/mmr-crexi","followers_url":"https:\/\/api.github.com\/users\/mmr-crexi\/followers","following_url":"https:\/\/api.github.com\/users\/mmr-crexi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mmr-crexi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mmr-crexi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mmr-crexi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mmr-crexi\/orgs","repos_url":"https:\/\/api.github.com\/users\/mmr-crexi\/repos","events_url":"https:\/\/api.github.com\/users\/mmr-crexi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mmr-crexi\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I think it's more likely that this issue is related to PyTorch than Datasets, as PyTorch (on import) registers functions to execute when forking a process. Maybe this is the culprit: https:\/\/github.com\/pytorch\/pytorch\/issues\/99625","From reading that ticket, it may be down in mkl? Is it worth hotfixing in the meantime, with the express intention of turning it off? I know that's a horribly crufty solution, but it's also deeply frustrating to be limited to 2 cores for operations as simple as filtration.","This is too specific and unrelated to `datasets`, so this shouldn't be fixed here."],"created_at":1687463851000,"updated_at":1688662414000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nWhen using the newer pytorch 3.10 kernel, only 2 cores are being used by huggingface filter and map functions. The Pytorch 3.9 kernel would use as many cores as specified in the num_proc field.\r\n\r\nWe have solved this in our own code by placing the following snippet in the code that is called inside subprocesses:\r\n\r\n```os.sched_setaffinity(0, {i for i in range(1000)})```\r\n\r\nThe problem, as near as we can tell, us that once upon a time, cpu affinity was set using a bitmask (\"0xfffff\" and the like), and affinity recently changed to a list of processors rather than to using the mask. As such, only processors 1 and 17 are shown to be working in htop.\r\n![Selection_072](https:\/\/github.com\/huggingface\/datasets\/assets\/107141022\/04c5a824-5321-4531-afca-7bc84dff36b4)\r\n\r\n\r\nWhen running functions via `map`, the above resetting of affinity works to spread across the cores. When using `filter`, however, only two cores are active.\n\n### Steps to reproduce the bug\n\nRepro steps:\r\n\r\n1. Create an aws sagemaker instance\r\n2. use the pytorch 3_10 kernel\r\n3. Load a dataset\r\n4. run a filter operation\r\n5. watch as only 2 cores are used when num_proc > 2\r\n6. run a map operation\r\n7. watch as only 2 cores are used when num_proc > 2\r\n8. run a map operation with processor affinity reset inside the function called via map\r\n9. 
Watch as all cores run\r\n\r\n\n\n### Expected behavior\n\nAll specified cores are used via the num_proc argument.\n\n### Environment info\n\nAWS sagemaker with the following init script run in the terminal after instance creation:\r\n\r\nconda init bash\r\nbash\r\nconda activate pytorch_p310\r\npip install Wand PyPDF pytesseract datasets seqeval pdfplumber transformers pymupdf sentencepiece timm donut-python accelerate optimum xgboost\r\npython -m pip install 'git+https:\/\/github.com\/facebookresearch\/detectron2.git'\r\nsudo yum -y install htop\r\nsudo yum -y update\r\nsudo yum -y install wget libstdc++ autoconf automake libtool autoconf-archive pkg-config gcc gcc-c++ make libjpeg-devel libpng-devel libtiff-devel zlib-devel","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5981\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5981\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5980","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5980\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5980\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5980\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5980","id":1770255973,"node_id":"I_kwDODunzps5pg_Zl","number":5980,"title":"Viewing dataset card returns \u201c502 Bad Gateway\u201d","user":{"login":"tbenthompson","id":4241811,"node_id":"MDQ6VXNlcjQyNDE4MTE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4241811?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/tbenthompson","html_url":"https:\/\/github.com\/tbenthompson","followers_url":"https:\/\/api.github.com\/users\/tbenthompson\/followers","following_url":"https:\/\/api.github.com\/users\/tbenthompson\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/tbenthompson\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/tbenthompson\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/tbenthompson\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/tbenthompson\/orgs","repos_url":"https:\/\/api.github.com\/users\/tbenthompson\/repos","events_url":"https:\/\/api.github.com\/users\/tbenthompson\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/tbenthompson\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Can you try again? Maybe there was a minor outage.","Yes, it seems to be working now. In case it's helpful, the outage lasted several days. It was failing as late as yesterday morning. 
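Returning to the CPU-affinity workaround described in issue 5981 above, here is a minimal self-contained sketch of resetting the affinity mask inside the mapped function (a Linux-only API; the toy data and `num_proc` are illustrative, and the report notes this helps `map` but not `filter`):

```python
import os

from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "bb", "ccc"] * 1000})  # toy data

def add_len(batch):
    # Undo the narrow affinity mask inherited by the worker process so it
    # can be scheduled on any core, per the workaround quoted in the issue.
    os.sched_setaffinity(0, range(os.cpu_count()))
    batch["n_chars"] = [len(t) for t in batch["text"]]
    return batch

ds = ds.map(add_len, batched=True, num_proc=4)
```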
","we fixed something on the server side, glad it's fixed now"],"created_at":1687461288000,"updated_at":1687855099000,"closed_at":1687790565000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"The url is: https:\/\/huggingface.co\/datasets\/Confirm-Labs\/pile_ngrams_trigrams\r\n\r\nI am able to successfully view the \u201cFiles and versions\u201d tab: [Confirm-Labs\/pile_ngrams_trigrams at main](https:\/\/huggingface.co\/datasets\/Confirm-Labs\/pile_ngrams_trigrams\/tree\/main)\r\n\r\nAny help would be appreciated! Thanks! I hope this is the right place to report an issue like this.\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5980\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5980\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5979","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5979\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5979\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5979\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5979","id":1770198250,"node_id":"PR_kwDODunzps5TrxS_","number":5979,"title":"set dev version","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_5979). All of your documentation changes will be reflected on that endpoint.","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008087 \/ 0.011353 (-0.003266) | 0.004691 \/ 0.011008 (-0.006317) | 0.121545 \/ 0.038508 (0.083037) | 0.057436 \/ 0.023109 (0.034326) | 0.368864 \/ 0.275898 (0.092966) | 0.457199 \/ 0.323480 (0.133719) | 0.006745 \/ 0.007986 (-0.001241) | 0.003689 \/ 0.004328 (-0.000640) | 0.090480 \/ 0.004250 (0.086229) | 0.071368 \/ 0.037052 (0.034316) | 0.372788 \/ 0.258489 (0.114299) | 0.429894 \/ 0.293841 (0.136053) | 0.037544 \/ 0.128546 (-0.091002) | 0.010142 \/ 0.075646 (-0.065505) | 0.420467 \/ 0.419271 (0.001196) | 0.064359 \/ 0.043533 (0.020826) | 0.370345 \/ 0.255139 (0.115206) | 0.405220 \/ 0.283200 (0.122020) | 0.028410 \/ 0.141683 (-0.113273) | 1.824845 \/ 1.452155 (0.372690) | 1.888109 \/ 1.492716 (0.395392) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.234585 \/ 0.018006 (0.216578) | 0.499965 \/ 0.000490 (0.499476) | 0.000461 \/ 0.000200 (0.000261) | 0.000064 \/ 0.000054 (0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.032294 \/ 0.037411 (-0.005117) | 0.131769 \/ 0.014526 (0.117243) | 0.146472 \/ 0.176557 (-0.030085) | 0.210035 \/ 0.737135 (-0.527100) | 0.145600 \/ 0.296338 (-0.150739) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.507455 \/ 0.215209 (0.292246) | 5.080090 \/ 2.077655 (3.002435) | 2.506104 \/ 1.504120 (1.001984) | 2.297655 \/ 1.541195 (0.756460) | 2.324920 \/ 1.468490 (0.856430) | 0.645003 \/ 4.584777 (-3.939774) | 
4.677856 \/ 3.745712 (0.932144) | 2.254179 \/ 5.269862 (-3.015683) | 1.280663 \/ 4.565676 (-3.285013) | 0.078809 \/ 0.424275 (-0.345466) | 0.014059 \/ 0.007607 (0.006452) | 0.628053 \/ 0.226044 (0.402009) | 6.327289 \/ 2.268929 (4.058360) | 2.957918 \/ 55.444624 (-52.486706) | 2.571568 \/ 6.876477 (-4.304909) | 2.708766 \/ 2.142072 (0.566694) | 0.772868 \/ 4.805227 (-4.032360) | 0.164835 \/ 6.500664 (-6.335829) | 0.075334 \/ 0.075469 (-0.000135) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.471930 \/ 1.841788 (-0.369858) | 17.917340 \/ 8.074308 (9.843032) | 15.719327 \/ 10.191392 (5.527935) | 0.191999 \/ 0.680424 (-0.488424) | 0.022464 \/ 0.534201 (-0.511737) | 0.511038 \/ 0.579283 (-0.068245) | 0.512050 \/ 0.434364 (0.077686) | 0.608711 \/ 0.540337 (0.068373) | 0.749660 \/ 1.386936 (-0.637276) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008028 \/ 0.011353 (-0.003325) | 0.004908 \/ 0.011008 (-0.006100) | 0.092294 \/ 0.038508 (0.053786) | 0.053051 \/ 0.023109 (0.029942) | 0.453862 \/ 0.275898 (0.177964) | 0.512548 \/ 0.323480 (0.189068) | 0.004817 \/ 0.007986 (-0.003168) | 0.005330 \/ 0.004328 (0.001002) | 0.095600 \/ 0.004250 (0.091350) | 0.068763 \/ 0.037052 (0.031710) | 0.453654 \/ 0.258489 (0.195165) | 0.504995 \/ 0.293841 (0.211154) | 0.038123 \/ 0.128546 (-0.090423) | 0.010650 \/ 0.075646 (-0.064996) | 0.102854 \/ 0.419271 (-0.316417) | 0.062973 \/ 0.043533 (0.019440) | 0.430420 \/ 0.255139 (0.175281) | 0.465448 \/ 0.283200 (0.182248) | 0.029736 \/ 0.141683 (-0.111947) | 1.844225 \/ 1.452155 (0.392070) | 1.934685 \/ 1.492716 (0.441968) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.227797 \/ 0.018006 (0.209791) | 0.467868 \/ 0.000490 (0.467378) | 0.004531 \/ 0.000200 (0.004331) | 0.000105 \/ 0.000054 (0.000051) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.035632 \/ 0.037411 (-0.001780) | 0.145943 \/ 0.014526 (0.131417) | 0.151944 \/ 0.176557 (-0.024613) | 0.220519 \/ 0.737135 (-0.516616) | 0.159732 \/ 0.296338 (-0.136606) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.520641 \/ 0.215209 (0.305432) | 5.184740 \/ 2.077655 (3.107086) | 2.538751 \/ 1.504120 (1.034631) | 2.316571 \/ 1.541195 (0.775377) | 2.387898 \/ 1.468490 (0.919408) | 0.614515 \/ 4.584777 (-3.970262) | 
4.573142 \/ 3.745712 (0.827430) | 4.657052 \/ 5.269862 (-0.612809) | 2.159664 \/ 4.565676 (-2.406013) | 0.079713 \/ 0.424275 (-0.344562) | 0.014462 \/ 0.007607 (0.006855) | 0.656611 \/ 0.226044 (0.430566) | 6.481630 \/ 2.268929 (4.212702) | 3.135047 \/ 55.444624 (-52.309577) | 2.757502 \/ 6.876477 (-4.118975) | 2.851488 \/ 2.142072 (0.709415) | 0.790795 \/ 4.805227 (-4.014432) | 0.172358 \/ 6.500664 (-6.328306) | 0.080255 \/ 0.075469 (0.004786) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.571391 \/ 1.841788 (-0.270396) | 19.025224 \/ 8.074308 (10.950916) | 17.079230 \/ 10.191392 (6.887838) | 0.172823 \/ 0.680424 (-0.507601) | 0.021845 \/ 0.534201 (-0.512356) | 0.522286 \/ 0.579283 (-0.056998) | 0.510406 \/ 0.434364 (0.076042) | 0.604830 \/ 0.540337 (0.064493) | 0.735466 \/ 1.386936 (-0.651471) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#4084609bdc40d173d1daa74ad2fe98f3ead72f8e \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.010025 \/ 0.011353 (-0.001328) | 0.005699 \/ 0.011008 (-0.005310) | 0.134194 \/ 0.038508 (0.095686) | 0.056154 \/ 0.023109 (0.033045) | 0.470091 \/ 0.275898 (0.194193) | 0.539225 \/ 0.323480 (0.215745) | 0.006659 \/ 0.007986 (-0.001326) | 0.004468 \/ 0.004328 (0.000140) | 0.110040 \/ 0.004250 (0.105790) | 0.074172 \/ 0.037052 (0.037119) | 0.497450 \/ 0.258489 (0.238961) | 0.535048 \/ 0.293841 (0.241207) | 0.051195 \/ 0.128546 (-0.077352) | 0.014926 \/ 0.075646 (-0.060721) | 0.461334 \/ 0.419271 (0.042062) | 0.073773 \/ 0.043533 (0.030240) | 0.450741 \/ 0.255139 (0.195602) | 0.474853 \/ 0.283200 (0.191653) | 0.036372 \/ 0.141683 (-0.105311) | 1.982873 \/ 1.452155 (0.530719) | 1.989912 \/ 1.492716 (0.497196) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.287817 \/ 0.018006 (0.269811) | 0.613415 \/ 0.000490 (0.612926) | 0.007082 \/ 0.000200 (0.006882) | 0.000100 \/ 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.031119 \/ 0.037411 (-0.006292) | 0.129886 \/ 0.014526 (0.115361) | 0.143492 \/ 0.176557 (-0.033065) | 0.208536 \/ 0.737135 (-0.528600) | 0.147081 \/ 0.296338 (-0.149257) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.668312 \/ 0.215209 (0.453103) | 6.568609 \/ 2.077655 (4.490955) | 2.708788 \/ 1.504120 (1.204668) | 2.366737 \/ 1.541195 (0.825542) | 2.392598 \/ 1.468490 (0.924108) | 0.967582 \/ 4.584777 (-3.617195) | 
5.582743 \/ 3.745712 (1.837031) | 3.021607 \/ 5.269862 (-2.248255) | 1.866402 \/ 4.565676 (-2.699275) | 0.115998 \/ 0.424275 (-0.308277) | 0.015571 \/ 0.007607 (0.007964) | 0.820069 \/ 0.226044 (0.594025) | 8.229725 \/ 2.268929 (5.960797) | 3.437068 \/ 55.444624 (-52.007557) | 2.902312 \/ 6.876477 (-3.974164) | 3.025874 \/ 2.142072 (0.883802) | 1.230359 \/ 4.805227 (-3.574868) | 0.237341 \/ 6.500664 (-6.263323) | 0.089923 \/ 0.075469 (0.014453) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.670970 \/ 1.841788 (-0.170818) | 19.667167 \/ 8.074308 (11.592859) | 21.624423 \/ 10.191392 (11.433031) | 0.231683 \/ 0.680424 (-0.448741) | 0.029145 \/ 0.534201 (-0.505056) | 0.543441 \/ 0.579283 (-0.035842) | 0.617510 \/ 0.434364 (0.183146) | 0.612662 \/ 0.540337 (0.072324) | 0.790589 \/ 1.386936 (-0.596347) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.010324 \/ 0.011353 (-0.001029) | 0.005339 \/ 0.011008 (-0.005669) | 0.104762 \/ 0.038508 (0.066254) | 0.052631 \/ 0.023109 (0.029522) | 0.485864 \/ 0.275898 (0.209966) | 0.595768 \/ 0.323480 (0.272288) | 0.007417 \/ 0.007986 (-0.000569) | 0.005229 \/ 0.004328 (0.000900) | 0.100775 \/ 0.004250 (0.096524) | 0.067144 \/ 0.037052 (0.030092) | 0.522269 \/ 0.258489 (0.263780) | 0.592597 \/ 0.293841 (0.298756) | 0.051101 \/ 0.128546 (-0.077446) | 0.015277 \/ 0.075646 (-0.060369) | 0.115530 \/ 0.419271 (-0.303741) | 0.071922 \/ 0.043533 (0.028390) | 0.490208 \/ 0.255139 (0.235069) | 0.578936 \/ 0.283200 (0.295736) | 0.040382 \/ 0.141683 (-0.101301) | 1.986059 \/ 1.452155 (0.533904) | 2.040600 \/ 1.492716 (0.547883) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.300399 \/ 0.018006 (0.282393) | 0.624702 \/ 0.000490 (0.624212) | 0.004908 \/ 0.000200 (0.004708) | 0.000155 \/ 0.000054 (0.000100) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.038031 \/ 0.037411 (0.000619) | 0.140353 \/ 0.014526 (0.125828) | 0.152600 \/ 0.176557 (-0.023956) | 0.219165 \/ 0.737135 (-0.517970) | 0.154232 \/ 0.296338 (-0.142106) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.698855 \/ 0.215209 (0.483646) | 7.125543 \/ 2.077655 (5.047889) | 3.251222 \/ 1.504120 (1.747102) | 2.953404 \/ 1.541195 (1.412209) | 3.051108 \/ 1.468490 (1.582618) | 0.962068 \/ 4.584777 (-3.622709) | 
5.789579 \/ 3.745712 (2.043867) | 5.193271 \/ 5.269862 (-0.076591) | 2.757886 \/ 4.565676 (-1.807790) | 0.111865 \/ 0.424275 (-0.312410) | 0.014684 \/ 0.007607 (0.007077) | 0.875967 \/ 0.226044 (0.649923) | 8.818359 \/ 2.268929 (6.549430) | 4.165216 \/ 55.444624 (-51.279408) | 3.372059 \/ 6.876477 (-3.504418) | 3.486886 \/ 2.142072 (1.344813) | 1.232276 \/ 4.805227 (-3.572951) | 0.238967 \/ 6.500664 (-6.261697) | 0.091584 \/ 0.075469 (0.016115) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.850755 \/ 1.841788 (0.008968) | 20.058756 \/ 8.074308 (11.984448) | 23.761271 \/ 10.191392 (13.569879) | 0.231826 \/ 0.680424 (-0.448598) | 0.030119 \/ 0.534201 (-0.504082) | 0.532614 \/ 0.579283 (-0.046669) | 0.628968 \/ 0.434364 (0.194604) | 0.628403 \/ 0.540337 (0.088066) | 0.745648 \/ 1.386936 (-0.641288) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#a8a797cc92e860c8d0df71e0aa826f4d2690713e \"CML watermark\")\n"],"created_at":1687458734000,"updated_at":1687459342000,"closed_at":1687458742000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5979","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5979","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5979.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5979.patch","merged_at":1687458742000},"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5979\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5979\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5978","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5978\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5978\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5978\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5978","id":1770187053,"node_id":"PR_kwDODunzps5Tru2_","number":5978,"title":"Release: 
2.13.1","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006173 \/ 0.011353 (-0.005180) | 0.003773 \/ 0.011008 (-0.007235) | 0.099499 \/ 0.038508 (0.060991) | 0.037918 \/ 0.023109 (0.014809) | 0.321329 \/ 0.275898 (0.045431) | 0.379739 \/ 0.323480 (0.056259) | 0.004664 \/ 0.007986 (-0.003322) | 0.002943 \/ 0.004328 (-0.001385) | 0.077759 \/ 0.004250 (0.073509) | 0.055271 \/ 0.037052 (0.018219) | 0.329428 \/ 0.258489 (0.070939) | 0.378731 \/ 0.293841 (0.084890) | 0.027737 \/ 0.128546 (-0.100810) | 0.008566 \/ 0.075646 (-0.067081) | 0.313220 \/ 0.419271 (-0.106052) | 0.047101 \/ 0.043533 (0.003568) | 0.316211 \/ 0.255139 (0.061072) | 0.341826 \/ 0.283200 (0.058626) | 0.020838 \/ 0.141683 (-0.120845) | 1.550064 \/ 1.452155 (0.097909) | 1.706518 \/ 1.492716 (0.213801) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.203093 \/ 0.018006 (0.185087) | 0.425345 \/ 0.000490 (0.424856) | 0.004800 \/ 0.000200 (0.004600) | 0.000077 \/ 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.024590 \/ 0.037411 (-0.012821) | 0.098115 \/ 0.014526 (0.083589) | 0.108274 \/ 0.176557 (-0.068282) | 0.170804 \/ 0.737135 (-0.566332) | 0.110560 \/ 0.296338 (-0.185778) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.425251 \/ 0.215209 (0.210042) | 4.239075 \/ 2.077655 (2.161421) | 1.955601 \/ 1.504120 (0.451481) | 1.774796 \/ 1.541195 (0.233602) | 1.826641 \/ 1.468490 (0.358150) | 0.558777 \/ 4.584777 (-4.026000) | 
3.361697 \/ 3.745712 (-0.384015) | 1.764468 \/ 5.269862 (-3.505394) | 1.032280 \/ 4.565676 (-3.533396) | 0.067872 \/ 0.424275 (-0.356403) | 0.010998 \/ 0.007607 (0.003391) | 0.525682 \/ 0.226044 (0.299637) | 5.254356 \/ 2.268929 (2.985427) | 2.384332 \/ 55.444624 (-53.060292) | 2.045578 \/ 6.876477 (-4.830898) | 2.170914 \/ 2.142072 (0.028841) | 0.674782 \/ 4.805227 (-4.130445) | 0.135351 \/ 6.500664 (-6.365314) | 0.066591 \/ 0.075469 (-0.008878) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.209181 \/ 1.841788 (-0.632606) | 14.044518 \/ 8.074308 (5.970210) | 13.184705 \/ 10.191392 (2.993313) | 0.130836 \/ 0.680424 (-0.549588) | 0.016582 \/ 0.534201 (-0.517619) | 0.360005 \/ 0.579283 (-0.219279) | 0.379519 \/ 0.434364 (-0.054845) | 0.422174 \/ 0.540337 (-0.118164) | 0.515546 \/ 1.386936 (-0.871390) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006293 \/ 0.011353 (-0.005060) | 0.003784 \/ 0.011008 (-0.007224) | 0.079248 \/ 0.038508 (0.040739) | 0.038452 \/ 0.023109 (0.015343) | 0.444727 \/ 0.275898 (0.168829) | 0.500535 \/ 0.323480 (0.177055) | 0.003455 \/ 0.007986 (-0.004531) | 0.002873 \/ 0.004328 (-0.001455) | 0.077439 \/ 0.004250 (0.073189) | 0.047855 \/ 0.037052 (0.010803) | 0.448049 \/ 0.258489 (0.189560) | 0.509517 \/ 0.293841 (0.215676) | 0.028359 \/ 0.128546 (-0.100188) | 0.008503 \/ 0.075646 (-0.067143) | 0.084961 \/ 0.419271 (-0.334310) | 0.042880 \/ 0.043533 (-0.000653) | 0.436628 \/ 0.255139 (0.181489) | 0.456574 \/ 0.283200 (0.173375) | 0.019539 \/ 0.141683 (-0.122144) | 1.561273 \/ 1.452155 (0.109118) | 1.572018 \/ 1.492716 (0.079301) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.230250 \/ 0.018006 (0.212244) | 0.415189 \/ 0.000490 (0.414700) | 0.003213 \/ 0.000200 (0.003013) | 0.000080 \/ 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025541 \/ 0.037411 (-0.011871) | 0.102326 \/ 0.014526 (0.087800) | 0.110258 \/ 0.176557 (-0.066298) | 0.162488 \/ 0.737135 (-0.574647) | 0.112782 \/ 0.296338 (-0.183556) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.457936 \/ 0.215209 (0.242727) | 4.581503 \/ 2.077655 (2.503848) | 2.237659 \/ 1.504120 (0.733540) | 2.029960 \/ 1.541195 (0.488765) | 2.082911 \/ 1.468490 (0.614421) | 0.556485 \/ 4.584777 (-4.028292) | 
3.384418 \/ 3.745712 (-0.361295) | 1.748809 \/ 5.269862 (-3.521053) | 1.034759 \/ 4.565676 (-3.530917) | 0.067500 \/ 0.424275 (-0.356776) | 0.011425 \/ 0.007607 (0.003818) | 0.561340 \/ 0.226044 (0.335295) | 5.623629 \/ 2.268929 (3.354701) | 2.733587 \/ 55.444624 (-52.711038) | 2.401578 \/ 6.876477 (-4.474899) | 2.524569 \/ 2.142072 (0.382496) | 0.673170 \/ 4.805227 (-4.132057) | 0.136681 \/ 6.500664 (-6.363983) | 0.068060 \/ 0.075469 (-0.007409) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.318651 \/ 1.841788 (-0.523137) | 14.362123 \/ 8.074308 (6.287815) | 14.385964 \/ 10.191392 (4.194572) | 0.149914 \/ 0.680424 (-0.530510) | 0.016877 \/ 0.534201 (-0.517324) | 0.358406 \/ 0.579283 (-0.220877) | 0.394349 \/ 0.434364 (-0.040015) | 0.422471 \/ 0.540337 (-0.117866) | 0.513807 \/ 1.386936 (-0.873129) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#1b9ce11d1b94e6178df663ff5fcad029849d10fb \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006272 \/ 0.011353 (-0.005080) | 0.003903 \/ 0.011008 (-0.007105) | 0.100180 \/ 0.038508 (0.061672) | 0.037799 \/ 0.023109 (0.014690) | 0.385627 \/ 0.275898 (0.109729) | 0.446518 \/ 0.323480 (0.123038) | 0.004811 \/ 0.007986 (-0.003175) | 0.003032 \/ 0.004328 (-0.001296) | 0.077063 \/ 0.004250 (0.072812) | 0.055564 \/ 0.037052 (0.018512) | 0.397346 \/ 0.258489 (0.138857) | 0.443242 \/ 0.293841 (0.149401) | 0.027904 \/ 0.128546 (-0.100642) | 0.008386 \/ 0.075646 (-0.067260) | 0.315013 \/ 0.419271 (-0.104259) | 0.047943 \/ 0.043533 (0.004410) | 0.378443 \/ 0.255139 (0.123304) | 0.411472 \/ 0.283200 (0.128272) | 0.020465 \/ 0.141683 (-0.121218) | 1.526594 \/ 1.452155 (0.074439) | 1.547018 \/ 1.492716 (0.054301) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.219377 \/ 0.018006 (0.201370) | 0.430254 \/ 0.000490 (0.429764) | 0.003218 \/ 0.000200 (0.003018) | 0.000072 \/ 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.023667 \/ 0.037411 (-0.013744) | 0.099143 \/ 0.014526 (0.084617) | 0.106044 \/ 0.176557 (-0.070513) | 0.166186 \/ 0.737135 (-0.570949) | 0.108736 \/ 0.296338 (-0.187603) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.437971 \/ 0.215209 (0.222762) | 4.363675 \/ 2.077655 (2.286021) | 2.011993 \/ 1.504120 (0.507873) | 1.845189 \/ 1.541195 (0.303994) | 1.831848 \/ 1.468490 (0.363358) | 0.562402 \/ 4.584777 (-4.022375) | 
3.365259 \/ 3.745712 (-0.380453) | 1.781491 \/ 5.269862 (-3.488371) | 1.023454 \/ 4.565676 (-3.542223) | 0.067857 \/ 0.424275 (-0.356418) | 0.011076 \/ 0.007607 (0.003469) | 0.532267 \/ 0.226044 (0.306223) | 5.340344 \/ 2.268929 (3.071415) | 2.388649 \/ 55.444624 (-53.055976) | 2.055373 \/ 6.876477 (-4.821104) | 2.205047 \/ 2.142072 (0.062975) | 0.672909 \/ 4.805227 (-4.132318) | 0.135244 \/ 6.500664 (-6.365420) | 0.066184 \/ 0.075469 (-0.009285) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.206838 \/ 1.841788 (-0.634950) | 13.967075 \/ 8.074308 (5.892767) | 13.143971 \/ 10.191392 (2.952579) | 0.143991 \/ 0.680424 (-0.536433) | 0.016673 \/ 0.534201 (-0.517527) | 0.376180 \/ 0.579283 (-0.203103) | 0.386550 \/ 0.434364 (-0.047814) | 0.440590 \/ 0.540337 (-0.099747) | 0.529974 \/ 1.386936 (-0.856962) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006299 \/ 0.011353 (-0.005054) | 0.003784 \/ 0.011008 (-0.007224) | 0.077875 \/ 0.038508 (0.039367) | 0.038689 \/ 0.023109 (0.015580) | 0.421684 \/ 0.275898 (0.145786) | 0.472649 \/ 0.323480 (0.149169) | 0.003570 \/ 0.007986 (-0.004415) | 0.004448 \/ 0.004328 (0.000120) | 0.077867 \/ 0.004250 (0.073616) | 0.049514 \/ 0.037052 (0.012462) | 0.375983 \/ 0.258489 (0.117494) | 0.470632 \/ 0.293841 (0.176791) | 0.028238 \/ 0.128546 (-0.100308) | 0.008462 \/ 0.075646 (-0.067185) | 0.082452 \/ 0.419271 (-0.336819) | 0.043617 \/ 0.043533 (0.000084) | 0.400874 \/ 0.255139 (0.145735) | 0.426191 \/ 0.283200 (0.142992) | 0.020602 \/ 0.141683 (-0.121081) | 1.567658 \/ 1.452155 (0.115504) | 1.572610 \/ 1.492716 (0.079893) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.246144 \/ 0.018006 (0.228138) | 0.419402 \/ 0.000490 (0.418913) | 0.001691 \/ 0.000200 (0.001491) | 0.000071 \/ 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.026105 \/ 0.037411 (-0.011306) | 0.104734 \/ 0.014526 (0.090208) | 0.110257 \/ 0.176557 (-0.066300) | 0.161429 \/ 0.737135 (-0.575706) | 0.114367 \/ 0.296338 (-0.181972) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.453352 \/ 0.215209 (0.238143) | 4.537924 \/ 2.077655 (2.460269) | 2.196193 \/ 1.504120 (0.692073) | 2.002087 \/ 1.541195 (0.460892) | 2.041722 \/ 1.468490 (0.573231) | 0.561643 \/ 4.584777 (-4.023134) | 
3.449108 \/ 3.745712 (-0.296605) | 2.862800 \/ 5.269862 (-2.407062) | 1.387895 \/ 4.565676 (-3.177782) | 0.068076 \/ 0.424275 (-0.356199) | 0.011568 \/ 0.007607 (0.003961) | 0.559279 \/ 0.226044 (0.333235) | 5.598738 \/ 2.268929 (3.329809) | 2.676649 \/ 55.444624 (-52.767975) | 2.334588 \/ 6.876477 (-4.541889) | 2.376215 \/ 2.142072 (0.234142) | 0.673109 \/ 4.805227 (-4.132118) | 0.137587 \/ 6.500664 (-6.363077) | 0.069131 \/ 0.075469 (-0.006338) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.307332 \/ 1.841788 (-0.534456) | 14.536036 \/ 8.074308 (6.461728) | 14.173734 \/ 10.191392 (3.982342) | 0.145143 \/ 0.680424 (-0.535281) | 0.016662 \/ 0.534201 (-0.517539) | 0.366901 \/ 0.579283 (-0.212383) | 0.394498 \/ 0.434364 (-0.039866) | 0.430546 \/ 0.540337 (-0.109792) | 0.518950 \/ 1.386936 (-0.867986) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#682d21e94ab1e64c11b583de39dc4c93f0101c5a \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008122 \/ 0.011353 (-0.003231) | 0.005585 \/ 0.011008 (-0.005424) | 0.121219 \/ 0.038508 (0.082711) | 0.047616 \/ 0.023109 (0.024507) | 0.440576 \/ 0.275898 (0.164678) | 0.491053 \/ 0.323480 (0.167573) | 0.004774 \/ 0.007986 (-0.003211) | 0.006758 \/ 0.004328 (0.002430) | 0.103852 \/ 0.004250 (0.099602) | 0.071560 \/ 0.037052 (0.034508) | 0.463107 \/ 0.258489 (0.204618) | 0.516904 \/ 0.293841 (0.223063) | 0.048052 \/ 0.128546 (-0.080494) | 0.013679 \/ 0.075646 (-0.061968) | 0.428383 \/ 0.419271 (0.009112) | 0.069468 \/ 0.043533 (0.025936) | 0.432593 \/ 0.255139 (0.177454) | 0.471810 \/ 0.283200 (0.188611) | 0.037541 \/ 0.141683 (-0.104142) | 1.823490 \/ 1.452155 (0.371335) | 1.922558 \/ 1.492716 (0.429842) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.252315 \/ 0.018006 (0.234309) | 0.541757 \/ 0.000490 (0.541267) | 0.000373 \/ 0.000200 (0.000173) | 0.000083 \/ 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030361 \/ 0.037411 (-0.007050) | 0.125928 \/ 0.014526 (0.111402) | 0.145102 \/ 0.176557 (-0.031455) | 0.209798 \/ 0.737135 (-0.527337) | 0.147349 \/ 0.296338 (-0.148990) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.627554 \/ 0.215209 (0.412345) | 5.917422 \/ 2.077655 (3.839767) | 2.491083 \/ 1.504120 (0.986963) | 2.147078 \/ 1.541195 (0.605883) | 2.167511 \/ 1.468490 (0.699021) | 0.903061 \/ 4.584777 (-3.681716) | 
5.518537 \/ 3.745712 (1.772825) | 2.654348 \/ 5.269862 (-2.615514) | 1.645121 \/ 4.565676 (-2.920556) | 0.103782 \/ 0.424275 (-0.320493) | 0.013048 \/ 0.007607 (0.005441) | 0.756732 \/ 0.226044 (0.530687) | 7.622873 \/ 2.268929 (5.353945) | 3.122689 \/ 55.444624 (-52.321936) | 2.537735 \/ 6.876477 (-4.338742) | 2.640090 \/ 2.142072 (0.498018) | 1.128635 \/ 4.805227 (-3.676593) | 0.228089 \/ 6.500664 (-6.272575) | 0.086207 \/ 0.075469 (0.010738) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.561591 \/ 1.841788 (-0.280197) | 18.110299 \/ 8.074308 (10.035991) | 20.718017 \/ 10.191392 (10.526625) | 0.225741 \/ 0.680424 (-0.454682) | 0.031738 \/ 0.534201 (-0.502463) | 0.530789 \/ 0.579283 (-0.048495) | 0.607364 \/ 0.434364 (0.173000) | 0.581593 \/ 0.540337 (0.041256) | 0.726033 \/ 1.386936 (-0.660903) |\n\n<\/details>\nPyArrow==latest\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009323 \/ 0.011353 (-0.002030) | 0.005360 \/ 0.011008 (-0.005649) | 0.103608 \/ 0.038508 (0.065100) | 0.050158 \/ 0.023109 (0.027049) | 0.499906 \/ 0.275898 (0.224008) | 0.561005 \/ 0.323480 (0.237525) | 0.005093 \/ 0.007986 (-0.002892) | 0.008285 \/ 0.004328 (0.003956) | 0.103446 \/ 0.004250 (0.099196) | 0.061478 \/ 0.037052 (0.024426) | 0.494016 \/ 0.258489 (0.235527) | 0.537550 \/ 0.293841 (0.243709) | 0.048829 \/ 0.128546 (-0.079717) | 0.017032 \/ 0.075646 (-0.058614) | 0.107748 \/ 0.419271 (-0.311524) | 0.065607 \/ 0.043533 (0.022074) | 0.488709 \/ 0.255139 (0.233570) | 0.512023 \/ 0.283200 (0.228823) | 0.032067 \/ 0.141683 (-0.109616) | 1.907585 \/ 1.452155 (0.455431) | 1.960994 \/ 1.492716 (0.468278) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.278378 \/ 0.018006 (0.260371) | 0.551474 \/ 0.000490 (0.550985) | 0.006886 \/ 0.000200 (0.006686) | 0.000106 \/ 0.000054 (0.000051) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030674 \/ 0.037411 (-0.006737) | 0.135179 \/ 0.014526 (0.120654) | 0.133703 \/ 0.176557 (-0.042853) | 0.198923 \/ 0.737135 (-0.538212) | 0.155108 \/ 0.296338 (-0.141231) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.690566 \/ 0.215209 (0.475357) | 6.789594 \/ 2.077655 (4.711940) | 2.940668 \/ 1.504120 (1.436549) | 2.562431 \/ 1.541195 (1.021236) | 2.554232 \/ 1.468490 (1.085742) | 0.888470 \/ 4.584777 (-3.696307) | 
5.672318 \/ 3.745712 (1.926606) | 2.741626 \/ 5.269862 (-2.528236) | 1.818336 \/ 4.565676 (-2.747340) | 0.110434 \/ 0.424275 (-0.313841) | 0.014114 \/ 0.007607 (0.006507) | 0.830632 \/ 0.226044 (0.604588) | 8.270787 \/ 2.268929 (6.001859) | 3.723486 \/ 55.444624 (-51.721139) | 2.993671 \/ 6.876477 (-3.882806) | 2.918273 \/ 2.142072 (0.776201) | 1.105337 \/ 4.805227 (-3.699891) | 0.222976 \/ 6.500664 (-6.277688) | 0.085290 \/ 0.075469 (0.009820) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.816027 \/ 1.841788 (-0.025760) | 18.496850 \/ 8.074308 (10.422541) | 20.457032 \/ 10.191392 (10.265640) | 0.243533 \/ 0.680424 (-0.436891) | 0.027044 \/ 0.534201 (-0.507157) | 0.500752 \/ 0.579283 (-0.078531) | 0.620963 \/ 0.434364 (0.186599) | 0.607995 \/ 0.540337 (0.067658) | 0.722915 \/ 1.386936 (-0.664021) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#682d21e94ab1e64c11b583de39dc4c93f0101c5a \"CML watermark\")\n"],"created_at":1687458191000,"updated_at":1687459224000,"closed_at":1687458616000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5978","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5978","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5978.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5978.patch","merged_at":1687458616000},"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5978\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5978\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5976","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5976\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5976\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5976\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5976","id":1768503913,"node_id":"PR_kwDODunzps5TmAFp","number":5976,"title":"Avoid stuck map operation when subprocesses 
crashes","user":{"login":"pappacena","id":1213561,"node_id":"MDQ6VXNlcjEyMTM1NjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1213561?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/pappacena","html_url":"https:\/\/github.com\/pappacena","followers_url":"https:\/\/api.github.com\/users\/pappacena\/followers","following_url":"https:\/\/api.github.com\/users\/pappacena\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/pappacena\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/pappacena\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/pappacena\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/pappacena\/orgs","repos_url":"https:\/\/api.github.com\/users\/pappacena\/repos","events_url":"https:\/\/api.github.com\/users\/pappacena\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/pappacena\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Do you think this can be fixed at the Pool level ? Ideally it should be the Pool responsibility to handle this, not the `map` code. We could even subclass Pool if needed (at least the one from `multiprocess`)","@lhoestq it makes sense to me. Just pushed a refactoring creating a `class ProcessPool(multiprocess.pool.Pool)` to keep track of the PID changes.","_The documentation is not available anymore as the PR was closed or merged._","I managed to raise an error without subclassing Pool with two additions to `iflatmap_unordered`:\r\n\r\n1. at the beggining\r\n```python\r\noriginal_pool = list(pool._pool)\r\n```\r\n\r\n2. in the loop\r\n```python\r\nif any(async_result._pool != original_pool for async_result in async_results) and queue.empty():\r\n raise RuntimeError(\r\n \"One of the subprocesses has abruptly died during map operation.\"\r\n \"To debug the error, disable multiprocessing.\"\r\n )\r\n```\r\n\r\nIt's still a fix that only works for `iflatmap_unordered` (so not for map, imap etc) but is maybe simpler that subclassing. It also works for both multiprocessing.Pool and multiprocess.Pool","@lhoestq sorry for the delay. Busy weeks here. \r\n\r\nI just pushed the change you requested. It looks closer to the original proposal, actually.\r\n\r\nIt seems that `map` actually uses `iflatmap_unordered` ([here](https:\/\/github.com\/huggingface\/datasets\/blob\/819bb4346434912eb405ce3f3e9f21dc25a2fe85\/src\/datasets\/arrow_dataset.py#L1509)). I think this solution works fine for the `map` method (which is the one being tested by the new `tests\/test_arrow_dataset.py::BaseDatasetTest::test_map_crash_subprocess`, right?).","Yes fixing iflatmap_unordered does fix Dataset.map, but it won't fix any Pool.map that we may use elsewhere so we'll have to keep this in mind.","It looks all good to me, feel free to fix code formatting by running `make style` and we can merge :)","> Yes fixing iflatmap_unordered does fix Dataset.map, but it won't fix any Pool.map that we may use elsewhere so we'll have to keep this in mind.\r\n\r\nRight, I agree. The best way moving forward is probably not using the buggy `multiprocess.Pool` anymore, and replace it with `concurrent.futures.ProcessPoolExecutor` as much as possible.\r\n\r\nAnyway, I've run `make style` now. Thanks for the support!","It looks like checking the async_result._pool doesn't always work - sorry about that. We might just go back to your original solution then. 
Would also be cool to open an issue in `multiprocess` to ask if they have a solution or if they plan to fix this.","@lhoestq no problem! Reverted to the previous version.\r\n\r\nTBH, given the discussions [in this python issue](https:\/\/github.com\/python\/cpython\/issues\/66587), I don't think a fix for this error in `multiprocess` will be merged upstream any time soon...","
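A minimal, self-contained sketch of the detection idea discussed in this thread. It is not the PR's actual code: it uses the standard `multiprocessing` module (the thread notes the `multiprocess` fork has the same API) and relies on CPython's private `Pool._pool` attribute.

```python
# Sketch only: detect an abruptly killed worker by snapshotting Pool._pool.
# Assumption: the pool silently replaces killed workers, so a changed
# _pool list means a worker died instead of returning a result.
import multiprocessing
import os
import time


def kill_self(_):
    # Simulate an abrupt worker death (OOM killer, segfault, SIGKILL, ...).
    os.kill(os.getpid(), 9)


if __name__ == "__main__":
    with multiprocessing.Pool(2) as pool:
        original_pool = list(pool._pool)  # snapshot of the worker processes
        pool.map_async(kill_self, range(2))
        time.sleep(1)  # give the pool time to reap and replace dead workers
        if list(pool._pool) != original_pool:
            raise RuntimeError(
                "One of the subprocesses has abruptly died during map operation. "
                "To debug the error, disable multiprocessing."
            )
```

Without a check of this kind, a plain `pool.map` would block forever waiting for results the killed worker can never deliver.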
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006060 \/ 0.011353 (-0.005293) | 0.003695 \/ 0.011008 (-0.007313) | 0.080484 \/ 0.038508 (0.041976) | 0.061894 \/ 0.023109 (0.038785) | 0.312510 \/ 0.275898 (0.036612) | 0.352398 \/ 0.323480 (0.028918) | 0.004638 \/ 0.007986 (-0.003348) | 0.002918 \/ 0.004328 (-0.001410) | 0.062932 \/ 0.004250 (0.058681) | 0.050859 \/ 0.037052 (0.013807) | 0.316812 \/ 0.258489 (0.058323) | 0.357684 \/ 0.293841 (0.063843) | 0.027622 \/ 0.128546 (-0.100924) | 0.008012 \/ 0.075646 (-0.067634) | 0.260970 \/ 0.419271 (-0.158302) | 0.045807 \/ 0.043533 (0.002275) | 0.321235 \/ 0.255139 (0.066096) | 0.343162 \/ 0.283200 (0.059962) | 0.021136 \/ 0.141683 (-0.120547) | 1.465886 \/ 1.452155 (0.013731) | 1.500216 \/ 1.492716 (0.007500) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.187286 \/ 0.018006 (0.169279) | 0.428724 \/ 0.000490 (0.428235) | 0.003029 \/ 0.000200 (0.002829) | 0.000063 \/ 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.022703 \/ 0.037411 (-0.014708) | 0.072740 \/ 0.014526 (0.058215) | 0.083436 \/ 0.176557 (-0.093120) | 0.144559 \/ 0.737135 (-0.592577) | 0.083958 \/ 0.296338 (-0.212380) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.435729 \/ 0.215209 (0.220520) | 4.351146 \/ 2.077655 (2.273491) | 2.316627 \/ 1.504120 (0.812508) | 2.144587 \/ 1.541195 (0.603393) | 2.209182 \/ 1.468490 (0.740692) | 0.501131 \/ 4.584777 (-4.083646) | 
3.077085 \/ 3.745712 (-0.668627) | 4.353706 \/ 5.269862 (-0.916156) | 2.621523 \/ 4.565676 (-1.944154) | 0.058976 \/ 0.424275 (-0.365299) | 0.006467 \/ 0.007607 (-0.001141) | 0.506690 \/ 0.226044 (0.280646) | 5.085787 \/ 2.268929 (2.816858) | 2.731336 \/ 55.444624 (-52.713289) | 2.419451 \/ 6.876477 (-4.457025) | 2.583649 \/ 2.142072 (0.441577) | 0.589869 \/ 4.805227 (-4.215359) | 0.131040 \/ 6.500664 (-6.369624) | 0.061332 \/ 0.075469 (-0.014137) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.220542 \/ 1.841788 (-0.621245) | 18.169643 \/ 8.074308 (10.095335) | 13.251704 \/ 10.191392 (3.060312) | 0.142952 \/ 0.680424 (-0.537472) | 0.016639 \/ 0.534201 (-0.517562) | 0.334851 \/ 0.579283 (-0.244432) | 0.361865 \/ 0.434364 (-0.072499) | 0.380933 \/ 0.540337 (-0.159404) | 0.527374 \/ 1.386936 (-0.859562) |\n\n<\/details>\nPyArrow==latest\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006319 \/ 0.011353 (-0.005034) | 0.003778 \/ 0.011008 (-0.007231) | 0.062388 \/ 0.038508 (0.023880) | 0.062228 \/ 0.023109 (0.039119) | 0.373727 \/ 0.275898 (0.097829) | 0.399442 \/ 0.323480 (0.075962) | 0.005434 \/ 0.007986 (-0.002551) | 0.003020 \/ 0.004328 (-0.001308) | 0.062774 \/ 0.004250 (0.058524) | 0.052784 \/ 0.037052 (0.015732) | 0.376428 \/ 0.258489 (0.117939) | 0.405039 \/ 0.293841 (0.111198) | 0.027884 \/ 0.128546 (-0.100662) | 0.008086 \/ 0.075646 (-0.067561) | 0.067078 \/ 0.419271 (-0.352194) | 0.042927 \/ 0.043533 (-0.000606) | 0.372142 \/ 0.255139 (0.117003) | 0.389604 \/ 0.283200 (0.106405) | 0.021582 \/ 0.141683 (-0.120101) | 1.473332 \/ 1.452155 (0.021177) | 1.536018 \/ 1.492716 (0.043302) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.184729 \/ 0.018006 (0.166723) | 0.421065 \/ 0.000490 (0.420575) | 0.002681 \/ 0.000200 (0.002481) | 0.000070 \/ 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.026067 \/ 0.037411 (-0.011344) | 0.077138 \/ 0.014526 (0.062612) | 0.085178 \/ 0.176557 (-0.091379) | 0.139681 \/ 0.737135 (-0.597454) | 0.087528 \/ 0.296338 (-0.208810) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.444899 \/ 0.215209 (0.229690) | 4.459168 \/ 2.077655 (2.381513) | 2.408792 \/ 1.504120 (0.904672) | 2.237243 \/ 1.541195 (0.696048) | 2.296298 \/ 1.468490 (0.827808) | 0.498508 \/ 4.584777 (-4.086269) | 
3.067064 \/ 3.745712 (-0.678648) | 4.470577 \/ 5.269862 (-0.799284) | 2.701972 \/ 4.565676 (-1.863705) | 0.057711 \/ 0.424275 (-0.366564) | 0.006443 \/ 0.007607 (-0.001164) | 0.524046 \/ 0.226044 (0.298002) | 5.229928 \/ 2.268929 (2.961000) | 2.862101 \/ 55.444624 (-52.582523) | 2.545972 \/ 6.876477 (-4.330504) | 2.606459 \/ 2.142072 (0.464387) | 0.593285 \/ 4.805227 (-4.211942) | 0.124913 \/ 6.500664 (-6.375751) | 0.061942 \/ 0.075469 (-0.013527) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.322162 \/ 1.841788 (-0.519625) | 18.745796 \/ 8.074308 (10.671488) | 13.955443 \/ 10.191392 (3.764051) | 0.145610 \/ 0.680424 (-0.534814) | 0.016817 \/ 0.534201 (-0.517384) | 0.331180 \/ 0.579283 (-0.248103) | 0.343019 \/ 0.434364 (-0.091345) | 0.379459 \/ 0.540337 (-0.160878) | 0.526403 \/ 1.386936 (-0.860533) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#aca4cdcc79f16ec5157a2a3a665fdef0e3aa176d \"CML watermark\")\n"],"created_at":1687382311000,"updated_at":1688983119000,"closed_at":1688982607000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5976","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5976","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5976.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5976.patch","merged_at":1688982607000},"body":"I've been using Dataset.map() with `num_proc=os.cpu_count()` to leverage multicore processing for my datasets, but from time to time I get stuck processes waiting forever. 
Apparently, when one of the subprocesses is abruptly killed (OOM killer, segfault, SIGKILL, etc.), the main process keeps waiting for the async task sent to that child process to finish.\r\n\r\nThe issue seems easy to reproduce with the following script:\r\n\r\n```python\r\nimport os\r\nfrom datasets import Dataset, Features, Value\r\n\r\n\r\ndef do_stuck(item):\r\n    os.kill(os.getpid(), 9)\r\n\r\ndata = {\r\n    \"col1\": list(range(5)),\r\n    \"col2\": list(range(5)),\r\n}\r\n\r\nds = Dataset.from_dict(\r\n    data,\r\n    features=Features({\r\n        \"col1\": Value(\"int64\"),\r\n        \"col2\": Value(\"int64\"),\r\n    }),\r\n)\r\n\r\nprint(ds.map(do_stuck, num_proc=4))\r\n```\r\n\r\nThis is an old behavior in Python, which was apparently fixed a few years ago in `concurrent.futures.ProcessPoolExecutor` ([ref](https:\/\/bugs.python.org\/issue9205)), but not in `multiprocessing.pool.Pool` \/ `multiprocess.pool.Pool`, which is used by `Dataset.map` ([ref](https:\/\/bugs.python.org\/issue22393)).\r\n\r\nThis PR is an idea to detect when a child process gets killed and raise a `RuntimeError` that warns the dataset.map() caller.\r\n\r\nEDIT: Related proposal for future improvement: https:\/\/github.com\/huggingface\/datasets\/discussions\/5977","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5976\/reactions","total_count":2,"+1":2,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5976\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5975","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5975\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5975\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5975\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5975","id":1768271343,"node_id":"I_kwDODunzps5pZa3v","number":5975,"title":"Streaming Dataset behind Proxy - FileNotFoundError","user":{"login":"Veluchs","id":135350576,"node_id":"U_kgDOCBFJMA","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/135350576?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Veluchs","html_url":"https:\/\/github.com\/Veluchs","followers_url":"https:\/\/api.github.com\/users\/Veluchs\/followers","following_url":"https:\/\/api.github.com\/users\/Veluchs\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Veluchs\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Veluchs\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Veluchs\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Veluchs\/orgs","repos_url":"https:\/\/api.github.com\/users\/Veluchs\/repos","events_url":"https:\/\/api.github.com\/users\/Veluchs\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Veluchs\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Duplicate of #","Hi ! 
can you try to set the uppercase environment variables `HTTP_PROXY` and `HTTPS_PROXY`?\r\n\r\nWe use `aiohttp` for streaming, and it uses case-sensitive environment variables","Hi, thanks for the quick reply.\r\n\r\nI set the uppercase env variables with\r\n\r\n```python\r\nos.environ['HTTP_PROXY'] = \"http:\/\/example.com:xxxx\"\r\nos.environ['HTTPS_PROXY'] = \"http:\/\/example.com:xxxx\"\r\n```\r\n\r\nHowever, I still get the same error.\r\n\r\nOne thing that could be helpful: when downloading a dataset without streaming, I get the following message:\r\n_HF google storage unreachable. Downloading and preparing it from source_.\r\nThe download does however work as expected.\r\n","Are you able to use `aiohttp` to get the file at `https:\/\/huggingface.co\/datasets\/facebook\/voxpopuli\/resolve\/main\/data\/n_files.json` using your proxy ?","It only works when passing trust_env=True when creating the ClientSession, as well as setting ssl=False.\r\n\r\nWorking example:\r\n\r\n```python\r\nimport os\r\n\r\nos.environ['HTTP_PROXY'] = \"xyz\"\r\nos.environ['HTTPS_PROXY'] = \"xyz\"\r\n\r\nimport asyncio\r\nimport aiohttp\r\n\r\nasync def download_pep(url):\r\n    async with aiohttp.ClientSession(trust_env=True) as session:\r\n        print(\"1\")\r\n        async with session.get(url, ssl=False) as resp:\r\n            print(\"2\")\r\n            content = await resp.text()\r\n            print(content)\r\n            return content\r\n\r\nasyncio.run(download_pep(\"https:\/\/huggingface.co\/datasets\/facebook\/voxpopuli\/resolve\/main\/data\/n_files.json\"))\r\n```\r\n\r\nSSL verification has been a problem with other packages as well. Usually I circumvent the problem by setting\r\n```python\r\nimport ssl\r\nssl._create_default_https_context = ssl._create_unverified_context\r\n```\r\n(probably not the best idea for security), although here aiohttp does not seem to use this default context.","We do pass `trust_env` as well. Could you share the full stack trace you get when streaming using `datasets`? That could help locate where we might have forgotten to pass `trust_env`","Is there a way to disable SSL verification when streaming a dataset? 
I suspect this might be the issue with my proxy.\r\n\r\nHere you go:\r\n\r\n```\r\nFileNotFoundError Traceback (most recent call last)\r\nCell In[8], line 3\r\n 1 from datasets import load_dataset\r\n----> 3 ds = load_dataset(\"facebook\/voxpopuli\", name=\"de\", streaming=True)\r\n 5 sample = next(iter(ds))\r\n\r\nFile ~\/.conda\/envs\/audio_hf\/lib\/python3.10\/site-packages\/datasets\/load.py:1790, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)\r\n 1788 # Return iterable dataset in case of streaming\r\n 1789 if streaming:\r\n-> 1790     return builder_instance.as_streaming_dataset(split=split)\r\n 1792 # Some datasets are already processed on the HF google storage\r\n 1793 # Don't try downloading from Google storage for the packaged datasets as text, json, csv or pandas\r\n 1794 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES\r\n\r\nFile ~\/.conda\/envs\/audio_hf\/lib\/python3.10\/site-packages\/datasets\/builder.py:1281, in DatasetBuilder.as_streaming_dataset(self, split, base_path)\r\n 1274 dl_manager = StreamingDownloadManager(\r\n 1275     base_path=base_path or self.base_path,\r\n 1276     download_config=DownloadConfig(use_auth_token=self.use_auth_token, storage_options=self.storage_options),\r\n 1277     dataset_name=self.name,\r\n 1278     data_dir=self.config.data_dir,\r\n 1279 )\r\n 1280 self._check_manual_download(dl_manager)\r\n-> 1281 splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}\r\n 1282 # By default, return all splits\r\n 1283 if split is None:\r\n\r\nFile ~\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/facebook--voxpopuli\/b5ff837284f0778eefe0f642734e142d8c3f574eba8c9c8a4b13602297f73604\/voxpopuli.py:120, in Voxpopuli._split_generators(self, dl_manager)\r\n 118 def _split_generators(self, dl_manager):\r\n 119     n_shards_path = dl_manager.download_and_extract(_N_SHARDS_FILE)\r\n--> 120     with open(n_shards_path) as f:\r\n 121         n_shards = json.load(f)\r\n 123     if self.config.name == \"en_accented\":\r\n\r\nFile ~\/.conda\/envs\/audio_hf\/lib\/python3.10\/site-packages\/datasets\/streaming.py:71, in extend_module_for_streaming.<locals>.wrap_auth.<locals>.wrapper(*args, **kwargs)\r\n 69 @wraps(function)\r\n 70 def wrapper(*args, **kwargs):\r\n---> 71     return function(*args, use_auth_token=use_auth_token, **kwargs)\r\n\r\nFile ~\/.conda\/envs\/audio_hf\/lib\/python3.10\/site-packages\/datasets\/download\/streaming_download_manager.py:517, in xopen(file, mode, use_auth_token, *args, **kwargs)\r\n 515 except FileNotFoundError:\r\n 516     if file.startswith(config.HF_ENDPOINT):\r\n--> 517         raise FileNotFoundError(\r\n 518             file + \"\\nIf the repo is private or gated, make sure to log in with `huggingface-cli login`.\"\r\n 519         ) from None\r\n 520     else:\r\n 521         raise\r\n\r\nFileNotFoundError: https:\/\/huggingface.co\/datasets\/facebook\/voxpopuli\/resolve\/main\/data\/n_files.json\r\nIf the repo is private or gated, make sure to log in with `huggingface-cli login`.\r\n```","> Is there a way to disable SSL verification when streaming a dataset?\r\n\r\nI don't think so.\r\n\r\nWe use the `fsspec` HTTPFileSystem implementation, which is based on `aiohttp`. If you register a subclass of HTTPFileSystem that has SSL disabled by default it could work, but I wouldn't recommend it because it can raise security issues.","Okay, thanks for your help! I guess I have to figure out how to improve the proxy environment \/ see if I can make it work with SSL connections."],"created_at":1687374602000,"updated_at":1688104539000,"closed_at":1688104538000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\r\n\r\nWhen trying to stream a dataset, I get the following error after a few minutes of waiting.\r\n\r\n```\r\nFileNotFoundError: https:\/\/huggingface.co\/datasets\/facebook\/voxpopuli\/resolve\/main\/data\/n_files.json\r\nIf the repo is private or gated, make sure to log in with `huggingface-cli login`.\r\n```\r\n\r\nI have already set the proxy environment variables. Downloading a dataset without streaming works as expected.\r\nStill, I suspect that this is connected to being behind a proxy.\r\n\r\nIs there a way to set the proxy for streaming datasets? 
Possibly a keyword argument that gets passed to fsspec?\r\n\r\n### Steps to reproduce the bug\r\n\r\nThis is the code I use:\r\n\r\n```python\r\nimport os\r\nos.environ['http_proxy'] = \"http:\/\/example.com:xxxx\"\r\nos.environ['https_proxy'] = \"http:\/\/example.com:xxxx\"\r\n\r\n\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(\"facebook\/voxpopuli\", name=\"de\", streaming=True)\r\n```\r\n\r\n### Expected behavior\r\n\r\nI would expect the streaming functionality to use the configured proxy settings.\r\n\r\n### Environment info\r\n\r\n- `datasets` version: 2.13.0\r\n- Platform: Linux-5.15.0-73-generic-x86_64-with-glibc2.35\r\n- Python version: 3.10.11\r\n- Huggingface_hub version: 0.15.1\r\n- PyArrow version: 11.0.0\r\n- Pandas version: 2.0.2\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5975\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5975\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5974","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5974\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5974\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5974\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5974","id":1767981231,"node_id":"PR_kwDODunzps5TkXCb","number":5974,"title":"Deprecate `errors` param in favor of `encoding_errors` in text builder","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
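A hedged sketch of the workaround mentioned in the proxy thread above (registering an `HTTPFileSystem` subclass with SSL disabled). The class and function names are illustrative, it assumes fsspec's `get_client` hook and `register_implementation` API, and, as the thread notes, disabling certificate verification is a security trade-off.

```python
# Sketch only: route fsspec "https" traffic through an aiohttp session that
# honours proxy env vars and skips SSL verification. Not recommended for
# production; names below are illustrative, not a datasets API.
import aiohttp
import fsspec
from fsspec.implementations.http import HTTPFileSystem


async def get_proxy_friendly_client(**kwargs):
    # trust_env=True makes aiohttp honour HTTP_PROXY / HTTPS_PROXY;
    # ssl=False on the connector disables certificate verification.
    return aiohttp.ClientSession(
        trust_env=True, connector=aiohttp.TCPConnector(ssl=False), **kwargs
    )


class ProxyFriendlyHTTPFileSystem(HTTPFileSystem):
    def __init__(self, *args, **kwargs):
        kwargs["get_client"] = get_proxy_friendly_client
        super().__init__(*args, **kwargs)


# clobber=True replaces the default "https" implementation, so subsequent
# streaming reads go through the proxy-friendly session.
fsspec.register_implementation("https", ProxyFriendlyHTTPFileSystem, clobber=True)
```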
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006518 \/ 0.011353 (-0.004835) | 0.004121 \/ 0.011008 (-0.006887) | 0.103350 \/ 0.038508 (0.064842) | 0.045030 \/ 0.023109 (0.021920) | 0.351670 \/ 0.275898 (0.075772) | 0.408110 \/ 0.323480 (0.084630) | 0.003883 \/ 0.007986 (-0.004102) | 0.003352 \/ 0.004328 (-0.000977) | 0.078786 \/ 0.004250 (0.074535) | 0.063977 \/ 0.037052 (0.026925) | 0.369759 \/ 0.258489 (0.111270) | 0.415103 \/ 0.293841 (0.121262) | 0.033069 \/ 0.128546 (-0.095477) | 0.008863 \/ 0.075646 (-0.066783) | 0.353660 \/ 0.419271 (-0.065611) | 0.055714 \/ 0.043533 (0.012181) | 0.350458 \/ 0.255139 (0.095319) | 0.369505 \/ 0.283200 (0.086305) | 0.022822 \/ 0.141683 (-0.118861) | 1.537588 \/ 1.452155 (0.085433) | 1.590569 \/ 1.492716 (0.097853) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.206826 \/ 0.018006 (0.188819) | 0.471625 \/ 0.000490 (0.471135) | 0.005188 \/ 0.000200 (0.004988) | 0.000316 \/ 0.000054 (0.000261) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.028148 \/ 0.037411 (-0.009263) | 0.111941 \/ 0.014526 (0.097415) | 0.122106 \/ 0.176557 (-0.054451) | 0.181127 \/ 0.737135 (-0.556009) | 0.127534 \/ 0.296338 (-0.168805) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.409520 \/ 0.215209 (0.194311) | 4.098455 \/ 2.077655 (2.020800) | 1.852447 \/ 1.504120 (0.348327) | 1.657036 \/ 1.541195 (0.115842) | 1.709624 \/ 1.468490 (0.241134) | 0.542806 \/ 4.584777 (-4.041970) | 
3.809352 \/ 3.745712 (0.063640) | 1.855412 \/ 5.269862 (-3.414449) | 1.109180 \/ 4.565676 (-3.456497) | 0.066801 \/ 0.424275 (-0.357474) | 0.011832 \/ 0.007607 (0.004225) | 0.518338 \/ 0.226044 (0.292293) | 5.190108 \/ 2.268929 (2.921179) | 2.320602 \/ 55.444624 (-53.124023) | 1.991416 \/ 6.876477 (-4.885060) | 2.106989 \/ 2.142072 (-0.035084) | 0.668914 \/ 4.805227 (-4.136313) | 0.145325 \/ 6.500664 (-6.355340) | 0.065145 \/ 0.075469 (-0.010324) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.254706 \/ 1.841788 (-0.587082) | 14.707264 \/ 8.074308 (6.632956) | 14.615423 \/ 10.191392 (4.424031) | 0.170764 \/ 0.680424 (-0.509659) | 0.017905 \/ 0.534201 (-0.516296) | 0.435606 \/ 0.579283 (-0.143677) | 0.434648 \/ 0.434364 (0.000284) | 0.520813 \/ 0.540337 (-0.019524) | 0.633902 \/ 1.386936 (-0.753034) |\n\n<\/details>\nPyArrow==latest\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007212 \/ 0.011353 (-0.004141) | 0.004301 \/ 0.011008 (-0.006707) | 0.080767 \/ 0.038508 (0.042258) | 0.051949 \/ 0.023109 (0.028840) | 0.398473 \/ 0.275898 (0.122575) | 0.465038 \/ 0.323480 (0.141558) | 0.005580 \/ 0.007986 (-0.002406) | 0.003556 \/ 0.004328 (-0.000773) | 0.080682 \/ 0.004250 (0.076431) | 0.059517 \/ 0.037052 (0.022464) | 0.421171 \/ 0.258489 (0.162682) | 0.459752 \/ 0.293841 (0.165911) | 0.032960 \/ 0.128546 (-0.095586) | 0.009107 \/ 0.075646 (-0.066539) | 0.086382 \/ 0.419271 (-0.332889) | 0.056053 \/ 0.043533 (0.012520) | 0.393357 \/ 0.255139 (0.138218) | 0.412972 \/ 0.283200 (0.129772) | 0.031115 \/ 0.141683 (-0.110568) | 1.576961 \/ 1.452155 (0.124806) | 1.627249 \/ 1.492716 (0.134533) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.227618 \/ 0.018006 (0.209612) | 0.444640 \/ 0.000490 (0.444150) | 0.004376 \/ 0.000200 (0.004176) | 0.000092 \/ 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030826 \/ 0.037411 (-0.006586) | 0.117587 \/ 0.014526 (0.103062) | 0.127467 \/ 0.176557 (-0.049089) | 0.184440 \/ 0.737135 (-0.552695) | 0.133664 \/ 0.296338 (-0.162675) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.443183 \/ 0.215209 (0.227974) | 4.408312 \/ 2.077655 (2.330658) | 2.132487 \/ 1.504120 (0.628367) | 1.923632 \/ 1.541195 (0.382438) | 1.967882 \/ 1.468490 (0.499392) | 0.552954 \/ 4.584777 (-4.031823) | 
3.777701 \/ 3.745712 (0.031989) | 1.857686 \/ 5.269862 (-3.412176) | 1.104847 \/ 4.565676 (-3.460829) | 0.068350 \/ 0.424275 (-0.355925) | 0.012437 \/ 0.007607 (0.004830) | 0.559258 \/ 0.226044 (0.333214) | 5.593258 \/ 2.268929 (3.324330) | 2.648059 \/ 55.444624 (-52.796565) | 2.277428 \/ 6.876477 (-4.599049) | 2.351685 \/ 2.142072 (0.209612) | 0.678750 \/ 4.805227 (-4.126477) | 0.145550 \/ 6.500664 (-6.355114) | 0.066556 \/ 0.075469 (-0.008913) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.327128 \/ 1.841788 (-0.514659) | 15.649079 \/ 8.074308 (7.574771) | 14.478659 \/ 10.191392 (4.287267) | 0.147633 \/ 0.680424 (-0.532791) | 0.018502 \/ 0.534201 (-0.515699) | 0.438556 \/ 0.579283 (-0.140727) | 0.433381 \/ 0.434364 (-0.000983) | 0.514367 \/ 0.540337 (-0.025970) | 0.618347 \/ 1.386936 (-0.768589) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#16aa1c886c5b499641a4bb3d8ce4a4f7de8244b7 \"CML watermark\")\n","
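Since this PR only renames a parameter of the text builder, a short usage sketch may help; "corpus.txt" is a hypothetical file, and per the PR title the old `errors` argument is deprecated rather than removed.

```python
# Usage sketch for the renamed parameter: encoding_errors mirrors the
# errors= semantics of open(), consistent with the JSON builder and Pandas.
from datasets import load_dataset

ds = load_dataset(
    "text",
    data_files={"train": "corpus.txt"},  # hypothetical file
    encoding="utf-8",
    encoding_errors="replace",  # previously: errors="replace"
)
```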
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006078 \/ 0.011353 (-0.005275) | 0.003914 \/ 0.011008 (-0.007095) | 0.102039 \/ 0.038508 (0.063531) | 0.037660 \/ 0.023109 (0.014551) | 0.348963 \/ 0.275898 (0.073065) | 0.407284 \/ 0.323480 (0.083804) | 0.004661 \/ 0.007986 (-0.003324) | 0.003253 \/ 0.004328 (-0.001076) | 0.078276 \/ 0.004250 (0.074025) | 0.054144 \/ 0.037052 (0.017091) | 0.376715 \/ 0.258489 (0.118225) | 0.418499 \/ 0.293841 (0.124658) | 0.027627 \/ 0.128546 (-0.100919) | 0.008494 \/ 0.075646 (-0.067152) | 0.316894 \/ 0.419271 (-0.102377) | 0.046560 \/ 0.043533 (0.003027) | 0.339835 \/ 0.255139 (0.084696) | 0.374628 \/ 0.283200 (0.091428) | 0.020729 \/ 0.141683 (-0.120954) | 1.502769 \/ 1.452155 (0.050615) | 1.548756 \/ 1.492716 (0.056040) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.229192 \/ 0.018006 (0.211186) | 0.426245 \/ 0.000490 (0.425756) | 0.005190 \/ 0.000200 (0.004990) | 0.000081 \/ 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.024271 \/ 0.037411 (-0.013140) | 0.098869 \/ 0.014526 (0.084343) | 0.105079 \/ 0.176557 (-0.071477) | 0.164707 \/ 0.737135 (-0.572428) | 0.110337 \/ 0.296338 (-0.186002) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.426593 \/ 0.215209 (0.211383) | 4.293977 \/ 2.077655 (2.216323) | 1.928502 \/ 1.504120 (0.424382) | 1.728623 \/ 1.541195 (0.187428) | 1.792084 \/ 1.468490 (0.323594) | 0.568737 \/ 4.584777 (-4.016040) | 
3.438534 \/ 3.745712 (-0.307178) | 1.797798 \/ 5.269862 (-3.472063) | 1.054078 \/ 4.565676 (-3.511598) | 0.068711 \/ 0.424275 (-0.355564) | 0.011250 \/ 0.007607 (0.003643) | 0.529299 \/ 0.226044 (0.303255) | 5.283965 \/ 2.268929 (3.015037) | 2.358274 \/ 55.444624 (-53.086350) | 2.012818 \/ 6.876477 (-4.863659) | 2.109923 \/ 2.142072 (-0.032149) | 0.679556 \/ 4.805227 (-4.125671) | 0.138346 \/ 6.500664 (-6.362318) | 0.066349 \/ 0.075469 (-0.009120) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.193994 \/ 1.841788 (-0.647794) | 14.073158 \/ 8.074308 (5.998850) | 13.488525 \/ 10.191392 (3.297133) | 0.144536 \/ 0.680424 (-0.535888) | 0.016748 \/ 0.534201 (-0.517453) | 0.362703 \/ 0.579283 (-0.216580) | 0.389511 \/ 0.434364 (-0.044853) | 0.427296 \/ 0.540337 (-0.113041) | 0.513227 \/ 1.386936 (-0.873709) |\n\n<\/details>\nPyArrow==latest\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006215 \/ 0.011353 (-0.005138) | 0.003834 \/ 0.011008 (-0.007174) | 0.078001 \/ 0.038508 (0.039493) | 0.036537 \/ 0.023109 (0.013428) | 0.369724 \/ 0.275898 (0.093826) | 0.426761 \/ 0.323480 (0.103281) | 0.003602 \/ 0.007986 (-0.004383) | 0.003001 \/ 0.004328 (-0.001327) | 0.075989 \/ 0.004250 (0.071739) | 0.048618 \/ 0.037052 (0.011566) | 0.374296 \/ 0.258489 (0.115807) | 0.430330 \/ 0.293841 (0.136489) | 0.028299 \/ 0.128546 (-0.100247) | 0.008537 \/ 0.075646 (-0.067109) | 0.083275 \/ 0.419271 (-0.335997) | 0.043136 \/ 0.043533 (-0.000397) | 0.359072 \/ 0.255139 (0.103933) | 0.387391 \/ 0.283200 (0.104192) | 0.021202 \/ 0.141683 (-0.120481) | 1.520832 \/ 1.452155 (0.068677) | 1.567030 \/ 1.492716 (0.074313) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.230944 \/ 0.018006 (0.212938) | 0.422159 \/ 0.000490 (0.421669) | 0.003447 \/ 0.000200 (0.003247) | 0.000125 \/ 0.000054 (0.000071) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025442 \/ 0.037411 (-0.011969) | 0.103944 \/ 0.014526 (0.089418) | 0.110577 \/ 0.176557 (-0.065979) | 0.161393 \/ 0.737135 (-0.575743) | 0.113482 \/ 0.296338 (-0.182857) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.485765 \/ 0.215209 (0.270556) | 4.845737 \/ 2.077655 (2.768083) | 2.556732 \/ 1.504120 (1.052612) | 2.348638 \/ 1.541195 (0.807443) | 2.379289 \/ 1.468490 (0.910799) | 0.561261 \/ 4.584777 (-4.023516) | 
3.482468 \/ 3.745712 (-0.263244) | 3.061319 \/ 5.269862 (-2.208543) | 1.483938 \/ 4.565676 (-3.081738) | 0.067584 \/ 0.424275 (-0.356691) | 0.011333 \/ 0.007607 (0.003726) | 0.594342 \/ 0.226044 (0.368297) | 5.935477 \/ 2.268929 (3.666548) | 3.025029 \/ 55.444624 (-52.419595) | 2.687032 \/ 6.876477 (-4.189445) | 2.752470 \/ 2.142072 (0.610398) | 0.674470 \/ 4.805227 (-4.130757) | 0.136777 \/ 6.500664 (-6.363887) | 0.068335 \/ 0.075469 (-0.007134) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.336456 \/ 1.841788 (-0.505332) | 14.376007 \/ 8.074308 (6.301699) | 14.171375 \/ 10.191392 (3.979983) | 0.159620 \/ 0.680424 (-0.520804) | 0.016685 \/ 0.534201 (-0.517516) | 0.364344 \/ 0.579283 (-0.214939) | 0.395358 \/ 0.434364 (-0.039006) | 0.424876 \/ 0.540337 (-0.115461) | 0.513267 \/ 1.386936 (-0.873669) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#6ed837325cb539a5deb99129e5ad181d0269e050 \"CML watermark\")\n"],"created_at":1687365098000,"updated_at":1687775683000,"closed_at":1687775260000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5974","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5974","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5974.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5974.patch","merged_at":1687775260000},"body":"For consistency with the JSON builder and Pandas","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5974\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5974\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5972","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5972\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5972\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5972\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5972","id":1767897485,"node_id":"PR_kwDODunzps5TkE7K","number":5972,"title":"Filter unsupported 
extensions","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006983 \/ 0.011353 (-0.004369) | 0.004473 \/ 0.011008 (-0.006535) | 0.105158 \/ 0.038508 (0.066650) | 0.048973 \/ 0.023109 (0.025864) | 0.358771 \/ 0.275898 (0.082873) | 0.432389 \/ 0.323480 (0.108909) | 0.005689 \/ 0.007986 (-0.002297) | 0.003584 \/ 0.004328 (-0.000744) | 0.080852 \/ 0.004250 (0.076601) | 0.066133 \/ 0.037052 (0.029081) | 0.370981 \/ 0.258489 (0.112492) | 0.406942 \/ 0.293841 (0.113101) | 0.032123 \/ 0.128546 (-0.096424) | 0.009313 \/ 0.075646 (-0.066333) | 0.355220 \/ 0.419271 (-0.064051) | 0.055768 \/ 0.043533 (0.012235) | 0.370545 \/ 0.255139 (0.115406) | 0.375619 \/ 0.283200 (0.092419) | 0.024258 \/ 0.141683 (-0.117425) | 1.559073 \/ 1.452155 (0.106918) | 1.616520 \/ 1.492716 (0.123804) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.277893 \/ 0.018006 (0.259887) | 0.535447 \/ 0.000490 (0.534957) | 0.004877 \/ 0.000200 (0.004677) | 0.000092 \/ 0.000054 (0.000037) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029444 \/ 0.037411 (-0.007968) | 0.114366 \/ 0.014526 (0.099841) | 0.130957 \/ 0.176557 (-0.045599) | 0.189604 \/ 0.737135 (-0.547531) | 0.131682 \/ 0.296338 (-0.164656) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.412315 \/ 0.215209 (0.197106) | 4.093879 \/ 2.077655 (2.016225) | 1.856169 \/ 1.504120 (0.352050) | 1.655358 \/ 1.541195 (0.114164) | 1.758190 \/ 1.468490 (0.289699) | 0.545829 \/ 4.584777 (-4.038948) | 
3.871436 \/ 3.745712 (0.125724) | 1.938244 \/ 5.269862 (-3.331618) | 1.122727 \/ 4.565676 (-3.442950) | 0.067107 \/ 0.424275 (-0.357168) | 0.012012 \/ 0.007607 (0.004405) | 0.518868 \/ 0.226044 (0.292824) | 5.235081 \/ 2.268929 (2.966153) | 2.335115 \/ 55.444624 (-53.109509) | 2.013074 \/ 6.876477 (-4.863402) | 2.219808 \/ 2.142072 (0.077735) | 0.674602 \/ 4.805227 (-4.130626) | 0.147051 \/ 6.500664 (-6.353613) | 0.068444 \/ 0.075469 (-0.007025) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.245600 \/ 1.841788 (-0.596188) | 15.537727 \/ 8.074308 (7.463419) | 15.074300 \/ 10.191392 (4.882908) | 0.194217 \/ 0.680424 (-0.486207) | 0.018536 \/ 0.534201 (-0.515665) | 0.437085 \/ 0.579283 (-0.142198) | 0.441123 \/ 0.434364 (0.006759) | 0.530681 \/ 0.540337 (-0.009657) | 0.649154 \/ 1.386936 (-0.737782) |\n\n<\/details>\nPyArrow==latest\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007243 \/ 0.011353 (-0.004110) | 0.004688 \/ 0.011008 (-0.006320) | 0.079809 \/ 0.038508 (0.041301) | 0.046915 \/ 0.023109 (0.023805) | 0.415144 \/ 0.275898 (0.139246) | 0.474867 \/ 0.323480 (0.151388) | 0.004550 \/ 0.007986 (-0.003435) | 0.004585 \/ 0.004328 (0.000257) | 0.080837 \/ 0.004250 (0.076587) | 0.061667 \/ 0.037052 (0.024614) | 0.411321 \/ 0.258489 (0.152832) | 0.464195 \/ 0.293841 (0.170354) | 0.032510 \/ 0.128546 (-0.096037) | 0.009306 \/ 0.075646 (-0.066340) | 0.086637 \/ 0.419271 (-0.332635) | 0.053335 \/ 0.043533 (0.009802) | 0.402302 \/ 0.255139 (0.147163) | 0.424864 \/ 0.283200 (0.141664) | 0.026573 \/ 0.141683 (-0.115110) | 1.566793 \/ 1.452155 (0.114639) | 1.628118 \/ 1.492716 (0.135401) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.317802 \/ 0.018006 (0.299796) | 0.544593 \/ 0.000490 (0.544103) | 0.005690 \/ 0.000200 (0.005490) | 0.000107 \/ 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.033015 \/ 0.037411 (-0.004397) | 0.121940 \/ 0.014526 (0.107414) | 0.132920 \/ 0.176557 (-0.043637) | 0.191481 \/ 0.737135 (-0.545655) | 0.139139 \/ 0.296338 (-0.157199) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.460382 \/ 0.215209 (0.245173) | 4.610046 \/ 2.077655 (2.532392) | 2.296573 \/ 1.504120 (0.792453) | 2.099735 \/ 1.541195 (0.558540) | 2.213913 \/ 1.468490 (0.745423) | 0.544871 \/ 4.584777 (-4.039906) | 
3.814174 \/ 3.745712 (0.068462) | 3.246397 \/ 5.269862 (-2.023464) | 1.480236 \/ 4.565676 (-3.085440) | 0.068464 \/ 0.424275 (-0.355811) | 0.012651 \/ 0.007607 (0.005043) | 0.564989 \/ 0.226044 (0.338944) | 5.639188 \/ 2.268929 (3.370259) | 2.827601 \/ 55.444624 (-52.617023) | 2.473743 \/ 6.876477 (-4.402734) | 2.567413 \/ 2.142072 (0.425340) | 0.674351 \/ 4.805227 (-4.130876) | 0.146248 \/ 6.500664 (-6.354416) | 0.067553 \/ 0.075469 (-0.007916) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.346703 \/ 1.841788 (-0.495085) | 16.494787 \/ 8.074308 (8.420479) | 15.179487 \/ 10.191392 (4.988095) | 0.181864 \/ 0.680424 (-0.498560) | 0.018857 \/ 0.534201 (-0.515344) | 0.437787 \/ 0.579283 (-0.141496) | 0.431770 \/ 0.434364 (-0.002594) | 0.507116 \/ 0.540337 (-0.033221) | 0.608899 \/ 1.386936 (-0.778037) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#0fd5b7412f907675e76b183a6e39ef6d176fdcc0 \"CML watermark\")\n","_The documentation is not available anymore as the PR was closed or merged._","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.005963 \/ 0.011353 (-0.005390) | 0.003743 \/ 0.011008 (-0.007265) | 0.098519 \/ 0.038508 (0.060011) | 0.037392 \/ 0.023109 (0.014283) | 0.322706 \/ 0.275898 (0.046808) | 0.380032 \/ 0.323480 (0.056552) | 0.004694 \/ 0.007986 (-0.003292) | 0.002897 \/ 0.004328 (-0.001432) | 0.078664 \/ 0.004250 (0.074414) | 0.052646 \/ 0.037052 (0.015594) | 0.335523 \/ 0.258489 (0.077034) | 0.375464 \/ 0.293841 (0.081623) | 0.027537 \/ 0.128546 (-0.101010) | 0.008452 \/ 0.075646 (-0.067194) | 0.313844 \/ 0.419271 (-0.105427) | 0.047368 \/ 0.043533 (0.003835) | 0.313833 \/ 0.255139 (0.058694) | 0.342284 \/ 0.283200 (0.059085) | 0.021136 \/ 0.141683 (-0.120547) | 1.544764 \/ 1.452155 (0.092610) | 1.563850 \/ 1.492716 (0.071134) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.188609 \/ 0.018006 (0.170603) | 0.421686 \/ 0.000490 (0.421196) | 0.003336 \/ 0.000200 (0.003136) | 0.000077 \/ 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.023678 \/ 0.037411 (-0.013733) | 0.099191 \/ 0.014526 (0.084665) | 0.105819 \/ 0.176557 (-0.070738) | 0.169654 \/ 0.737135 (-0.567481) | 0.110240 \/ 0.296338 (-0.186099) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.425497 \/ 0.215209 (0.210288) | 4.237165 \/ 2.077655 (2.159510) | 1.902953 \/ 1.504120 (0.398833) | 1.699012 \/ 1.541195 (0.157818) | 1.751107 \/ 1.468490 (0.282617) | 0.563326 \/ 4.584777 (-4.021451) | 
3.394189 \/ 3.745712 (-0.351523) | 2.706129 \/ 5.269862 (-2.563732) | 1.361522 \/ 4.565676 (-3.204155) | 0.067776 \/ 0.424275 (-0.356499) | 0.010959 \/ 0.007607 (0.003352) | 0.530905 \/ 0.226044 (0.304860) | 5.322467 \/ 2.268929 (3.053538) | 2.384356 \/ 55.444624 (-53.060269) | 2.044196 \/ 6.876477 (-4.832281) | 2.119837 \/ 2.142072 (-0.022235) | 0.682236 \/ 4.805227 (-4.122991) | 0.136921 \/ 6.500664 (-6.363743) | 0.066784 \/ 0.075469 (-0.008685) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.210642 \/ 1.841788 (-0.631146) | 13.804572 \/ 8.074308 (5.730264) | 13.309229 \/ 10.191392 (3.117837) | 0.154356 \/ 0.680424 (-0.526068) | 0.016833 \/ 0.534201 (-0.517368) | 0.366503 \/ 0.579283 (-0.212780) | 0.385201 \/ 0.434364 (-0.049163) | 0.426713 \/ 0.540337 (-0.113624) | 0.516795 \/ 1.386936 (-0.870141) |\n\n<\/details>\nPyArrow==latest\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006144 \/ 0.011353 (-0.005209) | 0.003723 \/ 0.011008 (-0.007285) | 0.077427 \/ 0.038508 (0.038919) | 0.037636 \/ 0.023109 (0.014527) | 0.375048 \/ 0.275898 (0.099150) | 0.442254 \/ 0.323480 (0.118774) | 0.003506 \/ 0.007986 (-0.004480) | 0.003751 \/ 0.004328 (-0.000577) | 0.076771 \/ 0.004250 (0.072521) | 0.047915 \/ 0.037052 (0.010862) | 0.378918 \/ 0.258489 (0.120429) | 0.435300 \/ 0.293841 (0.141459) | 0.028317 \/ 0.128546 (-0.100230) | 0.008413 \/ 0.075646 (-0.067233) | 0.082774 \/ 0.419271 (-0.336497) | 0.043211 \/ 0.043533 (-0.000321) | 0.362022 \/ 0.255139 (0.106883) | 0.404928 \/ 0.283200 (0.121728) | 0.020692 \/ 0.141683 (-0.120991) | 1.527303 \/ 1.452155 (0.075148) | 1.596091 \/ 1.492716 (0.103375) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.225537 \/ 0.018006 (0.207530) | 0.399901 \/ 0.000490 (0.399412) | 0.000424 \/ 0.000200 (0.000224) | 0.000058 \/ 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.026483 \/ 0.037411 (-0.010928) | 0.104373 \/ 0.014526 (0.089847) | 0.111271 \/ 0.176557 (-0.065286) | 0.163872 \/ 0.737135 (-0.573264) | 0.113991 \/ 0.296338 (-0.182347) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.456484 \/ 0.215209 (0.241275) | 4.572652 \/ 2.077655 (2.494998) | 2.374908 \/ 1.504120 (0.870788) | 2.207855 \/ 1.541195 (0.666661) | 2.260009 \/ 1.468490 (0.791519) | 0.562678 \/ 4.584777 (-4.022099) | 
3.441778 \/ 3.745712 (-0.303934) | 1.729006 \/ 5.269862 (-3.540855) | 1.024937 \/ 4.565676 (-3.540739) | 0.068707 \/ 0.424275 (-0.355568) | 0.011334 \/ 0.007607 (0.003727) | 0.564293 \/ 0.226044 (0.338248) | 5.638367 \/ 2.268929 (3.369438) | 2.665654 \/ 55.444624 (-52.778970) | 2.320033 \/ 6.876477 (-4.556444) | 2.328706 \/ 2.142072 (0.186634) | 0.677433 \/ 4.805227 (-4.127794) | 0.137190 \/ 6.500664 (-6.363474) | 0.068585 \/ 0.075469 (-0.006885) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.312476 \/ 1.841788 (-0.529312) | 14.206685 \/ 8.074308 (6.132377) | 14.217928 \/ 10.191392 (4.026536) | 0.143416 \/ 0.680424 (-0.537007) | 0.016647 \/ 0.534201 (-0.517554) | 0.361228 \/ 0.579283 (-0.218055) | 0.396185 \/ 0.434364 (-0.038178) | 0.423275 \/ 0.540337 (-0.117063) | 0.512966 \/ 1.386936 (-0.873970) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#b424648fd68bd0b5279eb916cec4836d1220e268 \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008913 \/ 0.011353 (-0.002440) | 0.005142 \/ 0.011008 (-0.005866) | 0.133958 \/ 0.038508 (0.095449) | 0.049180 \/ 0.023109 (0.026071) | 0.389169 \/ 0.275898 (0.113270) | 0.481513 \/ 0.323480 (0.158033) | 0.006555 \/ 0.007986 (-0.001430) | 0.003806 \/ 0.004328 (-0.000522) | 0.102056 \/ 0.004250 (0.097806) | 0.083259 \/ 0.037052 (0.046207) | 0.392536 \/ 0.258489 (0.134047) | 0.447503 \/ 0.293841 (0.153662) | 0.047472 \/ 0.128546 (-0.081074) | 0.014748 \/ 0.075646 (-0.060899) | 0.475619 \/ 0.419271 (0.056348) | 0.107306 \/ 0.043533 (0.063773) | 0.421942 \/ 0.255139 (0.166803) | 0.419736 \/ 0.283200 (0.136536) | 0.044195 \/ 0.141683 (-0.097488) | 1.793840 \/ 1.452155 (0.341686) | 1.960204 \/ 1.492716 (0.467488) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.252046 \/ 0.018006 (0.234040) | 0.627725 \/ 0.000490 (0.627236) | 0.007435 \/ 0.000200 (0.007235) | 0.000526 \/ 0.000054 (0.000472) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.034656 \/ 0.037411 (-0.002755) | 0.114534 \/ 0.014526 (0.100008) | 0.135804 \/ 0.176557 (-0.040753) | 0.209309 \/ 0.737135 (-0.527826) | 0.140369 \/ 0.296338 (-0.155969) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.636736 \/ 0.215209 (0.421527) | 6.039985 \/ 2.077655 (3.962330) | 2.640141 \/ 1.504120 (1.136021) | 2.284492 \/ 1.541195 (0.743297) | 2.324956 \/ 1.468490 (0.856466) | 0.934499 \/ 4.584777 (-3.650278) | 
5.673415 \/ 3.745712 (1.927703) | 5.184584 \/ 5.269862 (-0.085278) | 2.661911 \/ 4.565676 (-1.903766) | 0.150420 \/ 0.424275 (-0.273855) | 0.015655 \/ 0.007607 (0.008048) | 0.748290 \/ 0.226044 (0.522246) | 7.579755 \/ 2.268929 (5.310827) | 3.346732 \/ 55.444624 (-52.097892) | 2.708212 \/ 6.876477 (-4.168264) | 2.682423 \/ 2.142072 (0.540351) | 1.170389 \/ 4.805227 (-3.634838) | 0.215775 \/ 6.500664 (-6.284889) | 0.076360 \/ 0.075469 (0.000891) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.516794 \/ 1.841788 (-0.324993) | 18.709117 \/ 8.074308 (10.634809) | 22.492542 \/ 10.191392 (12.301150) | 0.237978 \/ 0.680424 (-0.442446) | 0.027828 \/ 0.534201 (-0.506373) | 0.499968 \/ 0.579283 (-0.079315) | 0.645899 \/ 0.434364 (0.211535) | 0.548599 \/ 0.540337 (0.008262) | 0.675428 \/ 1.386936 (-0.711508) |\n\n<\/details>\nPyArrow==latest\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008469 \/ 0.011353 (-0.002884) | 0.005420 \/ 0.011008 (-0.005589) | 0.093340 \/ 0.038508 (0.054832) | 0.045896 \/ 0.023109 (0.022786) | 0.533267 \/ 0.275898 (0.257369) | 0.596034 \/ 0.323480 (0.272555) | 0.004816 \/ 0.007986 (-0.003170) | 0.004379 \/ 0.004328 (0.000051) | 0.096356 \/ 0.004250 (0.092106) | 0.058339 \/ 0.037052 (0.021287) | 0.574464 \/ 0.258489 (0.315975) | 0.649301 \/ 0.293841 (0.355461) | 0.047599 \/ 0.128546 (-0.080947) | 0.013759 \/ 0.075646 (-0.061887) | 0.104672 \/ 0.419271 (-0.314599) | 0.061658 \/ 0.043533 (0.018125) | 0.560956 \/ 0.255139 (0.305817) | 0.585328 \/ 0.283200 (0.302128) | 0.034137 \/ 0.141683 (-0.107546) | 1.844528 \/ 1.452155 (0.392373) | 1.971398 \/ 1.492716 (0.478682) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.278666 \/ 0.018006 (0.260660) | 0.577342 \/ 0.000490 (0.576853) | 0.005496 \/ 0.000200 (0.005296) | 0.000131 \/ 0.000054 (0.000076) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029863 \/ 0.037411 (-0.007549) | 0.161703 \/ 0.014526 (0.147177) | 0.132279 \/ 0.176557 (-0.044277) | 0.227345 \/ 0.737135 (-0.509791) | 0.138047 \/ 0.296338 (-0.158291) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.651535 \/ 0.215209 (0.436326) | 7.077949 \/ 2.077655 (5.000295) | 2.926990 \/ 1.504120 (1.422871) | 2.598872 \/ 1.541195 (1.057678) | 2.614192 \/ 1.468490 (1.145702) | 0.913845 \/ 4.584777 (-3.670932) | 
5.704301 \/ 3.745712 (1.958589) | 2.796914 \/ 5.269862 (-2.472948) | 1.836096 \/ 4.565676 (-2.729580) | 0.106294 \/ 0.424275 (-0.317981) | 0.012705 \/ 0.007607 (0.005098) | 0.836336 \/ 0.226044 (0.610291) | 8.234079 \/ 2.268929 (5.965150) | 3.836410 \/ 55.444624 (-51.608215) | 3.116752 \/ 6.876477 (-3.759724) | 3.154258 \/ 2.142072 (1.012186) | 1.195794 \/ 4.805227 (-3.609434) | 0.240491 \/ 6.500664 (-6.260173) | 0.087913 \/ 0.075469 (0.012444) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.724723 \/ 1.841788 (-0.117064) | 19.492194 \/ 8.074308 (11.417885) | 21.443341 \/ 10.191392 (11.251949) | 0.245819 \/ 0.680424 (-0.434605) | 0.027024 \/ 0.534201 (-0.507177) | 0.481071 \/ 0.579283 (-0.098212) | 0.596359 \/ 0.434364 (0.161995) | 0.646462 \/ 0.540337 (0.106124) | 0.706380 \/ 1.386936 (-0.680556) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#67ca664e6d5ef137127b238aae1d0aff54e22db2 \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006634 \/ 0.011353 (-0.004719) | 0.004003 \/ 0.011008 (-0.007005) | 0.097874 \/ 0.038508 (0.059365) | 0.043528 \/ 0.023109 (0.020419) | 0.302293 \/ 0.275898 (0.026395) | 0.357041 \/ 0.323480 (0.033561) | 0.003761 \/ 0.007986 (-0.004225) | 0.004312 \/ 0.004328 (-0.000016) | 0.076253 \/ 0.004250 (0.072003) | 0.062807 \/ 0.037052 (0.025755) | 0.316737 \/ 0.258489 (0.058248) | 0.356722 \/ 0.293841 (0.062881) | 0.030816 \/ 0.128546 (-0.097730) | 0.008691 \/ 0.075646 (-0.066955) | 0.328366 \/ 0.419271 (-0.090906) | 0.062299 \/ 0.043533 (0.018766) | 0.293877 \/ 0.255139 (0.038738) | 0.319832 \/ 0.283200 (0.036632) | 0.024996 \/ 0.141683 (-0.116687) | 1.473912 \/ 1.452155 (0.021758) | 1.565439 \/ 1.492716 (0.072723) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.208428 \/ 0.018006 (0.190422) | 0.435618 \/ 0.000490 (0.435128) | 0.000695 \/ 0.000200 (0.000495) | 0.000056 \/ 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.026253 \/ 0.037411 (-0.011158) | 0.106908 \/ 0.014526 (0.092382) | 0.117075 \/ 0.176557 (-0.059482) | 0.177969 \/ 0.737135 (-0.559166) | 0.123400 \/ 0.296338 (-0.172938) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.424970 \/ 0.215209 (0.209761) | 4.203233 \/ 2.077655 (2.125578) | 2.009679 \/ 1.504120 (0.505559) | 1.825691 \/ 1.541195 (0.284496) | 1.870639 \/ 1.468490 (0.402149) | 0.530758 \/ 4.584777 (-4.054019) | 
3.718791 \/ 3.745712 (-0.026921) | 1.800206 \/ 5.269862 (-3.469656) | 1.071651 \/ 4.565676 (-3.494025) | 0.065126 \/ 0.424275 (-0.359149) | 0.011312 \/ 0.007607 (0.003704) | 0.532503 \/ 0.226044 (0.306458) | 5.353950 \/ 2.268929 (3.085021) | 2.463548 \/ 55.444624 (-52.981076) | 2.139832 \/ 6.876477 (-4.736645) | 2.238722 \/ 2.142072 (0.096650) | 0.655736 \/ 4.805227 (-4.149492) | 0.141689 \/ 6.500664 (-6.358975) | 0.063282 \/ 0.075469 (-0.012187) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.183523 \/ 1.841788 (-0.658265) | 14.146428 \/ 8.074308 (6.072120) | 14.312883 \/ 10.191392 (4.121491) | 0.169286 \/ 0.680424 (-0.511138) | 0.017343 \/ 0.534201 (-0.516858) | 0.397934 \/ 0.579283 (-0.181349) | 0.417791 \/ 0.434364 (-0.016573) | 0.463639 \/ 0.540337 (-0.076698) | 0.562787 \/ 1.386936 (-0.824149) |\n\n<\/details>\nPyArrow==latest\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006594 \/ 0.011353 (-0.004759) | 0.004086 \/ 0.011008 (-0.006922) | 0.075122 \/ 0.038508 (0.036614) | 0.041849 \/ 0.023109 (0.018740) | 0.362645 \/ 0.275898 (0.086747) | 0.464350 \/ 0.323480 (0.140870) | 0.003760 \/ 0.007986 (-0.004226) | 0.003327 \/ 0.004328 (-0.001001) | 0.076154 \/ 0.004250 (0.071904) | 0.053232 \/ 0.037052 (0.016180) | 0.407863 \/ 0.258489 (0.149374) | 0.460787 \/ 0.293841 (0.166946) | 0.031917 \/ 0.128546 (-0.096630) | 0.008770 \/ 0.075646 (-0.066876) | 0.082612 \/ 0.419271 (-0.336660) | 0.051311 \/ 0.043533 (0.007779) | 0.354508 \/ 0.255139 (0.099369) | 0.419533 \/ 0.283200 (0.136334) | 0.023980 \/ 0.141683 (-0.117703) | 1.491255 \/ 1.452155 (0.039100) | 1.536101 \/ 1.492716 (0.043384) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.178261 \/ 0.018006 (0.160255) | 0.444680 \/ 0.000490 (0.444190) | 0.013761 \/ 0.000200 (0.013561) | 0.000117 \/ 0.000054 (0.000063) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.027875 \/ 0.037411 (-0.009536) | 0.111269 \/ 0.014526 (0.096744) | 0.121096 \/ 0.176557 (-0.055461) | 0.174387 \/ 0.737135 (-0.562749) | 0.124714 \/ 0.296338 (-0.171624) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.445422 \/ 0.215209 (0.230213) | 4.435877 \/ 2.077655 (2.358222) | 2.221895 \/ 1.504120 (0.717775) | 2.030571 \/ 1.541195 (0.489376) | 2.074863 \/ 1.468490 (0.606373) | 0.543331 \/ 4.584777 (-4.041446) | 
3.753615 \/ 3.745712 (0.007903) | 3.317074 \/ 5.269862 (-1.952787) | 1.630390 \/ 4.565676 (-2.935286) | 0.066726 \/ 0.424275 (-0.357549) | 0.011556 \/ 0.007607 (0.003949) | 0.546985 \/ 0.226044 (0.320941) | 5.460634 \/ 2.268929 (3.191705) | 2.705945 \/ 55.444624 (-52.738679) | 2.373425 \/ 6.876477 (-4.503052) | 2.401472 \/ 2.142072 (0.259399) | 0.663225 \/ 4.805227 (-4.142002) | 0.143694 \/ 6.500664 (-6.356970) | 0.065283 \/ 0.075469 (-0.010186) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.264804 \/ 1.841788 (-0.576983) | 14.803228 \/ 8.074308 (6.728919) | 14.178514 \/ 10.191392 (3.987122) | 0.162651 \/ 0.680424 (-0.517772) | 0.017586 \/ 0.534201 (-0.516615) | 0.398740 \/ 0.579283 (-0.180543) | 0.414478 \/ 0.434364 (-0.019886) | 0.465442 \/ 0.540337 (-0.074895) | 0.563450 \/ 1.386936 (-0.823486) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#76f75a9a3b2aaad05ea0ea5ab77e01fd2ca66760 \"CML watermark\")\n"],"created_at":1687362181000,"updated_at":1687443809000,"closed_at":1687443386000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5972","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5972","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5972.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5972.patch","merged_at":1687443386000},"body":"I used a regex to filter the data files based on their extension for packaged builders.\r\n\r\nI tried and a regex is 10x faster that using `in` to check if the extension is in the list of supported extensions.\r\n\r\nSupersedes https:\/\/github.com\/huggingface\/datasets\/pull\/5850\r\n\r\nClose https:\/\/github.com\/huggingface\/datasets\/issues\/5849\r\n\r\nI also did a small change to favor the parquet module in case of a draw in the extension counter.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5972\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5972\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5971","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5971\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5971\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5971\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5971","id":1767053635,"node_id":"I_kwDODunzps5pUxlD","number":5971,"title":"Docs: make \"repository structure\" easier to 
find","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892861,"node_id":"MDU6TGFiZWwxOTM1ODkyODYx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/documentation","name":"documentation","color":"0075ca","default":true,"description":"Improvements or additions to documentation"}],"state":"open","locked":false,"assignee":{"login":"benjaminbrown038","id":35114142.0,"node_id":"MDQ6VXNlcjM1MTE0MTQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/35114142?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/benjaminbrown038","html_url":"https:\/\/github.com\/benjaminbrown038","followers_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/followers","following_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/orgs","repos_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/repos","events_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/received_events","type":"User","site_admin":false},"assignees":[{"login":"benjaminbrown038","id":35114142,"node_id":"MDQ6VXNlcjM1MTE0MTQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/35114142?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/benjaminbrown038","html_url":"https:\/\/github.com\/benjaminbrown038","followers_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/followers","following_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/orgs","repos_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/repos","events_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Loading a local dataset also works the same way when `data_files` are not specified, so I agree we should make this 
info easier to discover \r\n\r\ncc @stevhliu ","Is this issue open? If so, I will self assign. ","@benjaminbrown038 Yes, it is. Maybe @stevhliu can give some pointers on improving this doc page's discoverability.","I think we can add a version of the [Main use-case](https:\/\/huggingface.co\/docs\/datasets\/repository_structure#main-usecase) section to the [Share a dataset to the Hub](https:\/\/huggingface.co\/docs\/datasets\/upload_dataset) tutorial. \r\n\r\nCurrently, it doesn't tell you *how* to structure the repository; it only tells you how to create it. So adding the \"main use-case\" will help bridge the gap and make it easier to find. We should also add a link to the [Structure your repository](https:\/\/huggingface.co\/docs\/datasets\/repository_structure) guide for users who want to learn about the other options.","#self-assign"],"created_at":1687336004000,"updated_at":1688539898000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":null,"pull_request":null,"body":"The page https:\/\/huggingface.co\/docs\/datasets\/repository_structure explains how to create a simple repository structure without a dataset script.\r\nIt's the simplest way to create a dataset and should be easier to find, particularly on the docs' first pages.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5971\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5971\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5970","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5970\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5970\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5970\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5970","id":1766010356,"node_id":"I_kwDODunzps5pQy30","number":5970,"title":"description disappearing from Info when Uploading a Dataset Created with `from_dict`","user":{"login":"balisujohn","id":20377292,"node_id":"MDQ6VXNlcjIwMzc3Mjky","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/20377292?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/balisujohn","html_url":"https:\/\/github.com\/balisujohn","followers_url":"https:\/\/api.github.com\/users\/balisujohn\/followers","following_url":"https:\/\/api.github.com\/users\/balisujohn\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/balisujohn\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/balisujohn\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/balisujohn\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/balisujohn\/orgs","repos_url":"https:\/\/api.github.com\/users\/balisujohn\/repos","events_url":"https:\/\/api.github.com\/users\/balisujohn\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/balisujohn\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Here's a minimal way to reproduce the bug, for the sake of convenience.\r\n````\r\nfrom datasets import Dataset, 
DatasetInfo, load_dataset\r\n\r\n\r\nepisodes_dict = {\"test\":[1,2,3],\"test2\": [1,2,4]}\r\n\r\nhugging_face_dataset = Dataset.from_dict(\r\n episodes_dict, info=DatasetInfo(description=\"test_str\")\r\n)\r\nprint(hugging_face_dataset.info)\r\n\r\nhugging_face_dataset.push_to_hub(\"balisujohn\/minari_test\", private=True)\r\n\r\nredownloaded_dataset= load_dataset(\"balisujohn\/minari_test\")[\"train\"]\r\n\r\n\r\nprint(redownloaded_dataset.info)\r\n````\r\n","Thanks for reporting !\r\n\r\nFor now I would recommend uploading a separate JSON file for your metadata.\r\n\r\nAlternatively you can upload a second configuration of the dataset containing your metadata but this feature is not released yet (though you can already use it from [here](https:\/\/github.com\/huggingface\/datasets\/pull\/5331), it will be released soon)"],"created_at":1687288706000,"updated_at":1687443836000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\r\n\r\nWhen uploading a dataset created locally using `from_dict` with a specified `description` field. It appears before upload, but is missing after upload and re-download.\r\n\r\n\r\n### Steps to reproduce the bug\r\n\r\nI think the most relevant pattern in the code might be the following lines:\r\n\r\n```\r\ndescription_json_str = json.dumps(\r\n {\r\n \"dataset_id\": dataset.spec.dataset_id,\r\n \"env_name\": dataset.spec.env_spec.id,\r\n \"action_space\": serialize_space(dataset.spec.action_space),\r\n \"observation_space\": serialize_space(dataset.spec.observation_space),\r\n }\r\n)\r\n\r\nhugging_face_dataset = Dataset.from_dict(\r\n episodes_dict, info=DatasetInfo(description=description_json_str)\r\n)\r\n\r\n```\r\nWhich comes from this function https:\/\/github.com\/balisujohn\/minarai\/blob\/8e023727f0a8488c4451651d9f7a79b981412c40\/minari\/integrations\/hugging_face.py#L39\r\n\r\n\r\n\r\nTo replicate,\r\nclone this branch of my Minari fork https:\/\/github.com\/balisujohn\/minarai\/tree\/dev-huggingface then run\r\n\r\n```\r\npython3.8 -m venv env\r\nsource env\/bin\/activate\r\npython3 -m pip install -e .\r\npython3 -m pip install pytest\r\n```\r\n\r\nThe change the hugging face repo path in the test called `test_hugging_face_push_and_pull_dataset` in `tests\/integrations\/test_hugging_face.py` to one you have permissions to write to.\r\n\r\nThen run:\r\n\r\n```\r\npytest tests\/integrations\/test_hugging_face.py::test_hugging_face_push_and_pull_dataset\r\n```\r\n\r\n\r\n\r\n\r\n\r\n\r\n### Expected behavior\r\n\r\nDATASET INFO BEFORE UPLOADING\r\nDatasetInfo(description='{\"dataset_id\": \"dummy-combo-test-v0\", \"env_name\": \"DummyComboEnv-v0\", \"action_space\": \"{\\\\\"type\\\\\": \\\\\"Tuple\\\\\", \\\\\"subspaces\\\\\": [{\\\\\"type\\\\\": \\\\\"Box\\\\\", \\\\\"dtype\\\\\": \\\\\"float32\\\\\", \\\\\"shape\\\\\": [1], \\\\\"low\\\\\": [2.0], \\\\\"high\\\\\": [3.0]}, {\\\\\"type\\\\\": \\\\\"Box\\\\\", \\\\\"dtype\\\\\": \\\\\"float32\\\\\", \\\\\"shape\\\\\": [1], \\\\\"low\\\\\": [4.0], \\\\\"high\\\\\": [5.0]}]}\", \"observation_space\": \"{\\\\\"type\\\\\": \\\\\"Tuple\\\\\", \\\\\"subspaces\\\\\": [{\\\\\"type\\\\\": \\\\\"Box\\\\\", \\\\\"dtype\\\\\": \\\\\"float32\\\\\", \\\\\"shape\\\\\": [1], \\\\\"low\\\\\": [2.0], \\\\\"high\\\\\": [3.0]}, {\\\\\"type\\\\\": \\\\\"Tuple\\\\\", \\\\\"subspaces\\\\\": [{\\\\\"type\\\\\": \\\\\"Box\\\\\", \\\\\"dtype\\\\\": \\\\\"float32\\\\\", \\\\\"shape\\\\\": [1], \\\\\"low\\\\\": [2.0], \\\\\"high\\\\\": [3.0]}, 
{\\\\\"type\\\\\": \\\\\"Dict\\\\\", \\\\\"subspaces\\\\\": {\\\\\"component_1\\\\\": {\\\\\"type\\\\\": \\\\\"Box\\\\\", \\\\\"dtype\\\\\": \\\\\"float32\\\\\", \\\\\"shape\\\\\": [1], \\\\\"low\\\\\": [-1.0], \\\\\"high\\\\\": [1.0]}, \\\\\"component_2\\\\\": {\\\\\"type\\\\\": \\\\\"Dict\\\\\", \\\\\"subspaces\\\\\": {\\\\\"subcomponent_1\\\\\": {\\\\\"type\\\\\": \\\\\"Box\\\\\", \\\\\"dtype\\\\\": \\\\\"float32\\\\\", \\\\\"shape\\\\\": [1], \\\\\"low\\\\\": [2.0], \\\\\"high\\\\\": [3.0]}, \\\\\"subcomponent_2\\\\\": {\\\\\"type\\\\\": \\\\\"Tuple\\\\\", \\\\\"subspaces\\\\\": [{\\\\\"type\\\\\": \\\\\"Box\\\\\", \\\\\"dtype\\\\\": \\\\\"float32\\\\\", \\\\\"shape\\\\\": [1], \\\\\"low\\\\\": [4.0], \\\\\"high\\\\\": [5.0]}, {\\\\\"type\\\\\": \\\\\"Discrete\\\\\", \\\\\"dtype\\\\\": \\\\\"int64\\\\\", \\\\\"start\\\\\": 0, \\\\\"n\\\\\": 10}]}}}}}]}]}\"}', citation='', homepage='', license='', features={'observations': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': {'component_1': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), 'component_2': {'subcomponent_1': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), 'subcomponent_2': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': Value(dtype='int64', id=None)}}}}}, 'actions': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None)}, 'rewards': Value(dtype='int64', id=None), 'truncations': Value(dtype='bool', id=None), 'terminations': Value(dtype='bool', id=None), 'episode_ids': Value(dtype='int64', id=None)}, post_processed=None, supervised_keys=None, task_templates=None, builder_name=None, config_name=None, version=None, splits=None, download_checksums=None, download_size=None, post_processing_size=None, dataset_size=None, size_in_bytes=None)\r\n...\r\nDATASET INFO AFTER UPLOADING AND DOWNLOADING\r\nDatasetInfo(description='', citation='', homepage='', license='', features={'observations': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': {'component_1': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), 'component_2': {'subcomponent_1': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), 'subcomponent_2': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': Value(dtype='int64', id=None)}}}}}, 'actions': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None)}, 'rewards': Value(dtype='int64', id=None), 'truncations': Value(dtype='bool', id=None), 'terminations': Value(dtype='bool', id=None), 'episode_ids': Value(dtype='int64', id=None)}, post_processed=None, supervised_keys=None, task_templates=None, builder_name=None, config_name=None, version=None, splits={'train': SplitInfo(name='train', num_bytes=4846, num_examples=60, shard_lengths=None, dataset_name='parquet')}, download_checksums={'https:\/\/huggingface.co\/datasets\/balisujohn\/minari_test\/resolve\/8217b614ff9ba5edc1a30c7df430e92a46f65363\/data\/train-00000-of-00001-7c5900b93b35745e.parquet': {'num_bytes': 
9052, 'checksum': None}}, download_size=9052, post_processing_size=None, dataset_size=4846, size_in_bytes=13898)\r\n...\r\n\r\n\r\n### Environment info\r\n\r\n- `datasets` version: 2.13.0\r\n- Platform: Linux-5.15.0-75-generic-x86_64-with-glibc2.29\r\n- Python version: 3.8.10\r\n- Huggingface_hub version: 0.15.1\r\n- PyArrow version: 12.0.1\r\n- Pandas version: 2.0.2\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5970\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5970\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5969","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5969\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5969\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5969\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5969","id":1765529905,"node_id":"PR_kwDODunzps5Tcgq4","number":5969,"title":"Add `encoding` and `errors` params to JSON loader","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006770 \/ 0.011353 (-0.004583) | 0.004143 \/ 0.011008 (-0.006865) | 0.098928 \/ 0.038508 (0.060420) | 0.044893 \/ 0.023109 (0.021783) | 0.302630 \/ 0.275898 (0.026732) | 0.368173 \/ 0.323480 (0.044693) | 0.005631 \/ 0.007986 (-0.002354) | 0.003397 \/ 0.004328 (-0.000931) | 0.075748 \/ 0.004250 (0.071497) | 0.062582 \/ 0.037052 (0.025530) | 0.329586 \/ 0.258489 (0.071097) | 0.362625 \/ 0.293841 (0.068784) | 0.033250 \/ 0.128546 (-0.095296) | 0.008880 \/ 0.075646 (-0.066766) | 0.329683 \/ 0.419271 (-0.089588) | 0.054426 \/ 0.043533 (0.010893) | 0.297940 \/ 0.255139 (0.042801) | 0.319796 \/ 0.283200 (0.036597) | 0.023296 \/ 0.141683 (-0.118387) | 1.462142 \/ 1.452155 (0.009987) | 1.495796 \/ 1.492716 (0.003079) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.201771 \/ 0.018006 (0.183765) | 0.454514 \/ 0.000490 (0.454024) | 0.003333 \/ 0.000200 (0.003133) | 0.000081 \/ 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.028084 \/ 0.037411 (-0.009327) | 0.109452 \/ 0.014526 (0.094926) | 0.119200 \/ 0.176557 (-0.057357) | 0.180302 \/ 0.737135 (-0.556834) | 0.125653 \/ 0.296338 (-0.170686) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.409819 \/ 0.215209 (0.194610) | 4.055117 \/ 2.077655 (1.977462) | 1.855279 \/ 1.504120 (0.351159) | 1.655281 \/ 1.541195 (0.114086) | 1.687938 \/ 1.468490 (0.219448) | 0.528352 \/ 4.584777 (-4.056425) | 
3.750250 \/ 3.745712 (0.004538) | 3.386741 \/ 5.269862 (-1.883121) | 1.572036 \/ 4.565676 (-2.993640) | 0.065125 \/ 0.424275 (-0.359150) | 0.011259 \/ 0.007607 (0.003652) | 0.513449 \/ 0.226044 (0.287405) | 5.139421 \/ 2.268929 (2.870492) | 2.316973 \/ 55.444624 (-53.127651) | 1.984109 \/ 6.876477 (-4.892368) | 2.127915 \/ 2.142072 (-0.014158) | 0.653238 \/ 4.805227 (-4.151989) | 0.142686 \/ 6.500664 (-6.357978) | 0.063666 \/ 0.075469 (-0.011803) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.185174 \/ 1.841788 (-0.656614) | 14.790282 \/ 8.074308 (6.715974) | 13.089222 \/ 10.191392 (2.897830) | 0.146055 \/ 0.680424 (-0.534369) | 0.017835 \/ 0.534201 (-0.516366) | 0.399598 \/ 0.579283 (-0.179685) | 0.425296 \/ 0.434364 (-0.009068) | 0.478552 \/ 0.540337 (-0.061786) | 0.579702 \/ 1.386936 (-0.807234) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006750 \/ 0.011353 (-0.004603) | 0.004156 \/ 0.011008 (-0.006853) | 0.074948 \/ 0.038508 (0.036440) | 0.043368 \/ 0.023109 (0.020259) | 0.355389 \/ 0.275898 (0.079491) | 0.429167 \/ 0.323480 (0.105687) | 0.003911 \/ 0.007986 (-0.004075) | 0.004340 \/ 0.004328 (0.000012) | 0.075940 \/ 0.004250 (0.071689) | 0.054293 \/ 0.037052 (0.017241) | 0.400317 \/ 0.258489 (0.141827) | 0.432001 \/ 0.293841 (0.138160) | 0.032340 \/ 0.128546 (-0.096206) | 0.008876 \/ 0.075646 (-0.066770) | 0.082284 \/ 0.419271 (-0.336987) | 0.050819 \/ 0.043533 (0.007286) | 0.351994 \/ 0.255139 (0.096855) | 0.375917 \/ 0.283200 (0.092717) | 0.022466 \/ 0.141683 (-0.119217) | 1.538824 \/ 1.452155 (0.086669) | 1.563995 \/ 1.492716 (0.071279) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.227330 \/ 0.018006 (0.209323) | 0.446380 \/ 0.000490 (0.445890) | 0.000408 \/ 0.000200 (0.000208) | 0.000058 \/ 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.028534 \/ 0.037411 (-0.008878) | 0.113467 \/ 0.014526 (0.098941) | 0.123590 \/ 0.176557 (-0.052966) | 0.174309 \/ 0.737135 (-0.562827) | 0.130631 \/ 0.296338 (-0.165707) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.441020 \/ 0.215209 (0.225811) | 4.386564 \/ 2.077655 (2.308909) | 2.100704 \/ 1.504120 (0.596584) | 1.901484 \/ 1.541195 (0.360289) | 1.963494 \/ 1.468490 (0.495004) | 0.536838 \/ 4.584777 (-4.047939) | 
3.739071 \/ 3.745712 (-0.006642) | 3.278981 \/ 5.269862 (-1.990881) | 1.515476 \/ 4.565676 (-3.050201) | 0.066388 \/ 0.424275 (-0.357887) | 0.011857 \/ 0.007607 (0.004250) | 0.545507 \/ 0.226044 (0.319463) | 5.441479 \/ 2.268929 (3.172550) | 2.602144 \/ 55.444624 (-52.842480) | 2.235583 \/ 6.876477 (-4.640894) | 2.293458 \/ 2.142072 (0.151385) | 0.658535 \/ 4.805227 (-4.146692) | 0.141327 \/ 6.500664 (-6.359337) | 0.063726 \/ 0.075469 (-0.011743) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.247819 \/ 1.841788 (-0.593968) | 15.234524 \/ 8.074308 (7.160216) | 14.592700 \/ 10.191392 (4.401308) | 0.141952 \/ 0.680424 (-0.538472) | 0.017747 \/ 0.534201 (-0.516454) | 0.396819 \/ 0.579283 (-0.182465) | 0.415902 \/ 0.434364 (-0.018462) | 0.464619 \/ 0.540337 (-0.075718) | 0.560866 \/ 1.386936 (-0.826070) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#4b7f6c59deb868e21f295917548fa2df10dd0158 \"CML watermark\")\n","
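For reference, a usage sketch of the two parameters this PR adds to the JSON loader. The names `encoding` and `errors` are taken from the PR title (mirroring `pd.read_json`), and the UTF-16 file path is hypothetical:

```py
from datasets import load_dataset

# Hypothetical UTF-16-encoded JSON Lines file; without `encoding`, the
# loader would attempt UTF-8 and fail to decode it.
ds = load_dataset(
    "json",
    data_files="reviews_utf16.jsonl",
    encoding="utf-16",   # codec used to decode the input files
    errors="replace",    # how to handle undecodable bytes, as in open()
)
```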
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008278 \/ 0.011353 (-0.003075) | 0.005044 \/ 0.011008 (-0.005964) | 0.123382 \/ 0.038508 (0.084874) | 0.054039 \/ 0.023109 (0.030929) | 0.382338 \/ 0.275898 (0.106440) | 0.453287 \/ 0.323480 (0.129807) | 0.006342 \/ 0.007986 (-0.001644) | 0.003930 \/ 0.004328 (-0.000398) | 0.094039 \/ 0.004250 (0.089789) | 0.076525 \/ 0.037052 (0.039472) | 0.394066 \/ 0.258489 (0.135577) | 0.445600 \/ 0.293841 (0.151759) | 0.039348 \/ 0.128546 (-0.089199) | 0.010485 \/ 0.075646 (-0.065161) | 0.433730 \/ 0.419271 (0.014459) | 0.082671 \/ 0.043533 (0.039138) | 0.375250 \/ 0.255139 (0.120111) | 0.416269 \/ 0.283200 (0.133070) | 0.038397 \/ 0.141683 (-0.103286) | 1.864834 \/ 1.452155 (0.412680) | 2.010453 \/ 1.492716 (0.517737) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.240008 \/ 0.018006 (0.222002) | 0.470975 \/ 0.000490 (0.470485) | 0.004001 \/ 0.000200 (0.003801) | 0.000097 \/ 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.031107 \/ 0.037411 (-0.006304) | 0.129371 \/ 0.014526 (0.114846) | 0.141559 \/ 0.176557 (-0.034997) | 0.205571 \/ 0.737135 (-0.531564) | 0.144611 \/ 0.296338 (-0.151728) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.506972 \/ 0.215209 (0.291763) | 5.055951 \/ 2.077655 (2.978296) | 2.397438 \/ 1.504120 (0.893318) | 2.170435 \/ 1.541195 (0.629240) | 2.240296 \/ 1.468490 (0.771806) | 0.641559 \/ 4.584777 (-3.943218) | 
4.644772 \/ 3.745712 (0.899060) | 4.064200 \/ 5.269862 (-1.205662) | 1.946991 \/ 4.565676 (-2.618685) | 0.086413 \/ 0.424275 (-0.337862) | 0.015082 \/ 0.007607 (0.007475) | 0.670413 \/ 0.226044 (0.444369) | 6.331346 \/ 2.268929 (4.062418) | 2.965813 \/ 55.444624 (-52.478812) | 2.547952 \/ 6.876477 (-4.328524) | 2.718390 \/ 2.142072 (0.576318) | 0.796657 \/ 4.805227 (-4.008571) | 0.173229 \/ 6.500664 (-6.327435) | 0.079606 \/ 0.075469 (0.004137) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.568761 \/ 1.841788 (-0.273026) | 18.485432 \/ 8.074308 (10.411124) | 15.758513 \/ 10.191392 (5.567121) | 0.170427 \/ 0.680424 (-0.509997) | 0.021421 \/ 0.534201 (-0.512780) | 0.518623 \/ 0.579283 (-0.060660) | 0.525887 \/ 0.434364 (0.091523) | 0.640331 \/ 0.540337 (0.099993) | 0.766748 \/ 1.386936 (-0.620188) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007680 \/ 0.011353 (-0.003673) | 0.005289 \/ 0.011008 (-0.005719) | 0.093773 \/ 0.038508 (0.055265) | 0.054997 \/ 0.023109 (0.031888) | 0.456277 \/ 0.275898 (0.180379) | 0.500642 \/ 0.323480 (0.177162) | 0.005935 \/ 0.007986 (-0.002050) | 0.004375 \/ 0.004328 (0.000047) | 0.094131 \/ 0.004250 (0.089881) | 0.063399 \/ 0.037052 (0.026347) | 0.470546 \/ 0.258489 (0.212057) | 0.504989 \/ 0.293841 (0.211148) | 0.038541 \/ 0.128546 (-0.090006) | 0.010403 \/ 0.075646 (-0.065244) | 0.102469 \/ 0.419271 (-0.316802) | 0.063105 \/ 0.043533 (0.019572) | 0.466005 \/ 0.255139 (0.210866) | 0.458677 \/ 0.283200 (0.175477) | 0.028407 \/ 0.141683 (-0.113276) | 1.893829 \/ 1.452155 (0.441675) | 1.917954 \/ 1.492716 (0.425238) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.272760 \/ 0.018006 (0.254754) | 0.476159 \/ 0.000490 (0.475669) | 0.008467 \/ 0.000200 (0.008267) | 0.000146 \/ 0.000054 (0.000091) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.035755 \/ 0.037411 (-0.001656) | 0.145038 \/ 0.014526 (0.130512) | 0.148322 \/ 0.176557 (-0.028235) | 0.210193 \/ 0.737135 (-0.526943) | 0.156547 \/ 0.296338 (-0.139792) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.541204 \/ 0.215209 (0.325995) | 5.382746 \/ 2.077655 (3.305091) | 2.704229 \/ 1.504120 (1.200109) | 2.468422 \/ 1.541195 (0.927227) | 2.522672 \/ 1.468490 (1.054182) | 0.644899 \/ 4.584777 (-3.939878) | 
4.654401 \/ 3.745712 (0.908689) | 2.159223 \/ 5.269862 (-3.110638) | 1.280098 \/ 4.565676 (-3.285578) | 0.080053 \/ 0.424275 (-0.344222) | 0.014383 \/ 0.007607 (0.006776) | 0.662770 \/ 0.226044 (0.436725) | 6.617651 \/ 2.268929 (4.348722) | 3.234347 \/ 55.444624 (-52.210277) | 2.861417 \/ 6.876477 (-4.015059) | 2.888928 \/ 2.142072 (0.746856) | 0.792854 \/ 4.805227 (-4.012374) | 0.172553 \/ 6.500664 (-6.328111) | 0.078402 \/ 0.075469 (0.002933) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.565351 \/ 1.841788 (-0.276436) | 18.681916 \/ 8.074308 (10.607608) | 17.264473 \/ 10.191392 (7.073081) | 0.168461 \/ 0.680424 (-0.511963) | 0.021353 \/ 0.534201 (-0.512848) | 0.517843 \/ 0.579283 (-0.061440) | 0.519907 \/ 0.434364 (0.085543) | 0.623687 \/ 0.540337 (0.083350) | 0.761796 \/ 1.386936 (-0.625140) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#bbf58747f734a46e75937bdbcbc05b06ade0224a \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006750 \/ 0.011353 (-0.004603) | 0.004268 \/ 0.011008 (-0.006741) | 0.098644 \/ 0.038508 (0.060136) | 0.044643 \/ 0.023109 (0.021534) | 0.309420 \/ 0.275898 (0.033522) | 0.379294 \/ 0.323480 (0.055815) | 0.005729 \/ 0.007986 (-0.002256) | 0.003615 \/ 0.004328 (-0.000714) | 0.076086 \/ 0.004250 (0.071835) | 0.068994 \/ 0.037052 (0.031942) | 0.325653 \/ 0.258489 (0.067164) | 0.375187 \/ 0.293841 (0.081347) | 0.032546 \/ 0.128546 (-0.096000) | 0.009089 \/ 0.075646 (-0.066557) | 0.329905 \/ 0.419271 (-0.089366) | 0.066832 \/ 0.043533 (0.023300) | 0.299247 \/ 0.255139 (0.044108) | 0.323460 \/ 0.283200 (0.040260) | 0.034226 \/ 0.141683 (-0.107457) | 1.475659 \/ 1.452155 (0.023505) | 1.556234 \/ 1.492716 (0.063518) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.292305 \/ 0.018006 (0.274299) | 0.542584 \/ 0.000490 (0.542094) | 0.003047 \/ 0.000200 (0.002847) | 0.000082 \/ 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030096 \/ 0.037411 (-0.007315) | 0.112341 \/ 0.014526 (0.097815) | 0.124965 \/ 0.176557 (-0.051591) | 0.183159 \/ 0.737135 (-0.553976) | 0.131885 \/ 0.296338 (-0.164453) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.426437 \/ 0.215209 (0.211228) | 4.260984 \/ 2.077655 (2.183330) | 2.078358 \/ 1.504120 (0.574238) | 1.877644 \/ 1.541195 (0.336449) | 2.044036 \/ 1.468490 (0.575546) | 0.532980 \/ 4.584777 (-4.051797) | 
3.749573 \/ 3.745712 (0.003860) | 1.944155 \/ 5.269862 (-3.325706) | 1.090307 \/ 4.565676 (-3.475370) | 0.065445 \/ 0.424275 (-0.358830) | 0.011237 \/ 0.007607 (0.003630) | 0.521448 \/ 0.226044 (0.295403) | 5.213118 \/ 2.268929 (2.944189) | 2.507829 \/ 55.444624 (-52.936795) | 2.177179 \/ 6.876477 (-4.699297) | 2.351161 \/ 2.142072 (0.209088) | 0.656775 \/ 4.805227 (-4.148452) | 0.141207 \/ 6.500664 (-6.359457) | 0.063286 \/ 0.075469 (-0.012183) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.190281 \/ 1.841788 (-0.651506) | 15.327424 \/ 8.074308 (7.253116) | 13.300695 \/ 10.191392 (3.109303) | 0.190484 \/ 0.680424 (-0.489939) | 0.017984 \/ 0.534201 (-0.516217) | 0.405714 \/ 0.579283 (-0.173569) | 0.435915 \/ 0.434364 (0.001551) | 0.494083 \/ 0.540337 (-0.046254) | 0.600616 \/ 1.386936 (-0.786320) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006740 \/ 0.011353 (-0.004613) | 0.004289 \/ 0.011008 (-0.006719) | 0.076532 \/ 0.038508 (0.038024) | 0.043305 \/ 0.023109 (0.020196) | 0.356111 \/ 0.275898 (0.080213) | 0.434121 \/ 0.323480 (0.110641) | 0.005599 \/ 0.007986 (-0.002387) | 0.003461 \/ 0.004328 (-0.000868) | 0.077097 \/ 0.004250 (0.072847) | 0.055369 \/ 0.037052 (0.018317) | 0.367093 \/ 0.258489 (0.108604) | 0.418801 \/ 0.293841 (0.124960) | 0.032057 \/ 0.128546 (-0.096489) | 0.009048 \/ 0.075646 (-0.066599) | 0.082897 \/ 0.419271 (-0.336374) | 0.050287 \/ 0.043533 (0.006754) | 0.352060 \/ 0.255139 (0.096921) | 0.376278 \/ 0.283200 (0.093078) | 0.023924 \/ 0.141683 (-0.117759) | 1.522780 \/ 1.452155 (0.070626) | 1.578938 \/ 1.492716 (0.086222) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.287317 \/ 0.018006 (0.269311) | 0.508490 \/ 0.000490 (0.508000) | 0.000431 \/ 0.000200 (0.000231) | 0.000056 \/ 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.031139 \/ 0.037411 (-0.006272) | 0.113927 \/ 0.014526 (0.099401) | 0.128147 \/ 0.176557 (-0.048409) | 0.179712 \/ 0.737135 (-0.557424) | 0.134364 \/ 0.296338 (-0.161975) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.452834 \/ 0.215209 (0.237625) | 4.507944 \/ 2.077655 (2.430289) | 2.287758 \/ 1.504120 (0.783638) | 2.091145 \/ 1.541195 (0.549951) | 2.196228 \/ 1.468490 (0.727738) | 0.539306 \/ 4.584777 (-4.045471) | 
3.838941 \/ 3.745712 (0.093228) | 1.908801 \/ 5.269862 (-3.361060) | 1.139235 \/ 4.565676 (-3.426442) | 0.066677 \/ 0.424275 (-0.357599) | 0.011422 \/ 0.007607 (0.003815) | 0.562966 \/ 0.226044 (0.336921) | 5.633712 \/ 2.268929 (3.364784) | 2.788622 \/ 55.444624 (-52.656002) | 2.438465 \/ 6.876477 (-4.438012) | 2.523479 \/ 2.142072 (0.381407) | 0.668730 \/ 4.805227 (-4.136498) | 0.143977 \/ 6.500664 (-6.356687) | 0.064661 \/ 0.075469 (-0.010808) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.291708 \/ 1.841788 (-0.550080) | 15.573316 \/ 8.074308 (7.499008) | 14.435099 \/ 10.191392 (4.243707) | 0.147745 \/ 0.680424 (-0.532679) | 0.017602 \/ 0.534201 (-0.516599) | 0.401560 \/ 0.579283 (-0.177723) | 0.429861 \/ 0.434364 (-0.004502) | 0.469800 \/ 0.540337 (-0.070538) | 0.567515 \/ 1.386936 (-0.819421) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#79c340f5dcfd06340f180f6c6ea2d5ef81f49d98 \"CML watermark\")\n"],"created_at":1687271315000,"updated_at":1687354790000,"closed_at":1687354342000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5969","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5969","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5969.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5969.patch","merged_at":1687354342000},"body":"\"Requested\" in https:\/\/discuss.huggingface.co\/t\/utf-16-for-datasets\/43828\/3.\r\n\r\n`pd.read_json` also has these parameters, so it makes sense to be consistent.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5969\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5969\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5968","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5968\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5968\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5968\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5968","id":1765252561,"node_id":"I_kwDODunzps5pN53R","number":5968,"title":"Common Voice datasets still need 
`use_auth_token=True`","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["cc @pcuenca as well. \r\n\r\nNot super urgent btw","The issue commes from the dataset itself and is not related to the `datasets` lib\r\n\r\nsee https:\/\/huggingface.co\/datasets\/mozilla-foundation\/common_voice_6_1\/blob\/2c475b3b88e0f2e5828f830a4b91618a25ff20b7\/common_voice_6_1.py#L148-L152","Let's remove these lines in the dataset no? cc @anton-l @Vaibhavs10 "],"created_at":1687262317000,"updated_at":1687342117000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nWe don't need to pass `use_auth_token=True` anymore to download gated datasets or models, so the following should work if correctly logged in.\r\n\r\n```py\r\nfrom datasets import load_dataset\r\n\r\nload_dataset(\"mozilla-foundation\/common_voice_6_1\", \"tr\", split=\"train+validation\")\r\n```\r\n\r\nHowever it throws an error - probably because something weird is hardcoded into the dataset loading script.\n\n### Steps to reproduce the bug\n\n1.) \r\n```\r\nhuggingface-cli login\r\n```\r\n\r\n2.) Make sure that you have accepted the license here:\r\nhttps:\/\/huggingface.co\/datasets\/mozilla-foundation\/common_voice_6_1\r\n\r\n3.) Run:\r\n```py\r\nfrom datasets import load_dataset\r\n\r\nload_dataset(\"mozilla-foundation\/common_voice_6_1\", \"tr\", split=\"train+validation\")\r\n```\r\n\r\n4.) 
You'll get:\r\n\r\n```\r\nFile ~\/hf\/lib\/python3.10\/site-packages\/datasets\/builder.py:963, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)\r\n 961 split_dict = SplitDict(dataset_name=self.name)\r\n 962 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)\r\n--> 963 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n 965 # Checksums verification\r\n 966 if verification_mode == VerificationMode.ALL_CHECKS and dl_manager.record_checksums:\r\n\r\nFile ~\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/mozilla-foundation--common_voice_6_1\/f4d7854c466f5bd4908988dbd39044ec4fc634d89e0515ab0c51715c0127ffe3\/common_voice_6_1.py:150, in CommonVoice._split_generators(self, dl_manager)\r\n 148 hf_auth_token = dl_manager.download_config.use_auth_token\r\n 149 if hf_auth_token is None:\r\n--> 150 raise ConnectionError(\r\n 151 \"Please set use_auth_token=True or use_auth_token='' to download this dataset\"\r\n 152 )\r\n 154 bundle_url_template = STATS[\"bundleURLTemplate\"]\r\n 155 bundle_version = bundle_url_template.split(\"\/\")[0]\r\n\r\nConnectionError: Please set use_auth_token=True or use_auth_token='' to download this dataset\r\n```\n\n### Expected behavior\n\nOne should not have to pass `use_auth_token=True`. Also see discussion here: https:\/\/github.com\/huggingface\/blog\/pull\/1243#discussion_r1235131150\n\n### Environment info\n\n```\r\n- `datasets` version: 2.13.0\r\n- Platform: Linux-6.2.0-76060200-generic-x86_64-with-glibc2.35\r\n- Python version: 3.10.6\r\n- Huggingface_hub version: 0.16.0.dev0\r\n- PyArrow version: 11.0.0\r\n- Pandas version: 1.5.3\r\n```","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5968\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5968\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5967","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5967\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5967\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5967\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5967","id":1763926520,"node_id":"I_kwDODunzps5pI2H4","number":5967,"title":"Config name \/ split name lost after map with 
multiproc","user":{"login":"sanchit-gandhi","id":93869735,"node_id":"U_kgDOBZhWpw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/93869735?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sanchit-gandhi","html_url":"https:\/\/github.com\/sanchit-gandhi","followers_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/followers","following_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/orgs","repos_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/repos","events_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This must be due to DatasetInfo.from_merge which drops them and is used in `concatenate_datasets`.\r\n\r\nAnd you're experiencing this issue because multiprocessing does concatenate the resulting datasets from each process.\r\n\r\nMaybe they should be kept if all the subdatasets share the same values for config_name and split","That sounds like a clean workaround!"],"created_at":1687195656000,"updated_at":1687942525000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nPerforming a `.map` method on a dataset loses it's config name \/ split name only if run with multiproc\n\n### Steps to reproduce the bug\n\n```python\r\nfrom datasets import Audio, load_dataset\r\nfrom transformers import AutoFeatureExtractor\r\nimport numpy as np\r\n\r\n# load dummy dataset\r\nlibri = load_dataset(\"hf-internal-testing\/librispeech_asr_dummy\", \"clean\")\r\n\r\n# make train \/ test splits\r\nlibri = libri[\"validation\"].train_test_split(seed=42, shuffle=True, test_size=0.1)\r\n\r\n# example feature extractor\r\nmodel_id = \"ntu-spml\/distilhubert\"\r\nfeature_extractor = AutoFeatureExtractor.from_pretrained(model_id, do_normalize=True, return_attention_mask=True)\r\n\r\nsampling_rate = feature_extractor.sampling_rate\r\n\r\nlibri = libri.cast_column(\"audio\", Audio(sampling_rate=sampling_rate))\r\n\r\nmax_duration = 30.0\r\n\r\ndef preprocess_function(examples):\r\n audio_arrays = [x[\"array\"] for x in examples[\"audio\"]]\r\n inputs = feature_extractor(\r\n audio_arrays,\r\n sampling_rate=feature_extractor.sampling_rate,\r\n max_length=int(feature_extractor.sampling_rate * max_duration),\r\n truncation=True,\r\n return_attention_mask=True,\r\n )\r\n return inputs\r\n\r\n# single proc map\r\nlibri_encoded = libri.map(\r\n preprocess_function, remove_columns=[\"audio\", \"file\"], batched=True, num_proc=1\r\n)\r\n\r\nprint(10 * \"=\" ,\"Single processing\", 10 * \"=\")\r\nprint(\"Config name before: \", libri[\"train\"].config_name, \" Split name before: \", libri[\"train\"].split)\r\nprint(\"Config name after: \", libri_encoded[\"train\"].config_name, \" Split name after: \", libri_encoded[\"train\"].split)\r\n\r\n# multi proc map\r\nlibri_encoded = libri.map(\r\n preprocess_function, remove_columns=[\"audio\", \"file\"], batched=True, num_proc=2\r\n)\r\n\r\nprint(10 * \"=\" 
,\"Multi processing\", 10 * \"=\")\r\nprint(\"Config name before: \", libri[\"train\"].config_name, \" Split name before: \", libri[\"train\"].split)\r\nprint(\"Config name after: \", libri_encoded[\"train\"].config_name, \" Split name after: \", libri_encoded[\"train\"].split)\r\n```\r\n\r\n**Print Output:**\r\n```\r\n========== Single processing ==========\r\nConfig name before: clean Split name before: validation\r\nConfig name after: clean Split name after: validation\r\n========== Multi processing ==========\r\nConfig name before: clean Split name before: validation\r\nConfig name after: None Split name after: None\r\n```\r\n\r\n=> we can see that the config\/split names are lost in the multiprocessing setting\r\n\r\n\r\n\n\n### Expected behavior\n\nShould retain both config \/ split names in the multiproc setting\n\n### Environment info\n\n- `datasets` version: 2.13.1.dev0\r\n- Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.35\r\n- Python version: 3.10.6\r\n- Huggingface_hub version: 0.15.1\r\n- PyArrow version: 12.0.0\r\n- Pandas version: 2.0.2","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5967\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5967\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5966","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5966\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5966\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5966\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5966","id":1763885914,"node_id":"PR_kwDODunzps5TXBLP","number":5966,"title":"Fix JSON generation in benchmarks CI","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006186 \/ 0.011353 (-0.005167) | 0.003744 \/ 0.011008 (-0.007264) | 0.097295 \/ 0.038508 (0.058787) | 0.037106 \/ 0.023109 (0.013997) | 0.424154 \/ 0.275898 (0.148256) | 0.474536 \/ 0.323480 (0.151057) | 0.003454 \/ 0.007986 (-0.004532) | 0.003865 \/ 0.004328 (-0.000463) | 0.077348 \/ 0.004250 (0.073097) | 0.051728 \/ 0.037052 (0.014675) | 0.437120 \/ 0.258489 (0.178631) | 0.478379 \/ 0.293841 (0.184538) | 0.028939 \/ 0.128546 (-0.099608) | 0.008376 \/ 0.075646 (-0.067270) | 0.312002 \/ 0.419271 (-0.107270) | 0.053723 \/ 0.043533 (0.010190) | 0.424815 \/ 0.255139 (0.169676) | 0.446203 \/ 0.283200 (0.163004) | 0.026553 \/ 0.141683 (-0.115130) | 1.479983 \/ 1.452155 (0.027828) | 1.530613 \/ 1.492716 (0.037896) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.196627 \/ 0.018006 (0.178620) | 0.422361 \/ 0.000490 (0.421871) | 0.003442 \/ 0.000200 (0.003242) | 0.000077 \/ 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.022913 \/ 0.037411 (-0.014499) | 0.096011 \/ 0.014526 (0.081485) | 0.104091 \/ 0.176557 (-0.072466) | 0.163273 \/ 0.737135 (-0.573862) | 0.109142 \/ 0.296338 (-0.187197) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.431032 \/ 0.215209 (0.215823) | 4.314391 \/ 2.077655 (2.236737) | 2.003812 \/ 1.504120 (0.499692) | 1.799538 \/ 1.541195 (0.258344) | 1.830026 \/ 1.468490 (0.361536) | 0.560131 \/ 4.584777 (-4.024646) | 
3.368997 \/ 3.745712 (-0.376715) | 1.703032 \/ 5.269862 (-3.566830) | 1.026949 \/ 4.565676 (-3.538727) | 0.067507 \/ 0.424275 (-0.356768) | 0.010910 \/ 0.007607 (0.003303) | 0.532606 \/ 0.226044 (0.306562) | 5.345179 \/ 2.268929 (3.076250) | 2.368077 \/ 55.444624 (-53.076548) | 2.028913 \/ 6.876477 (-4.847564) | 2.147621 \/ 2.142072 (0.005549) | 0.675696 \/ 4.805227 (-4.129531) | 0.134902 \/ 6.500664 (-6.365762) | 0.065004 \/ 0.075469 (-0.010465) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.233412 \/ 1.841788 (-0.608376) | 13.767465 \/ 8.074308 (5.693157) | 13.933653 \/ 10.191392 (3.742261) | 0.129010 \/ 0.680424 (-0.551414) | 0.016708 \/ 0.534201 (-0.517493) | 0.362341 \/ 0.579283 (-0.216942) | 0.390902 \/ 0.434364 (-0.043462) | 0.429156 \/ 0.540337 (-0.111182) | 0.521166 \/ 1.386936 (-0.865770) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006169 \/ 0.011353 (-0.005184) | 0.003839 \/ 0.011008 (-0.007169) | 0.078784 \/ 0.038508 (0.040276) | 0.040218 \/ 0.023109 (0.017109) | 0.360439 \/ 0.275898 (0.084541) | 0.423957 \/ 0.323480 (0.100477) | 0.003456 \/ 0.007986 (-0.004529) | 0.002900 \/ 0.004328 (-0.001428) | 0.078820 \/ 0.004250 (0.074569) | 0.047240 \/ 0.037052 (0.010187) | 0.372081 \/ 0.258489 (0.113592) | 0.424263 \/ 0.293841 (0.130422) | 0.027977 \/ 0.128546 (-0.100569) | 0.008400 \/ 0.075646 (-0.067246) | 0.084399 \/ 0.419271 (-0.334872) | 0.043303 \/ 0.043533 (-0.000230) | 0.361583 \/ 0.255139 (0.106444) | 0.394987 \/ 0.283200 (0.111787) | 0.020006 \/ 0.141683 (-0.121677) | 1.520208 \/ 1.452155 (0.068053) | 1.587335 \/ 1.492716 (0.094619) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.223847 \/ 0.018006 (0.205840) | 0.402194 \/ 0.000490 (0.401704) | 0.000384 \/ 0.000200 (0.000184) | 0.000057 \/ 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.024902 \/ 0.037411 (-0.012509) | 0.099076 \/ 0.014526 (0.084550) | 0.108041 \/ 0.176557 (-0.068516) | 0.159385 \/ 0.737135 (-0.577750) | 0.111442 \/ 0.296338 (-0.184896) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.446232 \/ 0.215209 (0.231023) | 4.464927 \/ 2.077655 (2.387272) | 2.155234 \/ 1.504120 (0.651114) | 1.953645 \/ 1.541195 (0.412450) | 1.965991 \/ 1.468490 (0.497501) | 0.553473 \/ 4.584777 (-4.031304) | 
3.321397 \/ 3.745712 (-0.424315) | 1.693761 \/ 5.269862 (-3.576101) | 1.006299 \/ 4.565676 (-3.559378) | 0.067013 \/ 0.424275 (-0.357262) | 0.011116 \/ 0.007607 (0.003509) | 0.555014 \/ 0.226044 (0.328970) | 5.535694 \/ 2.268929 (3.266765) | 2.598339 \/ 55.444624 (-52.846285) | 2.249298 \/ 6.876477 (-4.627179) | 2.243419 \/ 2.142072 (0.101347) | 0.667603 \/ 4.805227 (-4.137624) | 0.133322 \/ 6.500664 (-6.367343) | 0.065473 \/ 0.075469 (-0.009996) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.293051 \/ 1.841788 (-0.548737) | 14.103731 \/ 8.074308 (6.029423) | 14.215204 \/ 10.191392 (4.023812) | 0.143990 \/ 0.680424 (-0.536434) | 0.016805 \/ 0.534201 (-0.517396) | 0.363264 \/ 0.579283 (-0.216019) | 0.392769 \/ 0.434364 (-0.041594) | 0.425291 \/ 0.540337 (-0.115046) | 0.515479 \/ 1.386936 (-0.871457) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#e03a58f3f5d7e6f07279fb833e62d859a0babaad \"CML watermark\")\n","_The documentation is not available anymore as the PR was closed or merged._","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006346 \/ 0.011353 (-0.005006) | 0.004130 \/ 0.011008 (-0.006878) | 0.096898 \/ 0.038508 (0.058390) | 0.042564 \/ 0.023109 (0.019455) | 0.343748 \/ 0.275898 (0.067850) | 0.412515 \/ 0.323480 (0.089035) | 0.006153 \/ 0.007986 (-0.001833) | 0.003345 \/ 0.004328 (-0.000984) | 0.075314 \/ 0.004250 (0.071064) | 0.061478 \/ 0.037052 (0.024426) | 0.362948 \/ 0.258489 (0.104459) | 0.401533 \/ 0.293841 (0.107692) | 0.032363 \/ 0.128546 (-0.096184) | 0.008780 \/ 0.075646 (-0.066867) | 0.328691 \/ 0.419271 (-0.090580) | 0.054253 \/ 0.043533 (0.010721) | 0.340783 \/ 0.255139 (0.085644) | 0.360705 \/ 0.283200 (0.077505) | 0.023183 \/ 0.141683 (-0.118500) | 1.484078 \/ 1.452155 (0.031924) | 1.528581 \/ 1.492716 (0.035865) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.208732 \/ 0.018006 (0.190726) | 0.452572 \/ 0.000490 (0.452082) | 0.002936 \/ 0.000200 (0.002737) | 0.000082 \/ 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.024616 \/ 0.037411 (-0.012795) | 0.107547 \/ 0.014526 (0.093021) | 0.114492 \/ 0.176557 (-0.062065) | 0.171770 \/ 0.737135 (-0.565365) | 0.122538 \/ 0.296338 (-0.173800) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.406140 \/ 0.215209 (0.190930) | 4.062391 \/ 2.077655 (1.984736) | 1.865962 \/ 1.504120 (0.361842) | 1.682236 \/ 1.541195 (0.141041) | 1.738119 \/ 1.468490 (0.269629) | 0.532244 \/ 4.584777 (-4.052533) | 
3.816421 \/ 3.745712 (0.070709) | 2.981205 \/ 5.269862 (-2.288656) | 1.519497 \/ 4.565676 (-3.046179) | 0.065904 \/ 0.424275 (-0.358371) | 0.011277 \/ 0.007607 (0.003670) | 0.512789 \/ 0.226044 (0.286745) | 5.107618 \/ 2.268929 (2.838690) | 2.419399 \/ 55.444624 (-53.025226) | 2.079262 \/ 6.876477 (-4.797214) | 2.150447 \/ 2.142072 (0.008375) | 0.696737 \/ 4.805227 (-4.108490) | 0.142497 \/ 6.500664 (-6.358167) | 0.063521 \/ 0.075469 (-0.011949) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.180692 \/ 1.841788 (-0.661095) | 14.343084 \/ 8.074308 (6.268776) | 13.303719 \/ 10.191392 (3.112327) | 0.164234 \/ 0.680424 (-0.516190) | 0.017439 \/ 0.534201 (-0.516762) | 0.399712 \/ 0.579283 (-0.179571) | 0.428248 \/ 0.434364 (-0.006115) | 0.471909 \/ 0.540337 (-0.068428) | 0.573853 \/ 1.386936 (-0.813083) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006210 \/ 0.011353 (-0.005143) | 0.004104 \/ 0.011008 (-0.006905) | 0.075140 \/ 0.038508 (0.036632) | 0.044647 \/ 0.023109 (0.021538) | 0.370120 \/ 0.275898 (0.094222) | 0.452936 \/ 0.323480 (0.129457) | 0.003943 \/ 0.007986 (-0.004042) | 0.003285 \/ 0.004328 (-0.001043) | 0.075267 \/ 0.004250 (0.071017) | 0.055517 \/ 0.037052 (0.018465) | 0.396385 \/ 0.258489 (0.137896) | 0.447870 \/ 0.293841 (0.154029) | 0.031342 \/ 0.128546 (-0.097204) | 0.008720 \/ 0.075646 (-0.066926) | 0.082702 \/ 0.419271 (-0.336570) | 0.051010 \/ 0.043533 (0.007477) | 0.350546 \/ 0.255139 (0.095407) | 0.425395 \/ 0.283200 (0.142195) | 0.024483 \/ 0.141683 (-0.117200) | 1.467341 \/ 1.452155 (0.015186) | 1.537187 \/ 1.492716 (0.044471) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.218067 \/ 0.018006 (0.200061) | 0.441603 \/ 0.000490 (0.441114) | 0.003711 \/ 0.000200 (0.003512) | 0.000092 \/ 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.028669 \/ 0.037411 (-0.008742) | 0.112941 \/ 0.014526 (0.098415) | 0.122584 \/ 0.176557 (-0.053972) | 0.176494 \/ 0.737135 (-0.560641) | 0.129369 \/ 0.296338 (-0.166970) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.434543 \/ 0.215209 (0.219334) | 4.344056 \/ 2.077655 (2.266401) | 2.079286 \/ 1.504120 (0.575166) | 1.887264 \/ 1.541195 (0.346069) | 1.910386 \/ 1.468490 (0.441896) | 0.538824 \/ 4.584777 (-4.045953) | 
3.844786 \/ 3.745712 (0.099074) | 2.902091 \/ 5.269862 (-2.367770) | 1.270852 \/ 4.565676 (-3.294824) | 0.066324 \/ 0.424275 (-0.357951) | 0.011346 \/ 0.007607 (0.003739) | 0.537122 \/ 0.226044 (0.311078) | 5.367354 \/ 2.268929 (3.098426) | 2.533672 \/ 55.444624 (-52.910952) | 2.203260 \/ 6.876477 (-4.673217) | 2.224310 \/ 2.142072 (0.082237) | 0.663806 \/ 4.805227 (-4.141422) | 0.142758 \/ 6.500664 (-6.357906) | 0.063870 \/ 0.075469 (-0.011599) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.260487 \/ 1.841788 (-0.581301) | 14.800106 \/ 8.074308 (6.725798) | 13.993488 \/ 10.191392 (3.802096) | 0.165829 \/ 0.680424 (-0.514595) | 0.017347 \/ 0.534201 (-0.516854) | 0.401819 \/ 0.579283 (-0.177464) | 0.424577 \/ 0.434364 (-0.009787) | 0.475161 \/ 0.540337 (-0.065176) | 0.574659 \/ 1.386936 (-0.812277) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#02e1e9ab6df4720f57b2d08c0b800cecac79a7c8 \"CML watermark\")\n"],"created_at":1687193766000,"updated_at":1687195751000,"closed_at":1687195330000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5966","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5966","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5966.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5966.patch","merged_at":1687195330000},"body":"Related to changes made in https:\/\/github.com\/iterative\/dvc\/pull\/9475","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5966\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5966\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5965","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5965\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5965\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5965\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5965","id":1763648540,"node_id":"I_kwDODunzps5pHyQc","number":5965,"title":"\"Couldn't cast array of type\" in complex 
datasets","user":{"login":"piercefreeman","id":1712066,"node_id":"MDQ6VXNlcjE3MTIwNjY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1712066?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/piercefreeman","html_url":"https:\/\/github.com\/piercefreeman","followers_url":"https:\/\/api.github.com\/users\/piercefreeman\/followers","following_url":"https:\/\/api.github.com\/users\/piercefreeman\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/piercefreeman\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/piercefreeman\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/piercefreeman\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/piercefreeman\/orgs","repos_url":"https:\/\/api.github.com\/users\/piercefreeman\/repos","events_url":"https:\/\/api.github.com\/users\/piercefreeman\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/piercefreeman\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":{"login":"mariosasko","id":47462742.0,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"assignees":[{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting! 
\r\n\r\nSpecifying the target features explicitly should avoid this error:\r\n```python\r\ndataset = dataset.map(\r\n batch_process,\r\n batched=True,\r\n batch_size=1,\r\n num_proc=1,\r\n remove_columns=dataset.column_names,\r\n features=datasets.Features({\"texts\": datasets.Sequence(datasets.Value(\"string\"))})\r\n)\r\n```\r\n\r\nThis error stems from our type promotion not handling the nested case. But this promotion\/casting allocates memory in most scenarios, which can be problematic for large datasets, so explicitly passing the features is the optimal solution.","Hi @mariosasko thanks for the context, this is helpful to know. Would it be worth having some logic to generate this explicit feature specification automatically if a type annotation for a .map returns a dataclass that can be inferred?\r\n\r\nFeels like something that would be easy to implement and could save memory \/ deal with this case in a standardized way.","> . Would it be worth having some logic to generate this explicit feature specification automatically if a type annotation for a .map returns a dataclass that can be inferred?\r\n\r\nInteresting proposal! Yes, we could consider doing this if the (return) type hint is `TypedDict`, and raise an error that type hints are incorrect if the cast using the inferred types fails."],"created_at":1687184174000,"updated_at":1687377603000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nWhen doing a map of a dataset with complex types, sometimes `datasets` is unable to interpret the valid schema of a returned datasets.map() function. This often comes from conflicting types, like when both empty lists and filled lists are competing for the same field value.\r\n\r\nThis is prone to happen in batch mapping, when the mapper returns a sequence of null\/empty values and other batches are non-null. A workaround is to manually cast the new batch to a pyarrow table (like implemented in this [workaround](https:\/\/github.com\/piercefreeman\/lassen\/pull\/3)) but it feels like this ideally should be solved at the core library level.\r\n\r\nNote that the reproduction case only throws this error if the first datapoint has the empty list. 
If it is processed later, datasets already detects its representation as list-type and therefore allows the empty list to be provided.\n\n### Steps to reproduce the bug\n\nA trivial reproduction case:\r\n\r\n```python\r\nfrom typing import Iterator, Any\r\nimport pandas as pd\r\nimport pytest\r\nfrom datasets import Dataset\r\n\r\ndef batch_to_examples(batch: dict[str, list[Any]]) -> Iterator[dict[str, Any]]:\r\n lengths = [len(values) for values in batch.values()]\r\n for i in range(next(iter(lengths))):\r\n yield {feature: values[i] for feature, values in batch.items()}\r\n\r\ndef examples_to_batch(examples) -> dict[str, list[Any]]:\r\n batch = {}\r\n\r\n for example in examples:\r\n for feature, value in example.items():\r\n if feature not in batch:\r\n batch[feature] = []\r\n batch[feature].append(value)\r\n\r\n return batch\r\n\r\ndef batch_process(examples):\r\n new_examples = []\r\n for example in batch_to_examples(examples):\r\n new_examples.append(dict(texts=example[\"raw_text\"].split()))\r\n return examples_to_batch(new_examples)\r\n\r\ndf = pd.DataFrame(\r\n [\r\n {\"raw_text\": \"\"},\r\n {\"raw_text\": \"This is a test\"},\r\n {\"raw_text\": \"This is another test\"},\r\n ]\r\n)\r\n\r\ndataset = Dataset.from_pandas(df)\r\n\r\n# datasets won't be able to infer the column type when the first example is an empty list.\r\nwith pytest.raises(TypeError, match=\"Couldn't cast array of type\"):\r\n dataset = dataset.map(\r\n batch_process,\r\n batched=True,\r\n batch_size=1,\r\n num_proc=1,\r\n remove_columns=dataset.column_names,\r\n )\r\n```\r\n\r\nThis results in crashes like:\r\n\r\n```bash\r\n File \"\/Users\/piercefreeman\/Library\/Caches\/pypoetry\/virtualenvs\/example-9kBqeSPy-py3.11\/lib\/python3.11\/site-packages\/datasets\/table.py\", line 1819, in wrapper\r\n return func(array, *args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"\/Users\/piercefreeman\/Library\/Caches\/pypoetry\/virtualenvs\/example-9kBqeSPy-py3.11\/lib\/python3.11\/site-packages\/datasets\/table.py\", line 2109, in cast_array_to_feature\r\n return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"\/Users\/piercefreeman\/Library\/Caches\/pypoetry\/virtualenvs\/example-9kBqeSPy-py3.11\/lib\/python3.11\/site-packages\/datasets\/table.py\", line 1819, in wrapper\r\n return func(array, *args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"\/Users\/piercefreeman\/Library\/Caches\/pypoetry\/virtualenvs\/example-9kBqeSPy-py3.11\/lib\/python3.11\/site-packages\/datasets\/table.py\", line 1998, in array_cast\r\n raise TypeError(f\"Couldn't cast array of type {array.type} to {pa_type}\")\r\nTypeError: Couldn't cast array of type string to null\r\n```\n\n### Expected behavior\n\nThe code should successfully map and create a new dataset without error.\n\n### Environment info\n\nMac OSX, Linux","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5965\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5965\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false}
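The failed cast in the report above follows directly from Arrow's type inference on the first batch. Below is a minimal sketch (not part of the original thread) that illustrates the inference problem with plain `pyarrow` and shows the explicit `features` the maintainer suggests passing to `map`; it assumes `pyarrow` and `datasets` are installed, and the `texts` column name simply mirrors the reproduction.

```python
import pyarrow as pa
import datasets

# A first batch whose only value is an empty list is inferred as list<item: null> ...
empty_first = pa.array([[]])
print(empty_first.type)  # list<item: null>

# ... while a later batch of real strings is list<item: string>, which cannot be
# cast back to the null-typed schema: "Couldn't cast array of type string to null".
filled = pa.array([["This", "is", "a", "test"]])
print(filled.type)  # list<item: string>

# Declaring the target features up front, as suggested in the comments,
# bypasses the per-batch inference entirely:
features = datasets.Features({"texts": datasets.Sequence(datasets.Value("string"))})
print(features.arrow_schema)  # texts: list<item: string>
```

With `features` pinned, the empty first batch is written as an empty `list<string>` column rather than `list<null>`, so later non-empty batches cast cleanly.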
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5964","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5964\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5964\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5964\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5964","id":1763513574,"node_id":"PR_kwDODunzps5TVweZ","number":5964,"title":"Always return list in `list_datasets`","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006795 \/ 0.011353 (-0.004558) | 0.004170 \/ 0.011008 (-0.006838) | 0.098698 \/ 0.038508 (0.060190) | 0.045393 \/ 0.023109 (0.022284) | 0.309205 \/ 0.275898 (0.033307) | 0.361333 \/ 0.323480 (0.037853) | 0.006009 \/ 0.007986 (-0.001977) | 0.003334 \/ 0.004328 (-0.000995) | 0.075071 \/ 0.004250 (0.070821) | 0.062587 \/ 0.037052 (0.025535) | 0.322395 \/ 0.258489 (0.063906) | 0.360499 \/ 0.293841 (0.066659) | 0.032243 \/ 0.128546 (-0.096303) | 0.008768 \/ 0.075646 (-0.066878) | 0.329799 \/ 0.419271 (-0.089472) | 0.062261 \/ 0.043533 (0.018728) | 0.298112 \/ 0.255139 (0.042973) | 0.322815 \/ 0.283200 (0.039615) | 0.032348 \/ 0.141683 (-0.109335) | 1.445807 \/ 1.452155 (-0.006347) | 1.528768 \/ 1.492716 (0.036051) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.195701 \/ 0.018006 (0.177695) | 0.437042 \/ 0.000490 (0.436552) | 0.003867 \/ 0.000200 (0.003667) | 0.000080 \/ 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.026713 \/ 0.037411 (-0.010698) | 0.109548 \/ 0.014526 (0.095022) | 0.119216 \/ 0.176557 (-0.057341) | 0.178947 \/ 0.737135 (-0.558188) | 0.125224 \/ 0.296338 (-0.171114) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.400885 \/ 0.215209 (0.185676) | 3.991223 \/ 2.077655 (1.913568) | 1.818449 \/ 1.504120 (0.314329) | 1.609285 \/ 1.541195 (0.068090) | 1.666675 \/ 1.468490 (0.198184) | 0.531486 \/ 4.584777 (-4.053291) | 
3.770142 \/ 3.745712 (0.024430) | 3.057189 \/ 5.269862 (-2.212673) | 1.517491 \/ 4.565676 (-3.048186) | 0.065782 \/ 0.424275 (-0.358493) | 0.011251 \/ 0.007607 (0.003644) | 0.504277 \/ 0.226044 (0.278233) | 5.038979 \/ 2.268929 (2.770050) | 2.254717 \/ 55.444624 (-53.189908) | 1.929743 \/ 6.876477 (-4.946734) | 2.080051 \/ 2.142072 (-0.062022) | 0.656831 \/ 4.805227 (-4.148396) | 0.142860 \/ 6.500664 (-6.357804) | 0.063057 \/ 0.075469 (-0.012412) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.208819 \/ 1.841788 (-0.632969) | 14.456966 \/ 8.074308 (6.382658) | 12.839799 \/ 10.191392 (2.648407) | 0.164361 \/ 0.680424 (-0.516063) | 0.017330 \/ 0.534201 (-0.516871) | 0.397384 \/ 0.579283 (-0.181899) | 0.422704 \/ 0.434364 (-0.011660) | 0.472065 \/ 0.540337 (-0.068273) | 0.576960 \/ 1.386936 (-0.809976) |\n\n<\/details>\nPyArrow==latest\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006950 \/ 0.011353 (-0.004403) | 0.004012 \/ 0.011008 (-0.006997) | 0.076050 \/ 0.038508 (0.037542) | 0.046646 \/ 0.023109 (0.023537) | 0.353813 \/ 0.275898 (0.077915) | 0.417111 \/ 0.323480 (0.093631) | 0.005422 \/ 0.007986 (-0.002564) | 0.003356 \/ 0.004328 (-0.000972) | 0.076662 \/ 0.004250 (0.072411) | 0.055018 \/ 0.037052 (0.017966) | 0.371561 \/ 0.258489 (0.113072) | 0.410471 \/ 0.293841 (0.116630) | 0.031860 \/ 0.128546 (-0.096686) | 0.008754 \/ 0.075646 (-0.066893) | 0.083192 \/ 0.419271 (-0.336079) | 0.050479 \/ 0.043533 (0.006946) | 0.351725 \/ 0.255139 (0.096586) | 0.371596 \/ 0.283200 (0.088396) | 0.023042 \/ 0.141683 (-0.118641) | 1.480533 \/ 1.452155 (0.028379) | 1.545970 \/ 1.492716 (0.053254) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.220095 \/ 0.018006 (0.202089) | 0.441550 \/ 0.000490 (0.441061) | 0.000375 \/ 0.000200 (0.000175) | 0.000056 \/ 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029527 \/ 0.037411 (-0.007884) | 0.111645 \/ 0.014526 (0.097119) | 0.125732 \/ 0.176557 (-0.050825) | 0.177322 \/ 0.737135 (-0.559813) | 0.128620 \/ 0.296338 (-0.167718) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.432415 \/ 0.215209 (0.217206) | 4.314381 \/ 2.077655 (2.236726) | 2.079450 \/ 1.504120 (0.575331) | 1.893139 \/ 1.541195 (0.351944) | 1.951363 \/ 1.468490 (0.482873) | 0.531466 \/ 4.584777 (-4.053311) | 
3.716860 \/ 3.745712 (-0.028852) | 1.850111 \/ 5.269862 (-3.419750) | 1.100676 \/ 4.565676 (-3.465000) | 0.066247 \/ 0.424275 (-0.358028) | 0.011503 \/ 0.007607 (0.003896) | 0.537208 \/ 0.226044 (0.311164) | 5.367560 \/ 2.268929 (3.098631) | 2.543697 \/ 55.444624 (-52.900927) | 2.221670 \/ 6.876477 (-4.654806) | 2.252009 \/ 2.142072 (0.109937) | 0.658509 \/ 4.805227 (-4.146718) | 0.142345 \/ 6.500664 (-6.358319) | 0.064701 \/ 0.075469 (-0.010768) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.266442 \/ 1.841788 (-0.575346) | 15.105953 \/ 8.074308 (7.031645) | 14.288229 \/ 10.191392 (4.096837) | 0.161182 \/ 0.680424 (-0.519242) | 0.017074 \/ 0.534201 (-0.517127) | 0.399464 \/ 0.579283 (-0.179819) | 0.419459 \/ 0.434364 (-0.014905) | 0.467553 \/ 0.540337 (-0.072784) | 0.566337 \/ 1.386936 (-0.820599) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#53ac2d9662f9e5923ae7c52199eaa620d82f0043 \"CML watermark\")\n"],"created_at":1687180028000,"updated_at":1687195777000,"closed_at":1687195361000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5964","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5964","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5964.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5964.patch","merged_at":1687195361000},"body":"Fix #5925 \r\n\r\nPlus, deprecate `list_datasets`\/`inspect_dataset` in favor of `huggingface_hub.list_datasets`\/\"git clone workflow\" (downloads data files)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5964\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5964\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5963","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5963\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5963\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5963\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5963","id":1762774457,"node_id":"I_kwDODunzps5pEc25","number":5963,"title":"Got an error _pickle.PicklingError use 
Dataset.from_spark.","user":{"login":"yanzia12138","id":112800614,"node_id":"U_kgDOBrkzZg","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/112800614?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yanzia12138","html_url":"https:\/\/github.com\/yanzia12138","followers_url":"https:\/\/api.github.com\/users\/yanzia12138\/followers","following_url":"https:\/\/api.github.com\/users\/yanzia12138\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yanzia12138\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yanzia12138\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yanzia12138\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yanzia12138\/orgs","repos_url":"https:\/\/api.github.com\/users\/yanzia12138\/repos","events_url":"https:\/\/api.github.com\/users\/yanzia12138\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yanzia12138\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["i got error using method from_spark when using multi-node Spark cluster. seems could only use \"from_spark\" in local?","@lhoestq ","cc @maddiedawson it looks like there an issue with `_validate_cache_dir` ?\r\n\r\nIt looks like the function passed to mapPartitions has a reference to the Spark dataset builder, and therefore contains the SparkContext itself.\r\n\r\nI think it can be fixed by defining `create_cache_and_write_probe` outside the Spark dataset builder, and pass a `partial(create_cache_and_write_probe, cache_dir=self._cache_dir)` to `mapPartitions`","Just saw this; thanks for flagging! Your proposed solution sounds good. I can prepare a PR","@maddiedawson can you show me the demo ,so i can test in local .before your PR"],"created_at":1687152635000,"updated_at":1688003906000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":" python 3.9.2\r\nGot an error _pickle.PicklingError use Dataset.from_spark.\r\n\r\nDid the dataset import load data from spark dataframe using multi-node Spark cluster\r\ndf = spark.read.parquet(args.input_data).repartition(50)\r\nds = Dataset.from_spark(df, keep_in_memory=True,\r\n cache_dir=\"\/pnc-data\/data\/nuplan\/t5_spark\/cache_data\")\r\nds.save_to_disk(args.output_data)\r\n\r\nError : \r\n_pickle.PicklingError: Could not serialize object: RuntimeError: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transforma\r\ntion. SparkContext can only be used on the driver, not in code that it run on workers. 
For more information, see SPARK-5063.\r\n23\/06\/16 21:17:20 WARN ExecutorPodsWatchSnapshotSource: Kubernetes client has been closed (this is expected if the application is shutting down.)\r\n\r\n_Originally posted by @yanzia12138 in https:\/\/github.com\/huggingface\/datasets\/issues\/5701#issuecomment-1594674306_\r\n\r\nTraceback (most recent call last):\r\n File \"\/home\/work\/main.py\", line 100, in <module>\r\n run(args)\r\n File \"\/home\/work\/main.py\", line 80, in run\r\n ds = Dataset.from_spark(df1, keep_in_memory=True,\r\n File \"\/home\/work\/.local\/lib\/python3.9\/site-packages\/datasets\/arrow_dataset.py\", line 1281, in from_spark\r\n return SparkDatasetReader(\r\n File \"\/home\/work\/.local\/lib\/python3.9\/site-packages\/datasets\/io\/spark.py\", line 53, in read\r\n self.builder.download_and_prepare(\r\n File \"\/home\/work\/.local\/lib\/python3.9\/site-packages\/datasets\/builder.py\", line 909, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/home\/work\/.local\/lib\/python3.9\/site-packages\/datasets\/builder.py\", line 1004, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"\/home\/work\/.local\/lib\/python3.9\/site-packages\/datasets\/packaged_modules\/spark\/spark.py\", line 254, in _prepare_split\r\n self._validate_cache_dir()\r\n File \"\/home\/work\/.local\/lib\/python3.9\/site-packages\/datasets\/packaged_modules\/spark\/spark.py\", line 122, in _validate_cache_dir\r\n self._spark.sparkContext.parallelize(range(1), 1).mapPartitions(create_cache_and_write_probe).collect()\r\n File \"\/home\/work\/.local\/lib\/python3.9\/site-packages\/pyspark\/rdd.py\", line 950, in collect\r\n sock_info = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())\r\n File \"\/home\/work\/.local\/lib\/python3.9\/site-packages\/pyspark\/rdd.py\", line 2951, in _jrdd\r\n wrapped_func = _wrap_function(self.ctx, self.func, self._prev_jrdd_deserializer,\r\n File \"\/home\/work\/.local\/lib\/python3.9\/site-packages\/pyspark\/rdd.py\", line 2830, in _wrap_function\r\n pickled_command, broadcast_vars, env, includes = _prepare_for_python_RDD(sc, command)\r\n File \"\/home\/work\/.local\/lib\/python3.9\/site-packages\/pyspark\/rdd.py\", line 2816, in _prepare_for_python_RDD\r\n pickled_command = ser.dumps(command)\r\n File \"\/home\/work\/.local\/lib\/python3.9\/site-packages\/pyspark\/serializers.py\", line 447, in dumps\r\n raise pickle.PicklingError(msg)\r\n_pickle.PicklingError: Could not serialize object: RuntimeError: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers. 
For more information, see SPARK-5063.\r\n23\/06\/19 13:51:21 WARN ExecutorPodsWatchSnapshotSource: Kubernetes client has been closed (this is expected if the application is shutting down.)\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5963\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5963\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5962","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5962\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5962\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5962\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5962","id":1761589882,"node_id":"I_kwDODunzps5o_7p6","number":5962,"title":"Issue with train_test_split maintaining the same underlying PyArrow Table","user":{"login":"Oziel14","id":70730520,"node_id":"MDQ6VXNlcjcwNzMwNTIw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/70730520?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Oziel14","html_url":"https:\/\/github.com\/Oziel14","followers_url":"https:\/\/api.github.com\/users\/Oziel14\/followers","following_url":"https:\/\/api.github.com\/users\/Oziel14\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Oziel14\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Oziel14\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Oziel14\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Oziel14\/orgs","repos_url":"https:\/\/api.github.com\/users\/Oziel14\/repos","events_url":"https:\/\/api.github.com\/users\/Oziel14\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Oziel14\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1686968398000,"updated_at":1686968398000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nI've been using the train_test_split method in the datasets module to split my HuggingFace Dataset into separate training, validation, and testing subsets. However, I've noticed an issue where the split datasets appear to maintain the same underlying PyArrow Table.\n\n### Steps to reproduce the bug\n\n1. Load any dataset ```dataset = load_dataset(\"lhoestq\/demo1\")``` \r\n2. Try the next code:\r\n```python\r\nfrom datasets import Dataset, DatasetDict\r\n\r\ntrain_size = 0.6\r\n\r\nsplit_train = dataset[\"train\"].train_test_split(\r\n train_size=train_size,\r\n)\r\n\r\nseparate_dataset_dict = DatasetDict({\r\n \"train\": split_train[\"train\"],\r\n \"test\": split_train[\"test\"],\r\n})\r\n```\r\n3. The next code ```print(separate_dataset_dict)``` when printing the dataset it gives the indication that they have 3 and 2 rows respectively.\r\n4. 
But the next code: \r\n ```python\r\nprint(len(separate_dataset_dict[\"train\"].data['id']))\r\nprint(len(separate_dataset_dict[\"test\"].data['id'])) \r\n```\r\n\r\n Indicates that both tables still have 5 rows.\n\n### Expected behavior\n\nHowever, I've noticed that train_test_split[\"train\"].data, test_val_split[\"train\"].data, and test_val_split[\"test\"].data are identical, suggesting that they all point to the same underlying PyArrow Table. This means that the split datasets are not independent, as I expected.\r\n\r\nI believe this is a bug in the train_test_split implementation, as I would expect this function to return datasets with separate underlying PyArrow Tables. Could you please help me understand if this is expected behavior, or if there's a workaround to create truly independent split datasets?\r\n\r\nI would appreciate any assistance with this issue. Thank you.\n\n### Environment info\n\nI tried in Colab:\r\n\r\n- `datasets` version: 2.13.0\r\n- Platform: Windows-10-10.0.22621-SP0\r\n- Python version: 3.10.11\r\n- Huggingface_hub version: 0.14.1\r\n- PyArrow version: 12.0.0\r\n- Pandas version: 2.0.1\r\n\r\nand my PC:\r\n\r\n- `datasets` version: 2.13.0\r\n- Platform: Linux-5.15.107+-x86_64-with-glibc2.31\r\n- Python version: 3.10.12\r\n- Huggingface_hub version: 0.15.1\r\n- PyArrow version: 9.0.0\r\n- Pandas version: 1.5.3","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5962\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5962\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5961","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5961\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5961\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5961\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5961","id":1758525111,"node_id":"I_kwDODunzps5o0Pa3","number":5961,"title":"IterableDataset: split by node and map may preprocess samples that will be skipped 
anyway","user":{"login":"johnchienbronci","id":27708347,"node_id":"MDQ6VXNlcjI3NzA4MzQ3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27708347?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/johnchienbronci","html_url":"https:\/\/github.com\/johnchienbronci","followers_url":"https:\/\/api.github.com\/users\/johnchienbronci\/followers","following_url":"https:\/\/api.github.com\/users\/johnchienbronci\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/johnchienbronci\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/johnchienbronci\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/johnchienbronci\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/johnchienbronci\/orgs","repos_url":"https:\/\/api.github.com\/users\/johnchienbronci\/repos","events_url":"https:\/\/api.github.com\/users\/johnchienbronci\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/johnchienbronci\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Does \"number of shards\" refer to the total number of data?\r\n\r\nmy config:\r\nnproc_per_node=2\r\nds=ds['train'] = load_dataset(streaming=True).take(50000)\r\n\r\nI'm test again: in prepare_data(), data have the same for each GPU\r\n","The number of shards is `ds.n_shards`. It corresponds generally to the number of files the dataset is made of, to be able to distribute to several nodes.\r\n\r\n**You don't end up with the same data per GPU**. But all the samples are going through your preprocessing function you pass to map. They are just skipped afterwards to only keep 1 sample out of n(GPUs)","For each GPU, although see the same data in prepare_data(), the actual training data will not be the same in the end. \r\nIs my understanding correct?\r\n\r\nWhere can I print the actual training data for each GPU?","> For each GPU, although see the same data in prepare_data(), the actual training data will not be the same in the end.\r\nIs my understanding correct?\r\n\r\nYes exactly :)\r\n\r\n> Where can I print the actual training data for each GPU?\r\n\r\nYou should call print in the data_collator","I print out n_shards, and under multiple GPUs, this value is always 1.\r\nIs this value correct?","Yes it's correct, and it explains why you always have the same data passed to your map function (the data can't be split).\r\n\r\nBut after being passed to `map`, each GPU keeps one example out of n(GPUs) so that you don't end up with duplicate data across GPUs","> > For each GPU, although see the same data in prepare_data(), the actual training data will not be the same in the end.\r\n> > Is my understanding correct?\r\n> \r\n> Yes exactly :)\r\n> \r\n> > Where can I print the actual training data for each GPU?\r\n> \r\n> You should call print in the data_collator\r\n\r\nOK, when printing the train data in the data collator, each GPU sees different data.\r\n\r\nThanks for your reply"],"created_at":1686824950000,"updated_at":1687224640000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":" There are two ways an iterable dataset can be split by node:\r\n1. if the number of shards is a factor of number of GPUs: in that case the shards are evenly distributed per GPU\r\n2. otherwise, each GPU iterate on the data and at the end keeps 1 sample out of n(GPUs) - skipping the others.\r\n\r\nIn case 2. 
it's therefore possible to have the same examples passed to `prepare_dataset` for each GPU.\r\n\r\nThis doesn't sound optimized though, because it runs the preprocessing on samples that won't be used in the end.\r\n\r\nCould you open a new issue so that we can discuss this and find a solution?\r\n\r\n_Originally posted by @lhoestq in https:\/\/github.com\/huggingface\/datasets\/issues\/5360#issuecomment-1592729051_\r\n ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5961\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5961\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5959","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5959\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5959\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5959\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5959","id":1757397507,"node_id":"I_kwDODunzps5ov8ID","number":5959,"title":"read metric glue.py from local file ","user":{"login":"JiazhaoLi","id":31148397,"node_id":"MDQ6VXNlcjMxMTQ4Mzk3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/31148397?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/JiazhaoLi","html_url":"https:\/\/github.com\/JiazhaoLi","followers_url":"https:\/\/api.github.com\/users\/JiazhaoLi\/followers","following_url":"https:\/\/api.github.com\/users\/JiazhaoLi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/JiazhaoLi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/JiazhaoLi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/JiazhaoLi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/JiazhaoLi\/orgs","repos_url":"https:\/\/api.github.com\/users\/JiazhaoLi\/repos","events_url":"https:\/\/api.github.com\/users\/JiazhaoLi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/JiazhaoLi\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Sorry, I solved this by calling `evaluate.load('glue_metric.py','sst-2')`\r\n"],"created_at":1686765575000,"updated_at":1686765856000,"closed_at":1686765856000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\r\n\r\nCurrently, the server is off-line. I am using the glue metric from the local file downloaded from the hub. \r\nI downloaded \/ cached datasets using `load_dataset('glue','sst2', cache_dir='\/xxx')` to cache them, and then in off-line mode I use `load_dataset('xxx\/glue.py','sst2', cache_dir='\/xxx')`. I can successfully reuse cached datasets.\r\n\r\nMy problem is about the load_metric. 
\r\nWhen I run `load_dataset('xxx\/glue_metric.py','sst2',cache_dir='\/xxx')` , it returns \r\n\r\n` File \"xx\/lib64\/python3.9\/site-packages\/datasets\/utils\/deprecation_utils.py\", line 46, in wrapper\r\n return deprecated_function(*args, **kwargs)\r\n File \"xx\/\/lib64\/python3.9\/site-packages\/datasets\/load.py\", line 1392, in load_metric\r\n metric = metric_cls(\r\nTypeError: 'NoneType' object is not callable`\r\n\r\nThanks in advance for help! \r\n### Steps to reproduce the bug\r\n\r\nN\/A\r\n\r\n### Expected behavior\r\n\r\nN\/A\r\n\r\n### Environment info\r\n\r\n`datasets == 2.12.0`","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5959\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5959\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5958","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5958\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5958\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5958\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5958","id":1757265971,"node_id":"PR_kwDODunzps5TA3__","number":5958,"title":"set dev version","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_5958). All of your documentation changes will be reflected on that endpoint.","
<details>\n<summary>
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006232 \/ 0.011353 (-0.005121) | 0.003788 \/ 0.011008 (-0.007220) | 0.100014 \/ 0.038508 (0.061506) | 0.036488 \/ 0.023109 (0.013379) | 0.306255 \/ 0.275898 (0.030357) | 0.363337 \/ 0.323480 (0.039857) | 0.004765 \/ 0.007986 (-0.003221) | 0.002935 \/ 0.004328 (-0.001394) | 0.078897 \/ 0.004250 (0.074647) | 0.052221 \/ 0.037052 (0.015169) | 0.315169 \/ 0.258489 (0.056680) | 0.353050 \/ 0.293841 (0.059209) | 0.029059 \/ 0.128546 (-0.099488) | 0.008599 \/ 0.075646 (-0.067047) | 0.318770 \/ 0.419271 (-0.100502) | 0.046631 \/ 0.043533 (0.003098) | 0.303728 \/ 0.255139 (0.048589) | 0.332379 \/ 0.283200 (0.049180) | 0.021164 \/ 0.141683 (-0.120519) | 1.576963 \/ 1.452155 (0.124808) | 1.629575 \/ 1.492716 (0.136859) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.204246 \/ 0.018006 (0.186240) | 0.426600 \/ 0.000490 (0.426110) | 0.004336 \/ 0.000200 (0.004136) | 0.000082 \/ 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.024039 \/ 0.037411 (-0.013372) | 0.098240 \/ 0.014526 (0.083715) | 0.108889 \/ 0.176557 (-0.067668) | 0.170827 \/ 0.737135 (-0.566308) | 0.111288 \/ 0.296338 (-0.185051) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.418103 \/ 0.215209 (0.202894) | 4.190759 \/ 2.077655 (2.113104) | 1.875978 \/ 1.504120 (0.371858) | 1.679198 \/ 1.541195 (0.138003) | 1.737965 \/ 1.468490 (0.269474) | 0.556660 \/ 4.584777 (-4.028117) | 
3.413800 \/ 3.745712 (-0.331912) | 3.004999 \/ 5.269862 (-2.264862) | 1.464030 \/ 4.565676 (-3.101647) | 0.067338 \/ 0.424275 (-0.356937) | 0.011486 \/ 0.007607 (0.003879) | 0.522589 \/ 0.226044 (0.296544) | 5.214653 \/ 2.268929 (2.945724) | 2.316903 \/ 55.444624 (-53.127722) | 1.991941 \/ 6.876477 (-4.884536) | 2.110601 \/ 2.142072 (-0.031471) | 0.665400 \/ 4.805227 (-4.139828) | 0.135755 \/ 6.500664 (-6.364910) | 0.065980 \/ 0.075469 (-0.009489) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.197269 \/ 1.841788 (-0.644519) | 14.085205 \/ 8.074308 (6.010897) | 14.083360 \/ 10.191392 (3.891968) | 0.148054 \/ 0.680424 (-0.532369) | 0.016548 \/ 0.534201 (-0.517653) | 0.371538 \/ 0.579283 (-0.207745) | 0.391068 \/ 0.434364 (-0.043296) | 0.430589 \/ 0.540337 (-0.109748) | 0.529319 \/ 1.386936 (-0.857617) |\n\n<\/details>\nPyArrow==latest\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006214 \/ 0.011353 (-0.005138) | 0.003846 \/ 0.011008 (-0.007162) | 0.078559 \/ 0.038508 (0.040051) | 0.037855 \/ 0.023109 (0.014745) | 0.437479 \/ 0.275898 (0.161581) | 0.497588 \/ 0.323480 (0.174108) | 0.003491 \/ 0.007986 (-0.004494) | 0.003900 \/ 0.004328 (-0.000428) | 0.078443 \/ 0.004250 (0.074193) | 0.048019 \/ 0.037052 (0.010967) | 0.452076 \/ 0.258489 (0.193587) | 0.494597 \/ 0.293841 (0.200756) | 0.028127 \/ 0.128546 (-0.100419) | 0.008549 \/ 0.075646 (-0.067098) | 0.082977 \/ 0.419271 (-0.336295) | 0.043133 \/ 0.043533 (-0.000400) | 0.441342 \/ 0.255139 (0.186203) | 0.464339 \/ 0.283200 (0.181139) | 0.020110 \/ 0.141683 (-0.121573) | 1.485181 \/ 1.452155 (0.033026) | 1.532019 \/ 1.492716 (0.039302) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.228014 \/ 0.018006 (0.210007) | 0.416887 \/ 0.000490 (0.416397) | 0.001133 \/ 0.000200 (0.000933) | 0.000108 \/ 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.026452 \/ 0.037411 (-0.010960) | 0.104328 \/ 0.014526 (0.089802) | 0.110045 \/ 0.176557 (-0.066511) | 0.164725 \/ 0.737135 (-0.572410) | 0.116348 \/ 0.296338 (-0.179990) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.483502 \/ 0.215209 (0.268293) | 4.829814 \/ 2.077655 (2.752159) | 2.505271 \/ 1.504120 (1.001151) | 2.305819 \/ 1.541195 (0.764624) | 2.348633 \/ 1.468490 (0.880143) | 0.562316 \/ 4.584777 (-4.022461) | 
3.426425 \/ 3.745712 (-0.319287) | 1.737934 \/ 5.269862 (-3.531927) | 1.042616 \/ 4.565676 (-3.523061) | 0.068088 \/ 0.424275 (-0.356187) | 0.011735 \/ 0.007607 (0.004128) | 0.586339 \/ 0.226044 (0.360295) | 5.861283 \/ 2.268929 (3.592354) | 2.953956 \/ 55.444624 (-52.490668) | 2.626611 \/ 6.876477 (-4.249865) | 2.687978 \/ 2.142072 (0.545906) | 0.672748 \/ 4.805227 (-4.132479) | 0.137231 \/ 6.500664 (-6.363433) | 0.068149 \/ 0.075469 (-0.007320) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.323139 \/ 1.841788 (-0.518649) | 14.503102 \/ 8.074308 (6.428794) | 14.092102 \/ 10.191392 (3.900710) | 0.165395 \/ 0.680424 (-0.515028) | 0.016898 \/ 0.534201 (-0.517303) | 0.366905 \/ 0.579283 (-0.212378) | 0.396671 \/ 0.434364 (-0.037692) | 0.421831 \/ 0.540337 (-0.118506) | 0.514075 \/ 1.386936 (-0.872861) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#9d4238c132dd44b9a6e1dfe7101228bdeb538d57 \"CML watermark\")\n","
<details>\n<summary>
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007778 \/ 0.011353 (-0.003575) | 0.004624 \/ 0.011008 (-0.006384) | 0.123426 \/ 0.038508 (0.084918) | 0.052209 \/ 0.023109 (0.029100) | 0.341084 \/ 0.275898 (0.065186) | 0.421905 \/ 0.323480 (0.098425) | 0.005768 \/ 0.007986 (-0.002217) | 0.003647 \/ 0.004328 (-0.000682) | 0.085569 \/ 0.004250 (0.081319) | 0.070473 \/ 0.037052 (0.033421) | 0.356626 \/ 0.258489 (0.098136) | 0.407413 \/ 0.293841 (0.113572) | 0.038800 \/ 0.128546 (-0.089746) | 0.010289 \/ 0.075646 (-0.065357) | 0.462707 \/ 0.419271 (0.043436) | 0.060390 \/ 0.043533 (0.016858) | 0.349805 \/ 0.255139 (0.094666) | 0.355288 \/ 0.283200 (0.072088) | 0.025364 \/ 0.141683 (-0.116318) | 1.745720 \/ 1.452155 (0.293565) | 1.852764 \/ 1.492716 (0.360048) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.290582 \/ 0.018006 (0.272576) | 0.480044 \/ 0.000490 (0.479554) | 0.007658 \/ 0.000200 (0.007458) | 0.000100 \/ 0.000054 (0.000046) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.031529 \/ 0.037411 (-0.005882) | 0.130441 \/ 0.014526 (0.115915) | 0.147653 \/ 0.176557 (-0.028904) | 0.215935 \/ 0.737135 (-0.521200) | 0.149871 \/ 0.296338 (-0.146467) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.461662 \/ 0.215209 (0.246453) | 4.570353 \/ 2.077655 (2.492698) | 2.104416 \/ 1.504120 (0.600297) | 1.936974 \/ 1.541195 (0.395779) | 2.139167 \/ 1.468490 (0.670677) | 0.645100 \/ 4.584777 (-3.939677) | 
4.361536 \/ 3.745712 (0.615824) | 2.155960 \/ 5.269862 (-3.113902) | 1.207854 \/ 4.565676 (-3.357822) | 0.080162 \/ 0.424275 (-0.344113) | 0.014265 \/ 0.007607 (0.006658) | 0.606294 \/ 0.226044 (0.380250) | 5.928093 \/ 2.268929 (3.659165) | 2.701811 \/ 55.444624 (-52.742813) | 2.344490 \/ 6.876477 (-4.531987) | 2.435997 \/ 2.142072 (0.293925) | 0.761020 \/ 4.805227 (-4.044207) | 0.165860 \/ 6.500664 (-6.334804) | 0.075666 \/ 0.075469 (0.000197) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.427318 \/ 1.841788 (-0.414469) | 17.327468 \/ 8.074308 (9.253160) | 15.323065 \/ 10.191392 (5.131673) | 0.178518 \/ 0.680424 (-0.501905) | 0.020888 \/ 0.534201 (-0.513313) | 0.497891 \/ 0.579283 (-0.081393) | 0.487717 \/ 0.434364 (0.053353) | 0.581430 \/ 0.540337 (0.041093) | 0.703430 \/ 1.386936 (-0.683506) |\n\n<\/details>\nPyArrow==latest\n\n
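<details>\n<summary>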
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007954 \/ 0.011353 (-0.003399) | 0.004442 \/ 0.011008 (-0.006566) | 0.090950 \/ 0.038508 (0.052442) | 0.054282 \/ 0.023109 (0.031173) | 0.424474 \/ 0.275898 (0.148576) | 0.531770 \/ 0.323480 (0.208290) | 0.004492 \/ 0.007986 (-0.003493) | 0.004745 \/ 0.004328 (0.000416) | 0.088213 \/ 0.004250 (0.083962) | 0.063967 \/ 0.037052 (0.026914) | 0.454256 \/ 0.258489 (0.195767) | 0.502870 \/ 0.293841 (0.209029) | 0.038203 \/ 0.128546 (-0.090343) | 0.010327 \/ 0.075646 (-0.065319) | 0.097809 \/ 0.419271 (-0.321463) | 0.062136 \/ 0.043533 (0.018604) | 0.426148 \/ 0.255139 (0.171009) | 0.467812 \/ 0.283200 (0.184612) | 0.029148 \/ 0.141683 (-0.112535) | 1.762307 \/ 1.452155 (0.310152) | 1.814238 \/ 1.492716 (0.321521) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.195676 \/ 0.018006 (0.177670) | 0.475382 \/ 0.000490 (0.474892) | 0.003070 \/ 0.000200 (0.002870) | 0.000112 \/ 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.033945 \/ 0.037411 (-0.003466) | 0.134666 \/ 0.014526 (0.120140) | 0.147585 \/ 0.176557 (-0.028971) | 0.209472 \/ 0.737135 (-0.527664) | 0.154471 \/ 0.296338 (-0.141867) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.518132 \/ 0.215209 (0.302923) | 5.103423 \/ 2.077655 (3.025768) | 2.565207 \/ 1.504120 (1.061087) | 2.389454 \/ 1.541195 (0.848259) | 2.391706 \/ 1.468490 (0.923216) | 0.606463 \/ 4.584777 (-3.978314) | 
4.392227 \/ 3.745712 (0.646515) | 2.067121 \/ 5.269862 (-3.202741) | 1.217551 \/ 4.565676 (-3.348125) | 0.074304 \/ 0.424275 (-0.349971) | 0.013418 \/ 0.007607 (0.005811) | 0.623327 \/ 0.226044 (0.397282) | 6.340233 \/ 2.268929 (4.071304) | 3.153948 \/ 55.444624 (-52.290677) | 2.824548 \/ 6.876477 (-4.051929) | 2.938402 \/ 2.142072 (0.796329) | 0.774305 \/ 4.805227 (-4.030922) | 0.170681 \/ 6.500664 (-6.329983) | 0.075895 \/ 0.075469 (0.000426) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.473491 \/ 1.841788 (-0.368296) | 17.372294 \/ 8.074308 (9.297986) | 15.550201 \/ 10.191392 (5.358809) | 0.191402 \/ 0.680424 (-0.489022) | 0.021401 \/ 0.534201 (-0.512800) | 0.484377 \/ 0.579283 (-0.094906) | 0.488844 \/ 0.434364 (0.054480) | 0.563336 \/ 0.540337 (0.022999) | 0.694210 \/ 1.386936 (-0.692726) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#b96da7f51d81e52d7b587685f820b5e55f71e07d \"CML watermark\")\n"],"created_at":1686759994000,"updated_at":1686760495000,"closed_at":1686760011000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5958","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5958","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5958.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5958.patch","merged_at":1686760011000},"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5958\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5958\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5957","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5957\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5957\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5957\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5957","id":1757252466,"node_id":"PR_kwDODunzps5TA1EB","number":5957,"title":"Release: 
2.13.0","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006498 \/ 0.011353 (-0.004855) | 0.003970 \/ 0.011008 (-0.007038) | 0.099242 \/ 0.038508 (0.060734) | 0.044363 \/ 0.023109 (0.021254) | 0.313900 \/ 0.275898 (0.038002) | 0.386562 \/ 0.323480 (0.063082) | 0.003837 \/ 0.007986 (-0.004149) | 0.004203 \/ 0.004328 (-0.000125) | 0.076191 \/ 0.004250 (0.071940) | 0.058823 \/ 0.037052 (0.021771) | 0.333838 \/ 0.258489 (0.075349) | 0.368235 \/ 0.293841 (0.074394) | 0.030774 \/ 0.128546 (-0.097772) | 0.008787 \/ 0.075646 (-0.066860) | 0.326474 \/ 0.419271 (-0.092798) | 0.050903 \/ 0.043533 (0.007370) | 0.303928 \/ 0.255139 (0.048789) | 0.321532 \/ 0.283200 (0.038333) | 0.024162 \/ 0.141683 (-0.117520) | 1.479662 \/ 1.452155 (0.027507) | 1.520300 \/ 1.492716 (0.027584) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.212403 \/ 0.018006 (0.194397) | 0.448019 \/ 0.000490 (0.447529) | 0.005465 \/ 0.000200 (0.005265) | 0.000388 \/ 0.000054 (0.000334) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.027533 \/ 0.037411 (-0.009878) | 0.117477 \/ 0.014526 (0.102952) | 0.121182 \/ 0.176557 (-0.055374) | 0.181150 \/ 0.737135 (-0.555985) | 0.128557 \/ 0.296338 (-0.167782) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.397763 \/ 0.215209 (0.182554) | 3.959460 \/ 2.077655 (1.881805) | 1.822057 \/ 1.504120 (0.317937) | 1.627020 \/ 1.541195 (0.085826) | 1.695394 \/ 1.468490 (0.226904) | 0.536848 \/ 4.584777 (-4.047929) | 
3.765205 \/ 3.745712 (0.019493) | 3.196300 \/ 5.269862 (-2.073561) | 1.623583 \/ 4.565676 (-2.942094) | 0.065823 \/ 0.424275 (-0.358452) | 0.011062 \/ 0.007607 (0.003455) | 0.500428 \/ 0.226044 (0.274384) | 5.008816 \/ 2.268929 (2.739888) | 2.314660 \/ 55.444624 (-53.129965) | 2.007429 \/ 6.876477 (-4.869047) | 2.141438 \/ 2.142072 (-0.000635) | 0.656697 \/ 4.805227 (-4.148530) | 0.143555 \/ 6.500664 (-6.357109) | 0.063928 \/ 0.075469 (-0.011541) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.169038 \/ 1.841788 (-0.672750) | 15.027186 \/ 8.074308 (6.952878) | 13.571484 \/ 10.191392 (3.380092) | 0.166437 \/ 0.680424 (-0.513986) | 0.017656 \/ 0.534201 (-0.516545) | 0.397725 \/ 0.579283 (-0.181558) | 0.451019 \/ 0.434364 (0.016655) | 0.469134 \/ 0.540337 (-0.071203) | 0.575885 \/ 1.386936 (-0.811051) |\n\n<\/details>\nPyArrow==latest\n\n
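<details>\n<summary>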
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006887 \/ 0.011353 (-0.004465) | 0.004166 \/ 0.011008 (-0.006842) | 0.077137 \/ 0.038508 (0.038629) | 0.055631 \/ 0.023109 (0.032522) | 0.397658 \/ 0.275898 (0.121760) | 0.473981 \/ 0.323480 (0.150502) | 0.005365 \/ 0.007986 (-0.002621) | 0.003401 \/ 0.004328 (-0.000928) | 0.076481 \/ 0.004250 (0.072231) | 0.056014 \/ 0.037052 (0.018961) | 0.415253 \/ 0.258489 (0.156764) | 0.457620 \/ 0.293841 (0.163779) | 0.031850 \/ 0.128546 (-0.096696) | 0.008869 \/ 0.075646 (-0.066777) | 0.083475 \/ 0.419271 (-0.335796) | 0.049232 \/ 0.043533 (0.005699) | 0.392947 \/ 0.255139 (0.137808) | 0.417243 \/ 0.283200 (0.134043) | 0.024554 \/ 0.141683 (-0.117129) | 1.508081 \/ 1.452155 (0.055926) | 1.541845 \/ 1.492716 (0.049129) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.228470 \/ 0.018006 (0.210464) | 0.450933 \/ 0.000490 (0.450443) | 0.001508 \/ 0.000200 (0.001308) | 0.000083 \/ 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030189 \/ 0.037411 (-0.007222) | 0.118853 \/ 0.014526 (0.104327) | 0.124809 \/ 0.176557 (-0.051747) | 0.175066 \/ 0.737135 (-0.562069) | 0.129819 \/ 0.296338 (-0.166519) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.451830 \/ 0.215209 (0.236621) | 4.505352 \/ 2.077655 (2.427698) | 2.309303 \/ 1.504120 (0.805183) | 2.120983 \/ 1.541195 (0.579789) | 2.198808 \/ 1.468490 (0.730317) | 0.543836 \/ 4.584777 (-4.040940) | 
3.836650 \/ 3.745712 (0.090938) | 1.872293 \/ 5.269862 (-3.397568) | 1.122335 \/ 4.565676 (-3.443342) | 0.067463 \/ 0.424275 (-0.356812) | 0.012143 \/ 0.007607 (0.004536) | 0.553674 \/ 0.226044 (0.327630) | 5.572101 \/ 2.268929 (3.303173) | 2.772151 \/ 55.444624 (-52.672473) | 2.451557 \/ 6.876477 (-4.424920) | 2.521241 \/ 2.142072 (0.379169) | 0.665799 \/ 4.805227 (-4.139428) | 0.143842 \/ 6.500664 (-6.356822) | 0.065373 \/ 0.075469 (-0.010096) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.271013 \/ 1.841788 (-0.570775) | 15.290054 \/ 8.074308 (7.215746) | 14.807044 \/ 10.191392 (4.615652) | 0.163767 \/ 0.680424 (-0.516657) | 0.017383 \/ 0.534201 (-0.516818) | 0.393046 \/ 0.579283 (-0.186237) | 0.423056 \/ 0.434364 (-0.011308) | 0.459193 \/ 0.540337 (-0.081145) | 0.559964 \/ 1.386936 (-0.826972) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#011b75f044ef7fa6b8981ef3496615296aeb315b \"CML watermark\")\n","_The documentation is not available anymore as the PR was closed or merged._","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006112 \/ 0.011353 (-0.005241) | 0.003712 \/ 0.011008 (-0.007297) | 0.099996 \/ 0.038508 (0.061488) | 0.037526 \/ 0.023109 (0.014417) | 0.305834 \/ 0.275898 (0.029936) | 0.361368 \/ 0.323480 (0.037888) | 0.004849 \/ 0.007986 (-0.003136) | 0.002912 \/ 0.004328 (-0.001417) | 0.077729 \/ 0.004250 (0.073479) | 0.053203 \/ 0.037052 (0.016151) | 0.318088 \/ 0.258489 (0.059599) | 0.371745 \/ 0.293841 (0.077904) | 0.029384 \/ 0.128546 (-0.099162) | 0.008504 \/ 0.075646 (-0.067142) | 0.318472 \/ 0.419271 (-0.100799) | 0.046043 \/ 0.043533 (0.002510) | 0.310418 \/ 0.255139 (0.055279) | 0.335044 \/ 0.283200 (0.051844) | 0.020364 \/ 0.141683 (-0.121319) | 1.503201 \/ 1.452155 (0.051047) | 1.556408 \/ 1.492716 (0.063692) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.210245 \/ 0.018006 (0.192239) | 0.418918 \/ 0.000490 (0.418428) | 0.002552 \/ 0.000200 (0.002352) | 0.000084 \/ 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.022295 \/ 0.037411 (-0.015116) | 0.099534 \/ 0.014526 (0.085008) | 0.106432 \/ 0.176557 (-0.070124) | 0.165110 \/ 0.737135 (-0.572026) | 0.109851 \/ 0.296338 (-0.186488) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.423947 \/ 0.215209 (0.208738) | 4.232978 \/ 2.077655 (2.155323) | 2.004849 \/ 1.504120 (0.500729) | 1.814345 \/ 1.541195 (0.273151) | 1.809192 \/ 1.468490 (0.340702) | 0.561146 \/ 4.584777 (-4.023631) | 
3.385043 \/ 3.745712 (-0.360669) | 1.708265 \/ 5.269862 (-3.561597) | 1.030290 \/ 4.565676 (-3.535387) | 0.067095 \/ 0.424275 (-0.357180) | 0.011052 \/ 0.007607 (0.003445) | 0.522416 \/ 0.226044 (0.296371) | 5.207003 \/ 2.268929 (2.938075) | 2.367067 \/ 55.444624 (-53.077558) | 1.998705 \/ 6.876477 (-4.877772) | 2.068633 \/ 2.142072 (-0.073439) | 0.672396 \/ 4.805227 (-4.132831) | 0.135818 \/ 6.500664 (-6.364846) | 0.065229 \/ 0.075469 (-0.010240) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.187079 \/ 1.841788 (-0.654709) | 13.893153 \/ 8.074308 (5.818845) | 13.951328 \/ 10.191392 (3.759936) | 0.142519 \/ 0.680424 (-0.537905) | 0.016546 \/ 0.534201 (-0.517655) | 0.364008 \/ 0.579283 (-0.215275) | 0.385957 \/ 0.434364 (-0.048407) | 0.425218 \/ 0.540337 (-0.115120) | 0.519586 \/ 1.386936 (-0.867350) |\n\n<\/details>\nPyArrow==latest\n\n
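<details>\n<summary>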
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.005914 \/ 0.011353 (-0.005439) | 0.003619 \/ 0.011008 (-0.007389) | 0.077806 \/ 0.038508 (0.039298) | 0.037254 \/ 0.023109 (0.014144) | 0.378976 \/ 0.275898 (0.103078) | 0.433620 \/ 0.323480 (0.110140) | 0.003291 \/ 0.007986 (-0.004694) | 0.004523 \/ 0.004328 (0.000194) | 0.077604 \/ 0.004250 (0.073353) | 0.047493 \/ 0.037052 (0.010441) | 0.396027 \/ 0.258489 (0.137538) | 0.453345 \/ 0.293841 (0.159504) | 0.028170 \/ 0.128546 (-0.100376) | 0.008431 \/ 0.075646 (-0.067215) | 0.083985 \/ 0.419271 (-0.335286) | 0.045149 \/ 0.043533 (0.001617) | 0.369364 \/ 0.255139 (0.114225) | 0.407191 \/ 0.283200 (0.123991) | 0.024033 \/ 0.141683 (-0.117649) | 1.516838 \/ 1.452155 (0.064683) | 1.564260 \/ 1.492716 (0.071544) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.200848 \/ 0.018006 (0.182842) | 0.407818 \/ 0.000490 (0.407328) | 0.003971 \/ 0.000200 (0.003771) | 0.000077 \/ 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025033 \/ 0.037411 (-0.012378) | 0.103585 \/ 0.014526 (0.089059) | 0.108741 \/ 0.176557 (-0.067816) | 0.161061 \/ 0.737135 (-0.576075) | 0.112763 \/ 0.296338 (-0.183576) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.479913 \/ 0.215209 (0.264704) | 4.801904 \/ 2.077655 (2.724249) | 2.511433 \/ 1.504120 (1.007313) | 2.307523 \/ 1.541195 (0.766328) | 2.338343 \/ 1.468490 (0.869853) | 0.557731 \/ 4.584777 (-4.027046) | 
3.386261 \/ 3.745712 (-0.359451) | 2.999978 \/ 5.269862 (-2.269883) | 1.463058 \/ 4.565676 (-3.102619) | 0.067645 \/ 0.424275 (-0.356630) | 0.011224 \/ 0.007607 (0.003617) | 0.596854 \/ 0.226044 (0.370810) | 5.940946 \/ 2.268929 (3.672017) | 2.980194 \/ 55.444624 (-52.464430) | 2.634961 \/ 6.876477 (-4.241516) | 2.648160 \/ 2.142072 (0.506088) | 0.669728 \/ 4.805227 (-4.135499) | 0.135536 \/ 6.500664 (-6.365128) | 0.066865 \/ 0.075469 (-0.008604) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.287151 \/ 1.841788 (-0.554637) | 14.491681 \/ 8.074308 (6.417373) | 14.185752 \/ 10.191392 (3.994360) | 0.129391 \/ 0.680424 (-0.551032) | 0.016650 \/ 0.534201 (-0.517551) | 0.380111 \/ 0.579283 (-0.199172) | 0.392877 \/ 0.434364 (-0.041487) | 0.439402 \/ 0.540337 (-0.100935) | 0.530865 \/ 1.386936 (-0.856071) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#9aaee6fd0b2bcbe18e4829602084bcd83d669c5e \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.011446 \/ 0.011353 (0.000093) | 0.006623 \/ 0.011008 (-0.004386) | 0.131915 \/ 0.038508 (0.093407) | 0.047364 \/ 0.023109 (0.024255) | 0.369203 \/ 0.275898 (0.093305) | 0.451509 \/ 0.323480 (0.128029) | 0.006265 \/ 0.007986 (-0.001720) | 0.004072 \/ 0.004328 (-0.000257) | 0.098626 \/ 0.004250 (0.094375) | 0.079523 \/ 0.037052 (0.042470) | 0.406038 \/ 0.258489 (0.147549) | 0.450564 \/ 0.293841 (0.156723) | 0.050793 \/ 0.128546 (-0.077753) | 0.014667 \/ 0.075646 (-0.060979) | 0.401359 \/ 0.419271 (-0.017913) | 0.072299 \/ 0.043533 (0.028767) | 0.404456 \/ 0.255139 (0.149317) | 0.396223 \/ 0.283200 (0.113023) | 0.037048 \/ 0.141683 (-0.104635) | 1.869123 \/ 1.452155 (0.416968) | 1.953621 \/ 1.492716 (0.460905) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.237246 \/ 0.018006 (0.219240) | 0.533207 \/ 0.000490 (0.532717) | 0.007392 \/ 0.000200 (0.007192) | 0.000117 \/ 0.000054 (0.000062) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029458 \/ 0.037411 (-0.007954) | 0.112438 \/ 0.014526 (0.097912) | 0.139115 \/ 0.176557 (-0.037441) | 0.215225 \/ 0.737135 (-0.521911) | 0.134440 \/ 0.296338 (-0.161898) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.616783 \/ 0.215209 (0.401574) | 6.113925 \/ 2.077655 (4.036270) | 2.403465 \/ 1.504120 (0.899345) | 1.967523 \/ 1.541195 (0.426329) | 2.042144 \/ 1.468490 (0.573654) | 0.927447 \/ 4.584777 (-3.657330) | 
5.280413 \/ 3.745712 (1.534701) | 2.715335 \/ 5.269862 (-2.554527) | 1.755640 \/ 4.565676 (-2.810036) | 0.114370 \/ 0.424275 (-0.309905) | 0.013583 \/ 0.007607 (0.005976) | 0.761701 \/ 0.226044 (0.535657) | 7.466049 \/ 2.268929 (5.197120) | 3.041943 \/ 55.444624 (-52.402682) | 2.314477 \/ 6.876477 (-4.562000) | 2.469285 \/ 2.142072 (0.327213) | 1.216055 \/ 4.805227 (-3.589172) | 0.214205 \/ 6.500664 (-6.286459) | 0.080901 \/ 0.075469 (0.005432) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.565185 \/ 1.841788 (-0.276603) | 18.387986 \/ 8.074308 (10.313678) | 19.665109 \/ 10.191392 (9.473717) | 0.226670 \/ 0.680424 (-0.453754) | 0.028430 \/ 0.534201 (-0.505771) | 0.510526 \/ 0.579283 (-0.068757) | 0.623178 \/ 0.434364 (0.188814) | 0.592039 \/ 0.540337 (0.051702) | 0.728462 \/ 1.386936 (-0.658474) |\n\n<\/details>\nPyArrow==latest\n\n
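<details>\n<summary>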
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009161 \/ 0.011353 (-0.002192) | 0.004891 \/ 0.011008 (-0.006117) | 0.106502 \/ 0.038508 (0.067994) | 0.048234 \/ 0.023109 (0.025125) | 0.451173 \/ 0.275898 (0.175275) | 0.557948 \/ 0.323480 (0.234468) | 0.005350 \/ 0.007986 (-0.002635) | 0.004559 \/ 0.004328 (0.000230) | 0.110393 \/ 0.004250 (0.106142) | 0.060624 \/ 0.037052 (0.023572) | 0.459265 \/ 0.258489 (0.200776) | 0.575302 \/ 0.293841 (0.281461) | 0.051379 \/ 0.128546 (-0.077167) | 0.015576 \/ 0.075646 (-0.060070) | 0.116650 \/ 0.419271 (-0.302621) | 0.065534 \/ 0.043533 (0.022001) | 0.461431 \/ 0.255139 (0.206292) | 0.487677 \/ 0.283200 (0.204477) | 0.037773 \/ 0.141683 (-0.103910) | 1.992416 \/ 1.452155 (0.540261) | 1.991280 \/ 1.492716 (0.498564) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.233607 \/ 0.018006 (0.215601) | 0.507539 \/ 0.000490 (0.507049) | 0.001307 \/ 0.000200 (0.001107) | 0.000096 \/ 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.032897 \/ 0.037411 (-0.004514) | 0.126549 \/ 0.014526 (0.112023) | 0.137893 \/ 0.176557 (-0.038663) | 0.192124 \/ 0.737135 (-0.545012) | 0.147300 \/ 0.296338 (-0.149038) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.679371 \/ 0.215209 (0.464162) | 6.673249 \/ 2.077655 (4.595595) | 2.979141 \/ 1.504120 (1.475022) | 2.568789 \/ 1.541195 (1.027594) | 2.537540 \/ 1.468490 (1.069050) | 0.973555 \/ 4.584777 (-3.611222) | 
5.313536 \/ 3.745712 (1.567824) | 2.693283 \/ 5.269862 (-2.576579) | 1.819483 \/ 4.565676 (-2.746194) | 0.111644 \/ 0.424275 (-0.312631) | 0.013218 \/ 0.007607 (0.005611) | 0.776114 \/ 0.226044 (0.550070) | 7.758907 \/ 2.268929 (5.489978) | 3.417611 \/ 55.444624 (-52.027013) | 2.859502 \/ 6.876477 (-4.016975) | 2.927726 \/ 2.142072 (0.785653) | 1.163671 \/ 4.805227 (-3.641556) | 0.228636 \/ 6.500664 (-6.272028) | 0.082077 \/ 0.075469 (0.006607) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.746150 \/ 1.841788 (-0.095637) | 17.961955 \/ 8.074308 (9.887647) | 21.590545 \/ 10.191392 (11.399153) | 0.210017 \/ 0.680424 (-0.470406) | 0.028435 \/ 0.534201 (-0.505766) | 0.509253 \/ 0.579283 (-0.070030) | 0.606993 \/ 0.434364 (0.172629) | 0.587189 \/ 0.540337 (0.046851) | 0.684023 \/ 1.386936 (-0.702913) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#9aaee6fd0b2bcbe18e4829602084bcd83d669c5e \"CML watermark\")\n"],"created_at":1686759446000,"updated_at":1686760419000,"closed_at":1686759879000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5957","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5957","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5957.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5957.patch","merged_at":1686759879000},"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5957\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5957\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5956","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5956\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5956\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5956\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5956","id":1756959367,"node_id":"PR_kwDODunzps5S_1o2","number":5956,"title":"Fix 
ArrowExamplesIterable.shard_data_sources","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.005893 \/ 0.011353 (-0.005460) | 0.003682 \/ 0.011008 (-0.007327) | 0.098358 \/ 0.038508 (0.059850) | 0.028130 \/ 0.023109 (0.005020) | 0.305960 \/ 0.275898 (0.030062) | 0.334869 \/ 0.323480 (0.011390) | 0.003522 \/ 0.007986 (-0.004463) | 0.003683 \/ 0.004328 (-0.000645) | 0.079418 \/ 0.004250 (0.075168) | 0.037662 \/ 0.037052 (0.000609) | 0.310893 \/ 0.258489 (0.052404) | 0.341347 \/ 0.293841 (0.047506) | 0.027450 \/ 0.128546 (-0.101096) | 0.008381 \/ 0.075646 (-0.067265) | 0.316020 \/ 0.419271 (-0.103252) | 0.045079 \/ 0.043533 (0.001546) | 0.307806 \/ 0.255139 (0.052667) | 0.331804 \/ 0.283200 (0.048604) | 0.091806 \/ 0.141683 (-0.049877) | 1.492611 \/ 1.452155 (0.040457) | 1.551762 \/ 1.492716 (0.059046) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.201640 \/ 0.018006 (0.183634) | 0.422776 \/ 0.000490 (0.422286) | 0.003734 \/ 0.000200 (0.003535) | 0.000080 \/ 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025429 \/ 0.037411 (-0.011982) | 0.104699 \/ 0.014526 (0.090173) | 0.110505 \/ 0.176557 (-0.066051) | 0.171252 \/ 0.737135 (-0.565883) | 0.113131 \/ 0.296338 (-0.183208) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.419914 \/ 0.215209 (0.204705) | 4.184414 \/ 2.077655 (2.106760) | 1.999263 \/ 1.504120 (0.495143) | 1.828669 \/ 1.541195 (0.287474) | 1.940366 \/ 1.468490 (0.471876) | 0.556939 \/ 4.584777 (-4.027838) | 
3.389164 \/ 3.745712 (-0.356548) | 1.796323 \/ 5.269862 (-3.473538) | 1.048843 \/ 4.565676 (-3.516833) | 0.067315 \/ 0.424275 (-0.356960) | 0.011531 \/ 0.007607 (0.003923) | 0.517226 \/ 0.226044 (0.291182) | 5.167255 \/ 2.268929 (2.898326) | 2.431129 \/ 55.444624 (-53.013495) | 2.133913 \/ 6.876477 (-4.742564) | 2.359021 \/ 2.142072 (0.216948) | 0.666390 \/ 4.805227 (-4.138838) | 0.135147 \/ 6.500664 (-6.365517) | 0.064855 \/ 0.075469 (-0.010614) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.166530 \/ 1.841788 (-0.675258) | 14.060551 \/ 8.074308 (5.986242) | 14.171663 \/ 10.191392 (3.980271) | 0.285821 \/ 0.680424 (-0.394603) | 0.016867 \/ 0.534201 (-0.517334) | 0.369102 \/ 0.579283 (-0.210181) | 0.393580 \/ 0.434364 (-0.040784) | 0.423721 \/ 0.540337 (-0.116616) | 0.512559 \/ 1.386936 (-0.874377) |\n\n<\/details>\nPyArrow==latest\n\n
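<details>\n<summary>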
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006674 \/ 0.011353 (-0.004679) | 0.004006 \/ 0.011008 (-0.007002) | 0.080160 \/ 0.038508 (0.041652) | 0.032508 \/ 0.023109 (0.009399) | 0.378168 \/ 0.275898 (0.102270) | 0.417796 \/ 0.323480 (0.094316) | 0.003706 \/ 0.007986 (-0.004280) | 0.002995 \/ 0.004328 (-0.001333) | 0.079275 \/ 0.004250 (0.075025) | 0.043690 \/ 0.037052 (0.006638) | 0.377717 \/ 0.258489 (0.119228) | 0.439801 \/ 0.293841 (0.145961) | 0.028438 \/ 0.128546 (-0.100108) | 0.008661 \/ 0.075646 (-0.066985) | 0.085280 \/ 0.419271 (-0.333991) | 0.043716 \/ 0.043533 (0.000183) | 0.370086 \/ 0.255139 (0.114947) | 0.403763 \/ 0.283200 (0.120563) | 0.095022 \/ 0.141683 (-0.046661) | 1.534376 \/ 1.452155 (0.082221) | 1.597658 \/ 1.492716 (0.104942) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.240229 \/ 0.018006 (0.222223) | 0.496281 \/ 0.000490 (0.495792) | 0.002165 \/ 0.000200 (0.001965) | 0.000075 \/ 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025330 \/ 0.037411 (-0.012081) | 0.102414 \/ 0.014526 (0.087888) | 0.112733 \/ 0.176557 (-0.063824) | 0.161181 \/ 0.737135 (-0.575955) | 0.114196 \/ 0.296338 (-0.182143) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.456808 \/ 0.215209 (0.241599) | 4.534937 \/ 2.077655 (2.457283) | 2.318834 \/ 1.504120 (0.814714) | 2.074085 \/ 1.541195 (0.532890) | 2.117409 \/ 1.468490 (0.648919) | 0.559110 \/ 4.584777 (-4.025667) | 
3.371695 \/ 3.745712 (-0.374017) | 2.543154 \/ 5.269862 (-2.726708) | 1.360552 \/ 4.565676 (-3.205125) | 0.067602 \/ 0.424275 (-0.356674) | 0.011396 \/ 0.007607 (0.003789) | 0.561666 \/ 0.226044 (0.335622) | 5.607666 \/ 2.268929 (3.338737) | 2.802775 \/ 55.444624 (-52.641849) | 2.486162 \/ 6.876477 (-4.390315) | 2.390885 \/ 2.142072 (0.248813) | 0.667407 \/ 4.805227 (-4.137820) | 0.135948 \/ 6.500664 (-6.364717) | 0.067272 \/ 0.075469 (-0.008197) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.279664 \/ 1.841788 (-0.562124) | 15.188099 \/ 8.074308 (7.113791) | 14.380355 \/ 10.191392 (4.188963) | 0.140344 \/ 0.680424 (-0.540080) | 0.016832 \/ 0.534201 (-0.517369) | 0.364631 \/ 0.579283 (-0.214652) | 0.400306 \/ 0.434364 (-0.034058) | 0.430793 \/ 0.540337 (-0.109545) | 0.525923 \/ 1.386936 (-0.861013) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#48ca19cf1f4d1c99765a1f847c1f6b849496d99d \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008502 \/ 0.011353 (-0.002851) | 0.005946 \/ 0.011008 (-0.005062) | 0.131279 \/ 0.038508 (0.092771) | 0.035400 \/ 0.023109 (0.012291) | 0.423240 \/ 0.275898 (0.147342) | 0.470248 \/ 0.323480 (0.146768) | 0.004949 \/ 0.007986 (-0.003037) | 0.004544 \/ 0.004328 (0.000215) | 0.106856 \/ 0.004250 (0.102605) | 0.046579 \/ 0.037052 (0.009527) | 0.441135 \/ 0.258489 (0.182646) | 0.470401 \/ 0.293841 (0.176561) | 0.047231 \/ 0.128546 (-0.081315) | 0.017278 \/ 0.075646 (-0.058368) | 0.401937 \/ 0.419271 (-0.017335) | 0.067151 \/ 0.043533 (0.023619) | 0.453908 \/ 0.255139 (0.198769) | 0.422171 \/ 0.283200 (0.138971) | 0.123583 \/ 0.141683 (-0.018100) | 1.852895 \/ 1.452155 (0.400740) | 1.827282 \/ 1.492716 (0.334566) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.246419 \/ 0.018006 (0.228413) | 0.576930 \/ 0.000490 (0.576440) | 0.007511 \/ 0.000200 (0.007312) | 0.000165 \/ 0.000054 (0.000111) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.032732 \/ 0.037411 (-0.004680) | 0.130266 \/ 0.014526 (0.115740) | 0.150537 \/ 0.176557 (-0.026019) | 0.218554 \/ 0.737135 (-0.518582) | 0.148572 \/ 0.296338 (-0.147766) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.598611 \/ 0.215209 (0.383402) | 6.181219 \/ 2.077655 (4.103564) | 2.473468 \/ 1.504120 (0.969348) | 2.206374 \/ 1.541195 (0.665179) | 2.216707 \/ 1.468490 (0.748217) | 0.981295 \/ 4.584777 (-3.603482) | 
5.716384 \/ 3.745712 (1.970672) | 5.882327 \/ 5.269862 (0.612466) | 2.761081 \/ 4.565676 (-1.804595) | 0.113544 \/ 0.424275 (-0.310731) | 0.015131 \/ 0.007607 (0.007524) | 0.850939 \/ 0.226044 (0.624894) | 8.046611 \/ 2.268929 (5.777682) | 3.340542 \/ 55.444624 (-52.104083) | 2.673692 \/ 6.876477 (-4.202785) | 2.926330 \/ 2.142072 (0.784257) | 1.176164 \/ 4.805227 (-3.629064) | 0.226745 \/ 6.500664 (-6.273919) | 0.085910 \/ 0.075469 (0.010441) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.483792 \/ 1.841788 (-0.357995) | 18.895009 \/ 8.074308 (10.820701) | 20.982461 \/ 10.191392 (10.791069) | 0.253085 \/ 0.680424 (-0.427339) | 0.031284 \/ 0.534201 (-0.502917) | 0.516569 \/ 0.579283 (-0.062714) | 0.635781 \/ 0.434364 (0.201417) | 0.604359 \/ 0.540337 (0.064022) | 0.725278 \/ 1.386936 (-0.661658) |\n\n<\/details>\nPyArrow==latest\n\n
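<details>\n<summary>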
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009220 \/ 0.011353 (-0.002133) | 0.005792 \/ 0.011008 (-0.005216) | 0.099795 \/ 0.038508 (0.061287) | 0.033812 \/ 0.023109 (0.010703) | 0.459386 \/ 0.275898 (0.183488) | 0.518067 \/ 0.323480 (0.194587) | 0.005083 \/ 0.007986 (-0.002902) | 0.004145 \/ 0.004328 (-0.000183) | 0.103506 \/ 0.004250 (0.099255) | 0.050429 \/ 0.037052 (0.013377) | 0.478149 \/ 0.258489 (0.219660) | 0.531280 \/ 0.293841 (0.237440) | 0.047373 \/ 0.128546 (-0.081173) | 0.013647 \/ 0.075646 (-0.061999) | 0.115174 \/ 0.419271 (-0.304098) | 0.061099 \/ 0.043533 (0.017566) | 0.455002 \/ 0.255139 (0.199863) | 0.507765 \/ 0.283200 (0.224565) | 0.112219 \/ 0.141683 (-0.029464) | 1.873591 \/ 1.452155 (0.421436) | 1.952061 \/ 1.492716 (0.459345) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.283587 \/ 0.018006 (0.265581) | 0.587562 \/ 0.000490 (0.587073) | 0.001252 \/ 0.000200 (0.001052) | 0.000095 \/ 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.032706 \/ 0.037411 (-0.004705) | 0.137715 \/ 0.014526 (0.123189) | 0.131932 \/ 0.176557 (-0.044625) | 0.200042 \/ 0.737135 (-0.537094) | 0.159327 \/ 0.296338 (-0.137011) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.624061 \/ 0.215209 (0.408852) | 6.386235 \/ 2.077655 (4.308580) | 2.908786 \/ 1.504120 (1.404666) | 2.589855 \/ 1.541195 (1.048660) | 2.387988 \/ 1.468490 (0.919498) | 0.952625 \/ 4.584777 (-3.632152) | 
5.571641 \/ 3.745712 (1.825929) | 2.711154 \/ 5.269862 (-2.558708) | 1.788015 \/ 4.565676 (-2.777662) | 0.104488 \/ 0.424275 (-0.319787) | 0.015213 \/ 0.007607 (0.007606) | 0.798446 \/ 0.226044 (0.572401) | 8.011614 \/ 2.268929 (5.742686) | 3.711951 \/ 55.444624 (-51.732673) | 2.896881 \/ 6.876477 (-3.979595) | 3.172116 \/ 2.142072 (1.030043) | 1.136816 \/ 4.805227 (-3.668411) | 0.239254 \/ 6.500664 (-6.261410) | 0.081136 \/ 0.075469 (0.005667) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.798246 \/ 1.841788 (-0.043542) | 19.497108 \/ 8.074308 (11.422800) | 23.450258 \/ 10.191392 (13.258866) | 0.250021 \/ 0.680424 (-0.430403) | 0.029138 \/ 0.534201 (-0.505063) | 0.532984 \/ 0.579283 (-0.046299) | 0.638161 \/ 0.434364 (0.203797) | 0.615720 \/ 0.540337 (0.075382) | 0.770621 \/ 1.386936 (-0.616315) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#7d8345c5f8a844ff44cfbb30cbda514ffe89bfd7 \"CML watermark\")\n","
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009120 \/ 0.011353 (-0.002233) | 0.005381 \/ 0.011008 (-0.005627) | 0.139719 \/ 0.038508 (0.101211) | 0.037229 \/ 0.023109 (0.014120) | 0.414633 \/ 0.275898 (0.138734) | 0.480313 \/ 0.323480 (0.156833) | 0.005027 \/ 0.007986 (-0.002959) | 0.005015 \/ 0.004328 (0.000687) | 0.108513 \/ 0.004250 (0.104263) | 0.056167 \/ 0.037052 (0.019115) | 0.407588 \/ 0.258489 (0.149099) | 0.518899 \/ 0.293841 (0.225058) | 0.048857 \/ 0.128546 (-0.079689) | 0.013694 \/ 0.075646 (-0.061952) | 0.418035 \/ 0.419271 (-0.001237) | 0.067755 \/ 0.043533 (0.024222) | 0.417740 \/ 0.255139 (0.162601) | 0.478622 \/ 0.283200 (0.195422) | 0.118290 \/ 0.141683 (-0.023393) | 1.901473 \/ 1.452155 (0.449319) | 1.978126 \/ 1.492716 (0.485409) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.271960 \/ 0.018006 (0.253954) | 0.602745 \/ 0.000490 (0.602255) | 0.005371 \/ 0.000200 (0.005171) | 0.000102 \/ 0.000054 (0.000048) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029620 \/ 0.037411 (-0.007791) | 0.122402 \/ 0.014526 (0.107877) | 0.132645 \/ 0.176557 (-0.043911) | 0.212635 \/ 0.737135 (-0.524500) | 0.136901 \/ 0.296338 (-0.159438) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.644017 \/ 0.215209 (0.428808) | 6.597151 \/ 2.077655 (4.519496) | 2.454471 \/ 1.504120 (0.950351) | 2.151357 \/ 1.541195 (0.610163) | 2.290748 \/ 1.468490 (0.822258) | 0.970194 \/ 4.584777 (-3.614583) | 
5.475275 \/ 3.745712 (1.729563) | 2.772658 \/ 5.269862 (-2.497204) | 1.785311 \/ 4.565676 (-2.780366) | 0.114503 \/ 0.424275 (-0.309772) | 0.015374 \/ 0.007607 (0.007767) | 0.768413 \/ 0.226044 (0.542368) | 7.956219 \/ 2.268929 (5.687290) | 3.272138 \/ 55.444624 (-52.172486) | 2.539638 \/ 6.876477 (-4.336839) | 2.713526 \/ 2.142072 (0.571454) | 1.181221 \/ 4.805227 (-3.624006) | 0.236327 \/ 6.500664 (-6.264337) | 0.089815 \/ 0.075469 (0.014345) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.521805 \/ 1.841788 (-0.319983) | 18.196529 \/ 8.074308 (10.122221) | 20.287324 \/ 10.191392 (10.095932) | 0.256959 \/ 0.680424 (-0.423465) | 0.028846 \/ 0.534201 (-0.505355) | 0.522354 \/ 0.579283 (-0.056929) | 0.600216 \/ 0.434364 (0.165852) | 0.607668 \/ 0.540337 (0.067331) | 0.762101 \/ 1.386936 (-0.624835) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009227 \/ 0.011353 (-0.002126) | 0.005398 \/ 0.011008 (-0.005610) | 0.094998 \/ 0.038508 (0.056490) | 0.036633 \/ 0.023109 (0.013524) | 0.493317 \/ 0.275898 (0.217419) | 0.517216 \/ 0.323480 (0.193736) | 0.005510 \/ 0.007986 (-0.002476) | 0.004249 \/ 0.004328 (-0.000079) | 0.107936 \/ 0.004250 (0.103685) | 0.050223 \/ 0.037052 (0.013171) | 0.580275 \/ 0.258489 (0.321786) | 0.551477 \/ 0.293841 (0.257636) | 0.048758 \/ 0.128546 (-0.079788) | 0.013954 \/ 0.075646 (-0.061692) | 0.107021 \/ 0.419271 (-0.312250) | 0.064416 \/ 0.043533 (0.020884) | 0.485225 \/ 0.255139 (0.230086) | 0.513862 \/ 0.283200 (0.230663) | 0.118848 \/ 0.141683 (-0.022835) | 1.755396 \/ 1.452155 (0.303241) | 1.970349 \/ 1.492716 (0.477633) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.290743 \/ 0.018006 (0.272737) | 0.603293 \/ 0.000490 (0.602803) | 0.006814 \/ 0.000200 (0.006614) | 0.000156 \/ 0.000054 (0.000101) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029862 \/ 0.037411 (-0.007550) | 0.136530 \/ 0.014526 (0.122005) | 0.133728 \/ 0.176557 (-0.042829) | 0.194709 \/ 0.737135 (-0.542427) | 0.151080 \/ 0.296338 (-0.145258) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.649202 \/ 0.215209 (0.433993) | 6.637578 \/ 2.077655 (4.559923) | 3.040135 \/ 1.504120 (1.536015) | 2.671308 \/ 1.541195 (1.130113) | 2.722412 \/ 1.468490 (1.253922) | 0.953029 \/ 4.584777 (-3.631748) | 
5.805002 \/ 3.745712 (2.059290) | 5.049939 \/ 5.269862 (-0.219922) | 2.284053 \/ 4.565676 (-2.281623) | 0.130399 \/ 0.424275 (-0.293876) | 0.014726 \/ 0.007607 (0.007119) | 0.932570 \/ 0.226044 (0.706526) | 8.576693 \/ 2.268929 (6.307765) | 4.032738 \/ 55.444624 (-51.411886) | 3.274715 \/ 6.876477 (-3.601762) | 3.513788 \/ 2.142072 (1.371716) | 1.130624 \/ 4.805227 (-3.674603) | 0.219597 \/ 6.500664 (-6.281067) | 0.081425 \/ 0.075469 (0.005956) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.735312 \/ 1.841788 (-0.106476) | 18.438587 \/ 8.074308 (10.364279) | 21.582310 \/ 10.191392 (11.390918) | 0.224040 \/ 0.680424 (-0.456384) | 0.027590 \/ 0.534201 (-0.506611) | 0.503598 \/ 0.579283 (-0.075685) | 0.624379 \/ 0.434364 (0.190015) | 0.571911 \/ 0.540337 (0.031574) | 0.723215 \/ 1.386936 (-0.663721) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#9e40d28f2b0060a429c70827191fa5ff3ce8cf27 \"CML watermark\")\n"],"created_at":1686750638000,"updated_at":1686753792000,"closed_at":1686753225000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5956","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5956","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5956.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5956.patch","merged_at":1686753225000},"body":"ArrowExamplesIterable.shard_data_sources was outdated\r\n\r\nI also fixed a warning message by not using format_type= in with_format()","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5956\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5956\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5955","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5955\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5955\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5955\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5955","id":1756827133,"node_id":"I_kwDODunzps5otw39","number":5955,"title":"Strange bug in loading local JSON files, using 
load_dataset","user":{"login":"Night-Quiet","id":73934131,"node_id":"MDQ6VXNlcjczOTM0MTMx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/73934131?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Night-Quiet","html_url":"https:\/\/github.com\/Night-Quiet","followers_url":"https:\/\/api.github.com\/users\/Night-Quiet\/followers","following_url":"https:\/\/api.github.com\/users\/Night-Quiet\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Night-Quiet\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Night-Quiet\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Night-Quiet\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Night-Quiet\/orgs","repos_url":"https:\/\/api.github.com\/users\/Night-Quiet\/repos","events_url":"https:\/\/api.github.com\/users\/Night-Quiet\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Night-Quiet\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This is the actual error:\r\n```\r\nFailed to read file '\/home\/lakala\/hjc\/code\/pycode\/glm\/temp.json' with error : cannot mix list and non-list, non-null values\r\n```\r\nWhich means some samples are incorrectly formatted.\r\n\r\nPyArrow, a storage backend that we use under the hood, requires that all the list elements have the same level of nesting (same number of dimensions) or are `None`.\r\n```python\r\nimport pyarrow as pa\r\npa.array([[1, 2, 3], 2]) # ArrowInvalid: cannot mix list and non-list, non-null values\r\npa.array([[1, 2, 3], [2]]) # works\r\n``` ","@mariosasko \r\nI used the same operation to check the original data before and after slicing.\r\nThis is reflected in my code.\r\n160000 is not a specific number.\r\nI can also get output using 150000.\r\nThis doesn't seem to align very well with what you said.\r\nBecause if only some sample formats are incorrect.\r\nSo there should be an error in one of the front and back slices.\r\nthank you for your reply.","Our JSON loader does the following in your case:\r\n\r\n```python\r\nimport json\r\nimport pyarrow as pa\r\n\r\nwith open(file, encoding=\"utf-8\") as f:\r\n dataset = json.load(f)\r\nkeys = set().union(*[row.keys() for row in dataset])\r\nmapping = {col: [row.get(col) for row in dataset] for col in keys}\r\npa_table = pa.Table.from_pydict(mapping) # the ArrowInvalid error comes from here\r\n```\r\n\r\nSo if this code throws an error with correctly-formatted JSON, then this is an Arrow bug and should be reported in their repo.\r\n\r\n> I used the same operation to check the original data before and after slicing.\r\nThis is reflected in my code.\r\n160000 is not a specific number.\r\nI can also get output using 150000.\r\nThis doesn't seem to align very well with what you said.\r\nBecause if only some sample formats are incorrect.\r\nSo there should be an error in one of the front and back slices.\r\n\r\nYou should shuffle the data to make sure that's not the case","@mariosasko \r\nThank you.\r\nI will try again."],"created_at":1686746760000,"updated_at":1687358535000,"closed_at":1687358535000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nI am using 'load_dataset 'loads a JSON file, but I found a strange bug: an error will be reported when the length of the JSON file exceeds 160000 (uncertain exact number). 
I have checked the data through the following code and there are no issues. So I cannot determine the true reason for this error. \r\n\r\nThe data is a list containing a dictionary. As follows: \r\n\r\n[\r\n{'input': 'someting...', 'target': 'someting...', 'type': 'someting...', 'history': ['someting...', ...]}, \r\n...\r\n]\n\n### Steps to reproduce the bug\n\n```\r\nimport json\r\nfrom datasets import load_dataset\r\n\r\npath = \"target.json\"\r\ntemp_path = \"temp.json\"\r\n\r\nwith open(path, \"r\") as f:\r\n data = json.load(f)\r\n print(f\"\\n-------the JSON file length is: {len(data)}-------\\n\")\r\n\r\nwith open(temp_path, \"w\") as f:\r\n json.dump(data[:160000], f)\r\ndataset = load_dataset(\"json\", data_files=temp_path)\r\nprint(\"\\n-------This works when the JSON file length is 160000-------\\n\")\r\n\r\nwith open(temp_path, \"w\") as f:\r\n json.dump(data[160000:], f)\r\ndataset = load_dataset(\"json\", data_files=temp_path)\r\nprint(\"\\n-------This works and eliminates data issues-------\\n\")\r\n\r\nwith open(temp_path, \"w\") as f:\r\n json.dump(data[:170000], f)\r\ndataset = load_dataset(\"json\", data_files=temp_path)\r\n```\n\n### Expected behavior\n\n```\r\n-------the JSON file length is: 173049-------\r\n\r\nDownloading and preparing dataset json\/default to \/root\/.cache\/huggingface\/datasets\/json\/default-acf3c7f418c5f4b4\/0.0.0\/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4...\r\nDownloading data files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 3328.81it\/s]\r\nExtracting data files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 639.47it\/s]\r\nDataset json downloaded and prepared to \/root\/.cache\/huggingface\/datasets\/json\/default-acf3c7f418c5f4b4\/0.0.0\/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4. Subsequent calls will reuse this data.\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 265.85it\/s]\r\n\r\n-------This works when the JSON file length is 160000-------\r\n\r\nDownloading and preparing dataset json\/default to \/root\/.cache\/huggingface\/datasets\/json\/default-a42f04b263ceea6a\/0.0.0\/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4...\r\nDownloading data files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 2038.05it\/s]\r\nExtracting data files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 794.83it\/s]\r\nDataset json downloaded and prepared to \/root\/.cache\/huggingface\/datasets\/json\/default-a42f04b263ceea6a\/0.0.0\/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4. 
Subsequent calls will reuse this data.\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 681.00it\/s]\r\n\r\n-------This works and eliminates data issues-------\r\n\r\nDownloading and preparing dataset json\/default to \/root\/.cache\/huggingface\/datasets\/json\/default-63f391c89599c7b0\/0.0.0\/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4...\r\nDownloading data files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 3682.44it\/s]\r\nExtracting data files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 788.70it\/s]\r\nGenerating train split: 0 examples [00:00, ? examples\/s]Failed to read file '\/home\/lakala\/hjc\/code\/pycode\/glm\/temp.json' with error : cannot mix list and non-list, non-null values\r\nTraceback (most recent call last):\r\n File \"\/home\/lakala\/conda\/envs\/glm\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 1858, in _prepare_split_single\r\n for _, table in generator:\r\n File \"\/home\/lakala\/conda\/envs\/glm\/lib\/python3.8\/site-packages\/datasets\/packaged_modules\/json\/json.py\", line 146, in _generate_tables\r\n raise ValueError(f\"Not able to read records in the JSON file at {file}.\") from None\r\nValueError: Not able to read records in the JSON file at \/home\/lakala\/hjc\/code\/pycode\/glm\/temp.json.\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"\/home\/lakala\/hjc\/code\/pycode\/glm\/test.py\", line 22, in \r\n dataset = load_dataset(\"json\", data_files=temp_path)\r\n File \"\/home\/lakala\/conda\/envs\/glm\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 1797, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"\/home\/lakala\/conda\/envs\/glm\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 890, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/home\/lakala\/conda\/envs\/glm\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 985, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"\/home\/lakala\/conda\/envs\/glm\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 1746, in _prepare_split\r\n for job_id, done, content in self._prepare_split_single(\r\n File \"\/home\/lakala\/conda\/envs\/glm\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 1891, in _prepare_split_single\r\n raise DatasetGenerationError(\"An error occurred while generating the dataset\") from e\r\ndatasets.builder.DatasetGenerationError: An error occurred while generating the dataset\r\n```\n\n### Environment info\n\n```\r\nUbuntu==22.04\r\n\r\npython==3.8\r\n\r\npytorch-transformers==1.2.0\r\ntransformers== 
4.27.1\r\ndatasets==2.12.0\r\nnumpy==1.24.3\r\npandas==1.5.3\r\n```","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5955\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5955\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5954","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5954\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5954\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5954\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5954","id":1756572994,"node_id":"PR_kwDODunzps5S-hSP","number":5954,"title":"Better filenotfound for gated","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
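PR 5954's title promises a better `FileNotFoundError` for gated repos. As a purely illustrative sketch of that wrapping pattern (this is not the PR's actual diff, and `fetch` is a hypothetical stand-in for the library's resolver):

```python
import urllib.error
import urllib.request

def fetch(url: str) -> bytes:
    """Fetch a URL, turning auth-related HTTP failures into a hint about gating."""
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.read()
    except urllib.error.HTTPError as err:
        if err.code in (401, 403, 404):
            # Re-raise with the actionable hint instead of a bare not-found error.
            raise FileNotFoundError(
                f"{url}. If the repo is private or gated, "
                "make sure to log in with `huggingface-cli login`."
            ) from err
        raise
```

The design point is simply that an unauthenticated request to a gated file should surface the login remedy, not a generic missing-file message.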
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006374 \/ 0.011353 (-0.004979) | 0.004100 \/ 0.011008 (-0.006909) | 0.104031 \/ 0.038508 (0.065523) | 0.035186 \/ 0.023109 (0.012076) | 0.328904 \/ 0.275898 (0.053006) | 0.361409 \/ 0.323480 (0.037929) | 0.003855 \/ 0.007986 (-0.004130) | 0.004140 \/ 0.004328 (-0.000189) | 0.080406 \/ 0.004250 (0.076156) | 0.045658 \/ 0.037052 (0.008606) | 0.341133 \/ 0.258489 (0.082644) | 0.372688 \/ 0.293841 (0.078847) | 0.032025 \/ 0.128546 (-0.096521) | 0.008877 \/ 0.075646 (-0.066769) | 0.354784 \/ 0.419271 (-0.064488) | 0.068874 \/ 0.043533 (0.025341) | 0.335441 \/ 0.255139 (0.080302) | 0.356498 \/ 0.283200 (0.073298) | 0.113367 \/ 0.141683 (-0.028316) | 1.522458 \/ 1.452155 (0.070304) | 1.608046 \/ 1.492716 (0.115329) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.231653 \/ 0.018006 (0.213647) | 0.446678 \/ 0.000490 (0.446188) | 0.003246 \/ 0.000200 (0.003046) | 0.000085 \/ 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025299 \/ 0.037411 (-0.012112) | 0.111440 \/ 0.014526 (0.096914) | 0.118758 \/ 0.176557 (-0.057799) | 0.175037 \/ 0.737135 (-0.562098) | 0.124583 \/ 0.296338 (-0.171755) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.418694 \/ 0.215209 (0.203484) | 4.174695 \/ 2.077655 (2.097041) | 1.890323 \/ 1.504120 (0.386203) | 1.683300 \/ 1.541195 (0.142106) | 1.781954 \/ 1.468490 (0.313464) | 0.546131 \/ 4.584777 (-4.038645) | 
3.768055 \/ 3.745712 (0.022343) | 1.839878 \/ 5.269862 (-3.429983) | 1.111877 \/ 4.565676 (-3.453800) | 0.068568 \/ 0.424275 (-0.355707) | 0.011950 \/ 0.007607 (0.004343) | 0.527469 \/ 0.226044 (0.301425) | 5.274887 \/ 2.268929 (3.005958) | 2.391274 \/ 55.444624 (-53.053351) | 2.063837 \/ 6.876477 (-4.812640) | 2.140627 \/ 2.142072 (-0.001445) | 0.681508 \/ 4.805227 (-4.123719) | 0.148203 \/ 6.500664 (-6.352461) | 0.064456 \/ 0.075469 (-0.011013) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.221478 \/ 1.841788 (-0.620310) | 14.713705 \/ 8.074308 (6.639397) | 14.674184 \/ 10.191392 (4.482792) | 0.148411 \/ 0.680424 (-0.532012) | 0.017858 \/ 0.534201 (-0.516343) | 0.436166 \/ 0.579283 (-0.143117) | 0.437290 \/ 0.434364 (0.002926) | 0.521994 \/ 0.540337 (-0.018343) | 0.635488 \/ 1.386936 (-0.751448) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006108 \/ 0.011353 (-0.005245) | 0.003888 \/ 0.011008 (-0.007120) | 0.078424 \/ 0.038508 (0.039916) | 0.033618 \/ 0.023109 (0.010509) | 0.376284 \/ 0.275898 (0.100386) | 0.396957 \/ 0.323480 (0.073477) | 0.003799 \/ 0.007986 (-0.004187) | 0.003160 \/ 0.004328 (-0.001168) | 0.078358 \/ 0.004250 (0.074107) | 0.045597 \/ 0.037052 (0.008545) | 0.386396 \/ 0.258489 (0.127907) | 0.412985 \/ 0.293841 (0.119144) | 0.031610 \/ 0.128546 (-0.096936) | 0.008720 \/ 0.075646 (-0.066926) | 0.085944 \/ 0.419271 (-0.333328) | 0.050780 \/ 0.043533 (0.007247) | 0.378099 \/ 0.255139 (0.122960) | 0.381894 \/ 0.283200 (0.098694) | 0.098926 \/ 0.141683 (-0.042756) | 1.513842 \/ 1.452155 (0.061688) | 1.595040 \/ 1.492716 (0.102323) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.208169 \/ 0.018006 (0.190163) | 0.431653 \/ 0.000490 (0.431163) | 0.000935 \/ 0.000200 (0.000735) | 0.000088 \/ 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029600 \/ 0.037411 (-0.007812) | 0.116936 \/ 0.014526 (0.102410) | 0.125603 \/ 0.176557 (-0.050953) | 0.177007 \/ 0.737135 (-0.560129) | 0.130602 \/ 0.296338 (-0.165736) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.457158 \/ 0.215209 (0.241949) | 4.563254 \/ 2.077655 (2.485599) | 2.303549 \/ 1.504120 (0.799429) | 2.107269 \/ 1.541195 (0.566074) | 2.130861 \/ 1.468490 (0.662371) | 0.548931 \/ 4.584777 (-4.035846) | 
3.745578 \/ 3.745712 (-0.000134) | 1.820372 \/ 5.269862 (-3.449490) | 1.099316 \/ 4.565676 (-3.466361) | 0.068218 \/ 0.424275 (-0.356057) | 0.012336 \/ 0.007607 (0.004728) | 0.569721 \/ 0.226044 (0.343676) | 5.691312 \/ 2.268929 (3.422384) | 2.797483 \/ 55.444624 (-52.647141) | 2.422621 \/ 6.876477 (-4.453855) | 2.426187 \/ 2.142072 (0.284115) | 0.674777 \/ 4.805227 (-4.130451) | 0.144855 \/ 6.500664 (-6.355809) | 0.065805 \/ 0.075469 (-0.009664) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.305078 \/ 1.841788 (-0.536709) | 14.874315 \/ 8.074308 (6.800007) | 14.541301 \/ 10.191392 (4.349909) | 0.175818 \/ 0.680424 (-0.504606) | 0.018169 \/ 0.534201 (-0.516032) | 0.435836 \/ 0.579283 (-0.143447) | 0.458397 \/ 0.434364 (0.024033) | 0.506232 \/ 0.540337 (-0.034106) | 0.605306 \/ 1.386936 (-0.781630) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#7e0c1ceab96821c7c6557482d25a9bd2078d716a \"CML watermark\")\n","
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006138 \/ 0.011353 (-0.005215) | 0.003792 \/ 0.011008 (-0.007216) | 0.099417 \/ 0.038508 (0.060908) | 0.028739 \/ 0.023109 (0.005630) | 0.302835 \/ 0.275898 (0.026937) | 0.336397 \/ 0.323480 (0.012918) | 0.003537 \/ 0.007986 (-0.004449) | 0.002973 \/ 0.004328 (-0.001355) | 0.077461 \/ 0.004250 (0.073211) | 0.039493 \/ 0.037052 (0.002440) | 0.302367 \/ 0.258489 (0.043878) | 0.344936 \/ 0.293841 (0.051095) | 0.027813 \/ 0.128546 (-0.100733) | 0.008591 \/ 0.075646 (-0.067055) | 0.318975 \/ 0.419271 (-0.100297) | 0.045971 \/ 0.043533 (0.002438) | 0.301672 \/ 0.255139 (0.046533) | 0.328202 \/ 0.283200 (0.045003) | 0.091400 \/ 0.141683 (-0.050282) | 1.487215 \/ 1.452155 (0.035060) | 1.557730 \/ 1.492716 (0.065014) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.208343 \/ 0.018006 (0.190336) | 0.426764 \/ 0.000490 (0.426275) | 0.001196 \/ 0.000200 (0.000996) | 0.000069 \/ 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.024332 \/ 0.037411 (-0.013079) | 0.101861 \/ 0.014526 (0.087335) | 0.108669 \/ 0.176557 (-0.067888) | 0.172042 \/ 0.737135 (-0.565093) | 0.113048 \/ 0.296338 (-0.183290) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.421419 \/ 0.215209 (0.206210) | 4.200816 \/ 2.077655 (2.123162) | 1.913516 \/ 1.504120 (0.409396) | 1.712167 \/ 1.541195 (0.170972) | 1.762129 \/ 1.468490 (0.293639) | 0.561616 \/ 4.584777 (-4.023161) | 
3.398122 \/ 3.745712 (-0.347590) | 1.744323 \/ 5.269862 (-3.525538) | 1.036023 \/ 4.565676 (-3.529653) | 0.067658 \/ 0.424275 (-0.356617) | 0.011145 \/ 0.007607 (0.003538) | 0.522803 \/ 0.226044 (0.296759) | 5.226245 \/ 2.268929 (2.957317) | 2.355148 \/ 55.444624 (-53.089476) | 2.014939 \/ 6.876477 (-4.861538) | 2.140028 \/ 2.142072 (-0.002044) | 0.695049 \/ 4.805227 (-4.110178) | 0.138428 \/ 6.500664 (-6.362236) | 0.066721 \/ 0.075469 (-0.008748) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.219610 \/ 1.841788 (-0.622177) | 14.239576 \/ 8.074308 (6.165268) | 14.381955 \/ 10.191392 (4.190563) | 0.131208 \/ 0.680424 (-0.549216) | 0.016698 \/ 0.534201 (-0.517503) | 0.361373 \/ 0.579283 (-0.217910) | 0.382560 \/ 0.434364 (-0.051804) | 0.419427 \/ 0.540337 (-0.120911) | 0.508314 \/ 1.386936 (-0.878622) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006174 \/ 0.011353 (-0.005179) | 0.003893 \/ 0.011008 (-0.007115) | 0.079614 \/ 0.038508 (0.041106) | 0.028685 \/ 0.023109 (0.005576) | 0.368627 \/ 0.275898 (0.092729) | 0.411599 \/ 0.323480 (0.088119) | 0.003573 \/ 0.007986 (-0.004413) | 0.002989 \/ 0.004328 (-0.001340) | 0.078653 \/ 0.004250 (0.074402) | 0.041146 \/ 0.037052 (0.004094) | 0.362387 \/ 0.258489 (0.103898) | 0.417234 \/ 0.293841 (0.123393) | 0.027958 \/ 0.128546 (-0.100589) | 0.008695 \/ 0.075646 (-0.066952) | 0.084637 \/ 0.419271 (-0.334635) | 0.044188 \/ 0.043533 (0.000655) | 0.358514 \/ 0.255139 (0.103375) | 0.392314 \/ 0.283200 (0.109114) | 0.093986 \/ 0.141683 (-0.047697) | 1.535366 \/ 1.452155 (0.083212) | 1.605978 \/ 1.492716 (0.113262) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.196215 \/ 0.018006 (0.178209) | 0.429403 \/ 0.000490 (0.428913) | 0.003736 \/ 0.000200 (0.003536) | 0.000078 \/ 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025281 \/ 0.037411 (-0.012130) | 0.104325 \/ 0.014526 (0.089799) | 0.111548 \/ 0.176557 (-0.065009) | 0.162326 \/ 0.737135 (-0.574809) | 0.113853 \/ 0.296338 (-0.182486) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.447600 \/ 0.215209 (0.232391) | 4.463422 \/ 2.077655 (2.385767) | 2.168028 \/ 1.504120 (0.663908) | 1.968699 \/ 1.541195 (0.427504) | 2.035531 \/ 1.468490 (0.567041) | 0.564575 \/ 4.584777 (-4.020202) | 
3.435338 \/ 3.745712 (-0.310374) | 2.981930 \/ 5.269862 (-2.287932) | 1.492172 \/ 4.565676 (-3.073505) | 0.067981 \/ 0.424275 (-0.356294) | 0.011254 \/ 0.007607 (0.003647) | 0.544385 \/ 0.226044 (0.318340) | 5.441694 \/ 2.268929 (3.172765) | 2.650168 \/ 55.444624 (-52.794456) | 2.333974 \/ 6.876477 (-4.542503) | 2.383424 \/ 2.142072 (0.241351) | 0.669814 \/ 4.805227 (-4.135414) | 0.135456 \/ 6.500664 (-6.365209) | 0.067067 \/ 0.075469 (-0.008402) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.313275 \/ 1.841788 (-0.528513) | 14.527636 \/ 8.074308 (6.453328) | 14.470957 \/ 10.191392 (4.279565) | 0.144361 \/ 0.680424 (-0.536063) | 0.016847 \/ 0.534201 (-0.517354) | 0.365158 \/ 0.579283 (-0.214125) | 0.393809 \/ 0.434364 (-0.040555) | 0.428527 \/ 0.540337 (-0.111810) | 0.515816 \/ 1.386936 (-0.871120) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#7845d4c3c301226b3f8941ac90aaa123bfd7c69e \"CML watermark\")\n"],"created_at":1686738790000,"updated_at":1686746007000,"closed_at":1686745591000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5954","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5954","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5954.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5954.patch","merged_at":1686745591000},"body":"close https:\/\/github.com\/huggingface\/datasets\/issues\/5953\r\n\r\n\"image\"\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5954\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5954\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5953","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5953\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5953\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5953\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5953","id":1756520523,"node_id":"I_kwDODunzps5osmBL","number":5953,"title":"Bad error message when trying to download gated 
dataset","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["cc @sanchit-gandhi @Vaibhavs10 @lhoestq - this is mainly for demos that use Common Voice datasets as done here: https:\/\/github.com\/facebookresearch\/fairseq\/tree\/main\/examples\/mms#-transformers\r\n","Hi ! the error for me is\r\n\r\n```\r\nFileNotFoundError: Couldn't find a dataset script at \/content\/mozilla-foundation\/common_voice_13_0\/common_voice_13_0.py or any data file in the same directory. Couldn't find 'mozilla-foundation\/common_voice_13_0' on the Hugging Face Hub either: FileNotFoundError: Dataset 'mozilla-foundation\/common_voice_13_0' doesn't exist on the Hub. If the repo is private or gated, make sure to log in with `huggingface-cli login`.\r\n```\r\n\r\nAnd tbh idk how you managed to get your error. \"n_shards.json\" is not even a thing in `datasets`","Okay, I am able to reproduce @patrickvonplaten's original error: https:\/\/github.com\/Vaibhavs10\/scratchpad\/blob\/main\/cv13_datasets_test.ipynb\r\n\r\nAlso not sure why it looks for `n_shards.json`","Ok I see, this file is downloaded from the CV dataset script - let me investigate","Ok I see: when you log out you no longer have access to the repository.\r\n\r\nTherefore the dataset script is loaded from cache:\r\n```\r\nWARNING:datasets.load:Using the latest cached version of the module from \/root\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/mozilla-foundation--common_voice_13_0\/22809012aac1fc9803eaffc44122e4149043748e93933935d5ea19898587e4d7 (last modified on Wed Jun 14 10:13:17 2023) since it couldn't be found locally at mozilla-foundation\/common_voice_13_0., or remotely on the Hugging Face Hub.\r\n```\r\n\r\nand the script tries to download the n_shards.json but fails","Is this ok for you https:\/\/github.com\/huggingface\/datasets\/pull\/5954 ?\r\n\r\nI'll do a release this afternoon","Cool! ","this is included in the new release 2.13.0"],"created_at":1686737019000,"updated_at":1686760611000,"closed_at":1686745592000,"author_association":"MEMBER","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nWhen I attempt to download a model from the Hub that is gated without being logged in, I get a nice error message. 
E.g.:\r\n\r\nE.g.\r\n```sh\r\nRepository Not Found for url: https:\/\/huggingface.co\/api\/models\/DeepFloyd\/IF-I-XL-v1.0.\r\nPlease make sure you specified the correct `repo_id` and `repo_type`.\r\nIf you are trying to access a private or gated repo, make sure you are authenticated.\r\nInvalid username or password..\r\nWill try to load from local cache.\r\n```\r\n\r\nIf I do the same for a gated dataset on the Hub, I'm not gated a nice error message IMO:\r\n\r\n```sh\r\nFile ~\/hf\/lib\/python3.10\/site-packages\/fsspec\/implementations\/http.py:430, in HTTPFileSystem._info(self, url, **kwargs)\r\n 427 except Exception as exc:\r\n 428 if policy == \"get\":\r\n 429 # If get failed, then raise a FileNotFoundError\r\n--> 430 raise FileNotFoundError(url) from exc\r\n 431 logger.debug(str(exc))\r\n 433 return {\"name\": url, \"size\": None, **info, \"type\": \"file\"}\r\n\r\nFileNotFoundError: https:\/\/huggingface.co\/datasets\/mozilla-foundation\/common_voice_13_0\/resolve\/main\/n_shards.json\r\n```\n\n### Steps to reproduce the bug\n\n```\r\nhuggingface-cli logout\r\n```\r\n\r\nand then:\r\n\r\n```py\r\nfrom datasets import load_dataset, Audio\r\n\r\n# English\r\nstream_data = load_dataset(\"mozilla-foundation\/common_voice_13_0\", \"en\", split=\"test\", streaming=True)\r\nstream_data = stream_data.cast_column(\"audio\", Audio(sampling_rate=16000))\r\nen_sample = next(iter(stream_data))[\"audio\"][\"array\"]\r\n\r\n# Swahili\r\nstream_data = load_dataset(\"mozilla-foundation\/common_voice_13_0\", \"sw\", split=\"test\", streaming=True)\r\nstream_data = stream_data.cast_column(\"audio\", Audio(sampling_rate=16000))\r\nsw_sample = next(iter(stream_data))[\"audio\"][\"array\"]\r\n```\n\n### Expected behavior\n\nBetter error message\n\n### Environment info\n\nCopy-and-paste the text below in your GitHub issue.\r\n\r\n- `datasets` version: 2.12.0\r\n- Platform: Linux-6.2.0-76060200-generic-x86_64-with-glibc2.35\r\n- Python version: 3.10.6\r\n- Huggingface_hub version: 0.16.0.dev0\r\n- PyArrow version: 11.0.0\r\n- Pandas version: 1.5.3\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5953\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5953\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5952","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5952\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5952\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5952\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5952","id":1756481591,"node_id":"PR_kwDODunzps5S-OIh","number":5952,"title":"Add Arrow builder 
docs","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006522 \/ 0.011353 (-0.004831) | 0.004319 \/ 0.011008 (-0.006690) | 0.099280 \/ 0.038508 (0.060772) | 0.033117 \/ 0.023109 (0.010007) | 0.339392 \/ 0.275898 (0.063494) | 0.366219 \/ 0.323480 (0.042739) | 0.003896 \/ 0.007986 (-0.004090) | 0.003412 \/ 0.004328 (-0.000916) | 0.076655 \/ 0.004250 (0.072404) | 0.045203 \/ 0.037052 (0.008150) | 0.355800 \/ 0.258489 (0.097311) | 0.372533 \/ 0.293841 (0.078692) | 0.032318 \/ 0.128546 (-0.096229) | 0.009030 \/ 0.075646 (-0.066616) | 0.328701 \/ 0.419271 (-0.090571) | 0.052891 \/ 0.043533 (0.009358) | 0.341131 \/ 0.255139 (0.085992) | 0.351593 \/ 0.283200 (0.068393) | 0.105136 \/ 0.141683 (-0.036546) | 1.475953 \/ 1.452155 (0.023798) | 1.566074 \/ 1.492716 (0.073357) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.216671 \/ 0.018006 (0.198664) | 0.446952 \/ 0.000490 (0.446462) | 0.006340 \/ 0.000200 (0.006140) | 0.000096 \/ 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.028293 \/ 0.037411 (-0.009118) | 0.112298 \/ 0.014526 (0.097773) | 0.118634 \/ 0.176557 (-0.057923) | 0.175542 \/ 0.737135 (-0.561593) | 0.124773 \/ 0.296338 (-0.171565) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.435209 \/ 0.215209 (0.220000) | 4.344361 \/ 2.077655 (2.266706) | 2.128943 \/ 1.504120 (0.624823) | 1.945465 \/ 1.541195 (0.404271) | 2.049932 \/ 1.468490 (0.581442) | 0.547126 \/ 4.584777 (-4.037651) | 
3.768698 \/ 3.745712 (0.022986) | 1.924441 \/ 5.269862 (-3.345420) | 1.146364 \/ 4.565676 (-3.419312) | 0.067466 \/ 0.424275 (-0.356809) | 0.011175 \/ 0.007607 (0.003568) | 0.540978 \/ 0.226044 (0.314933) | 5.393120 \/ 2.268929 (3.124191) | 2.639027 \/ 55.444624 (-52.805597) | 2.327216 \/ 6.876477 (-4.549261) | 2.500532 \/ 2.142072 (0.358460) | 0.679120 \/ 4.805227 (-4.126107) | 0.148824 \/ 6.500664 (-6.351840) | 0.064195 \/ 0.075469 (-0.011274) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.158387 \/ 1.841788 (-0.683401) | 14.880751 \/ 8.074308 (6.806443) | 14.725249 \/ 10.191392 (4.533857) | 0.149785 \/ 0.680424 (-0.530639) | 0.017338 \/ 0.534201 (-0.516863) | 0.390980 \/ 0.579283 (-0.188303) | 0.425611 \/ 0.434364 (-0.008753) | 0.458851 \/ 0.540337 (-0.081487) | 0.559209 \/ 1.386936 (-0.827727) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006835 \/ 0.011353 (-0.004518) | 0.004318 \/ 0.011008 (-0.006690) | 0.076715 \/ 0.038508 (0.038207) | 0.033528 \/ 0.023109 (0.010419) | 0.411986 \/ 0.275898 (0.136087) | 0.438752 \/ 0.323480 (0.115272) | 0.004039 \/ 0.007986 (-0.003947) | 0.003509 \/ 0.004328 (-0.000819) | 0.077924 \/ 0.004250 (0.073673) | 0.049519 \/ 0.037052 (0.012467) | 0.420595 \/ 0.258489 (0.162106) | 0.450536 \/ 0.293841 (0.156695) | 0.032817 \/ 0.128546 (-0.095729) | 0.008963 \/ 0.075646 (-0.066684) | 0.083818 \/ 0.419271 (-0.335454) | 0.057591 \/ 0.043533 (0.014058) | 0.404605 \/ 0.255139 (0.149466) | 0.423661 \/ 0.283200 (0.140462) | 0.110698 \/ 0.141683 (-0.030984) | 1.512515 \/ 1.452155 (0.060361) | 1.569207 \/ 1.492716 (0.076490) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.200795 \/ 0.018006 (0.182789) | 0.448853 \/ 0.000490 (0.448363) | 0.003657 \/ 0.000200 (0.003457) | 0.000102 \/ 0.000054 (0.000048) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.031612 \/ 0.037411 (-0.005799) | 0.116712 \/ 0.014526 (0.102186) | 0.126162 \/ 0.176557 (-0.050395) | 0.180522 \/ 0.737135 (-0.556614) | 0.129768 \/ 0.296338 (-0.166570) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.433797 \/ 0.215209 (0.218588) | 4.353099 \/ 2.077655 (2.275444) | 2.117582 \/ 1.504120 (0.613462) | 1.934487 \/ 1.541195 (0.393292) | 2.016988 \/ 1.468490 (0.548498) | 0.531387 \/ 4.584777 (-4.053390) | 
3.843520 \/ 3.745712 (0.097807) | 1.879560 \/ 5.269862 (-3.390301) | 1.129445 \/ 4.565676 (-3.436231) | 0.065952 \/ 0.424275 (-0.358323) | 0.011566 \/ 0.007607 (0.003959) | 0.533949 \/ 0.226044 (0.307904) | 5.327447 \/ 2.268929 (3.058518) | 2.572202 \/ 55.444624 (-52.872422) | 2.240723 \/ 6.876477 (-4.635753) | 2.329290 \/ 2.142072 (0.187217) | 0.662162 \/ 4.805227 (-4.143066) | 0.143191 \/ 6.500664 (-6.357473) | 0.065273 \/ 0.075469 (-0.010196) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.274945 \/ 1.841788 (-0.566843) | 15.444511 \/ 8.074308 (7.370203) | 14.793524 \/ 10.191392 (4.602132) | 0.175607 \/ 0.680424 (-0.504817) | 0.017324 \/ 0.534201 (-0.516877) | 0.396172 \/ 0.579283 (-0.183111) | 0.437334 \/ 0.434364 (0.002970) | 0.472621 \/ 0.540337 (-0.067716) | 0.574888 \/ 1.386936 (-0.812048) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#b4ab1b3ed7257b0e0ad075d7271a51835f320a5e \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006976 \/ 0.011353 (-0.004377) | 0.004541 \/ 0.011008 (-0.006467) | 0.106085 \/ 0.038508 (0.067577) | 0.029148 \/ 0.023109 (0.006039) | 0.306386 \/ 0.275898 (0.030488) | 0.351474 \/ 0.323480 (0.027994) | 0.003924 \/ 0.007986 (-0.004062) | 0.004588 \/ 0.004328 (0.000260) | 0.090479 \/ 0.004250 (0.086229) | 0.041195 \/ 0.037052 (0.004142) | 0.346020 \/ 0.258489 (0.087531) | 0.362526 \/ 0.293841 (0.068685) | 0.041020 \/ 0.128546 (-0.087526) | 0.012536 \/ 0.075646 (-0.063110) | 0.333247 \/ 0.419271 (-0.086024) | 0.059786 \/ 0.043533 (0.016253) | 0.318094 \/ 0.255139 (0.062955) | 0.343879 \/ 0.283200 (0.060679) | 0.110083 \/ 0.141683 (-0.031600) | 1.514027 \/ 1.452155 (0.061872) | 1.551435 \/ 1.492716 (0.058719) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.235401 \/ 0.018006 (0.217395) | 0.544292 \/ 0.000490 (0.543803) | 0.005284 \/ 0.000200 (0.005084) | 0.000112 \/ 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025008 \/ 0.037411 (-0.012403) | 0.102235 \/ 0.014526 (0.087709) | 0.105523 \/ 0.176557 (-0.071034) | 0.180846 \/ 0.737135 (-0.556289) | 0.107078 \/ 0.296338 (-0.189261) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.502374 \/ 0.215209 (0.287165) | 5.224254 \/ 2.077655 (3.146600) | 1.987193 \/ 1.504120 (0.483073) | 1.694680 \/ 1.541195 (0.153485) | 1.663907 \/ 1.468490 (0.195417) | 0.786470 \/ 4.584777 (-3.798307) | 
4.977895 \/ 3.745712 (1.232183) | 4.713451 \/ 5.269862 (-0.556410) | 2.298763 \/ 4.565676 (-2.266913) | 0.090225 \/ 0.424275 (-0.334051) | 0.011427 \/ 0.007607 (0.003820) | 0.640686 \/ 0.226044 (0.414641) | 6.351727 \/ 2.268929 (4.082798) | 2.636912 \/ 55.444624 (-52.807712) | 2.075566 \/ 6.876477 (-4.800911) | 2.080260 \/ 2.142072 (-0.061812) | 0.952727 \/ 4.805227 (-3.852500) | 0.188651 \/ 6.500664 (-6.312013) | 0.068997 \/ 0.075469 (-0.006472) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.258878 \/ 1.841788 (-0.582910) | 15.444724 \/ 8.074308 (7.370416) | 17.521918 \/ 10.191392 (7.330526) | 0.189732 \/ 0.680424 (-0.490692) | 0.031084 \/ 0.534201 (-0.503117) | 0.445150 \/ 0.579283 (-0.134133) | 0.575844 \/ 0.434364 (0.141480) | 0.498162 \/ 0.540337 (-0.042176) | 0.635885 \/ 1.386936 (-0.751051) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007402 \/ 0.011353 (-0.003951) | 0.005058 \/ 0.011008 (-0.005950) | 0.077659 \/ 0.038508 (0.039151) | 0.034934 \/ 0.023109 (0.011825) | 0.373139 \/ 0.275898 (0.097241) | 0.411857 \/ 0.323480 (0.088377) | 0.003751 \/ 0.007986 (-0.004235) | 0.003634 \/ 0.004328 (-0.000695) | 0.075914 \/ 0.004250 (0.071663) | 0.037555 \/ 0.037052 (0.000503) | 0.387482 \/ 0.258489 (0.128993) | 0.434407 \/ 0.293841 (0.140566) | 0.040540 \/ 0.128546 (-0.088006) | 0.013458 \/ 0.075646 (-0.062189) | 0.096129 \/ 0.419271 (-0.323143) | 0.055369 \/ 0.043533 (0.011836) | 0.386564 \/ 0.255139 (0.131425) | 0.410417 \/ 0.283200 (0.127218) | 0.093265 \/ 0.141683 (-0.048418) | 1.432841 \/ 1.452155 (-0.019314) | 1.533180 \/ 1.492716 (0.040463) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.281051 \/ 0.018006 (0.263045) | 0.547635 \/ 0.000490 (0.547146) | 0.004434 \/ 0.000200 (0.004234) | 0.000105 \/ 0.000054 (0.000050) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.026409 \/ 0.037411 (-0.011002) | 0.098586 \/ 0.014526 (0.084060) | 0.109223 \/ 0.176557 (-0.067334) | 0.165958 \/ 0.737135 (-0.571177) | 0.111751 \/ 0.296338 (-0.184587) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.542717 \/ 0.215209 (0.327508) | 5.530075 \/ 2.077655 (3.452420) | 2.351141 \/ 1.504120 (0.847022) | 2.021659 \/ 1.541195 (0.480464) | 1.964900 \/ 1.468490 (0.496410) | 0.819698 \/ 4.584777 (-3.765079) | 
4.917412 \/ 3.745712 (1.171700) | 2.425149 \/ 5.269862 (-2.844712) | 1.561953 \/ 4.565676 (-3.003724) | 0.098417 \/ 0.424275 (-0.325858) | 0.012594 \/ 0.007607 (0.004986) | 0.717212 \/ 0.226044 (0.491168) | 6.994833 \/ 2.268929 (4.725904) | 2.997347 \/ 55.444624 (-52.447277) | 2.388366 \/ 6.876477 (-4.488111) | 2.502913 \/ 2.142072 (0.360841) | 1.030545 \/ 4.805227 (-3.774682) | 0.184844 \/ 6.500664 (-6.315820) | 0.076889 \/ 0.075469 (0.001420) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.371647 \/ 1.841788 (-0.470141) | 15.522995 \/ 8.074308 (7.448687) | 17.349823 \/ 10.191392 (7.158431) | 0.229709 \/ 0.680424 (-0.450714) | 0.023303 \/ 0.534201 (-0.510898) | 0.413874 \/ 0.579283 (-0.165409) | 0.567552 \/ 0.434364 (0.133188) | 0.491722 \/ 0.540337 (-0.048615) | 0.590640 \/ 1.386936 (-0.796296) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#f1911ffa5d1f58f509d04fe1ddeb9d00a63f94d5 \"CML watermark\")\n"],"created_at":1686735766000,"updated_at":1686753751000,"closed_at":1686753279000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5952","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5952","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5952.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5952.patch","merged_at":1686753279000},"body":"following https:\/\/github.com\/huggingface\/datasets\/pull\/5944","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5952\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5952\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5951","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5951\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5951\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5951\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5951","id":1756363546,"node_id":"I_kwDODunzps5or_sa","number":5951,"title":"What is the Right way to use discofuse 
dataset??","user":{"login":"akesh1235","id":125154243,"node_id":"U_kgDOB3Wzww","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/125154243?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/akesh1235","html_url":"https:\/\/github.com\/akesh1235","followers_url":"https:\/\/api.github.com\/users\/akesh1235\/followers","following_url":"https:\/\/api.github.com\/users\/akesh1235\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/akesh1235\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/akesh1235\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/akesh1235\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/akesh1235\/orgs","repos_url":"https:\/\/api.github.com\/users\/akesh1235\/repos","events_url":"https:\/\/api.github.com\/users\/akesh1235\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/akesh1235\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for opening https:\/\/huggingface.co\/datasets\/discofuse\/discussions\/3, let's continue the discussion over there if you don't mind","I have posted there also sir, please check\r\n@lhoestq"],"created_at":1686731919000,"updated_at":1686749106000,"closed_at":1686744616000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"[Click here for Dataset link](https:\/\/huggingface.co\/datasets\/discofuse\/viewer\/discofuse-wikipedia\/train?row=6)\r\n**Below is the following way, as per my understanding , Is it correct :question: :question:**\r\n\r\nThe **columns\/features from `DiscoFuse dataset`** that will be the **input to the `encoder` and `decoder`** are:\r\n\r\n[Click here for Dataset link](https:\/\/huggingface.co\/datasets\/discofuse\/viewer\/discofuse-wikipedia\/train?row=6)\r\n\r\n1. **coherent_first_sentence**\r\n\r\n2. **coherent_second_sentence**\r\n\r\n3. **incoherent_first_sentence**\r\n\r\n4. **incoherent_second_sentence**\r\n\r\n[Click here for Dataset link](https:\/\/huggingface.co\/datasets\/discofuse\/viewer\/discofuse-wikipedia\/train?row=6)\r\n\r\nThe **`encoder` will take these four columns as input and encode them into a sequence of hidden states. 
The `decoder` will then take these hidden states as input and decode them into a new sentence that fuses the two original sentences together.**\r\n\r\nThe **discourse type, connective_string, has_coref_type_pronoun, and has_coref_type_nominal columns will not be used as input to the encoder or decoder.** These columns are used to provide additional information about the dataset, but they are not necessary for the task of sentence fusion.\r\n\r\nPlease correct me if I am wrong; otherwise, if this understanding is right, how shall I implement this task practically?","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5951\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5951\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5950","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5950\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5950\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5950\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5950","id":1755197946,"node_id":"I_kwDODunzps5onjH6","number":5950,"title":"Support for data with instance-wise dictionary as features","user":{"login":"richardwth","id":33274336,"node_id":"MDQ6VXNlcjMzMjc0MzM2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33274336?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/richardwth","html_url":"https:\/\/github.com\/richardwth","followers_url":"https:\/\/api.github.com\/users\/richardwth\/followers","following_url":"https:\/\/api.github.com\/users\/richardwth\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/richardwth\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/richardwth\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/richardwth\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/richardwth\/orgs","repos_url":"https:\/\/api.github.com\/users\/richardwth\/repos","events_url":"https:\/\/api.github.com\/users\/richardwth\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/richardwth\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! 
We use the Arrow columnar format under the hood, which doesn't support such dictionaries: each field must have a fixed type and exist in each sample.\r\n\r\nInstead you can restructure your data like\r\n```\r\n{\r\n \"index\": 0,\r\n \"keys\": [\"2 * x + y >= 3\"],\r\n \"values\": [[\"2 * x + y >= 3\", \"4 * x + 2 * y >= 6\"]],\r\n }\r\n},\r\n...\r\n{\r\n \"index\": 9999,\r\n \"keys\": [\"x >= 6\"],\r\n \"values\": [[\"x >= 6\", \"x >= 0\", \"x >= -1\"]],\r\n},\r\n...\r\n```"],"created_at":1686671340000,"updated_at":1686744818000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Feature request\n\nI notice that when loading data instances with feature type of python dictionary, the dictionary keys would be broadcast so that every instance has the same set of keys. Please see an example in the Motivation section.\r\n\r\nIt is possible to avoid this behavior, i.e., load dictionary features as it is and do not broadcast the keys among instances? Please note that these dictionaries would have to be processed dynamically at each training iteration into strings (and tokenized).\n\n### Motivation\n\nI am trying to load a dataset from a json file. Each instance of the dataset has a feature that is a dictionary but its keys depend on the instance. Every two instances may have different keys. For example, imagine a dataset that contains a set of math expressions from a bunch of mutually redundant expressions:\r\n```\r\n{\r\n \"index\": 0,\r\n \"feature\": {\r\n \"2 * x + y >= 3\": [\"2 * x + y >= 3\", \"4 * x + 2 * y >= 6\"],\r\n ...\r\n }\r\n},\r\n...\r\n{\r\n \"index\": 9999,\r\n \"feature\": {\r\n \"x >= 6\": [\"x >= 6\", \"x >= 0\", \"x >= -1\"],\r\n ...\r\n }\r\n},\r\n...\r\n```\r\nWhen directly loading the dataset using `data = load_dataset(\"json\", data_files=file_paths, split='train')`, each instance would have all the keys from other instances and None as values. That is, instance of index 0 becomes:\r\n```\r\n{\r\n \"index\": 0,\r\n \"feature\": {\r\n \"2 * x + y >= 3\": [\"2 * x + y >= 3\", \"4 * x + 2 * y >= 6\"],\r\n ...\r\n \"x >= 6\": None, # keys from other instances\r\n ...\r\n }\r\n},\r\n```\r\nThis is not desirable. Moreover, issue would be raised if I attempt to combine two such datasets using `data = concatenate_datasets(multi_datasets)`, perhaps because their dictionary features contain different keys.\r\n\r\nA solution I can think of is to store the dictionary features as a long string, and evaluate it later. 
Please kindly suggest any other solution using existing methods of datasets.\n\n### Your contribution\n\nN\/A","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5950\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5950\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5949","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5949\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5949\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5949\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5949","id":1754843717,"node_id":"PR_kwDODunzps5S4oPC","number":5949,"title":"Replace metadata utils with `huggingface_hub`'s RepoCard API","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
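For the DiscoFuse question (#5951) above: a minimal sketch, not from the thread itself, of one common way to frame the task. It assumes the usual seq2seq reading of sentence fusion, in which only the two *incoherent* sentences form the encoder input and the *coherent* (fused) text is the decoder target, rather than all four columns feeding the encoder; the `source`/`target` names, the `to_seq2seq` helper, and the whitespace joining are illustrative choices, not part of the dataset.

```python
# A hedged sketch: DiscoFuse framed as seq2seq sentence fusion, where the
# incoherent pair is the encoder input and the coherent text is the target.
# "source"/"target"/to_seq2seq are illustrative names, not dataset fields.
from datasets import load_dataset

ds = load_dataset("discofuse", "discofuse-wikipedia", split="train")

def to_seq2seq(example):
    # Encoder input: the two unfused sentences, joined with a space.
    source = (
        example["incoherent_first_sentence"]
        + " "
        + example["incoherent_second_sentence"]
    )
    # Decoder target: the fused text; the second coherent sentence can be
    # empty when fusion yields a single sentence, so strip the extra space.
    target = (
        example["coherent_first_sentence"]
        + " "
        + example["coherent_second_sentence"]
    ).strip()
    return {"source": source, "target": target}

seq2seq_ds = ds.map(to_seq2seq, remove_columns=ds.column_names)
```

The resulting `source`/`target` pairs can then be tokenized and fed to any encoder-decoder model; under this framing the `discourse_type`, `connective_string`, and coreference columns remain metadata for analysis, as the issue body suggests.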
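For the instance-wise-dictionary issue (#5950) above: a sketch of the restructuring the maintainer suggests, turning each variable-key `feature` dict into fixed-name `keys`/`values` list columns so every row shares one Arrow schema. It assumes the source file holds one JSON object per line; `data.json`, `data_flat.jsonl`, and `restructure()` are hypothetical names.

```python
# A sketch of the suggested restructuring, assuming one JSON object per line.
# File names and the restructure() helper are hypothetical.
import json

def restructure(record):
    # Replace the variable-key dict with two parallel, fixed-name columns so
    # every row shares the same Arrow schema (no key broadcasting, no Nones).
    feature = record.pop("feature")
    record["keys"] = list(feature.keys())
    record["values"] = list(feature.values())
    return record

with open("data.json") as f_in, open("data_flat.jsonl", "w") as f_out:
    for line in f_in:
        f_out.write(json.dumps(restructure(json.loads(line))) + "\n")

# The flattened file then loads with a uniform schema:
#   from datasets import load_dataset
#   ds = load_dataset("json", data_files="data_flat.jsonl", split="train")
```

Because every row now carries the same two list-typed columns, Arrow can infer a single fixed type, and `concatenate_datasets` over several such files should no longer fail on mismatched keys.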
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006635 \/ 0.011353 (-0.004718) | 0.004439 \/ 0.011008 (-0.006570) | 0.107831 \/ 0.038508 (0.069323) | 0.035664 \/ 0.023109 (0.012555) | 0.393733 \/ 0.275898 (0.117835) | 0.418336 \/ 0.323480 (0.094856) | 0.005739 \/ 0.007986 (-0.002247) | 0.005737 \/ 0.004328 (0.001408) | 0.079820 \/ 0.004250 (0.075569) | 0.045402 \/ 0.037052 (0.008349) | 0.396108 \/ 0.258489 (0.137619) | 0.422951 \/ 0.293841 (0.129110) | 0.030506 \/ 0.128546 (-0.098040) | 0.009785 \/ 0.075646 (-0.065861) | 0.375302 \/ 0.419271 (-0.043969) | 0.054355 \/ 0.043533 (0.010823) | 0.399652 \/ 0.255139 (0.144513) | 0.410825 \/ 0.283200 (0.127625) | 0.109238 \/ 0.141683 (-0.032445) | 1.687532 \/ 1.452155 (0.235378) | 1.736829 \/ 1.492716 (0.244113) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.226514 \/ 0.018006 (0.208508) | 0.487010 \/ 0.000490 (0.486520) | 0.006436 \/ 0.000200 (0.006236) | 0.000102 \/ 0.000054 (0.000048) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029097 \/ 0.037411 (-0.008315) | 0.122979 \/ 0.014526 (0.108453) | 0.129454 \/ 0.176557 (-0.047103) | 0.194006 \/ 0.737135 (-0.543129) | 0.137968 \/ 0.296338 (-0.158370) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.466425 \/ 0.215209 (0.251216) | 4.627307 \/ 2.077655 (2.549652) | 2.108840 \/ 1.504120 (0.604720) | 1.882547 \/ 1.541195 (0.341353) | 1.891077 \/ 1.468490 (0.422587) | 0.590646 \/ 4.584777 (-3.994131) | 
4.176918 \/ 3.745712 (0.431205) | 2.071475 \/ 5.269862 (-3.198386) | 1.173815 \/ 4.565676 (-3.391862) | 0.075330 \/ 0.424275 (-0.348945) | 0.012944 \/ 0.007607 (0.005337) | 0.587080 \/ 0.226044 (0.361036) | 5.827053 \/ 2.268929 (3.558125) | 2.694258 \/ 55.444624 (-52.750366) | 2.276997 \/ 6.876477 (-4.599480) | 2.329678 \/ 2.142072 (0.187605) | 0.721860 \/ 4.805227 (-4.083367) | 0.159238 \/ 6.500664 (-6.341426) | 0.073013 \/ 0.075469 (-0.002456) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.345396 \/ 1.841788 (-0.496391) | 16.619283 \/ 8.074308 (8.544975) | 14.754754 \/ 10.191392 (4.563362) | 0.180784 \/ 0.680424 (-0.499639) | 0.020376 \/ 0.534201 (-0.513825) | 0.451010 \/ 0.579283 (-0.128273) | 0.481524 \/ 0.434364 (0.047160) | 0.564777 \/ 0.540337 (0.024440) | 0.683232 \/ 1.386936 (-0.703704) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007243 \/ 0.011353 (-0.004110) | 0.005262 \/ 0.011008 (-0.005746) | 0.084090 \/ 0.038508 (0.045581) | 0.037429 \/ 0.023109 (0.014320) | 0.404038 \/ 0.275898 (0.128140) | 0.445040 \/ 0.323480 (0.121560) | 0.006220 \/ 0.007986 (-0.001766) | 0.004256 \/ 0.004328 (-0.000072) | 0.083794 \/ 0.004250 (0.079544) | 0.052655 \/ 0.037052 (0.015603) | 0.414083 \/ 0.258489 (0.155594) | 0.458190 \/ 0.293841 (0.164349) | 0.032719 \/ 0.128546 (-0.095828) | 0.010063 \/ 0.075646 (-0.065583) | 0.092281 \/ 0.419271 (-0.326990) | 0.053888 \/ 0.043533 (0.010355) | 0.407813 \/ 0.255139 (0.152674) | 0.431692 \/ 0.283200 (0.148493) | 0.119799 \/ 0.141683 (-0.021884) | 1.709853 \/ 1.452155 (0.257698) | 1.771592 \/ 1.492716 (0.278876) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.246540 \/ 0.018006 (0.228534) | 0.483199 \/ 0.000490 (0.482709) | 0.002514 \/ 0.000200 (0.002315) | 0.000096 \/ 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.031576 \/ 0.037411 (-0.005835) | 0.130020 \/ 0.014526 (0.115495) | 0.140285 \/ 0.176557 (-0.036272) | 0.196164 \/ 0.737135 (-0.540972) | 0.143924 \/ 0.296338 (-0.152414) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.488549 \/ 0.215209 (0.273340) | 4.888055 \/ 2.077655 (2.810400) | 2.389163 \/ 1.504120 (0.885043) | 2.184626 \/ 1.541195 (0.643431) | 2.260227 \/ 1.468490 (0.791737) | 0.601331 \/ 4.584777 (-3.983446) | 
4.386159 \/ 3.745712 (0.640447) | 3.345814 \/ 5.269862 (-1.924048) | 1.734360 \/ 4.565676 (-2.831317) | 0.073199 \/ 0.424275 (-0.351076) | 0.012397 \/ 0.007607 (0.004790) | 0.601411 \/ 0.226044 (0.375366) | 6.135000 \/ 2.268929 (3.866072) | 2.930169 \/ 55.444624 (-52.514456) | 2.532631 \/ 6.876477 (-4.343845) | 2.619351 \/ 2.142072 (0.477279) | 0.740954 \/ 4.805227 (-4.064274) | 0.162936 \/ 6.500664 (-6.337728) | 0.073885 \/ 0.075469 (-0.001585) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.502493 \/ 1.841788 (-0.339294) | 17.026756 \/ 8.074308 (8.952448) | 15.880958 \/ 10.191392 (5.689566) | 0.167261 \/ 0.680424 (-0.513163) | 0.020347 \/ 0.534201 (-0.513854) | 0.452902 \/ 0.579283 (-0.126381) | 0.481614 \/ 0.434364 (0.047250) | 0.539893 \/ 0.540337 (-0.000445) | 0.653401 \/ 1.386936 (-0.733535) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#6a5781212e968e2515afdf29370a6eab6f657120 \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008268 \/ 0.011353 (-0.003084) | 0.005538 \/ 0.011008 (-0.005470) | 0.126136 \/ 0.038508 (0.087628) | 0.046100 \/ 0.023109 (0.022991) | 0.366882 \/ 0.275898 (0.090984) | 0.408912 \/ 0.323480 (0.085432) | 0.007090 \/ 0.007986 (-0.000895) | 0.004820 \/ 0.004328 (0.000491) | 0.091432 \/ 0.004250 (0.087181) | 0.058390 \/ 0.037052 (0.021338) | 0.368787 \/ 0.258489 (0.110298) | 0.419429 \/ 0.293841 (0.125588) | 0.034958 \/ 0.128546 (-0.093588) | 0.010526 \/ 0.075646 (-0.065120) | 0.463063 \/ 0.419271 (0.043791) | 0.070544 \/ 0.043533 (0.027011) | 0.366182 \/ 0.255139 (0.111043) | 0.390851 \/ 0.283200 (0.107652) | 0.128377 \/ 0.141683 (-0.013306) | 1.819385 \/ 1.452155 (0.367231) | 1.928834 \/ 1.492716 (0.436117) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.228413 \/ 0.018006 (0.210407) | 0.485511 \/ 0.000490 (0.485021) | 0.005395 \/ 0.000200 (0.005195) | 0.000119 \/ 0.000054 (0.000064) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.035209 \/ 0.037411 (-0.002203) | 0.144492 \/ 0.014526 (0.129967) | 0.150467 \/ 0.176557 (-0.026089) | 0.223861 \/ 0.737135 (-0.513274) | 0.156363 \/ 0.296338 (-0.139975) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.517751 \/ 0.215209 (0.302542) | 5.150438 \/ 2.077655 (3.072783) | 2.483601 \/ 1.504120 (0.979481) | 2.279786 \/ 1.541195 (0.738592) | 2.374510 \/ 1.468490 (0.906020) | 0.637547 \/ 4.584777 (-3.947230) | 
4.845393 \/ 3.745712 (1.099681) | 2.241554 \/ 5.269862 (-3.028307) | 1.290105 \/ 4.565676 (-3.275572) | 0.079791 \/ 0.424275 (-0.344484) | 0.014915 \/ 0.007607 (0.007308) | 0.640468 \/ 0.226044 (0.414423) | 6.394810 \/ 2.268929 (4.125881) | 3.012748 \/ 55.444624 (-52.431876) | 2.625565 \/ 6.876477 (-4.250912) | 2.792435 \/ 2.142072 (0.650363) | 0.782284 \/ 4.805227 (-4.022944) | 0.171628 \/ 6.500664 (-6.329036) | 0.081714 \/ 0.075469 (0.006245) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.592411 \/ 1.841788 (-0.249377) | 18.999604 \/ 8.074308 (10.925295) | 18.469946 \/ 10.191392 (8.278554) | 0.200878 \/ 0.680424 (-0.479546) | 0.021595 \/ 0.534201 (-0.512606) | 0.519247 \/ 0.579283 (-0.060036) | 0.534940 \/ 0.434364 (0.100576) | 0.656325 \/ 0.540337 (0.115987) | 0.789658 \/ 1.386936 (-0.597278) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008093 \/ 0.011353 (-0.003260) | 0.005524 \/ 0.011008 (-0.005484) | 0.092339 \/ 0.038508 (0.053831) | 0.045619 \/ 0.023109 (0.022510) | 0.449376 \/ 0.275898 (0.173478) | 0.478587 \/ 0.323480 (0.155107) | 0.006978 \/ 0.007986 (-0.001007) | 0.004622 \/ 0.004328 (0.000294) | 0.090618 \/ 0.004250 (0.086368) | 0.059321 \/ 0.037052 (0.022269) | 0.450989 \/ 0.258489 (0.192500) | 0.491652 \/ 0.293841 (0.197811) | 0.033308 \/ 0.128546 (-0.095238) | 0.010677 \/ 0.075646 (-0.064969) | 0.099836 \/ 0.419271 (-0.319435) | 0.055937 \/ 0.043533 (0.012404) | 0.440560 \/ 0.255139 (0.185421) | 0.475305 \/ 0.283200 (0.192105) | 0.130829 \/ 0.141683 (-0.010854) | 1.857943 \/ 1.452155 (0.405789) | 1.989534 \/ 1.492716 (0.496818) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.244715 \/ 0.018006 (0.226709) | 0.482866 \/ 0.000490 (0.482377) | 0.001100 \/ 0.000200 (0.000900) | 0.000095 \/ 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.036288 \/ 0.037411 (-0.001124) | 0.147903 \/ 0.014526 (0.133377) | 0.154141 \/ 0.176557 (-0.022416) | 0.221863 \/ 0.737135 (-0.515272) | 0.162319 \/ 0.296338 (-0.134019) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.536972 \/ 0.215209 (0.321763) | 5.382866 \/ 2.077655 (3.305211) | 2.719575 \/ 1.504120 (1.215456) | 2.516596 \/ 1.541195 (0.975401) | 2.699602 \/ 1.468490 (1.231112) | 0.639886 \/ 4.584777 (-3.944891) | 
5.109746 \/ 3.745712 (1.364034) | 2.260206 \/ 5.269862 (-3.009656) | 1.305506 \/ 4.565676 (-3.260170) | 0.080262 \/ 0.424275 (-0.344013) | 0.014801 \/ 0.007607 (0.007194) | 0.661228 \/ 0.226044 (0.435184) | 6.596485 \/ 2.268929 (4.327557) | 3.226114 \/ 55.444624 (-52.218510) | 2.859776 \/ 6.876477 (-4.016701) | 3.059355 \/ 2.142072 (0.917282) | 0.793413 \/ 4.805227 (-4.011814) | 0.176521 \/ 6.500664 (-6.324143) | 0.084062 \/ 0.075469 (0.008593) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.642085 \/ 1.841788 (-0.199703) | 20.355459 \/ 8.074308 (12.281151) | 17.979620 \/ 10.191392 (7.788228) | 0.229329 \/ 0.680424 (-0.451094) | 0.025681 \/ 0.534201 (-0.508520) | 0.534142 \/ 0.579283 (-0.045141) | 0.623439 \/ 0.434364 (0.189075) | 0.621938 \/ 0.540337 (0.081601) | 0.759038 \/ 1.386936 (-0.627898) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#6a98ff43225df344139023a5b7eb9caef610b677 \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007703 \/ 0.011353 (-0.003649) | 0.005362 \/ 0.011008 (-0.005646) | 0.113111 \/ 0.038508 (0.074602) | 0.038891 \/ 0.023109 (0.015782) | 0.348938 \/ 0.275898 (0.073040) | 0.398079 \/ 0.323480 (0.074599) | 0.006707 \/ 0.007986 (-0.001278) | 0.004489 \/ 0.004328 (0.000160) | 0.087194 \/ 0.004250 (0.082943) | 0.054268 \/ 0.037052 (0.017216) | 0.359949 \/ 0.258489 (0.101460) | 0.402959 \/ 0.293841 (0.109118) | 0.032508 \/ 0.128546 (-0.096038) | 0.010224 \/ 0.075646 (-0.065422) | 0.387007 \/ 0.419271 (-0.032264) | 0.058971 \/ 0.043533 (0.015439) | 0.345085 \/ 0.255139 (0.089946) | 0.384306 \/ 0.283200 (0.101107) | 0.122253 \/ 0.141683 (-0.019430) | 1.706353 \/ 1.452155 (0.254199) | 1.840780 \/ 1.492716 (0.348063) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.254374 \/ 0.018006 (0.236368) | 0.497387 \/ 0.000490 (0.496897) | 0.012294 \/ 0.000200 (0.012094) | 0.000108 \/ 0.000054 (0.000054) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030902 \/ 0.037411 (-0.006509) | 0.132098 \/ 0.014526 (0.117573) | 0.140311 \/ 0.176557 (-0.036245) | 0.205887 \/ 0.737135 (-0.531249) | 0.143992 \/ 0.296338 (-0.152347) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.467367 \/ 0.215209 (0.252158) | 4.669936 \/ 2.077655 (2.592281) | 2.155358 \/ 1.504120 (0.651238) | 1.984132 \/ 1.541195 (0.442937) | 2.102352 \/ 1.468490 (0.633861) | 0.607014 \/ 4.584777 (-3.977763) | 
4.396479 \/ 3.745712 (0.650767) | 4.666056 \/ 5.269862 (-0.603806) | 2.176649 \/ 4.565676 (-2.389028) | 0.072657 \/ 0.424275 (-0.351619) | 0.012367 \/ 0.007607 (0.004759) | 0.569706 \/ 0.226044 (0.343661) | 5.749083 \/ 2.268929 (3.480154) | 2.640824 \/ 55.444624 (-52.803801) | 2.310253 \/ 6.876477 (-4.566224) | 2.486748 \/ 2.142072 (0.344676) | 0.737891 \/ 4.805227 (-4.067336) | 0.163507 \/ 6.500664 (-6.337157) | 0.075776 \/ 0.075469 (0.000307) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.362710 \/ 1.841788 (-0.479078) | 17.010705 \/ 8.074308 (8.936396) | 15.084231 \/ 10.191392 (4.892839) | 0.218274 \/ 0.680424 (-0.462150) | 0.019555 \/ 0.534201 (-0.514646) | 0.456013 \/ 0.579283 (-0.123270) | 0.502772 \/ 0.434364 (0.068408) | 0.581480 \/ 0.540337 (0.041142) | 0.686952 \/ 1.386936 (-0.699984) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007976 \/ 0.011353 (-0.003377) | 0.005141 \/ 0.011008 (-0.005868) | 0.086629 \/ 0.038508 (0.048121) | 0.039553 \/ 0.023109 (0.016444) | 0.433028 \/ 0.275898 (0.157130) | 0.463444 \/ 0.323480 (0.139964) | 0.006967 \/ 0.007986 (-0.001018) | 0.005814 \/ 0.004328 (0.001485) | 0.086266 \/ 0.004250 (0.082015) | 0.055384 \/ 0.037052 (0.018332) | 0.428733 \/ 0.258489 (0.170243) | 0.475670 \/ 0.293841 (0.181829) | 0.032872 \/ 0.128546 (-0.095674) | 0.010664 \/ 0.075646 (-0.064983) | 0.094357 \/ 0.419271 (-0.324915) | 0.058386 \/ 0.043533 (0.014854) | 0.431114 \/ 0.255139 (0.175975) | 0.441728 \/ 0.283200 (0.158528) | 0.131942 \/ 0.141683 (-0.009740) | 1.782214 \/ 1.452155 (0.330060) | 1.843185 \/ 1.492716 (0.350469) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.247047 \/ 0.018006 (0.229041) | 0.488931 \/ 0.000490 (0.488441) | 0.002657 \/ 0.000200 (0.002457) | 0.000106 \/ 0.000054 (0.000052) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.033893 \/ 0.037411 (-0.003518) | 0.131021 \/ 0.014526 (0.116495) | 0.142892 \/ 0.176557 (-0.033665) | 0.200955 \/ 0.737135 (-0.536180) | 0.151329 \/ 0.296338 (-0.145010) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.521138 \/ 0.215209 (0.305929) | 5.085207 \/ 2.077655 (3.007552) | 2.652901 \/ 1.504120 (1.148781) | 2.401545 \/ 1.541195 (0.860350) | 2.553461 \/ 1.468490 (1.084971) | 0.615347 \/ 4.584777 (-3.969430) | 
4.448038 \/ 3.745712 (0.702326) | 2.049997 \/ 5.269862 (-3.219865) | 1.190602 \/ 4.565676 (-3.375075) | 0.073356 \/ 0.424275 (-0.350919) | 0.013685 \/ 0.007607 (0.006078) | 0.626705 \/ 0.226044 (0.400660) | 6.391941 \/ 2.268929 (4.123012) | 3.218864 \/ 55.444624 (-52.225760) | 2.858808 \/ 6.876477 (-4.017669) | 3.005808 \/ 2.142072 (0.863736) | 0.740725 \/ 4.805227 (-4.064502) | 0.161904 \/ 6.500664 (-6.338760) | 0.073727 \/ 0.075469 (-0.001742) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.488623 \/ 1.841788 (-0.353164) | 17.584367 \/ 8.074308 (9.510059) | 16.281818 \/ 10.191392 (6.090426) | 0.164482 \/ 0.680424 (-0.515942) | 0.020197 \/ 0.534201 (-0.514003) | 0.456750 \/ 0.579283 (-0.122533) | 0.501156 \/ 0.434364 (0.066792) | 0.549779 \/ 0.540337 (0.009442) | 0.650156 \/ 1.386936 (-0.736780) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#2b6cc63b868ea4ee60502845ebec68abb943958b \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008337 \/ 0.011353 (-0.003016) | 0.005911 \/ 0.011008 (-0.005097) | 0.129037 \/ 0.038508 (0.090529) | 0.046071 \/ 0.023109 (0.022962) | 0.418657 \/ 0.275898 (0.142759) | 0.490340 \/ 0.323480 (0.166860) | 0.006387 \/ 0.007986 (-0.001598) | 0.004724 \/ 0.004328 (0.000396) | 0.097953 \/ 0.004250 (0.093702) | 0.069025 \/ 0.037052 (0.031972) | 0.431178 \/ 0.258489 (0.172689) | 0.458363 \/ 0.293841 (0.164522) | 0.049341 \/ 0.128546 (-0.079205) | 0.014637 \/ 0.075646 (-0.061009) | 0.439800 \/ 0.419271 (0.020529) | 0.069905 \/ 0.043533 (0.026373) | 0.406775 \/ 0.255139 (0.151636) | 0.441989 \/ 0.283200 (0.158790) | 0.046009 \/ 0.141683 (-0.095674) | 1.847630 \/ 1.452155 (0.395475) | 1.904067 \/ 1.492716 (0.411351) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.288305 \/ 0.018006 (0.270299) | 0.594547 \/ 0.000490 (0.594058) | 0.005600 \/ 0.000200 (0.005400) | 0.000106 \/ 0.000054 (0.000052) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.033847 \/ 0.037411 (-0.003564) | 0.125139 \/ 0.014526 (0.110613) | 0.147982 \/ 0.176557 (-0.028574) | 0.208396 \/ 0.737135 (-0.528739) | 0.144005 \/ 0.296338 (-0.152334) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.669175 \/ 0.215209 (0.453966) | 6.605289 \/ 2.077655 (4.527634) | 2.720468 \/ 1.504120 (1.216348) | 2.341355 \/ 1.541195 (0.800160) | 2.402069 \/ 1.468490 (0.933578) | 0.939303 \/ 4.584777 (-3.645474) | 
5.718545 \/ 3.745712 (1.972833) | 2.856235 \/ 5.269862 (-2.413627) | 1.821555 \/ 4.565676 (-2.744121) | 0.105473 \/ 0.424275 (-0.318802) | 0.014490 \/ 0.007607 (0.006883) | 0.774349 \/ 0.226044 (0.548305) | 8.065048 \/ 2.268929 (5.796120) | 3.508482 \/ 55.444624 (-51.936143) | 2.822881 \/ 6.876477 (-4.053596) | 2.962947 \/ 2.142072 (0.820875) | 1.138944 \/ 4.805227 (-3.666284) | 0.248414 \/ 6.500664 (-6.252250) | 0.095665 \/ 0.075469 (0.020196) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.688231 \/ 1.841788 (-0.153557) | 18.673305 \/ 8.074308 (10.598997) | 22.768663 \/ 10.191392 (12.577271) | 0.211238 \/ 0.680424 (-0.469186) | 0.031380 \/ 0.534201 (-0.502821) | 0.517175 \/ 0.579283 (-0.062108) | 0.626437 \/ 0.434364 (0.192073) | 0.624225 \/ 0.540337 (0.083888) | 0.743746 \/ 1.386936 (-0.643191) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008888 \/ 0.011353 (-0.002464) | 0.005491 \/ 0.011008 (-0.005517) | 0.105013 \/ 0.038508 (0.066505) | 0.049456 \/ 0.023109 (0.026347) | 0.528989 \/ 0.275898 (0.253091) | 0.651871 \/ 0.323480 (0.328391) | 0.006683 \/ 0.007986 (-0.001302) | 0.004365 \/ 0.004328 (0.000037) | 0.098161 \/ 0.004250 (0.093911) | 0.075615 \/ 0.037052 (0.038563) | 0.543746 \/ 0.258489 (0.285257) | 0.650855 \/ 0.293841 (0.357014) | 0.050220 \/ 0.128546 (-0.078327) | 0.014471 \/ 0.075646 (-0.061175) | 0.115903 \/ 0.419271 (-0.303368) | 0.065925 \/ 0.043533 (0.022392) | 0.527797 \/ 0.255139 (0.272658) | 0.543834 \/ 0.283200 (0.260634) | 0.043005 \/ 0.141683 (-0.098678) | 1.842846 \/ 1.452155 (0.390691) | 1.970615 \/ 1.492716 (0.477899) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.287350 \/ 0.018006 (0.269343) | 0.591139 \/ 0.000490 (0.590649) | 0.006423 \/ 0.000200 (0.006223) | 0.000107 \/ 0.000054 (0.000052) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.034594 \/ 0.037411 (-0.002818) | 0.137155 \/ 0.014526 (0.122629) | 0.154662 \/ 0.176557 (-0.021894) | 0.217834 \/ 0.737135 (-0.519301) | 0.159642 \/ 0.296338 (-0.136696) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.664288 \/ 0.215209 (0.449079) | 6.926912 \/ 2.077655 (4.849257) | 3.028957 \/ 1.504120 (1.524837) | 2.625178 \/ 1.541195 (1.083983) | 2.725316 \/ 1.468490 (1.256826) | 1.015715 \/ 4.584777 (-3.569062) | 
5.834694 \/ 3.745712 (2.088982) | 5.105269 \/ 5.269862 (-0.164593) | 2.316194 \/ 4.565676 (-2.249483) | 0.113802 \/ 0.424275 (-0.310473) | 0.014079 \/ 0.007607 (0.006472) | 0.893727 \/ 0.226044 (0.667683) | 8.577701 \/ 2.268929 (6.308772) | 3.706907 \/ 55.444624 (-51.737717) | 3.087530 \/ 6.876477 (-3.788947) | 3.295004 \/ 2.142072 (1.152931) | 1.204172 \/ 4.805227 (-3.601055) | 0.248720 \/ 6.500664 (-6.251944) | 0.107208 \/ 0.075469 (0.031739) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.800058 \/ 1.841788 (-0.041730) | 19.253646 \/ 8.074308 (11.179338) | 22.590804 \/ 10.191392 (12.399412) | 0.270687 \/ 0.680424 (-0.409737) | 0.028678 \/ 0.534201 (-0.505522) | 0.534670 \/ 0.579283 (-0.044613) | 0.642881 \/ 0.434364 (0.208518) | 0.615521 \/ 0.540337 (0.075184) | 0.723733 \/ 1.386936 (-0.663203) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#2591cd45a002a06bd551343ec785abf16f1433e2 \"CML watermark\")\n","
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.017236 \/ 0.011353 (0.005883) | 0.005341 \/ 0.011008 (-0.005667) | 0.131471 \/ 0.038508 (0.092963) | 0.048868 \/ 0.023109 (0.025758) | 0.448942 \/ 0.275898 (0.173044) | 0.498721 \/ 0.323480 (0.175241) | 0.006825 \/ 0.007986 (-0.001161) | 0.004587 \/ 0.004328 (0.000259) | 0.104142 \/ 0.004250 (0.099891) | 0.075521 \/ 0.037052 (0.038469) | 0.439538 \/ 0.258489 (0.181049) | 0.498720 \/ 0.293841 (0.204879) | 0.051352 \/ 0.128546 (-0.077194) | 0.015070 \/ 0.075646 (-0.060576) | 0.441752 \/ 0.419271 (0.022480) | 0.089166 \/ 0.043533 (0.045633) | 0.428909 \/ 0.255139 (0.173770) | 0.446648 \/ 0.283200 (0.163448) | 0.042371 \/ 0.141683 (-0.099312) | 1.993948 \/ 1.452155 (0.541793) | 2.065756 \/ 1.492716 (0.573039) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.257279 \/ 0.018006 (0.239273) | 0.575453 \/ 0.000490 (0.574964) | 0.004120 \/ 0.000200 (0.003920) | 0.000114 \/ 0.000054 (0.000060) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.034012 \/ 0.037411 (-0.003399) | 0.141737 \/ 0.014526 (0.127211) | 0.145241 \/ 0.176557 (-0.031316) | 0.226196 \/ 0.737135 (-0.510939) | 0.149526 \/ 0.296338 (-0.146813) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.665762 \/ 0.215209 (0.450553) | 6.683737 \/ 2.077655 (4.606083) | 2.869485 \/ 1.504120 (1.365365) | 2.462808 \/ 1.541195 (0.921613) | 2.526808 \/ 1.468490 (1.058318) | 0.957518 \/ 4.584777 (-3.627259) | 
5.926261 \/ 3.745712 (2.180548) | 5.027822 \/ 5.269862 (-0.242040) | 2.643185 \/ 4.565676 (-1.922491) | 0.117014 \/ 0.424275 (-0.307261) | 0.015142 \/ 0.007607 (0.007535) | 0.835694 \/ 0.226044 (0.609650) | 8.427356 \/ 2.268929 (6.158427) | 3.649597 \/ 55.444624 (-51.795027) | 2.989607 \/ 6.876477 (-3.886870) | 3.043160 \/ 2.142072 (0.901088) | 1.158872 \/ 4.805227 (-3.646355) | 0.240456 \/ 6.500664 (-6.260208) | 0.089196 \/ 0.075469 (0.013726) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.689361 \/ 1.841788 (-0.152427) | 18.842158 \/ 8.074308 (10.767850) | 22.604249 \/ 10.191392 (12.412857) | 0.248487 \/ 0.680424 (-0.431936) | 0.029668 \/ 0.534201 (-0.504533) | 0.536283 \/ 0.579283 (-0.043001) | 0.663253 \/ 0.434364 (0.228890) | 0.622973 \/ 0.540337 (0.082635) | 0.735297 \/ 1.386936 (-0.651639) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009296 \/ 0.011353 (-0.002057) | 0.005955 \/ 0.011008 (-0.005053) | 0.105723 \/ 0.038508 (0.067215) | 0.051184 \/ 0.023109 (0.028074) | 0.527095 \/ 0.275898 (0.251197) | 0.631697 \/ 0.323480 (0.308217) | 0.006577 \/ 0.007986 (-0.001408) | 0.004452 \/ 0.004328 (0.000124) | 0.105921 \/ 0.004250 (0.101670) | 0.071951 \/ 0.037052 (0.034899) | 0.572518 \/ 0.258489 (0.314029) | 0.623957 \/ 0.293841 (0.330116) | 0.050861 \/ 0.128546 (-0.077686) | 0.014897 \/ 0.075646 (-0.060749) | 0.122013 \/ 0.419271 (-0.297258) | 0.067194 \/ 0.043533 (0.023661) | 0.530352 \/ 0.255139 (0.275213) | 0.563912 \/ 0.283200 (0.280712) | 0.034756 \/ 0.141683 (-0.106927) | 1.961580 \/ 1.452155 (0.509425) | 2.052412 \/ 1.492716 (0.559696) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.304996 \/ 0.018006 (0.286990) | 0.584899 \/ 0.000490 (0.584409) | 0.010444 \/ 0.000200 (0.010244) | 0.000134 \/ 0.000054 (0.000080) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.032540 \/ 0.037411 (-0.004871) | 0.137349 \/ 0.014526 (0.122823) | 0.146233 \/ 0.176557 (-0.030323) | 0.206978 \/ 0.737135 (-0.530157) | 0.154380 \/ 0.296338 (-0.141959) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.705438 \/ 0.215209 (0.490229) | 7.042159 \/ 2.077655 (4.964504) | 3.285501 \/ 1.504120 (1.781381) | 2.904710 \/ 1.541195 (1.363515) | 2.952838 \/ 1.468490 (1.484348) | 0.987784 \/ 4.584777 (-3.596993) | 
5.949550 \/ 3.745712 (2.203838) | 2.927148 \/ 5.269862 (-2.342714) | 1.870054 \/ 4.565676 (-2.695622) | 0.119548 \/ 0.424275 (-0.304727) | 0.014565 \/ 0.007607 (0.006958) | 0.858311 \/ 0.226044 (0.632266) | 8.721679 \/ 2.268929 (6.452750) | 4.100825 \/ 55.444624 (-51.343800) | 3.358093 \/ 6.876477 (-3.518383) | 3.499637 \/ 2.142072 (1.357564) | 1.208932 \/ 4.805227 (-3.596295) | 0.232961 \/ 6.500664 (-6.267703) | 0.089727 \/ 0.075469 (0.014258) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.780143 \/ 1.841788 (-0.061645) | 19.074991 \/ 8.074308 (11.000683) | 21.218487 \/ 10.191392 (11.027095) | 0.258690 \/ 0.680424 (-0.421734) | 0.029514 \/ 0.534201 (-0.504687) | 0.541764 \/ 0.579283 (-0.037519) | 0.640603 \/ 0.434364 (0.206239) | 0.635336 \/ 0.540337 (0.094999) | 0.756309 \/ 1.386936 (-0.630627) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#1b525c199e6352aa8aac55f1dcddeb55a80db373 \"CML watermark\")\n","
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009619 \/ 0.011353 (-0.001734) | 0.005683 \/ 0.011008 (-0.005325) | 0.136971 \/ 0.038508 (0.098463) | 0.051607 \/ 0.023109 (0.028497) | 0.439716 \/ 0.275898 (0.163818) | 0.486193 \/ 0.323480 (0.162713) | 0.006304 \/ 0.007986 (-0.001681) | 0.004489 \/ 0.004328 (0.000160) | 0.103837 \/ 0.004250 (0.099587) | 0.082954 \/ 0.037052 (0.045901) | 0.447286 \/ 0.258489 (0.188797) | 0.495434 \/ 0.293841 (0.201593) | 0.049244 \/ 0.128546 (-0.079302) | 0.015176 \/ 0.075646 (-0.060470) | 0.444406 \/ 0.419271 (0.025134) | 0.074766 \/ 0.043533 (0.031233) | 0.438585 \/ 0.255139 (0.183446) | 0.438232 \/ 0.283200 (0.155032) | 0.043372 \/ 0.141683 (-0.098311) | 2.057286 \/ 1.452155 (0.605131) | 2.049540 \/ 1.492716 (0.556824) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.298038 \/ 0.018006 (0.280031) | 0.630771 \/ 0.000490 (0.630281) | 0.008287 \/ 0.000200 (0.008087) | 0.000123 \/ 0.000054 (0.000068) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.033637 \/ 0.037411 (-0.003775) | 0.128327 \/ 0.014526 (0.113801) | 0.150672 \/ 0.176557 (-0.025885) | 0.228521 \/ 0.737135 (-0.508614) | 0.142733 \/ 0.296338 (-0.153606) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.629072 \/ 0.215209 (0.413863) | 6.612047 \/ 2.077655 (4.534392) | 2.715594 \/ 1.504120 (1.211474) | 2.327823 \/ 1.541195 (0.786628) | 2.417508 \/ 1.468490 (0.949018) | 0.959134 \/ 4.584777 (-3.625643) | 
5.669921 \/ 3.745712 (1.924209) | 2.977920 \/ 5.269862 (-2.291941) | 1.814564 \/ 4.565676 (-2.751112) | 0.120233 \/ 0.424275 (-0.304042) | 0.015859 \/ 0.007607 (0.008252) | 0.822618 \/ 0.226044 (0.596574) | 8.440306 \/ 2.268929 (6.171377) | 3.721611 \/ 55.444624 (-51.723013) | 2.954867 \/ 6.876477 (-3.921610) | 3.135364 \/ 2.142072 (0.993292) | 1.226475 \/ 4.805227 (-3.578752) | 0.246658 \/ 6.500664 (-6.254006) | 0.093920 \/ 0.075469 (0.018451) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.665631 \/ 1.841788 (-0.176157) | 19.136369 \/ 8.074308 (11.062061) | 23.659564 \/ 10.191392 (13.468172) | 0.273430 \/ 0.680424 (-0.406994) | 0.028180 \/ 0.534201 (-0.506021) | 0.559588 \/ 0.579283 (-0.019695) | 0.649203 \/ 0.434364 (0.214840) | 0.647113 \/ 0.540337 (0.106776) | 0.737978 \/ 1.386936 (-0.648958) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009104 \/ 0.011353 (-0.002249) | 0.006838 \/ 0.011008 (-0.004171) | 0.104516 \/ 0.038508 (0.066008) | 0.047986 \/ 0.023109 (0.024877) | 0.521849 \/ 0.275898 (0.245951) | 0.586281 \/ 0.323480 (0.262801) | 0.006225 \/ 0.007986 (-0.001760) | 0.005713 \/ 0.004328 (0.001384) | 0.111507 \/ 0.004250 (0.107257) | 0.072320 \/ 0.037052 (0.035267) | 0.551061 \/ 0.258489 (0.292572) | 0.628034 \/ 0.293841 (0.334193) | 0.055417 \/ 0.128546 (-0.073129) | 0.019613 \/ 0.075646 (-0.056034) | 0.123958 \/ 0.419271 (-0.295314) | 0.066132 \/ 0.043533 (0.022600) | 0.504461 \/ 0.255139 (0.249322) | 0.560428 \/ 0.283200 (0.277229) | 0.036098 \/ 0.141683 (-0.105585) | 1.927398 \/ 1.452155 (0.475243) | 2.015952 \/ 1.492716 (0.523235) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.313065 \/ 0.018006 (0.295059) | 0.609174 \/ 0.000490 (0.608684) | 0.008755 \/ 0.000200 (0.008555) | 0.000120 \/ 0.000054 (0.000066) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.040042 \/ 0.037411 (0.002630) | 0.136053 \/ 0.014526 (0.121527) | 0.143406 \/ 0.176557 (-0.033150) | 0.213080 \/ 0.737135 (-0.524055) | 0.154730 \/ 0.296338 (-0.141609) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.692706 \/ 0.215209 (0.477497) | 6.952968 \/ 2.077655 (4.875314) | 3.232023 \/ 1.504120 (1.727903) | 2.835450 \/ 1.541195 (1.294256) | 2.933821 \/ 1.468490 (1.465331) | 0.984712 \/ 4.584777 (-3.600065) | 
6.127651 \/ 3.745712 (2.381939) | 2.956781 \/ 5.269862 (-2.313081) | 1.879928 \/ 4.565676 (-2.685748) | 0.111069 \/ 0.424275 (-0.313206) | 0.014598 \/ 0.007607 (0.006991) | 0.871486 \/ 0.226044 (0.645442) | 8.588500 \/ 2.268929 (6.319572) | 3.910740 \/ 55.444624 (-51.533885) | 3.115781 \/ 6.876477 (-3.760695) | 3.222367 \/ 2.142072 (1.080294) | 1.229680 \/ 4.805227 (-3.575547) | 0.232092 \/ 6.500664 (-6.268572) | 0.097717 \/ 0.075469 (0.022248) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.774193 \/ 1.841788 (-0.067595) | 19.863087 \/ 8.074308 (11.788779) | 24.058856 \/ 10.191392 (13.867464) | 0.214917 \/ 0.680424 (-0.465507) | 0.028771 \/ 0.534201 (-0.505430) | 0.544548 \/ 0.579283 (-0.034735) | 0.655882 \/ 0.434364 (0.221518) | 0.629110 \/ 0.540337 (0.088773) | 0.749246 \/ 1.386936 (-0.637690) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#f4a5ea6a42dcfef1577288b51beeccc0eb124cee \"CML watermark\")\n","
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007075 \/ 0.011353 (-0.004278) | 0.005195 \/ 0.011008 (-0.005813) | 0.113043 \/ 0.038508 (0.074535) | 0.038442 \/ 0.023109 (0.015333) | 0.336310 \/ 0.275898 (0.060412) | 0.381888 \/ 0.323480 (0.058409) | 0.005990 \/ 0.007986 (-0.001996) | 0.003893 \/ 0.004328 (-0.000435) | 0.093123 \/ 0.004250 (0.088872) | 0.058449 \/ 0.037052 (0.021397) | 0.359463 \/ 0.258489 (0.100974) | 0.427485 \/ 0.293841 (0.133644) | 0.041454 \/ 0.128546 (-0.087092) | 0.013016 \/ 0.075646 (-0.062630) | 0.372849 \/ 0.419271 (-0.046422) | 0.059386 \/ 0.043533 (0.015853) | 0.381398 \/ 0.255139 (0.126259) | 0.367603 \/ 0.283200 (0.084403) | 0.033907 \/ 0.141683 (-0.107775) | 1.628903 \/ 1.452155 (0.176749) | 1.764131 \/ 1.492716 (0.271415) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.298329 \/ 0.018006 (0.280322) | 0.593030 \/ 0.000490 (0.592540) | 0.007653 \/ 0.000200 (0.007453) | 0.000091 \/ 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025445 \/ 0.037411 (-0.011966) | 0.112062 \/ 0.014526 (0.097536) | 0.119863 \/ 0.176557 (-0.056693) | 0.178389 \/ 0.737135 (-0.558746) | 0.129934 \/ 0.296338 (-0.166404) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.532834 \/ 0.215209 (0.317625) | 5.250908 \/ 2.077655 (3.173253) | 2.086920 \/ 1.504120 (0.582800) | 1.799745 \/ 1.541195 (0.258550) | 1.909648 \/ 1.468490 (0.441158) | 0.825382 \/ 4.584777 (-3.759395) | 
5.268304 \/ 3.745712 (1.522592) | 2.533347 \/ 5.269862 (-2.736515) | 1.730187 \/ 4.565676 (-2.835490) | 0.099824 \/ 0.424275 (-0.324451) | 0.012969 \/ 0.007607 (0.005362) | 0.732234 \/ 0.226044 (0.506189) | 6.989066 \/ 2.268929 (4.720138) | 2.873486 \/ 55.444624 (-52.571138) | 2.274351 \/ 6.876477 (-4.602125) | 2.311060 \/ 2.142072 (0.168987) | 1.125366 \/ 4.805227 (-3.679861) | 0.214522 \/ 6.500664 (-6.286142) | 0.077579 \/ 0.075469 (0.002110) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.670950 \/ 1.841788 (-0.170838) | 18.131528 \/ 8.074308 (10.057220) | 21.277823 \/ 10.191392 (11.086431) | 0.238807 \/ 0.680424 (-0.441617) | 0.032251 \/ 0.534201 (-0.501950) | 0.503859 \/ 0.579283 (-0.075424) | 0.604825 \/ 0.434364 (0.170461) | 0.555623 \/ 0.540337 (0.015286) | 0.647301 \/ 1.386936 (-0.739635) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.010857 \/ 0.011353 (-0.000496) | 0.005581 \/ 0.011008 (-0.005427) | 0.094346 \/ 0.038508 (0.055838) | 0.053084 \/ 0.023109 (0.029975) | 0.457586 \/ 0.275898 (0.181688) | 0.545475 \/ 0.323480 (0.221995) | 0.006761 \/ 0.007986 (-0.001225) | 0.005094 \/ 0.004328 (0.000765) | 0.095509 \/ 0.004250 (0.091258) | 0.077182 \/ 0.037052 (0.040130) | 0.498717 \/ 0.258489 (0.240228) | 0.542433 \/ 0.293841 (0.248592) | 0.051547 \/ 0.128546 (-0.076999) | 0.014633 \/ 0.075646 (-0.061014) | 0.106843 \/ 0.419271 (-0.312428) | 0.068459 \/ 0.043533 (0.024926) | 0.435793 \/ 0.255139 (0.180654) | 0.475484 \/ 0.283200 (0.192285) | 0.039495 \/ 0.141683 (-0.102188) | 1.684906 \/ 1.452155 (0.232751) | 1.798693 \/ 1.492716 (0.305976) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.279853 \/ 0.018006 (0.261847) | 0.601016 \/ 0.000490 (0.600526) | 0.002055 \/ 0.000200 (0.001855) | 0.000219 \/ 0.000054 (0.000165) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030935 \/ 0.037411 (-0.006477) | 0.121197 \/ 0.014526 (0.106671) | 0.143360 \/ 0.176557 (-0.033197) | 0.200862 \/ 0.737135 (-0.536274) | 0.138656 \/ 0.296338 (-0.157683) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.613904 \/ 0.215209 (0.398695) | 6.155422 \/ 2.077655 (4.077767) | 2.777238 \/ 1.504120 (1.273118) | 2.473045 \/ 1.541195 (0.931851) | 2.604470 \/ 1.468490 (1.135980) | 0.898871 \/ 4.584777 (-3.685906) | 
5.739666 \/ 3.745712 (1.993954) | 4.719822 \/ 5.269862 (-0.550040) | 2.727354 \/ 4.565676 (-1.838322) | 0.108232 \/ 0.424275 (-0.316043) | 0.013632 \/ 0.007607 (0.006025) | 0.771802 \/ 0.226044 (0.545757) | 7.987466 \/ 2.268929 (5.718537) | 3.609856 \/ 55.444624 (-51.834768) | 2.974421 \/ 6.876477 (-3.902056) | 2.956567 \/ 2.142072 (0.814495) | 1.093792 \/ 4.805227 (-3.711435) | 0.213369 \/ 6.500664 (-6.287295) | 0.084486 \/ 0.075469 (0.009017) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.693855 \/ 1.841788 (-0.147933) | 18.055027 \/ 8.074308 (9.980719) | 21.397964 \/ 10.191392 (11.206571) | 0.240549 \/ 0.680424 (-0.439875) | 0.031212 \/ 0.534201 (-0.502989) | 0.513657 \/ 0.579283 (-0.065626) | 0.651348 \/ 0.434364 (0.216985) | 0.603740 \/ 0.540337 (0.063402) | 0.752287 \/ 1.386936 (-0.634649) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#6f3f38d00dd40a444ae54c18caa28304ae36b9c3 \"CML watermark\")\n"],"created_at":1686661399000,"updated_at":1687884471000,"closed_at":1687883912000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5949","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5949","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5949.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5949.patch","merged_at":1687883912000},"body":"Use `huggingface_hub`'s RepoCard API instead of `DatasetMetadata` for modifying the card's YAML, and deprecate `datasets.utils.metadata` and `datasets.utils.readme`.\r\n\r\nAfter removing these modules, we can also delete `datasets.utils.resources` since the moon landing repo now stores its own version of these resources for the metadata UI.\r\n\r\nPS: this change requires bumping `huggingface_hub` to 0.13.0 (Transformers requires 0.14.0, so should be ok)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5949\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5949\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5948","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5948\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5948\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5948\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5948","id":1754794611,"node_id":"PR_kwDODunzps5S4dUt","number":5948,"title":"Fix sequence of array support for most 
dtype","user":{"login":"qgallouedec","id":45557362,"node_id":"MDQ6VXNlcjQ1NTU3MzYy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/45557362?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/qgallouedec","html_url":"https:\/\/github.com\/qgallouedec","followers_url":"https:\/\/api.github.com\/users\/qgallouedec\/followers","following_url":"https:\/\/api.github.com\/users\/qgallouedec\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/qgallouedec\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/qgallouedec\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/qgallouedec\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/qgallouedec\/orgs","repos_url":"https:\/\/api.github.com\/users\/qgallouedec\/repos","events_url":"https:\/\/api.github.com\/users\/qgallouedec\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/qgallouedec\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007220 \/ 0.011353 (-0.004133) | 0.004558 \/ 0.011008 (-0.006451) | 0.116647 \/ 0.038508 (0.078139) | 0.046845 \/ 0.023109 (0.023736) | 0.352429 \/ 0.275898 (0.076531) | 0.429739 \/ 0.323480 (0.106259) | 0.006620 \/ 0.007986 (-0.001366) | 0.003731 \/ 0.004328 (-0.000597) | 0.088683 \/ 0.004250 (0.084433) | 0.070583 \/ 0.037052 (0.033530) | 0.366699 \/ 0.258489 (0.108210) | 0.420730 \/ 0.293841 (0.126889) | 0.037342 \/ 0.128546 (-0.091204) | 0.010041 \/ 0.075646 (-0.065605) | 0.383477 \/ 0.419271 (-0.035795) | 0.060279 \/ 0.043533 (0.016746) | 0.349988 \/ 0.255139 (0.094849) | 0.371423 \/ 0.283200 (0.088224) | 0.026725 \/ 0.141683 (-0.114958) | 1.736886 \/ 1.452155 (0.284731) | 1.812874 \/ 1.492716 (0.320157) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.253256 \/ 0.018006 (0.235250) | 0.563470 \/ 0.000490 (0.562980) | 0.010475 \/ 0.000200 (0.010275) | 0.000164 \/ 0.000054 (0.000110) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030518 \/ 0.037411 (-0.006893) | 0.133324 \/ 0.014526 (0.118798) | 0.137095 \/ 0.176557 (-0.039461) | 0.202227 \/ 0.737135 (-0.534909) | 0.144195 \/ 0.296338 (-0.152143) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.480870 \/ 0.215209 (0.265661) | 4.822713 \/ 2.077655 (2.745058) | 2.124183 \/ 1.504120 (0.620064) | 1.910733 \/ 1.541195 (0.369538) | 1.970266 \/ 1.468490 (0.501776) | 0.624695 \/ 4.584777 (-3.960082) | 
4.459659 \/ 3.745712 (0.713947) | 2.210123 \/ 5.269862 (-3.059739) | 1.300520 \/ 4.565676 (-3.265157) | 0.077096 \/ 0.424275 (-0.347180) | 0.013333 \/ 0.007607 (0.005726) | 0.596841 \/ 0.226044 (0.370797) | 5.917397 \/ 2.268929 (3.648469) | 2.699397 \/ 55.444624 (-52.745228) | 2.274833 \/ 6.876477 (-4.601644) | 2.525376 \/ 2.142072 (0.383304) | 0.755718 \/ 4.805227 (-4.049510) | 0.163587 \/ 6.500664 (-6.337077) | 0.072817 \/ 0.075469 (-0.002653) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.524306 \/ 1.841788 (-0.317481) | 18.843312 \/ 8.074308 (10.769004) | 15.694644 \/ 10.191392 (5.503252) | 0.177400 \/ 0.680424 (-0.503024) | 0.020104 \/ 0.534201 (-0.514097) | 0.466421 \/ 0.579283 (-0.112862) | 0.537274 \/ 0.434364 (0.102910) | 0.576920 \/ 0.540337 (0.036583) | 0.718889 \/ 1.386936 (-0.668047) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007671 \/ 0.011353 (-0.003682) | 0.004850 \/ 0.011008 (-0.006158) | 0.090085 \/ 0.038508 (0.051576) | 0.052023 \/ 0.023109 (0.028914) | 0.508575 \/ 0.275898 (0.232677) | 0.590024 \/ 0.323480 (0.266544) | 0.004564 \/ 0.007986 (-0.003422) | 0.005345 \/ 0.004328 (0.001017) | 0.087904 \/ 0.004250 (0.083653) | 0.064446 \/ 0.037052 (0.027394) | 0.525625 \/ 0.258489 (0.267136) | 0.584307 \/ 0.293841 (0.290466) | 0.037221 \/ 0.128546 (-0.091325) | 0.010588 \/ 0.075646 (-0.065059) | 0.098612 \/ 0.419271 (-0.320659) | 0.059597 \/ 0.043533 (0.016064) | 0.488064 \/ 0.255139 (0.232925) | 0.522330 \/ 0.283200 (0.239131) | 0.030004 \/ 0.141683 (-0.111679) | 1.732512 \/ 1.452155 (0.280357) | 1.809027 \/ 1.492716 (0.316310) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.218741 \/ 0.018006 (0.200735) | 0.494946 \/ 0.000490 (0.494456) | 0.004580 \/ 0.000200 (0.004380) | 0.000104 \/ 0.000054 (0.000049) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.034916 \/ 0.037411 (-0.002495) | 0.133695 \/ 0.014526 (0.119169) | 0.147964 \/ 0.176557 (-0.028592) | 0.213210 \/ 0.737135 (-0.523926) | 0.148850 \/ 0.296338 (-0.147488) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.508855 \/ 0.215209 (0.293646) | 5.065088 \/ 2.077655 (2.987433) | 2.473110 \/ 1.504120 (0.968990) | 2.259765 \/ 1.541195 (0.718570) | 2.359189 \/ 1.468490 (0.890699) | 0.639082 \/ 4.584777 (-3.945695) | 
4.768195 \/ 3.745712 (1.022482) | 2.253803 \/ 5.269862 (-3.016059) | 1.442996 \/ 4.565676 (-3.122680) | 0.078761 \/ 0.424275 (-0.345514) | 0.013936 \/ 0.007607 (0.006329) | 0.625977 \/ 0.226044 (0.399933) | 6.260817 \/ 2.268929 (3.991888) | 3.149640 \/ 55.444624 (-52.294985) | 2.753555 \/ 6.876477 (-4.122921) | 2.831872 \/ 2.142072 (0.689799) | 0.781294 \/ 4.805227 (-4.023933) | 0.169109 \/ 6.500664 (-6.331555) | 0.075810 \/ 0.075469 (0.000341) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.533282 \/ 1.841788 (-0.308506) | 19.460579 \/ 8.074308 (11.386271) | 17.250424 \/ 10.191392 (7.059032) | 0.193485 \/ 0.680424 (-0.486939) | 0.020650 \/ 0.534201 (-0.513551) | 0.472110 \/ 0.579283 (-0.107173) | 0.532276 \/ 0.434364 (0.097912) | 0.613152 \/ 0.540337 (0.072814) | 0.684684 \/ 1.386936 (-0.702252) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#650a86ee122209d4a8c8e8068c01ebfd3ba553f5 \"CML watermark\")\n"],"created_at":1686659939000,"updated_at":1686755515000,"closed_at":1686755013000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5948","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5948","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5948.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5948.patch","merged_at":1686755013000},"body":"Fixes #5936 \r\nAlso, a related fix to #5927 ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5948\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5948\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5947","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5947\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5947\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5947\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5947","id":1754359316,"node_id":"I_kwDODunzps5okWYU","number":5947,"title":"Return the audio filename when decoding fails due to corrupt 
files","user":{"login":"wetdog","id":8949105,"node_id":"MDQ6VXNlcjg5NDkxMDU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8949105?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/wetdog","html_url":"https:\/\/github.com\/wetdog","followers_url":"https:\/\/api.github.com\/users\/wetdog\/followers","following_url":"https:\/\/api.github.com\/users\/wetdog\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/wetdog\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/wetdog\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/wetdog\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/wetdog\/orgs","repos_url":"https:\/\/api.github.com\/users\/wetdog\/repos","events_url":"https:\/\/api.github.com\/users\/wetdog\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/wetdog\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! The audio data don't always exist as files on disk - the blobs are often stored in the Arrow files. For now I'd suggest disabling decoding with `.cast_column(\"audio\", Audio(decode=False))` and apply your own decoding that handles corrupted files (maybe to filter them out ?)\r\n\r\ncc @sanchit-gandhi since it's related to our discussion about allowing users to make decoding return `None` and show a warning when there are corrupted files","Thanks @lhoestq, I wasn't aware of the decode flag. It makes more sense as you say to show a warning when there are corrupted files together with some metadata of the file that allows to filter them from the dataset.\r\n\r\nMy workaround was to catch the LibsndfileError and generate a dummy audio with an unsual sample rate to filter it later. However returning `None` seems better. \r\n\r\n`try:\r\n array, sampling_rate = sf.read(file)\r\nexcept sf.LibsndfileError:\r\n print(\"bad file\")\r\n array = np.array([0.0])\r\n sampling_rate = 99.000` \r\n\r\n"],"created_at":1686645849000,"updated_at":1686746701000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Feature request\r\n\r\nReturn the audio filename when the audio decoding fails. Although currently there are some checks for mp3 and opus formats with the library version there are still cases when the audio decoding could fail, eg. Corrupt file. 
\r\n\r\n### Motivation\r\n\r\nWhen you try to load an audio file dataset and decoding fails, you can't know which file is corrupt:\r\n```\r\nraise LibsndfileError(err, prefix=\"Error opening {0!r}: \".format(self.name))\r\nsoundfile.LibsndfileError: Error opening <_io.BytesIO object at 0x7f5ab7e38290>: Format not recognised.\r\n```\r\n\r\n### Your contribution\r\n\r\nMake a PR that adds exception handling for LibsndfileError to return the audio filename or path when soundfile decoding fails.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5947\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5947\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5946","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5946\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5946\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5946\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5946","id":1754234469,"node_id":"I_kwDODunzps5oj35l","number":5946,"title":"IndexError Not Solving -> IndexError: Invalid key: ?? is out of bounds for size 0 or ??","user":{"login":"syngokhan","id":70565543,"node_id":"MDQ6VXNlcjcwNTY1NTQz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/70565543?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/syngokhan","html_url":"https:\/\/github.com\/syngokhan","followers_url":"https:\/\/api.github.com\/users\/syngokhan\/followers","following_url":"https:\/\/api.github.com\/users\/syngokhan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/syngokhan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/syngokhan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/syngokhan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/syngokhan\/orgs","repos_url":"https:\/\/api.github.com\/users\/syngokhan\/repos","events_url":"https:\/\/api.github.com\/users\/syngokhan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/syngokhan\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["https:\/\/colab.research.google.com\/#scrollTo=AQ_HCYruWIHU&fileId=https%3A\/\/huggingface.co\/dfurman\/falcon-40b-chat-oasst1\/blob\/main\/finetune_falcon40b_oasst1_with_bnb_peft.ipynb\r\n\r\nI ran the very same notebook, exactly as is, but got the same error","Looks related to https:\/\/discuss.huggingface.co\/t\/indexerror-invalid-key-16-is-out-of-bounds-for-size-0\/14298\/4?u=lhoestq","> Looks related to https:\/\/discuss.huggingface.co\/t\/indexerror-invalid-key-16-is-out-of-bounds-for-size-0\/14298\/4?u=lhoestq\n\nThe problem has not been solved; I have tried this before, but the error is the same","> \r\n\r\n@syngokhan did you solve it? 
\r\nI am desperate "],"created_at":1686641655000,"updated_at":1689263983000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nin :1 \u2502\r\n\u2502 \u2502\r\n\u2502 \/usr\/local\/lib\/python3.10\/dist-packages\/transformers\/trainer.py:1537 in train \u2502\r\n\u2502 \u2502\r\n\u2502 1534 \u2502 \u2502 inner_training_loop = find_executable_batch_size( \u2502\r\n\u2502 1535 \u2502 \u2502 \u2502 self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size \u2502\r\n\u2502 1536 \u2502 \u2502 ) \u2502\r\n\u2502 \u2771 1537 \u2502 \u2502 return inner_training_loop( \u2502\r\n\u2502 1538 \u2502 \u2502 \u2502 args=args, \u2502\r\n\u2502 1539 \u2502 \u2502 \u2502 resume_from_checkpoint=resume_from_checkpoint, \u2502\r\n\u2502 1540 \u2502 \u2502 \u2502 trial=trial, \u2502\r\n\u2502 \u2502\r\n\u2502 \/usr\/local\/lib\/python3.10\/dist-packages\/transformers\/trainer.py:1789 in _inner_training_loop \u2502\r\n\u2502 \u2502\r\n\u2502 1786 \u2502 \u2502 \u2502 \u2502 rng_to_sync = True \u2502\r\n\u2502 1787 \u2502 \u2502 \u2502 \u2502\r\n\u2502 1788 \u2502 \u2502 \u2502 step = -1 \u2502\r\n\u2502 \u2771 1789 \u2502 \u2502 \u2502 for step, inputs in enumerate(epoch_iterator): \u2502\r\n\u2502 1790 \u2502 \u2502 \u2502 \u2502 total_batched_samples += 1 \u2502\r\n\u2502 1791 \u2502 \u2502 \u2502 \u2502 if rng_to_sync: \u2502\r\n\u2502 1792 \u2502 \u2502 \u2502 \u2502 \u2502 self._load_rng_state(resume_from_checkpoint) \u2502\r\n\u2502 \u2502\r\n\u2502 \/usr\/local\/lib\/python3.10\/dist-packages\/accelerate\/data_loader.py:377 in __iter__ \u2502\r\n\u2502 \u2502\r\n\u2502 374 \u2502 \u2502 dataloader_iter = super().__iter__() \u2502\r\n\u2502 375 \u2502 \u2502 # We iterate one batch ahead to check when we are at the end \u2502\r\n\u2502 376 \u2502 \u2502 try: \u2502\r\n\u2502 \u2771 377 \u2502 \u2502 \u2502 current_batch = next(dataloader_iter) \u2502\r\n\u2502 378 \u2502 \u2502 except StopIteration: \u2502\r\n\u2502 379 \u2502 \u2502 \u2502 yield \u2502\r\n\u2502 380 \u2502\r\n\u2502 \u2502\r\n\u2502 \/usr\/local\/lib\/python3.10\/dist-packages\/torch\/utils\/data\/dataloader.py:633 in __next__ \u2502\r\n\u2502 \u2502\r\n\u2502 630 \u2502 \u2502 \u2502 if self._sampler_iter is None: \u2502\r\n\u2502 631 \u2502 \u2502 \u2502 \u2502 # TODO(https:\/\/github.com\/pytorch\/pytorch\/issues\/76750) \u2502\r\n\u2502 632 \u2502 \u2502 \u2502 \u2502 self._reset() # type: ignore[call-arg] \u2502\r\n\u2502 \u2771 633 \u2502 \u2502 \u2502 data = self._next_data() \u2502\r\n\u2502 634 \u2502 \u2502 \u2502 self._num_yielded += 1 \u2502\r\n\u2502 635 \u2502 \u2502 \u2502 if self._dataset_kind == _DatasetKind.Iterable and \\ \u2502\r\n\u2502 636 \u2502 \u2502 \u2502 \u2502 \u2502 self._IterableDataset_len_called is not None and \\ \u2502\r\n\u2502 \u2502\r\n\u2502 \/usr\/local\/lib\/python3.10\/dist-packages\/torch\/utils\/data\/dataloader.py:677 in _next_data \u2502\r\n\u2502 \u2502\r\n\u2502 674 \u2502 \u2502\r\n\u2502 675 \u2502 def _next_data(self): \u2502\r\n\u2502 676 \u2502 \u2502 index = self._next_index() # may raise StopIteration \u2502\r\n\u2502 \u2771 677 \u2502 \u2502 data = self._dataset_fetcher.fetch(index) # may raise StopIteration \u2502\r\n\u2502 678 \u2502 \u2502 if self._pin_memory: \u2502\r\n\u2502 679 \u2502 \u2502 \u2502 data = _utils.pin_memory.pin_memory(data, self._pin_memory_device) \u2502\r\n\u2502 680 \u2502 \u2502 return data \u2502\r\n\u2502 \u2502\r\n\u2502 
\/usr\/local\/lib\/python3.10\/dist-packages\/torch\/utils\/data\/_utils\/fetch.py:49 in fetch \u2502\r\n\u2502 \u2502\r\n\u2502 46 \u2502 def fetch(self, possibly_batched_index): \u2502\r\n\u2502 47 \u2502 \u2502 if self.auto_collation: \u2502\r\n\u2502 48 \u2502 \u2502 \u2502 if hasattr(self.dataset, \"__getitems__\") and self.dataset.__getitems__: \u2502\r\n\u2502 \u2771 49 \u2502 \u2502 \u2502 \u2502 data = self.dataset.__getitems__(possibly_batched_index) \u2502\r\n\u2502 50 \u2502 \u2502 \u2502 else: \u2502\r\n\u2502 51 \u2502 \u2502 \u2502 \u2502 data = [self.dataset[idx] for idx in possibly_batched_index] \u2502\r\n\u2502 52 \u2502 \u2502 else: \u2502\r\n\u2502 \u2502\r\n\u2502 \/usr\/local\/lib\/python3.10\/dist-packages\/datasets\/arrow_dataset.py:2782 in __getitems__ \u2502\r\n\u2502 \u2502\r\n\u2502 2779 \u2502 \u2502\r\n\u2502 2780 \u2502 def __getitems__(self, keys: List) -> List: \u2502\r\n\u2502 2781 \u2502 \u2502 \"\"\"Can be used to get a batch using a list of integers indices.\"\"\" \u2502\r\n\u2502 \u2771 2782 \u2502 \u2502 batch = self.__getitem__(keys) \u2502\r\n\u2502 2783 \u2502 \u2502 n_examples = len(batch[next(iter(batch))]) \u2502\r\n\u2502 2784 \u2502 \u2502 return [{col: array[i] for col, array in batch.items()} for i in range(n_example \u2502\r\n\u2502 2785 \u2502\r\n\u2502 \u2502\r\n\u2502 \/usr\/local\/lib\/python3.10\/dist-packages\/datasets\/arrow_dataset.py:2778 in __getitem__ \u2502\r\n\u2502 \u2502\r\n\u2502 2775 \u2502 \u2502\r\n\u2502 2776 \u2502 def __getitem__(self, key): # noqa: F811 \u2502\r\n\u2502 2777 \u2502 \u2502 \"\"\"Can be used to index columns (by string names) or rows (by integer index or i \u2502\r\n\u2502 \u2771 2778 \u2502 \u2502 return self._getitem(key) \u2502\r\n\u2502 2779 \u2502 \u2502\r\n\u2502 2780 \u2502 def __getitems__(self, keys: List) -> List: \u2502\r\n\u2502 2781 \u2502 \u2502 \"\"\"Can be used to get a batch using a list of integers indices.\"\"\" \u2502\r\n\u2502 \u2502\r\n\u2502 \/usr\/local\/lib\/python3.10\/dist-packages\/datasets\/arrow_dataset.py:2762 in _getitem \u2502\r\n\u2502 \u2502\r\n\u2502 2759 \u2502 \u2502 format_kwargs = kwargs[\"format_kwargs\"] if \"format_kwargs\" in kwargs else self._ \u2502\r\n\u2502 2760 \u2502 \u2502 format_kwargs = format_kwargs if format_kwargs is not None else {} \u2502\r\n\u2502 2761 \u2502 \u2502 formatter = get_formatter(format_type, features=self._info.features, **format_kw \u2502\r\n\u2502 \u2771 2762 \u2502 \u2502 pa_subtable = query_table(self._data, key, indices=self._indices if self._indice \u2502\r\n\u2502 2763 \u2502 \u2502 formatted_output = format_table( \u2502\r\n\u2502 2764 \u2502 \u2502 \u2502 pa_subtable, key, formatter=formatter, format_columns=format_columns, output \u2502\r\n\u2502 2765 \u2502 \u2502 ) \u2502\r\n\u2502 \u2502\r\n\u2502 \/usr\/local\/lib\/python3.10\/dist-packages\/datasets\/formatting\/formatting.py:578 in query_table \u2502\r\n\u2502 \u2502\r\n\u2502 575 \u2502 \u2502 _check_valid_column_key(key, table.column_names) \u2502\r\n\u2502 576 \u2502 else: \u2502\r\n\u2502 577 \u2502 \u2502 size = indices.num_rows if indices is not None else table.num_rows \u2502\r\n\u2502 \u2771 578 \u2502 \u2502 _check_valid_index_key(key, size) \u2502\r\n\u2502 579 \u2502 # Query the main table \u2502\r\n\u2502 580 \u2502 if indices is None: \u2502\r\n\u2502 581 \u2502 \u2502 pa_subtable = _query_table(table, key) \u2502\r\n\u2502 \u2502\r\n\u2502 \/usr\/local\/lib\/python3.10\/dist-packages\/datasets\/formatting\/formatting.py:531 in \u2502\r\n\u2502 
_check_valid_index_key \u2502\r\n\u2502 \u2502\r\n\u2502 528 \u2502 \u2502 \u2502 _check_valid_index_key(min(key), size=size) \u2502\r\n\u2502 529 \u2502 elif isinstance(key, Iterable): \u2502\r\n\u2502 530 \u2502 \u2502 if len(key) > 0: \u2502\r\n\u2502 \u2771 531 \u2502 \u2502 \u2502 _check_valid_index_key(int(max(key)), size=size) \u2502\r\n\u2502 532 \u2502 \u2502 \u2502 _check_valid_index_key(int(min(key)), size=size) \u2502\r\n\u2502 533 \u2502 else: \u2502\r\n\u2502 534 \u2502 \u2502 _raise_bad_key_type(key) \u2502\r\n\u2502 \u2502\r\n\u2502 \/usr\/local\/lib\/python3.10\/dist-packages\/datasets\/formatting\/formatting.py:521 in \u2502\r\n\u2502 _check_valid_index_key \u2502\r\n\u2502 \u2502\r\n\u2502 518 def _check_valid_index_key(key: Union[int, slice, range, Iterable], size: int) -> None: \u2502\r\n\u2502 519 \u2502 if isinstance(key, int): \u2502\r\n\u2502 520 \u2502 \u2502 if (key < 0 and key + size < 0) or (key >= size): \u2502\r\n\u2502 \u2771 521 \u2502 \u2502 \u2502 raise IndexError(f\"Invalid key: {key} is out of bounds for size {size}\") \u2502\r\n\u2502 522 \u2502 \u2502 return \u2502\r\n\u2502 523 \u2502 elif isinstance(key, slice): \u2502\r\n\u2502 524 \u2502 \u2502 pass \n\n### Steps to reproduce the bug\n\n``\r\nimport json\r\nimport os\r\nfrom pprint import pprint\r\n\r\nimport bitsandbytes as bnb\r\nimport pandas as pd\r\nimport torch\r\nimport torch.nn as nn\r\nimport transformers\r\nfrom datasets import Dataset,load_dataset\r\n\r\nfrom peft import (\r\n LoraConfig,\r\n PeftConfig,\r\n PeftModel,\r\n get_peft_model,\r\n prepare_model_for_kbit_training\r\n)\r\n\r\nfrom transformers import (\r\n AutoConfig,\r\n AutoModelForCausalLM,\r\n AutoTokenizer,\r\n BitsAndBytesConfig,\r\n\r\n)\r\n\r\nos.environ[\"CUDA_VISIBLE_DEVICES\"] = \"0\"\r\n\r\ndef print_trainable_parameters(model):\r\n \"\"\"\r\n Prints the number of trainable parameters in the model.\r\n \"\"\"\r\n trainable_params = 0\r\n all_param = 0\r\n for _, param in model.named_parameters():\r\n all_param += param.numel()\r\n if param.requires_grad:\r\n trainable_params += param.numel()\r\n print(\r\n f\"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params \/ all_param}\"\r\n )\r\n\r\n\r\nMODEL_NAME = \"tiiuae\/falcon-7b\"\r\n\r\nbnb_config = BitsAndBytesConfig(\r\n load_in_4bit = True,\r\n bnb_4bit_use_double_quant=True,\r\n bnb_4bit_quant_type=\"nf4\",\r\n bnb_4bit_compute_dtype=torch.bfloat16,\r\n)\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\r\n MODEL_NAME,\r\n device_map = \"auto\",\r\n trust_remote_code = True,\r\n quantization_config = bnb_config\r\n)\r\n\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)\r\ntokenizer.pad_token = tokenizer.eos_token\r\n\r\nmodel.gradient_checkpointing_enable()\r\nmodel = prepare_model_for_kbit_training(model)\r\n\r\n\r\nconfig = LoraConfig(\r\n r = 16,\r\n lora_alpha = 32,\r\n target_modules = [\"query_key_value\"],\r\n lora_dropout = 0.05,\r\n bias = \"none\",\r\n task_type = \"CASUAL_LM\"\r\n)\r\n\r\nmodel = get_peft_model(model,config)\r\nprint_trainable_parameters(model)\r\n\r\ndef generate_prompt(data_point):\r\n return f\"\"\"\r\n: {data_point[\"question\"]}\r\n: {data_point[\"answer\"]} \r\n\"\"\".strip()\r\n\r\ndef generate_and_tokenize_prompt(data_point):\r\n full_prompt = generate_prompt(data_point)\r\n tokenized_full_prompt = tokenizer(full_prompt, padding = True, truncation = True,return_tensors = None)\r\n return dict({\r\n \"input_ids\" : tokenized_full_prompt[\"input_ids\"],\r\n 
\"attention_mask\" : tokenized_full_prompt[\"attention_mask\"]\r\n\r\n })\r\n\r\n\r\ndata = data[\"train\"].shuffle().map(generate_and_tokenize_prompt, batched = False) \r\n\r\nOUTPUT_DIR = \"experiments\"\r\n\r\ntrainings_args = transformers.TrainingArguments(\r\n per_device_train_batch_size = 1,\r\n gradient_accumulation_steps = 4,\r\n num_train_epochs = 1,\r\n learning_rate = 2e-4,\r\n fp16 = True,\r\n save_total_limit = 3,\r\n logging_steps = 1,\r\n output_dir = OUTPUT_DIR,\r\n max_steps = 80,\r\n optim = \"paged_adamw_8bit\",\r\n lr_scheduler_type = \"cosine\",\r\n warmup_ratio = 0.05,\r\n #remove_unused_columns=True\r\n)\r\n\r\ntrainer = transformers.Trainer(\r\n model = model,\r\n train_dataset = data,\r\n args = trainings_args, \r\n data_collator = transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),\r\n\r\n)\r\n\r\nmodel.config.use_cache = False\r\n\r\ntrainer.train()\r\n\r\n\r\nIndexError: Invalid key: 32 is out of bounds for size 0\r\n\r\nDataSet Format is like : \r\n[{\"question\": \"How can I create an account?\", \"answer\": \"To create an account, click on the 'Sign Up' button on the top right corner of our website and follow the instructions to complete the registration process.\"}, .... ]\n\n### Expected behavior\n\n-\n\n### Environment info\n\n\r\n!pip install -q pip \r\n!pip install -q bitsandbytes==0.39.0 \r\n!pip install -q torch==2.0.1\r\n\r\n!pip install -q git+https:\/\/github.com\/huggingface\/transformers.git \r\n!pip install -q git+https:\/\/github.com\/huggingface\/peft.git \r\n!pip install -q git+https:\/\/github.com\/huggingface\/accelerate.git \r\n\r\n!pip install -q datasets \r\n!pip install -q loralib==0.1.1 \r\n!pip install -q einops==0.6.1 \r\n\r\n\r\nimport json\r\nimport os\r\nfrom pprint import pprint\r\n\r\nimport bitsandbytes as bnb\r\nimport pandas as pd\r\nimport torch\r\nimport torch.nn as nn\r\nimport transformers\r\nfrom datasets import Dataset,load_dataset\r\n\r\nfrom peft import (\r\n LoraConfig,\r\n PeftConfig,\r\n PeftModel,\r\n get_peft_model,\r\n prepare_model_for_kbit_training\r\n)\r\n\r\nfrom transformers import (\r\n AutoConfig,\r\n AutoModelForCausalLM,\r\n AutoTokenizer,\r\n BitsAndBytesConfig,\r\n\r\n)\r\n\r\nos.environ[\"CUDA_VISIBLE_DEVICES\"] = \"0\"\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5946\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5946\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5945","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5945\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5945\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5945\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5945","id":1754084577,"node_id":"I_kwDODunzps5ojTTh","number":5945,"title":"Failing to upload dataset to the 
hub","user":{"login":"Ar770","id":77382661,"node_id":"MDQ6VXNlcjc3MzgyNjYx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/77382661?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Ar770","html_url":"https:\/\/github.com\/Ar770","followers_url":"https:\/\/api.github.com\/users\/Ar770\/followers","following_url":"https:\/\/api.github.com\/users\/Ar770\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Ar770\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Ar770\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Ar770\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Ar770\/orgs","repos_url":"https:\/\/api.github.com\/users\/Ar770\/repos","events_url":"https:\/\/api.github.com\/users\/Ar770\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Ar770\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Feel free to re-run your code later, it will resume automatically where you left","Tried many times in the last 2 weeks, problem remains.","Alternatively you can save your dataset in parquet files locally and upload them to the hub manually\r\n\r\n```python\r\nfrom tqdm import tqdm\r\nnum_shards = 60\r\nfor index in tqdm(range(num_shards)):\r\n ds.shard(num_shards=num_shards, index=index, contiguous=True).to_parquet(f\"{index:05d}.parquet\")\r\n````"],"created_at":1686635206000,"updated_at":1687172095000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nTrying to upload a dataset of hundreds of thousands of audio samples (the total volume is not very large, 60 gb) to the hub with push_to_hub, it doesn't work.\r\nFrom time to time one piece of the data (parquet) gets pushed and then I get RemoteDisconnected even though my internet is stable.\r\nPlease help.\r\nI'm trying to upload the dataset for almost a week.\r\nThanks\n\n### Steps to reproduce the bug\n\nnot relevant \n\n### Expected behavior\n\nBe able to upload thedataset\n\n### Environment info\n\npython: 3.9","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5945\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5945\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5944","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5944\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5944\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5944\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5944","id":1752882200,"node_id":"PR_kwDODunzps5Sx7O4","number":5944,"title":"Arrow dataset builder to be able to load and stream Arrow 
datasets","user":{"login":"mariusz-jachimowicz-83","id":10278877,"node_id":"MDQ6VXNlcjEwMjc4ODc3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10278877?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariusz-jachimowicz-83","html_url":"https:\/\/github.com\/mariusz-jachimowicz-83","followers_url":"https:\/\/api.github.com\/users\/mariusz-jachimowicz-83\/followers","following_url":"https:\/\/api.github.com\/users\/mariusz-jachimowicz-83\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariusz-jachimowicz-83\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariusz-jachimowicz-83\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariusz-jachimowicz-83\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariusz-jachimowicz-83\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariusz-jachimowicz-83\/repos","events_url":"https:\/\/api.github.com\/users\/mariusz-jachimowicz-83\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariusz-jachimowicz-83\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","@lhoestq tips applied. Thanks for a review. :smile: It's a lot of fun to improve this project. ","Let's add some documentation in a subsequent PR :)\r\n\r\nIn particular @mariosasko and I think it's important to note to users that local arrow data are copied to cache according to the way load_dataset works, but if they want they can use Dataset.from_file instead","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006384 \/ 0.011353 (-0.004969) | 0.003788 \/ 0.011008 (-0.007220) | 0.098524 \/ 0.038508 (0.060016) | 0.031786 \/ 0.023109 (0.008677) | 0.307799 \/ 0.275898 (0.031901) | 0.337329 \/ 0.323480 (0.013849) | 0.003650 \/ 0.007986 (-0.004336) | 0.003731 \/ 0.004328 (-0.000598) | 0.076816 \/ 0.004250 (0.072566) | 0.041888 \/ 0.037052 (0.004835) | 0.310702 \/ 0.258489 (0.052213) | 0.343846 \/ 0.293841 (0.050005) | 0.027841 \/ 0.128546 (-0.100705) | 0.008312 \/ 0.075646 (-0.067334) | 0.320230 \/ 0.419271 (-0.099042) | 0.047378 \/ 0.043533 (0.003845) | 0.308683 \/ 0.255139 (0.053544) | 0.335129 \/ 0.283200 (0.051930) | 0.096294 \/ 0.141683 (-0.045389) | 1.485521 \/ 1.452155 (0.033366) | 1.559868 \/ 1.492716 (0.067152) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.197376 \/ 0.018006 (0.179370) | 0.430461 \/ 0.000490 (0.429972) | 0.004152 \/ 0.000200 (0.003953) | 0.000068 \/ 0.000054 (0.000014) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.023660 \/ 0.037411 (-0.013751) | 0.103128 \/ 0.014526 (0.088602) | 0.107549 \/ 0.176557 (-0.069008) | 0.175934 \/ 0.737135 (-0.561201) | 0.112210 \/ 0.296338 (-0.184129) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.415804 \/ 0.215209 (0.200595) | 4.216333 \/ 2.077655 (2.138679) | 1.910354 \/ 1.504120 (0.406234) | 1.712689 \/ 1.541195 (0.171494) | 1.754705 \/ 1.468490 (0.286215) | 0.554647 \/ 4.584777 (-4.030130) | 
3.393592 \/ 3.745712 (-0.352120) | 1.737504 \/ 5.269862 (-3.532358) | 1.021213 \/ 4.565676 (-3.544464) | 0.066908 \/ 0.424275 (-0.357367) | 0.011446 \/ 0.007607 (0.003839) | 0.524630 \/ 0.226044 (0.298585) | 5.243005 \/ 2.268929 (2.974077) | 2.349685 \/ 55.444624 (-53.094939) | 2.027457 \/ 6.876477 (-4.849020) | 2.131053 \/ 2.142072 (-0.011020) | 0.669070 \/ 4.805227 (-4.136157) | 0.136317 \/ 6.500664 (-6.364347) | 0.065924 \/ 0.075469 (-0.009545) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.254102 \/ 1.841788 (-0.587686) | 13.790492 \/ 8.074308 (5.716184) | 14.197772 \/ 10.191392 (4.006380) | 0.143989 \/ 0.680424 (-0.536434) | 0.016577 \/ 0.534201 (-0.517624) | 0.375437 \/ 0.579283 (-0.203846) | 0.398995 \/ 0.434364 (-0.035369) | 0.445287 \/ 0.540337 (-0.095050) | 0.538632 \/ 1.386936 (-0.848304) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006251 \/ 0.011353 (-0.005101) | 0.004019 \/ 0.011008 (-0.006989) | 0.077985 \/ 0.038508 (0.039477) | 0.028705 \/ 0.023109 (0.005596) | 0.417360 \/ 0.275898 (0.141462) | 0.463964 \/ 0.323480 (0.140484) | 0.003489 \/ 0.007986 (-0.004497) | 0.003032 \/ 0.004328 (-0.001296) | 0.077953 \/ 0.004250 (0.073702) | 0.040104 \/ 0.037052 (0.003051) | 0.405242 \/ 0.258489 (0.146753) | 0.475029 \/ 0.293841 (0.181188) | 0.028113 \/ 0.128546 (-0.100433) | 0.008610 \/ 0.075646 (-0.067036) | 0.084847 \/ 0.419271 (-0.334424) | 0.048227 \/ 0.043533 (0.004694) | 0.417235 \/ 0.255139 (0.162096) | 0.450470 \/ 0.283200 (0.167270) | 0.096978 \/ 0.141683 (-0.044705) | 1.514688 \/ 1.452155 (0.062533) | 1.560205 \/ 1.492716 (0.067488) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.235125 \/ 0.018006 (0.217119) | 0.409904 \/ 0.000490 (0.409414) | 0.002474 \/ 0.000200 (0.002275) | 0.000074 \/ 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025152 \/ 0.037411 (-0.012259) | 0.103517 \/ 0.014526 (0.088991) | 0.110154 \/ 0.176557 (-0.066402) | 0.161431 \/ 0.737135 (-0.575704) | 0.114891 \/ 0.296338 (-0.181448) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.456077 \/ 0.215209 (0.240868) | 4.541171 \/ 2.077655 (2.463517) | 2.297912 \/ 1.504120 (0.793792) | 2.079337 \/ 1.541195 (0.538143) | 2.121291 \/ 1.468490 (0.652801) | 0.560172 \/ 4.584777 (-4.024605) | 
3.421122 \/ 3.745712 (-0.324590) | 1.764675 \/ 5.269862 (-3.505186) | 1.043482 \/ 4.565676 (-3.522195) | 0.067652 \/ 0.424275 (-0.356623) | 0.011181 \/ 0.007607 (0.003574) | 0.557232 \/ 0.226044 (0.331188) | 5.607851 \/ 2.268929 (3.338922) | 2.783715 \/ 55.444624 (-52.660909) | 2.380943 \/ 6.876477 (-4.495534) | 2.378316 \/ 2.142072 (0.236244) | 0.674356 \/ 4.805227 (-4.130871) | 0.135912 \/ 6.500664 (-6.364752) | 0.067009 \/ 0.075469 (-0.008460) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.309002 \/ 1.841788 (-0.532786) | 14.464073 \/ 8.074308 (6.389765) | 14.418727 \/ 10.191392 (4.227335) | 0.148486 \/ 0.680424 (-0.531938) | 0.016650 \/ 0.534201 (-0.517551) | 0.368786 \/ 0.579283 (-0.210497) | 0.395026 \/ 0.434364 (-0.039338) | 0.433565 \/ 0.540337 (-0.106772) | 0.526603 \/ 1.386936 (-0.860333) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#443fc92700b4f9e12421e8082e205535314a67d5 \"CML watermark\")\n"],"created_at":1686579709000,"updated_at":1686677762000,"closed_at":1686677341000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5944","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5944","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5944.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5944.patch","merged_at":1686677341000},"body":"This adds a Arrow dataset builder to be able to load and stream from already preprocessed Arrow files.\r\nIt's related to https:\/\/github.com\/huggingface\/datasets\/issues\/3035","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5944\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5944\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5942","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5942\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5942\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5942\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5942","id":1752021681,"node_id":"PR_kwDODunzps5Su-V4","number":5942,"title":"Pass datasets-cli additional args as kwargs to DatasetBuilder in 
`run_beam.py`","user":{"login":"graelo","id":84066822,"node_id":"MDQ6VXNlcjg0MDY2ODIy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/84066822?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/graelo","html_url":"https:\/\/github.com\/graelo","followers_url":"https:\/\/api.github.com\/users\/graelo\/followers","following_url":"https:\/\/api.github.com\/users\/graelo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/graelo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/graelo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/graelo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/graelo\/orgs","repos_url":"https:\/\/api.github.com\/users\/graelo\/repos","events_url":"https:\/\/api.github.com\/users\/graelo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/graelo\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1686552650000,"updated_at":1688116500000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5942","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5942","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5942.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5942.patch","merged_at":null},"body":"Hi,\r\n\r\nFollowing this , here is a simple PR to pass any additional args to datasets-cli as kwargs in the DatasetBuilder in `run_beam.py`.\r\n\r\nI also took the liberty to add missing setup steps to the `beam.mdx` docs in order to help everyone.\r\n\r\n@lhoestq ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5942\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5942\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5941","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5941\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5941\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5941\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5941","id":1751838897,"node_id":"I_kwDODunzps5oavCx","number":5941,"title":"Load Data Sets Too Slow In Train Seq2seq 
Model","user":{"login":"xyx361100238","id":19569322,"node_id":"MDQ6VXNlcjE5NTY5MzIy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19569322?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/xyx361100238","html_url":"https:\/\/github.com\/xyx361100238","followers_url":"https:\/\/api.github.com\/users\/xyx361100238\/followers","following_url":"https:\/\/api.github.com\/users\/xyx361100238\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/xyx361100238\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/xyx361100238\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/xyx361100238\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/xyx361100238\/orgs","repos_url":"https:\/\/api.github.com\/users\/xyx361100238\/repos","events_url":"https:\/\/api.github.com\/users\/xyx361100238\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/xyx361100238\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! you can speed it up using multiprocessing by passing `num_proc=` to `load_dataset()`","already did\uff0cbut not useful for step Generating train split\uff0cit works in step \"Resolving data files\" & \"Downloading data files\" ","@mariosasko some advice \uff0c thanks\uff01","I met the same problem, terrible experience","@mariosasko ","We need more info about the issue to provide help. \r\n\r\nCan you interrupt the process (with `num_proc=None`) after the `load_dataset` call when the slowdown occurs? So we can know what part of the code is causing it.\r\n\r\nThe `audiofolder` \\ `imagefolder` with metadata is not performant for large datasets. Luckily, we can make them much faster if drop the nested metadata files feature (not that useful). 
I plan to work on this soon.\r\n\r\nIn the meantime, it's better to use `Dataset.from_generator` (requires replacing the `load_dataset` calls in the transformers script with `Dataset.from_generator`) or write a dataset loading script for large datasets."],"created_at":1686542323000,"updated_at":1689275741000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\n\r\nstep 'Generating train split' in load_dataset is too slow\uff1a\r\n![image](https:\/\/github.com\/huggingface\/datasets\/assets\/19569322\/d9b08eee-95fe-4741-a346-b70416c948f8)\r\n\n\n### Steps to reproduce the bug\n\nData\uff1a own data\uff0c16K16B Mono wav\r\nOficial Script:[ run_speech_recognition_seq2seq.py](https:\/\/github.com\/huggingface\/transformers\/blob\/main\/examples\/pytorch\/speech-recognition\/run_speech_recognition_seq2seq.py)\r\nAdd Code\uff1a\r\n if data_args.data_path is not None:\r\n print(data_args.data_path)\r\n raw_datasets = load_dataset(\"audiofolder\", data_dir=data_args.data_path, cache_dir=model_args.cache_dir)\r\n raw_datasets = raw_datasets.cast_column(\"audio\", Audio(sampling_rate=16000))\r\n raw_datasets = raw_datasets[\"train\"].train_test_split(test_size=0.005, shuffle=True)\r\n \uff08change cache_dir to other path \uff0cex:\/DATA\/cache\uff09\n\n### Expected behavior\n\nload data fast,at least 1000+\r\n`Generating train split: 387875 examples [32:24:45, 1154.83 examples\/s]`\n\n### Environment info\n\n\r\n- `transformers` version: 4.28.0.dev0\r\n- Platform: Linux-5.4.0-149-generic-x86_64-with-debian-bullseye-sid\r\n- Python version: 3.7.16\r\n- Huggingface_hub version: 0.13.2\r\n- PyTorch version (GPU?): 1.13.1+cu116 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?\/GPU?\/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: \r\n- Using distributed or parallel set-up in script?: ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5941\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5941\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5990","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5990\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5990\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5990\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5990","id":1774389854,"node_id":"I_kwDODunzps5pwwpe","number":5990,"title":"Pushing a large dataset on the hub consistently 
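For the `audiofolder` slowness above, a hedged sketch of the suggested `Dataset.from_generator` route; the paths and the transcript lookup are illustrative, adapt them to however the metadata is stored:

```python
from pathlib import Path
from datasets import Audio, Dataset, Features, Value

transcripts = {"a.wav": "hello", "b.wav": "world"}  # illustrative metadata

def gen():
    # Yield one example per wav file; the Audio feature handles decoding.
    for wav in sorted(Path("/DATA/wavs").glob("*.wav")):
        yield {"audio": str(wav), "sentence": transcripts[wav.name]}

features = Features({"audio": Audio(sampling_rate=16000), "sentence": Value("string")})
raw_datasets = Dataset.from_generator(gen, features=features, writer_batch_size=5000)
```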
hangs","user":{"login":"AntreasAntoniou","id":10792502,"node_id":"MDQ6VXNlcjEwNzkyNTAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10792502?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/AntreasAntoniou","html_url":"https:\/\/github.com\/AntreasAntoniou","followers_url":"https:\/\/api.github.com\/users\/AntreasAntoniou\/followers","following_url":"https:\/\/api.github.com\/users\/AntreasAntoniou\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/AntreasAntoniou\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/AntreasAntoniou\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/AntreasAntoniou\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/AntreasAntoniou\/orgs","repos_url":"https:\/\/api.github.com\/users\/AntreasAntoniou\/repos","events_url":"https:\/\/api.github.com\/users\/AntreasAntoniou\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/AntreasAntoniou\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @AntreasAntoniou , sorry to know you are facing this issue. To help debugging it, could you tell me:\r\n- What is the total dataset size?\r\n- Is it always failing on the same shard or is the hanging problem happening randomly?\r\n- Were you able to save the dataset as parquet locally? This would help us determine if the problem comes from the upload or the file generation.\r\n\r\nI'm cc-ing @lhoestq who might have some insights from a `datasets` perspective.","One trick that can also help is to check the traceback when you kill your python process: it will show where in the code it was hanging","Right. So I did the trick @lhoestq suggested. Here is where things seem to hang\r\n\r\n```\r\nError while uploading 'data\/train-00120-of-00195-466c2dbab2eb9989.parquet' to the Hub. \r\nPushing split train to the Hub. 
\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 3\/3 [00:03<00:00, 1.15s\/ba]\r\nUpload 1 LFS files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:52<00:00, 52.12s\/it]\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 3\/3 [00:03<00:00, 1.08s\/ba]\r\nUpload 1 LFS files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:45<00:00, 45.54s\/it]\r\nCreating parquet from Arrow format: 
100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 3\/3 [00:03<00:00, 1.08s\/ba]\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 3\/3 [00:03<00:00, 1.03s\/ba^Upload 1 LFS files: 0%| | 0\/1 [\r\n21:27:35 \r\n line for line in self.divide(flatten_spans()) if line.plain != separator \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/rich\/text.py\", line 385, in plain \r\n if len(self._text) != 1: \r\nKeyboardInterrupt \r\n \r\nOriginal exception was: \r\nTraceback (most recent call last): \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/tqdm\/contrib\/concurrent.py\", line 51, in _executor_map \r\n return list(tqdm_class(ex.map(fn, *iterables, chunksize=chunksize), **kwargs)) \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/tqdm\/std.py\", line 1178, in __iter__ \r\n for obj in iterable: \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/concurrent\/futures\/_base.py\", line 621, in result_iterator \r\n yield _result_or_cancel(fs.pop()) \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/concurrent\/futures\/_base.py\", line 319, in _result_or_cancel \r\n return fut.result(timeout) \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/concurrent\/futures\/_base.py\", line 453, in result \r\n self._condition.wait(timeout) \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/threading.py\", line 320, in wait \r\n waiter.acquire() \r\nKeyboardInterrupt \r\n \r\nDuring handling of the above exception, another exception occurred: \r\n \r\nTraceback (most recent call last): \r\n File \"\/TALI\/tali\/scripts\/validate_dataset.py\", line 127, in \r\n train_dataset.push_to_hub(repo_id=\"Antreas\/TALI-base\", max_shard_size=\"5GB\") \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/datasets\/dataset_dict.py\", line 1583, in push_to_hub \r\n repo_id, split, uploaded_size, dataset_nbytes, _, _ = self[split]._push_parquet_shards_to_hub( \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/datasets\/arrow_dataset.py\", line 5275, in _push_parquet_shards_to_hub \r\n _retry( \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/datasets\/utils\/file_utils.py\", line 
282, in _retry \r\n return func(*func_args, **func_kwargs) \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/huggingface_hub\/utils\/_validators.py\", line 118, in _inner_fn \r\n return fn(*args, **kwargs) \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/huggingface_hub\/hf_api.py\", line 826, in _inner \r\n return fn(self, *args, **kwargs) \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/huggingface_hub\/hf_api.py\", line 3205, in upload_file \r\n commit_info = self.create_commit( \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/huggingface_hub\/utils\/_validators.py\", line 118, in _inner_fn \r\n return fn(*args, **kwargs) \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/huggingface_hub\/hf_api.py\", line 826, in _inner \r\n return fn(self, *args, **kwargs) \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/huggingface_hub\/hf_api.py\", line 2680, in create_commit \r\n upload_lfs_files( \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/huggingface_hub\/utils\/_validators.py\", line 118, in _inner_fn \r\n return fn(*args, **kwargs) \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/huggingface_hub\/_commit_api.py\", line 353, in upload_lfs_files \r\n thread_map( \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/tqdm\/contrib\/concurrent.py\", line 69, in thread_map \r\n return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs) \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/tqdm\/contrib\/concurrent.py\", line 49, in _executor_map \r\n with PoolExecutor(max_workers=max_workers, initializer=tqdm_class.set_lock, \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/concurrent\/futures\/_base.py\", line 649, in __exit__ \r\n self.shutdown(wait=True) \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/concurrent\/futures\/thread.py\", line 235, in shutdown \r\n t.join() \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/threading.py\", line 1096, in join \r\n self._wait_for_tstate_lock() \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/threading.py\", line 1116, in _wait_for_tstate_lock \r\n if lock.acquire(block, timeout): \r\nKeyboardInterrupt \r\n```","@Wauplin \r\n\r\n>What is the total dataset size?\r\n\r\nThere are three variants, and the random hanging happens on all three. The sizes are 2TB, 1TB, and 200GB. \r\n\r\n>Is it always failing on the same shard or is the hanging problem happening randomly?\r\n\r\nIt seems to be very much random, as restarting can help move past the previous hang, only to find a new one, or not. \r\n\r\n>Were you able to save the dataset as parquet locally? This would help us determine if the problem comes from the upload or the file generation.\r\n\r\nYes. The dataset seems to be locally stored as parquet. ","Hmm it looks like an issue with TQDM lock. Maybe you can try updating TQDM ?","I am using the latest version of tqdm\r\n\r\n```\r\n\u2b22 [Docker] \u276f pip install tqdm --upgrade\r\nRequirement already satisfied: tqdm in \/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages (4.65.0)\r\nWARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. 
It is recommended to use a virtual environment instead: https:\/\/pip.pypa.io\/warnings\/venv\r\n```","I tried trying to catch the hanging issue in action again\r\n\r\n```\r\nPushing dataset shards to the dataset hub: 65%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258a | 127\/195 [2:28:02<1:19:15, 69.94s\/it] \r\nError while uploading 'data\/train-00127-of-00195-3f8d036ade107c27.parquet' to the Hub. \r\nPushing split train to the Hub. \r\nPushing dataset shards to the dataset hub: 64%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258f | 124\/195 [2:06:10<1:12:14, 61.05s\/it]C^[^C^C^C \r\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 Traceback (most recent call last) \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e \r\n\u2502 \/TALI\/tali\/scripts\/validate_dataset.py:127 in \u2502 \r\n\u2502 \u2502 \r\n\u2502 124 \u2502 \u2502 \r\n\u2502 125 \u2502 while not succesful_competion: \u2502 \r\n\u2502 126 \u2502 \u2502 try: \u2502 \r\n\u2502 \u2771 127 \u2502 \u2502 \u2502 train_dataset.push_to_hub(repo_id=\"Antreas\/TALI-base\", max_shard_size=\"5GB\") \u2502 \r\n\u2502 128 \u2502 \u2502 \u2502 succesful_competion = True \u2502 \r\n\u2502 129 \u2502 \u2502 except Exception as e: \u2502 \r\n\u2502 130 \u2502 \u2502 \u2502 print(e) \u2502 \r\n\u2502 \u2502 \r\n\u2502 \/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/datasets\/dataset_dict.py:1583 in push_to_hub \u2502 \r\n\u2502 \u2502 \r\n\u2502 1580 \u2502 \u2502 for split in self.keys(): \u2502 \r\n\u2502 1581 \u2502 \u2502 \u2502 logger.warning(f\"Pushing split {split} to the Hub.\") \u2502 \r\n\u2502 1582 \u2502 \u2502 \u2502 # The split=key needs to be removed before merging \u2502 \r\n\u2502 \u2771 1583 \u2502 \u2502 \u2502 repo_id, split, uploaded_size, dataset_nbytes, _, _ = self[split]._push_parq \u2502 \r\n\u2502 1584 \u2502 \u2502 \u2502 \u2502 repo_id, \u2502 \r\n\u2502 1585 \u2502 \u2502 \u2502 \u2502 split=split, \u2502 \r\n\u2502 1586 \u2502 \u2502 \u2502 \u2502 private=private, \u2502 \r\n\u2502 \u2502 \r\n\u2502 \/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/datasets\/arrow_dataset.py:5263 in \u2502 \r\n\u2502 _push_parquet_shards_to_hub \u2502 \r\n\u2502 \u2502 \r\n\u2502 5260 \u2502 \u2502 \u2502 \r\n\u2502 5261 \u2502 \u2502 uploaded_size = 0 \u2502 \r\n\u2502 5262 \u2502 \u2502 shards_path_in_repo = [] \u2502 \r\n\u2502 \u2771 5263 \u2502 \u2502 for index, shard in logging.tqdm( \u2502 \r\n\u2502 5264 \u2502 \u2502 \u2502 enumerate(itertools.chain([first_shard], shards_iter)), \u2502 \r\n\u2502 5265 \u2502 \u2502 \u2502 desc=\"Pushing 
dataset shards to the dataset hub\", \u2502 \r\n\u2502 5266 \u2502 \u2502 \u2502 total=num_shards, \u2502 \r\n\u2502 \u2502 \r\n\u2502 \/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/tqdm\/std.py:1178 in __iter__ \u2502 \r\n\u2502 \u2502 \r\n\u2502 1175 \u2502 \u2502 time = self._time \u2502 \r\n\u2502 1176 \u2502 \u2502 \u2502 \r\n\u2502 1177 \u2502 \u2502 try: \u2502\r\n\u2502 \u2771 1178 \u2502 \u2502 \u2502 for obj in iterable: \u2502\r\n\u2502 1179 \u2502 \u2502 \u2502 \u2502 yield obj \u2502\r\n\u2502 1180 \u2502 \u2502 \u2502 \u2502 # Update and possibly print the progressbar. \u2502\r\n\u2502 1181 \u2502 \u2502 \u2502 \u2502 # Note: does not call self.update(1) for speed optimisation. \u2502\r\n\u2502 \u2502\r\n\u2502 \/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/datasets\/arrow_dataset.py:5238 in \u2502\r\n\u2502 shards_with_embedded_external_files \u2502\r\n\u2502 \u2502\r\n\u2502 5235 \u2502 \u2502 \u2502 \u2502 for shard in shards: \u2502\r\n\u2502 5236 \u2502 \u2502 \u2502 \u2502 \u2502 format = shard.format \u2502\r\n\u2502 5237 \u2502 \u2502 \u2502 \u2502 \u2502 shard = shard.with_format(\"arrow\") \u2502\r\n\u2502 \u2771 5238 \u2502 \u2502 \u2502 \u2502 \u2502 shard = shard.map( \u2502\r\n\u2502 5239 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 embed_table_storage, \u2502\r\n\u2502 5240 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 batched=True, \u2502\r\n\u2502 5241 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 batch_size=1000, \u2502\r\n\u2502 \u2502\r\n\u2502 \/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/datasets\/arrow_dataset.py:578 in wrapper \u2502\r\n\u2502 \u2502\r\n\u2502 575 \u2502 \u2502 else: \u2502\r\n\u2502 576 \u2502 \u2502 \u2502 self: \"Dataset\" = kwargs.pop(\"self\") \u2502\r\n\u2502 577 \u2502 \u2502 # apply actual function \u2502\r\n\u2502 \u2771 578 \u2502 \u2502 out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs) \u2502 \r\n\u2502 579 \u2502 \u2502 datasets: List[\"Dataset\"] = list(out.values()) if isinstance(out, dict) else [ou \u2502 \r\n\u2502 580 \u2502 \u2502 for dataset in datasets: \u2502 \r\n\u2502 581 \u2502 \u2502 \u2502 # Remove task templates if a column mapping of the template is no longer val \u2502 \r\n\u2502 \u2502 \r\n\u2502 \/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/datasets\/arrow_dataset.py:543 in wrapper \u2502 \r\n\u2502 \u2502 \r\n\u2502 540 \u2502 \u2502 \u2502 \"output_all_columns\": self._output_all_columns, \u2502 \r\n\u2502 541 \u2502 \u2502 } \u2502 \r\n\u2502 542 \u2502 \u2502 # apply actual function \u2502 \r\n\u2502 \u2771 543 \u2502 \u2502 out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs) \u2502 \r\n\u2502 544 \u2502 \u2502 datasets: List[\"Dataset\"] = list(out.values()) if isinstance(out, dict) else [ou \u2502 \r\n\u2502 545 \u2502 \u2502 # re-apply format to the output \u2502 \r\n\u2502 546 \u2502 \u2502 for dataset in datasets: \u2502 \r\n\u2502 \u2502 \r\n\u2502 \/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/datasets\/arrow_dataset.py:3073 in map \u2502 \r\n\u2502 \u2502 \r\n\u2502 3070 \u2502 \u2502 \u2502 \u2502 \u2502 leave=False, \u2502 \r\n\u2502 3071 \u2502 \u2502 \u2502 \u2502 \u2502 desc=desc or \"Map\", \u2502 \r\n\u2502 3072 \u2502 \u2502 \u2502 \u2502 ) as pbar: \u2502 \r\n\u2502 \u2771 3073 \u2502 \u2502 \u2502 \u2502 \u2502 for rank, done, content in Dataset._map_single(**dataset_kwargs): \u2502 \r\n\u2502 3074 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 if done: \u2502 \r\n\u2502 3075 \u2502 \u2502 \u2502 \u2502 
\u2502 \u2502 \u2502 shards_done += 1 \u2502 \r\n\u2502 3076 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 logger.debug(f\"Finished processing shard number {rank} of {n \u2502 \r\n\u2502 \u2502 \r\n\u2502 \/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/datasets\/arrow_dataset.py:3464 in _map_single \u2502 \r\n\u2502 \u2502 \r\n\u2502 3461 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 buf_writer, writer, tmp_file = init_buffer_and_writer() \u2502 \r\n\u2502 3462 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 stack.enter_context(writer) \u2502 \r\n\u2502 3463 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 if isinstance(batch, pa.Table): \u2502 \r\n\u2502 \u2771 3464 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 writer.write_table(batch) \u2502 \r\n\u2502 3465 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 else: \u2502 \r\n\u2502 3466 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 writer.write_batch(batch) \u2502 \r\n\u2502 3467 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 num_examples_progress_update += num_examples_in_batch \u2502 \r\n\u2502 \u2502 \r\n\u2502 \/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/datasets\/arrow_writer.py:567 in write_table \u2502 \r\n\u2502 \u2502 \r\n\u2502 564 \u2502 \u2502 \u2502 writer_batch_size = self.writer_batch_size \u2502 \r\n\u2502 565 \u2502 \u2502 if self.pa_writer is None: \u2502 \r\n\u2502 566 \u2502 \u2502 \u2502 self._build_writer(inferred_schema=pa_table.schema) \u2502 \r\n\u2502 \u2771 567 \u2502 \u2502 pa_table = pa_table.combine_chunks() \u2502 \r\n\u2502 568 \u2502 \u2502 pa_table = table_cast(pa_table, self._schema) \u2502 \r\n\u2502 569 \u2502 \u2502 if self.embed_local_files: \u2502 \r\n\u2502 570 \u2502 \u2502 \u2502 pa_table = embed_table_storage(pa_table) \u2502 \r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f \r\nKeyboardInterrupt \r\n```","I'm on my phone so can't help that much. What I'd advice to do is to [save_to_disk](https:\/\/huggingface.co\/docs\/datasets\/package_reference\/main_classes#save_to_disk) if it's not already done and then upload the files\/folder to the Hub separately. You can find what you need in the [upload guide](https:\/\/huggingface.co\/docs\/huggingface_hub\/guides\/upload). It might not help finding the exact issue for now but at least it can unblock you. ","In your last stacktrace it interrupted while embedding external content - in case your dataset in made of images or audio files that live on your disk. Is it the case ?","Yeah, the dataset has images, audio, video and text. ","It's maybe related to https:\/\/github.com\/apache\/arrow\/issues\/34455: are you using ArrayND features ?\r\n\r\nAlso what's your `pyarrow` version ? Could you try updating to >= 12.0.1 ?","I was using pyarrow == 12.0.0\r\n\r\nI am not explicitly using ArrayND features, unless the hub API automatically converts my files to such. 
","I have now updated to pyarrow == 12.0.1 and retrying","You can also try to reduce the `max_shard_size` - Sometimes parquet has a hard time working with data bigger than 2GB","So, updating the pyarrow seems to help. It can still throw errors here and there but I can retry when that happens. It's better than hanging. \r\n\r\nHowever, I am a bit confused about something. I have uploaded my datasets, but while earlier I could see all three sets, now I can only see 1. What's going on? \r\nhttps:\/\/huggingface.co\/datasets\/Antreas\/TALI-base\r\n\r\nI have seen this happen before as well, so I deleted and reuploaded, but this dataset is way too large for me to do this. ","It's a bug on our side, I'll update the dataset viewer ;)\r\n\r\nThanks for reporting !","Apparently this happened because of bad modifications in the README.md split metadata.\r\n\r\nI fixed them in this PR: https:\/\/huggingface.co\/datasets\/Antreas\/TALI-base\/discussions\/1","@lhoestq It's a bit odd that when uploading a dataset, one set at a time \"train\", \"val\", \"test\", the push_to_hub function overwrites the readme and removes differently named sets from previous commits. i.e., you push \"val\", all is well. Then you push \"test\", and the \"val\" entry disappears from the readme, while the data remain intact. ","Also, just found another related issue. One of the many that make things hang or fail when pushing to hub. \r\n\r\nIn the following code:\r\n\r\n```python\r\ntrain_generator = lambda: data_generator(\"train\", percentage=1.0)\r\n val_generator = lambda: data_generator(\"val\")\r\n test_generator = lambda: data_generator(\"test\")\r\n\r\n train_data = datasets.Dataset.from_generator(\r\n train_generator,\r\n num_proc=mp.cpu_count(),\r\n writer_batch_size=5000,\r\n cache_dir=tali_dataset_dir,\r\n )\r\n\r\n val_data = datasets.Dataset.from_generator(\r\n val_generator,\r\n writer_batch_size=5000,\r\n num_proc=mp.cpu_count(),\r\n cache_dir=tali_dataset_dir,\r\n )\r\n\r\n test_data = datasets.Dataset.from_generator(\r\n test_generator,\r\n writer_batch_size=5000,\r\n num_proc=mp.cpu_count(),\r\n cache_dir=tali_dataset_dir,\r\n )\r\n\r\n print(f\"Pushing TALI-large to hub\")\r\n\r\n dataset = datasets.DatasetDict(\r\n {\"train\": train_data, \"val\": val_data, \"test\": test_data}\r\n )\r\n succesful_competion = False\r\n\r\n while not succesful_competion:\r\n try:\r\n dataset.push_to_hub(repo_id=\"Antreas\/TALI-large\", max_shard_size=\"2GB\")\r\n succesful_competion = True\r\n except Exception as e:\r\n print(e)\r\n ```\r\n \r\n \r\n Things keep failing in the push_to_repo step, at random places, with the following error:\r\n \r\n ```bash\r\n Pushing dataset shards to the dataset hub: 7%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258b | 67\/950 [42:41<9:22:37, 38.23s\/it]\r\nError while uploading 'data\/train-00067-of-00950-a4d179ed5a593486.parquet' to the Hub.\r\nPushing split train to the Hub.\r\nCreating parquet from Arrow format: 
100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2\/2 [00:01<00:00, 1.81ba\/s]\r\nUpload 1 LFS files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:11<00:00, 11.20s\/it]\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2\/2 [00:00<00:00, 2.48ba\/s]\r\nUpload 1 LFS files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:15<00:00, 15.30s\/it]\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2\/2 [00:00<00:00, 2.39ba\/s]\r\nUpload 1 LFS files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:11<00:00, 11.52s\/it]\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2\/2 [00:00<00:00, 2.47ba\/s]\r\nUpload 1 LFS files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:10<00:00, 10.39s\/it]\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2\/2 [00:00<00:00, 2.26ba\/s]\r\nUpload 1 LFS files: 0%| | 0\/1 [16:38\r\n```","> @lhoestq It's a bit odd that when uploading a dataset one set at a time (\"train\", \"val\", \"test\"), the push_to_hub function overwrites the readme and removes differently named sets from previous commits. i.e., you push \"val\", all is well. 
Then you push \"test\", and the \"val\" entry disappears from the readme, while the data remain intact.\r\n\r\nHmm, this shouldn't happen. What code did you run exactly? And which version of `datasets` are you using?","> I have a while loop that forces retries, but it seems that the progress itself is randomly getting lost as well. Any ideas on how to improve this? It has been blocking me for way too long.\r\n\r\nCould you also print the cause of the error (`e.__cause__`)? Or show the full stack trace when the error happens?\r\nThis would give more details about why it failed and would help investigate.","> Should I build the parquet manually and then push manually as well? If I do things manually, how can I ensure my dataset works properly with \"stream=True\"?\r\n\r\nParquet is supported out of the box ^^\r\n\r\nIf you want to make sure it works as expected you can try locally first:\r\n```python\r\nds = load_dataset(\"path\/to\/local\", streaming=True)\r\n```","@lhoestq @AntreasAntoniou I transferred this issue to the `datasets` repository, as the questions and answers are more related to this repo. Hopefully it can help other users find the bug and the fixes more easily (like updating [tqdm](https:\/\/github.com\/huggingface\/datasets\/issues\/5990#issuecomment-1607120204) and [pyarrow](https:\/\/github.com\/huggingface\/datasets\/issues\/5990#issuecomment-1607120278) or [setting a lower `max_shard_size`](https:\/\/github.com\/huggingface\/datasets\/issues\/5990#issuecomment-1607120328)).\r\n\r\n~For the initial \"pushing large dataset consistently hangs\" issue, I still think it's best to try to `save_to_disk` first and then upload it manually\/with a script (see [upload_folder](https:\/\/huggingface.co\/docs\/huggingface_hub\/guides\/upload#upload-a-folder)). It's not the most satisfying solution, but at least it would confirm where the problem comes from.~\r\n\r\n**EDIT:** removed the suggestion about saving to disk first (see https:\/\/github.com\/huggingface\/datasets\/issues\/5990#issuecomment-1607186914).","> @lhoestq @AntreasAntoniou I transferred this issue to the datasets repository, as the questions and answers are more related to this repo. Hopefully it can help other users find the bug and the fixes more easily (like updating https:\/\/github.com\/huggingface\/datasets\/issues\/5990#issuecomment-1607120204 and https:\/\/github.com\/huggingface\/datasets\/issues\/5990#issuecomment-1607120278 or https:\/\/github.com\/huggingface\/datasets\/issues\/5990#issuecomment-1607120328).\r\n\r\nthanks :)\r\n\r\n> For the initial \"pushing large dataset consistently hangs\" issue, I still think it's best to try to save_to_disk first and then upload it manually\/with a script (see [upload_folder](https:\/\/huggingface.co\/docs\/huggingface_hub\/guides\/upload#upload-a-folder)). It's not the most satisfying solution, but at least it would confirm where the problem comes from.\r\n\r\nAs I've already said in other discussions, I would not recommend pushing files saved with `save_to_disk` to the Hub; save to parquet shards and upload them instead. The Hub does not support datasets saved with `save_to_disk`, which is meant for disk only.","> As I've already said in other discussions, I would not recommend pushing files saved with save_to_disk to the Hub; save to parquet shards and upload them instead. The Hub does not support datasets saved with save_to_disk, which is meant for disk only.\r\n\r\nWell noted, thanks. That part was not clear to me :)
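","In the meantime, for anyone who wants to go the manual route: a minimal sketch of the \"parquet shards + upload\" approach recommended above. The repo id, shard count, and paths are illustrative, and this does not reproduce `push_to_hub`'s own resume logic:\r\n\r\n```python\r\nimport os\r\n\r\nfrom datasets import Dataset\r\nfrom huggingface_hub import HfApi\r\n\r\nds = Dataset.from_dict({\"text\": [\"a\", \"b\"]})  # stand-in for the real split\r\n\r\nnum_shards = 4\r\nos.makedirs(\"shards\/data\", exist_ok=True)\r\nfor index in range(num_shards):\r\n    # Write each shard as its own parquet file, mimicking the Hub layout\r\n    shard = ds.shard(num_shards=num_shards, index=index)\r\n    shard.to_parquet(f\"shards\/data\/train-{index:05d}-of-{num_shards:05d}.parquet\")\r\n\r\n# Parquet files in a dataset repo are supported out of the box,\r\n# including with load_dataset(..., streaming=True)\r\nHfApi().upload_folder(\r\n    folder_path=\"shards\",\r\n    repo_id=\"user\/my-dataset\",  # hypothetical repo id\r\n    repo_type=\"dataset\",\r\n)\r\n```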
","Sorry for not replying in a few days, I was on leave. :)\r\n\r\nSo, here is more information about the error that causes some of the delay:\r\n\r\n```bash\r\nPushing Antreas\/TALI-tiny to hub\r\nAttempting to push to hub\r\nPushing split train to the Hub.\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 6\/6 [00:24<00:00, 4.06s\/ba]\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 6\/6 [00:24<00:00, 4.15s\/ba]\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 6\/6 [00:26<00:00, 4.45s\/ba]\r\n\/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/huggingface_hub\/lfs.py:310: UserWarning: hf_transfer is enabled but does not support uploading from bytes or BinaryIO, falling back to regular upload\r\n warnings.warn(\r\nCreating parquet from Arrow format: 
100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 6\/6 [00:25<00:00, 4.26s\/ba]\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 6\/6 [00:27<00:00, 4.58s\/ba]\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 6\/6 [00:24<00:00, 4.10s\/ba]\r\nPushing dataset shards to the dataset hub: 22%|\u2588\u2588\u258e | 5\/23 [52:23<3:08:37, 628.74s\/it]\r\nException: Error while uploading 'data\/train-00005-of-00023-e224d901fd65e062.parquet' to the Hub., with stacktrace: , and type: , and \r\ncause: HTTPSConnectionPool(host='s3.us-east-1.amazonaws.com', port=443): Max retries exceeded with url: \r\n\/lfs.huggingface.co\/repos\/7c\/d3\/7cd385d9324302dc13e3986331d72d9be6fa0174c63dcfe0e08cd474f7f1e8b7\/3415166ae28c0beccbbc692f38742b8dea2c197f5c805321104e888d21d7eb90?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA4N7VTDGO27GPWFUO%2F20230627%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230627T003349Z&X-Amz-Expires=86400&X-Amz-Signature=5a12ff96f291f644134170992a6628e5f3c4e7b2e7fc3e940b4378fe11ae5390&X-Amz-SignedHeaders=host&partNumber=1&uploadId=JSsK8r63XSF.VlKQx3Vf8OW4DEVp5YIIY7LPnuapNIegsxs5EHgM1p4u0.Nn6_wlPlQnvxm8HKMxZhczKE9KB74t0etBoLcxqBIvsgey3uXBTZMAEGwU6y7CDUADiEIO&x-id=UploadPart (Caused by 
SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:2426)')))\r\nPush failed, retrying\r\nAttempting to push to hub\r\nPushing split train to the Hub.\r\n```\r\n\r\nOne issue is that the upload does not continue from the shard it failed on. It often continues from a much older shard - e.g., if it failed on shard 192\/250, it will continue from, say, 53\/250 - and this behaviour appears almost random. ","Are you using a proxy of some sort?","I am using a Kubernetes cluster built into a university VPN. ","So, other than the random connection drops here and there, any idea why the progress does not continue where it left off?\r\n\r\n```bash\r\nPushing split train to the Hub.\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 28\/28 [00:02<00:00, 10.79ba\/s]\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 28\/28 [00:02<00:00, 13.65ba\/s]\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 28\/28 [00:02<00:00, 13.39ba\/s]\r\nCreating parquet from Arrow format: 
100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 28\/28 [00:02<00:00, 13.04ba\/s]\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 28\/28 [00:02<00:00, 13.52ba\/s]\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 28\/28 [00:02<00:00, 12.28ba\/s]\r\nPushing dataset shards to the dataset hub: 20%|\u2588\u2588 | 75\/381 [1:34:39<6:26:11, 75.72s\/it]\r\nException: Error while uploading 'data\/train-00075-of-00381-1614bc251b778766.parquet' to the Hub., with stacktrace: , and type: , and \r\ncause: HTTPSConnectionPool(host='s3.us-east-1.amazonaws.com', port=443): Max retries exceeded with url: \r\n\/lfs.huggingface.co\/repos\/3b\/31\/3b311464573d8d63b137fcd5b40af1e7a5b1306843c88e80372d0117157504e5\/ed8dae933fb79ae1ef5fb1f698f5125d3e1c02977ac69438631f152bb3bfdd1e?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA4N7VTDGO27GPWFUO%2F20230629%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230629T053004Z&X-Amz-Expires=86400&X-Amz-Signature=da2b26270edfd6d0d069c015a5a432031107a8664c3f0917717e5e40c688183c&X-Amz-SignedHeaders=host&partNumber=1&uploadId=2erWGHTh3ICqBLU_QvHfnygZ2tkMWbL0rEqpJdYohCKHUHnfwMjvoBIg0TI_KSGn4rSKxUxOyqSIzFUFSRSzixZeLeneaXJOw.Qx8zLKSV5xV7HRQDj4RBesNve6cSoo&x-id=UploadPart (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:2426)')))\r\nPush failed, retrying\r\nAttempting to push to hub\r\nPushing split train to the Hub.\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 28\/28 [00:02<00:00, 12.09ba\/s]\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 28\/28 [00:02<00:00, 11.51ba\/s]\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 28\/28 [00:02<00:00, 10.77ba\/s]\r\nPushing dataset shards to the dataset hub: 20%|\u2588\u2588 | 77\/381 [1:32:50<6:06:34, 72.35s\/it]\r\nException: Error while uploading 'data\/train-00077-of-00381-368b2327a9908aab.parquet' to the Hub., with stacktrace: , and type: , and \r\ncause: HTTPSConnectionPool(host='s3.us-east-1.amazonaws.com', port=443): Max retries exceeded with url: \r\n\/lfs.huggingface.co\/repos\/3b\/31\/3b311464573d8d63b137fcd5b40af1e7a5b1306843c88e80372d0117157504e5\/9462ff2c5e61283b53b091984a22de2f41a2f6e37b681171e2eca4a998f979cb?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA4N7VTDGO27GPWFUO%2F20230629%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230629T070510Z&X-Amz-Expires=86400&X-Amz-Signature=9ab8487b93d443cd21f05476405855d46051a0771b4986bbb20f770ded21b1a4&X-Amz-SignedHeaders=host&partNumber=1&uploadId=UiHX1B.DcoAO2QmIHpWpCuNPwhXU_o1dsTkTGPqZt1P51o9k0yz.EsFD9eKpQMwgAST3jOatRG78I_JWRBeLBDYYVNp8r0TpIdeSgeUg8uwPZOCPw9y5mWOw8MWJrnBo&x-id=UploadPart (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:2426)')))\r\nPush failed, retrying\r\nAttempting to push to hub\r\nPushing split train to the Hub.\r\nPushing dataset shards to the dataset hub: 8%|\u2588 | 29\/381 [27:39<5:50:03, 59.67s\/it]\r\nMap: 36%|\u2588\u2588\u2588\u258c | 1000\/2764 [00:35<00:34, 51.63 examples\/s]\r\nMap: 72%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258e | 2000\/2764 [00:40<00:15, 49.06 examples\/s]\r\nMap: 72%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258e | 2000\/2764 [00:55<00:15, 49.06 examples\/s]\r\nMap: 
100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2764\/2764 [00:56<00:00, 48.82 examples\/s]\r\nPushing dataset shards to the dataset hub: 8%|\u2588 | 30\/381 [28:35<5:43:03, 58.64s\/it]\r\nPushing dataset shards to the dataset hub: 8%|\u2588 | 31\/381 [29:40<5:52:18, 60.40s\/it]\r\nPushing dataset shards to the dataset hub: 8%|\u2588 | 32\/381 [30:46<6:02:20, 62.29s\/it]\r\nMap: 36%|\u2588\u2588\u2588\u258c \r\n```\r\n\r\nThis is actually the issue that wastes the most time for me, and I need it fixed. Please advise on how I can go about it.\r\n\r\nNotice how the progress goes from 77\/381 to 30\/381.","If any shard is missing on the Hub, it will re-upload it. It looks like the 30th shard was missing on the Hub in your case. \r\n\r\nIt also means that the other files up to the 77th that were successfully uploaded won't be uploaded again.\r\n\r\ncc @mariosasko who might know better"],"created_at":1686408407000,"updated_at":1688133460000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nOnce I have locally built a large dataset that I want to push to hub, I use the recommended approach of .push_to_hub to get the dataset on the hub, and after pushing a few shards, it consistently hangs. This has happened over 40 times over the past week, and despite my best efforts to try to catch this happening, kill the process, and restart, it has been extremely time-wasting -- so I came to you to report this and to seek help. 
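\n\nFor reference, here is a sketch of the retry wrapper I can use so that the underlying cause of each failure (`e.__cause__`) gets logged as well; the helper name and backoff are illustrative, not part of `datasets`:\n\n```python\r\nimport time\r\n\r\ndef push_with_retries(dataset_dict, repo_id, max_shard_size=\"5GB\", max_tries=20):\r\n    for attempt in range(1, max_tries + 1):\r\n        try:\r\n            dataset_dict.push_to_hub(repo_id=repo_id, max_shard_size=max_shard_size)\r\n            return\r\n        except Exception as e:\r\n            # e.__cause__ carries the low-level error (e.g. the SSL failure)\r\n            print(f\"Attempt {attempt} failed: {e!r}; cause: {e.__cause__!r}\")\r\n            time.sleep(min(300, 30 * attempt))  # simple capped backoff\r\n    raise RuntimeError(f\"push_to_hub still failing after {max_tries} attempts\")\r\n```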
\r\n\r\nI already tried installing hf_transfer, but it doesn't support Byte file uploads so I uninstalled it.\n\n### Reproduction\n\n```python\r\nimport multiprocessing as mp\r\nimport pathlib\r\nfrom math import ceil\r\n\r\nimport datasets\r\nimport numpy as np\r\nfrom tqdm.auto import tqdm\r\n\r\nfrom tali.data.data import select_subtitles_between_timestamps\r\nfrom tali.utils import load_json\r\n\r\ntali_dataset_dir = \"\/data\/\"\r\n\r\nif __name__ == \"__main__\":\r\n full_dataset = datasets.load_dataset(\r\n \"Antreas\/TALI\", num_proc=mp.cpu_count(), cache_dir=tali_dataset_dir\r\n )\r\n\r\n def data_generator(set_name, percentage: float = 1.0):\r\n dataset = full_dataset[set_name]\r\n\r\n for item in tqdm(dataset):\r\n video_list = item[\"youtube_content_video\"]\r\n video_list = np.random.choice(\r\n video_list, int(ceil(len(video_list) * percentage))\r\n )\r\n if len(video_list) == 0:\r\n continue\r\n captions = item[\"youtube_subtitle_text\"]\r\n captions = select_subtitles_between_timestamps(\r\n subtitle_dict=load_json(\r\n captions.replace(\r\n \"\/data\/\",\r\n tali_dataset_dir,\r\n )\r\n ),\r\n starting_timestamp=0,\r\n ending_timestamp=100000000,\r\n )\r\n\r\n for video_path in video_list:\r\n temp_path = video_path.replace(\"\/data\/\", tali_dataset_dir)\r\n video_path_actual: pathlib.Path = pathlib.Path(temp_path)\r\n\r\n if video_path_actual.exists():\r\n item[\"youtube_content_video\"] = open(video_path_actual, \"rb\").read()\r\n item[\"youtube_subtitle_text\"] = captions\r\n yield item\r\n\r\n train_generator = lambda: data_generator(\"train\", percentage=0.1)\r\n val_generator = lambda: data_generator(\"val\")\r\n test_generator = lambda: data_generator(\"test\")\r\n\r\n train_data = datasets.Dataset.from_generator(\r\n train_generator,\r\n num_proc=mp.cpu_count(),\r\n writer_batch_size=5000,\r\n cache_dir=tali_dataset_dir,\r\n )\r\n\r\n val_data = datasets.Dataset.from_generator(\r\n val_generator,\r\n writer_batch_size=5000,\r\n num_proc=mp.cpu_count(),\r\n cache_dir=tali_dataset_dir,\r\n )\r\n\r\n test_data = datasets.Dataset.from_generator(\r\n test_generator,\r\n writer_batch_size=5000,\r\n num_proc=mp.cpu_count(),\r\n cache_dir=tali_dataset_dir,\r\n )\r\n\r\n dataset = datasets.DatasetDict(\r\n {\r\n \"train\": train_data,\r\n \"val\": val_data,\r\n \"test\": test_data,\r\n }\r\n )\r\n succesful_competion = False\r\n while not succesful_competion:\r\n try:\r\n dataset.push_to_hub(repo_id=\"Antreas\/TALI-small\", max_shard_size=\"5GB\")\r\n succesful_competion = True\r\n except Exception as e:\r\n print(e)\r\n```\n\n### Logs\n\n```shell\nPushing dataset shards to the dataset hub: 33%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258e | 7\/21 [24:33<49:06, 210.45s\/it]\r\nError while uploading 'data\/val-00007-of-00021-6b216a984af1a4c8.parquet' to the Hub. \r\nPushing split train to the Hub. \r\nResuming upload of the dataset shards. 
\r\nPushing dataset shards to the dataset hub: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 46\/46 [42:10<00:00, 55.01s\/it]\r\nPushing split val to the Hub. \r\nResuming upload of the dataset shards. \r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 3\/3 [00:01<00:00, 1.55ba\/s]\r\nUpload 1 LFS files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:23<00:00, 23.51s\/it]\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 3\/3 [00:02<00:00, 1.39ba\/s]\r\nUpload 1 LFS files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:30<00:00, 30.19s\/it]\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 3\/3 [00:02<00:00, 1.28ba\/s]\r\nUpload 1 LFS files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:24<00:00, 24.08s\/it]\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 3\/3 [00:02<00:00, 1.42ba\/s]\r\nUpload 1 LFS files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:23<00:00, 23.97s\/it]\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 3\/3 [00:02<00:00, 1.49ba\/s]\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 3\/3 [00:02<00:00, 1.54ba\/s^\r\nUpload 1 LFS files: 0%| | 0\/1 [04:42\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007241 \/ 0.011353 (-0.004112) | 0.004574 \/ 0.011008 (-0.006434) | 0.120481 \/ 0.038508 (0.081973) | 0.040492 \/ 0.023109 (0.017383) | 0.391399 \/ 0.275898 (0.115501) | 0.422844 \/ 0.323480 (0.099365) | 0.004441 \/ 0.007986 (-0.003545) | 0.004544 \/ 0.004328 (0.000216) | 0.089482 \/ 0.004250 (0.085231) | 0.052939 \/ 0.037052 (0.015887) | 0.393649 \/ 0.258489 (0.135160) | 0.433852 \/ 0.293841 (0.140011) | 0.035882 \/ 0.128546 (-0.092664) | 0.010172 \/ 0.075646 (-0.065474) | 0.410331 \/ 0.419271 (-0.008940) | 0.061481 \/ 0.043533 (0.017948) | 0.405066 \/ 0.255139 (0.149927) | 0.417732 \/ 0.283200 (0.134532) | 0.121647 \/ 0.141683 (-0.020035) | 1.790624 \/ 1.452155 (0.338469) | 1.863398 \/ 1.492716 (0.370681) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.250650 \/ 0.018006 (0.232644) | 0.489044 \/ 0.000490 (0.488554) | 0.010421 \/ 0.000200 (0.010222) | 0.000106 \/ 0.000054 (0.000051) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030340 \/ 0.037411 (-0.007071) | 0.128318 \/ 0.014526 (0.113792) | 0.140463 \/ 0.176557 (-0.036093) | 0.205762 \/ 0.737135 (-0.531373) | 0.147996 \/ 0.296338 (-0.148342) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.493158 \/ 0.215209 (0.277949) | 4.858346 \/ 2.077655 (2.780691) | 2.242942 \/ 1.504120 (0.738822) | 2.010092 \/ 1.541195 (0.468897) | 2.076765 \/ 1.468490 (0.608275) | 0.636669 \/ 4.584777 (-3.948108) | 
[auto-generated CML benchmark report omitted: PyArrow==8.0.0 and PyArrow==latest tables for benchmark_array_xd, benchmark_getitem_100B, benchmark_indices_mapping, benchmark_iterating and benchmark_map_filter]\n\n![](https:\/\/cml.dev\/watermark.png#d536e37b21a6dd5c122b6d8113994ec50846c5b5 \"CML watermark\")\n"],"created_at":1686301273000,"updated_at":1686749738000,"closed_at":1686749244000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5938","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5938","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5938.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5938.patch","merged_at":1686749244000},
"body":"This PR ensures that the temporary filename created is the same as the one that is locked while writing to the cache.\r\n\r\nThis PR stops using `tempfile` to generate the temporary filename.\r\n\r\nAdditionally, the behavior is now aligned for both `resume_download=True` and `resume_download=False`.\r\n\r\nRefactor `temp_file_manager` so that it uses the filename that is locked:\r\n- Use `cache_path + \".incomplete\"` when the locked one is `cache_path + \".lock\"`\r\n\r\nBefore, it was using `tempfile` inside `cache_dir`, which was not locked: although a name collision was very improbable (8 random characters), it was not impossible with a huge number of concurrent processes.\r\n\r\nMaybe related to \"Stale file handle\" issues caused by `tempfile`:\r\n- [ ] https:\/\/huggingface.co\/datasets\/tapaco\/discussions\/4\r\n- [ ] https:\/\/huggingface.co\/datasets\/xcsr\/discussions\/1\r\n- [ ] https:\/\/huggingface.co\/datasets\/covost2\/discussions\/3\r\n```\r\nError code: ConfigNamesError\r\nException: OSError\r\nMessage: [Errno 116] Stale file handle\r\nTraceback: Traceback (most recent call last):\r\n File \"\/src\/services\/worker\/src\/worker\/job_runners\/dataset\/config_names.py\", line 61, in compute_config_names_response\r\n for config in sorted(get_dataset_config_names(path=dataset, use_auth_token=use_auth_token))\r\n File \"\/src\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/inspect.py\", line 323, in get_dataset_config_names\r\n dataset_module = dataset_module_factory(\r\n File \"\/src\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/load.py\", line 1219, in dataset_module_factory\r\n raise e1 from None\r\n File \"\/src\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/load.py\", line 1188, in dataset_module_factory\r\n return HubDatasetModuleFactoryWithScript(\r\n File \"\/src\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/load.py\", line 907, in get_module\r\n dataset_readme_path = self.download_dataset_readme_file()\r\n File \"\/src\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/load.py\", line 896, in download_dataset_readme_file\r\n return cached_path(\r\n File \"\/src\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/utils\/file_utils.py\", line 183, in cached_path\r\n output_path = get_from_cache(\r\n File \"\/src\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/utils\/file_utils.py\", line 611, in get_from_cache\r\n http_get(\r\n File \"\/usr\/local\/lib\/python3.9\/tempfile.py\", line 496, in __exit__\r\n result = self.file.__exit__(exc, value, tb)\r\n OSError: [Errno 116] Stale file handle\r\n```\r\n- the stale file handle error can be raised when `tempfile` tries to close (when exiting its context manager) a file that has already been closed by another process\r\n - note that `tempfile` filenames are randomly generated but not locked in our code\r\n\r\nCC: @severo ",
"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5938\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5938\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true}
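The fix in #5938 above replaces random `tempfile` names with a deterministic `.incomplete` file guarded by the same lock as the cache path. A minimal sketch of that pattern, assuming the `filelock` package; `download_to_cache` and `fetch` are hypothetical names, not the actual `datasets.utils.file_utils` code:

```python
import os
from filelock import FileLock

def download_to_cache(cache_path: str, fetch) -> str:
    """Sketch: write to a deterministic temp name under the cache lock."""
    lock_path = cache_path + ".lock"
    temp_path = cache_path + ".incomplete"  # same name for every process, unlike tempfile
    with FileLock(lock_path):
        if not os.path.exists(cache_path):
            with open(temp_path, "wb") as f:
                fetch(f)  # stream the payload into the incomplete file
            os.rename(temp_path, cache_path)  # publish atomically on POSIX
    return cache_path
```

Because the temporary name is derived from the locked path, no two processes can ever race on the same `.incomplete` file, which is what the random `tempfile` names could not guarantee.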
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5937","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5937\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5937\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5937\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5937","id":1749388597,"node_id":"PR_kwDODunzps5SmLIs","number":5937,"title":"Avoid parallel redownload in cache","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","[auto-generated CML benchmark report omitted]\n\n![](https:\/\/cml.dev\/watermark.png#78b4d55c3cfc60e309eb033d3ed0aba5e796b6ce \"CML watermark\")\n"],
"created_at":1686298716000,"updated_at":1686745859000,"closed_at":1686745437000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5937","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5937","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5937.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5937.patch","merged_at":1686745437000},"body":"Avoid parallel redownload in cache by retrying inside the lock if the path already exists.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5937\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5937\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true}
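PR #5937 above avoids the parallel redownload by re-checking the cache inside the lock: a process that was blocked waiting for the lock finds the file already present and skips its own download. A rough sketch under the same `filelock` assumption, with hypothetical helper names rather than the actual `get_from_cache` code:

```python
import os
from filelock import FileLock

def get_cached(cache_path: str, download) -> str:
    """Sketch: re-check the cache inside the lock to avoid parallel redownloads."""
    with FileLock(cache_path + ".lock"):
        # A process that blocked here may find the file already downloaded
        # by whichever process held the lock first, so it must not download again.
        if not os.path.exists(cache_path):
            download(cache_path)
    return cache_path
```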
dtype","user":{"login":"qgallouedec","id":45557362,"node_id":"MDQ6VXNlcjQ1NTU3MzYy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/45557362?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/qgallouedec","html_url":"https:\/\/github.com\/qgallouedec","followers_url":"https:\/\/api.github.com\/users\/qgallouedec\/followers","following_url":"https:\/\/api.github.com\/users\/qgallouedec\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/qgallouedec\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/qgallouedec\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/qgallouedec\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/qgallouedec\/orgs","repos_url":"https:\/\/api.github.com\/users\/qgallouedec\/repos","events_url":"https:\/\/api.github.com\/users\/qgallouedec\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/qgallouedec\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Related, `float16` is the only dtype not supported by `Array2D` (probably by every `ArrayND`):\r\n\r\n```python\r\nfrom datasets import Array2D, Features, Dataset\r\n\r\nimport numpy as np\r\n\r\nfor dtype in [\r\n \"bool\", # ok\r\n \"int8\", # ok\r\n \"int16\", # ok\r\n \"int32\", # ok\r\n \"int64\", # ok\r\n \"uint8\", # ok\r\n \"uint16\", # ok\r\n \"uint32\", # ok\r\n \"uint64\", # ok\r\n \"float16\", # failed\r\n \"float32\", # ok\r\n \"float64\", # ok\r\n]:\r\n features = Features({\"foo\": Array2D(dtype=dtype, shape=(3, 4))})\r\n array = np.zeros((3, 4), dtype=dtype)\r\n try:\r\n dataset = Dataset.from_dict({\"foo\": [array]}, features=features)\r\n except Exception as e:\r\n print(f\"Failed for dtype={dtype}\")\r\n```","Here's something I can't explain:\r\n\r\nWhen an array is encoded in the `from_dict` method, the numpy array is converted to a list (thus losing the original dtype, which is transfromed to the nearest builtin Python type)\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/6ee61e6e695b1df9f232d47faf3a5e2b30b33737\/src\/datasets\/features\/features.py#L524-L525\r\n\r\nHowever, later on, this same data is written to memory, and it seems authorized that the data is an array (or in this case, a list of arrays). \r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/6ee61e6e695b1df9f232d47faf3a5e2b30b33737\/src\/datasets\/arrow_writer.py#L185-L186\r\n\r\nSo the question is: why convert it to a Python list? This seems to be quite expensive both in terms of write time (all data is copied) and memory (e.g., an int8 is converted to an int64).\r\n\r\nFinally, if I try to remove this step, it solves all the previous problems, and it seems to me that it doesn't break anything (the CI passes without problem).","Arrow only support 1d numpy arrays, so we convert multidim arrays to lists of 1s arrays (and keep the dtype).\r\n\r\nThough you noticed that it's concerting to lists and lose the dtype. If it's the case then it's a bug.","Ok the conversion to list shouldn't be there indeed ! 
"created_at":1686248287000,"updated_at":1686755014000,"closed_at":1686755014000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":null,"pull_request":null,
"body":"### Describe the bug\n\nCreating a dataset composed of sequences of arrays fails for most dtypes (see the code below).\n\n### Steps to reproduce the bug\n\n```python\r\nfrom datasets import Sequence, Array2D, Features, Dataset\r\n\r\nimport numpy as np\r\n\r\nfor dtype in [\r\n \"bool\", # ok\r\n \"int8\", # failed\r\n \"int16\", # failed\r\n \"int32\", # failed\r\n \"int64\", # ok\r\n \"uint8\", # failed\r\n \"uint16\", # failed\r\n \"uint32\", # failed\r\n \"uint64\", # failed\r\n \"float16\", # failed\r\n \"float32\", # failed\r\n \"float64\", # ok\r\n]:\r\n features = Features({\"foo\": Sequence(Array2D(dtype=dtype, shape=(2, 2)))})\r\n sequence = [\r\n [[1.0, 2.0], [3.0, 4.0]],\r\n [[5.0, 6.0], [7.0, 8.0]],\r\n ]\r\n array = np.array(sequence, dtype=dtype)\r\n try:\r\n dataset = Dataset.from_dict({\"foo\": [array]}, features=features)\r\n except Exception as e:\r\n print(f\"Failed for dtype={dtype}\")\r\n```\r\n\r\nTraceback for `dtype=\"int8\"`:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"\/home\/qgallouedec\/datasets\/a.py\", line 29, in <module>\r\n raise e\r\n File \"\/home\/qgallouedec\/datasets\/a.py\", line 26, in <module>\r\n dataset = Dataset.from_dict({\"foo\": [array]}, features=features)\r\n File \"\/home\/qgallouedec\/env\/lib\/python3.10\/site-packages\/datasets\/arrow_dataset.py\", line 899, in from_dict\r\n pa_table = InMemoryTable.from_pydict(mapping=mapping)\r\n File \"\/home\/qgallouedec\/env\/lib\/python3.10\/site-packages\/datasets\/table.py\", line 799, in from_pydict\r\n return cls(pa.Table.from_pydict(*args, **kwargs))\r\n File \"pyarrow\/table.pxi\", line 3725, in pyarrow.lib.Table.from_pydict\r\n File \"pyarrow\/table.pxi\", line 5254, in pyarrow.lib._from_pydict\r\n File \"pyarrow\/array.pxi\", line 350, in pyarrow.lib.asarray\r\n File \"pyarrow\/array.pxi\", line 236, in pyarrow.lib.array\r\n File \"pyarrow\/array.pxi\", line 110, in pyarrow.lib._handle_arrow_array_protocol\r\n File \"\/home\/qgallouedec\/env\/lib\/python3.10\/site-packages\/datasets\/arrow_writer.py\", line 204, in __arrow_array__\r\n out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type)\r\n File \"\/home\/qgallouedec\/env\/lib\/python3.10\/site-packages\/datasets\/table.py\", line 1833, in wrapper\r\n return func(array, *args, **kwargs)\r\n File \"\/home\/qgallouedec\/env\/lib\/python3.10\/site-packages\/datasets\/table.py\", line 2091, in cast_array_to_feature\r\n casted_values = _c(array.values, feature.feature)\r\n File \"\/home\/qgallouedec\/env\/lib\/python3.10\/site-packages\/datasets\/table.py\", line 1833, in wrapper\r\n return func(array, *args, **kwargs)\r\n File \"\/home\/qgallouedec\/env\/lib\/python3.10\/site-packages\/datasets\/table.py\", line 2139, in cast_array_to_feature\r\n return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)\r\n File \"\/home\/qgallouedec\/env\/lib\/python3.10\/site-packages\/datasets\/table.py\", line 1833, in wrapper\r\n return func(array, *args, **kwargs)\r\n File \"\/home\/qgallouedec\/env\/lib\/python3.10\/site-packages\/datasets\/table.py\", line 1967, in array_cast\r\n return pa_type.wrap_array(array)\r\n File \"pyarrow\/types.pxi\", line 879, in pyarrow.lib.BaseExtensionType.wrap_array\r\nTypeError: Incompatible storage type for extension>: expected list>, got list>\r\n```\n\n
### Expected behavior\n\nNot to fail.\n\n### Environment info\n\n\r\n- Python 3.10.6\r\n- datasets: master branch\r\n- Numpy: 1.23.4","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5936\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5936\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false}
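The dtype loss discussed in #5936 above can be reproduced with numpy alone: `tolist()` replaces typed scalars with builtin Python numbers, so re-encoding the list infers a different type. A standalone illustration, not the `datasets` encoding code:

```python
import numpy as np

arr = np.zeros((2, 2), dtype=np.int8)
as_list = arr.tolist()  # elements become plain Python ints; int8 is gone

print(arr.dtype)                 # int8
print(type(as_list[0][0]))       # <class 'int'>
print(np.array(as_list).dtype)   # int64 on most platforms, not int8
```

This is also why the round-trip is expensive: every element is copied into a boxed Python object and then widened to the platform default integer width.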
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5935","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5935\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5935\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5935\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5935","id":1748090220,"node_id":"PR_kwDODunzps5Sh9Mg","number":5935,"title":"Better row group size in push_to_hub","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["[auto-generated CML benchmark report omitted]\n\n![](https:\/\/cml.dev\/watermark.png#288e92b03bd4ec91c10c8a529b32631cfaba9fb7 \"CML watermark\")\n","Good idea!\r\n\r\nI was wondering: if we want to optimize the balance between the size of downloading a row group and the number of rows in the group, would it make sense to compute the row group size by checking the average size of the rows?\r\n\r\nE.g., 32x32 images could have a larger row group size than full-HD images, no? Relying on the size would even remove the need to check the column types.\r\n\r\n(In this proposal, we could use the computed row group size, e.g. 837, or use the nearest row group size in a list of values: 10, 100, 1000, 10000.)","Probably, but I would go for a simpler solution first :p","Sure! I wanted to understand if the idea made sense or not, but it's not for this PR.","I think it will be more useful for people who use the viewer and won't impact sequential I\/O that much.","DuckDB has a [paragraph](https:\/\/duckdb.org\/docs\/data\/parquet\/tips.html#selecting-a-row_group_size) that explains how to choose the `row_group_size`. Our default shard size is 500 MB in `push_to_hub`, so, ideally, we should aim for 64 MB row groups (and make this part configurable for power users \ud83d\ude42).\r\n\r\nSo, before merging this PR, let's add a TODO or open an issue as a reminder that this can be improved.","I moved the config values, improved the features check and mentioned the improvements we could do in the docstring :)","_The documentation is not available anymore as the PR was closed or merged._","
[auto-generated CML benchmark report omitted]\n\n![](https:\/\/cml.dev\/watermark.png#09e9f9a88edd9055b5c540e3d83b5a11d48f8ba8 \"CML watermark\")\n","
[auto-generated CML benchmark report omitted]\n\n![](https:\/\/cml.dev\/watermark.png#7fcbe5b1575c8d162b65b9397b3dfda995a4e048 \"CML watermark\")\n"],
"created_at":1686236475000,"updated_at":1686332857000,"closed_at":1686332409000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5935","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5935","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5935.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5935.patch","merged_at":1686332409000},"body":"This is a very simple change that improves `to_parquet` to use a more reasonable row group size for image and audio datasets.\r\n\r\nThis is especially useful for `push_to_hub` and will provide a better experience with the dataset viewer on HF.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5935\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5935\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true}
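For context on #5935 above: smaller row groups mean a reader such as the dataset viewer only has to fetch a small group to render a few rows of a heavy image or audio dataset. A rough sketch with plain `pyarrow`; the 64 MB target follows the DuckDB guidance quoted in the discussion, while the toy table and the averaging heuristic are illustrative assumptions, not the values the PR ships:

```python
import pyarrow as pa
import pyarrow.parquet as pq

# Toy table standing in for one dataset shard with heavy per-row payloads.
table = pa.table({"idx": list(range(10_000)), "payload": [b"\x00" * 1024] * 10_000})

# Derive a row group size from the average row size (hypothetical heuristic;
# a real implementation would clamp this to sane bounds).
target_bytes = 64 * 1024 * 1024
avg_row_bytes = max(table.nbytes // max(table.num_rows, 1), 1)
row_group_size = max(target_bytes // avg_row_bytes, 1)

# Smaller row groups => less data to fetch to display a handful of rows.
pq.write_table(table, "shard.parquet", row_group_size=row_group_size)
```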
messages","user":{"login":"Laurent2916","id":21087104,"node_id":"MDQ6VXNlcjIxMDg3MTA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/21087104?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Laurent2916","html_url":"https:\/\/github.com\/Laurent2916","followers_url":"https:\/\/api.github.com\/users\/Laurent2916\/followers","following_url":"https:\/\/api.github.com\/users\/Laurent2916\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Laurent2916\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Laurent2916\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Laurent2916\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Laurent2916\/orgs","repos_url":"https:\/\/api.github.com\/users\/Laurent2916\/repos","events_url":"https:\/\/api.github.com\/users\/Laurent2916\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Laurent2916\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I've addressed this as part of #6019, so feel free to close this PR. ","Thanks !"],"created_at":1686231104000,"updated_at":1689186063000,"closed_at":1689186062000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5934","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5934","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5934.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5934.patch","merged_at":null},"body":"Some warning messages didn't quite sound like warnings so I modified their logging levels to info.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5934\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5934\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5933","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5933\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5933\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5933\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5933","id":1747382500,"node_id":"PR_kwDODunzps5Sfi5J","number":5933,"title":"Fix `to_numpy` when None values in the 
sequence","user":{"login":"qgallouedec","id":45557362,"node_id":"MDQ6VXNlcjQ1NTU3MzYy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/45557362?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/qgallouedec","html_url":"https:\/\/github.com\/qgallouedec","followers_url":"https:\/\/api.github.com\/users\/qgallouedec\/followers","following_url":"https:\/\/api.github.com\/users\/qgallouedec\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/qgallouedec\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/qgallouedec\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/qgallouedec\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/qgallouedec\/orgs","repos_url":"https:\/\/api.github.com\/users\/qgallouedec\/repos","events_url":"https:\/\/api.github.com\/users\/qgallouedec\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/qgallouedec\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I just added the same test with dynamic shape","_The documentation is not available anymore as the PR was closed or merged._","Awesome ! I'm merging now if you don't mind :)\r\nWe should probably give you permissions to merge your own PRs when you have an approval","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009980 \/ 0.011353 (-0.001373) | 0.005709 \/ 0.011008 (-0.005300) | 0.132185 \/ 0.038508 (0.093677) | 0.039299 \/ 0.023109 (0.016190) | 0.400168 \/ 0.275898 (0.124270) | 0.470582 \/ 0.323480 (0.147102) | 0.007753 \/ 0.007986 (-0.000233) | 0.005196 \/ 0.004328 (0.000868) | 0.093698 \/ 0.004250 (0.089448) | 0.052631 \/ 0.037052 (0.015579) | 0.430347 \/ 0.258489 (0.171858) | 0.460162 \/ 0.293841 (0.166321) | 0.057511 \/ 0.128546 (-0.071035) | 0.013944 \/ 0.075646 (-0.061702) | 0.459008 \/ 0.419271 (0.039737) | 0.075532 \/ 0.043533 (0.031999) | 0.405165 \/ 0.255139 (0.150026) | 0.456142 \/ 0.283200 (0.172942) | 0.117309 \/ 0.141683 (-0.024374) | 1.945787 \/ 1.452155 (0.493633) | 2.067162 \/ 1.492716 (0.574446) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.285755 \/ 0.018006 (0.267749) | 0.619965 \/ 0.000490 (0.619476) | 0.005071 \/ 0.000200 (0.004871) | 0.000114 \/ 0.000054 (0.000059) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.031112 \/ 0.037411 (-0.006299) | 0.128514 \/ 0.014526 (0.113988) | 0.137161 \/ 0.176557 (-0.039396) | 0.211363 \/ 0.737135 (-0.525772) | 0.151045 \/ 0.296338 (-0.145293) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.609361 \/ 0.215209 (0.394152) | 6.124844 \/ 2.077655 (4.047189) | 2.440757 \/ 1.504120 (0.936637) | 2.034495 \/ 1.541195 (0.493300) | 2.047192 \/ 1.468490 (0.578702) | 0.883171 \/ 4.584777 (-3.701606) | 
5.470552 \/ 3.745712 (1.724840) | 4.401696 \/ 5.269862 (-0.868165) | 2.378674 \/ 4.565676 (-2.187003) | 0.108065 \/ 0.424275 (-0.316210) | 0.013239 \/ 0.007607 (0.005632) | 0.830957 \/ 0.226044 (0.604913) | 8.090659 \/ 2.268929 (5.821731) | 3.289203 \/ 55.444624 (-52.155422) | 2.500777 \/ 6.876477 (-4.375700) | 2.561440 \/ 2.142072 (0.419367) | 1.064893 \/ 4.805227 (-3.740334) | 0.220486 \/ 6.500664 (-6.280178) | 0.079507 \/ 0.075469 (0.004038) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.544334 \/ 1.841788 (-0.297454) | 17.878997 \/ 8.074308 (9.804689) | 18.952191 \/ 10.191392 (8.760799) | 0.245166 \/ 0.680424 (-0.435258) | 0.028022 \/ 0.534201 (-0.506179) | 0.517828 \/ 0.579283 (-0.061455) | 0.618988 \/ 0.434364 (0.184624) | 0.589742 \/ 0.540337 (0.049405) | 0.670902 \/ 1.386936 (-0.716034) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009616 \/ 0.011353 (-0.001737) | 0.006098 \/ 0.011008 (-0.004911) | 0.100301 \/ 0.038508 (0.061793) | 0.037792 \/ 0.023109 (0.014683) | 0.484667 \/ 0.275898 (0.208769) | 0.519286 \/ 0.323480 (0.195806) | 0.007427 \/ 0.007986 (-0.000558) | 0.007172 \/ 0.004328 (0.002844) | 0.104429 \/ 0.004250 (0.100179) | 0.056567 \/ 0.037052 (0.019515) | 0.502641 \/ 0.258489 (0.244152) | 0.549629 \/ 0.293841 (0.255788) | 0.049574 \/ 0.128546 (-0.078972) | 0.015223 \/ 0.075646 (-0.060424) | 0.113947 \/ 0.419271 (-0.305324) | 0.064585 \/ 0.043533 (0.021053) | 0.512962 \/ 0.255139 (0.257823) | 0.507218 \/ 0.283200 (0.224019) | 0.122194 \/ 0.141683 (-0.019488) | 1.927821 \/ 1.452155 (0.475667) | 2.051161 \/ 1.492716 (0.558445) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.291350 \/ 0.018006 (0.273344) | 0.588099 \/ 0.000490 (0.587610) | 0.001368 \/ 0.000200 (0.001168) | 0.000153 \/ 0.000054 (0.000099) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030604 \/ 0.037411 (-0.006807) | 0.126810 \/ 0.014526 (0.112285) | 0.139309 \/ 0.176557 (-0.037248) | 0.208030 \/ 0.737135 (-0.529105) | 0.138985 \/ 0.296338 (-0.157353) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.681254 \/ 0.215209 (0.466045) | 6.753856 \/ 2.077655 (4.676201) | 2.780704 \/ 1.504120 (1.276585) | 2.475205 \/ 1.541195 (0.934010) | 2.486784 \/ 1.468490 (1.018294) | 0.879223 \/ 4.584777 (-3.705554) | 
5.662294 \/ 3.745712 (1.916582) | 2.698705 \/ 5.269862 (-2.571156) | 1.660620 \/ 4.565676 (-2.905057) | 0.112218 \/ 0.424275 (-0.312057) | 0.014211 \/ 0.007607 (0.006604) | 0.796957 \/ 0.226044 (0.570913) | 8.180897 \/ 2.268929 (5.911969) | 3.540419 \/ 55.444624 (-51.904205) | 2.899467 \/ 6.876477 (-3.977010) | 2.870306 \/ 2.142072 (0.728233) | 1.069537 \/ 4.805227 (-3.735690) | 0.211281 \/ 6.500664 (-6.289383) | 0.078898 \/ 0.075469 (0.003429) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.666790 \/ 1.841788 (-0.174998) | 18.302127 \/ 8.074308 (10.227819) | 21.317546 \/ 10.191392 (11.126153) | 0.242795 \/ 0.680424 (-0.437629) | 0.026754 \/ 0.534201 (-0.507447) | 0.493375 \/ 0.579283 (-0.085908) | 0.605400 \/ 0.434364 (0.171036) | 0.586888 \/ 0.540337 (0.046550) | 0.722809 \/ 1.386936 (-0.664127) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#ce2328e7b1d62998b22510492530af55d4493b73 \"CML watermark\")\n"],"created_at":1686213536000,"updated_at":1686318581000,"closed_at":1686317028000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5933","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5933","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5933.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5933.patch","merged_at":1686317028000},"body":"Closes #5927 \r\nI've realized that the error was overlooked during testing due to the presence of only one None value in the sequence.\r\nUnfortunately, it was the only case where the function works as expected. When the sequence contained more than one None value, the function failed. 
Consequently, I've updated the tests to include sequences with multiple None values.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5933\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5933\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5932","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5932\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5932\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5932\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5932","id":1746249161,"node_id":"PR_kwDODunzps5Sbrzo","number":5932,"title":"[doc build] Use secrets","user":{"login":"mishig25","id":11827707,"node_id":"MDQ6VXNlcjExODI3NzA3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/11827707?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mishig25","html_url":"https:\/\/github.com\/mishig25","followers_url":"https:\/\/api.github.com\/users\/mishig25\/followers","following_url":"https:\/\/api.github.com\/users\/mishig25\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mishig25\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mishig25\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mishig25\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mishig25\/orgs","repos_url":"https:\/\/api.github.com\/users\/mishig25\/repos","events_url":"https:\/\/api.github.com\/users\/mishig25\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mishig25\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008499 \/ 0.011353 (-0.002854) | 0.006155 \/ 0.011008 (-0.004853) | 0.124032 \/ 0.038508 (0.085524) | 0.037337 \/ 0.023109 (0.014228) | 0.389274 \/ 0.275898 (0.113376) | 0.427736 \/ 0.323480 (0.104257) | 0.006929 \/ 0.007986 (-0.001057) | 0.005017 \/ 0.004328 (0.000689) | 0.096356 \/ 0.004250 (0.092105) | 0.055694 \/ 0.037052 (0.018642) | 0.391417 \/ 0.258489 (0.132928) | 0.448098 \/ 0.293841 (0.154257) | 0.042442 \/ 0.128546 (-0.086105) | 0.013456 \/ 0.075646 (-0.062190) | 0.423502 \/ 0.419271 (0.004230) | 0.062919 \/ 0.043533 (0.019386) | 0.384317 \/ 0.255139 (0.129178) | 0.410851 \/ 0.283200 (0.127652) | 0.112807 \/ 0.141683 (-0.028875) | 1.746050 \/ 1.452155 (0.293895) | 1.977974 \/ 1.492716 (0.485257) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.306382 \/ 0.018006 (0.288375) | 0.620310 \/ 0.000490 (0.619820) | 0.009309 \/ 0.000200 (0.009109) | 0.000106 \/ 0.000054 (0.000052) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.026900 \/ 0.037411 (-0.010511) | 0.140125 \/ 0.014526 (0.125599) | 0.136295 \/ 0.176557 (-0.040261) | 0.207721 \/ 0.737135 (-0.529414) | 0.146328 \/ 0.296338 (-0.150011) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.616712 \/ 0.215209 (0.401503) | 6.237820 \/ 2.077655 (4.160166) | 2.503809 \/ 1.504120 (0.999689) | 2.129739 \/ 1.541195 (0.588544) | 2.160768 \/ 1.468490 (0.692277) | 0.971273 \/ 4.584777 (-3.613504) | 
5.687161 \/ 3.745712 (1.941449) | 2.738148 \/ 5.269862 (-2.531713) | 1.692695 \/ 4.565676 (-2.872981) | 0.113701 \/ 0.424275 (-0.310574) | 0.014809 \/ 0.007607 (0.007202) | 0.774795 \/ 0.226044 (0.548750) | 7.660012 \/ 2.268929 (5.391083) | 3.253036 \/ 55.444624 (-52.191588) | 2.607498 \/ 6.876477 (-4.268979) | 2.681678 \/ 2.142072 (0.539606) | 1.095275 \/ 4.805227 (-3.709952) | 0.239078 \/ 6.500664 (-6.261586) | 0.081034 \/ 0.075469 (0.005565) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.574547 \/ 1.841788 (-0.267240) | 18.323566 \/ 8.074308 (10.249258) | 19.274482 \/ 10.191392 (9.083090) | 0.210275 \/ 0.680424 (-0.470149) | 0.031843 \/ 0.534201 (-0.502358) | 0.514843 \/ 0.579283 (-0.064440) | 0.633782 \/ 0.434364 (0.199418) | 0.588569 \/ 0.540337 (0.048232) | 0.721401 \/ 1.386936 (-0.665535) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008866 \/ 0.011353 (-0.002487) | 0.006460 \/ 0.011008 (-0.004548) | 0.121337 \/ 0.038508 (0.082829) | 0.033896 \/ 0.023109 (0.010786) | 0.455702 \/ 0.275898 (0.179804) | 0.509685 \/ 0.323480 (0.186205) | 0.007650 \/ 0.007986 (-0.000336) | 0.005578 \/ 0.004328 (0.001250) | 0.098505 \/ 0.004250 (0.094255) | 0.056122 \/ 0.037052 (0.019069) | 0.478483 \/ 0.258489 (0.219994) | 0.560008 \/ 0.293841 (0.266167) | 0.044926 \/ 0.128546 (-0.083620) | 0.014562 \/ 0.075646 (-0.061085) | 0.115027 \/ 0.419271 (-0.304244) | 0.066494 \/ 0.043533 (0.022961) | 0.463434 \/ 0.255139 (0.208296) | 0.513856 \/ 0.283200 (0.230656) | 0.126436 \/ 0.141683 (-0.015247) | 1.874729 \/ 1.452155 (0.422575) | 1.925080 \/ 1.492716 (0.432364) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.012672 \/ 0.018006 (-0.005334) | 0.615797 \/ 0.000490 (0.615307) | 0.001606 \/ 0.000200 (0.001406) | 0.000118 \/ 0.000054 (0.000064) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.031104 \/ 0.037411 (-0.006307) | 0.130107 \/ 0.014526 (0.115581) | 0.140587 \/ 0.176557 (-0.035970) | 0.205081 \/ 0.737135 (-0.532054) | 0.144068 \/ 0.296338 (-0.152270) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.646549 \/ 0.215209 (0.431340) | 6.403962 \/ 2.077655 (4.326307) | 2.812594 \/ 1.504120 (1.308474) | 2.478480 \/ 1.541195 (0.937285) | 2.552385 \/ 1.468490 (1.083895) | 0.991987 \/ 4.584777 (-3.592790) | 
5.777917 \/ 3.745712 (2.032205) | 5.697830 \/ 5.269862 (0.427969) | 2.370583 \/ 4.565676 (-2.195094) | 0.109905 \/ 0.424275 (-0.314370) | 0.013801 \/ 0.007607 (0.006193) | 0.799932 \/ 0.226044 (0.573888) | 8.155672 \/ 2.268929 (5.886743) | 3.711662 \/ 55.444624 (-51.732963) | 3.042164 \/ 6.876477 (-3.834312) | 3.073549 \/ 2.142072 (0.931477) | 1.137515 \/ 4.805227 (-3.667712) | 0.231266 \/ 6.500664 (-6.269398) | 0.080893 \/ 0.075469 (0.005424) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.669210 \/ 1.841788 (-0.172577) | 18.747144 \/ 8.074308 (10.672836) | 21.084589 \/ 10.191392 (10.893197) | 0.241379 \/ 0.680424 (-0.439045) | 0.029473 \/ 0.534201 (-0.504728) | 0.524605 \/ 0.579283 (-0.054678) | 0.622852 \/ 0.434364 (0.188488) | 0.604941 \/ 0.540337 (0.064604) | 0.715978 \/ 1.386936 (-0.670958) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#142484a60b1330359d7713e906fc9e5e30aa9f64 \"CML watermark\")\n","Cool ! what about `.github\/workflows\/build_pr_documentation.yml` and `.github\/workflows\/delete_doc_comment.yml` ?","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.005973 \/ 0.011353 (-0.005380) | 0.004389 \/ 0.011008 (-0.006620) | 0.096076 \/ 0.038508 (0.057568) | 0.031569 \/ 0.023109 (0.008460) | 0.328300 \/ 0.275898 (0.052402) | 0.359356 \/ 0.323480 (0.035876) | 0.005378 \/ 0.007986 (-0.002607) | 0.003703 \/ 0.004328 (-0.000625) | 0.075251 \/ 0.004250 (0.071000) | 0.042340 \/ 0.037052 (0.005287) | 0.346103 \/ 0.258489 (0.087614) | 0.379896 \/ 0.293841 (0.086055) | 0.027493 \/ 0.128546 (-0.101053) | 0.009033 \/ 0.075646 (-0.066613) | 0.327829 \/ 0.419271 (-0.091442) | 0.064074 \/ 0.043533 (0.020541) | 0.337703 \/ 0.255139 (0.082564) | 0.355335 \/ 0.283200 (0.072136) | 0.101179 \/ 0.141683 (-0.040504) | 1.471738 \/ 1.452155 (0.019584) | 1.539031 \/ 1.492716 (0.046315) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.194097 \/ 0.018006 (0.176091) | 0.434190 \/ 0.000490 (0.433701) | 0.005730 \/ 0.000200 (0.005530) | 0.000088 \/ 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025634 \/ 0.037411 (-0.011778) | 0.105080 \/ 0.014526 (0.090555) | 0.116508 \/ 0.176557 (-0.060049) | 0.173867 \/ 0.737135 (-0.563269) | 0.117749 \/ 0.296338 (-0.178590) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.401566 \/ 0.215209 (0.186357) | 4.003558 \/ 2.077655 (1.925903) | 1.802756 \/ 1.504120 (0.298636) | 1.604222 \/ 1.541195 (0.063027) | 1.656617 \/ 1.468490 (0.188127) | 0.523385 \/ 4.584777 (-4.061392) | 
3.744292 \/ 3.745712 (-0.001420) | 1.794295 \/ 5.269862 (-3.475567) | 1.044690 \/ 4.565676 (-3.520987) | 0.064992 \/ 0.424275 (-0.359284) | 0.011542 \/ 0.007607 (0.003935) | 0.507830 \/ 0.226044 (0.281785) | 5.061574 \/ 2.268929 (2.792645) | 2.252896 \/ 55.444624 (-53.191729) | 1.912551 \/ 6.876477 (-4.963926) | 2.073510 \/ 2.142072 (-0.068562) | 0.642148 \/ 4.805227 (-4.163079) | 0.140151 \/ 6.500664 (-6.360513) | 0.062623 \/ 0.075469 (-0.012846) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.180367 \/ 1.841788 (-0.661421) | 14.263475 \/ 8.074308 (6.189167) | 12.917251 \/ 10.191392 (2.725859) | 0.143815 \/ 0.680424 (-0.536608) | 0.017286 \/ 0.534201 (-0.516915) | 0.388411 \/ 0.579283 (-0.190872) | 0.430512 \/ 0.434364 (-0.003851) | 0.466595 \/ 0.540337 (-0.073742) | 0.564545 \/ 1.386936 (-0.822391) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006059 \/ 0.011353 (-0.005294) | 0.004419 \/ 0.011008 (-0.006590) | 0.074206 \/ 0.038508 (0.035697) | 0.031180 \/ 0.023109 (0.008071) | 0.380031 \/ 0.275898 (0.104133) | 0.410373 \/ 0.323480 (0.086893) | 0.005397 \/ 0.007986 (-0.002589) | 0.003952 \/ 0.004328 (-0.000376) | 0.074426 \/ 0.004250 (0.070176) | 0.046256 \/ 0.037052 (0.009203) | 0.385543 \/ 0.258489 (0.127054) | 0.430724 \/ 0.293841 (0.136883) | 0.028052 \/ 0.128546 (-0.100494) | 0.008810 \/ 0.075646 (-0.066836) | 0.080749 \/ 0.419271 (-0.338522) | 0.046746 \/ 0.043533 (0.003214) | 0.380325 \/ 0.255139 (0.125186) | 0.398901 \/ 0.283200 (0.115701) | 0.099607 \/ 0.141683 (-0.042076) | 1.433343 \/ 1.452155 (-0.018812) | 1.520447 \/ 1.492716 (0.027730) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.202232 \/ 0.018006 (0.184225) | 0.431342 \/ 0.000490 (0.430852) | 0.001020 \/ 0.000200 (0.000820) | 0.000089 \/ 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.028762 \/ 0.037411 (-0.008649) | 0.111777 \/ 0.014526 (0.097251) | 0.119283 \/ 0.176557 (-0.057273) | 0.168151 \/ 0.737135 (-0.568985) | 0.126093 \/ 0.296338 (-0.170245) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.442689 \/ 0.215209 (0.227480) | 4.369202 \/ 2.077655 (2.291547) | 2.167703 \/ 1.504120 (0.663583) | 1.960580 \/ 1.541195 (0.419385) | 2.001459 \/ 1.468490 (0.532969) | 0.527169 \/ 4.584777 (-4.057608) | 
3.738987 \/ 3.745712 (-0.006726) | 1.819002 \/ 5.269862 (-3.450860) | 1.082786 \/ 4.565676 (-3.482891) | 0.066209 \/ 0.424275 (-0.358066) | 0.011549 \/ 0.007607 (0.003942) | 0.545959 \/ 0.226044 (0.319915) | 5.466655 \/ 2.268929 (3.197727) | 2.671448 \/ 55.444624 (-52.773176) | 2.340968 \/ 6.876477 (-4.535509) | 2.358805 \/ 2.142072 (0.216733) | 0.649456 \/ 4.805227 (-4.155771) | 0.142009 \/ 6.500664 (-6.358655) | 0.064199 \/ 0.075469 (-0.011270) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.259819 \/ 1.841788 (-0.581969) | 14.456988 \/ 8.074308 (6.382680) | 14.478982 \/ 10.191392 (4.287590) | 0.163156 \/ 0.680424 (-0.517268) | 0.017090 \/ 0.534201 (-0.517111) | 0.391339 \/ 0.579283 (-0.187944) | 0.422021 \/ 0.434364 (-0.012343) | 0.465340 \/ 0.540337 (-0.074997) | 0.564517 \/ 1.386936 (-0.822419) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#97358c88f996a65f49923ec215358044e4146a95 \"CML watermark\")\n","> .github\/workflows\/delete_doc_comment.yml \r\n\r\nis already updated https:\/\/github.com\/huggingface\/datasets\/pull\/5932\/files\r\n\r\n> .github\/workflows\/build_pr_documentation.yml\r\n\r\nindeed no changes are needed"],"created_at":1686154179000,"updated_at":1686305818000,"closed_at":1686304396000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5932","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5932","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5932.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5932.patch","merged_at":1686304396000},"body":"Companion pr to https:\/\/github.com\/huggingface\/doc-builder\/pull\/379","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5932\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5932\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5931","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5931\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5931\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5931\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5931","id":1745408784,"node_id":"I_kwDODunzps5oCNMQ","number":5931,"title":"`datasets.map` not reusing cached copy by 
default","user":{"login":"bhavitvyamalik","id":19718818,"node_id":"MDQ6VXNlcjE5NzE4ODE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19718818?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhavitvyamalik","html_url":"https:\/\/github.com\/bhavitvyamalik","followers_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/followers","following_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/repos","events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This can happen when a map transform cannot be hashed deterministically (e.g., an object referenced by the transform changes its state after the first call - an issue with fast tokenizers). The solution is to provide `cache_file_name` in the `map` call to check this file for the cached result instead of relying on the default caching mechanism."],"created_at":1686128613000,"updated_at":1687364140000,"closed_at":1687364140000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\r\n\r\nWhen I load the dataset from local directory, it's cached copy is picked up after first time. However, for `map` operation, the operation is applied again and cached copy is not picked up. Is there any way to pick cached copy instead of processing it again? The only solution I could think of was to use `save_to_disk` after my last transform and then use that in my DataLoader pipeline. 
Are there any other solutions for the same?\r\n\r\nOne more thing, my dataset is occupying 6GB storage memory after I use `map`, is there any way I can reduce that memory usage?\r\n\r\n\r\n### Steps to reproduce the bug\r\n\r\n```\r\n# make sure that dataset decodes audio with correct sampling rate\r\ndataset_sampling_rate = next(iter(self.raw_datasets.values())).features[\"audio\"].sampling_rate\r\nif dataset_sampling_rate != self.feature_extractor.sampling_rate:\r\n self.raw_datasets = self.raw_datasets.cast_column(\r\n \"audio\", datasets.features.Audio(sampling_rate=self.feature_extractor.sampling_rate)\r\n )\r\n\r\nvectorized_datasets = self.raw_datasets.map(\r\n self.prepare_dataset,\r\n remove_columns=next(iter(self.raw_datasets.values())).column_names,\r\n num_proc=self.num_workers,\r\n desc=\"preprocess datasets\",\r\n)\r\n# filter data that is longer than max_input_length\r\nself.vectorized_datasets = vectorized_datasets.filter(\r\n self.is_audio_in_length_range,\r\n num_proc=self.num_workers,\r\n input_columns=[\"input_length\"],\r\n )\r\n\r\ndef prepare_dataset(self, batch):\r\n # load audio\r\n sample = batch[\"audio\"]\r\n inputs = self.feature_extractor(sample[\"array\"], sampling_rate=sample[\"sampling_rate\"])\r\n batch[\"input_values\"] = inputs.input_values[0]\r\n batch[\"input_length\"] = len(batch[\"input_values\"])\r\n\r\n batch[\"labels\"] = self.tokenizer(batch[\"target_text\"]).input_ids\r\n return batch\r\n\r\n```\r\n\r\n### Expected behavior\r\n\r\n`map` to use cached copy and if possible an alternative technique to reduce memory usage after using `map`\r\n\r\n### Environment info\r\n\r\n\r\n- `datasets` version: 2.12.0\r\n- Platform: Linux-3.10.0-1160.71.1.el7.x86_64-x86_64-with-glibc2.17\r\n- Python version: 3.8.16\r\n- Huggingface_hub version: 0.15.1\r\n- PyArrow version: 12.0.0\r\n- Pandas version: 2.0.2\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5931\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5931\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5930","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5930\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5930\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5930\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5930","id":1745184395,"node_id":"I_kwDODunzps5oBWaL","number":5930,"title":"loading private custom dataset script - authentication 
error","user":{"login":"flckv","id":103381497,"node_id":"U_kgDOBil5-Q","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/103381497?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/flckv","html_url":"https:\/\/github.com\/flckv","followers_url":"https:\/\/api.github.com\/users\/flckv\/followers","following_url":"https:\/\/api.github.com\/users\/flckv\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/flckv\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/flckv\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/flckv\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/flckv\/orgs","repos_url":"https:\/\/api.github.com\/users\/flckv\/repos","events_url":"https:\/\/api.github.com\/users\/flckv\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/flckv\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This issue seems to have been resolved, so I'm closing it."],"created_at":1686121103000,"updated_at":1686840561000,"closed_at":1686840560000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nTrain model with my custom dataset stored in HuggingFace and loaded with the loading script requires authentication but I am not sure how ?\r\n\r\n\r\nI am logged in in the terminal, in the browser. I receive this error: \r\n\r\n\r\n\/python3.8\/site-packages\/datasets\/utils\/file_utils.py\", line 566, in get_from_cache\r\n raise ConnectionError(f\"Couldn't reach {url} ({repr(head_error)})\")\r\nConnectionError: Couldn't reach https:\/\/huggingface.co\/datasets\/fkov\/s\/blob\/main\/data\/s\/train\/labels `(ConnectionError('Unauthorized for URL `https:\/\/huggingface.co\/datasets\/fkov\/s\/blob\/main\/data\/s\/train\/labels. Please use the parameter `**`use_auth_token=True`**` after logging in with `**`huggingface-cli login`**`'))\r\n\r\nwhen I added: `use_auth_token=True` and logged in via terminal then I received error:\r\n\r\nor the same error in different format: \r\nraise ConnectionError(f\"`Couldn't reach {url} (error {response.status_code}`)\")\r\nConnectionError: Couldn't reach https:\/\/huggingface.co\/datasets\/fkov\/s\/blob\/main\/data\/s\/train\/labels (`error 401`)\r\n\r\n\r\n\n\n### Steps to reproduce the bug\n\n1. cloned transformers library locally:\r\nhttps:\/\/huggingface.co\/docs\/transformers\/v4.15.0\/examples :\r\n\r\n> git clone https:\/\/github.com\/huggingface\/transformers\r\n> cd transformers\r\n> pip install .\r\n> cd \/transformers\/examples\/pytorch\/audio-classification\r\n> pip install -r requirements.txt\r\n\r\n2. created **loading script** \r\n> https:\/\/huggingface.co\/docs\/datasets\/dataset_script added next to dataset:\r\n\r\n3. uploaded **private custom dataset** with loading script to HuggingFace\r\n> https:\/\/huggingface.co\/docs\/datasets\/dataset_script\r\n\r\n4. added dataset loading script to **local directory** in the above cloned transformers library:\r\n> cd \/transformers\/examples\/pytorch\/audio-classification\r\n\r\n5. logged in to HuggingFace on local terminal with :\r\n> **huggingface-cli login**\r\n\r\n6. 
run the model with the custom dataset stored on HuggingFace with code: https:\/\/github.com\/huggingface\/transformers\/blob\/main\/examples\/pytorch\/audio-classification\/README.md\r\n\r\n cd \/transformers\/examples\/pytorch\/audio-classification\r\n> python run_audio_classification.py \\\r\n> --model_name_or_path facebook\/wav2vec2-base \\\r\n> --output_dir l\/users\/flck\/outputs\/wav2vec2-base-s \\\r\n> --overwrite_output_dir \\\r\n> --dataset_name s \\\r\n> --dataset_config_name s \\\r\n> --remove_unused_columns False \\\r\n> --do_train \\\r\n> --do_eval \\\r\n> --fp16 \\\r\n> --learning_rate 3e-5 \\\r\n> --max_length_seconds 1 \\\r\n> --attention_mask False \\\r\n> --warmup_ratio 0.1 \\\r\n> --num_train_epochs 5 \\\r\n> --per_device_train_batch_size 32 \\\r\n> --gradient_accumulation_steps 4 \\\r\n> --per_device_eval_batch_size 32 \\\r\n> --dataloader_num_workers 4 \\\r\n> --logging_strategy steps \\\r\n> --logging_steps 10 \\\r\n> --evaluation_strategy epoch \\\r\n> --save_strategy epoch \\\r\n> --load_best_model_at_end True \\\r\n> --metric_for_best_model accuracy \\\r\n> --save_total_limit 3 \\\r\n> --seed 0 \\\r\n> --push_to_hub \\\r\n> **--use_auth_token=True** \r\n\r\n\n\n### Expected behavior\n\nBe able to train a model the https:\/\/github.com\/huggingface\/transformers\/blob\/main\/examples\/pytorch\/audio-classification\/ run_audio_classification.py with private custom dataset stored on HuggingFace.\n\n### Environment info\n\n- datasets version: 2.12.0 \r\n- `transformers` version: 4.30.0.dev0\r\n- Platform: Linux-5.4.204-ql-generic-12.0-19-x86_64-with-glibc2.17\r\n- Python version: 3.8.12\r\n- Huggingface_hub version: 0.15.1\r\n- Safetensors version: 0.3.1\r\n- PyTorch version (GPU?): 2.0.1+cu117 (True)\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.24.3\r\n[pip3] torch==2.0.1\r\n[pip3] torchaudio==2.0.2\r\n[conda] numpy 1.24.3 pypi_0 pypi\r\n[conda] torch 2.0.1 pypi_0 pypi\r\n[conda] torchaudio 2.0.2 pypi_0 pypi\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5930\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5930\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5929","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5929\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5929\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5929\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5929","id":1744478456,"node_id":"I_kwDODunzps5n-qD4","number":5929,"title":"Importing PyTorch reduces multiprocessing performance for 
map","user":{"login":"Maxscha","id":12814709,"node_id":"MDQ6VXNlcjEyODE0NzA5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12814709?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Maxscha","html_url":"https:\/\/github.com\/Maxscha","followers_url":"https:\/\/api.github.com\/users\/Maxscha\/followers","following_url":"https:\/\/api.github.com\/users\/Maxscha\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Maxscha\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Maxscha\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Maxscha\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Maxscha\/orgs","repos_url":"https:\/\/api.github.com\/users\/Maxscha\/repos","events_url":"https:\/\/api.github.com\/users\/Maxscha\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Maxscha\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! The times match when I run this code locally or on Colab.\r\n\r\nAlso, we use `multiprocess`, not `multiprocessing`, for parallelization, and torch's `__init__.py` (executed on `import torch` ) slightly modifies the latter.","Hey Mariosasko,\r\n\r\nThanks for looking into it. We further did some investigations after your comment and figured out it's only affecting some hardware\/software configurations with the `pytorch` installation of `conda-forge`. Based on this we found the following issue in PyTorch: https:\/\/github.com\/pytorch\/pytorch\/issues\/102269 with a quick fix for now.\r\n\r\nSince it seems to be a deeper issue with forking processes, the difference between`multiprocess` and `multiprocessing` didn't make a difference.\r\n\r\nClosing this, since the issue comes from `pytorch` not `dataset`. 
\r\n"],"created_at":1686080545000,"updated_at":1686920952000,"closed_at":1686920952000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\r\n\r\nI noticed that the performance of my dataset preprocessing with `map(...,num_proc=32)` decreases when PyTorch is imported.\r\n\r\n### Steps to reproduce the bug\r\n\r\nI created two example scripts to reproduce this behavior:\r\n\r\n```\r\nimport datasets\r\ndatasets.disable_caching()\r\n\r\nfrom datasets import Dataset\r\nimport time\r\n \r\nPROC=32\r\n\r\nif __name__ == \"__main__\":\r\n dataset = [True] * 10000000\r\n dataset = Dataset.from_dict({'train': dataset})\r\n \r\n\r\n start = time.time()\r\n dataset.map(lambda x: x, num_proc=PROC)\r\n end = time.time()\r\n print(end - start)\r\n```\r\nTakes around 4 seconds on my machine.\r\n\r\nWhile the same code, but with an `import torch`:\r\n```\r\nimport datasets\r\ndatasets.disable_caching()\r\n\r\nfrom datasets import Dataset\r\nimport time\r\nimport torch\r\n \r\nPROC=32\r\n\r\nif __name__ == \"__main__\":\r\n dataset = [True] * 10000000\r\n dataset = Dataset.from_dict({'train': dataset})\r\n \r\n\r\n start = time.time()\r\n dataset.map(lambda x: x, num_proc=PROC)\r\n end = time.time()\r\n print(end - start)\r\n```\r\ntakes around 22 seconds.\r\n\r\n\r\n\r\n### Expected behavior\r\n\r\nI would expect that the import of torch to not have such a significant effect on the performance of map using multiprocessing.\r\n\r\n\r\n### Environment info\r\n\r\n- `datasets` version: 2.12.0\r\n- Platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.35\r\n- Python version: 3.11.3\r\n- Huggingface_hub version: 0.15.1\r\n- PyArrow version: 12.0.0\r\n- Pandas version: 2.0.2\r\n- torch: 2.0.1","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5929\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5929\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5928","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5928\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5928\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5928\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5928","id":1744098371,"node_id":"PR_kwDODunzps5SUXPC","number":5928,"title":"Fix link to quickstart docs in 
README.md","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006693 \/ 0.011353 (-0.004660) | 0.004331 \/ 0.011008 (-0.006677) | 0.098022 \/ 0.038508 (0.059514) | 0.032764 \/ 0.023109 (0.009654) | 0.295812 \/ 0.275898 (0.019914) | 0.325029 \/ 0.323480 (0.001550) | 0.005779 \/ 0.007986 (-0.002206) | 0.005381 \/ 0.004328 (0.001052) | 0.075785 \/ 0.004250 (0.071535) | 0.048759 \/ 0.037052 (0.011707) | 0.308986 \/ 0.258489 (0.050497) | 0.348000 \/ 0.293841 (0.054159) | 0.027686 \/ 0.128546 (-0.100860) | 0.008839 \/ 0.075646 (-0.066807) | 0.328389 \/ 0.419271 (-0.090883) | 0.062173 \/ 0.043533 (0.018640) | 0.312257 \/ 0.255139 (0.057119) | 0.325024 \/ 0.283200 (0.041824) | 0.103886 \/ 0.141683 (-0.037797) | 1.440215 \/ 1.452155 (-0.011940) | 1.528665 \/ 1.492716 (0.035948) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.210082 \/ 0.018006 (0.192076) | 0.442480 \/ 0.000490 (0.441990) | 0.006559 \/ 0.000200 (0.006359) | 0.000092 \/ 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.026774 \/ 0.037411 (-0.010637) | 0.108362 \/ 0.014526 (0.093837) | 0.117631 \/ 0.176557 (-0.058926) | 0.176657 \/ 0.737135 (-0.560478) | 0.124154 \/ 0.296338 (-0.172184) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.428136 \/ 0.215209 (0.212927) | 4.270287 \/ 2.077655 (2.192632) | 2.014728 \/ 1.504120 (0.510608) | 1.806772 \/ 1.541195 (0.265577) | 1.946284 \/ 1.468490 (0.477794) | 0.525542 \/ 4.584777 (-4.059235) | 
3.667025 \/ 3.745712 (-0.078687) | 1.878751 \/ 5.269862 (-3.391111) | 1.048321 \/ 4.565676 (-3.517356) | 0.065550 \/ 0.424275 (-0.358725) | 0.011881 \/ 0.007607 (0.004274) | 0.529873 \/ 0.226044 (0.303829) | 5.289641 \/ 2.268929 (3.020712) | 2.489403 \/ 55.444624 (-52.955221) | 2.141037 \/ 6.876477 (-4.735440) | 2.230735 \/ 2.142072 (0.088662) | 0.639781 \/ 4.805227 (-4.165447) | 0.141410 \/ 6.500664 (-6.359254) | 0.064374 \/ 0.075469 (-0.011095) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.159462 \/ 1.841788 (-0.682325) | 14.524730 \/ 8.074308 (6.450422) | 13.578070 \/ 10.191392 (3.386678) | 0.152138 \/ 0.680424 (-0.528286) | 0.017255 \/ 0.534201 (-0.516946) | 0.387607 \/ 0.579283 (-0.191676) | 0.413652 \/ 0.434364 (-0.020712) | 0.453644 \/ 0.540337 (-0.086693) | 0.550051 \/ 1.386936 (-0.836885) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006668 \/ 0.011353 (-0.004685) | 0.004677 \/ 0.011008 (-0.006331) | 0.075950 \/ 0.038508 (0.037442) | 0.032439 \/ 0.023109 (0.009329) | 0.381839 \/ 0.275898 (0.105941) | 0.419411 \/ 0.323480 (0.095931) | 0.005813 \/ 0.007986 (-0.002172) | 0.004090 \/ 0.004328 (-0.000238) | 0.075052 \/ 0.004250 (0.070802) | 0.048453 \/ 0.037052 (0.011401) | 0.388076 \/ 0.258489 (0.129587) | 0.431793 \/ 0.293841 (0.137952) | 0.028408 \/ 0.128546 (-0.100138) | 0.009028 \/ 0.075646 (-0.066618) | 0.082569 \/ 0.419271 (-0.336702) | 0.046772 \/ 0.043533 (0.003239) | 0.380182 \/ 0.255139 (0.125043) | 0.401828 \/ 0.283200 (0.118629) | 0.105388 \/ 0.141683 (-0.036294) | 1.453356 \/ 1.452155 (0.001201) | 1.561483 \/ 1.492716 (0.068767) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.008922 \/ 0.018006 (-0.009084) | 0.444112 \/ 0.000490 (0.443623) | 0.002756 \/ 0.000200 (0.002556) | 0.000104 \/ 0.000054 (0.000050) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030408 \/ 0.037411 (-0.007003) | 0.112924 \/ 0.014526 (0.098399) | 0.124625 \/ 0.176557 (-0.051932) | 0.176915 \/ 0.737135 (-0.560220) | 0.129141 \/ 0.296338 (-0.167198) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.448197 \/ 0.215209 (0.232987) | 4.476548 \/ 2.077655 (2.398893) | 2.243977 \/ 1.504120 (0.739857) | 2.054060 \/ 1.541195 (0.512865) | 2.130680 \/ 1.468490 (0.662190) | 0.526815 \/ 4.584777 (-4.057962) | 
3.759312 \/ 3.745712 (0.013600) | 3.333618 \/ 5.269862 (-1.936244) | 1.579611 \/ 4.565676 (-2.986065) | 0.065714 \/ 0.424275 (-0.358561) | 0.011939 \/ 0.007607 (0.004332) | 0.550313 \/ 0.226044 (0.324269) | 5.476946 \/ 2.268929 (3.208018) | 2.726521 \/ 55.444624 (-52.718104) | 2.364977 \/ 6.876477 (-4.511499) | 2.450624 \/ 2.142072 (0.308551) | 0.647174 \/ 4.805227 (-4.158053) | 0.141265 \/ 6.500664 (-6.359399) | 0.065493 \/ 0.075469 (-0.009976) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.249702 \/ 1.841788 (-0.592085) | 15.205647 \/ 8.074308 (7.131338) | 14.678310 \/ 10.191392 (4.486918) | 0.141539 \/ 0.680424 (-0.538884) | 0.017323 \/ 0.534201 (-0.516878) | 0.387602 \/ 0.579283 (-0.191681) | 0.415106 \/ 0.434364 (-0.019258) | 0.458146 \/ 0.540337 (-0.082192) | 0.553318 \/ 1.386936 (-0.833618) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#55127d7bf399fd2f3a8713db9822e8cb47cdbbed \"CML watermark\")\n","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008567 \/ 0.011353 (-0.002786) | 0.005245 \/ 0.011008 (-0.005763) | 0.115074 \/ 0.038508 (0.076566) | 0.032567 \/ 0.023109 (0.009458) | 0.352297 \/ 0.275898 (0.076399) | 0.393403 \/ 0.323480 (0.069923) | 0.006402 \/ 0.007986 (-0.001583) | 0.004353 \/ 0.004328 (0.000025) | 0.087903 \/ 0.004250 (0.083653) | 0.048424 \/ 0.037052 (0.011372) | 0.370078 \/ 0.258489 (0.111588) | 0.410192 \/ 0.293841 (0.116351) | 0.042396 \/ 0.128546 (-0.086150) | 0.014426 \/ 0.075646 (-0.061220) | 0.411358 \/ 0.419271 (-0.007914) | 0.059546 \/ 0.043533 (0.016013) | 0.364721 \/ 0.255139 (0.109582) | 0.385100 \/ 0.283200 (0.101901) | 0.100572 \/ 0.141683 (-0.041111) | 1.741457 \/ 1.452155 (0.289302) | 1.933134 \/ 1.492716 (0.440418) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.217177 \/ 0.018006 (0.199171) | 0.510399 \/ 0.000490 (0.509909) | 0.005542 \/ 0.000200 (0.005342) | 0.000120 \/ 0.000054 (0.000065) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.026852 \/ 0.037411 (-0.010559) | 0.125580 \/ 0.014526 (0.111054) | 0.132164 \/ 0.176557 (-0.044392) | 0.189073 \/ 0.737135 (-0.548063) | 0.135980 \/ 0.296338 (-0.160358) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.601924 \/ 0.215209 (0.386715) | 5.891397 \/ 2.077655 (3.813743) | 2.389494 \/ 1.504120 (0.885375) | 2.044013 \/ 1.541195 (0.502818) | 2.019367 \/ 1.468490 (0.550877) | 0.883807 \/ 4.584777 (-3.700970) | 
5.141349 \/ 3.745712 (1.395636) | 2.607415 \/ 5.269862 (-2.662446) | 1.567268 \/ 4.565676 (-2.998409) | 0.102738 \/ 0.424275 (-0.321537) | 0.013480 \/ 0.007607 (0.005873) | 0.744979 \/ 0.226044 (0.518934) | 7.404182 \/ 2.268929 (5.135254) | 2.983406 \/ 55.444624 (-52.461219) | 2.331847 \/ 6.876477 (-4.544630) | 2.465119 \/ 2.142072 (0.323047) | 1.106725 \/ 4.805227 (-3.698502) | 0.205779 \/ 6.500664 (-6.294885) | 0.081019 \/ 0.075469 (0.005550) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.527840 \/ 1.841788 (-0.313947) | 16.989487 \/ 8.074308 (8.915179) | 18.016123 \/ 10.191392 (7.824731) | 0.216157 \/ 0.680424 (-0.464266) | 0.025393 \/ 0.534201 (-0.508808) | 0.496743 \/ 0.579283 (-0.082540) | 0.575365 \/ 0.434364 (0.141002) | 0.559978 \/ 0.540337 (0.019641) | 0.677474 \/ 1.386936 (-0.709462) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008913 \/ 0.011353 (-0.002440) | 0.005540 \/ 0.011008 (-0.005469) | 0.100001 \/ 0.038508 (0.061493) | 0.034432 \/ 0.023109 (0.011323) | 0.419824 \/ 0.275898 (0.143926) | 0.443566 \/ 0.323480 (0.120086) | 0.006372 \/ 0.007986 (-0.001614) | 0.004405 \/ 0.004328 (0.000077) | 0.094927 \/ 0.004250 (0.090677) | 0.050300 \/ 0.037052 (0.013248) | 0.424806 \/ 0.258489 (0.166317) | 0.480793 \/ 0.293841 (0.186952) | 0.050869 \/ 0.128546 (-0.077677) | 0.015899 \/ 0.075646 (-0.059747) | 0.111413 \/ 0.419271 (-0.307859) | 0.058093 \/ 0.043533 (0.014560) | 0.430575 \/ 0.255139 (0.175436) | 0.483786 \/ 0.283200 (0.200586) | 0.106878 \/ 0.141683 (-0.034805) | 1.763576 \/ 1.452155 (0.311422) | 1.837750 \/ 1.492716 (0.345033) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.011565 \/ 0.018006 (-0.006441) | 0.484411 \/ 0.000490 (0.483922) | 0.004869 \/ 0.000200 (0.004669) | 0.000111 \/ 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030706 \/ 0.037411 (-0.006706) | 0.126901 \/ 0.014526 (0.112375) | 0.130367 \/ 0.176557 (-0.046190) | 0.206568 \/ 0.737135 (-0.530567) | 0.146505 \/ 0.296338 (-0.149834) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.627266 \/ 0.215209 (0.412057) | 6.314049 \/ 2.077655 (4.236394) | 2.582920 \/ 1.504120 (1.078800) | 2.249401 \/ 1.541195 (0.708206) | 2.244960 \/ 1.468490 (0.776470) | 0.907770 \/ 4.584777 (-3.677007) | 
5.349622 \/ 3.745712 (1.603910) | 4.591244 \/ 5.269862 (-0.678618) | 2.301612 \/ 4.565676 (-2.264064) | 0.108813 \/ 0.424275 (-0.315462) | 0.013187 \/ 0.007607 (0.005580) | 0.806071 \/ 0.226044 (0.580027) | 7.843903 \/ 2.268929 (5.574974) | 3.405968 \/ 55.444624 (-52.038656) | 2.564301 \/ 6.876477 (-4.312176) | 2.652208 \/ 2.142072 (0.510135) | 1.168142 \/ 4.805227 (-3.637086) | 0.218551 \/ 6.500664 (-6.282113) | 0.078120 \/ 0.075469 (0.002651) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.562517 \/ 1.841788 (-0.279271) | 17.519325 \/ 8.074308 (9.445017) | 20.727083 \/ 10.191392 (10.535691) | 0.207135 \/ 0.680424 (-0.473288) | 0.028208 \/ 0.534201 (-0.505993) | 0.496157 \/ 0.579283 (-0.083126) | 0.569239 \/ 0.434364 (0.134875) | 0.566137 \/ 0.540337 (0.025799) | 0.704208 \/ 1.386936 (-0.682728) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#8eb3f34d876da98e722d866be90d7f26135ea9e3 \"CML watermark\")\n"],"created_at":1686064981000,"updated_at":1686066754000,"closed_at":1686066233000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5928","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5928","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5928.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5928.patch","merged_at":1686066233000},"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5928\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5928\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5927","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5927\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5927\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5927\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5927","id":1744009032,"node_id":"I_kwDODunzps5n83dI","number":5927,"title":"`IndexError` when indexing `Sequence` of `Array2D` with `None` 
values","user":{"login":"qgallouedec","id":45557362,"node_id":"MDQ6VXNlcjQ1NTU3MzYy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/45557362?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/qgallouedec","html_url":"https:\/\/github.com\/qgallouedec","followers_url":"https:\/\/api.github.com\/users\/qgallouedec\/followers","following_url":"https:\/\/api.github.com\/users\/qgallouedec\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/qgallouedec\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/qgallouedec\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/qgallouedec\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/qgallouedec\/orgs","repos_url":"https:\/\/api.github.com\/users\/qgallouedec\/repos","events_url":"https:\/\/api.github.com\/users\/qgallouedec\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/qgallouedec\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Easy fix would be to add:\r\n\r\n```python\r\nnull_indices -= np.arange(len(null_indices))\r\n```\r\n\r\nbefore L279, but I'm not sure it's the most intuitive way to fix it.","Same issue here:\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/7fcbe5b1575c8d162b65b9397b3dfda995a4e048\/src\/datasets\/features\/features.py#L1398\r\n\r\nFixed in #5948 "],"created_at":1686062182000,"updated_at":1686659979000,"closed_at":1686317030000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nHaving `None` values in a `Sequence` of `ArrayND` fails.\r\n\n\n### Steps to reproduce the bug\n\n```python\r\nfrom datasets import Array2D, Dataset, Features, Sequence\r\n\r\ndata = [\r\n [\r\n [[0]],\r\n None,\r\n None,\r\n ]\r\n]\r\nfeature = Sequence(Array2D((1, 1), dtype=\"int64\"))\r\ndataset = Dataset.from_dict({\"a\": data}, features=Features({\"a\": feature}))\r\n\r\ndataset[0] # error raised only when indexing\r\n```\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"\/Users\/quentingallouedec\/gia\/c.py\", line 13, in \r\n dataset[0] # error raised only when indexing\r\n File \"\/Users\/quentingallouedec\/gia\/env\/lib\/python3.10\/site-packages\/datasets\/arrow_dataset.py\", line 2658, in __getitem__\r\n return self._getitem(key)\r\n File \"\/Users\/quentingallouedec\/gia\/env\/lib\/python3.10\/site-packages\/datasets\/arrow_dataset.py\", line 2643, in _getitem\r\n formatted_output = format_table(\r\n File \"\/Users\/quentingallouedec\/gia\/env\/lib\/python3.10\/site-packages\/datasets\/formatting\/formatting.py\", line 634, in format_table\r\n return formatter(pa_table, query_type=query_type)\r\n File \"\/Users\/quentingallouedec\/gia\/env\/lib\/python3.10\/site-packages\/datasets\/formatting\/formatting.py\", line 406, in __call__\r\n return self.format_row(pa_table)\r\n File \"\/Users\/quentingallouedec\/gia\/env\/lib\/python3.10\/site-packages\/datasets\/formatting\/formatting.py\", line 441, in format_row\r\n row = self.python_arrow_extractor().extract_row(pa_table)\r\n File \"\/Users\/quentingallouedec\/gia\/env\/lib\/python3.10\/site-packages\/datasets\/formatting\/formatting.py\", line 144, in extract_row\r\n return _unnest(pa_table.to_pydict())\r\n File \"pyarrow\/table.pxi\", line 4146, in pyarrow.lib.Table.to_pydict\r\n File \"pyarrow\/table.pxi\", line 1312, in 
pyarrow.lib.ChunkedArray.to_pylist\r\n File \"pyarrow\/array.pxi\", line 1521, in pyarrow.lib.Array.to_pylist\r\n File \"pyarrow\/scalar.pxi\", line 675, in pyarrow.lib.ListScalar.as_py\r\n File \"\/Users\/quentingallouedec\/gia\/env\/lib\/python3.10\/site-packages\/datasets\/features\/features.py\", line 760, in to_pylist\r\n return self.to_numpy(zero_copy_only=zero_copy_only).tolist()\r\n File \"\/Users\/quentingallouedec\/gia\/env\/lib\/python3.10\/site-packages\/datasets\/features\/features.py\", line 725, in to_numpy\r\n numpy_arr = np.insert(numpy_arr.astype(np.float64), null_indices, np.nan, axis=0)\r\n File \"<__array_function__ internals>\", line 200, in insert\r\n File \"\/Users\/quentingallouedec\/gia\/env\/lib\/python3.10\/site-packages\/numpy\/lib\/function_base.py\", line 5426, in insert\r\n old_mask[indices] = False\r\nIndexError: index 3 is out of bounds for axis 0 with size 3\r\n```\r\n\r\nAFAIK, the problem only occurs when you use a `Sequence` of `ArrayND`.\r\n\r\nI strongly suspect that the problem comes from this line, or `np.insert` is misused:\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/02ee418831aba68d0be93227bce8b3f42ef8980f\/src\/datasets\/features\/features.py#L729\r\n\r\nTo put t simply, you want something that do that:\r\n\r\n```python\r\nimport numpy as np\r\nnumpy_arr = np.zeros((1, 1, 1))\r\nnull_indices = np.array([1, 2])\r\nnp.insert(numpy_arr, null_indices, np.nan, axis=0)\r\n# raise an error, instead of outputting \r\n# array([[[ 0.]],\r\n# [[nan]],\r\n# [[nan]]])\r\n```\r\n\r\n\n\n### Expected behavior\n\nThe previous code should not raise an error.\n\n### Environment info\n\n- Python 3.10.11\r\n- datasets 2.10.0\r\n- pyarrow 12.0.0","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5927\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5927\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5926","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5926\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5926\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5926\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5926","id":1743922028,"node_id":"I_kwDODunzps5n8iNs","number":5926,"title":"Uncaught exception when generating the splits from a dataset that miss 
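For context on issue 5927 above: a minimal NumPy-only sketch of why the reported `np.insert` call overshoots, and how the adjustment suggested in the comments (`null_indices -= np.arange(len(null_indices))`) repairs it. The shapes mirror the reproduction snippet; this is an illustration, not the actual `datasets` implementation.

```python
import numpy as np

# Shapes from the reproduction above: one real (1, 1) block, two missing ones.
numpy_arr = np.zeros((1, 1, 1))
null_indices = np.array([1, 2])  # positions of the Nones in the *final* array

# np.insert expects indices relative to the *original* array, so final-array
# positions overshoot its bounds and raise the IndexError from the traceback:
# np.insert(numpy_arr, null_indices, np.nan, axis=0)  # IndexError
# Subtracting each index's rank maps final positions back to original ones:
adjusted = null_indices - np.arange(len(null_indices))  # array([1, 1])
result = np.insert(numpy_arr.astype(np.float64), adjusted, np.nan, axis=0)
print(result.tolist())  # [[[0.0]], [[nan]], [[nan]]]
```

This yields the `array([[[ 0.]], [[nan]], [[nan]]])` output that the issue body says is desired.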
data","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":{"login":"albertvillanova","id":8515462.0,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting, @severo.\r\n\r\nThis is a known issue with `fsspec`:\r\n- #5862\r\n- https:\/\/github.com\/fsspec\/filesystem_spec\/issues\/1265"],"created_at":1686059461000,"updated_at":1686124396000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nDataset 
https:\/\/huggingface.co\/datasets\/blog_authorship_corpus has an issue with its hosting platform, since https:\/\/drive.google.com\/u\/0\/uc?id=1cGy4RNDV87ZHEXbiozABr9gsSrZpPaPz&export=download returns a 404 error.\r\n\r\nBut when trying to generate the split names, we get an exception which is not correctly caught.\r\n\r\nSeen originally in https:\/\/github.com\/huggingface\/datasets-server\/blob\/adbdcd6710ffed4e2eb2e4cd905b5e0dff530a15\/services\/worker\/src\/worker\/job_runners\/config\/parquet_and_info.py#L435\n\n### Steps to reproduce the bug\n\n```python\r\n>>> from datasets import StreamingDownloadManager, load_dataset_builder\r\n>>> builder = load_dataset_builder(path=\"blog_authorship_corpus\")\r\nDownloading builder script: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 5.60k\/5.60k [00:00<00:00, 23.1MB\/s]\r\nDownloading metadata: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2.81k\/2.81k [00:00<00:00, 14.7MB\/s]\r\nDownloading readme: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 7.30k\/7.30k [00:00<00:00, 30.8MB\/s]\r\n>>> dl_manager = StreamingDownloadManager(base_path=builder.base_path)\r\n>>> builder._split_generators(dl_manager)\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/home\/slesage\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/blog_authorship_corpus\/6f5d78241afd8313111956f877a57db7a0e9fc6718255dc85df0928197feb683\/blog_authorship_corpus.py\", line 79, in _split_generators\r\n data = dl_manager.download_and_extract(_DATA_URL)\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/download\/streaming_download_manager.py\", line 1087, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/download\/streaming_download_manager.py\", line 1039, in extract\r\n urlpaths = map_nested(self._extract, url_or_urls, map_tuple=True)\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/utils\/py_utils.py\", line 435, in map_nested\r\n return function(data_struct)\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/download\/streaming_download_manager.py\", line 1044, in _extract\r\n protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/download\/streaming_download_manager.py\", line 433, in _get_extraction_protocol\r\n with fsspec.open(urlpath, **kwargs) as f:\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/fsspec\/core.py\", line 439, in open\r\n return open_files(\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/fsspec\/core.py\", line 194, in __getitem__\r\n out = super().__getitem__(item)\r\nIndexError: list index out of range\r\n```\n\n### Expected behavior\n\nWe should have a proper exception raised by the `datasets` library.\n\n### Environment info\n\n\r\n- `datasets` version: 2.12.0\r\n- Platform: 
Linux-5.19.0-1026-aws-x86_64-with-glibc2.35\r\n- Python version: 3.9.15\r\n- Huggingface_hub version: 0.15.1\r\n- PyArrow version: 11.0.0\r\n- Pandas version: 2.0.2","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5926\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5926\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false}
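Regarding the expected behavior in issue 5926 above: a hedged sketch of how the bare `IndexError` that `fsspec.open` raises on a dead URL (as shown in the traceback) could be surfaced as a descriptive, catchable exception. `open_with_clear_error` is a hypothetical helper for illustration, not the fix that was actually shipped in `datasets`.

```python
import fsspec

def open_with_clear_error(urlpath: str, **kwargs):
    """Open a (possibly remote) file with fsspec, translating the bare
    IndexError raised when the URL resolves to no files (e.g. after a
    404 from the host) into a descriptive FileNotFoundError."""
    try:
        return fsspec.open(urlpath, **kwargs)
    except IndexError as err:
        # fsspec's open_files() returned an empty list for this URL.
        raise FileNotFoundError(f"No file could be opened at {urlpath!r}") from err
```

The helper name and the choice of `FileNotFoundError` are assumptions; the point is that a URL resolving to no files should fail with a clear, catchable error rather than a bare `IndexError` escaping from `fsspec` internals.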
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6029","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6029\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6029\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6029\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6029","id":1803460046,"node_id":"PR_kwDODunzps5VcbPW","number":6029,"title":"[docs] Fix link","user":{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007039 \/ 0.011353 (-0.004314) | 0.004175 \/ 0.011008 (-0.006833) | 0.085426 \/ 0.038508 (0.046918) | 0.079818 \/ 0.023109 (0.056709) | 0.321924 \/ 0.275898 (0.046026) | 0.345482 \/ 0.323480 (0.022002) | 0.005510 \/ 0.007986 (-0.002475) | 0.003452 \/ 0.004328 (-0.000877) | 0.065158 \/ 0.004250 (0.060907) | 0.058843 \/ 0.037052 (0.021791) | 0.316280 \/ 0.258489 (0.057791) | 0.351666 \/ 0.293841 (0.057825) | 0.031190 \/ 0.128546 (-0.097357) | 0.008500 \/ 0.075646 (-0.067147) | 0.289595 \/ 0.419271 (-0.129676) | 0.053798 \/ 0.043533 (0.010265) | 0.315804 \/ 0.255139 (0.060665) | 0.334957 \/ 0.283200 (0.051757) | 0.024350 \/ 0.141683 (-0.117332) | 1.515753 \/ 1.452155 (0.063599) | 1.556215 \/ 1.492716 (0.063499) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.210378 \/ 0.018006 (0.192372) | 0.469309 \/ 0.000490 (0.468820) | 0.002890 \/ 0.000200 (0.002690) | 0.000086 \/ 0.000054 (0.000031) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030214 \/ 0.037411 (-0.007197) | 0.088492 \/ 0.014526 (0.073966) | 0.098684 \/ 0.176557 (-0.077873) | 0.156077 \/ 0.737135 (-0.581058) | 0.098814 \/ 0.296338 (-0.197525) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.404548 \/ 0.215209 (0.189339) | 4.026173 \/ 2.077655 (1.948518) | 2.043216 \/ 1.504120 (0.539096) | 1.880997 \/ 1.541195 (0.339802) | 1.975205 \/ 1.468490 (0.506715) | 0.489395 \/ 4.584777 (-4.095382) | 
3.684097 \/ 3.745712 (-0.061615) | 5.126934 \/ 5.269862 (-0.142928) | 3.092153 \/ 4.565676 (-1.473524) | 0.057668 \/ 0.424275 (-0.366607) | 0.007372 \/ 0.007607 (-0.000235) | 0.479647 \/ 0.226044 (0.253603) | 4.780207 \/ 2.268929 (2.511278) | 2.533457 \/ 55.444624 (-52.911168) | 2.182126 \/ 6.876477 (-4.694351) | 2.431834 \/ 2.142072 (0.289761) | 0.591760 \/ 4.805227 (-4.213467) | 0.135450 \/ 6.500664 (-6.365214) | 0.063218 \/ 0.075469 (-0.012251) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.262053 \/ 1.841788 (-0.579734) | 20.246992 \/ 8.074308 (12.172684) | 14.638222 \/ 10.191392 (4.446830) | 0.150021 \/ 0.680424 (-0.530403) | 0.018680 \/ 0.534201 (-0.515521) | 0.395215 \/ 0.579283 (-0.184068) | 0.421270 \/ 0.434364 (-0.013094) | 0.458845 \/ 0.540337 (-0.081492) | 0.634488 \/ 1.386936 (-0.752448) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007080 \/ 0.011353 (-0.004273) | 0.004112 \/ 0.011008 (-0.006896) | 0.066426 \/ 0.038508 (0.027918) | 0.090088 \/ 0.023109 (0.066978) | 0.400191 \/ 0.275898 (0.124293) | 0.429614 \/ 0.323480 (0.106134) | 0.005428 \/ 0.007986 (-0.002558) | 0.003501 \/ 0.004328 (-0.000827) | 0.065056 \/ 0.004250 (0.060806) | 0.061643 \/ 0.037052 (0.024590) | 0.398619 \/ 0.258489 (0.140130) | 0.445497 \/ 0.293841 (0.151657) | 0.031703 \/ 0.128546 (-0.096843) | 0.008708 \/ 0.075646 (-0.066938) | 0.071561 \/ 0.419271 (-0.347711) | 0.050684 \/ 0.043533 (0.007151) | 0.385361 \/ 0.255139 (0.130222) | 0.409349 \/ 0.283200 (0.126149) | 0.027388 \/ 0.141683 (-0.114295) | 1.473021 \/ 1.452155 (0.020866) | 1.525246 \/ 1.492716 (0.032529) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.237710 \/ 0.018006 (0.219704) | 0.468719 \/ 0.000490 (0.468230) | 0.000385 \/ 0.000200 (0.000185) | 0.000054 \/ 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.032539 \/ 0.037411 (-0.004872) | 0.095324 \/ 0.014526 (0.080798) | 0.102248 \/ 0.176557 (-0.074308) | 0.156096 \/ 0.737135 (-0.581039) | 0.103458 \/ 0.296338 (-0.192881) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.416226 \/ 0.215209 (0.201017) | 4.141044 \/ 2.077655 (2.063389) | 2.143732 \/ 1.504120 (0.639612) | 2.001020 \/ 1.541195 (0.459825) | 2.091194 \/ 1.468490 (0.622704) | 0.489977 \/ 4.584777 (-4.094800) | 
3.579615 \/ 3.745712 (-0.166097) | 3.438082 \/ 5.269862 (-1.831780) | 2.069031 \/ 4.565676 (-2.496645) | 0.056994 \/ 0.424275 (-0.367281) | 0.007362 \/ 0.007607 (-0.000245) | 0.493077 \/ 0.226044 (0.267033) | 4.922622 \/ 2.268929 (2.653694) | 2.627083 \/ 55.444624 (-52.817541) | 2.301141 \/ 6.876477 (-4.575336) | 2.356794 \/ 2.142072 (0.214722) | 0.583792 \/ 4.805227 (-4.221436) | 0.133707 \/ 6.500664 (-6.366958) | 0.062892 \/ 0.075469 (-0.012577) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.364908 \/ 1.841788 (-0.476880) | 20.641219 \/ 8.074308 (12.566911) | 14.848528 \/ 10.191392 (4.657136) | 0.174207 \/ 0.680424 (-0.506217) | 0.018206 \/ 0.534201 (-0.515995) | 0.413742 \/ 0.579283 (-0.165541) | 0.419940 \/ 0.434364 (-0.014424) | 0.458543 \/ 0.540337 (-0.081794) | 0.616518 \/ 1.386936 (-0.770418) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#18b2202c3e7cdde05920078f01864964556427da \"CML watermark\")\n","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006875 \/ 0.011353 (-0.004478) | 0.003489 \/ 0.011008 (-0.007519) | 0.082077 \/ 0.038508 (0.043569) | 0.103011 \/ 0.023109 (0.079902) | 0.370572 \/ 0.275898 (0.094674) | 0.416400 \/ 0.323480 (0.092920) | 0.004048 \/ 0.007986 (-0.003938) | 0.003563 \/ 0.004328 (-0.000765) | 0.062666 \/ 0.004250 (0.058416) | 0.063664 \/ 0.037052 (0.026612) | 0.374206 \/ 0.258489 (0.115717) | 0.425590 \/ 0.293841 (0.131749) | 0.028174 \/ 0.128546 (-0.100373) | 0.007906 \/ 0.075646 (-0.067741) | 0.266251 \/ 0.419271 (-0.153020) | 0.045923 \/ 0.043533 (0.002390) | 0.376746 \/ 0.255139 (0.121607) | 0.401950 \/ 0.283200 (0.118750) | 0.024628 \/ 0.141683 (-0.117054) | 1.441903 \/ 1.452155 (-0.010252) | 1.537494 \/ 1.492716 (0.044777) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.214696 \/ 0.018006 (0.196690) | 0.425626 \/ 0.000490 (0.425137) | 0.003370 \/ 0.000200 (0.003170) | 0.000071 \/ 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.023133 \/ 0.037411 (-0.014279) | 0.072374 \/ 0.014526 (0.057848) | 0.081255 \/ 0.176557 (-0.095301) | 0.146960 \/ 0.737135 (-0.590175) | 0.081748 \/ 0.296338 (-0.214590) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.390683 \/ 0.215209 (0.175473) | 3.893166 \/ 2.077655 (1.815511) | 1.884321 \/ 1.504120 (0.380201) | 1.701899 \/ 1.541195 (0.160704) | 1.737839 \/ 1.468490 (0.269349) | 0.497008 \/ 4.584777 (-4.087769) | 
3.041211 \/ 3.745712 (-0.704501) | 3.519947 \/ 5.269862 (-1.749915) | 2.015085 \/ 4.565676 (-2.550592) | 0.057685 \/ 0.424275 (-0.366590) | 0.006415 \/ 0.007607 (-0.001192) | 0.465565 \/ 0.226044 (0.239520) | 4.635224 \/ 2.268929 (2.366295) | 2.297941 \/ 55.444624 (-53.146683) | 1.946670 \/ 6.876477 (-4.929807) | 2.078527 \/ 2.142072 (-0.063546) | 0.584101 \/ 4.805227 (-4.221126) | 0.126488 \/ 6.500664 (-6.374176) | 0.060819 \/ 0.075469 (-0.014650) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.223400 \/ 1.841788 (-0.618388) | 17.960923 \/ 8.074308 (9.886615) | 13.187683 \/ 10.191392 (2.996291) | 0.129258 \/ 0.680424 (-0.551166) | 0.016601 \/ 0.534201 (-0.517600) | 0.330028 \/ 0.579283 (-0.249255) | 0.353861 \/ 0.434364 (-0.080503) | 0.376022 \/ 0.540337 (-0.164315) | 0.518145 \/ 1.386936 (-0.868791) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006015 \/ 0.011353 (-0.005338) | 0.003605 \/ 0.011008 (-0.007403) | 0.062169 \/ 0.038508 (0.023661) | 0.056094 \/ 0.023109 (0.032985) | 0.353085 \/ 0.275898 (0.077187) | 0.393744 \/ 0.323480 (0.070265) | 0.004672 \/ 0.007986 (-0.003313) | 0.002859 \/ 0.004328 (-0.001469) | 0.062992 \/ 0.004250 (0.058742) | 0.049767 \/ 0.037052 (0.012714) | 0.356850 \/ 0.258489 (0.098361) | 0.403731 \/ 0.293841 (0.109890) | 0.026664 \/ 0.128546 (-0.101882) | 0.008026 \/ 0.075646 (-0.067621) | 0.067944 \/ 0.419271 (-0.351327) | 0.042133 \/ 0.043533 (-0.001400) | 0.353865 \/ 0.255139 (0.098726) | 0.383461 \/ 0.283200 (0.100261) | 0.021250 \/ 0.141683 (-0.120433) | 1.428102 \/ 1.452155 (-0.024053) | 1.481061 \/ 1.492716 (-0.011655) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.223552 \/ 0.018006 (0.205546) | 0.402390 \/ 0.000490 (0.401900) | 0.000721 \/ 0.000200 (0.000521) | 0.000059 \/ 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025065 \/ 0.037411 (-0.012347) | 0.075537 \/ 0.014526 (0.061011) | 0.083519 \/ 0.176557 (-0.093037) | 0.137068 \/ 0.737135 (-0.600068) | 0.084165 \/ 0.296338 (-0.212173) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.420176 \/ 0.215209 (0.204967) | 4.206226 \/ 2.077655 (2.128571) | 2.168089 \/ 1.504120 (0.663969) | 1.987299 \/ 1.541195 (0.446104) | 2.029489 \/ 1.468490 (0.560999) | 0.495822 \/ 4.584777 (-4.088955) 
| 3.106580 \/ 3.745712 (-0.639132) | 3.833215 \/ 5.269862 (-1.436647) | 2.450450 \/ 4.565676 (-2.115226) | 0.056979 \/ 0.424275 (-0.367296) | 0.006514 \/ 0.007607 (-0.001093) | 0.503646 \/ 0.226044 (0.277601) | 5.035035 \/ 2.268929 (2.766106) | 2.608245 \/ 55.444624 (-52.836379) | 2.245492 \/ 6.876477 (-4.630985) | 2.262868 \/ 2.142072 (0.120795) | 0.590736 \/ 4.805227 (-4.214491) | 0.124637 \/ 6.500664 (-6.376027) | 0.061442 \/ 0.075469 (-0.014027) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.316736 \/ 1.841788 (-0.525052) | 17.948635 \/ 8.074308 (9.874327) | 13.752442 \/ 10.191392 (3.561050) | 0.144107 \/ 0.680424 (-0.536317) | 0.017112 \/ 0.534201 (-0.517089) | 0.336537 \/ 0.579283 (-0.242746) | 0.347832 \/ 0.434364 (-0.086532) | 0.392944 \/ 0.540337 (-0.147393) | 0.534455 \/ 1.386936 (-0.852481) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#406b2212263c0d33f267e35b917f410ff6b3bc00 \"CML watermark\")\n"],"created_at":1689269052000,"updated_at":1689270461000,"closed_at":1689269939000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/6029","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6029","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6029.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6029.patch","merged_at":1689269939000},"body":"Fixes link to the builder classes :)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6029\/reactions","total_count":1,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":1,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6029\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6028","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6028\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6028\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6028\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6028","id":1803294981,"node_id":"PR_kwDODunzps5Vb3LJ","number":6028,"title":"Use new 
hffs","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_6028). All of your documentation changes will be reflected on that endpoint.","
<details><summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details><summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006665 \/ 0.011353 (-0.004688) | 0.004376 \/ 0.011008 (-0.006633) | 0.085529 \/ 0.038508 (0.047021) | 0.076372 \/ 0.023109 (0.053263) | 0.310019 \/ 0.275898 (0.034121) | 0.341404 \/ 0.323480 (0.017924) | 0.005666 \/ 0.007986 (-0.002320) | 0.003763 \/ 0.004328 (-0.000566) | 0.064678 \/ 0.004250 (0.060427) | 0.059283 \/ 0.037052 (0.022231) | 0.316194 \/ 0.258489 (0.057704) | 0.349397 \/ 0.293841 (0.055557) | 0.031199 \/ 0.128546 (-0.097347) | 0.008724 \/ 0.075646 (-0.066923) | 0.300236 \/ 0.419271 (-0.119035) | 0.068872 \/ 0.043533 (0.025339) | 0.308521 \/ 0.255139 (0.053382) | 0.331292 \/ 0.283200 (0.048092) | 0.028236 \/ 0.141683 (-0.113447) | 1.501365 \/ 1.452155 (0.049211) | 1.554334 \/ 1.492716 (0.061618) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.238291 \/ 0.018006 (0.220285) | 0.565069 \/ 0.000490 (0.564580) | 0.001626 \/ 0.000200 (0.001426) | 0.000070 \/ 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029777 \/ 0.037411 (-0.007634) | 0.082873 \/ 0.014526 (0.068347) | 0.099619 \/ 0.176557 (-0.076937) | 0.156572 \/ 0.737135 (-0.580563) | 0.099887 \/ 0.296338 (-0.196452) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.401017 \/ 0.215209 (0.185808) | 3.827192 \/ 2.077655 (1.749537) | 1.861554 \/ 1.504120 (0.357434) | 1.699869 \/ 1.541195 (0.158674) | 1.720043 \/ 1.468490 (0.251553) | 0.486757 \/ 4.584777 (-4.098020) | 
3.638125 \/ 3.745712 (-0.107587) | 5.844959 \/ 5.269862 (0.575097) | 3.454901 \/ 4.565676 (-1.110775) | 0.057650 \/ 0.424275 (-0.366625) | 0.007341 \/ 0.007607 (-0.000266) | 0.462698 \/ 0.226044 (0.236654) | 4.633472 \/ 2.268929 (2.364544) | 2.287607 \/ 55.444624 (-53.157017) | 2.057318 \/ 6.876477 (-4.819159) | 2.203657 \/ 2.142072 (0.061584) | 0.598136 \/ 4.805227 (-4.207091) | 0.134012 \/ 6.500664 (-6.366653) | 0.060824 \/ 0.075469 (-0.014645) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.277752 \/ 1.841788 (-0.564036) | 20.013398 \/ 8.074308 (11.939089) | 14.372993 \/ 10.191392 (4.181601) | 0.169991 \/ 0.680424 (-0.510433) | 0.018344 \/ 0.534201 (-0.515857) | 0.396985 \/ 0.579283 (-0.182299) | 0.416289 \/ 0.434364 (-0.018075) | 0.458658 \/ 0.540337 (-0.081680) | 0.692980 \/ 1.386936 (-0.693956) |\n\n<\/details>\nPyArrow==latest\n\n
<details><summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006689 \/ 0.011353 (-0.004664) | 0.004393 \/ 0.011008 (-0.006615) | 0.064069 \/ 0.038508 (0.025561) | 0.080717 \/ 0.023109 (0.057607) | 0.370090 \/ 0.275898 (0.094191) | 0.400432 \/ 0.323480 (0.076952) | 0.005613 \/ 0.007986 (-0.002372) | 0.003641 \/ 0.004328 (-0.000687) | 0.064771 \/ 0.004250 (0.060520) | 0.057555 \/ 0.037052 (0.020502) | 0.392156 \/ 0.258489 (0.133667) | 0.409842 \/ 0.293841 (0.116001) | 0.031500 \/ 0.128546 (-0.097047) | 0.008786 \/ 0.075646 (-0.066860) | 0.070342 \/ 0.419271 (-0.348929) | 0.048646 \/ 0.043533 (0.005113) | 0.360914 \/ 0.255139 (0.105775) | 0.387626 \/ 0.283200 (0.104426) | 0.022787 \/ 0.141683 (-0.118896) | 1.508915 \/ 1.452155 (0.056761) | 1.539719 \/ 1.492716 (0.047002) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.257985 \/ 0.018006 (0.239979) | 0.550990 \/ 0.000490 (0.550501) | 0.000407 \/ 0.000200 (0.000207) | 0.000057 \/ 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030183 \/ 0.037411 (-0.007228) | 0.086882 \/ 0.014526 (0.072356) | 0.102382 \/ 0.176557 (-0.074175) | 0.154745 \/ 0.737135 (-0.582390) | 0.104008 \/ 0.296338 (-0.192331) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.426284 \/ 0.215209 (0.211075) | 4.240812 \/ 2.077655 (2.163158) | 2.261240 \/ 1.504120 (0.757120) | 2.085905 \/ 1.541195 (0.544710) | 2.160374 \/ 1.468490 (0.691883) | 0.481126 \/ 4.584777 (-4.103651) | 
3.516234 \/ 3.745712 (-0.229478) | 3.325322 \/ 5.269862 (-1.944539) | 2.043307 \/ 4.565676 (-2.522369) | 0.056663 \/ 0.424275 (-0.367612) | 0.007786 \/ 0.007607 (0.000179) | 0.497614 \/ 0.226044 (0.271570) | 4.974529 \/ 2.268929 (2.705600) | 2.700018 \/ 55.444624 (-52.744606) | 2.393778 \/ 6.876477 (-4.482699) | 2.628202 \/ 2.142072 (0.486130) | 0.594316 \/ 4.805227 (-4.210911) | 0.147092 \/ 6.500664 (-6.353572) | 0.062207 \/ 0.075469 (-0.013262) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.315676 \/ 1.841788 (-0.526112) | 20.749251 \/ 8.074308 (12.674943) | 14.371553 \/ 10.191392 (4.180160) | 0.170249 \/ 0.680424 (-0.510175) | 0.018478 \/ 0.534201 (-0.515722) | 0.395710 \/ 0.579283 (-0.183573) | 0.409706 \/ 0.434364 (-0.024658) | 0.463454 \/ 0.540337 (-0.076884) | 0.615657 \/ 1.386936 (-0.771279) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#c5a752d8e8ca0a6ed118b024ba03c1b4a2881177 \"CML watermark\")\n","
<details><summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details><summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007224 \/ 0.011353 (-0.004129) | 0.004506 \/ 0.011008 (-0.006503) | 0.096729 \/ 0.038508 (0.058221) | 0.082394 \/ 0.023109 (0.059284) | 0.390954 \/ 0.275898 (0.115056) | 0.416647 \/ 0.323480 (0.093167) | 0.005894 \/ 0.007986 (-0.002092) | 0.003756 \/ 0.004328 (-0.000572) | 0.075800 \/ 0.004250 (0.071549) | 0.062683 \/ 0.037052 (0.025631) | 0.398959 \/ 0.258489 (0.140470) | 0.436624 \/ 0.293841 (0.142783) | 0.034650 \/ 0.128546 (-0.093896) | 0.009655 \/ 0.075646 (-0.065991) | 0.315761 \/ 0.419271 (-0.103511) | 0.060957 \/ 0.043533 (0.017424) | 0.385649 \/ 0.255139 (0.130510) | 0.394022 \/ 0.283200 (0.110822) | 0.024601 \/ 0.141683 (-0.117082) | 1.729586 \/ 1.452155 (0.277431) | 1.724153 \/ 1.492716 (0.231437) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.207070 \/ 0.018006 (0.189063) | 0.466502 \/ 0.000490 (0.466012) | 0.010739 \/ 0.000200 (0.010540) | 0.000214 \/ 0.000054 (0.000160) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.031633 \/ 0.037411 (-0.005779) | 0.095345 \/ 0.014526 (0.080819) | 0.105399 \/ 0.176557 (-0.071157) | 0.174173 \/ 0.737135 (-0.562962) | 0.104207 \/ 0.296338 (-0.192132) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.435312 \/ 0.215209 (0.220103) | 4.265600 \/ 2.077655 (2.187946) | 2.056500 \/ 1.504120 (0.552380) | 1.848023 \/ 1.541195 (0.306828) | 1.946156 \/ 1.468490 (0.477666) | 0.557788 \/ 4.584777 (-4.026989) | 
4.070289 \/ 3.745712 (0.324577) | 3.608027 \/ 5.269862 (-1.661835) | 2.214556 \/ 4.565676 (-2.351121) | 0.062623 \/ 0.424275 (-0.361652) | 0.008083 \/ 0.007607 (0.000476) | 0.491782 \/ 0.226044 (0.265738) | 4.989963 \/ 2.268929 (2.721035) | 2.575867 \/ 55.444624 (-52.868757) | 2.208045 \/ 6.876477 (-4.668431) | 2.364184 \/ 2.142072 (0.222112) | 0.633925 \/ 4.805227 (-4.171302) | 0.144323 \/ 6.500664 (-6.356341) | 0.067505 \/ 0.075469 (-0.007965) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.467219 \/ 1.841788 (-0.374569) | 22.334967 \/ 8.074308 (14.260659) | 15.715747 \/ 10.191392 (5.524355) | 0.175443 \/ 0.680424 (-0.504980) | 0.026165 \/ 0.534201 (-0.508036) | 0.490675 \/ 0.579283 (-0.088608) | 0.509211 \/ 0.434364 (0.074847) | 0.586303 \/ 0.540337 (0.045965) | 0.785052 \/ 1.386936 (-0.601884) |\n\n<\/details>\nPyArrow==latest\n\n
<details><summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007893 \/ 0.011353 (-0.003460) | 0.004577 \/ 0.011008 (-0.006431) | 0.075781 \/ 0.038508 (0.037273) | 0.095492 \/ 0.023109 (0.072382) | 0.433259 \/ 0.275898 (0.157361) | 0.469386 \/ 0.323480 (0.145906) | 0.006317 \/ 0.007986 (-0.001669) | 0.003708 \/ 0.004328 (-0.000621) | 0.074417 \/ 0.004250 (0.070167) | 0.068605 \/ 0.037052 (0.031552) | 0.448701 \/ 0.258489 (0.190212) | 0.469131 \/ 0.293841 (0.175290) | 0.036647 \/ 0.128546 (-0.091899) | 0.010077 \/ 0.075646 (-0.065570) | 0.082457 \/ 0.419271 (-0.336815) | 0.063255 \/ 0.043533 (0.019722) | 0.428144 \/ 0.255139 (0.173005) | 0.451872 \/ 0.283200 (0.168672) | 0.033953 \/ 0.141683 (-0.107730) | 1.781752 \/ 1.452155 (0.329597) | 1.869014 \/ 1.492716 (0.376297) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.223596 \/ 0.018006 (0.205590) | 0.470307 \/ 0.000490 (0.469818) | 0.005059 \/ 0.000200 (0.004859) | 0.000104 \/ 0.000054 (0.000049) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.038804 \/ 0.037411 (0.001393) | 0.117879 \/ 0.014526 (0.103353) | 0.140701 \/ 0.176557 (-0.035855) | 0.194672 \/ 0.737135 (-0.542463) | 0.132806 \/ 0.296338 (-0.163533) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.510109 \/ 0.215209 (0.294900) | 4.729457 \/ 2.077655 (2.651803) | 2.512113 \/ 1.504120 (1.007993) | 2.302553 \/ 1.541195 (0.761358) | 2.420462 \/ 1.468490 (0.951972) | 0.531682 \/ 4.584777 (-4.053095) | 
4.061208 \/ 3.745712 (0.315496) | 3.588542 \/ 5.269862 (-1.681320) | 2.203187 \/ 4.565676 (-2.362489) | 0.065791 \/ 0.424275 (-0.358484) | 0.008839 \/ 0.007607 (0.001232) | 0.562041 \/ 0.226044 (0.335997) | 5.702340 \/ 2.268929 (3.433412) | 3.127609 \/ 55.444624 (-52.317015) | 2.823060 \/ 6.876477 (-4.053417) | 2.898675 \/ 2.142072 (0.756603) | 0.659589 \/ 4.805227 (-4.145638) | 0.148798 \/ 6.500664 (-6.351866) | 0.070787 \/ 0.075469 (-0.004682) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.478317 \/ 1.841788 (-0.363471) | 21.995400 \/ 8.074308 (13.921092) | 16.770729 \/ 10.191392 (6.579337) | 0.226333 \/ 0.680424 (-0.454091) | 0.021835 \/ 0.534201 (-0.512366) | 0.460373 \/ 0.579283 (-0.118910) | 0.479494 \/ 0.434364 (0.045130) | 0.529470 \/ 0.540337 (-0.010868) | 0.718066 \/ 1.386936 (-0.668870) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#9a717b8eb80b0e50b25818127f79a35e0866fb14 \"CML watermark\")\n","
<details><summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details><summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007824 \/ 0.011353 (-0.003529) | 0.004601 \/ 0.011008 (-0.006407) | 0.100025 \/ 0.038508 (0.061517) | 0.096046 \/ 0.023109 (0.072936) | 0.376226 \/ 0.275898 (0.100328) | 0.410905 \/ 0.323480 (0.087425) | 0.006048 \/ 0.007986 (-0.001938) | 0.003817 \/ 0.004328 (-0.000511) | 0.076624 \/ 0.004250 (0.072374) | 0.066390 \/ 0.037052 (0.029338) | 0.380098 \/ 0.258489 (0.121609) | 0.413603 \/ 0.293841 (0.119762) | 0.036546 \/ 0.128546 (-0.092001) | 0.009881 \/ 0.075646 (-0.065765) | 0.344338 \/ 0.419271 (-0.074934) | 0.061882 \/ 0.043533 (0.018350) | 0.368568 \/ 0.255139 (0.113429) | 0.397133 \/ 0.283200 (0.113934) | 0.027255 \/ 0.141683 (-0.114428) | 1.795099 \/ 1.452155 (0.342945) | 1.852443 \/ 1.492716 (0.359727) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.247436 \/ 0.018006 (0.229430) | 0.494119 \/ 0.000490 (0.493629) | 0.004359 \/ 0.000200 (0.004159) | 0.000089 \/ 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.034765 \/ 0.037411 (-0.002647) | 0.104541 \/ 0.014526 (0.090015) | 0.113898 \/ 0.176557 (-0.062659) | 0.183634 \/ 0.737135 (-0.553501) | 0.116423 \/ 0.296338 (-0.179916) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.458747 \/ 0.215209 (0.243538) | 4.555740 \/ 2.077655 (2.478085) | 2.217240 \/ 1.504120 (0.713121) | 2.039879 \/ 1.541195 (0.498684) | 2.088581 \/ 1.468490 (0.620091) | 0.588063 \/ 4.584777 (-3.996714) | 
4.238226 \/ 3.745712 (0.492514) | 4.768060 \/ 5.269862 (-0.501802) | 2.857117 \/ 4.565676 (-1.708560) | 0.068742 \/ 0.424275 (-0.355533) | 0.008667 \/ 0.007607 (0.001059) | 0.549294 \/ 0.226044 (0.323249) | 5.464635 \/ 2.268929 (3.195706) | 2.744435 \/ 55.444624 (-52.700189) | 2.347660 \/ 6.876477 (-4.528816) | 2.616816 \/ 2.142072 (0.474743) | 0.703701 \/ 4.805227 (-4.101526) | 0.159749 \/ 6.500664 (-6.340915) | 0.071990 \/ 0.075469 (-0.003479) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.486599 \/ 1.841788 (-0.355188) | 22.745438 \/ 8.074308 (14.671130) | 16.822332 \/ 10.191392 (6.630940) | 0.184730 \/ 0.680424 (-0.495694) | 0.021267 \/ 0.534201 (-0.512934) | 0.467108 \/ 0.579283 (-0.112176) | 0.472674 \/ 0.434364 (0.038311) | 0.548094 \/ 0.540337 (0.007756) | 0.735885 \/ 1.386936 (-0.651051) |\n\n<\/details>\nPyArrow==latest\n\n
<details><summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007746 \/ 0.011353 (-0.003607) | 0.004585 \/ 0.011008 (-0.006423) | 0.076943 \/ 0.038508 (0.038435) | 0.087473 \/ 0.023109 (0.064363) | 0.480099 \/ 0.275898 (0.204201) | 0.495271 \/ 0.323480 (0.171791) | 0.006348 \/ 0.007986 (-0.001638) | 0.003902 \/ 0.004328 (-0.000426) | 0.077586 \/ 0.004250 (0.073335) | 0.066467 \/ 0.037052 (0.029415) | 0.468741 \/ 0.258489 (0.210252) | 0.506778 \/ 0.293841 (0.212937) | 0.036877 \/ 0.128546 (-0.091669) | 0.010102 \/ 0.075646 (-0.065545) | 0.084419 \/ 0.419271 (-0.334852) | 0.058721 \/ 0.043533 (0.015188) | 0.453633 \/ 0.255139 (0.198494) | 0.481171 \/ 0.283200 (0.197971) | 0.028716 \/ 0.141683 (-0.112967) | 1.853048 \/ 1.452155 (0.400893) | 1.885847 \/ 1.492716 (0.393130) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.192136 \/ 0.018006 (0.174130) | 0.484481 \/ 0.000490 (0.483991) | 0.002951 \/ 0.000200 (0.002751) | 0.000098 \/ 0.000054 (0.000044) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.037949 \/ 0.037411 (0.000538) | 0.108364 \/ 0.014526 (0.093838) | 0.119542 \/ 0.176557 (-0.057014) | 0.188542 \/ 0.737135 (-0.548593) | 0.122011 \/ 0.296338 (-0.174327) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.483135 \/ 0.215209 (0.267926) | 4.849715 \/ 2.077655 (2.772060) | 2.497736 \/ 1.504120 (0.993616) | 2.314243 \/ 1.541195 (0.773048) | 2.412739 \/ 1.468490 (0.944249) | 0.564137 \/ 4.584777 (-4.020639) | 
4.242273 \/ 3.745712 (0.496561) | 6.337843 \/ 5.269862 (1.067982) | 3.923250 \/ 4.565676 (-0.642426) | 0.066464 \/ 0.424275 (-0.357811) | 0.009217 \/ 0.007607 (0.001610) | 0.575667 \/ 0.226044 (0.349623) | 5.746187 \/ 2.268929 (3.477258) | 3.069655 \/ 55.444624 (-52.374969) | 2.674798 \/ 6.876477 (-4.201679) | 2.956535 \/ 2.142072 (0.814463) | 0.701043 \/ 4.805227 (-4.104185) | 0.157241 \/ 6.500664 (-6.343423) | 0.073175 \/ 0.075469 (-0.002294) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.609943 \/ 1.841788 (-0.231844) | 23.478594 \/ 8.074308 (15.404286) | 17.454437 \/ 10.191392 (7.263045) | 0.186422 \/ 0.680424 (-0.494002) | 0.021703 \/ 0.534201 (-0.512498) | 0.471704 \/ 0.579283 (-0.107579) | 0.480553 \/ 0.434364 (0.046189) | 0.552881 \/ 0.540337 (0.012544) | 0.722515 \/ 1.386936 (-0.664421) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#84645f80049cd00d9e0d4908faf3c3203fdcf21d \"CML watermark\")\n","
<details><summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details><summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007542 \/ 0.011353 (-0.003811) | 0.004692 \/ 0.011008 (-0.006316) | 0.099155 \/ 0.038508 (0.060647) | 0.089365 \/ 0.023109 (0.066256) | 0.370870 \/ 0.275898 (0.094972) | 0.422152 \/ 0.323480 (0.098673) | 0.006223 \/ 0.007986 (-0.001763) | 0.003852 \/ 0.004328 (-0.000476) | 0.075438 \/ 0.004250 (0.071188) | 0.065973 \/ 0.037052 (0.028921) | 0.381513 \/ 0.258489 (0.123024) | 0.416196 \/ 0.293841 (0.122355) | 0.035483 \/ 0.128546 (-0.093063) | 0.009884 \/ 0.075646 (-0.065762) | 0.341290 \/ 0.419271 (-0.077982) | 0.060546 \/ 0.043533 (0.017014) | 0.365101 \/ 0.255139 (0.109962) | 0.391058 \/ 0.283200 (0.107859) | 0.026325 \/ 0.141683 (-0.115358) | 1.815168 \/ 1.452155 (0.363013) | 1.834711 \/ 1.492716 (0.341994) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.222177 \/ 0.018006 (0.204171) | 0.501151 \/ 0.000490 (0.500662) | 0.010202 \/ 0.000200 (0.010002) | 0.000102 \/ 0.000054 (0.000048) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.034043 \/ 0.037411 (-0.003368) | 0.097884 \/ 0.014526 (0.083358) | 0.114022 \/ 0.176557 (-0.062534) | 0.186200 \/ 0.737135 (-0.550935) | 0.115555 \/ 0.296338 (-0.180783) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.485857 \/ 0.215209 (0.270648) | 4.959263 \/ 2.077655 (2.881608) | 2.501085 \/ 1.504120 (0.996965) | 2.234660 \/ 1.541195 (0.693465) | 2.238585 \/ 1.468490 (0.770095) | 0.645431 \/ 4.584777 (-3.939345) | 
4.434311 \/ 3.745712 (0.688599) | 4.771491 \/ 5.269862 (-0.498371) | 2.778963 \/ 4.565676 (-1.786714) | 0.075615 \/ 0.424275 (-0.348660) | 0.009502 \/ 0.007607 (0.001895) | 0.546539 \/ 0.226044 (0.320495) | 5.464242 \/ 2.268929 (3.195314) | 2.894101 \/ 55.444624 (-52.550524) | 2.513761 \/ 6.876477 (-4.362715) | 2.719843 \/ 2.142072 (0.577770) | 0.678828 \/ 4.805227 (-4.126399) | 0.157839 \/ 6.500664 (-6.342825) | 0.071305 \/ 0.075469 (-0.004164) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.496879 \/ 1.841788 (-0.344909) | 22.214452 \/ 8.074308 (14.140144) | 17.707541 \/ 10.191392 (7.516149) | 0.197008 \/ 0.680424 (-0.483416) | 0.024883 \/ 0.534201 (-0.509318) | 0.493611 \/ 0.579283 (-0.085672) | 0.500677 \/ 0.434364 (0.066313) | 0.569381 \/ 0.540337 (0.029044) | 0.773950 \/ 1.386936 (-0.612986) |\n\n<\/details>\nPyArrow==latest\n\n
<details><summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007337 \/ 0.011353 (-0.004015) | 0.004572 \/ 0.011008 (-0.006436) | 0.091123 \/ 0.038508 (0.052615) | 0.079762 \/ 0.023109 (0.056652) | 0.450527 \/ 0.275898 (0.174629) | 0.525097 \/ 0.323480 (0.201617) | 0.005873 \/ 0.007986 (-0.002112) | 0.003797 \/ 0.004328 (-0.000532) | 0.076259 \/ 0.004250 (0.072009) | 0.062745 \/ 0.037052 (0.025692) | 0.465553 \/ 0.258489 (0.207064) | 0.546026 \/ 0.293841 (0.252186) | 0.035638 \/ 0.128546 (-0.092909) | 0.010086 \/ 0.075646 (-0.065560) | 0.109269 \/ 0.419271 (-0.310002) | 0.056765 \/ 0.043533 (0.013233) | 0.440887 \/ 0.255139 (0.185748) | 0.513325 \/ 0.283200 (0.230125) | 0.027206 \/ 0.141683 (-0.114476) | 1.863564 \/ 1.452155 (0.411409) | 1.918206 \/ 1.492716 (0.425490) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.266479 \/ 0.018006 (0.248473) | 0.487971 \/ 0.000490 (0.487481) | 0.012246 \/ 0.000200 (0.012046) | 0.000119 \/ 0.000054 (0.000065) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.035281 \/ 0.037411 (-0.002130) | 0.102991 \/ 0.014526 (0.088465) | 0.114638 \/ 0.176557 (-0.061919) | 0.184117 \/ 0.737135 (-0.553018) | 0.117943 \/ 0.296338 (-0.178396) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.497897 \/ 0.215209 (0.282688) | 4.973806 \/ 2.077655 (2.896151) | 2.596146 \/ 1.504120 (1.092026) | 2.419694 \/ 1.541195 (0.878499) | 2.525784 \/ 1.468490 (1.057294) | 0.568021 \/ 4.584777 (-4.016756) | 
4.296431 \/ 3.745712 (0.550719) | 3.690682 \/ 5.269862 (-1.579179) | 2.345965 \/ 4.565676 (-2.219712) | 0.066859 \/ 0.424275 (-0.357416) | 0.009093 \/ 0.007607 (0.001486) | 0.582616 \/ 0.226044 (0.356571) | 5.826528 \/ 2.268929 (3.557600) | 3.253222 \/ 55.444624 (-52.191403) | 2.798447 \/ 6.876477 (-4.078030) | 3.054609 \/ 2.142072 (0.912537) | 0.678816 \/ 4.805227 (-4.126411) | 0.157966 \/ 6.500664 (-6.342698) | 0.073797 \/ 0.075469 (-0.001672) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.599480 \/ 1.841788 (-0.242308) | 23.249738 \/ 8.074308 (15.175430) | 16.965406 \/ 10.191392 (6.774014) | 0.171390 \/ 0.680424 (-0.509034) | 0.021810 \/ 0.534201 (-0.512391) | 0.483339 \/ 0.579283 (-0.095944) | 0.496615 \/ 0.434364 (0.062251) | 0.583786 \/ 0.540337 (0.043448) | 0.741699 \/ 1.386936 (-0.645237) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#7935cd2e564f5d1c66ed1acf731703724ba7a287 \"CML watermark\")\n","
<details><summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details><summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006054 \/ 0.011353 (-0.005299) | 0.003706 \/ 0.011008 (-0.007302) | 0.080060 \/ 0.038508 (0.041552) | 0.061479 \/ 0.023109 (0.038370) | 0.327981 \/ 0.275898 (0.052083) | 0.356930 \/ 0.323480 (0.033450) | 0.004671 \/ 0.007986 (-0.003315) | 0.002901 \/ 0.004328 (-0.001428) | 0.062425 \/ 0.004250 (0.058174) | 0.046310 \/ 0.037052 (0.009258) | 0.323657 \/ 0.258489 (0.065168) | 0.370130 \/ 0.293841 (0.076289) | 0.027151 \/ 0.128546 (-0.101395) | 0.007850 \/ 0.075646 (-0.067797) | 0.262300 \/ 0.419271 (-0.156971) | 0.045456 \/ 0.043533 (0.001923) | 0.325569 \/ 0.255139 (0.070430) | 0.352962 \/ 0.283200 (0.069762) | 0.020156 \/ 0.141683 (-0.121527) | 1.429404 \/ 1.452155 (-0.022750) | 1.615032 \/ 1.492716 (0.122316) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.187309 \/ 0.018006 (0.169303) | 0.428848 \/ 0.000490 (0.428358) | 0.003599 \/ 0.000200 (0.003399) | 0.000069 \/ 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.023260 \/ 0.037411 (-0.014151) | 0.072467 \/ 0.014526 (0.057941) | 0.082398 \/ 0.176557 (-0.094159) | 0.142573 \/ 0.737135 (-0.594562) | 0.082570 \/ 0.296338 (-0.213768) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.426503 \/ 0.215209 (0.211294) | 4.267875 \/ 2.077655 (2.190220) | 2.189762 \/ 1.504120 (0.685642) | 2.027992 \/ 1.541195 (0.486798) | 2.053211 \/ 1.468490 (0.584721) | 0.503850 \/ 4.584777 (-4.080927) | 
3.086444 \/ 3.745712 (-0.659268) | 3.319492 \/ 5.269862 (-1.950370) | 2.070714 \/ 4.565676 (-2.494962) | 0.057591 \/ 0.424275 (-0.366684) | 0.006407 \/ 0.007607 (-0.001200) | 0.501145 \/ 0.226044 (0.275100) | 5.017753 \/ 2.268929 (2.748825) | 2.643145 \/ 55.444624 (-52.801479) | 2.327440 \/ 6.876477 (-4.549037) | 2.460250 \/ 2.142072 (0.318178) | 0.589397 \/ 4.805227 (-4.215830) | 0.124948 \/ 6.500664 (-6.375716) | 0.060450 \/ 0.075469 (-0.015020) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.279870 \/ 1.841788 (-0.561918) | 18.115908 \/ 8.074308 (10.041600) | 13.570032 \/ 10.191392 (3.378640) | 0.132981 \/ 0.680424 (-0.547442) | 0.016942 \/ 0.534201 (-0.517259) | 0.333591 \/ 0.579283 (-0.245692) | 0.358844 \/ 0.434364 (-0.075520) | 0.395748 \/ 0.540337 (-0.144590) | 0.546213 \/ 1.386936 (-0.840723) |\n\n<\/details>\nPyArrow==latest\n\n
<details><summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006062 \/ 0.011353 (-0.005291) | 0.003673 \/ 0.011008 (-0.007336) | 0.064726 \/ 0.038508 (0.026218) | 0.061854 \/ 0.023109 (0.038745) | 0.385343 \/ 0.275898 (0.109445) | 0.441284 \/ 0.323480 (0.117805) | 0.004830 \/ 0.007986 (-0.003156) | 0.002909 \/ 0.004328 (-0.001420) | 0.063874 \/ 0.004250 (0.059624) | 0.049331 \/ 0.037052 (0.012278) | 0.418484 \/ 0.258489 (0.159995) | 0.451397 \/ 0.293841 (0.157556) | 0.027665 \/ 0.128546 (-0.100881) | 0.008088 \/ 0.075646 (-0.067558) | 0.069625 \/ 0.419271 (-0.349646) | 0.043437 \/ 0.043533 (-0.000095) | 0.359789 \/ 0.255139 (0.104650) | 0.430206 \/ 0.283200 (0.147007) | 0.022308 \/ 0.141683 (-0.119375) | 1.461030 \/ 1.452155 (0.008875) | 1.513683 \/ 1.492716 (0.020966) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.230958 \/ 0.018006 (0.212952) | 0.417553 \/ 0.000490 (0.417063) | 0.000802 \/ 0.000200 (0.000602) | 0.000066 \/ 0.000054 (0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025421 \/ 0.037411 (-0.011991) | 0.077156 \/ 0.014526 (0.062630) | 0.087533 \/ 0.176557 (-0.089024) | 0.138048 \/ 0.737135 (-0.599087) | 0.089358 \/ 0.296338 (-0.206981) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.439172 \/ 0.215209 (0.223963) | 4.409509 \/ 2.077655 (2.331854) | 2.491270 \/ 1.504120 (0.987150) | 2.308446 \/ 1.541195 (0.767252) | 2.378440 \/ 1.468490 (0.909950) | 0.499834 \/ 4.584777 (-4.084943) | 
3.083168 \/ 3.745712 (-0.662544) | 2.867543 \/ 5.269862 (-2.402318) | 1.876354 \/ 4.565676 (-2.689323) | 0.057092 \/ 0.424275 (-0.367183) | 0.006955 \/ 0.007607 (-0.000653) | 0.513799 \/ 0.226044 (0.287754) | 5.126660 \/ 2.268929 (2.857731) | 2.917348 \/ 55.444624 (-52.527277) | 2.508035 \/ 6.876477 (-4.368441) | 2.698089 \/ 2.142072 (0.556016) | 0.586828 \/ 4.805227 (-4.218399) | 0.124740 \/ 6.500664 (-6.375924) | 0.062276 \/ 0.075469 (-0.013193) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.291624 \/ 1.841788 (-0.550164) | 18.199968 \/ 8.074308 (10.125660) | 13.888139 \/ 10.191392 (3.696747) | 0.162955 \/ 0.680424 (-0.517469) | 0.017343 \/ 0.534201 (-0.516858) | 0.334683 \/ 0.579283 (-0.244600) | 0.352708 \/ 0.434364 (-0.081656) | 0.400629 \/ 0.540337 (-0.139708) | 0.539497 \/ 1.386936 (-0.847439) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#e7976db7fe22c6b93a869488d07b8137ea6a0db4 \"CML watermark\")\n","
<details><summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details><summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007500 \/ 0.011353 (-0.003853) | 0.004498 \/ 0.011008 (-0.006510) | 0.100239 \/ 0.038508 (0.061731) | 0.083424 \/ 0.023109 (0.060315) | 0.366664 \/ 0.275898 (0.090766) | 0.406641 \/ 0.323480 (0.083161) | 0.004577 \/ 0.007986 (-0.003409) | 0.004809 \/ 0.004328 (0.000480) | 0.076898 \/ 0.004250 (0.072647) | 0.064021 \/ 0.037052 (0.026969) | 0.375836 \/ 0.258489 (0.117347) | 0.413008 \/ 0.293841 (0.119167) | 0.036010 \/ 0.128546 (-0.092537) | 0.009655 \/ 0.075646 (-0.065991) | 0.342595 \/ 0.419271 (-0.076677) | 0.061846 \/ 0.043533 (0.018313) | 0.376543 \/ 0.255139 (0.121404) | 0.395858 \/ 0.283200 (0.112659) | 0.026792 \/ 0.141683 (-0.114891) | 1.775569 \/ 1.452155 (0.323414) | 1.865077 \/ 1.492716 (0.372360) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.221521 \/ 0.018006 (0.203514) | 0.474604 \/ 0.000490 (0.474114) | 0.004354 \/ 0.000200 (0.004154) | 0.000090 \/ 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.032947 \/ 0.037411 (-0.004464) | 0.100454 \/ 0.014526 (0.085928) | 0.111955 \/ 0.176557 (-0.064602) | 0.179752 \/ 0.737135 (-0.557383) | 0.114282 \/ 0.296338 (-0.182056) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.458261 \/ 0.215209 (0.243052) | 4.563536 \/ 2.077655 (2.485881) | 2.231928 \/ 1.504120 (0.727808) | 2.036751 \/ 1.541195 (0.495556) | 2.170413 \/ 1.468490 (0.701923) | 0.570825 \/ 4.584777 (-4.013952) | 
4.505762 \/ 3.745712 (0.760050) | 5.033461 \/ 5.269862 (-0.236401) | 2.704989 \/ 4.565676 (-1.860687) | 0.067011 \/ 0.424275 (-0.357264) | 0.008568 \/ 0.007607 (0.000961) | 0.545151 \/ 0.226044 (0.319106) | 5.438984 \/ 2.268929 (3.170055) | 2.771818 \/ 55.444624 (-52.672806) | 2.393082 \/ 6.876477 (-4.483395) | 2.467173 \/ 2.142072 (0.325101) | 0.678849 \/ 4.805227 (-4.126379) | 0.160480 \/ 6.500664 (-6.340184) | 0.073681 \/ 0.075469 (-0.001788) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.532272 \/ 1.841788 (-0.309516) | 22.548741 \/ 8.074308 (14.474433) | 17.091044 \/ 10.191392 (6.899652) | 0.172100 \/ 0.680424 (-0.508324) | 0.022220 \/ 0.534201 (-0.511981) | 0.467871 \/ 0.579283 (-0.111412) | 0.491135 \/ 0.434364 (0.056771) | 0.548433 \/ 0.540337 (0.008096) | 0.733340 \/ 1.386936 (-0.653596) |\n\n<\/details>\nPyArrow==latest\n\n
<details><summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007593 \/ 0.011353 (-0.003760) | 0.004656 \/ 0.011008 (-0.006352) | 0.076940 \/ 0.038508 (0.038431) | 0.085183 \/ 0.023109 (0.062073) | 0.447178 \/ 0.275898 (0.171280) | 0.469545 \/ 0.323480 (0.146065) | 0.006023 \/ 0.007986 (-0.001962) | 0.003808 \/ 0.004328 (-0.000520) | 0.076767 \/ 0.004250 (0.072517) | 0.065713 \/ 0.037052 (0.028661) | 0.445573 \/ 0.258489 (0.187084) | 0.481689 \/ 0.293841 (0.187848) | 0.036893 \/ 0.128546 (-0.091654) | 0.009976 \/ 0.075646 (-0.065670) | 0.084443 \/ 0.419271 (-0.334829) | 0.058829 \/ 0.043533 (0.015297) | 0.429291 \/ 0.255139 (0.174152) | 0.454016 \/ 0.283200 (0.170816) | 0.027289 \/ 0.141683 (-0.114394) | 1.806786 \/ 1.452155 (0.354632) | 1.887680 \/ 1.492716 (0.394964) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.241012 \/ 0.018006 (0.223006) | 0.470629 \/ 0.000490 (0.470139) | 0.003213 \/ 0.000200 (0.003013) | 0.000107 \/ 0.000054 (0.000052) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.036896 \/ 0.037411 (-0.000515) | 0.106932 \/ 0.014526 (0.092406) | 0.120333 \/ 0.176557 (-0.056223) | 0.186271 \/ 0.737135 (-0.550865) | 0.121581 \/ 0.296338 (-0.174758) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.507782 \/ 0.215209 (0.292573) | 5.062932 \/ 2.077655 (2.985278) | 2.689539 \/ 1.504120 (1.185419) | 2.482978 \/ 1.541195 (0.941784) | 2.561320 \/ 1.468490 (1.092830) | 0.570664 \/ 4.584777 (-4.014113) | 
4.346051 \/ 3.745712 (0.600339) | 6.479374 \/ 5.269862 (1.209513) | 4.096483 \/ 4.565676 (-0.469194) | 0.067564 \/ 0.424275 (-0.356711) | 0.009147 \/ 0.007607 (0.001540) | 0.596059 \/ 0.226044 (0.370015) | 5.963223 \/ 2.268929 (3.694295) | 3.201039 \/ 55.444624 (-52.243585) | 2.816581 \/ 6.876477 (-4.059896) | 3.047821 \/ 2.142072 (0.905748) | 0.687749 \/ 4.805227 (-4.117478) | 0.158174 \/ 6.500664 (-6.342490) | 0.073329 \/ 0.075469 (-0.002140) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.601346 \/ 1.841788 (-0.240441) | 23.712210 \/ 8.074308 (15.637902) | 16.567272 \/ 10.191392 (6.375880) | 0.224745 \/ 0.680424 (-0.455679) | 0.021662 \/ 0.534201 (-0.512539) | 0.471427 \/ 0.579283 (-0.107856) | 0.498751 \/ 0.434364 (0.064387) | 0.572047 \/ 0.540337 (0.031710) | 0.821868 \/ 1.386936 (-0.565068) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#34d0c9027c750adc89f3d04a6bf2e9cb95915da4 \"CML watermark\")\n"],"created_at":1689262904000,"updated_at":1689272945000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"draft":true,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/6028","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6028","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6028.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6028.patch","merged_at":null},"body":"Thanks to @janineguo 's work in https:\/\/github.com\/huggingface\/datasets\/pull\/5919 which was needed to support HfFileSystem.\r\n\r\n## Implementation details\r\n\r\nI replaced all the from_hf_repo and from_local_or_remote in data_files.py to only use a new `from_patterns` which works for any fsspec path, including hf:\/\/ paths, https:\/\/ URLs and local paths. This simplifies the codebase since there is no logic duplication anymore when it comes to data files resolution.\r\n\r\nI added `_prepare_path_and_storage_options` which returns the right storage_options to use given a path and a `DownloadConfig`. 
This is the only place where the logic depends on the filesystem type that must be used.\r\n\r\nI also removed the `get_metadata_data_files_list ` and `get_patterns_and_data_files` functions added recently, since data files resolution is now handled using a common interface.\r\n\r\n## Breaking changes\r\n\r\nDataFilesList and DataFilesDict:\r\n- use `str` paths instead of `Union[Path, Url]`\r\n- support hf:\/\/ paths\r\n\r\nclose https:\/\/github.com\/huggingface\/datasets\/issues\/6017","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6028\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6028\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6027","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6027\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6027\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6027\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6027","id":1803008486,"node_id":"PR_kwDODunzps5Va4g3","number":6027,"title":"Delete `task_templates` in `IterableDataset` when they are no longer valid","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
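The `from_patterns` refactor described in the PR #6028 body above funnels local paths, https:// URLs and hf:// repo paths through a single fsspec-based resolution step. Below is a minimal sketch of that idea using plain fsspec; `resolve_pattern` is a hypothetical helper standing in for the `datasets` internals (the actual `from_patterns` / `_prepare_path_and_storage_options` signatures are not shown in the PR body, and the repo path is a placeholder):

```python
# Illustrative sketch only: mirrors the "one resolution path for every
# filesystem" idea from the PR using plain fsspec. resolve_pattern is a
# hypothetical helper, not the actual datasets API.
from typing import Optional

import fsspec


def resolve_pattern(pattern: str, storage_options: Optional[dict] = None) -> list:
    """Expand a glob pattern into concrete file paths on any fsspec filesystem."""
    fs, _, paths = fsspec.get_fs_token_paths(pattern, storage_options=storage_options)
    # Re-attach the protocol so hf:// or https:// paths stay usable downstream.
    return [fs.unstrip_protocol(p) for p in paths]


# Local globs, https:// URLs and hf:// repo paths all take the same code path:
local_files = resolve_pattern("data/*.csv")
hub_files = resolve_pattern(
    # hf:// resolution assumes huggingface_hub's HfFileSystem is available;
    # "username/dataset_name" is a placeholder repo.
    "hf://datasets/username/dataset_name/**/*.parquet",
    storage_options={"token": None},  # None = anonymous; pass a token for private repos
)
```

The design point the PR makes is that once everything is a string path plus `storage_options`, only the helper that builds those options needs to know which filesystem is in play.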
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008698 \/ 0.011353 (-0.002655) | 0.005250 \/ 0.011008 (-0.005758) | 0.104101 \/ 0.038508 (0.065593) | 0.085021 \/ 0.023109 (0.061912) | 0.426653 \/ 0.275898 (0.150755) | 0.460449 \/ 0.323480 (0.136969) | 0.005222 \/ 0.007986 (-0.002763) | 0.006280 \/ 0.004328 (0.001951) | 0.083458 \/ 0.004250 (0.079207) | 0.066132 \/ 0.037052 (0.029079) | 0.433416 \/ 0.258489 (0.174927) | 0.482718 \/ 0.293841 (0.188877) | 0.048872 \/ 0.128546 (-0.079675) | 0.013699 \/ 0.075646 (-0.061948) | 0.365660 \/ 0.419271 (-0.053611) | 0.071008 \/ 0.043533 (0.027475) | 0.428688 \/ 0.255139 (0.173549) | 0.443554 \/ 0.283200 (0.160354) | 0.035901 \/ 0.141683 (-0.105782) | 1.829296 \/ 1.452155 (0.377141) | 1.862351 \/ 1.492716 (0.369635) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.236284 \/ 0.018006 (0.218278) | 0.584075 \/ 0.000490 (0.583585) | 0.004634 \/ 0.000200 (0.004434) | 0.000125 \/ 0.000054 (0.000070) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.034723 \/ 0.037411 (-0.002688) | 0.100989 \/ 0.014526 (0.086464) | 0.113722 \/ 0.176557 (-0.062834) | 0.187659 \/ 0.737135 (-0.549477) | 0.113937 \/ 0.296338 (-0.182401) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.587500 \/ 0.215209 (0.372291) | 5.847371 \/ 2.077655 (3.769716) | 2.599691 \/ 1.504120 (1.095571) | 2.246187 \/ 1.541195 (0.704992) | 2.419126 \/ 1.468490 (0.950636) | 0.847327 \/ 4.584777 (-3.737450) | 
5.230438 \/ 3.745712 (1.484726) | 7.539021 \/ 5.269862 (2.269160) | 4.617473 \/ 4.565676 (0.051797) | 0.103620 \/ 0.424275 (-0.320655) | 0.009195 \/ 0.007607 (0.001588) | 0.714247 \/ 0.226044 (0.488203) | 7.331621 \/ 2.268929 (5.062693) | 3.416575 \/ 55.444624 (-52.028049) | 2.649467 \/ 6.876477 (-4.227009) | 2.928091 \/ 2.142072 (0.786018) | 1.002155 \/ 4.805227 (-3.803072) | 0.210790 \/ 6.500664 (-6.289874) | 0.081303 \/ 0.075469 (0.005834) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.655431 \/ 1.841788 (-0.186357) | 24.069595 \/ 8.074308 (15.995287) | 20.923766 \/ 10.191392 (10.732374) | 0.232021 \/ 0.680424 (-0.448403) | 0.026355 \/ 0.534201 (-0.507846) | 0.496830 \/ 0.579283 (-0.082453) | 0.582620 \/ 0.434364 (0.148257) | 0.551227 \/ 0.540337 (0.010890) | 0.756389 \/ 1.386936 (-0.630547) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009329 \/ 0.011353 (-0.002024) | 0.005045 \/ 0.011008 (-0.005964) | 0.082116 \/ 0.038508 (0.043608) | 0.082420 \/ 0.023109 (0.059311) | 0.502513 \/ 0.275898 (0.226615) | 0.526098 \/ 0.323480 (0.202618) | 0.007468 \/ 0.007986 (-0.000517) | 0.005477 \/ 0.004328 (0.001148) | 0.082617 \/ 0.004250 (0.078367) | 0.070292 \/ 0.037052 (0.033239) | 0.503290 \/ 0.258489 (0.244801) | 0.541631 \/ 0.293841 (0.247790) | 0.050826 \/ 0.128546 (-0.077721) | 0.014699 \/ 0.075646 (-0.060948) | 0.094441 \/ 0.419271 (-0.324830) | 0.065034 \/ 0.043533 (0.021501) | 0.486778 \/ 0.255139 (0.231639) | 0.516907 \/ 0.283200 (0.233707) | 0.045140 \/ 0.141683 (-0.096543) | 1.831676 \/ 1.452155 (0.379521) | 1.910865 \/ 1.492716 (0.418149) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.286818 \/ 0.018006 (0.268812) | 0.558621 \/ 0.000490 (0.558131) | 0.002830 \/ 0.000200 (0.002630) | 0.000148 \/ 0.000054 (0.000094) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.036716 \/ 0.037411 (-0.000696) | 0.107830 \/ 0.014526 (0.093305) | 0.116368 \/ 0.176557 (-0.060188) | 0.178401 \/ 0.737135 (-0.558734) | 0.124729 \/ 0.296338 (-0.171609) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.633557 \/ 0.215209 (0.418348) | 6.423135 \/ 2.077655 (4.345480) | 2.981883 \/ 1.504120 (1.477763) | 2.755592 \/ 1.541195 (1.214398) | 2.769337 \/ 1.468490 (1.300847) | 0.836219 \/ 4.584777 (-3.748558) | 
5.302030 \/ 3.745712 (1.556318) | 7.463960 \/ 5.269862 (2.194098) | 4.427254 \/ 4.565676 (-0.138422) | 0.095990 \/ 0.424275 (-0.328285) | 0.009264 \/ 0.007607 (0.001657) | 0.770642 \/ 0.226044 (0.544597) | 7.779667 \/ 2.268929 (5.510739) | 3.799115 \/ 55.444624 (-51.645509) | 3.212560 \/ 6.876477 (-3.663917) | 3.281657 \/ 2.142072 (1.139584) | 1.044981 \/ 4.805227 (-3.760246) | 0.210693 \/ 6.500664 (-6.289971) | 0.079466 \/ 0.075469 (0.003997) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.793155 \/ 1.841788 (-0.048632) | 24.691127 \/ 8.074308 (16.616819) | 22.083150 \/ 10.191392 (11.891758) | 0.242246 \/ 0.680424 (-0.438178) | 0.028001 \/ 0.534201 (-0.506200) | 0.494061 \/ 0.579283 (-0.085222) | 0.599288 \/ 0.434364 (0.164924) | 0.552101 \/ 0.540337 (0.011764) | 0.784093 \/ 1.386936 (-0.602843) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#cd429c39604af34bc3a3ba1f463329b23fcbc1e3 \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006658 \/ 0.011353 (-0.004695) | 0.004044 \/ 0.011008 (-0.006965) | 0.085844 \/ 0.038508 (0.047336) | 0.077147 \/ 0.023109 (0.054038) | 0.344387 \/ 0.275898 (0.068489) | 0.376718 \/ 0.323480 (0.053238) | 0.005537 \/ 0.007986 (-0.002448) | 0.003452 \/ 0.004328 (-0.000876) | 0.065326 \/ 0.004250 (0.061076) | 0.057639 \/ 0.037052 (0.020587) | 0.352363 \/ 0.258489 (0.093873) | 0.378939 \/ 0.293841 (0.085098) | 0.031259 \/ 0.128546 (-0.097287) | 0.008464 \/ 0.075646 (-0.067183) | 0.289076 \/ 0.419271 (-0.130195) | 0.052991 \/ 0.043533 (0.009459) | 0.346053 \/ 0.255139 (0.090914) | 0.362761 \/ 0.283200 (0.079561) | 0.023501 \/ 0.141683 (-0.118182) | 1.478312 \/ 1.452155 (0.026157) | 1.545437 \/ 1.492716 (0.052721) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.202964 \/ 0.018006 (0.184957) | 0.534793 \/ 0.000490 (0.534303) | 0.006025 \/ 0.000200 (0.005825) | 0.000225 \/ 0.000054 (0.000171) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029418 \/ 0.037411 (-0.007993) | 0.084297 \/ 0.014526 (0.069771) | 0.096702 \/ 0.176557 (-0.079855) | 0.157355 \/ 0.737135 (-0.579781) | 0.097858 \/ 0.296338 (-0.198480) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.380728 \/ 0.215209 (0.165519) | 3.787712 \/ 2.077655 (1.710057) | 1.836393 \/ 1.504120 (0.332273) | 1.678415 \/ 1.541195 (0.137220) | 1.781800 \/ 1.468490 (0.313310) | 0.478677 \/ 4.584777 (-4.106100) | 
3.614080 \/ 3.745712 (-0.131632) | 3.255637 \/ 5.269862 (-2.014225) | 2.063642 \/ 4.565676 (-2.502035) | 0.056470 \/ 0.424275 (-0.367805) | 0.007408 \/ 0.007607 (-0.000199) | 0.459155 \/ 0.226044 (0.233111) | 4.586679 \/ 2.268929 (2.317750) | 2.305737 \/ 55.444624 (-53.138888) | 1.954755 \/ 6.876477 (-4.921721) | 2.190809 \/ 2.142072 (0.048737) | 0.572426 \/ 4.805227 (-4.232802) | 0.130349 \/ 6.500664 (-6.370315) | 0.059346 \/ 0.075469 (-0.016124) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.253671 \/ 1.841788 (-0.588117) | 19.509015 \/ 8.074308 (11.434706) | 13.951349 \/ 10.191392 (3.759957) | 0.171038 \/ 0.680424 (-0.509386) | 0.018826 \/ 0.534201 (-0.515375) | 0.394642 \/ 0.579283 (-0.184642) | 0.419614 \/ 0.434364 (-0.014750) | 0.470931 \/ 0.540337 (-0.069406) | 0.643858 \/ 1.386936 (-0.743078) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006765 \/ 0.011353 (-0.004587) | 0.003955 \/ 0.011008 (-0.007053) | 0.064377 \/ 0.038508 (0.025869) | 0.076980 \/ 0.023109 (0.053871) | 0.368675 \/ 0.275898 (0.092777) | 0.403746 \/ 0.323480 (0.080267) | 0.005303 \/ 0.007986 (-0.002683) | 0.003257 \/ 0.004328 (-0.001072) | 0.064154 \/ 0.004250 (0.059903) | 0.056975 \/ 0.037052 (0.019923) | 0.376718 \/ 0.258489 (0.118229) | 0.416291 \/ 0.293841 (0.122450) | 0.031444 \/ 0.128546 (-0.097102) | 0.008532 \/ 0.075646 (-0.067115) | 0.070455 \/ 0.419271 (-0.348816) | 0.049032 \/ 0.043533 (0.005499) | 0.361413 \/ 0.255139 (0.106274) | 0.384648 \/ 0.283200 (0.101448) | 0.024050 \/ 0.141683 (-0.117633) | 1.514330 \/ 1.452155 (0.062176) | 1.585424 \/ 1.492716 (0.092708) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.214701 \/ 0.018006 (0.196695) | 0.447706 \/ 0.000490 (0.447216) | 0.000373 \/ 0.000200 (0.000173) | 0.000058 \/ 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.031007 \/ 0.037411 (-0.006404) | 0.090545 \/ 0.014526 (0.076019) | 0.100611 \/ 0.176557 (-0.075945) | 0.154847 \/ 0.737135 (-0.582289) | 0.102864 \/ 0.296338 (-0.193475) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.427740 \/ 0.215209 (0.212531) | 4.273143 \/ 2.077655 (2.195488) | 2.294906 \/ 1.504120 (0.790786) | 2.138460 \/ 1.541195 (0.597265) | 2.274126 \/ 1.468490 (0.805636) | 0.486559 \/ 4.584777 (-4.098218) | 
3.565554 \/ 3.745712 (-0.180158) | 3.377659 \/ 5.269862 (-1.892202) | 2.029883 \/ 4.565676 (-2.535793) | 0.057303 \/ 0.424275 (-0.366972) | 0.007314 \/ 0.007607 (-0.000293) | 0.504263 \/ 0.226044 (0.278219) | 5.041196 \/ 2.268929 (2.772268) | 2.819273 \/ 55.444624 (-52.625351) | 2.421479 \/ 6.876477 (-4.454998) | 2.503063 \/ 2.142072 (0.360991) | 0.581467 \/ 4.805227 (-4.223760) | 0.133532 \/ 6.500664 (-6.367132) | 0.062504 \/ 0.075469 (-0.012965) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.328765 \/ 1.841788 (-0.513022) | 20.131672 \/ 8.074308 (12.057363) | 14.312895 \/ 10.191392 (4.121503) | 0.191199 \/ 0.680424 (-0.489225) | 0.018522 \/ 0.534201 (-0.515679) | 0.393121 \/ 0.579283 (-0.186162) | 0.413122 \/ 0.434364 (-0.021242) | 0.469312 \/ 0.540337 (-0.071026) | 0.633140 \/ 1.386936 (-0.753796) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#dbf6c103f5844de40431478e7e4a64fbf2c2c067 \"CML watermark\")\n"],"created_at":1689254177000,"updated_at":1689257180000,"closed_at":1689256655000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/6027","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6027","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6027.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6027.patch","merged_at":1689256655000},"body":"Fix #6025 ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6027\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6027\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6026","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6026\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6026\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6026\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6026","id":1802929222,"node_id":"PR_kwDODunzps5VanI8","number":6026,"title":"Fix style with ruff 
0.0.278","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_6026). All of your documentation changes will be reflected on that endpoint.","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006444 \/ 0.011353 (-0.004909) | 0.003768 \/ 0.011008 (-0.007240) | 0.079625 \/ 0.038508 (0.041117) | 0.064490 \/ 0.023109 (0.041381) | 0.313858 \/ 0.275898 (0.037960) | 0.350810 \/ 0.323480 (0.027330) | 0.004804 \/ 0.007986 (-0.003182) | 0.002904 \/ 0.004328 (-0.001425) | 0.061728 \/ 0.004250 (0.057477) | 0.052265 \/ 0.037052 (0.015213) | 0.321246 \/ 0.258489 (0.062757) | 0.353873 \/ 0.293841 (0.060032) | 0.027510 \/ 0.128546 (-0.101036) | 0.007942 \/ 0.075646 (-0.067704) | 0.260518 \/ 0.419271 (-0.158754) | 0.045686 \/ 0.043533 (0.002153) | 0.316821 \/ 0.255139 (0.061682) | 0.337086 \/ 0.283200 (0.053886) | 0.022188 \/ 0.141683 (-0.119495) | 1.427345 \/ 1.452155 (-0.024810) | 1.476059 \/ 1.492716 (-0.016657) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.189640 \/ 0.018006 (0.171634) | 0.429724 \/ 0.000490 (0.429235) | 0.005314 \/ 0.000200 (0.005114) | 0.000076 \/ 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.024412 \/ 0.037411 (-0.013000) | 0.073488 \/ 0.014526 (0.058962) | 0.083843 \/ 0.176557 (-0.092714) | 0.147849 \/ 0.737135 (-0.589286) | 0.085465 \/ 0.296338 (-0.210873) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.405314 \/ 0.215209 (0.190105) | 4.071471 \/ 2.077655 (1.993816) | 1.916252 \/ 1.504120 (0.412132) | 1.721616 \/ 1.541195 (0.180422) | 1.807187 \/ 1.468490 (0.338697) | 0.498045 \/ 4.584777 (-4.086732) | 
3.057526 \/ 3.745712 (-0.688187) | 4.451424 \/ 5.269862 (-0.818437) | 2.764020 \/ 4.565676 (-1.801656) | 0.057665 \/ 0.424275 (-0.366610) | 0.006679 \/ 0.007607 (-0.000928) | 0.485733 \/ 0.226044 (0.259688) | 4.844367 \/ 2.268929 (2.575438) | 2.435359 \/ 55.444624 (-53.009265) | 2.111478 \/ 6.876477 (-4.764999) | 2.377448 \/ 2.142072 (0.235375) | 0.587997 \/ 4.805227 (-4.217230) | 0.125545 \/ 6.500664 (-6.375120) | 0.061509 \/ 0.075469 (-0.013960) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.229210 \/ 1.841788 (-0.612577) | 18.553994 \/ 8.074308 (10.479686) | 14.037877 \/ 10.191392 (3.846485) | 0.144230 \/ 0.680424 (-0.536194) | 0.016891 \/ 0.534201 (-0.517310) | 0.329039 \/ 0.579283 (-0.250244) | 0.357269 \/ 0.434364 (-0.077095) | 0.384222 \/ 0.540337 (-0.156115) | 0.521292 \/ 1.386936 (-0.865644) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006359 \/ 0.011353 (-0.004994) | 0.003721 \/ 0.011008 (-0.007287) | 0.062047 \/ 0.038508 (0.023539) | 0.065267 \/ 0.023109 (0.042158) | 0.360164 \/ 0.275898 (0.084266) | 0.402292 \/ 0.323480 (0.078812) | 0.005603 \/ 0.007986 (-0.002382) | 0.002966 \/ 0.004328 (-0.001363) | 0.062580 \/ 0.004250 (0.058330) | 0.053634 \/ 0.037052 (0.016582) | 0.362210 \/ 0.258489 (0.103721) | 0.404285 \/ 0.293841 (0.110444) | 0.027567 \/ 0.128546 (-0.100979) | 0.008119 \/ 0.075646 (-0.067528) | 0.067577 \/ 0.419271 (-0.351694) | 0.042867 \/ 0.043533 (-0.000666) | 0.361576 \/ 0.255139 (0.106437) | 0.389061 \/ 0.283200 (0.105862) | 0.021923 \/ 0.141683 (-0.119760) | 1.446259 \/ 1.452155 (-0.005895) | 1.490724 \/ 1.492716 (-0.001992) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.206433 \/ 0.018006 (0.188427) | 0.424178 \/ 0.000490 (0.423688) | 0.002340 \/ 0.000200 (0.002140) | 0.000069 \/ 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.024955 \/ 0.037411 (-0.012456) | 0.077446 \/ 0.014526 (0.062920) | 0.088540 \/ 0.176557 (-0.088017) | 0.141225 \/ 0.737135 (-0.595910) | 0.089747 \/ 0.296338 (-0.206592) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.443738 \/ 0.215209 (0.228529) | 4.208887 \/ 2.077655 (2.131233) | 2.155127 \/ 1.504120 (0.651007) | 2.028178 \/ 1.541195 (0.486983) | 2.084903 \/ 1.468490 (0.616413) | 0.497530 \/ 4.584777 (-4.087247) 
| 3.069012 \/ 3.745712 (-0.676700) | 3.025184 \/ 5.269862 (-2.244678) | 1.904687 \/ 4.565676 (-2.660990) | 0.057526 \/ 0.424275 (-0.366749) | 0.006482 \/ 0.007607 (-0.001125) | 0.494692 \/ 0.226044 (0.268647) | 4.944437 \/ 2.268929 (2.675508) | 2.655989 \/ 55.444624 (-52.788635) | 2.331677 \/ 6.876477 (-4.544800) | 2.382396 \/ 2.142072 (0.240324) | 0.582019 \/ 4.805227 (-4.223209) | 0.125866 \/ 6.500664 (-6.374799) | 0.062908 \/ 0.075469 (-0.012561) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.294612 \/ 1.841788 (-0.547176) | 19.016152 \/ 8.074308 (10.941844) | 14.088828 \/ 10.191392 (3.897436) | 0.160842 \/ 0.680424 (-0.519582) | 0.017054 \/ 0.534201 (-0.517146) | 0.333647 \/ 0.579283 (-0.245636) | 0.348094 \/ 0.434364 (-0.086270) | 0.394970 \/ 0.540337 (-0.145367) | 0.551141 \/ 1.386936 (-0.835795) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#9e9cfe886792b30b5000808072a0f91ec8536749 \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007442 \/ 0.011353 (-0.003911) | 0.004302 \/ 0.011008 (-0.006707) | 0.087159 \/ 0.038508 (0.048651) | 0.095094 \/ 0.023109 (0.071985) | 0.315422 \/ 0.275898 (0.039524) | 0.346672 \/ 0.323480 (0.023192) | 0.005811 \/ 0.007986 (-0.002174) | 0.003597 \/ 0.004328 (-0.000731) | 0.066400 \/ 0.004250 (0.062150) | 0.065947 \/ 0.037052 (0.028894) | 0.323269 \/ 0.258489 (0.064780) | 0.353309 \/ 0.293841 (0.059468) | 0.032268 \/ 0.128546 (-0.096278) | 0.008696 \/ 0.075646 (-0.066950) | 0.291486 \/ 0.419271 (-0.127786) | 0.054609 \/ 0.043533 (0.011076) | 0.321061 \/ 0.255139 (0.065922) | 0.336907 \/ 0.283200 (0.053707) | 0.027338 \/ 0.141683 (-0.114345) | 1.496442 \/ 1.452155 (0.044287) | 1.576946 \/ 1.492716 (0.084229) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.229140 \/ 0.018006 (0.211134) | 0.487500 \/ 0.000490 (0.487010) | 0.002425 \/ 0.000200 (0.002225) | 0.000089 \/ 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029351 \/ 0.037411 (-0.008060) | 0.089610 \/ 0.014526 (0.075084) | 0.097880 \/ 0.176557 (-0.078676) | 0.155947 \/ 0.737135 (-0.581189) | 0.098593 \/ 0.296338 (-0.197745) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.382911 \/ 0.215209 (0.167702) | 3.820363 \/ 2.077655 (1.742708) | 1.866385 \/ 1.504120 (0.362265) | 1.712910 \/ 1.541195 (0.171716) | 1.813863 \/ 1.468490 (0.345373) | 0.484884 \/ 4.584777 (-4.099893) | 
3.678911 \/ 3.745712 (-0.066801) | 5.249908 \/ 5.269862 (-0.019953) | 3.099614 \/ 4.565676 (-1.466063) | 0.057449 \/ 0.424275 (-0.366826) | 0.007728 \/ 0.007607 (0.000120) | 0.462123 \/ 0.226044 (0.236078) | 4.603942 \/ 2.268929 (2.335014) | 2.380957 \/ 55.444624 (-53.063668) | 2.059621 \/ 6.876477 (-4.816856) | 2.293764 \/ 2.142072 (0.151691) | 0.636471 \/ 4.805227 (-4.168756) | 0.150112 \/ 6.500664 (-6.350552) | 0.063705 \/ 0.075469 (-0.011764) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.358099 \/ 1.841788 (-0.483689) | 20.193750 \/ 8.074308 (12.119442) | 14.297350 \/ 10.191392 (4.105958) | 0.164477 \/ 0.680424 (-0.515947) | 0.018259 \/ 0.534201 (-0.515942) | 0.399010 \/ 0.579283 (-0.180273) | 0.417306 \/ 0.434364 (-0.017058) | 0.456961 \/ 0.540337 (-0.083377) | 0.631068 \/ 1.386936 (-0.755868) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007324 \/ 0.011353 (-0.004028) | 0.004463 \/ 0.011008 (-0.006545) | 0.066148 \/ 0.038508 (0.027640) | 0.093909 \/ 0.023109 (0.070799) | 0.399122 \/ 0.275898 (0.123224) | 0.430226 \/ 0.323480 (0.106746) | 0.005505 \/ 0.007986 (-0.002481) | 0.003579 \/ 0.004328 (-0.000749) | 0.066529 \/ 0.004250 (0.062278) | 0.063471 \/ 0.037052 (0.026418) | 0.406351 \/ 0.258489 (0.147862) | 0.439987 \/ 0.293841 (0.146146) | 0.032640 \/ 0.128546 (-0.095906) | 0.008770 \/ 0.075646 (-0.066877) | 0.072592 \/ 0.419271 (-0.346680) | 0.050429 \/ 0.043533 (0.006896) | 0.390873 \/ 0.255139 (0.135734) | 0.412438 \/ 0.283200 (0.129239) | 0.027113 \/ 0.141683 (-0.114570) | 1.458281 \/ 1.452155 (0.006126) | 1.536819 \/ 1.492716 (0.044103) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.228309 \/ 0.018006 (0.210303) | 0.454042 \/ 0.000490 (0.453552) | 0.000387 \/ 0.000200 (0.000187) | 0.000055 \/ 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029573 \/ 0.037411 (-0.007838) | 0.086433 \/ 0.014526 (0.071907) | 0.097992 \/ 0.176557 (-0.078565) | 0.152464 \/ 0.737135 (-0.584671) | 0.099901 \/ 0.296338 (-0.196437) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.413807 \/ 0.215209 (0.198598) | 4.126395 \/ 2.077655 (2.048740) | 2.113544 \/ 1.504120 (0.609424) | 1.967829 \/ 1.541195 (0.426635) | 2.037123 \/ 1.468490 (0.568633) | 0.489403 \/ 4.584777 (-4.095374) | 
3.689508 \/ 3.745712 (-0.056204) | 3.503909 \/ 5.269862 (-1.765952) | 2.113812 \/ 4.565676 (-2.451864) | 0.057988 \/ 0.424275 (-0.366287) | 0.007336 \/ 0.007607 (-0.000271) | 0.490840 \/ 0.226044 (0.264795) | 4.885040 \/ 2.268929 (2.616112) | 2.627864 \/ 55.444624 (-52.816760) | 2.231467 \/ 6.876477 (-4.645010) | 2.251307 \/ 2.142072 (0.109235) | 0.577370 \/ 4.805227 (-4.227857) | 0.131770 \/ 6.500664 (-6.368895) | 0.061313 \/ 0.075469 (-0.014156) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.362052 \/ 1.841788 (-0.479735) | 21.332694 \/ 8.074308 (13.258386) | 15.562019 \/ 10.191392 (5.370627) | 0.170874 \/ 0.680424 (-0.509550) | 0.019226 \/ 0.534201 (-0.514975) | 0.400311 \/ 0.579283 (-0.178972) | 0.423060 \/ 0.434364 (-0.011304) | 0.469946 \/ 0.540337 (-0.070391) | 0.647745 \/ 1.386936 (-0.739191) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#aec567c2f224f192e6e1f9799e3afc755eb517b2 \"CML watermark\")\n"],"created_at":1689251664000,"updated_at":1689252386000,"closed_at":1689251821000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/6026","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6026","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6026.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6026.patch","merged_at":1689251821000},"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6026\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6026\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6025","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6025\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6025\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6025\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/6025","id":1801852601,"node_id":"I_kwDODunzps5rZha5","number":6025,"title":"Using a dataset for a use other than it was intended 
for.","user":{"login":"surya-narayanan","id":17240858,"node_id":"MDQ6VXNlcjE3MjQwODU4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17240858?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/surya-narayanan","html_url":"https:\/\/github.com\/surya-narayanan","followers_url":"https:\/\/api.github.com\/users\/surya-narayanan\/followers","following_url":"https:\/\/api.github.com\/users\/surya-narayanan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/surya-narayanan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/surya-narayanan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/surya-narayanan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/surya-narayanan\/orgs","repos_url":"https:\/\/api.github.com\/users\/surya-narayanan\/repos","events_url":"https:\/\/api.github.com\/users\/surya-narayanan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/surya-narayanan\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I've opened a PR with a fix. In the meantime, you can avoid the error by deleting `task_templates` with `dataset.info.task_templates = None` before the `interleave_datasets` call.\r\n` "],"created_at":1689201197000,"updated_at":1689256656000,"closed_at":1689256656000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nHi, I want to use the rotten tomatoes dataset but for a task other than classification, but when I interleave the dataset, it throws ```'ValueError: Column label is not present in features.'```. It seems that the label_col must be there in the dataset for some reason? 
\r\n\r\nHere is the full stacktrace\r\n\r\n```\r\n File \"\/home\/suryahari\/Vornoi\/tryage-handoff-other-datasets.py\", line 276, in create_dataloaders \r\n dataset = interleave_datasets(dsfold, stopping_strategy=\"all_exhausted\") \r\n File \"\/home\/suryahari\/miniconda3\/envs\/vornoi\/lib\/python3.10\/site-packages\/datasets\/combine.py\", line 134, in interleave_datasets \r\n return _interleave_iterable_datasets( \r\n File \"\/home\/suryahari\/miniconda3\/envs\/vornoi\/lib\/python3.10\/site-packages\/datasets\/iterable_dataset.py\", line 1833, in _interleave_iterable_datasets \r\n info = DatasetInfo.from_merge([d.info for d in datasets]) \r\n File \"\/home\/suryahari\/miniconda3\/envs\/vornoi\/lib\/python3.10\/site-packages\/datasets\/info.py\", line 275, in from_merge \r\n dataset_infos = [dset_info.copy() for dset_info in dataset_infos if dset_info is not None] \r\n File \"\/home\/suryahari\/miniconda3\/envs\/vornoi\/lib\/python3.10\/site-packages\/datasets\/info.py\", line 275, in <listcomp> \r\n dataset_infos = [dset_info.copy() for dset_info in dataset_infos if dset_info is not None] \r\n File \"\/home\/suryahari\/miniconda3\/envs\/vornoi\/lib\/python3.10\/site-packages\/datasets\/info.py\", line 378, in copy \r\n return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()}) \r\n File \"<string>\", line 20, in __init__ \r\n File \"\/home\/suryahari\/miniconda3\/envs\/vornoi\/lib\/python3.10\/site-packages\/datasets\/info.py\", line 208, in __post_init__ \r\n self.task_templates = [ \r\n File \"\/home\/suryahari\/miniconda3\/envs\/vornoi\/lib\/python3.10\/site-packages\/datasets\/info.py\", line 209, in <listcomp> \r\n template.align_with_features(self.features) for template in (self.task_templates) \r\n File \"\/home\/suryahari\/miniconda3\/envs\/vornoi\/lib\/python3.10\/site-packages\/datasets\/tasks\/text_classification.py\", line 20, in align_with_features \r\n raise ValueError(f\"Column {self.label_column} is not present in features.\") \r\nValueError: Column label is not present in features. \r\n```\n\n### Steps to reproduce the bug\n\nDelete the column `labels` from the `rotten_tomatoes` dataset. Try to interleave it with other datasets.\n\n### Expected behavior\n\nShould let me use the dataset with just the `text` field\n\n### Environment info\n\nlatest datasets library?
I don't think this was an issue in earlier versions.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6025\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6025\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6024","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6024\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6024\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6024\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6024","id":1801708808,"node_id":"PR_kwDODunzps5VWbGe","number":6024,"title":"Don't reference self in Spark._validate_cache_dir","user":{"login":"maddiedawson","id":106995444,"node_id":"U_kgDOBmCe9A","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/106995444?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/maddiedawson","html_url":"https:\/\/github.com\/maddiedawson","followers_url":"https:\/\/api.github.com\/users\/maddiedawson\/followers","following_url":"https:\/\/api.github.com\/users\/maddiedawson\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/maddiedawson\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/maddiedawson\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/maddiedawson\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/maddiedawson\/orgs","repos_url":"https:\/\/api.github.com\/users\/maddiedawson\/repos","events_url":"https:\/\/api.github.com\/users\/maddiedawson\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/maddiedawson\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Ptal @lhoestq :) I tested this manually on a multi-node Databricks cluster","Hm looks like the check_code_quality failures are unrelated to me change... https:\/\/github.com\/huggingface\/datasets\/actions\/runs\/5536162850\/jobs\/10103451883?pr=6024","_The documentation is not available anymore as the PR was closed or merged._","
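The workaround quoted in the #6025 thread above is compact enough to show end to end. A sketch under the datasets version discussed in the issue (the fix in #6027 should make it unnecessary on later releases); the dataset and column names come from the report:

```python
# Workaround from the issue thread: clear the stale text-classification task
# template before interleaving, so DatasetInfo.from_merge does not re-validate
# the removed "label" column.
from datasets import interleave_datasets, load_dataset

train = load_dataset("rotten_tomatoes", split="train", streaming=True)
test = load_dataset("rotten_tomatoes", split="test", streaming=True)

# Use the corpus for a non-classification task: keep only the "text" field.
train = train.remove_columns("label")
test = test.remove_columns("label")

# Without these two lines, interleave_datasets raises
# "ValueError: Column label is not present in features."
train.info.task_templates = None
test.info.task_templates = None

mixed = interleave_datasets([train, test], stopping_strategy="all_exhausted")
print(next(iter(mixed)))
```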
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.005952 \/ 0.011353 (-0.005400) | 0.003585 \/ 0.011008 (-0.007424) | 0.079163 \/ 0.038508 (0.040655) | 0.057926 \/ 0.023109 (0.034817) | 0.326647 \/ 0.275898 (0.050749) | 0.383485 \/ 0.323480 (0.060005) | 0.004530 \/ 0.007986 (-0.003456) | 0.002821 \/ 0.004328 (-0.001508) | 0.062071 \/ 0.004250 (0.057820) | 0.048023 \/ 0.037052 (0.010971) | 0.329368 \/ 0.258489 (0.070879) | 0.390877 \/ 0.293841 (0.097036) | 0.026959 \/ 0.128546 (-0.101588) | 0.007911 \/ 0.075646 (-0.067735) | 0.259956 \/ 0.419271 (-0.159315) | 0.044582 \/ 0.043533 (0.001049) | 0.320537 \/ 0.255139 (0.065398) | 0.373814 \/ 0.283200 (0.090614) | 0.020275 \/ 0.141683 (-0.121408) | 1.532128 \/ 1.452155 (0.079973) | 1.595031 \/ 1.492716 (0.102315) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.186127 \/ 0.018006 (0.168120) | 0.428586 \/ 0.000490 (0.428097) | 0.005180 \/ 0.000200 (0.004980) | 0.000069 \/ 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.024876 \/ 0.037411 (-0.012536) | 0.072169 \/ 0.014526 (0.057643) | 0.082015 \/ 0.176557 (-0.094542) | 0.147467 \/ 0.737135 (-0.589668) | 0.082769 \/ 0.296338 (-0.213570) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.410625 \/ 0.215209 (0.195416) | 4.116742 \/ 2.077655 (2.039088) | 2.172291 \/ 1.504120 (0.668171) | 2.022462 \/ 1.541195 (0.481268) | 2.048142 \/ 1.468490 (0.579651) | 0.503152 \/ 4.584777 (-4.081625) | 
3.019135 \/ 3.745712 (-0.726577) | 3.589451 \/ 5.269862 (-1.680410) | 2.206876 \/ 4.565676 (-2.358801) | 0.057687 \/ 0.424275 (-0.366588) | 0.006560 \/ 0.007607 (-0.001047) | 0.475585 \/ 0.226044 (0.249541) | 4.784344 \/ 2.268929 (2.515416) | 2.506322 \/ 55.444624 (-52.938302) | 2.168251 \/ 6.876477 (-4.708225) | 2.324453 \/ 2.142072 (0.182381) | 0.590609 \/ 4.805227 (-4.214618) | 0.124178 \/ 6.500664 (-6.376486) | 0.059197 \/ 0.075469 (-0.016272) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.212359 \/ 1.841788 (-0.629429) | 17.915843 \/ 8.074308 (9.841535) | 13.128330 \/ 10.191392 (2.936938) | 0.144805 \/ 0.680424 (-0.535618) | 0.016889 \/ 0.534201 (-0.517312) | 0.344056 \/ 0.579283 (-0.235227) | 0.359370 \/ 0.434364 (-0.074994) | 0.404199 \/ 0.540337 (-0.136138) | 0.549117 \/ 1.386936 (-0.837819) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.005914 \/ 0.011353 (-0.005439) | 0.003565 \/ 0.011008 (-0.007443) | 0.061575 \/ 0.038508 (0.023067) | 0.057677 \/ 0.023109 (0.034568) | 0.359753 \/ 0.275898 (0.083855) | 0.394135 \/ 0.323480 (0.070655) | 0.004648 \/ 0.007986 (-0.003338) | 0.002795 \/ 0.004328 (-0.001534) | 0.061877 \/ 0.004250 (0.057626) | 0.049673 \/ 0.037052 (0.012621) | 0.363120 \/ 0.258489 (0.104631) | 0.402685 \/ 0.293841 (0.108844) | 0.027021 \/ 0.128546 (-0.101525) | 0.008006 \/ 0.075646 (-0.067641) | 0.067398 \/ 0.419271 (-0.351874) | 0.044442 \/ 0.043533 (0.000909) | 0.364851 \/ 0.255139 (0.109712) | 0.387219 \/ 0.283200 (0.104019) | 0.027267 \/ 0.141683 (-0.114416) | 1.466675 \/ 1.452155 (0.014520) | 1.512607 \/ 1.492716 (0.019891) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.206156 \/ 0.018006 (0.188150) | 0.410877 \/ 0.000490 (0.410387) | 0.003061 \/ 0.000200 (0.002861) | 0.000068 \/ 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.024869 \/ 0.037411 (-0.012542) | 0.075736 \/ 0.014526 (0.061210) | 0.083922 \/ 0.176557 (-0.092634) | 0.139510 \/ 0.737135 (-0.597626) | 0.087685 \/ 0.296338 (-0.208654) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.414473 \/ 0.215209 (0.199264) | 4.150633 \/ 2.077655 (2.072979) | 2.132892 \/ 1.504120 (0.628773) | 1.964072 \/ 1.541195 (0.422878) | 2.003353 \/ 1.468490 (0.534863) | 0.498012 \/ 4.584777 (-4.086765) | 
3.010135 \/ 3.745712 (-0.735577) | 2.841130 \/ 5.269862 (-2.428732) | 1.826013 \/ 4.565676 (-2.739664) | 0.057443 \/ 0.424275 (-0.366832) | 0.006374 \/ 0.007607 (-0.001234) | 0.490337 \/ 0.226044 (0.264292) | 4.889628 \/ 2.268929 (2.620700) | 2.575626 \/ 55.444624 (-52.868998) | 2.246522 \/ 6.876477 (-4.629955) | 2.276183 \/ 2.142072 (0.134110) | 0.581465 \/ 4.805227 (-4.223763) | 0.123877 \/ 6.500664 (-6.376787) | 0.060339 \/ 0.075469 (-0.015130) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.333202 \/ 1.841788 (-0.508585) | 18.363558 \/ 8.074308 (10.289250) | 14.109356 \/ 10.191392 (3.917964) | 0.147358 \/ 0.680424 (-0.533066) | 0.016813 \/ 0.534201 (-0.517388) | 0.334815 \/ 0.579283 (-0.244468) | 0.366576 \/ 0.434364 (-0.067788) | 0.397223 \/ 0.540337 (-0.143115) | 0.547893 \/ 1.386936 (-0.839043) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#67ac60bcbebe9ddac70264951b1d584c93003cdf \"CML watermark\")\n"],"created_at":1689193876000,"updated_at":1689267512000,"closed_at":1689251829000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/6024","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6024","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6024.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6024.patch","merged_at":1689251829000},"body":"Fix for https:\/\/github.com\/huggingface\/datasets\/issues\/5963","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6024\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6024\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6023","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6023\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6023\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6023\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6023","id":1801272420,"node_id":"PR_kwDODunzps5VU7EG","number":6023,"title":"Fix `ClassLabel` min max check for `None` 
values","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007108 \/ 0.011353 (-0.004245) | 0.004446 \/ 0.011008 (-0.006562) | 0.084013 \/ 0.038508 (0.045505) | 0.084271 \/ 0.023109 (0.061162) | 0.324496 \/ 0.275898 (0.048598) | 0.347783 \/ 0.323480 (0.024303) | 0.004382 \/ 0.007986 (-0.003604) | 0.005200 \/ 0.004328 (0.000872) | 0.065117 \/ 0.004250 (0.060866) | 0.063368 \/ 0.037052 (0.026316) | 0.328731 \/ 0.258489 (0.070242) | 0.356676 \/ 0.293841 (0.062835) | 0.031155 \/ 0.128546 (-0.097392) | 0.008672 \/ 0.075646 (-0.066975) | 0.287573 \/ 0.419271 (-0.131698) | 0.053692 \/ 0.043533 (0.010160) | 0.308796 \/ 0.255139 (0.053657) | 0.330521 \/ 0.283200 (0.047321) | 0.025010 \/ 0.141683 (-0.116672) | 1.498968 \/ 1.452155 (0.046813) | 1.552096 \/ 1.492716 (0.059380) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.263580 \/ 0.018006 (0.245574) | 0.559765 \/ 0.000490 (0.559275) | 0.003450 \/ 0.000200 (0.003250) | 0.000079 \/ 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029403 \/ 0.037411 (-0.008008) | 0.088154 \/ 0.014526 (0.073628) | 0.100372 \/ 0.176557 (-0.076185) | 0.157777 \/ 0.737135 (-0.579359) | 0.102273 \/ 0.296338 (-0.194066) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.387027 \/ 0.215209 (0.171818) | 3.854260 \/ 2.077655 (1.776605) | 1.875159 \/ 1.504120 (0.371039) | 1.703734 \/ 1.541195 (0.162539) | 1.814305 \/ 1.468490 (0.345815) | 0.482524 \/ 4.584777 (-4.102253) | 
3.463602 \/ 3.745712 (-0.282110) | 4.004766 \/ 5.269862 (-1.265095) | 2.406751 \/ 4.565676 (-2.158925) | 0.057069 \/ 0.424275 (-0.367206) | 0.007448 \/ 0.007607 (-0.000159) | 0.465801 \/ 0.226044 (0.239757) | 4.636700 \/ 2.268929 (2.367771) | 2.329475 \/ 55.444624 (-53.115150) | 1.998330 \/ 6.876477 (-4.878146) | 2.264617 \/ 2.142072 (0.122544) | 0.577998 \/ 4.805227 (-4.227230) | 0.130846 \/ 6.500664 (-6.369818) | 0.059713 \/ 0.075469 (-0.015756) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.275931 \/ 1.841788 (-0.565857) | 20.396288 \/ 8.074308 (12.321980) | 13.875242 \/ 10.191392 (3.683850) | 0.164367 \/ 0.680424 (-0.516057) | 0.018573 \/ 0.534201 (-0.515628) | 0.397516 \/ 0.579283 (-0.181767) | 0.398977 \/ 0.434364 (-0.035387) | 0.462386 \/ 0.540337 (-0.077951) | 0.610129 \/ 1.386936 (-0.776807) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006912 \/ 0.011353 (-0.004441) | 0.004212 \/ 0.011008 (-0.006797) | 0.065707 \/ 0.038508 (0.027199) | 0.090435 \/ 0.023109 (0.067325) | 0.380539 \/ 0.275898 (0.104641) | 0.412692 \/ 0.323480 (0.089212) | 0.005545 \/ 0.007986 (-0.002441) | 0.003657 \/ 0.004328 (-0.000672) | 0.065380 \/ 0.004250 (0.061130) | 0.062901 \/ 0.037052 (0.025848) | 0.385931 \/ 0.258489 (0.127442) | 0.416272 \/ 0.293841 (0.122431) | 0.031974 \/ 0.128546 (-0.096572) | 0.008783 \/ 0.075646 (-0.066863) | 0.071424 \/ 0.419271 (-0.347847) | 0.049454 \/ 0.043533 (0.005921) | 0.374231 \/ 0.255139 (0.119092) | 0.386530 \/ 0.283200 (0.103331) | 0.025404 \/ 0.141683 (-0.116279) | 1.469869 \/ 1.452155 (0.017715) | 1.548629 \/ 1.492716 (0.055913) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.218413 \/ 0.018006 (0.200406) | 0.573863 \/ 0.000490 (0.573373) | 0.004156 \/ 0.000200 (0.003956) | 0.000097 \/ 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.032610 \/ 0.037411 (-0.004801) | 0.088270 \/ 0.014526 (0.073744) | 0.106821 \/ 0.176557 (-0.069735) | 0.164498 \/ 0.737135 (-0.572638) | 0.106881 \/ 0.296338 (-0.189457) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.433730 \/ 0.215209 (0.218520) | 4.323902 \/ 2.077655 (2.246247) | 2.308607 \/ 1.504120 (0.804487) | 2.138888 \/ 1.541195 (0.597693) | 2.246760 \/ 1.468490 (0.778269) | 0.486863 \/ 4.584777 (-4.097914) | 
3.561826 \/ 3.745712 (-0.183886) | 5.592685 \/ 5.269862 (0.322824) | 3.318560 \/ 4.565676 (-1.247116) | 0.057348 \/ 0.424275 (-0.366927) | 0.007434 \/ 0.007607 (-0.000174) | 0.506767 \/ 0.226044 (0.280723) | 5.083097 \/ 2.268929 (2.814168) | 2.780618 \/ 55.444624 (-52.664006) | 2.456924 \/ 6.876477 (-4.419553) | 2.564184 \/ 2.142072 (0.422112) | 0.580693 \/ 4.805227 (-4.224534) | 0.134471 \/ 6.500664 (-6.366194) | 0.062883 \/ 0.075469 (-0.012586) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.346618 \/ 1.841788 (-0.495169) | 20.547998 \/ 8.074308 (12.473690) | 14.404159 \/ 10.191392 (4.212767) | 0.176612 \/ 0.680424 (-0.503812) | 0.018372 \/ 0.534201 (-0.515829) | 0.395636 \/ 0.579283 (-0.183647) | 0.410661 \/ 0.434364 (-0.023703) | 0.468782 \/ 0.540337 (-0.071555) | 0.637476 \/ 1.386936 (-0.749460) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#0172d4dac0ca823e8bd293cfd4d28e78d92efe42 \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009896 \/ 0.011353 (-0.001457) | 0.004658 \/ 0.011008 (-0.006351) | 0.101185 \/ 0.038508 (0.062677) | 0.075480 \/ 0.023109 (0.052371) | 0.410620 \/ 0.275898 (0.134722) | 0.470639 \/ 0.323480 (0.147159) | 0.007042 \/ 0.007986 (-0.000943) | 0.003909 \/ 0.004328 (-0.000419) | 0.079676 \/ 0.004250 (0.075425) | 0.066921 \/ 0.037052 (0.029869) | 0.423624 \/ 0.258489 (0.165135) | 0.473008 \/ 0.293841 (0.179167) | 0.048492 \/ 0.128546 (-0.080054) | 0.012833 \/ 0.075646 (-0.062813) | 0.335286 \/ 0.419271 (-0.083985) | 0.083506 \/ 0.043533 (0.039973) | 0.401918 \/ 0.255139 (0.146779) | 0.467975 \/ 0.283200 (0.184775) | 0.050025 \/ 0.141683 (-0.091658) | 1.679392 \/ 1.452155 (0.227237) | 1.852812 \/ 1.492716 (0.360095) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.248067 \/ 0.018006 (0.230061) | 0.584818 \/ 0.000490 (0.584328) | 0.021558 \/ 0.000200 (0.021358) | 0.000104 \/ 0.000054 (0.000050) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.028572 \/ 0.037411 (-0.008839) | 0.097212 \/ 0.014526 (0.082686) | 0.121675 \/ 0.176557 (-0.054881) | 0.186597 \/ 0.737135 (-0.550538) | 0.122285 \/ 0.296338 (-0.174053) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.586279 \/ 0.215209 (0.371070) | 5.634402 \/ 2.077655 (3.556747) | 2.560648 \/ 1.504120 (1.056528) | 2.288796 \/ 1.541195 (0.747601) | 2.402580 \/ 1.468490 (0.934090) | 0.801453 \/ 4.584777 (-3.783324) | 
5.036654 \/ 3.745712 (1.290942) | 8.319972 \/ 5.269862 (3.050110) | 4.665620 \/ 4.565676 (0.099944) | 0.107292 \/ 0.424275 (-0.316983) | 0.009206 \/ 0.007607 (0.001599) | 0.766505 \/ 0.226044 (0.540461) | 7.333784 \/ 2.268929 (5.064856) | 3.601875 \/ 55.444624 (-51.842749) | 2.886388 \/ 6.876477 (-3.990089) | 3.231797 \/ 2.142072 (1.089725) | 1.179509 \/ 4.805227 (-3.625718) | 0.224656 \/ 6.500664 (-6.276008) | 0.084749 \/ 0.075469 (0.009280) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.772345 \/ 1.841788 (-0.069443) | 24.138788 \/ 8.074308 (16.064480) | 20.712416 \/ 10.191392 (10.521024) | 0.254655 \/ 0.680424 (-0.425769) | 0.028858 \/ 0.534201 (-0.505343) | 0.499314 \/ 0.579283 (-0.079969) | 0.605797 \/ 0.434364 (0.171433) | 0.567628 \/ 0.540337 (0.027290) | 0.752288 \/ 1.386936 (-0.634648) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.010134 \/ 0.011353 (-0.001219) | 0.004630 \/ 0.011008 (-0.006378) | 0.082282 \/ 0.038508 (0.043774) | 0.081722 \/ 0.023109 (0.058613) | 0.465018 \/ 0.275898 (0.189120) | 0.516392 \/ 0.323480 (0.192912) | 0.006618 \/ 0.007986 (-0.001368) | 0.004310 \/ 0.004328 (-0.000018) | 0.078990 \/ 0.004250 (0.074739) | 0.077729 \/ 0.037052 (0.040677) | 0.464892 \/ 0.258489 (0.206403) | 0.510551 \/ 0.293841 (0.216710) | 0.050750 \/ 0.128546 (-0.077796) | 0.014402 \/ 0.075646 (-0.061244) | 0.092587 \/ 0.419271 (-0.326685) | 0.074769 \/ 0.043533 (0.031237) | 0.468591 \/ 0.255139 (0.213452) | 0.508138 \/ 0.283200 (0.224938) | 0.047774 \/ 0.141683 (-0.093909) | 1.798354 \/ 1.452155 (0.346199) | 1.851431 \/ 1.492716 (0.358714) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.282528 \/ 0.018006 (0.264522) | 0.588286 \/ 0.000490 (0.587797) | 0.004892 \/ 0.000200 (0.004692) | 0.000136 \/ 0.000054 (0.000082) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.037048 \/ 0.037411 (-0.000364) | 0.101513 \/ 0.014526 (0.086987) | 0.133238 \/ 0.176557 (-0.043319) | 0.234799 \/ 0.737135 (-0.502336) | 0.120636 \/ 0.296338 (-0.175703) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.615377 \/ 0.215209 (0.400168) | 6.225717 \/ 2.077655 (4.148062) | 2.974137 \/ 1.504120 (1.470018) | 2.642168 \/ 1.541195 (1.100973) | 2.706051 \/ 1.468490 (1.237561) | 0.837171 \/ 4.584777 (-3.747606) | 
5.143368 \/ 3.745712 (1.397656) | 4.560241 \/ 5.269862 (-0.709621) | 2.838375 \/ 4.565676 (-1.727301) | 0.092505 \/ 0.424275 (-0.331770) | 0.008962 \/ 0.007607 (0.001355) | 0.726361 \/ 0.226044 (0.500317) | 7.323998 \/ 2.268929 (5.055070) | 3.650531 \/ 55.444624 (-51.794094) | 2.960886 \/ 6.876477 (-3.915591) | 3.003889 \/ 2.142072 (0.861816) | 0.979264 \/ 4.805227 (-3.825963) | 0.204531 \/ 6.500664 (-6.296133) | 0.078285 \/ 0.075469 (0.002816) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.774225 \/ 1.841788 (-0.067563) | 26.399536 \/ 8.074308 (18.325228) | 22.312890 \/ 10.191392 (12.121498) | 0.244651 \/ 0.680424 (-0.435773) | 0.026950 \/ 0.534201 (-0.507251) | 0.493037 \/ 0.579283 (-0.086246) | 0.620399 \/ 0.434364 (0.186036) | 0.748985 \/ 0.540337 (0.208648) | 0.799766 \/ 1.386936 (-0.587170) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#a49ac2864177ec4fb34c43b59a6e49de1f21f973 \"CML watermark\")\n"],"created_at":1689176772000,"updated_at":1689179366000,"closed_at":1689178684000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/6023","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6023","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6023.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6023.patch","merged_at":1689178684000},"body":"Fix #6022 ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6023\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6023\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6022","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6022\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6022\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6022\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/6022","id":1800092589,"node_id":"I_kwDODunzps5rSzut","number":6022,"title":"Batch map raises TypeError: '>=' not supported between instances of 'NoneType' and 
'int'","user":{"login":"codingl2k1","id":138426806,"node_id":"U_kgDOCEA5tg","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/138426806?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/codingl2k1","html_url":"https:\/\/github.com\/codingl2k1","followers_url":"https:\/\/api.github.com\/users\/codingl2k1\/followers","following_url":"https:\/\/api.github.com\/users\/codingl2k1\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/codingl2k1\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/codingl2k1\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/codingl2k1\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/codingl2k1\/orgs","repos_url":"https:\/\/api.github.com\/users\/codingl2k1\/repos","events_url":"https:\/\/api.github.com\/users\/codingl2k1\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/codingl2k1\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting! I've opened a PR with a fix."],"created_at":1689132017000,"updated_at":1689178686000,"closed_at":1689178685000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nWhen mapping some datasets with `batched=True`, datasets may raise an exeception:\r\n\r\n```python\r\nTraceback (most recent call last):\r\n File \"\/Users\/codingl2k1\/Work\/datasets\/venv\/lib\/python3.11\/site-packages\/multiprocess\/pool.py\", line 125, in worker\r\n result = (True, func(*args, **kwds))\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"\/Users\/codingl2k1\/Work\/datasets\/src\/datasets\/utils\/py_utils.py\", line 1328, in _write_generator_to_queue\r\n for i, result in enumerate(func(**kwargs)):\r\n File \"\/Users\/codingl2k1\/Work\/datasets\/src\/datasets\/arrow_dataset.py\", line 3483, in _map_single\r\n writer.write_batch(batch)\r\n File \"\/Users\/codingl2k1\/Work\/datasets\/src\/datasets\/arrow_writer.py\", line 549, in write_batch\r\n array = cast_array_to_feature(col_values, col_type) if col_type is not None else col_values\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"\/Users\/codingl2k1\/Work\/datasets\/src\/datasets\/table.py\", line 1831, in wrapper\r\n return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"\/Users\/codingl2k1\/Work\/datasets\/src\/datasets\/table.py\", line 1831, in \r\n return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"\/Users\/codingl2k1\/Work\/datasets\/src\/datasets\/table.py\", line 2063, in cast_array_to_feature\r\n return feature.cast_storage(array)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"\/Users\/codingl2k1\/Work\/datasets\/src\/datasets\/features\/features.py\", line 1098, in cast_storage\r\n if min_max[\"max\"] >= self.num_classes:\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nTypeError: '>=' not supported between instances of 'NoneType' and 'int'\r\nThe above exception was the direct cause of the following exception:\r\nTraceback (most recent call last):\r\n File \"\/Users\/codingl2k1\/Work\/datasets\/t1.py\", line 33, in \r\n ds = ds.map(transforms, num_proc=14, batched=True, batch_size=5)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File 
\"\/Users\/codingl2k1\/Work\/datasets\/src\/datasets\/dataset_dict.py\", line 850, in map\r\n {\r\n File \"\/Users\/codingl2k1\/Work\/datasets\/src\/datasets\/dataset_dict.py\", line 851, in \r\n k: dataset.map(\r\n ^^^^^^^^^^^^\r\n File \"\/Users\/codingl2k1\/Work\/datasets\/src\/datasets\/arrow_dataset.py\", line 577, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"\/Users\/codingl2k1\/Work\/datasets\/src\/datasets\/arrow_dataset.py\", line 542, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"\/Users\/codingl2k1\/Work\/datasets\/src\/datasets\/arrow_dataset.py\", line 3179, in map\r\n for rank, done, content in iflatmap_unordered(\r\n File \"\/Users\/codingl2k1\/Work\/datasets\/src\/datasets\/utils\/py_utils.py\", line 1368, in iflatmap_unordered\r\n [async_result.get(timeout=0.05) for async_result in async_results]\r\n File \"\/Users\/codingl2k1\/Work\/datasets\/src\/datasets\/utils\/py_utils.py\", line 1368, in \r\n [async_result.get(timeout=0.05) for async_result in async_results]\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"\/Users\/codingl2k1\/Work\/datasets\/venv\/lib\/python3.11\/site-packages\/multiprocess\/pool.py\", line 774, in get\r\n raise self._value\r\nTypeError: '>=' not supported between instances of 'NoneType' and 'int'\r\n```\n\n### Steps to reproduce the bug\n\n1. Checkout the latest main of datasets.\r\n2. Run the code:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndef transforms(examples):\r\n # examples[\"pixel_values\"] = [image.convert(\"RGB\").resize((100, 100)) for image in examples[\"image\"]]\r\n return examples\r\n\r\nds = load_dataset(\"scene_parse_150\")\r\nds = ds.map(transforms, num_proc=14, batched=True, batch_size=5)\r\nprint(ds)\r\n```\n\n### Expected behavior\n\nmap without exception.\n\n### Environment info\n\nDatasets: https:\/\/github.com\/huggingface\/datasets\/commit\/b8067c0262073891180869f700ebef5ac3dc5cce\r\nPython: 3.11.4\r\nSystem: Macos","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6022\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6022\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6021","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6021\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6021\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6021\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6021","id":1799785904,"node_id":"PR_kwDODunzps5VP11Q","number":6021,"title":"[docs] Update return statement of index 
search","user":{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007697 \/ 0.011353 (-0.003656) | 0.004233 \/ 0.011008 (-0.006776) | 0.087890 \/ 0.038508 (0.049382) | 0.065305 \/ 0.023109 (0.042196) | 0.366919 \/ 0.275898 (0.091020) | 0.399656 \/ 0.323480 (0.076176) | 0.006753 \/ 0.007986 (-0.001232) | 0.003428 \/ 0.004328 (-0.000900) | 0.070180 \/ 0.004250 (0.065930) | 0.054164 \/ 0.037052 (0.017112) | 0.377130 \/ 0.258489 (0.118641) | 0.403456 \/ 0.293841 (0.109615) | 0.042639 \/ 0.128546 (-0.085907) | 0.012396 \/ 0.075646 (-0.063250) | 0.314235 \/ 0.419271 (-0.105036) | 0.061976 \/ 0.043533 (0.018443) | 0.376959 \/ 0.255139 (0.121820) | 0.433313 \/ 0.283200 (0.150113) | 0.031253 \/ 0.141683 (-0.110430) | 1.555749 \/ 1.452155 (0.103594) | 1.643905 \/ 1.492716 (0.151189) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.208630 \/ 0.018006 (0.190624) | 0.519532 \/ 0.000490 (0.519042) | 0.003719 \/ 0.000200 (0.003519) | 0.000099 \/ 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.027403 \/ 0.037411 (-0.010008) | 0.080990 \/ 0.014526 (0.066464) | 0.090424 \/ 0.176557 (-0.086133) | 0.153922 \/ 0.737135 (-0.583213) | 0.098156 \/ 0.296338 (-0.198183) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.519453 \/ 0.215209 (0.304244) | 5.100089 \/ 2.077655 (3.022434) | 2.212165 \/ 1.504120 (0.708045) | 1.894405 \/ 1.541195 (0.353210) | 1.922914 \/ 1.468490 (0.454424) | 0.762443 \/ 4.584777 (-3.822334) | 
4.669214 \/ 3.745712 (0.923502) | 5.016066 \/ 5.269862 (-0.253796) | 3.128821 \/ 4.565676 (-1.436856) | 0.091541 \/ 0.424275 (-0.332734) | 0.007582 \/ 0.007607 (-0.000026) | 0.652753 \/ 0.226044 (0.426709) | 6.601375 \/ 2.268929 (4.332446) | 3.076948 \/ 55.444624 (-52.367677) | 2.250544 \/ 6.876477 (-4.625933) | 2.404059 \/ 2.142072 (0.261987) | 0.994917 \/ 4.805227 (-3.810311) | 0.200318 \/ 6.500664 (-6.300346) | 0.069354 \/ 0.075469 (-0.006115) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.482559 \/ 1.841788 (-0.359229) | 20.722092 \/ 8.074308 (12.647784) | 17.703217 \/ 10.191392 (7.511825) | 0.215370 \/ 0.680424 (-0.465053) | 0.028208 \/ 0.534201 (-0.505993) | 0.425992 \/ 0.579283 (-0.153291) | 0.492785 \/ 0.434364 (0.058421) | 0.474154 \/ 0.540337 (-0.066183) | 0.644599 \/ 1.386936 (-0.742337) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008372 \/ 0.011353 (-0.002981) | 0.004543 \/ 0.011008 (-0.006465) | 0.070564 \/ 0.038508 (0.032056) | 0.066855 \/ 0.023109 (0.043746) | 0.386724 \/ 0.275898 (0.110826) | 0.432184 \/ 0.323480 (0.108704) | 0.005250 \/ 0.007986 (-0.002736) | 0.003630 \/ 0.004328 (-0.000698) | 0.069310 \/ 0.004250 (0.065060) | 0.055759 \/ 0.037052 (0.018707) | 0.375789 \/ 0.258489 (0.117299) | 0.417335 \/ 0.293841 (0.123494) | 0.043424 \/ 0.128546 (-0.085122) | 0.013106 \/ 0.075646 (-0.062541) | 0.087836 \/ 0.419271 (-0.331436) | 0.057770 \/ 0.043533 (0.014237) | 0.396694 \/ 0.255139 (0.141555) | 0.439350 \/ 0.283200 (0.156150) | 0.031660 \/ 0.141683 (-0.110023) | 1.571339 \/ 1.452155 (0.119185) | 1.667169 \/ 1.492716 (0.174452) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.180534 \/ 0.018006 (0.162528) | 0.540027 \/ 0.000490 (0.539537) | 0.003573 \/ 0.000200 (0.003373) | 0.000141 \/ 0.000054 (0.000086) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.031380 \/ 0.037411 (-0.006032) | 0.083762 \/ 0.014526 (0.069236) | 0.098166 \/ 0.176557 (-0.078390) | 0.160761 \/ 0.737135 (-0.576374) | 0.097683 \/ 0.296338 (-0.198656) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.568074 \/ 0.215209 (0.352865) | 5.660544 \/ 2.077655 (3.582889) | 2.416698 \/ 1.504120 (0.912578) | 2.177096 \/ 1.541195 (0.635901) | 2.206178 \/ 1.468490 (0.737688) | 0.844864 \/ 4.584777 (-3.739912) | 
4.793636 \/ 3.745712 (1.047923) | 7.062387 \/ 5.269862 (1.792525) | 4.201228 \/ 4.565676 (-0.364449) | 0.091997 \/ 0.424275 (-0.332279) | 0.007881 \/ 0.007607 (0.000274) | 0.679466 \/ 0.226044 (0.453422) | 6.580268 \/ 2.268929 (4.311340) | 3.229907 \/ 55.444624 (-52.214717) | 2.524877 \/ 6.876477 (-4.351600) | 2.463796 \/ 2.142072 (0.321723) | 0.975627 \/ 4.805227 (-3.829600) | 0.186670 \/ 6.500664 (-6.313994) | 0.065307 \/ 0.075469 (-0.010163) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.501447 \/ 1.841788 (-0.340340) | 21.231037 \/ 8.074308 (13.156729) | 17.591671 \/ 10.191392 (7.400279) | 0.212745 \/ 0.680424 (-0.467679) | 0.026100 \/ 0.534201 (-0.508101) | 0.428391 \/ 0.579283 (-0.150892) | 0.535268 \/ 0.434364 (0.100904) | 0.506733 \/ 0.540337 (-0.033604) | 0.660832 \/ 1.386936 (-0.726104) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#962537d7ee9191438ef47a4185d0ba626b2ee949 \"CML watermark\")\n"],"created_at":1689111212000,"updated_at":1689181982000,"closed_at":1689181380000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/6021","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6021","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6021.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6021.patch","merged_at":1689181380000},"body":"Clarifies in the return statement of the docstring that the retrieval score is `IndexFlatL2` by default (see [PR](https:\/\/github.com\/huggingface\/transformers\/issues\/24739) and internal Slack [convo](https:\/\/huggingface.slack.com\/archives\/C01229B19EX\/p1689105179711689)), and fixes the formatting because multiple return values are not supported.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6021\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6021\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6020","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6020\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6020\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6020\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/6020","id":1799720536,"node_id":"I_kwDODunzps5rRY5Y","number":6020,"title":"Inconsistent \"The features can't be aligned\" error when combining map, multiprocessing, and variable length 
outputs","user":{"login":"kheyer","id":38166299,"node_id":"MDQ6VXNlcjM4MTY2Mjk5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38166299?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/kheyer","html_url":"https:\/\/github.com\/kheyer","followers_url":"https:\/\/api.github.com\/users\/kheyer\/followers","following_url":"https:\/\/api.github.com\/users\/kheyer\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/kheyer\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/kheyer\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/kheyer\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/kheyer\/orgs","repos_url":"https:\/\/api.github.com\/users\/kheyer\/repos","events_url":"https:\/\/api.github.com\/users\/kheyer\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/kheyer\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This scenario currently requires explicitly passing the target features (to avoid the error): \r\n```python\r\nimport datasets\r\n\r\n...\r\n\r\nfeatures = dataset.features\r\nfeatures[\"output\"] = = [{\"test\": datasets.Value(\"int64\")}]\r\ntest2 = dataset.map(lambda row, idx: test_func(row, idx), with_indices=True, num_proc=32, features=features)\r\n```"],"created_at":1689108038000,"updated_at":1689177504000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nI'm using a dataset with map and multiprocessing to run a function that returned a variable length list of outputs. This output list may be empty. Normally this is handled fine, but there is an edge case that crops up when using multiprocessing. In some cases, an empty list result ends up in a dataset shard consisting of a single item. This results in a `The features can't be aligned` error that is difficult to debug because it depends on the number of processes\/shards used.\r\n\r\nI've reproduced a minimal example below. 
My current workaround is to fill empty results with a dummy value that I filter after, but this was a weird error that took a while to track down.\n\n### Steps to reproduce the bug\n\n```python\r\nimport datasets\r\n\r\ndataset = datasets.Dataset.from_list([{'idx':i} for i in range(60)])\r\n\r\ndef test_func(row, idx):\r\n if idx==58:\r\n return {'output': []}\r\n else:\r\n return {'output' : [{'test':1}, {'test':2}]}\r\n\r\n# this works fine\r\ntest1 = dataset.map(lambda row, idx: test_func(row, idx), with_indices=True, num_proc=4)\r\n\r\n# this fails\r\ntest2 = dataset.map(lambda row, idx: test_func(row, idx), with_indices=True, num_proc=32)\r\n>ValueError: The features can't be aligned because the key output of features {'idx': Value(dtype='int64', id=None), 'output': Sequence(feature=Value(dtype='null', id=None), length=-1, id=None)} has unexpected type - Sequence(feature=Value(dtype='null', id=None), length=-1, id=None) (expected either [{'test': Value(dtype='int64', id=None)}] or Value(\"null\").\r\n```\r\n\r\nThe error occurs during the check\r\n\r\n```python\r\n_check_if_features_can_be_aligned([dset.features for dset in dsets])\r\n```\r\n\r\nWhen the multiprocessing splitting lines up just right with the empty return value, one of the `dset` in `dsets` will have a single item with an empty list value, causing the error.\n\n### Expected behavior\n\nExpected behavior is the result would be the same regardless of the `num_proc` value used.\n\n### Environment info\n\nDatasets version 2.11.0\r\nPython 3.9.16","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6020\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6020\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6019","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6019\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6019\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6019\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6019","id":1799532822,"node_id":"PR_kwDODunzps5VPAlD","number":6019,"title":"Improve 
logging","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007782 \/ 0.011353 (-0.003571) | 0.004451 \/ 0.011008 (-0.006557) | 0.099928 \/ 0.038508 (0.061420) | 0.081534 \/ 0.023109 (0.058425) | 0.379382 \/ 0.275898 (0.103484) | 0.410652 \/ 0.323480 (0.087172) | 0.005967 \/ 0.007986 (-0.002019) | 0.003702 \/ 0.004328 (-0.000627) | 0.076359 \/ 0.004250 (0.072109) | 0.066721 \/ 0.037052 (0.029669) | 0.383595 \/ 0.258489 (0.125106) | 0.423854 \/ 0.293841 (0.130013) | 0.032796 \/ 0.128546 (-0.095750) | 0.009728 \/ 0.075646 (-0.065918) | 0.344347 \/ 0.419271 (-0.074925) | 0.056320 \/ 0.043533 (0.012788) | 0.379974 \/ 0.255139 (0.124835) | 0.401294 \/ 0.283200 (0.118094) | 0.024110 \/ 0.141683 (-0.117572) | 1.804194 \/ 1.452155 (0.352039) | 1.860240 \/ 1.492716 (0.367523) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.233803 \/ 0.018006 (0.215797) | 0.506893 \/ 0.000490 (0.506404) | 0.003894 \/ 0.000200 (0.003694) | 0.000090 \/ 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.033328 \/ 0.037411 (-0.004083) | 0.098661 \/ 0.014526 (0.084136) | 0.114971 \/ 0.176557 (-0.061586) | 0.186815 \/ 0.737135 (-0.550321) | 0.115490 \/ 0.296338 (-0.180848) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.422590 \/ 0.215209 (0.207381) | 4.277189 \/ 2.077655 (2.199535) | 2.095565 \/ 1.504120 (0.591445) | 2.040825 \/ 1.541195 (0.499630) | 2.162562 \/ 1.468490 (0.694072) | 0.578602 \/ 4.584777 (-4.006175) | 
4.203474 \/ 3.745712 (0.457762) | 6.674595 \/ 5.269862 (1.404734) | 3.913251 \/ 4.565676 (-0.652426) | 0.067777 \/ 0.424275 (-0.356498) | 0.008716 \/ 0.007607 (0.001109) | 0.548704 \/ 0.226044 (0.322660) | 5.162120 \/ 2.268929 (2.893192) | 2.600250 \/ 55.444624 (-52.844374) | 2.232730 \/ 6.876477 (-4.643747) | 2.485617 \/ 2.142072 (0.343544) | 0.650872 \/ 4.805227 (-4.154355) | 0.148022 \/ 6.500664 (-6.352642) | 0.064795 \/ 0.075469 (-0.010674) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.399439 \/ 1.841788 (-0.442349) | 22.438959 \/ 8.074308 (14.364651) | 16.447831 \/ 10.191392 (6.256439) | 0.202003 \/ 0.680424 (-0.478421) | 0.026200 \/ 0.534201 (-0.508001) | 0.472966 \/ 0.579283 (-0.106317) | 0.491621 \/ 0.434364 (0.057257) | 0.551580 \/ 0.540337 (0.011242) | 0.751420 \/ 1.386936 (-0.635516) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007241 \/ 0.011353 (-0.004112) | 0.004434 \/ 0.011008 (-0.006574) | 0.075872 \/ 0.038508 (0.037364) | 0.080094 \/ 0.023109 (0.056985) | 0.459244 \/ 0.275898 (0.183346) | 0.492482 \/ 0.323480 (0.169002) | 0.005791 \/ 0.007986 (-0.002194) | 0.003657 \/ 0.004328 (-0.000671) | 0.075214 \/ 0.004250 (0.070964) | 0.064208 \/ 0.037052 (0.027156) | 0.464195 \/ 0.258489 (0.205706) | 0.497809 \/ 0.293841 (0.203968) | 0.036301 \/ 0.128546 (-0.092245) | 0.009855 \/ 0.075646 (-0.065791) | 0.080826 \/ 0.419271 (-0.338445) | 0.056700 \/ 0.043533 (0.013167) | 0.452850 \/ 0.255139 (0.197711) | 0.490738 \/ 0.283200 (0.207538) | 0.024145 \/ 0.141683 (-0.117538) | 1.689911 \/ 1.452155 (0.237757) | 1.789803 \/ 1.492716 (0.297087) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.247741 \/ 0.018006 (0.229735) | 0.486769 \/ 0.000490 (0.486279) | 0.000418 \/ 0.000200 (0.000218) | 0.000060 \/ 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.036317 \/ 0.037411 (-0.001094) | 0.104943 \/ 0.014526 (0.090417) | 0.120972 \/ 0.176557 (-0.055585) | 0.188461 \/ 0.737135 (-0.548674) | 0.120926 \/ 0.296338 (-0.175412) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.465788 \/ 0.215209 (0.250579) | 4.662369 \/ 2.077655 (2.584714) | 2.442241 \/ 1.504120 (0.938121) | 2.266328 \/ 1.541195 (0.725133) | 2.438998 \/ 1.468490 (0.970508) | 0.531384 \/ 4.584777 (-4.053393) | 
4.125286 \/ 3.745712 (0.379574) | 3.920912 \/ 5.269862 (-1.348950) | 2.292149 \/ 4.565676 (-2.273528) | 0.070146 \/ 0.424275 (-0.354129) | 0.008887 \/ 0.007607 (0.001280) | 0.598181 \/ 0.226044 (0.372137) | 5.726454 \/ 2.268929 (3.457526) | 3.081836 \/ 55.444624 (-52.362788) | 2.683508 \/ 6.876477 (-4.192969) | 2.587350 \/ 2.142072 (0.445278) | 0.604736 \/ 4.805227 (-4.200491) | 0.141303 \/ 6.500664 (-6.359362) | 0.065020 \/ 0.075469 (-0.010449) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.481850 \/ 1.841788 (-0.359938) | 22.259592 \/ 8.074308 (14.185284) | 16.304290 \/ 10.191392 (6.112898) | 0.173514 \/ 0.680424 (-0.506909) | 0.021590 \/ 0.534201 (-0.512611) | 0.471753 \/ 0.579283 (-0.107531) | 0.472132 \/ 0.434364 (0.037768) | 0.563344 \/ 0.540337 (0.023007) | 0.738509 \/ 1.386936 (-0.648427) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#1cb7ae56dbd814945a4982c63bf0e50859a7b93a \"CML watermark\")\n","
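For orientation: every cell in these CML reports follows a `new / old (diff)` convention, where `diff = new - old`. A minimal illustrative sketch of that formatting — `format_cell` is a hypothetical helper written for this note, not part of the benchmark suite:

```python
def format_cell(new: float, old: float) -> str:
    """Render a benchmark cell as 'new / old (diff)', with diff = new - old."""
    return f"{new:.6f} / {old:.6f} ({new - old:.6f})"

# Example: the first benchmark_array_xd.json cell from the report above
print(format_cell(0.007782, 0.011353))  # -> 0.007782 / 0.011353 (-0.003571)
```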
\n<details><summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details><summary>Show updated benchmarks!<\/summary>\n\n[auto-generated CML benchmark tables omitted, same five suites as above]\n\n<\/details>\nPyArrow==latest\n\n<details><summary>Show updated benchmarks!<\/summary>\n\n[auto-generated CML benchmark tables omitted, same five suites as above]\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#01604752fe89d290479fa406b1a24ac1f346826e \"CML watermark\")\n","
\n<details><summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details><summary>Show updated benchmarks!<\/summary>\n\n[auto-generated CML benchmark tables omitted, same five suites as above]\n\n<\/details>\nPyArrow==latest\n\n<details><summary>Show updated benchmarks!<\/summary>\n\n[auto-generated CML benchmark tables omitted, same five suites as above]\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#b2fc21eda345643fb57d1d1167ebed9043310911 \"CML watermark\")\n","
\n<details><summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details><summary>Show updated benchmarks!<\/summary>\n\n[auto-generated CML benchmark tables omitted, same five suites as above]\n\n<\/details>\nPyArrow==latest\n\n<details><summary>Show updated benchmarks!<\/summary>\n\n[auto-generated CML benchmark tables omitted, same five suites as above]\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#b85b1154aef2a9ab4d558f60d91623f2cc1583c4 \"CML watermark\")\n","
\n<details><summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details><summary>Show updated benchmarks!<\/summary>\n\n[auto-generated CML benchmark tables omitted, same five suites as above]\n\n<\/details>\nPyArrow==latest\n\n<details><summary>Show updated benchmarks!<\/summary>\n\n[auto-generated CML benchmark tables omitted, same five suites as above]\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#64b811c13a7982015d7e078e3d693ce5359a05a2 \"CML watermark\")\n","@lhoestq This bar comes from: https:\/\/github.com\/huggingface\/datasets\/blob\/b8067c0262073891180869f700ebef5ac3dc5cce\/src\/datasets\/builder.py#L1156-L1166\r\n\r\nDo you prefer not showing it at all, or, e.g., giving it `desc=\"Generating splits\"`?","No strong opinion. Since there is already a \"Generating\" progress bar, maybe it can be \"Preparing splits\" (a reference to `download_and_prepare`)","
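Since the exchange above is about naming a `tqdm` progress bar, here is a minimal sketch of what the proposal amounts to — the split list and loop body are placeholders for illustration, not the actual `builder.py` code at the linked lines:

```python
from tqdm import tqdm

# Hypothetical stand-in for the unnamed bar at builder.py#L1156-L1166.
# Giving it a desc such as "Preparing splits" distinguishes it from the
# existing per-split "Generating ..." bar.
splits = ["train", "validation", "test"]  # assumed split names
for split in tqdm(splits, desc="Preparing splits"):
    pass  # download_and_prepare would materialize the split's data here
```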
\n<details><summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details><summary>Show updated benchmarks!<\/summary>\n\n[auto-generated CML benchmark tables omitted, same five suites as above]\n\n<\/details>\nPyArrow==latest\n\n<details><summary>Show updated benchmarks!<\/summary>\n\n[auto-generated CML benchmark tables omitted, same five suites as above]\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#42fdfbd567674d075c3a9148ec3c95221eb62cfe \"CML watermark\")\n","
\n<details><summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details><summary>Show updated benchmarks!<\/summary>\n\n[auto-generated CML benchmark tables omitted, same five suites as above]\n\n<\/details>\nPyArrow==latest\n\n<details><summary>Show updated benchmarks!<\/summary>\n\n[auto-generated CML benchmark tables omitted; comment truncated in the source]
3.647000 \/ 3.745712 (-0.098712) | 3.574843 \/ 5.269862 (-1.695019) | 2.092581 \/ 4.565676 (-2.473095) | 0.057299 \/ 0.424275 (-0.366976) | 0.007480 \/ 0.007607 (-0.000128) | 0.507838 \/ 0.226044 (0.281794) | 5.076594 \/ 2.268929 (2.807666) | 2.718858 \/ 55.444624 (-52.725766) | 2.362793 \/ 6.876477 (-4.513684) | 2.451962 \/ 2.142072 (0.309890) | 0.581355 \/ 4.805227 (-4.223872) | 0.133723 \/ 6.500664 (-6.366941) | 0.061896 \/ 0.075469 (-0.013573) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.325814 \/ 1.841788 (-0.515974) | 20.614502 \/ 8.074308 (12.540194) | 14.769422 \/ 10.191392 (4.578029) | 0.193797 \/ 0.680424 (-0.486627) | 0.018379 \/ 0.534201 (-0.515822) | 0.394153 \/ 0.579283 (-0.185130) | 0.409585 \/ 0.434364 (-0.024779) | 0.479107 \/ 0.540337 (-0.061231) | 0.668397 \/ 1.386936 (-0.718539) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#b2d892237169bad5512c91cae453d257ebefc201 \"CML watermark\")\n","In the end, I decided to remove the progress bar to avoid having it displayed when loading a cached dataset.","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006673 \/ 0.011353 (-0.004680) | 0.004162 \/ 0.011008 (-0.006846) | 0.084017 \/ 0.038508 (0.045509) | 0.079536 \/ 0.023109 (0.056426) | 0.313594 \/ 0.275898 (0.037695) | 0.349200 \/ 0.323480 (0.025720) | 0.005544 \/ 0.007986 (-0.002441) | 0.003472 \/ 0.004328 (-0.000857) | 0.064742 \/ 0.004250 (0.060491) | 0.056857 \/ 0.037052 (0.019805) | 0.318635 \/ 0.258489 (0.060146) | 0.354378 \/ 0.293841 (0.060537) | 0.030856 \/ 0.128546 (-0.097690) | 0.008759 \/ 0.075646 (-0.066887) | 0.287760 \/ 0.419271 (-0.131511) | 0.052307 \/ 0.043533 (0.008775) | 0.316396 \/ 0.255139 (0.061257) | 0.351408 \/ 0.283200 (0.068208) | 0.024914 \/ 0.141683 (-0.116769) | 1.484592 \/ 1.452155 (0.032437) | 1.560662 \/ 1.492716 (0.067945) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.280938 \/ 0.018006 (0.262932) | 0.580236 \/ 0.000490 (0.579747) | 0.003369 \/ 0.000200 (0.003169) | 0.000090 \/ 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.028736 \/ 0.037411 (-0.008675) | 0.082916 \/ 0.014526 (0.068390) | 0.097761 \/ 0.176557 (-0.078796) | 0.153515 \/ 0.737135 (-0.583620) | 0.099282 \/ 0.296338 (-0.197057) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.401244 \/ 0.215209 (0.186035) | 4.019866 \/ 2.077655 (1.942211) | 2.029642 \/ 1.504120 (0.525522) | 1.849591 \/ 1.541195 (0.308396) | 1.946829 \/ 1.468490 (0.478339) | 0.479750 \/ 4.584777 (-4.105027) | 
3.482822 \/ 3.745712 (-0.262890) | 3.955859 \/ 5.269862 (-1.314003) | 2.370747 \/ 4.565676 (-2.194930) | 0.056905 \/ 0.424275 (-0.367370) | 0.007319 \/ 0.007607 (-0.000288) | 0.485310 \/ 0.226044 (0.259266) | 4.858228 \/ 2.268929 (2.589299) | 2.500476 \/ 55.444624 (-52.944148) | 2.171156 \/ 6.876477 (-4.705320) | 2.427266 \/ 2.142072 (0.285194) | 0.570199 \/ 4.805227 (-4.235029) | 0.130855 \/ 6.500664 (-6.369809) | 0.060269 \/ 0.075469 (-0.015200) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.258044 \/ 1.841788 (-0.583743) | 20.218657 \/ 8.074308 (12.144349) | 13.597970 \/ 10.191392 (3.406578) | 0.167656 \/ 0.680424 (-0.512768) | 0.018137 \/ 0.534201 (-0.516064) | 0.395309 \/ 0.579283 (-0.183975) | 0.406325 \/ 0.434364 (-0.028039) | 0.467457 \/ 0.540337 (-0.072880) | 0.613636 \/ 1.386936 (-0.773300) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006846 \/ 0.011353 (-0.004507) | 0.004207 \/ 0.011008 (-0.006802) | 0.064525 \/ 0.038508 (0.026017) | 0.081329 \/ 0.023109 (0.058220) | 0.399838 \/ 0.275898 (0.123940) | 0.431305 \/ 0.323480 (0.107825) | 0.005859 \/ 0.007986 (-0.002127) | 0.003568 \/ 0.004328 (-0.000760) | 0.065262 \/ 0.004250 (0.061011) | 0.064796 \/ 0.037052 (0.027744) | 0.406858 \/ 0.258489 (0.148369) | 0.440971 \/ 0.293841 (0.147130) | 0.031421 \/ 0.128546 (-0.097125) | 0.008777 \/ 0.075646 (-0.066870) | 0.071418 \/ 0.419271 (-0.347853) | 0.049263 \/ 0.043533 (0.005730) | 0.384279 \/ 0.255139 (0.129140) | 0.410745 \/ 0.283200 (0.127546) | 0.024467 \/ 0.141683 (-0.117216) | 1.522379 \/ 1.452155 (0.070224) | 1.581636 \/ 1.492716 (0.088920) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.276161 \/ 0.018006 (0.258155) | 0.548842 \/ 0.000490 (0.548352) | 0.004523 \/ 0.000200 (0.004324) | 0.000098 \/ 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030747 \/ 0.037411 (-0.006664) | 0.087493 \/ 0.014526 (0.072967) | 0.106563 \/ 0.176557 (-0.069993) | 0.162949 \/ 0.737135 (-0.574186) | 0.105303 \/ 0.296338 (-0.191036) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.425854 \/ 0.215209 (0.210645) | 4.244797 \/ 2.077655 (2.167142) | 2.269006 \/ 1.504120 (0.764886) | 2.097428 \/ 1.541195 (0.556234) | 2.181038 \/ 1.468490 (0.712548) | 0.477286 \/ 4.584777 (-4.107491) | 
3.591452 \/ 3.745712 (-0.154260) | 3.481281 \/ 5.269862 (-1.788580) | 2.066895 \/ 4.565676 (-2.498782) | 0.056576 \/ 0.424275 (-0.367699) | 0.007409 \/ 0.007607 (-0.000199) | 0.498411 \/ 0.226044 (0.272367) | 4.994873 \/ 2.268929 (2.725945) | 2.749148 \/ 55.444624 (-52.695476) | 2.378544 \/ 6.876477 (-4.497932) | 2.452859 \/ 2.142072 (0.310786) | 0.571340 \/ 4.805227 (-4.233887) | 0.132174 \/ 6.500664 (-6.368490) | 0.061507 \/ 0.075469 (-0.013962) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.370773 \/ 1.841788 (-0.471015) | 20.493342 \/ 8.074308 (12.419034) | 14.809886 \/ 10.191392 (4.618494) | 0.175730 \/ 0.680424 (-0.504693) | 0.018617 \/ 0.534201 (-0.515583) | 0.393808 \/ 0.579283 (-0.185476) | 0.416419 \/ 0.434364 (-0.017945) | 0.477183 \/ 0.540337 (-0.063155) | 0.668060 \/ 1.386936 (-0.718876) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#2de7a2a4af5d94b0f98a7a6db94e78984af40602 \"CML watermark\")\n","Nice one :)"],"created_at":1689100223000,"updated_at":1689190454000,"closed_at":1689182368000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/6019","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6019","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6019.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6019.patch","merged_at":1689182368000},"body":"Adds the StreamHandler (as `hfh` and `transformers` do) to the library's logger to log INFO messages and logs the messages about \"loading a cached result\" (and some other warnings) as INFO\r\n\r\n(Also removes the `leave=False` arg in the progress bars to be consistent with `hfh` and `transformers` - progress bars serve as an indicator that a result is not cached, so it makes more sense not to delete them)\r\n\r\nFix #2832, fix https:\/\/github.com\/huggingface\/datasets\/issues\/1948, fix https:\/\/github.com\/huggingface\/datasets\/issues\/5444","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6019\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6019\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6018","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6018\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6018\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6018\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6018","id":1799411999,"node_id":"PR_kwDODunzps5VOmKY","number":6018,"title":"test1","user":{"login":"ognjenovicj","id":139256323,"node_id":"U_kgDOCEziAw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/139256323?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ognjenovicj","html_url":"https:\/\/github.com\/ognjenovicj","followers_url":"https:\/\/api.github.com\/users\/ognjenovicj\/followers","following_url":"https:\/\/api.github.com\/users\/ognjenovicj\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ognjenovicj\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ognjenovicj\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ognjenovicj\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ognjenovicj\/orgs","repos_url":"https:\/\/api.github.com\/users\/ognjenovicj\/repos","events_url":"https:\/\/api.github.com\/users\/ognjenovicj\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ognjenovicj\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["We no longer host datasets in this repo. You should use the HF Hub instead."],"created_at":1689096349000,"updated_at":1689185815000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/6018","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6018","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6018.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6018.patch","merged_at":null},"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6018\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6018\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6017","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6017\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6017\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6017\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/6017","id":1799309132,"node_id":"I_kwDODunzps5rP0dM","number":6017,"title":"Switch to huggingface_hub's 
HfFileSystem","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":{"login":"lhoestq","id":42851186.0,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1689092680000,"updated_at":1689119113000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"draft":null,"pull_request":null,"body":"instead of the current datasets.filesystems.hffilesystem.HfFileSystem which can be slow in some cases\r\n\r\nrelated to 
https:\/\/github.com\/huggingface\/datasets\/issues\/5846 and https:\/\/github.com\/huggingface\/datasets\/pull\/5919","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6017\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6017\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6016","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6016\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6016\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6016\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6016","id":1798968033,"node_id":"PR_kwDODunzps5VNEvn","number":6016,"title":"Dataset string representation enhancement","user":{"login":"Ganryuu","id":63643948,"node_id":"MDQ6VXNlcjYzNjQzOTQ4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/63643948?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Ganryuu","html_url":"https:\/\/github.com\/Ganryuu","followers_url":"https:\/\/api.github.com\/users\/Ganryuu\/followers","following_url":"https:\/\/api.github.com\/users\/Ganryuu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Ganryuu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Ganryuu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Ganryuu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Ganryuu\/orgs","repos_url":"https:\/\/api.github.com\/users\/Ganryuu\/repos","events_url":"https:\/\/api.github.com\/users\/Ganryuu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Ganryuu\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_6016). 
All of your documentation changes will be reflected on that endpoint.","It we could have something similar to Polars, that would be great.\r\n\r\nThis is what Polars outputs: \r\n* `__repr__`\/`__str__` :\r\n```\r\nshape: (67_349, 3)\r\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\r\n\u2502 idx \u2506 sentence \u2506 label \u2502\r\n\u2502 --- \u2506 --- \u2506 --- \u2502\r\n\u2502 i32 \u2506 str \u2506 i64 \u2502\r\n\u255e\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2561\r\n\u2502 0 \u2506 hide new secretions from the par\u2026 \u2506 0 \u2502\r\n\u2502 1 \u2506 contains no wit , only labored g\u2026 \u2506 0 \u2502\r\n\u2502 2 \u2506 that loves its characters and co\u2026 \u2506 1 \u2502\r\n\u2502 3 \u2506 remains utterly satisfied to rem\u2026 \u2506 0 \u2502\r\n\u2502 \u2026 \u2506 \u2026 \u2506 \u2026 \u2502\r\n\u2502 67345 \u2506 anguish , anger and frustration \u2506 0 \u2502\r\n\u2502 67346 \u2506 at achieving the modest , crowd-\u2026 \u2506 1 \u2502\r\n\u2502 67347 \u2506 a patient viewer \u2506 1 \u2502\r\n\u2502 67348 \u2506 this new jangle of noise , mayhe\u2026 \u2506 0 \u2502\r\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\r\n```\r\n\r\n* `_repr_html_`:\r\n\"Screenshot\r\n\r\n"],"created_at":1689082705000,"updated_at":1689203761000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/6016","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6016","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6016.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6016.patch","merged_at":null},"body":"my attempt at #6010 \r\nnot sure if this is the right way to go about it, I will wait for your feedback ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6016\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6016\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} 
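To make the Polars-style preview discussed in PR 6016 concrete: a truncated `__repr__` of this kind only needs the shape, the column names, and a head/tail sample of rows. A toy sketch under stated assumptions — `TinyDataset`, its list-of-dicts storage, and the 4-row head/tail cutoff are all hypothetical illustrations, not the proposed `datasets` implementation:

```python
class TinyDataset:
    def __init__(self, rows):
        self.rows = rows  # assumed: non-empty list of dicts with identical keys

    def __repr__(self):
        header = " | ".join(self.rows[0].keys())
        # Show everything for small datasets, else 4 head rows + 4 tail rows.
        shown = self.rows if len(self.rows) <= 8 else self.rows[:4] + self.rows[-4:]
        body = [" | ".join(str(v)[:32] for v in row.values()) for row in shown]
        if len(self.rows) > 8:
            body.insert(4, "...")  # elision marker between head and tail
        return "\n".join([f"shape: ({len(self.rows)}, {len(self.rows[0])})", header] + body)

print(TinyDataset([{"idx": i, "label": i % 2} for i in range(100)]))
```

The key design point, as in Polars, is that the preview cost is bounded: only a constant number of rows is formatted no matter how large the dataset is.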
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6015","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6015\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6015\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6015\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6015","id":1798807893,"node_id":"PR_kwDODunzps5VMhgB","number":6015,"title":"Add metadata ui screenshot in docs","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007633 \/ 0.011353 (-0.003720) | 0.004666 \/ 0.011008 (-0.006343) | 0.097768 \/ 0.038508 (0.059260) | 0.085153 \/ 0.023109 (0.062044) | 0.400315 \/ 0.275898 (0.124417) | 0.452903 \/ 0.323480 (0.129423) | 0.006227 \/ 0.007986 (-0.001759) | 0.003814 \/ 0.004328 (-0.000515) | 0.074586 \/ 0.004250 (0.070336) | 0.064295 \/ 0.037052 (0.027242) | 0.408082 \/ 0.258489 (0.149593) | 0.446921 \/ 0.293841 (0.153080) | 0.034593 \/ 0.128546 (-0.093953) | 0.009191 \/ 0.075646 (-0.066456) | 0.337099 \/ 0.419271 (-0.082173) | 0.075320 \/ 0.043533 (0.031787) | 0.403488 \/ 0.255139 (0.148349) | 0.435309 \/ 0.283200 (0.152109) | 0.035675 \/ 0.141683 (-0.106008) | 1.732642 \/ 1.452155 (0.280487) | 1.770238 \/ 1.492716 (0.277522) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.235879 \/ 0.018006 (0.217873) | 0.500330 \/ 0.000490 (0.499841) | 0.005221 \/ 0.000200 (0.005021) | 0.000150 \/ 0.000054 (0.000096) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.032479 \/ 0.037411 (-0.004933) | 0.095873 \/ 0.014526 (0.081348) | 0.107118 \/ 0.176557 (-0.069438) | 0.173809 \/ 0.737135 (-0.563326) | 0.109832 \/ 0.296338 (-0.186507) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.444342 \/ 0.215209 (0.229133) | 4.459010 \/ 2.077655 (2.381355) | 2.209687 \/ 1.504120 (0.705567) | 2.007556 \/ 1.541195 (0.466362) | 2.113683 \/ 1.468490 (0.645193) | 0.544281 \/ 4.584777 (-4.040496) | 
4.037151 \/ 3.745712 (0.291439) | 4.852644 \/ 5.269862 (-0.417217) | 3.134126 \/ 4.565676 (-1.431550) | 0.066815 \/ 0.424275 (-0.357460) | 0.008836 \/ 0.007607 (0.001229) | 0.560904 \/ 0.226044 (0.334859) | 5.302760 \/ 2.268929 (3.033832) | 2.750182 \/ 55.444624 (-52.694442) | 2.322595 \/ 6.876477 (-4.553882) | 2.547486 \/ 2.142072 (0.405414) | 0.665766 \/ 4.805227 (-4.139461) | 0.151613 \/ 6.500664 (-6.349051) | 0.071155 \/ 0.075469 (-0.004314) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.473717 \/ 1.841788 (-0.368071) | 22.584179 \/ 8.074308 (14.509871) | 15.888001 \/ 10.191392 (5.696609) | 0.181073 \/ 0.680424 (-0.499351) | 0.021395 \/ 0.534201 (-0.512806) | 0.452693 \/ 0.579283 (-0.126590) | 0.447709 \/ 0.434364 (0.013345) | 0.529599 \/ 0.540337 (-0.010738) | 0.699241 \/ 1.386936 (-0.687695) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007917 \/ 0.011353 (-0.003436) | 0.004544 \/ 0.011008 (-0.006464) | 0.074566 \/ 0.038508 (0.036058) | 0.087530 \/ 0.023109 (0.064421) | 0.419753 \/ 0.275898 (0.143854) | 0.452352 \/ 0.323480 (0.128872) | 0.005882 \/ 0.007986 (-0.002104) | 0.003904 \/ 0.004328 (-0.000425) | 0.073539 \/ 0.004250 (0.069289) | 0.071320 \/ 0.037052 (0.034267) | 0.432899 \/ 0.258489 (0.174409) | 0.470365 \/ 0.293841 (0.176524) | 0.036198 \/ 0.128546 (-0.092348) | 0.009342 \/ 0.075646 (-0.066304) | 0.080970 \/ 0.419271 (-0.338301) | 0.058769 \/ 0.043533 (0.015236) | 0.413397 \/ 0.255139 (0.158258) | 0.448362 \/ 0.283200 (0.165162) | 0.034177 \/ 0.141683 (-0.107506) | 1.706217 \/ 1.452155 (0.254063) | 1.776743 \/ 1.492716 (0.284026) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.198779 \/ 0.018006 (0.180773) | 0.499862 \/ 0.000490 (0.499372) | 0.003891 \/ 0.000200 (0.003692) | 0.000108 \/ 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.034671 \/ 0.037411 (-0.002740) | 0.103165 \/ 0.014526 (0.088639) | 0.115813 \/ 0.176557 (-0.060744) | 0.177407 \/ 0.737135 (-0.559728) | 0.117733 \/ 0.296338 (-0.178606) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.476859 \/ 0.215209 (0.261650) | 4.823063 \/ 2.077655 (2.745409) | 2.524133 \/ 1.504120 (1.020013) | 2.374482 \/ 1.541195 (0.833288) | 2.518047 \/ 1.468490 (1.049557) | 0.559131 \/ 4.584777 (-4.025646) | 
4.126213 \/ 3.745712 (0.380501) | 6.488570 \/ 5.269862 (1.218708) | 3.816540 \/ 4.565676 (-0.749137) | 0.064742 \/ 0.424275 (-0.359533) | 0.008476 \/ 0.007607 (0.000869) | 0.576432 \/ 0.226044 (0.350387) | 5.835133 \/ 2.268929 (3.566205) | 3.237833 \/ 55.444624 (-52.206791) | 2.726596 \/ 6.876477 (-4.149880) | 2.799212 \/ 2.142072 (0.657139) | 0.661628 \/ 4.805227 (-4.143599) | 0.153997 \/ 6.500664 (-6.346667) | 0.070621 \/ 0.075469 (-0.004848) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.648505 \/ 1.841788 (-0.193282) | 22.454019 \/ 8.074308 (14.379711) | 16.077098 \/ 10.191392 (5.885706) | 0.217875 \/ 0.680424 (-0.462549) | 0.021285 \/ 0.534201 (-0.512916) | 0.459837 \/ 0.579283 (-0.119446) | 0.476211 \/ 0.434364 (0.041847) | 0.525903 \/ 0.540337 (-0.014435) | 0.717224 \/ 1.386936 (-0.669712) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#b767e9c3ef30f9da30d47cfcaccf9a7ac2500c43 \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008929 \/ 0.011353 (-0.002424) | 0.004188 \/ 0.011008 (-0.006820) | 0.097030 \/ 0.038508 (0.058522) | 0.071363 \/ 0.023109 (0.048254) | 0.333116 \/ 0.275898 (0.057218) | 0.371272 \/ 0.323480 (0.047792) | 0.006430 \/ 0.007986 (-0.001555) | 0.003689 \/ 0.004328 (-0.000639) | 0.068666 \/ 0.004250 (0.064416) | 0.057562 \/ 0.037052 (0.020510) | 0.347208 \/ 0.258489 (0.088719) | 0.390514 \/ 0.293841 (0.096673) | 0.050560 \/ 0.128546 (-0.077987) | 0.013372 \/ 0.075646 (-0.062275) | 0.311345 \/ 0.419271 (-0.107927) | 0.068990 \/ 0.043533 (0.025457) | 0.363026 \/ 0.255139 (0.107887) | 0.379793 \/ 0.283200 (0.096593) | 0.036891 \/ 0.141683 (-0.104792) | 1.583481 \/ 1.452155 (0.131327) | 1.688727 \/ 1.492716 (0.196011) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.209777 \/ 0.018006 (0.191771) | 0.507267 \/ 0.000490 (0.506777) | 0.003637 \/ 0.000200 (0.003438) | 0.000105 \/ 0.000054 (0.000051) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029309 \/ 0.037411 (-0.008102) | 0.088386 \/ 0.014526 (0.073861) | 0.104974 \/ 0.176557 (-0.071582) | 0.171999 \/ 0.737135 (-0.565137) | 0.110797 \/ 0.296338 (-0.185542) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.543465 \/ 0.215209 (0.328256) | 5.361491 \/ 2.077655 (3.283836) | 2.348712 \/ 1.504120 (0.844592) | 2.012527 \/ 1.541195 (0.471332) | 2.069776 \/ 1.468490 (0.601286) | 0.874262 \/ 4.584777 (-3.710515) | 
4.877317 \/ 3.745712 (1.131605) | 5.327459 \/ 5.269862 (0.057597) | 3.336823 \/ 4.565676 (-1.228854) | 0.100456 \/ 0.424275 (-0.323819) | 0.008503 \/ 0.007607 (0.000895) | 0.692009 \/ 0.226044 (0.465965) | 6.912731 \/ 2.268929 (4.643802) | 3.110548 \/ 55.444624 (-52.334076) | 2.443665 \/ 6.876477 (-4.432811) | 2.528713 \/ 2.142072 (0.386641) | 1.076358 \/ 4.805227 (-3.728869) | 0.220352 \/ 6.500664 (-6.280312) | 0.080293 \/ 0.075469 (0.004824) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.538444 \/ 1.841788 (-0.303344) | 21.121221 \/ 8.074308 (13.046913) | 19.810609 \/ 10.191392 (9.619216) | 0.225406 \/ 0.680424 (-0.455018) | 0.026652 \/ 0.534201 (-0.507549) | 0.430372 \/ 0.579283 (-0.148911) | 0.510722 \/ 0.434364 (0.076358) | 0.514347 \/ 0.540337 (-0.025991) | 0.686050 \/ 1.386936 (-0.700886) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007675 \/ 0.011353 (-0.003678) | 0.004542 \/ 0.011008 (-0.006466) | 0.069655 \/ 0.038508 (0.031147) | 0.069338 \/ 0.023109 (0.046229) | 0.436505 \/ 0.275898 (0.160607) | 0.481806 \/ 0.323480 (0.158326) | 0.005315 \/ 0.007986 (-0.002670) | 0.004455 \/ 0.004328 (0.000127) | 0.072674 \/ 0.004250 (0.068424) | 0.058088 \/ 0.037052 (0.021035) | 0.445825 \/ 0.258489 (0.187336) | 0.501706 \/ 0.293841 (0.207865) | 0.047123 \/ 0.128546 (-0.081424) | 0.012943 \/ 0.075646 (-0.062703) | 0.093491 \/ 0.419271 (-0.325780) | 0.060169 \/ 0.043533 (0.016637) | 0.436530 \/ 0.255139 (0.181391) | 0.466873 \/ 0.283200 (0.183674) | 0.040453 \/ 0.141683 (-0.101230) | 1.586438 \/ 1.452155 (0.134283) | 1.671081 \/ 1.492716 (0.178365) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.180607 \/ 0.018006 (0.162601) | 0.520145 \/ 0.000490 (0.519655) | 0.004824 \/ 0.000200 (0.004624) | 0.000116 \/ 0.000054 (0.000061) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029308 \/ 0.037411 (-0.008103) | 0.093652 \/ 0.014526 (0.079126) | 0.102332 \/ 0.176557 (-0.074224) | 0.162414 \/ 0.737135 (-0.574721) | 0.098017 \/ 0.296338 (-0.198321) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.583949 \/ 0.215209 (0.368740) | 6.035191 \/ 2.077655 (3.957536) | 2.801274 \/ 1.504120 (1.297155) | 2.566150 \/ 1.541195 (1.024955) | 2.437122 \/ 1.468490 (0.968632) | 0.865038 \/ 4.584777 (-3.719739) | 
4.841727 \/ 3.745712 (1.096015) | 4.683919 \/ 5.269862 (-0.585943) | 2.941240 \/ 4.565676 (-1.624437) | 0.104888 \/ 0.424275 (-0.319387) | 0.007747 \/ 0.007607 (0.000140) | 0.780041 \/ 0.226044 (0.553997) | 7.771314 \/ 2.268929 (5.502385) | 3.680814 \/ 55.444624 (-51.763811) | 2.938472 \/ 6.876477 (-3.938004) | 2.981740 \/ 2.142072 (0.839668) | 1.065411 \/ 4.805227 (-3.739816) | 0.222265 \/ 6.500664 (-6.278399) | 0.082428 \/ 0.075469 (0.006959) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.626774 \/ 1.841788 (-0.215014) | 21.618284 \/ 8.074308 (13.543976) | 20.596743 \/ 10.191392 (10.405351) | 0.240969 \/ 0.680424 (-0.439454) | 0.025630 \/ 0.534201 (-0.508570) | 0.481981 \/ 0.579283 (-0.097302) | 0.547914 \/ 0.434364 (0.113550) | 0.522296 \/ 0.540337 (-0.018041) | 0.729174 \/ 1.386936 (-0.657762) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#b8067c0262073891180869f700ebef5ac3dc5cce \"CML watermark\")\n"],"created_at":1689077789000,"updated_at":1689091648000,"closed_at":1689091006000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/6015","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6015","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6015.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6015.patch","merged_at":1689091006000},"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6015\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6015\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6014","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6014\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6014\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6014\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/6014","id":1798213816,"node_id":"I_kwDODunzps5rLpC4","number":6014,"title":"Request to Share\/Update Dataset Viewer 
Code","user":{"login":"lilyorlilypad","id":105081034,"node_id":"U_kgDOBkNoyg","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/105081034?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lilyorlilypad","html_url":"https:\/\/github.com\/lilyorlilypad","followers_url":"https:\/\/api.github.com\/users\/lilyorlilypad\/followers","following_url":"https:\/\/api.github.com\/users\/lilyorlilypad\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lilyorlilypad\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lilyorlilypad\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lilyorlilypad\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lilyorlilypad\/orgs","repos_url":"https:\/\/api.github.com\/users\/lilyorlilypad\/repos","events_url":"https:\/\/api.github.com\/users\/lilyorlilypad\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lilyorlilypad\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! The huggingface\/dataset-viewer code was not maintained anymore because we switched to a new dataset viewer that is deployed available for each dataset the Hugging Face website.\r\n\r\nWhat are you using this old repository for ?","I think these parts are outdated:\r\n\r\n* https:\/\/github.com\/huggingface\/datasets-viewer\/blob\/8efad8eae313a891f713469983bf4c744786f26e\/run.py#L126-L131\r\n* https:\/\/github.com\/huggingface\/datasets-viewer\/blob\/8efad8eae313a891f713469983bf4c744786f26e\/run.py#L145-L150\r\n\r\nTo make the viewer work, the first one should be replaced with the following:\r\n```python\r\ndataset_module = datasets.load.dataset_module_factory(path)\r\nbuilder_cls = datasets.load.import_main_class(dataset_module.module_path)\r\nconfs = builder_cls.BUILDER_CONFIGS\r\n```\r\nAnd the second one:\r\n```python\r\ndataset_module = datasets.load.dataset_module_factory(path)\r\nbuilder_cls = datasets.load.import_main_class(dataset_module.module_path)\r\nif conf:\r\n builder_instance = builder_cls(name=conf, cache_dir=path if path_to_datasets is not None else None)\r\nelse:\r\n builder_instance = builder_cls(cache_dir=path if path_to_datasets is not None else None)\r\n```\r\n\r\nBut as @lhoestq suggested, it's better to use the `datasets-server` API nowadays to [fetch the rows](https:\/\/huggingface.co\/docs\/datasets-server\/rows).","> The dataset viewer on the Hugging Face website is incredibly useful\r\n\r\n@mariosasko i think @lilyorlilypad wants to run the new dataset-viewer, not the old one","> wants to run the new dataset-viewer, not the old one\r\n\r\nThanks for the clarification for me. I do want to run the new dataset-viewer. ","It should be possible to run it locally using the HF datasets-server API (docs [here](https:\/\/huggingface.co\/docs\/datasets-server)) but the front end part is not open source (yet ?)\r\n\r\nThe back-end is open source though if you're interested: https:\/\/github.com\/huggingface\/datasets-server\r\nIt automatically converts datasets on HF to Parquet, which is the format we use to power the viewer.","the new frontend would probably be hard to open source, as is, as it's quite intertwined with the Hub's code.\r\n\r\nHowever, at some point it would be amazing to have a community-driven open source implementation of a frontend to datasets-server! 
"],"created_at":1689057369000,"updated_at":1689171529000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"\r\nOverview:\r\nThe repository (huggingface\/datasets-viewer) was recently archived and when I tried to run the code, there was the error message \"AttributeError: module 'datasets.load' has no attribute 'prepare_module'\". I could not resolve the issue myself due to lack of documentation of that attribute. \r\n\r\nRequest:\r\nI kindly request the sharing of the code responsible for the dataset preview functionality or help with resolving the error. The dataset viewer on the Hugging Face website is incredibly useful since it is compatible with different types of inputs. It allows users to find datasets that meet their needs more efficiently. If needed, I am willing to contribute to the project by testing, documenting, and providing feedback on the dataset viewer code. \r\n\r\nThank you for considering this request, and I look forward to your response.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6014\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6014\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6013","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6013\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6013\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6013\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/6013","id":1796083437,"node_id":"I_kwDODunzps5rDg7t","number":6013,"title":"[FR] `map` should reuse unchanged columns from the previous dataset to avoid disk usage","user":{"login":"NightMachinery","id":36224762,"node_id":"MDQ6VXNlcjM2MjI0NzYy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36224762?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/NightMachinery","html_url":"https:\/\/github.com\/NightMachinery","followers_url":"https:\/\/api.github.com\/users\/NightMachinery\/followers","following_url":"https:\/\/api.github.com\/users\/NightMachinery\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/NightMachinery\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/NightMachinery\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/NightMachinery\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/NightMachinery\/orgs","repos_url":"https:\/\/api.github.com\/users\/NightMachinery\/repos","events_url":"https:\/\/api.github.com\/users\/NightMachinery\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/NightMachinery\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"},{"id":3761482852,"node_id":"LA_kwDODunzps7gM6xk","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/good%20second%20issue","name":"good second 
issue","color":"BDE59C","default":false,"description":"Issues a bit more difficult than \"Good First\" issues"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["You can use the `remove_columns` parameter in `map` to avoid duplicating the columns (and save disk space) and then concatenate the original dataset with the map result:\r\n```python\r\nfrom datasets import concatenate_datasets\r\n# dummy example\r\nds_new = ds.map(lambda x: {\"new_col\": x[\"col\"] + 2}, remove_columns=ds.column_names)\r\nds_combined = concatenate_datasets([ds, ds_new], axis=1)\r\n```\r\n\r\nDoing this automatically is hard to implement efficiently unless we know ahead of time which existing columns will be modified by a `map` transform. We have this info when `input_columns` are specified, so I think this is the only case we can optimize."],"created_at":1688971340000,"updated_at":1689003472000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Feature request\n\nCurrently adding a new column with `map` will cause all the data in the dataset to be duplicated and stored\/cached on the disk again. It should reuse unchanged columns. \n\n### Motivation\n\nThis allows having datasets with different columns but sharing some basic columns. Currently, these datasets would become too expensive to store and one would need some kind of on-the-fly join; which also doesn't seem implemented.\n\n### Your contribution\n\n_","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6013\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6013\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6012","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6012\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6012\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6012\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/6012","id":1795575432,"node_id":"I_kwDODunzps5rBk6I","number":6012,"title":"[FR] Transform Chaining, Lazy 
Mapping","user":{"login":"NightMachinery","id":36224762,"node_id":"MDQ6VXNlcjM2MjI0NzYy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36224762?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/NightMachinery","html_url":"https:\/\/github.com\/NightMachinery","followers_url":"https:\/\/api.github.com\/users\/NightMachinery\/followers","following_url":"https:\/\/api.github.com\/users\/NightMachinery\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/NightMachinery\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/NightMachinery\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/NightMachinery\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/NightMachinery\/orgs","repos_url":"https:\/\/api.github.com\/users\/NightMachinery\/repos","events_url":"https:\/\/api.github.com\/users\/NightMachinery\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/NightMachinery\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["You can use `with_transform` to get a new dataset object.\r\n\r\nSupport for lazy `map` has already been discussed [here](https:\/\/github.com\/huggingface\/datasets\/issues\/3385) a little bit. Personally, I'm not a fan, as this would make `map` even more complex. ","> You can use `with_transform` to get a new dataset object.\r\n> \r\n> Support for lazy `map` has already been discussed [here](https:\/\/github.com\/huggingface\/datasets\/issues\/3385) a little bit. Personally, I'm not a fan, as this would make `map` even more complex.\r\n\r\nI read about IterableDataset, and it seems to have lazy mapping. But I can't figure out how to convert an IterableDataset into a normal one when needed.\r\n\r\n`with_transform` still does not chain AFAIU.","> I read about IterableDataset, and it seems to have lazy mapping. But I can't figure out how to convert an IterableDataset into a normal one when needed.\r\n\r\nYou must cache an `IterableDataset` to disk to load it as a `Dataset`. One way to do this is with `Dataset.from_generator`:\r\n```python\r\nfrom functools import partial\r\nfrom datasets import Dataset\r\n\r\ndef gen_from_iterable_dataset(iterable_ds)\r\n yield from iterable_ds\r\n\r\nds = Dataset.from_generator(partial(gen_from_iterable_dataset, iterable_ds), features=iterable_ds.features})\r\n```\r\n\r\n> with_transform still does not chain AFAIU.\r\n\r\nYes, not supported yet - the solution is to combine the transforms into a single one.","I wonder if it would be beneficial to have a dedicated method to do that ? 
Maybe a `.save_to_disk()` so that the user can reload the resulting dataset later?","> ```python\r\n> from functools import partial\r\n> from datasets import Dataset\r\n> \r\n> def gen_from_iterable_dataset(iterable_ds):\r\n> yield from iterable_ds\r\n> \r\n> ds = Dataset.from_generator(partial(gen_from_iterable_dataset, iterable_ds), features=iterable_ds.features)\r\n> ```\r\n\r\n@mariosasko With these complex mapping functions, what hash will be used to cache this dataset?\r\n"],"created_at":1688938821000,"updated_at":1689277941000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Feature request\n\nCurrently, a `map` call processes and duplicates the whole dataset, which takes both time and disk space.\r\n\r\nThe solution is to allow lazy mapping, which is essentially a saved chain of transforms that are applied on the fly whenever a slice of the dataset is requested.\r\n\r\nThe API should look like `map`, as `set_transform` changes the current dataset while `map` returns another dataset.\n\n### Motivation\n\nLazy processing allows lower disk usage and faster experimentation.\n\n### Your contribution\n\n_","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6012\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6012\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6011","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6011\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6011\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6011\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/6011","id":1795296568,"node_id":"I_kwDODunzps5rAg04","number":6011,"title":"Documentation: wiki_dpr Dataset has no metric_type for Faiss Index","user":{"login":"YichiRockyZhang","id":29335344,"node_id":"MDQ6VXNlcjI5MzM1MzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29335344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/YichiRockyZhang","html_url":"https:\/\/github.com\/YichiRockyZhang","followers_url":"https:\/\/api.github.com\/users\/YichiRockyZhang\/followers","following_url":"https:\/\/api.github.com\/users\/YichiRockyZhang\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/YichiRockyZhang\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/YichiRockyZhang\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/YichiRockyZhang\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/YichiRockyZhang\/orgs","repos_url":"https:\/\/api.github.com\/users\/YichiRockyZhang\/repos","events_url":"https:\/\/api.github.com\/users\/YichiRockyZhang\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/YichiRockyZhang\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! 
You can do `ds.get_index(\"embeddings\").faiss_index.metric_type` to get the metric type and then match the result with the FAISS metric [enum](https:\/\/github.com\/facebookresearch\/faiss\/blob\/43d86e30736ede853c384b24667fc3ab897d6ba9\/faiss\/MetricType.h#L22-L36) (should be L2).","Ah! Thank you for pointing this out. FYI: the enum indicates it's using the inner product. Using `torch.inner` or `torch.dot` still produces a discrepancy compared to the built-in score. I think this is because of the compression\/quantization that occurs with the FAISS index."],"created_at":1688891419000,"updated_at":1689044556000,"closed_at":1689044556000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nAfter loading `wiki_dpr` using:\r\n```py\r\nds = load_dataset(path='wiki_dpr', name='psgs_w100.multiset.compressed', split='train')\r\nprint(ds.get_index(\"embeddings\").metric_type) # prints nothing because the value is None\r\n```\r\nthe index does not have a defined `metric_type`. This is an issue because I do not know how the `scores` are being computed for `get_nearest_examples()`.\n\n### Steps to reproduce the bug\n\nSystem: Python 3.9.16, Transformers 4.30.2, WSL\r\n\r\nAfter loading `wiki_dpr` using:\r\n```py\r\nds = load_dataset(path='wiki_dpr', name='psgs_w100.multiset.compressed', split='train')\r\nprint(ds.get_index(\"embeddings\").metric_type) # prints nothing because the value is None\r\n```\r\nthe index does not have a defined `metric_type`. This is an issue because I do not know how the `scores` are being computed for `get_nearest_examples()`.\r\n\r\n```py\r\nfrom transformers import DPRQuestionEncoder, DPRContextEncoder, DPRQuestionEncoderTokenizer, DPRContextEncoderTokenizer\r\n\r\ntokenizer = DPRQuestionEncoderTokenizer.from_pretrained(\"facebook\/dpr-question_encoder-multiset-base\")\r\nencoder = DPRQuestionEncoder.from_pretrained(\"facebook\/dpr-question_encoder-multiset-base\")\r\n\r\ndef encode_question(query, tokenizer=tokenizer, encoder=encoder):\r\n inputs = tokenizer(query, return_tensors='pt')\r\n question_embedding = encoder(**inputs)[0].detach().numpy()\r\n return question_embedding\r\n\r\ndef get_knn(query, k=5, tokenizer=tokenizer, encoder=encoder, verbose=False):\r\n enc_question = encode_question(query, tokenizer, encoder)\r\n topk_results = ds.get_nearest_examples(index_name='embeddings',\r\n query=enc_question,\r\n k=k)\r\n \r\n \r\n a = torch.tensor(enc_question[0]).reshape(768)\r\n b = torch.tensor(topk_results.examples['embeddings'][0])\r\n print(a.shape, b.shape)\r\n print(torch.dot(a, b))\r\n print((a-b).pow(2).sum())\r\n\r\n return topk_results\r\n```\r\n\r\nThe [FAISS documentation](https:\/\/github.com\/facebookresearch\/faiss\/wiki\/MetricType-and-distances) suggests the metric is usually L2 distance (without the square root) or the inner product. 
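(As a side note, a hedged toy illustration, independent of wiki_dpr, of how an index's `metric_type` matches the `faiss.METRIC_*` constants suggested in the comments above:)

```python
import faiss
import numpy as np

d = 8  # toy dimensionality; wiki_dpr embeddings are 768-dim
xb = np.random.rand(16, d).astype("float32")

# An inner-product index, the metric wiki_dpr turned out to use.
index = faiss.IndexFlatIP(d)
index.add(xb)

print(index.metric_type == faiss.METRIC_INNER_PRODUCT)  # True
print(index.metric_type == faiss.METRIC_L2)             # False

# Search scores are inner products here, so higher means a better match.
scores, ids = index.search(xb[:1], k=3)
print(scores[0])
```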
I compute both for the sample query:\r\n```py\r\nquery = \"\"\" it catapulted into popular culture along with a line of action figures and other toys by Bandai.[2] By 2001, the media franchise had generated over $6 billion in toy sales.\r\nDespite initial criticism that its action violence targeted child audiences, the franchise has been commercially successful.\"\"\"\r\nget_knn(query, k=5)\r\n```\r\n\r\nHere, I get a dot product of 80.6020 and an L2 distance of 77.6616, and the search returns\r\n```py\r\nNearestExamplesResults(scores=array([76.20431 , 75.312416, 74.945404, 74.866394, 74.68506 ],\r\n dtype=float32), examples={'id': ['3081096', '2004811', '8908258', '9594124', '286575'], 'text': ['actors, resulting in the \"Power Rangers\" franchise which has continued since then into sequel TV series (with \"Power Rangers Beast Morphers\" set to premiere in 2019), comic books, video games, and three feature films, with a further cinematic universe planned. Following from the success of \"Power Rangers\", Saban acquired the rights to more of Toei\\'s library, creating \"VR Troopers\" and \"Big Bad Beetleborgs\" from several Metal Hero Series shows and \"Masked Rider\" from Kamen Rider Series footage. DIC Entertainment joined this boom by acquiring the rights to \"Gridman the Hyper Agent\" and turning it into \"Superhuman Samurai Syber-Squad\". In 2002,', \r\n```\r\n\r\nUsing `k=1` shows that the higher the output score, the better the match, so the metric should not be L2 distance. However, my manually computed inner product (80.6) has a discrepancy with the reported score (76.2). Perhaps this has to do with me using the `compressed` embeddings?\n\n### Expected behavior\n\n```py\r\nds = load_dataset(path='wiki_dpr', name='psgs_w100.multiset.compressed', split='train')\r\nprint(ds.get_index(\"embeddings\").metric_type) # METRIC_INNER_PRODUCT\r\n```\n\n### Environment info\n\n- `datasets` version: 2.12.0\r\n- Platform: Linux-4.18.0-477.13.1.el8_8.x86_64-x86_64-with-glibc2.28\r\n- Python version: 3.9.16\r\n- Huggingface_hub version: 0.14.1\r\n- PyArrow version: 12.0.0\r\n- Pandas version: 2.0.1","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6011\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6011\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6010","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6010\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6010\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6010\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/6010","id":1793838152,"node_id":"I_kwDODunzps5q68xI","number":6010,"title":"Improve `Dataset`'s string 
representation","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I want to take a shot at this if possible ","Yes, feel free to work on this.\r\n\r\nYou can check the PyArrow Table `__repr__` and Polars DataFrame `__repr__`\/`_repr_html_` implementations for some pointers\/ideas."],"created_at":1688747883000,"updated_at":1688999574000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":null,"pull_request":null,"body":"Currently, `Dataset.__repr__` outputs a dataset's column names and the number of rows. 
We could improve it by printing its features and the first few rows.\r\n\r\nWe should also implement `_repr_html_` to have a rich HTML representation in notebooks\/Streamlit.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6010\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6010\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6009","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6009\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6009\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6009\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6009","id":1792059808,"node_id":"PR_kwDODunzps5U1mus","number":6009,"title":"Fix cast for dictionaries with no keys","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
[Two automated CML benchmark comments (PyArrow 8.0.0 vs. latest performance tables) omitted.]"],"created_at":1688669294000,"updated_at":1688739180000,"closed_at":1688738473000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/6009","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6009","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6009.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6009.patch","merged_at":1688738473000},"body":"Fix #5677 ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6009\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6009\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6008","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6008\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6008\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6008\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/6008","id":1789869344,"node_id":"I_kwDODunzps5qrz0g","number":6008,"title":"Dataset.from_generator consistently freezes at ~1000
rows","user":{"login":"andreemic","id":27695722,"node_id":"MDQ6VXNlcjI3Njk1NzIy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27695722?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/andreemic","html_url":"https:\/\/github.com\/andreemic","followers_url":"https:\/\/api.github.com\/users\/andreemic\/followers","following_url":"https:\/\/api.github.com\/users\/andreemic\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/andreemic\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/andreemic\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/andreemic\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/andreemic\/orgs","repos_url":"https:\/\/api.github.com\/users\/andreemic\/repos","events_url":"https:\/\/api.github.com\/users\/andreemic\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/andreemic\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["By default, we write data to disk (so it can be memory-mapped) every 1000 rows\/samples. You can control this with the `writer_batch_size` parameter. Also, when working with fixed-size arrays, the `ArrayXD` feature types yield better performance (e.g., in your case, `features=datasets.Features({\"i\": datasets.Array3D(shape=(512,512,3), dtype=\"float32\")})` should be faster).\r\n\r\nOur support for multi-dim arrays could be better, and we plan to improve it as part of https:\/\/github.com\/huggingface\/datasets\/issues\/5272.","> By default, we write data to disk (so it can be memory-mapped) every 1000 rows\/samples. You can control this with the `writer_batch_size` parameter. Also, when working with fixed-size arrays, the `ArrayXD` feature types yield better performance (e.g., in your case, `features=datasets.Features({\"i\": datasets.Array3D(shape=(512,512,3), dtype=\"float32\")})` should be faster).\r\n> \r\n> Our support for multi-dim arrays could be better, and we plan to improve it as part of #5272.\r\n\r\nThanks for the explanation! The Image array was just for demonstration, I use PIL Images in practice. Does that make a difference? What's the best approach for a dataset with PIL Images as rows?","It's best to use the `datasets.Image()` feature type for PIL images (to save space) :)"],"created_at":1688573208000,"updated_at":1688996799000,"closed_at":1688996799000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nWhenever I try to create a dataset which contains images using `Dataset.from_generator`, it freezes around 996 rows. I suppose it has something to do with memory consumption, but there's more memory available. 
Somehow it worked a few times, but mostly this makes the datasets library much more cumbersome to work with, because generators are the easiest way to turn an existing dataset into a Hugging Face dataset.\r\n\r\nI've let it run in the frozen state for way longer than it can possibly take to load the actual dataset.\r\n\r\nLet me know if you have ideas on how to resolve it!\n\n### Steps to reproduce the bug\n\n```python\r\nfrom datasets import Dataset\r\nimport numpy as np\r\n\r\ndef gen():\r\n for row in range(10000):\r\n yield {\"i\": np.random.rand(512, 512, 3)}\r\n \r\nDataset.from_generator(gen)\r\n# -> 90% of the time gets stuck around 1000 rows\r\n```\n\n### Expected behavior\n\nIt should continue and go through all the examples yielded by the generator, or at least throw an error or somehow communicate what's going on.\n\n### Environment info\n\n- `datasets` version: 2.8.0\r\n- Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.29\r\n- Python version: 3.8.10\r\n- PyArrow version: 12.0.1\r\n- Pandas version: 1.5.1\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6008\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6008\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6007","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6007\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6007\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6007\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/6007","id":1789782693,"node_id":"I_kwDODunzps5qreql","number":6007,"title":"Get an error \"OverflowError: Python int too large to convert to C long\" when loading a large dataset","user":{"login":"silverriver","id":2529049,"node_id":"MDQ6VXNlcjI1MjkwNDk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2529049?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/silverriver","html_url":"https:\/\/github.com\/silverriver","followers_url":"https:\/\/api.github.com\/users\/silverriver\/followers","following_url":"https:\/\/api.github.com\/users\/silverriver\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/silverriver\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/silverriver\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/silverriver\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/silverriver\/orgs","repos_url":"https:\/\/api.github.com\/users\/silverriver\/repos","events_url":"https:\/\/api.github.com\/users\/silverriver\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/silverriver\/received_events","type":"User","site_admin":false},"labels":[{"id":5705560427,"node_id":"LA_kwDODunzps8AAAABVBPxaw","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/arrow","name":"arrow","color":"c2e0c6","default":false,"description":"Related to Apache Arrow"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This error means that one of the int32 (`Value(\"int32\")`) columns in the dataset has a value that is out of the 
valid (int32) range.\r\n\r\nI'll open a PR to print the name of a problematic column to make debugging such errors easier.","I am afraid int32 is not the reason for this error.\r\n\r\nI have submitted a commit to use int64 for all ints in the dataset:\r\nhttps:\/\/huggingface.co\/datasets\/liwu\/MNBVC\/commit\/857ac00d9eab96a6708ad6a82bd9001686042a9e\r\n\r\nand I have updated my env to the latest datasets release:\r\n\r\n- `datasets` version: 2.13.1\r\n- Platform: macOS-13.2.1-arm64-arm-64bit\r\n- Python version: 3.11.2\r\n- Huggingface_hub version: 0.13.4\r\n- PyArrow version: 11.0.0\r\n- Pandas version: 1.5.3\r\n\r\nBut the error still exists:\r\n\r\n```\r\nDownloading and preparing dataset mnbvc\/news_peoples_daily to \/Users\/silver\/.cache\/huggingface\/datasets\/liwu___mnbvc\/news_peoples_daily\/0.0.1\/ee380f6309fe9b8b0d1fb14d77118f132444f22c8c4b28bf5c1645312688e051...\r\nDownloading data files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 12\/12 [00:00<00:00, 9070.40it\/s]\r\nExtracting data files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 12\/12 [00:00<00:00, 2697.16it\/s]\r\n---------------------------------------------------------------------------\r\nOverflowError Traceback (most recent call last)\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/datasets\/builder.py:1647, in GeneratorBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id)\r\n 1646 example = self.info.features.encode_example(record) if self.info.features is not None else record\r\n-> 1647 writer.write(example, key)\r\n 1648 num_examples_progress_update += 1\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/datasets\/arrow_writer.py:490, in ArrowWriter.write(self, 
example, key, writer_batch_size)\r\n 488 self.hkey_record = []\r\n--> 490 self.write_examples_on_file()\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/datasets\/arrow_writer.py:448, in ArrowWriter.write_examples_on_file(self)\r\n 444 batch_examples[col] = [\r\n 445 row[0][col].to_pylist()[0] if isinstance(row[0][col], (pa.Array, pa.ChunkedArray)) else row[0][col]\r\n 446 for row in self.current_examples\r\n 447 ]\r\n--> 448 self.write_batch(batch_examples=batch_examples)\r\n 449 self.current_examples = []\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/datasets\/arrow_writer.py:553, in ArrowWriter.write_batch(self, batch_examples, writer_batch_size)\r\n 552 typed_sequence = OptimizedTypedSequence(col_values, type=col_type, try_type=col_try_type, col=col)\r\n--> 553 arrays.append(pa.array(typed_sequence))\r\n 554 inferred_features[col] = typed_sequence.get_inferred_type()\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/pyarrow\/array.pxi:236, in pyarrow.lib.array()\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/pyarrow\/array.pxi:110, in pyarrow.lib._handle_arrow_array_protocol()\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/datasets\/arrow_writer.py:189, in TypedSequence.__arrow_array__(self, type)\r\n 188 trying_cast_to_python_objects = True\r\n--> 189 out = pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))\r\n 190 # use smaller integer precisions if possible\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/pyarrow\/array.pxi:320, in pyarrow.lib.array()\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/pyarrow\/array.pxi:39, in pyarrow.lib._sequence_to_array()\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/pyarrow\/error.pxi:144, in pyarrow.lib.pyarrow_internal_check_status()\r\n\r\nOverflowError: Python int too large to convert to C long\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nOverflowError Traceback (most recent call last)\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/datasets\/builder.py:1656, in GeneratorBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id)\r\n 1655 num_shards = shard_id + 1\r\n-> 1656 num_examples, num_bytes = writer.finalize()\r\n 1657 writer.close()\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/datasets\/arrow_writer.py:584, in ArrowWriter.finalize(self, close_stream)\r\n 583 self.hkey_record = []\r\n--> 584 self.write_examples_on_file()\r\n 585 # If schema is known, infer features even if no examples were written\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/datasets\/arrow_writer.py:448, in ArrowWriter.write_examples_on_file(self)\r\n 444 batch_examples[col] = [\r\n 445 row[0][col].to_pylist()[0] if isinstance(row[0][col], (pa.Array, pa.ChunkedArray)) else row[0][col]\r\n 446 for row in self.current_examples\r\n 447 ]\r\n--> 448 self.write_batch(batch_examples=batch_examples)\r\n 449 self.current_examples = []\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/datasets\/arrow_writer.py:553, in ArrowWriter.write_batch(self, batch_examples, writer_batch_size)\r\n 552 typed_sequence = OptimizedTypedSequence(col_values, type=col_type, try_type=col_try_type, col=col)\r\n--> 553 arrays.append(pa.array(typed_sequence))\r\n 554 inferred_features[col] = typed_sequence.get_inferred_type()\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/pyarrow\/array.pxi:236, in pyarrow.lib.array()\r\n\r\nFile 
~\/git\/venv\/lib\/python3.11\/site-packages\/pyarrow\/array.pxi:110, in pyarrow.lib._handle_arrow_array_protocol()\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/datasets\/arrow_writer.py:189, in TypedSequence.__arrow_array__(self, type)\r\n 188 trying_cast_to_python_objects = True\r\n--> 189 out = pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))\r\n 190 # use smaller integer precisions if possible\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/pyarrow\/array.pxi:320, in pyarrow.lib.array()\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/pyarrow\/array.pxi:39, in pyarrow.lib._sequence_to_array()\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/pyarrow\/error.pxi:144, in pyarrow.lib.pyarrow_internal_check_status()\r\n\r\nOverflowError: Python int too large to convert to C long\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nDatasetGenerationError Traceback (most recent call last)\r\nCell In[2], line 1\r\n----> 1 dataset = load_dataset(\"liwu\/MNBVC\", 'news_peoples_daily', split='train')\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/datasets\/load.py:1809, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)\r\n 1806 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES\r\n 1808 # Download and prepare data\r\n-> 1809 builder_instance.download_and_prepare(\r\n 1810 download_config=download_config,\r\n 1811 download_mode=download_mode,\r\n 1812 verification_mode=verification_mode,\r\n 1813 try_from_hf_gcs=try_from_hf_gcs,\r\n 1814 num_proc=num_proc,\r\n 1815 storage_options=storage_options,\r\n 1816 )\r\n 1818 # Build dataset for splits\r\n 1819 keep_in_memory = (\r\n 1820 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)\r\n 1821 )\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/datasets\/builder.py:909, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)\r\n 907 if num_proc is not None:\r\n 908 prepare_split_kwargs[\"num_proc\"] = num_proc\r\n--> 909 self._download_and_prepare(\r\n 910 dl_manager=dl_manager,\r\n 911 verification_mode=verification_mode,\r\n 912 **prepare_split_kwargs,\r\n 913 **download_and_prepare_kwargs,\r\n 914 )\r\n 915 # Sync info\r\n 916 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/datasets\/builder.py:1670, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs)\r\n 1669 def _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs):\r\n-> 1670 super()._download_and_prepare(\r\n 1671 dl_manager,\r\n 1672 verification_mode,\r\n 1673 check_duplicate_keys=verification_mode == VerificationMode.BASIC_CHECKS\r\n 1674 or verification_mode == VerificationMode.ALL_CHECKS,\r\n 1675 **prepare_splits_kwargs,\r\n 1676 )\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/datasets\/builder.py:1004, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)\r\n 1000 
split_dict.add(split_generator.split_info)\r\n 1002 try:\r\n 1003 # Prepare split will record examples associated to the split\r\n-> 1004 self._prepare_split(split_generator, **prepare_split_kwargs)\r\n 1005 except OSError as e:\r\n 1006 raise OSError(\r\n 1007 \"Cannot find data file. \"\r\n 1008 + (self.manual_download_instructions or \"\")\r\n 1009 + \"\\nOriginal error:\\n\"\r\n 1010 + str(e)\r\n 1011 ) from None\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/datasets\/builder.py:1508, in GeneratorBasedBuilder._prepare_split(self, split_generator, check_duplicate_keys, file_format, num_proc, max_shard_size)\r\n 1506 job_id = 0\r\n 1507 with pbar:\r\n-> 1508 for job_id, done, content in self._prepare_split_single(\r\n 1509 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args\r\n 1510 ):\r\n 1511 if done:\r\n 1512 result = content\r\n\r\nFile ~\/git\/venv\/lib\/python3.11\/site-packages\/datasets\/builder.py:1665, in GeneratorBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id)\r\n 1663 if isinstance(e, SchemaInferenceError) and e.__context__ is not None:\r\n 1664 e = e.__context__\r\n-> 1665 raise DatasetGenerationError(\"An error occurred while generating the dataset\") from e\r\n 1667 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)\r\n\r\nDatasetGenerationError: An error occurred while generating the dataset\r\n```\r\n\r\nBesides, it works fine when I am using a streamed dataset.","`simhash` is the problematic column - it has values such as `18329103420363166823` that are out of the int64 range. You can fix this by setting the feature type to `Value(\"string\")` (it's advised to use this type for hash values in general)\r\n\r\n> Besides, it works fine when I am using a streamed dataset.\r\n\r\nStreaming yields Python dictionaries from the script without converting them to the Arrow representation, as this conversion step is not that cheap performance-wise.","I am using uint64 for simhash.\r\n\r\nuint64 ranges up to about 3.69E19, and 18329103420363166823 is less than this value.\r\n\r\nMoreover, our simhash algorithm uses 64 bits, so it should fit in uint64.","You are right. I overlooked the feature type.\r\n\r\nThis is a reproducer:\r\n```python\r\nimport pyarrow as pa\r\nfrom datasets import Value\r\nfrom datasets.arrow_writer import TypedSequence\r\n\r\npa.array(TypedSequence([18329103420363166823], type=Value(\"uint64\")))\r\n```\r\n\r\n`pa.array([18329103420363166823])` also fails with the same error, so it seems PyArrow does not always infer the correct type as NumPy does (`uint64` in this case).\r\n\r\nI'll report this issue in the Arrow repo.\r\n\r\n`pa.array([18329103420363166823], pa.uint64())` works, so maybe we can implement a temporary fix (supporting complex input such as `[{\"image\": pil_image, \"num\": uint64_value}]` would be hard though).\r\n\r\nIn the meantime, you should be able to bypass this error by returning the `simhash` values as NumPy scalars in the script:\r\n```python\r\ndef _generate_examples(self, ...):\r\n ...\r\n yield {..., \"simhash\": np.uint64(simhash), ...}\r\n```","Thank you for checking this issue in detail.\r\n\r\nHowever, it seems that using `np.uint64(simhash)` does not work. The same issue still exists.\r\n\r\nhttps:\/\/huggingface.co\/datasets\/liwu\/MNBVC\/commit\/1e44f1e400b7e61052647d44c99cdae3bae9c830\r\n\r\nAnyway, we decided to use the string type for these simhash values. 
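(To make the type-inference pitfall concrete, a hedged standalone sketch grounded in the reproducer discussed above:)

```python
import pyarrow as pa

big = 18329103420363166823  # fits in uint64, but not in int64

# PyArrow's inference tries int64 first, so this raises
# "OverflowError: Python int too large to convert to C long":
try:
    pa.array([big])
except OverflowError as e:
    print("inference failed:", e)

# An explicit uint64 type works:
print(pa.array([big], type=pa.uint64()).type)  # uint64

# The workaround chosen in this thread: store hashes as strings.
print(pa.array([str(big)], type=pa.string()).type)  # string
```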
Hope pyarrow can fix their bug soon.","Arrow issue: https:\/\/github.com\/apache\/arrow\/issues\/36520"],"created_at":1688570210000,"updated_at":1689016277000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\r\n\r\nWhen loading a large dataset with the following code\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"liwu\/MNBVC\", 'news_peoples_daily', split='train')\r\n```\r\n\r\nwe encountered the error: \"OverflowError: Python int too large to convert to C long\"\r\nThe error looks something like:\r\n\r\n```\r\nOverflowError: Python int too large to convert to C long\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nOverflowError Traceback (most recent call last)\r\n in \r\n----> 1 dataset = load_dataset(\"liwu\/MNBVC\", 'news_peoples_daily', split='train', cache_dir='\/sfs\/MNBVC\/.cache\/')\r\n\r\n\/sfs\/MNBVC\/venv\/lib64\/python3.6\/site-packages\/datasets\/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)\r\n 1749 ignore_verifications=ignore_verifications,\r\n 1750 try_from_hf_gcs=try_from_hf_gcs,\r\n-> 1751 use_auth_token=use_auth_token,\r\n 1752 )\r\n 1753 \r\n\r\n\/sfs\/MNBVC\/venv\/lib64\/python3.6\/site-packages\/datasets\/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)\r\n 703 if not downloaded_from_gcs:\r\n 704 self._download_and_prepare(\r\n--> 705 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 706 )\r\n 707 # Sync info\r\n\r\n\/sfs\/MNBVC\/venv\/lib64\/python3.6\/site-packages\/datasets\/builder.py in _download_and_prepare(self, dl_manager, verify_infos)\r\n 1225 \r\n 1226 def _download_and_prepare(self, dl_manager, verify_infos):\r\n-> 1227 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)\r\n 1228 \r\n 1229 def _get_examples_iterable_for_split(self, split_generator: SplitGenerator) -> ExamplesIterable:\r\n\r\n\/sfs\/MNBVC\/venv\/lib64\/python3.6\/site-packages\/datasets\/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 791 try:\r\n 792 # Prepare split will record examples associated to the split\r\n--> 793 self._prepare_split(split_generator, **prepare_split_kwargs)\r\n 794 except OSError as e:\r\n 795 raise OSError(\r\n\r\n\/sfs\/MNBVC\/venv\/lib64\/python3.6\/site-packages\/datasets\/builder.py in _prepare_split(self, split_generator, check_duplicate_keys)\r\n 1219 writer.write(example, key)\r\n 1220 finally:\r\n-> 1221 num_examples, num_bytes = writer.finalize()\r\n 1222 \r\n 1223 split_generator.split_info.num_examples = num_examples\r\n\r\n\/sfs\/MNBVC\/venv\/lib64\/python3.6\/site-packages\/datasets\/arrow_writer.py in finalize(self, close_stream)\r\n 536 # Re-intializing to empty list for next batch\r\n 537 self.hkey_record = []\r\n--> 538 self.write_examples_on_file()\r\n 539 if self.pa_writer is None:\r\n 540 if self.schema:\r\n\r\n\/sfs\/MNBVC\/venv\/lib64\/python3.6\/site-packages\/datasets\/arrow_writer.py in write_examples_on_file(self)\r\n 407 # Since current_examples contains (example, key) tuples\r\n 408 batch_examples[col] = [row[0][col] for row in self.current_examples]\r\n--> 
409 self.write_batch(batch_examples=batch_examples)\r\n 410 self.current_examples = []\r\n 411 \r\n\r\n\/sfs\/MNBVC\/venv\/lib64\/python3.6\/site-packages\/datasets\/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size)\r\n 506 col_try_type = try_features[col] if try_features is not None and col in try_features else None\r\n 507 typed_sequence = OptimizedTypedSequence(batch_examples[col], type=col_type, try_type=col_try_type, col=col)\r\n--> 508 arrays.append(pa.array(typed_sequence))\r\n 509 inferred_features[col] = typed_sequence.get_inferred_type()\r\n 510 schema = inferred_features.arrow_schema if self.pa_writer is None else self.schema\r\n\r\n\/sfs\/MNBVC\/venv\/lib64\/python3.6\/site-packages\/pyarrow\/array.pxi in pyarrow.lib.array()\r\n\r\n\/sfs\/MNBVC\/venv\/lib64\/python3.6\/site-packages\/pyarrow\/array.pxi in pyarrow.lib._handle_arrow_array_protocol()\r\n\r\n\/sfs\/MNBVC\/venv\/lib64\/python3.6\/site-packages\/datasets\/arrow_writer.py in __arrow_array__(self, type)\r\n 180 else:\r\n 181 trying_cast_to_python_objects = True\r\n--> 182 out = pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))\r\n 183 # use smaller integer precisions if possible\r\n 184 if self.trying_int_optimization:\r\n\r\n\/sfs\/MNBVC\/venv\/lib64\/python3.6\/site-packages\/pyarrow\/array.pxi in pyarrow.lib.array()\r\n\r\n\/sfs\/MNBVC\/venv\/lib64\/python3.6\/site-packages\/pyarrow\/array.pxi in pyarrow.lib._sequence_to_array()\r\n\r\n\/sfs\/MNBVC\/venv\/lib64\/python3.6\/site-packages\/pyarrow\/error.pxi in pyarrow.lib.pyarrow_internal_check_status()\r\n\r\nOverflowError: Python int too large to convert to C long\r\n```\r\n\r\nHowever, that dataset can be loaded in a streaming manner:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"liwu\/MNBVC\", 'news_peoples_daily', split='train', streaming=True)\r\n\r\nfor i in dataset:\r\n    pass # it works well\r\n```
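\r\n\r\nInspecting the first streamed record shows why (a quick sanity check; per the discussion above, `simhash` is the offending column, and streaming skips the Arrow conversion entirely):\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"liwu\/MNBVC\", 'news_peoples_daily', split='train', streaming=True)\r\nfirst = next(iter(dataset))\r\n# the oversized hash arrives as a plain Python int; pa.array() is never called\r\nprint(type(first[\"simhash\"]))  # <class 'int'>\r\n```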
gigawords","user":{"login":"xipq","id":115634163,"node_id":"U_kgDOBuRv8w","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/115634163?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/xipq","html_url":"https:\/\/github.com\/xipq","followers_url":"https:\/\/api.github.com\/users\/xipq\/followers","following_url":"https:\/\/api.github.com\/users\/xipq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/xipq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/xipq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/xipq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/xipq\/orgs","repos_url":"https:\/\/api.github.com\/users\/xipq\/repos","events_url":"https:\/\/api.github.com\/users\/xipq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/xipq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["issue due to corrupted download files. resolved after cleaning download cache. sorry for any inconvinence."],"created_at":1688538221000,"updated_at":1688538662000,"closed_at":1688538661000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\ngot `NotADirectoryError` whtn loading gigawords dataset\n\n### Steps to reproduce the bug\n\nWhen running\r\n```\r\nimport datasets\r\ndatasets.load_dataset('gigaword')\r\n```\r\n\r\nGot the following exception:\r\n```bash\r\nTraceback (most recent call last): [0\/1862]\r\n File \"\/home\/x\/.conda\/envs\/dataproc\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 1629, in _prepare_split_single \r\n for key, record in generator: \r\n File \"\/home\/x\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/gigaword\/ea83a8b819190acac5f2dae011fad51dccf269a0604ec5dd24795b\r\n64efb424b6\/gigaword.py\", line 115, in _generate_examples \r\n with open(src_path, encoding=\"utf-8\") as f_d, open(tgt_path, encoding=\"utf-8\") as f_s:\r\n File \"\/home\/x\/.conda\/envs\/dataproc\/lib\/python3.8\/site-packages\/datasets\/streaming.py\", line 71, in wrapper\r\n return function(*args, use_auth_token=use_auth_token, **kwargs)\r\n File \"\/home\/x\/.conda\/envs\/dataproc\/lib\/python3.8\/site-packages\/datasets\/download\/streaming_download_manager.py\", line 493, in xope\r\nn \r\n return open(main_hop, mode, *args, **kwargs) \r\nNotADirectoryError: [Errno 20] Not a directory: '\/home\/x\/.cache\/huggingface\/datasets\/downloads\/6da52431bb5124d90cf51a0187d2dbee9046e\r\n89780c4be7599794a4f559048ec\/org_data\/train.src.txt'\r\n \r\nThe above exception was the direct cause of the following exception:\r\n \r\nTraceback (most recent call last): \r\n File \"gigaword.py\", line 38, in \r\n main() \r\n File \"gigaword.py\", line 35, in main \r\n train, dev, test = dataset.generate_k_shot_data(k=32, seed=seed, path=\"..\/data\/\")\r\n File \"\/home\/x\/MICL\/preprocess\/fewshot_gym_dataset.py\", line 199, in generate_k_shot_data \r\n dataset = self.load_dataset() \r\n File \"gigaword.py\", line 29, in load_dataset \r\n return datasets.load_dataset('gigaword') \r\n File \"\/home\/x\/.conda\/envs\/dataproc\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 1809, in load_dataset \r\n builder_instance.download_and_prepare( \r\n File \"\/home\/x\/.conda\/envs\/dataproc\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 909, in download_and_prepare\r\n 
"],"created_at":1688538221000,"updated_at":1688538662000,"closed_at":1688538661000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nGot `NotADirectoryError` when loading the gigaword dataset.\n\n### Steps to reproduce the bug\n\nWhen running\r\n```python\r\nimport datasets\r\ndatasets.load_dataset('gigaword')\r\n```\r\n\r\nGot the following exception:\r\n```bash\r\nTraceback (most recent call last): [0\/1862]\r\n File \"\/home\/x\/.conda\/envs\/dataproc\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 1629, in _prepare_split_single \r\n for key, record in generator: \r\n File \"\/home\/x\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/gigaword\/ea83a8b819190acac5f2dae011fad51dccf269a0604ec5dd24795b64efb424b6\/gigaword.py\", line 115, in _generate_examples \r\n with open(src_path, encoding=\"utf-8\") as f_d, open(tgt_path, encoding=\"utf-8\") as f_s:\r\n File \"\/home\/x\/.conda\/envs\/dataproc\/lib\/python3.8\/site-packages\/datasets\/streaming.py\", line 71, in wrapper\r\n return function(*args, use_auth_token=use_auth_token, **kwargs)\r\n File \"\/home\/x\/.conda\/envs\/dataproc\/lib\/python3.8\/site-packages\/datasets\/download\/streaming_download_manager.py\", line 493, in xopen \r\n return open(main_hop, mode, *args, **kwargs) \r\nNotADirectoryError: [Errno 20] Not a directory: '\/home\/x\/.cache\/huggingface\/datasets\/downloads\/6da52431bb5124d90cf51a0187d2dbee9046e89780c4be7599794a4f559048ec\/org_data\/train.src.txt'\r\n \r\nThe above exception was the direct cause of the following exception:\r\n \r\nTraceback (most recent call last): \r\n File \"gigaword.py\", line 38, in \r\n main() \r\n File \"gigaword.py\", line 35, in main \r\n train, dev, test = dataset.generate_k_shot_data(k=32, seed=seed, path=\"..\/data\/\")\r\n File \"\/home\/x\/MICL\/preprocess\/fewshot_gym_dataset.py\", line 199, in generate_k_shot_data \r\n dataset = self.load_dataset() \r\n File \"gigaword.py\", line 29, in load_dataset \r\n return datasets.load_dataset('gigaword') \r\n File \"\/home\/x\/.conda\/envs\/dataproc\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 1809, in load_dataset \r\n builder_instance.download_and_prepare( \r\n File \"\/home\/x\/.conda\/envs\/dataproc\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 909, in download_and_prepare\r\n self._download_and_prepare( \r\n File \"\/home\/x\/.conda\/envs\/dataproc\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 1670, in _download_and_prepare\r\n super()._download_and_prepare( \r\n File \"\/home\/x\/.conda\/envs\/dataproc\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 1004, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs) \r\n File \"\/home\/x\/.conda\/envs\/dataproc\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 1508, in _prepare_split\r\n for job_id, done, content in self._prepare_split_single( \r\n File \"\/home\/x\/.conda\/envs\/dataproc\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 1665, in _prepare_split_single \r\n raise DatasetGenerationError(\"An error occurred while generating the dataset\") from e\r\ndatasets.builder.DatasetGenerationError: An error occurred while generating the dataset\r\n```\r\n\n\n### Expected behavior\n\nDownload and process the dataset successfully.\n\n### Environment info\n\n- `datasets` version: 2.13.1\r\n- Platform: Linux-5.0.0-1032-azure-x86_64-with-glibc2.10\r\n- Python version: 3.8.0\r\n- Huggingface_hub version: 0.15.1\r\n- PyArrow version: 12.0.1\r\n- Pandas version: 2.0.3\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6006\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6006\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6005","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6005\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6005\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6005\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6005","id":1788103576,"node_id":"PR_kwDODunzps5UoJ91","number":6005,"title":"Drop Python 3.7 support","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n[Automated CML benchmark tables omitted]\n\nPyArrow==latest\n\n[Automated CML benchmark tables omitted]\n\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#04a36f9546484dceadb84a133c1a460281d018f8 \"CML watermark\")\n","_The documentation is not available anymore as the PR was closed or merged._","<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n[Automated CML benchmark tables omitted]\n\nPyArrow==latest\n\n[Automated CML benchmark tables omitted]\n\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#18da5adb22b2b403b8d8ae673192746d2ed7e9f9 \"CML watermark\")\n","<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n[Automated CML benchmark tables omitted]\n\nPyArrow==latest\n\n[Automated CML benchmark tables omitted]\n\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#8f57aae06bd325d76cb70cb774450f3a66f169cf \"CML watermark\")\n","<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n[Automated CML benchmark tables omitted]\n\nPyArrow==latest\n\n[Automated CML benchmark tables omitted]\n\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#c0498b47a00153d4730352b6595fc51ab054fb95 \"CML watermark\")\n","<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n[Automated CML benchmark tables omitted]\n\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.005942 \/ 0.011353 (-0.005411) | 0.003628 \/ 0.011008 (-0.007380) | 0.061975 \/ 0.038508 (0.023467) | 0.058331 \/ 0.023109 (0.035221) | 0.393277 \/ 0.275898 (0.117379) | 0.410740 \/ 0.323480 (0.087261) | 0.004546 \/ 0.007986 (-0.003440) | 0.002826 \/ 0.004328 (-0.001503) | 0.062216 \/ 0.004250 (0.057966) | 0.049801 \/ 0.037052 (0.012748) | 0.394070 \/ 0.258489 (0.135581) | 0.414407 \/ 0.293841 (0.120566) | 0.027161 \/ 0.128546 (-0.101385) | 0.007901 \/ 0.075646 (-0.067746) | 0.066778 \/ 0.419271 (-0.352493) | 0.041354 \/ 0.043533 (-0.002179) | 0.379432 \/ 0.255139 (0.124293) | 0.402966 \/ 0.283200 (0.119766) | 0.020279 \/ 0.141683 (-0.121404) | 1.416986 \/ 1.452155 (-0.035169) | 1.474335 \/ 1.492716 (-0.018382) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.226147 \/ 0.018006 (0.208140) | 0.404361 \/ 0.000490 (0.403871) | 0.000358 \/ 0.000200 (0.000158) | 0.000054 \/ 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025105 \/ 0.037411 (-0.012306) | 0.075849 \/ 0.014526 (0.061323) | 0.084781 \/ 0.176557 (-0.091775) | 0.137415 \/ 0.737135 (-0.599720) | 0.086288 \/ 0.296338 (-0.210051) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.445925 \/ 0.215209 (0.230716) | 4.453478 \/ 2.077655 (2.375823) | 2.419048 \/ 1.504120 (0.914928) | 2.246363 \/ 1.541195 (0.705168) | 2.304022 \/ 1.468490 (0.835532) | 0.499132 \/ 4.584777 (-4.085645) 
| 3.001336 \/ 3.745712 (-0.744376) | 2.902593 \/ 5.269862 (-2.367269) | 1.819843 \/ 4.565676 (-2.745834) | 0.057210 \/ 0.424275 (-0.367065) | 0.006338 \/ 0.007607 (-0.001269) | 0.523280 \/ 0.226044 (0.297236) | 5.235969 \/ 2.268929 (2.967040) | 2.897585 \/ 55.444624 (-52.547039) | 2.541586 \/ 6.876477 (-4.334891) | 2.564233 \/ 2.142072 (0.422160) | 0.584714 \/ 4.805227 (-4.220513) | 0.124611 \/ 6.500664 (-6.376053) | 0.061774 \/ 0.075469 (-0.013695) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.349799 \/ 1.841788 (-0.491988) | 18.225076 \/ 8.074308 (10.150768) | 13.781518 \/ 10.191392 (3.590126) | 0.130562 \/ 0.680424 (-0.549862) | 0.016434 \/ 0.534201 (-0.517767) | 0.331607 \/ 0.579283 (-0.247676) | 0.343456 \/ 0.434364 (-0.090908) | 0.380437 \/ 0.540337 (-0.159900) | 0.522793 \/ 1.386936 (-0.864143) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#f0a3dbbd2e7ace162346d95ec27db674e80c1e23 \"CML watermark\")\n","
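The benchmark cells above follow a `new / old (diff)` convention: the timing measured on the PR branch, the reference timing, and their difference (typically in seconds). A minimal sketch of how such a cell can be rendered, assuming plain floats; the helper name is illustrative and is not the actual CML benchmarking script:

```py
def format_benchmark_cell(new: float, old: float) -> str:
    """Render a benchmark comparison as 'new / old (diff)'."""
    return f"{new:.6f} / {old:.6f} ({new - old:.6f})"

# Reproduces the first cell of the benchmark_array_xd table above.
print(format_benchmark_cell(0.005858, 0.011353))  # 0.005858 / 0.011353 (-0.005495)
```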
<details><summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details><summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.013721 \/ 0.011353 (0.002368) | 0.005715 \/ 0.011008 (-0.005293) | 0.090116 \/ 0.038508 (0.051608) | 0.087185 \/ 0.023109 (0.064075) | 0.427813 \/ 0.275898 (0.151915) | 0.390614 \/ 0.323480 (0.067135) | 0.006976 \/ 0.007986 (-0.001009) | 0.004231 \/ 0.004328 (-0.000098) | 0.078320 \/ 0.004250 (0.074070) | 0.066235 \/ 0.037052 (0.029183) | 0.439904 \/ 0.258489 (0.181415) | 0.424119 \/ 0.293841 (0.130278) | 0.050362 \/ 0.128546 (-0.078184) | 0.014992 \/ 0.075646 (-0.060654) | 0.293519 \/ 0.419271 (-0.125753) | 0.066906 \/ 0.043533 (0.023373) | 0.449657 \/ 0.255139 (0.194518) | 0.393800 \/ 0.283200 (0.110600) | 0.032258 \/ 0.141683 (-0.109425) | 1.539534 \/ 1.452155 (0.087379) | 1.675292 \/ 1.492716 (0.182576) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.210515 \/ 0.018006 (0.192508) | 0.506817 \/ 0.000490 (0.506327) | 0.001938 \/ 0.000200 (0.001738) | 0.000118 \/ 0.000054 (0.000064) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.026019 \/ 0.037411 (-0.011393) | 0.080635 \/ 0.014526 (0.066109) | 0.103050 \/ 0.176557 (-0.073507) | 0.160597 \/ 0.737135 (-0.576538) | 0.095844 \/ 0.296338 (-0.200495) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.506359 \/ 0.215209 (0.291150) | 5.041586 \/ 2.077655 (2.963931) | 2.198288 \/ 1.504120 (0.694168) | 1.987544 \/ 1.541195 (0.446349) | 1.866790 \/ 1.468490 (0.398300) | 0.681642 \/ 4.584777 (-3.903135) | 
4.719306 \/ 3.745712 (0.973593) | 7.669869 \/ 5.269862 (2.400008) | 4.466082 \/ 4.565676 (-0.099595) | 0.092974 \/ 0.424275 (-0.331301) | 0.008196 \/ 0.007607 (0.000589) | 0.707656 \/ 0.226044 (0.481612) | 6.974507 \/ 2.268929 (4.705579) | 3.254206 \/ 55.444624 (-52.190418) | 2.499019 \/ 6.876477 (-4.377457) | 2.509089 \/ 2.142072 (0.367017) | 0.915952 \/ 4.805227 (-3.889276) | 0.192119 \/ 6.500664 (-6.308545) | 0.065473 \/ 0.075469 (-0.009996) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.309078 \/ 1.841788 (-0.532710) | 19.660348 \/ 8.074308 (11.586040) | 16.659582 \/ 10.191392 (6.468190) | 0.194315 \/ 0.680424 (-0.486109) | 0.027773 \/ 0.534201 (-0.506428) | 0.401241 \/ 0.579283 (-0.178042) | 0.515799 \/ 0.434364 (0.081435) | 0.488772 \/ 0.540337 (-0.051566) | 0.604790 \/ 1.386936 (-0.782146) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006823 \/ 0.011353 (-0.004530) | 0.003940 \/ 0.011008 (-0.007068) | 0.061533 \/ 0.038508 (0.023025) | 0.065241 \/ 0.023109 (0.042132) | 0.411790 \/ 0.275898 (0.135892) | 0.475720 \/ 0.323480 (0.152241) | 0.005376 \/ 0.007986 (-0.002609) | 0.003433 \/ 0.004328 (-0.000895) | 0.065703 \/ 0.004250 (0.061452) | 0.050736 \/ 0.037052 (0.013683) | 0.435890 \/ 0.258489 (0.177401) | 0.436698 \/ 0.293841 (0.142857) | 0.040357 \/ 0.128546 (-0.088189) | 0.011578 \/ 0.075646 (-0.064069) | 0.072831 \/ 0.419271 (-0.346440) | 0.055698 \/ 0.043533 (0.012165) | 0.408225 \/ 0.255139 (0.153086) | 0.439551 \/ 0.283200 (0.156352) | 0.030469 \/ 0.141683 (-0.111214) | 1.443866 \/ 1.452155 (-0.008289) | 1.502022 \/ 1.492716 (0.009306) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.290338 \/ 0.018006 (0.272332) | 0.540726 \/ 0.000490 (0.540236) | 0.003244 \/ 0.000200 (0.003044) | 0.000170 \/ 0.000054 (0.000116) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030865 \/ 0.037411 (-0.006547) | 0.090866 \/ 0.014526 (0.076340) | 0.106224 \/ 0.176557 (-0.070332) | 0.166583 \/ 0.737135 (-0.570553) | 0.104448 \/ 0.296338 (-0.191891) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.518025 \/ 0.215209 (0.302816) | 6.027065 \/ 2.077655 (3.949410) | 2.671840 \/ 1.504120 (1.167720) | 2.273949 \/ 1.541195 (0.732754) | 2.414892 \/ 1.468490 (0.946402) | 0.774318 \/ 4.584777 (-3.810459) | 
5.020364 \/ 3.745712 (1.274652) | 4.146927 \/ 5.269862 (-1.122934) | 2.584598 \/ 4.565676 (-1.981078) | 0.089519 \/ 0.424275 (-0.334756) | 0.009181 \/ 0.007607 (0.001574) | 0.654467 \/ 0.226044 (0.428423) | 6.421595 \/ 2.268929 (4.152666) | 3.091589 \/ 55.444624 (-52.353036) | 2.554798 \/ 6.876477 (-4.321679) | 2.441354 \/ 2.142072 (0.299282) | 0.943386 \/ 4.805227 (-3.861841) | 0.173641 \/ 6.500664 (-6.327023) | 0.072209 \/ 0.075469 (-0.003260) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.557147 \/ 1.841788 (-0.284641) | 19.980747 \/ 8.074308 (11.906439) | 17.816813 \/ 10.191392 (7.625421) | 0.212078 \/ 0.680424 (-0.468346) | 0.025435 \/ 0.534201 (-0.508766) | 0.396200 \/ 0.579283 (-0.183084) | 0.546249 \/ 0.434364 (0.111885) | 0.459632 \/ 0.540337 (-0.080705) | 0.616548 \/ 1.386936 (-0.770388) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#535e972a70a3d4f8490a7e1a77ac43d5a4ab2655 \"CML watermark\")\n"],"created_at":1688482957000,"updated_at":1688657561000,"closed_at":1688656963000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/6005","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6005","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6005.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6005.patch","merged_at":1688656963000},"body":"`hfh` and `transformers` have dropped Python 3.7 support, so we should do the same :).\r\n\r\n(Based on the stats, it seems less than 10% of the users use `datasets` with Python 3.7)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6005\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6005\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6004","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6004\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6004\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6004\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6004","id":1786636368,"node_id":"PR_kwDODunzps5UjN2h","number":6004,"title":"Misc 
improvements","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["
<details><summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details><summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006897 \/ 0.011353 (-0.004456) | 0.004207 \/ 0.011008 (-0.006802) | 0.104828 \/ 0.038508 (0.066320) | 0.048054 \/ 0.023109 (0.024945) | 0.373991 \/ 0.275898 (0.098093) | 0.426740 \/ 0.323480 (0.103260) | 0.005540 \/ 0.007986 (-0.002446) | 0.003531 \/ 0.004328 (-0.000797) | 0.079304 \/ 0.004250 (0.075053) | 0.066996 \/ 0.037052 (0.029944) | 0.370675 \/ 0.258489 (0.112186) | 0.414154 \/ 0.293841 (0.120313) | 0.031567 \/ 0.128546 (-0.096979) | 0.008843 \/ 0.075646 (-0.066803) | 0.357426 \/ 0.419271 (-0.061845) | 0.067040 \/ 0.043533 (0.023508) | 0.362384 \/ 0.255139 (0.107245) | 0.376056 \/ 0.283200 (0.092856) | 0.032985 \/ 0.141683 (-0.108697) | 1.560603 \/ 1.452155 (0.108448) | 1.619024 \/ 1.492716 (0.126308) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.229059 \/ 0.018006 (0.211053) | 0.440513 \/ 0.000490 (0.440023) | 0.004647 \/ 0.000200 (0.004447) | 0.000085 \/ 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029517 \/ 0.037411 (-0.007894) | 0.120974 \/ 0.014526 (0.106448) | 0.125070 \/ 0.176557 (-0.051486) | 0.184695 \/ 0.737135 (-0.552441) | 0.130244 \/ 0.296338 (-0.166095) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.436930 \/ 0.215209 (0.221721) | 4.356118 \/ 2.077655 (2.278463) | 2.049169 \/ 1.504120 (0.545049) | 1.842898 \/ 1.541195 (0.301703) | 1.918948 \/ 1.468490 (0.450458) | 0.553573 \/ 4.584777 (-4.031204) | 
3.883195 \/ 3.745712 (0.137483) | 3.209780 \/ 5.269862 (-2.060081) | 1.551707 \/ 4.565676 (-3.013970) | 0.068181 \/ 0.424275 (-0.356094) | 0.012370 \/ 0.007607 (0.004762) | 0.539899 \/ 0.226044 (0.313854) | 5.380008 \/ 2.268929 (3.111079) | 2.518178 \/ 55.444624 (-52.926446) | 2.174190 \/ 6.876477 (-4.702286) | 2.317812 \/ 2.142072 (0.175740) | 0.674154 \/ 4.805227 (-4.131073) | 0.149313 \/ 6.500664 (-6.351351) | 0.068297 \/ 0.075469 (-0.007172) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.261426 \/ 1.841788 (-0.580362) | 15.316378 \/ 8.074308 (7.242070) | 13.573512 \/ 10.191392 (3.382120) | 0.190022 \/ 0.680424 (-0.490401) | 0.018697 \/ 0.534201 (-0.515504) | 0.448122 \/ 0.579283 (-0.131161) | 0.435044 \/ 0.434364 (0.000681) | 0.550065 \/ 0.540337 (0.009728) | 0.653547 \/ 1.386936 (-0.733389) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007116 \/ 0.011353 (-0.004237) | 0.004375 \/ 0.011008 (-0.006633) | 0.081793 \/ 0.038508 (0.043285) | 0.047980 \/ 0.023109 (0.024871) | 0.392185 \/ 0.275898 (0.116287) | 0.462263 \/ 0.323480 (0.138783) | 0.005574 \/ 0.007986 (-0.002412) | 0.003552 \/ 0.004328 (-0.000776) | 0.080413 \/ 0.004250 (0.076162) | 0.065539 \/ 0.037052 (0.028487) | 0.413137 \/ 0.258489 (0.154648) | 0.467377 \/ 0.293841 (0.173536) | 0.034386 \/ 0.128546 (-0.094160) | 0.009183 \/ 0.075646 (-0.066464) | 0.087542 \/ 0.419271 (-0.331730) | 0.053954 \/ 0.043533 (0.010421) | 0.385096 \/ 0.255139 (0.129957) | 0.404900 \/ 0.283200 (0.121701) | 0.025908 \/ 0.141683 (-0.115775) | 1.550159 \/ 1.452155 (0.098005) | 1.598794 \/ 1.492716 (0.106078) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.246222 \/ 0.018006 (0.228216) | 0.441095 \/ 0.000490 (0.440605) | 0.006863 \/ 0.000200 (0.006663) | 0.000109 \/ 0.000054 (0.000055) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.032179 \/ 0.037411 (-0.005233) | 0.120112 \/ 0.014526 (0.105586) | 0.129326 \/ 0.176557 (-0.047230) | 0.184542 \/ 0.737135 (-0.552593) | 0.135038 \/ 0.296338 (-0.161300) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.459002 \/ 0.215209 (0.243793) | 4.580258 \/ 2.077655 (2.502604) | 2.296689 \/ 1.504120 (0.792569) | 2.104338 \/ 1.541195 (0.563143) | 2.182896 \/ 1.468490 (0.714406) | 0.546447 \/ 4.584777 (-4.038330) | 
3.854047 \/ 3.745712 (0.108335) | 1.873829 \/ 5.269862 (-3.396032) | 1.116484 \/ 4.565676 (-3.449193) | 0.067158 \/ 0.424275 (-0.357117) | 0.012035 \/ 0.007607 (0.004428) | 0.556642 \/ 0.226044 (0.330597) | 5.574436 \/ 2.268929 (3.305508) | 2.828223 \/ 55.444624 (-52.616402) | 2.519851 \/ 6.876477 (-4.356626) | 2.668594 \/ 2.142072 (0.526521) | 0.675989 \/ 4.805227 (-4.129238) | 0.146075 \/ 6.500664 (-6.354589) | 0.067788 \/ 0.075469 (-0.007681) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.345958 \/ 1.841788 (-0.495830) | 15.672748 \/ 8.074308 (7.598440) | 14.937583 \/ 10.191392 (4.746191) | 0.163479 \/ 0.680424 (-0.516945) | 0.018364 \/ 0.534201 (-0.515837) | 0.433296 \/ 0.579283 (-0.145987) | 0.432463 \/ 0.434364 (-0.001901) | 0.512000 \/ 0.540337 (-0.028338) | 0.619397 \/ 1.386936 (-0.767539) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#0832d48a07ed00b406271f4b4439e6d54ae38ebf \"CML watermark\")\n","_The documentation is not available anymore as the PR was closed or merged._","
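PR #6005 above drops Python 3.7 support to match `hfh` and `transformers`. A hedged sketch of how such a drop is usually expressed in packaging metadata; this is an illustrative excerpt only, and the exact contents of the `datasets` `setup.py` may differ:

```py
# Illustrative packaging excerpt, not the actual datasets setup.py.
from setuptools import setup

setup(
    name="datasets",
    python_requires=">=3.8.0",  # raised from ">=3.7.0" when 3.7 support is dropped
)
```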
<details><summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details><summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.010097 \/ 0.011353 (-0.001256) | 0.005070 \/ 0.011008 (-0.005939) | 0.118638 \/ 0.038508 (0.080130) | 0.043651 \/ 0.023109 (0.020542) | 0.356074 \/ 0.275898 (0.080176) | 0.414578 \/ 0.323480 (0.091098) | 0.005939 \/ 0.007986 (-0.002046) | 0.004927 \/ 0.004328 (0.000598) | 0.089545 \/ 0.004250 (0.085294) | 0.067533 \/ 0.037052 (0.030481) | 0.371550 \/ 0.258489 (0.113061) | 0.417808 \/ 0.293841 (0.123967) | 0.045186 \/ 0.128546 (-0.083361) | 0.015763 \/ 0.075646 (-0.059883) | 0.393304 \/ 0.419271 (-0.025967) | 0.065123 \/ 0.043533 (0.021591) | 0.345057 \/ 0.255139 (0.089918) | 0.378809 \/ 0.283200 (0.095610) | 0.033243 \/ 0.141683 (-0.108440) | 1.679956 \/ 1.452155 (0.227802) | 1.775456 \/ 1.492716 (0.282739) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.229723 \/ 0.018006 (0.211717) | 0.554630 \/ 0.000490 (0.554140) | 0.008729 \/ 0.000200 (0.008529) | 0.000183 \/ 0.000054 (0.000129) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.027284 \/ 0.037411 (-0.010128) | 0.114741 \/ 0.014526 (0.100215) | 0.129188 \/ 0.176557 (-0.047369) | 0.189270 \/ 0.737135 (-0.547866) | 0.126000 \/ 0.296338 (-0.170339) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.580417 \/ 0.215209 (0.365208) | 5.829337 \/ 2.077655 (3.751683) | 2.421191 \/ 1.504120 (0.917071) | 2.063673 \/ 1.541195 (0.522479) | 2.133427 \/ 1.468490 (0.664937) | 0.830964 \/ 4.584777 (-3.753813) | 
5.107139 \/ 3.745712 (1.361427) | 4.599451 \/ 5.269862 (-0.670410) | 2.406502 \/ 4.565676 (-2.159175) | 0.100422 \/ 0.424275 (-0.323853) | 0.011850 \/ 0.007607 (0.004243) | 0.741881 \/ 0.226044 (0.515836) | 7.425689 \/ 2.268929 (5.156760) | 3.068948 \/ 55.444624 (-52.375676) | 2.496292 \/ 6.876477 (-4.380184) | 2.566420 \/ 2.142072 (0.424348) | 1.093084 \/ 4.805227 (-3.712144) | 0.224106 \/ 6.500664 (-6.276558) | 0.084549 \/ 0.075469 (0.009080) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.416315 \/ 1.841788 (-0.425473) | 16.306901 \/ 8.074308 (8.232593) | 19.792419 \/ 10.191392 (9.601027) | 0.224223 \/ 0.680424 (-0.456201) | 0.026385 \/ 0.534201 (-0.507816) | 0.463460 \/ 0.579283 (-0.115823) | 0.598385 \/ 0.434364 (0.164021) | 0.543981 \/ 0.540337 (0.003644) | 0.647454 \/ 1.386936 (-0.739482) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009470 \/ 0.011353 (-0.001883) | 0.004800 \/ 0.011008 (-0.006208) | 0.094276 \/ 0.038508 (0.055768) | 0.045157 \/ 0.023109 (0.022048) | 0.397302 \/ 0.275898 (0.121404) | 0.474213 \/ 0.323480 (0.150733) | 0.005826 \/ 0.007986 (-0.002160) | 0.003724 \/ 0.004328 (-0.000605) | 0.090060 \/ 0.004250 (0.085809) | 0.066671 \/ 0.037052 (0.029618) | 0.439560 \/ 0.258489 (0.181071) | 0.468598 \/ 0.293841 (0.174757) | 0.044549 \/ 0.128546 (-0.083997) | 0.014000 \/ 0.075646 (-0.061646) | 0.110457 \/ 0.419271 (-0.308815) | 0.065898 \/ 0.043533 (0.022365) | 0.408101 \/ 0.255139 (0.152962) | 0.433473 \/ 0.283200 (0.150273) | 0.038438 \/ 0.141683 (-0.103245) | 1.767781 \/ 1.452155 (0.315626) | 1.791575 \/ 1.492716 (0.298859) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.230257 \/ 0.018006 (0.212251) | 0.492280 \/ 0.000490 (0.491790) | 0.005110 \/ 0.000200 (0.004910) | 0.000119 \/ 0.000054 (0.000065) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.028854 \/ 0.037411 (-0.008557) | 0.111702 \/ 0.014526 (0.097176) | 0.122040 \/ 0.176557 (-0.054517) | 0.179103 \/ 0.737135 (-0.558032) | 0.128869 \/ 0.296338 (-0.167470) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.634795 \/ 0.215209 (0.419586) | 6.204760 \/ 2.077655 (4.127105) | 2.692479 \/ 1.504120 (1.188359) | 2.324260 \/ 1.541195 (0.783066) | 2.380640 \/ 1.468490 (0.912149) | 0.887827 \/ 4.584777 (-3.696950) | 
5.251648 \/ 3.745712 (1.505935) | 2.632767 \/ 5.269862 (-2.637095) | 1.745721 \/ 4.565676 (-2.819955) | 0.108364 \/ 0.424275 (-0.315911) | 0.013409 \/ 0.007607 (0.005802) | 0.783427 \/ 0.226044 (0.557383) | 7.765144 \/ 2.268929 (5.496216) | 3.340686 \/ 55.444624 (-52.103938) | 2.715340 \/ 6.876477 (-4.161137) | 2.768604 \/ 2.142072 (0.626531) | 1.119746 \/ 4.805227 (-3.685481) | 0.210804 \/ 6.500664 (-6.289860) | 0.072600 \/ 0.075469 (-0.002869) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.517334 \/ 1.841788 (-0.324454) | 17.046837 \/ 8.074308 (8.972529) | 19.371090 \/ 10.191392 (9.179698) | 0.194275 \/ 0.680424 (-0.486148) | 0.026712 \/ 0.534201 (-0.507488) | 0.462731 \/ 0.579283 (-0.116552) | 0.568958 \/ 0.434364 (0.134595) | 0.555707 \/ 0.540337 (0.015370) | 0.663654 \/ 1.386936 (-0.723283) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#5d20476b1d4c8e11e0ffafc1570cbf4bd19011cf \"CML watermark\")\n","
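Rows such as `map no-op batched` in the `benchmark_map_filter` tables time variants of `Dataset.map`. A minimal illustration of the batched no-op pattern the row name refers to, with invented toy data; the benchmark harness itself is not shown here:

```py
from datasets import Dataset

ds = Dataset.from_dict({"text": ["hello world"] * 1000})
# "map no-op batched": an identity function applied over batches of rows.
ds = ds.map(lambda batch: batch, batched=True)
```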
<details><summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details><summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006423 \/ 0.011353 (-0.004930) | 0.003882 \/ 0.011008 (-0.007126) | 0.082976 \/ 0.038508 (0.044468) | 0.071281 \/ 0.023109 (0.048171) | 0.311367 \/ 0.275898 (0.035469) | 0.348228 \/ 0.323480 (0.024748) | 0.005315 \/ 0.007986 (-0.002671) | 0.003326 \/ 0.004328 (-0.001003) | 0.064641 \/ 0.004250 (0.060391) | 0.056134 \/ 0.037052 (0.019081) | 0.314071 \/ 0.258489 (0.055582) | 0.360534 \/ 0.293841 (0.066693) | 0.030642 \/ 0.128546 (-0.097904) | 0.008301 \/ 0.075646 (-0.067345) | 0.285820 \/ 0.419271 (-0.133451) | 0.069241 \/ 0.043533 (0.025708) | 0.313995 \/ 0.255139 (0.058856) | 0.336656 \/ 0.283200 (0.053457) | 0.031686 \/ 0.141683 (-0.109997) | 1.467627 \/ 1.452155 (0.015472) | 1.536493 \/ 1.492716 (0.043777) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.196518 \/ 0.018006 (0.178512) | 0.458235 \/ 0.000490 (0.457745) | 0.005599 \/ 0.000200 (0.005399) | 0.000088 \/ 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.027371 \/ 0.037411 (-0.010040) | 0.080986 \/ 0.014526 (0.066460) | 0.093296 \/ 0.176557 (-0.083260) | 0.150592 \/ 0.737135 (-0.586543) | 0.094150 \/ 0.296338 (-0.202188) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.379412 \/ 0.215209 (0.164202) | 3.797927 \/ 2.077655 (1.720272) | 1.830654 \/ 1.504120 (0.326534) | 1.669569 \/ 1.541195 (0.128374) | 1.746738 \/ 1.468490 (0.278248) | 0.479536 \/ 4.584777 (-4.105241) | 
3.592867 \/ 3.745712 (-0.152845) | 5.468098 \/ 5.269862 (0.198237) | 3.268013 \/ 4.565676 (-1.297663) | 0.056635 \/ 0.424275 (-0.367640) | 0.007224 \/ 0.007607 (-0.000383) | 0.456681 \/ 0.226044 (0.230636) | 4.566736 \/ 2.268929 (2.297807) | 2.362831 \/ 55.444624 (-53.081793) | 1.965141 \/ 6.876477 (-4.911336) | 2.156905 \/ 2.142072 (0.014833) | 0.572543 \/ 4.805227 (-4.232684) | 0.132203 \/ 6.500664 (-6.368461) | 0.059254 \/ 0.075469 (-0.016215) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.256134 \/ 1.841788 (-0.585654) | 19.905438 \/ 8.074308 (11.831130) | 14.179556 \/ 10.191392 (3.988164) | 0.168043 \/ 0.680424 (-0.512381) | 0.018215 \/ 0.534201 (-0.515986) | 0.392740 \/ 0.579283 (-0.186543) | 0.398397 \/ 0.434364 (-0.035967) | 0.463806 \/ 0.540337 (-0.076531) | 0.616248 \/ 1.386936 (-0.770688) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006564 \/ 0.011353 (-0.004789) | 0.003923 \/ 0.011008 (-0.007085) | 0.063929 \/ 0.038508 (0.025421) | 0.073780 \/ 0.023109 (0.050671) | 0.360242 \/ 0.275898 (0.084344) | 0.395078 \/ 0.323480 (0.071598) | 0.005265 \/ 0.007986 (-0.002720) | 0.003229 \/ 0.004328 (-0.001100) | 0.064094 \/ 0.004250 (0.059843) | 0.057468 \/ 0.037052 (0.020416) | 0.369530 \/ 0.258489 (0.111041) | 0.411159 \/ 0.293841 (0.117318) | 0.031278 \/ 0.128546 (-0.097268) | 0.008424 \/ 0.075646 (-0.067222) | 0.070411 \/ 0.419271 (-0.348860) | 0.048714 \/ 0.043533 (0.005181) | 0.361280 \/ 0.255139 (0.106141) | 0.382468 \/ 0.283200 (0.099269) | 0.023059 \/ 0.141683 (-0.118624) | 1.452369 \/ 1.452155 (0.000215) | 1.519192 \/ 1.492716 (0.026475) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.223745 \/ 0.018006 (0.205739) | 0.442086 \/ 0.000490 (0.441596) | 0.000379 \/ 0.000200 (0.000179) | 0.000055 \/ 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030919 \/ 0.037411 (-0.006493) | 0.088483 \/ 0.014526 (0.073958) | 0.101165 \/ 0.176557 (-0.075391) | 0.154332 \/ 0.737135 (-0.582804) | 0.103030 \/ 0.296338 (-0.193309) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.414520 \/ 0.215209 (0.199311) | 4.126754 \/ 2.077655 (2.049099) | 2.142677 \/ 1.504120 (0.638557) | 1.995300 \/ 1.541195 (0.454106) | 2.101678 \/ 1.468490 (0.633188) | 0.481099 \/ 4.584777 (-4.103678) | 
3.562813 \/ 3.745712 (-0.182900) | 3.392463 \/ 5.269862 (-1.877399) | 1.983943 \/ 4.565676 (-2.581734) | 0.056594 \/ 0.424275 (-0.367681) | 0.007216 \/ 0.007607 (-0.000391) | 0.495085 \/ 0.226044 (0.269041) | 4.955640 \/ 2.268929 (2.686712) | 2.629434 \/ 55.444624 (-52.815191) | 2.269577 \/ 6.876477 (-4.606900) | 2.357708 \/ 2.142072 (0.215635) | 0.612370 \/ 4.805227 (-4.192857) | 0.131169 \/ 6.500664 (-6.369495) | 0.061029 \/ 0.075469 (-0.014440) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.339438 \/ 1.841788 (-0.502350) | 19.757611 \/ 8.074308 (11.683303) | 14.246254 \/ 10.191392 (4.054862) | 0.170750 \/ 0.680424 (-0.509674) | 0.018192 \/ 0.534201 (-0.516009) | 0.395693 \/ 0.579283 (-0.183590) | 0.411003 \/ 0.434364 (-0.023361) | 0.478531 \/ 0.540337 (-0.061806) | 0.650291 \/ 1.386936 (-0.736645) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#3e34d06d746688dd5d26e4c85517b7e1a2f361ca \"CML watermark\")\n"],"created_at":1688408954000,"updated_at":1688663051000,"closed_at":1688662525000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/6004","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6004","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6004.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6004.patch","merged_at":1688662525000},"body":"Contains the following improvements:\r\n\r\n* fixes a \"share dataset\" link in README and modifies the \"hosting\" part in the disclaimer section\r\n* updates `Makefile` to also run the style checks on `utils` and `setup.py`\r\n* deletes a test for GH-hosted datasets (no longer supported)\r\n* deletes `convert_dataset.sh` (outdated)\r\n* aligns `utils\/release.py` with `transformers` (the current version is outdated)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6004\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6004\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6003","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6003\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6003\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6003\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/6003","id":1786554110,"node_id":"I_kwDODunzps5qfKb-","number":6003,"title":"interleave_datasets & DataCollatorForLanguageModeling having a conflict 
?","user":{"login":"PonteIneptique","id":1929830,"node_id":"MDQ6VXNlcjE5Mjk4MzA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1929830?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/PonteIneptique","html_url":"https:\/\/github.com\/PonteIneptique","followers_url":"https:\/\/api.github.com\/users\/PonteIneptique\/followers","following_url":"https:\/\/api.github.com\/users\/PonteIneptique\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/PonteIneptique\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/PonteIneptique\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/PonteIneptique\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/PonteIneptique\/orgs","repos_url":"https:\/\/api.github.com\/users\/PonteIneptique\/repos","events_url":"https:\/\/api.github.com\/users\/PonteIneptique\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/PonteIneptique\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1688404531000,"updated_at":1688404531000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nHi everyone :)\r\n\r\nI have two local & custom datasets (1 \"sentence\" per line) which I split along the 95\/5 lines for pre-training a Bert model. I use a modified version of `run_mlm.py` in order to be able to make use of `interleave_dataset`:\r\n\r\n- `tokenize()` runs fine\r\n- `group_text()` runs fine\r\n\r\nEverytime, on step 19, I get \r\n\r\n```pytb\r\n File \"env\/lib\/python3.9\/site-packages\/transformers\/data\/data_collator.py\", line 779, in torch_mask_tokens\r\n inputs[indices_random] = random_words[indices_random]\r\nRuntimeError: Index put requires the source and destination dtypes match, got Float for the destination and Long for the source.\r\n```\r\n\r\nI tried:\r\n- training without interleave on dataset 1, it runs\r\n- training without interleave on dataset 2, it runs\r\n- training without `.to_iterable_dataset()`, it hangs then crash\r\n- training without group_text() and padding to max_length seemed to fix the issue, but who knows if this was just because it was an issue that would come much later in terms of steps.\r\n\r\nI might have coded something wrong, but I don't get what \n\n### Steps to reproduce the bug\n\nI have this function:\r\n\r\n```py\r\ndef build_dataset(path: str, percent: str):\r\n dataset = load_dataset(\r\n \"text\",\r\n data_files={\"train\": [path]},\r\n split=f\"train[{percent}]\"\r\n )\r\n dataset = dataset.map(\r\n lambda examples: tokenize(examples[\"text\"]),\r\n batched=True,\r\n num_proc=num_proc,\r\n )\r\n\r\n dataset = dataset.map(\r\n group_texts,\r\n batched=True,\r\n num_proc=num_proc,\r\n desc=f\"Grouping texts in chunks of {tokenizer.max_seq_length}\",\r\n remove_columns=[\"text\"]\r\n )\r\n\r\n print(len(dataset))\r\n return dataset.to_iterable_dataset()\r\n```\r\n\r\nI hardcoded group_text:\r\n```py\r\n def group_texts(examples):\r\n # Concatenate all texts.\r\n concatenated_examples = {k: list(chain(*examples[k])) for k in examples.keys()}\r\n total_length = len(concatenated_examples[list(examples.keys())[0]])\r\n # We drop the small remainder, and if the total_length < max_seq_length we exclude this batch and return an empty dict.\r\n # We could add padding if the model supported it instead of this drop, 
you can customize this part to your needs.\r\n total_length = (total_length \/\/ 512) * 512\r\n # Split by chunks of max_len.\r\n result = {\r\n k: [t[i: i + 512] for i in range(0, total_length, 512)]\r\n for k, t in concatenated_examples.items()\r\n }\r\n # result = {k: [el for el in elements if el] for k, elements in result.items()}\r\n return result\r\n```\r\n\r\nAnd then I build datasets using the following code:\r\n\r\n```py\r\ntrain1 = build_dataset(\"d1.txt\", \":95%\")\r\ntrain2 = build_dataset(\"d2.txt\", \":95%\")\r\ndev1 = build_dataset(\"d1.txt\", \"95%:\")\r\ndev2 = build_dataset(\"d2.txt\", \"95%:\")\r\n```\r\n\r\nand finally I run\r\n```py\r\ntrain_dataset = interleave_datasets(\r\n [train1, train2],\r\n probabilities=[0.8, 0.2],\r\n seed=42\r\n)\r\neval_dataset = interleave_datasets(\r\n [dev1, dev2],\r\n probabilities=[0.8, 0.2],\r\n seed=42\r\n)\r\n```\r\n\r\nThen I run the training part which remains mostly untouched:\r\n\r\n> CUDA_VISIBLE_DEVICES=1 python custom_dataset.py --model_type bert --per_device_train_batch_size 32 --do_train --output_dir \/var\/mlm\/training-bert\/model --max_seq_length 512 --save_steps 10000 --save_total_limit 3 --auto_find_batch_size --logging_dir .\/logs-bert --learning_rate 0.0001 --do_train --num_train_epochs 25 --warmup_steps 10000 --max_step 45000 --fp16\n\n### Expected behavior\n\nThe model should then train normally, but fails every time at the same step (19).\r\n\r\nprinting the variables at `inputs[indices_random] = random_words[indices_random]` shows a magnificient empty tensor (, 32) [if I remember well]\n\n### Environment info\n\ntransformers[torch] 4.30.2\r\nUbuntu\r\nA100 0 CUDA 12\r\nDriver Version: 525.116.04","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6003\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6003\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6002","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6002\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6002\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6002\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6002","id":1786053060,"node_id":"PR_kwDODunzps5UhP-Z","number":6002,"title":"Add KLUE-MRC 
metrics","user":{"login":"ingyuseong","id":37537248,"node_id":"MDQ6VXNlcjM3NTM3MjQ4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/37537248?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ingyuseong","html_url":"https:\/\/github.com\/ingyuseong","followers_url":"https:\/\/api.github.com\/users\/ingyuseong\/followers","following_url":"https:\/\/api.github.com\/users\/ingyuseong\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ingyuseong\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ingyuseong\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ingyuseong\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ingyuseong\/orgs","repos_url":"https:\/\/api.github.com\/users\/ingyuseong\/repos","events_url":"https:\/\/api.github.com\/users\/ingyuseong\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ingyuseong\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The metrics API in `datasets` is deprecated as of version 2.0, and `evaulate` is our new library for metrics. You can add a new metric to it by following [these steps](https:\/\/huggingface.co\/docs\/evaluate\/creating_and_sharing)."],"created_at":1688386270000,"updated_at":1688903840000,"closed_at":1688903840000,"author_association":"NONE","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/6002","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6002","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6002.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6002.patch","merged_at":null},"body":"## Metrics for KLUE-MRC (Korean Language Understanding Evaluation \u2014 Machine Reading Comprehension)\r\n\r\nAdding metrics for [KLUE-MRC](https:\/\/huggingface.co\/datasets\/klue).\r\nKLUE-MRC is very similar to SQuAD 2.0 but has a slightly different format which is why I added metrics for KLUE-MRC.\r\n\r\nSpecifically, in the case of [LM Eval Harness](https:\/\/github.com\/EleutherAI\/lm-evaluation-harness), it leverages the scoring script of SQuAD to evaluate SQuAD 2.0 and KorQuAD. But the script isn't suitable for KLUE-MRC because KLUE-MRC is a bit different from SQuAD 2.0. 
And this is why I added the scoring script for KLUE-MRC.\r\n\r\n- [x] All tests passed\r\n- [x] Added a metric card (referred the metric card of SQuAD 2.0)\r\n- [x] Compatibility test with [LM Eval Harness](https:\/\/github.com\/EleutherAI\/lm-evaluation-harness) passed\r\n\r\n### References\r\n- [KLUE: Korean Language Understanding Evaluation](https:\/\/datasets-benchmarks-proceedings.neurips.cc\/paper_files\/paper\/2021\/file\/98dce83da57b0395e163467c9dae521b-Paper-round2.pdf)\r\n- [KLUE on Hugging Face Datasets](https:\/\/huggingface.co\/datasets\/klue)\r\n- #2416","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6002\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6002\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6001","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6001\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6001\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6001\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6001","id":1782516627,"node_id":"PR_kwDODunzps5UVMMh","number":6001,"title":"Align `column_names` type check with type hint in `sort`","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
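Since the `datasets` metrics API is deprecated in favor of `evaluate`, a KLUE-MRC metric would be consumed through `evaluate.load` once published there. A sketch using the existing `squad_v2` metric, which the PR cites as KLUE-MRC's closest relative; a `"klue_mrc"` identifier does not exist at this point, and the sample answers are invented for illustration:

```py
import evaluate

# squad_v2 is the published metric KLUE-MRC is compared against in the PR;
# a future KLUE-MRC metric would be loaded the same way, e.g. evaluate.load("klue_mrc").
metric = evaluate.load("squad_v2")
predictions = [{"id": "1", "prediction_text": "Seoul", "no_answer_probability": 0.0}]
references = [{"id": "1", "answers": {"text": ["Seoul"], "answer_start": [0]}}]
print(metric.compute(predictions=predictions, references=references))
```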
[Automated CML benchmark comment (PyArrow==8.0.0 and PyArrow==latest result tables for commit a62b6ce65f718e9ff4189da86d160ae4bb197fc2) omitted.]","
[Automated CML benchmark comment (PyArrow==8.0.0 and PyArrow==latest result tables for commit 8b9649b3cfb49342e44873ce7e29e0c75eaf3efa) omitted.]"],"created_at":1688130950000,"updated_at":1688134712000,"closed_at":1688134284000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/6001","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6001","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6001.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6001.patch","merged_at":1688134284000},"body":"Fix #5998 ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6001\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6001\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6000","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6000\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6000\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6000\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6000","id":1782456878,"node_id":"PR_kwDODunzps5UU_FB","number":6000,"title":"Pin `joblib` to avoid `joblibspark` test 
failures","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["
[Automated CML benchmark comment (PyArrow==8.0.0 and PyArrow==latest result tables for commit 25ac13d8ab23e7d99252ce083a45e8333b6bbcdc) omitted.]","_The documentation is not available anymore as the PR was closed or merged._","
[Automated CML benchmark comment (PyArrow==8.0.0 and PyArrow==latest result tables for commit a293ceb5aa41c4ae265c0e2aa9ada2d544466121) omitted.]","
[Automated CML benchmark comment (PyArrow==8.0.0 and PyArrow==latest result tables for commit e9aee64766aaddfda60a735cfc93345aed64bdcf) omitted.]"],"created_at":1688128614000,"updated_at":1688131025000,"closed_at":1688130507000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/6000","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6000","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6000.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/6000.patch","merged_at":1688130507000},"body":"`joblibspark` doesn't support the latest `joblib` release.\r\n\r\nSee https:\/\/github.com\/huggingface\/datasets\/actions\/runs\/5401870932\/jobs\/9812337078 for the errors","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6000\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6000\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5999","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5999\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5999\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5999\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5999","id":1781851513,"node_id":"I_kwDODunzps5qNOV5","number":5999,"title":"Getting a 409 error while loading xglue 
dataset","user":{"login":"Praful932","id":45713796,"node_id":"MDQ6VXNlcjQ1NzEzNzk2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/45713796?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Praful932","html_url":"https:\/\/github.com\/Praful932","followers_url":"https:\/\/api.github.com\/users\/Praful932\/followers","following_url":"https:\/\/api.github.com\/users\/Praful932\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Praful932\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Praful932\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Praful932\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Praful932\/orgs","repos_url":"https:\/\/api.github.com\/users\/Praful932\/repos","events_url":"https:\/\/api.github.com\/users\/Praful932\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Praful932\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462.0,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting, @Praful932.\r\n\r\nLet's continue the conversation on the Hub: https:\/\/huggingface.co\/datasets\/xglue\/discussions\/5"],"created_at":1688098434000,"updated_at":1688104643000,"closed_at":1688104642000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nUnable to 
load xglue dataset\n\n### Steps to reproduce the bug\n\n```python\r\nimport datasets\r\n\r\ndataset = datasets.load_dataset(\"xglue\", \"ntg\")\r\n```\r\n\r\n> ConnectionError: Couldn't reach https:\/\/xglue.blob.core.windows.net\/xglue\/xglue_full_dataset.tar.gz (error 409)\n\n### Expected behavior\n\nExpected the dataset to load\n\n### Environment info\n\n- `datasets` version: 2.13.1\r\n- Platform: Linux-5.15.107+-x86_64-with-glibc2.31\r\n- Python version: 3.10.12\r\n- Huggingface_hub version: 0.15.1\r\n- PyArrow version: 9.0.0\r\n- Pandas version: 1.5.3","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5999\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5999\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5998","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5998\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5998\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5998\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5998","id":1781805018,"node_id":"I_kwDODunzps5qNC_a","number":5998,"title":"The current implementation has a potential bug in the sort method","user":{"login":"wangyuxinwhy","id":22192665,"node_id":"MDQ6VXNlcjIyMTkyNjY1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22192665?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/wangyuxinwhy","html_url":"https:\/\/github.com\/wangyuxinwhy","followers_url":"https:\/\/api.github.com\/users\/wangyuxinwhy\/followers","following_url":"https:\/\/api.github.com\/users\/wangyuxinwhy\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/wangyuxinwhy\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/wangyuxinwhy\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/wangyuxinwhy\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/wangyuxinwhy\/orgs","repos_url":"https:\/\/api.github.com\/users\/wangyuxinwhy\/repos","events_url":"https:\/\/api.github.com\/users\/wangyuxinwhy\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/wangyuxinwhy\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting, @wangyuxinwhy. 
"],"created_at":1688095017000,"updated_at":1688134863000,"closed_at":1688134285000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\r\n\r\nIn the sort method\uff0chere's a piece of code\r\n\r\n```python\r\n# column_names: Union[str, Sequence_[str]]\r\n\r\n# Check proper format of and for duplicates in column_names\r\nif not isinstance(column_names, list):\r\n column_names = [column_names]\r\n```\r\n\r\nI get an error when I pass in a tuple based on the column_names type annotation, it will raise an errror.As in the example below, while the type annotation implies that a tuple can be passed.\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset('glue', 'ax')['test']\r\ndataset.sort(column_names=('premise', 'hypothesis'))\r\n# Raise ValueError: Column '('premise', 'hypothesis')' not found in the dataset.\r\n```\r\n\r\nOf course, after I modified the tuple into a list, everything worked fine\r\n\r\nChange the code to the following so there will be no problem\r\n\r\n```python\r\n# Check proper format of and for duplicates in column_names\r\nif not isinstance(column_names, list):\r\n if isinstance(column_names, str):\r\n column_names = [column_names]\r\n else:\r\n column_names = list(column_names)\r\n```\r\n\r\n### Steps to reproduce the bug\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset('glue', 'ax')['test']\r\ndataset.sort(column_names=('premise', 'hypothesis'))\r\n# Raise ValueError: Column '('premise', 'hypothesis')' not found in the dataset.\r\n```\r\n\r\n### Expected behavior\r\n\r\nPassing tuple into column_names should be equivalent to passing list\r\n\r\n### Environment info\r\n\r\n- `datasets` version: 2.13.0\r\n- Platform: macOS-13.1-arm64-arm-64bit\r\n- Python version: 3.10.11\r\n- Huggingface_hub version: 0.15.1\r\n- PyArrow version: 12.0.1\r\n- Pandas version: 2.0.2","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5998\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5998\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5997","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5997\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5997\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5997\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5997","id":1781582818,"node_id":"I_kwDODunzps5qMMvi","number":5997,"title":"extend the map function so it can wrap around long text that does not fit in the context 
window","user":{"login":"siddhsql","id":127623723,"node_id":"U_kgDOB5tiKw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/127623723?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/siddhsql","html_url":"https:\/\/github.com\/siddhsql","followers_url":"https:\/\/api.github.com\/users\/siddhsql\/followers","following_url":"https:\/\/api.github.com\/users\/siddhsql\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/siddhsql\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/siddhsql\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/siddhsql\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/siddhsql\/orgs","repos_url":"https:\/\/api.github.com\/users\/siddhsql\/repos","events_url":"https:\/\/api.github.com\/users\/siddhsql\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/siddhsql\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I just noticed the [docs](https:\/\/github.com\/huggingface\/datasets\/blob\/main\/src\/datasets\/arrow_dataset.py#L2881C11-L2881C200) say:\r\n\r\n>If batched is `True` and `batch_size` is `n > 1`, then the function takes a batch of `n` examples as input and can return a batch with `n` examples, or with an arbitrary number of examples.\r\n\r\nso maybe this is a bug then.","All the values in a batch must be of the same length. So one solution is dropping all the input columns:\r\n```python\r\ndata = data.map(lambda samples: tokenizer(samples[\"text\"], max_length=tokenizer.model_max_length, truncation=True, stride=4, return_overflowing_tokens=True), batched=True, remove_columns=data.column_names)\r\n```\r\n\r\nAnother is padding\/transforming the input columns to the tokenizer output's length (447). "],"created_at":1688076921000,"updated_at":1688407132000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Feature request\n\nI understand `dataset` provides a [`map`](https:\/\/github.com\/huggingface\/datasets\/blob\/main\/src\/datasets\/arrow_dataset.py#L2849) function. This function in turn takes in a callable that is used to tokenize the text on which a model is trained. Frequently this text will not fit within a models's context window. In this case it would be useful to wrap around the text into multiple rows with each row fitting the model's context window. 
I tried to do it using this code as an example, which in turn I borrowed from [here](https:\/\/stackoverflow.com\/a\/76343993\/147530):\r\n\r\n```python\r\ndata = data.map(lambda samples: tokenizer(samples[\"text\"], max_length=tokenizer.model_max_length, truncation=True, stride=4, return_overflowing_tokens=True), batched=True)\r\n```\r\n\r\nbut running the code gives me this error:\r\n\r\n```\r\nFile \"\/llm\/fine-tune.py\", line 117, in <module>\r\n data = data.map(lambda samples: tokenizer(samples[\"text\"], max_length=tokenizer.model_max_length, truncation=True, stride=4, return_overflowing_tokens=True), batched=True)\r\n File \"\/llm\/.env\/lib\/python3.9\/site-packages\/datasets\/arrow_dataset.py\", line 580, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"\/llm\/.env\/lib\/python3.9\/site-packages\/datasets\/arrow_dataset.py\", line 545, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"\/llm\/.env\/lib\/python3.9\/site-packages\/datasets\/arrow_dataset.py\", line 3087, in map\r\n for rank, done, content in Dataset._map_single(**dataset_kwargs):\r\n File \"\/llm\/.env\/lib\/python3.9\/site-packages\/datasets\/arrow_dataset.py\", line 3480, in _map_single\r\n writer.write_batch(batch)\r\n File \"\/llm\/.env\/lib\/python3.9\/site-packages\/datasets\/arrow_writer.py\", line 556, in write_batch\r\n pa_table = pa.Table.from_arrays(arrays, schema=schema)\r\n File \"pyarrow\/table.pxi\", line 3798, in pyarrow.lib.Table.from_arrays\r\n File \"pyarrow\/table.pxi\", line 2962, in pyarrow.lib.Table.validate\r\n File \"pyarrow\/error.pxi\", line 100, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Column 1 named input_ids expected length 394 but got length 447\r\n```\r\n\r\nThe lambda function I have provided is correctly chopping up long text so it wraps around (and because of this, 394 samples become 447 after wrapping around), but the dataset `map` function does not like it.\n\n### Motivation\n\nPlease see above.\n\n### Your contribution\n\nI'm afraid I don't have much knowledge to help","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5997\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5997\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5996","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5996\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5996\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5996\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5996","id":1779294374,"node_id":"PR_kwDODunzps5UKP0i","number":5996,"title":"Deprecate `use_auth_token` in favor of 
`token`","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006134 \/ 0.011353 (-0.005219) | 0.003816 \/ 0.011008 (-0.007193) | 0.098226 \/ 0.038508 (0.059718) | 0.036830 \/ 0.023109 (0.013721) | 0.314551 \/ 0.275898 (0.038653) | 0.372251 \/ 0.323480 (0.048771) | 0.004762 \/ 0.007986 (-0.003224) | 0.003041 \/ 0.004328 (-0.001287) | 0.077651 \/ 0.004250 (0.073401) | 0.052445 \/ 0.037052 (0.015393) | 0.324632 \/ 0.258489 (0.066143) | 0.365724 \/ 0.293841 (0.071883) | 0.028069 \/ 0.128546 (-0.100477) | 0.008444 \/ 0.075646 (-0.067203) | 0.312767 \/ 0.419271 (-0.106505) | 0.047773 \/ 0.043533 (0.004240) | 0.305317 \/ 0.255139 (0.050178) | 0.332007 \/ 0.283200 (0.048807) | 0.018985 \/ 0.141683 (-0.122698) | 1.538022 \/ 1.452155 (0.085868) | 1.575898 \/ 1.492716 (0.083182) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.204780 \/ 0.018006 (0.186774) | 0.428125 \/ 0.000490 (0.427635) | 0.003454 \/ 0.000200 (0.003254) | 0.000078 \/ 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025064 \/ 0.037411 (-0.012348) | 0.099419 \/ 0.014526 (0.084893) | 0.111068 \/ 0.176557 (-0.065489) | 0.169775 \/ 0.737135 (-0.567361) | 0.112067 \/ 0.296338 (-0.184271) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.429642 \/ 0.215209 (0.214433) | 4.275556 \/ 2.077655 (2.197901) | 1.914658 \/ 1.504120 (0.410539) | 1.706556 \/ 1.541195 (0.165361) | 1.754228 \/ 1.468490 (0.285738) | 0.563669 \/ 4.584777 (-4.021108) | 
3.391501 \/ 3.745712 (-0.354211) | 1.791517 \/ 5.269862 (-3.478345) | 1.030704 \/ 4.565676 (-3.534973) | 0.070882 \/ 0.424275 (-0.353393) | 0.011351 \/ 0.007607 (0.003744) | 0.529438 \/ 0.226044 (0.303394) | 5.294316 \/ 2.268929 (3.025387) | 2.344653 \/ 55.444624 (-53.099972) | 1.997468 \/ 6.876477 (-4.879009) | 2.108932 \/ 2.142072 (-0.033140) | 0.676794 \/ 4.805227 (-4.128433) | 0.135058 \/ 6.500664 (-6.365607) | 0.065857 \/ 0.075469 (-0.009612) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.231864 \/ 1.841788 (-0.609924) | 13.986694 \/ 8.074308 (5.912386) | 13.306600 \/ 10.191392 (3.115208) | 0.145520 \/ 0.680424 (-0.534904) | 0.016717 \/ 0.534201 (-0.517484) | 0.366303 \/ 0.579283 (-0.212980) | 0.391637 \/ 0.434364 (-0.042727) | 0.425445 \/ 0.540337 (-0.114892) | 0.507719 \/ 1.386936 (-0.879217) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006236 \/ 0.011353 (-0.005116) | 0.003766 \/ 0.011008 (-0.007242) | 0.076794 \/ 0.038508 (0.038286) | 0.037210 \/ 0.023109 (0.014101) | 0.378387 \/ 0.275898 (0.102489) | 0.425456 \/ 0.323480 (0.101977) | 0.004694 \/ 0.007986 (-0.003291) | 0.002921 \/ 0.004328 (-0.001407) | 0.076985 \/ 0.004250 (0.072735) | 0.052188 \/ 0.037052 (0.015136) | 0.394385 \/ 0.258489 (0.135896) | 0.432527 \/ 0.293841 (0.138686) | 0.029091 \/ 0.128546 (-0.099455) | 0.008364 \/ 0.075646 (-0.067282) | 0.082583 \/ 0.419271 (-0.336689) | 0.042928 \/ 0.043533 (-0.000605) | 0.375321 \/ 0.255139 (0.120182) | 0.391719 \/ 0.283200 (0.108519) | 0.019388 \/ 0.141683 (-0.122295) | 1.550644 \/ 1.452155 (0.098489) | 1.604882 \/ 1.492716 (0.112166) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.236859 \/ 0.018006 (0.218853) | 0.418528 \/ 0.000490 (0.418039) | 0.000388 \/ 0.000200 (0.000188) | 0.000059 \/ 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025548 \/ 0.037411 (-0.011863) | 0.100644 \/ 0.014526 (0.086118) | 0.109102 \/ 0.176557 (-0.067455) | 0.161694 \/ 0.737135 (-0.575441) | 0.112088 \/ 0.296338 (-0.184250) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.484128 \/ 0.215209 (0.268919) | 4.849952 \/ 2.077655 (2.772297) | 2.512769 \/ 1.504120 (1.008649) | 2.303295 \/ 1.541195 (0.762100) | 2.356699 \/ 1.468490 (0.888209) | 0.564181 \/ 4.584777 (-4.020596) | 
3.421393 \/ 3.745712 (-0.324319) | 2.570875 \/ 5.269862 (-2.698987) | 1.474307 \/ 4.565676 (-3.091370) | 0.068035 \/ 0.424275 (-0.356240) | 0.011300 \/ 0.007607 (0.003693) | 0.587867 \/ 0.226044 (0.361823) | 5.862447 \/ 2.268929 (3.593519) | 3.004017 \/ 55.444624 (-52.440607) | 2.664989 \/ 6.876477 (-4.211488) | 2.740020 \/ 2.142072 (0.597948) | 0.680840 \/ 4.805227 (-4.124387) | 0.137001 \/ 6.500664 (-6.363663) | 0.068098 \/ 0.075469 (-0.007371) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.297362 \/ 1.841788 (-0.544426) | 14.207891 \/ 8.074308 (6.133583) | 14.087562 \/ 10.191392 (3.896170) | 0.149514 \/ 0.680424 (-0.530910) | 0.016566 \/ 0.534201 (-0.517635) | 0.367602 \/ 0.579283 (-0.211681) | 0.400692 \/ 0.434364 (-0.033671) | 0.432907 \/ 0.540337 (-0.107431) | 0.525924 \/ 1.386936 (-0.861012) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#1ec069feaaf6c28d4e4df76d344693b591a74c3f \"CML watermark\")\n","
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006223 \/ 0.011353 (-0.005130) | 0.003672 \/ 0.011008 (-0.007336) | 0.097451 \/ 0.038508 (0.058943) | 0.036243 \/ 0.023109 (0.013133) | 0.375650 \/ 0.275898 (0.099752) | 0.431652 \/ 0.323480 (0.108172) | 0.004758 \/ 0.007986 (-0.003227) | 0.002941 \/ 0.004328 (-0.001387) | 0.077383 \/ 0.004250 (0.073132) | 0.055342 \/ 0.037052 (0.018289) | 0.390335 \/ 0.258489 (0.131846) | 0.427867 \/ 0.293841 (0.134026) | 0.027619 \/ 0.128546 (-0.100927) | 0.008244 \/ 0.075646 (-0.067402) | 0.313499 \/ 0.419271 (-0.105773) | 0.054987 \/ 0.043533 (0.011454) | 0.394044 \/ 0.255139 (0.138905) | 0.398784 \/ 0.283200 (0.115584) | 0.026499 \/ 0.141683 (-0.115184) | 1.496907 \/ 1.452155 (0.044753) | 1.554465 \/ 1.492716 (0.061749) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.241197 \/ 0.018006 (0.223190) | 0.427856 \/ 0.000490 (0.427366) | 0.006264 \/ 0.000200 (0.006065) | 0.000218 \/ 0.000054 (0.000164) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025550 \/ 0.037411 (-0.011862) | 0.104426 \/ 0.014526 (0.089901) | 0.110310 \/ 0.176557 (-0.066246) | 0.173813 \/ 0.737135 (-0.563322) | 0.112129 \/ 0.296338 (-0.184209) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.458806 \/ 0.215209 (0.243597) | 4.576351 \/ 2.077655 (2.498697) | 2.265670 \/ 1.504120 (0.761550) | 2.073230 \/ 1.541195 (0.532035) | 2.135283 \/ 1.468490 (0.666793) | 0.562506 \/ 4.584777 (-4.022271) | 
3.375101 \/ 3.745712 (-0.370611) | 1.734393 \/ 5.269862 (-3.535469) | 1.026622 \/ 4.565676 (-3.539054) | 0.068144 \/ 0.424275 (-0.356131) | 0.011092 \/ 0.007607 (0.003485) | 0.562779 \/ 0.226044 (0.336734) | 5.608256 \/ 2.268929 (3.339328) | 2.706468 \/ 55.444624 (-52.738157) | 2.381607 \/ 6.876477 (-4.494869) | 2.451027 \/ 2.142072 (0.308954) | 0.671590 \/ 4.805227 (-4.133637) | 0.135749 \/ 6.500664 (-6.364915) | 0.065389 \/ 0.075469 (-0.010080) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.244806 \/ 1.841788 (-0.596981) | 14.042150 \/ 8.074308 (5.967841) | 14.246612 \/ 10.191392 (4.055220) | 0.134309 \/ 0.680424 (-0.546114) | 0.017082 \/ 0.534201 (-0.517119) | 0.366043 \/ 0.579283 (-0.213240) | 0.400748 \/ 0.434364 (-0.033616) | 0.425695 \/ 0.540337 (-0.114643) | 0.509355 \/ 1.386936 (-0.877581) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006134 \/ 0.011353 (-0.005219) | 0.003980 \/ 0.011008 (-0.007028) | 0.078353 \/ 0.038508 (0.039845) | 0.038011 \/ 0.023109 (0.014902) | 0.375784 \/ 0.275898 (0.099886) | 0.433619 \/ 0.323480 (0.110139) | 0.004897 \/ 0.007986 (-0.003088) | 0.002981 \/ 0.004328 (-0.001347) | 0.077362 \/ 0.004250 (0.073112) | 0.056108 \/ 0.037052 (0.019056) | 0.395984 \/ 0.258489 (0.137495) | 0.427397 \/ 0.293841 (0.133556) | 0.029325 \/ 0.128546 (-0.099221) | 0.008498 \/ 0.075646 (-0.067148) | 0.082478 \/ 0.419271 (-0.336794) | 0.044085 \/ 0.043533 (0.000552) | 0.389923 \/ 0.255139 (0.134784) | 0.391180 \/ 0.283200 (0.107980) | 0.022452 \/ 0.141683 (-0.119231) | 1.507758 \/ 1.452155 (0.055603) | 1.530459 \/ 1.492716 (0.037743) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.230928 \/ 0.018006 (0.212922) | 0.408484 \/ 0.000490 (0.407995) | 0.000806 \/ 0.000200 (0.000606) | 0.000067 \/ 0.000054 (0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025183 \/ 0.037411 (-0.012228) | 0.102292 \/ 0.014526 (0.087766) | 0.108142 \/ 0.176557 (-0.068415) | 0.161172 \/ 0.737135 (-0.575963) | 0.114476 \/ 0.296338 (-0.181862) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.482978 \/ 0.215209 (0.267769) | 4.816103 \/ 2.077655 (2.738448) | 2.505567 \/ 1.504120 (1.001447) | 2.302598 \/ 1.541195 (0.761404) | 2.371238 \/ 1.468490 (0.902748) | 0.567467 \/ 4.584777 (-4.017310) | 
3.363407 \/ 3.745712 (-0.382306) | 1.746213 \/ 5.269862 (-3.523649) | 1.035468 \/ 4.565676 (-3.530208) | 0.068431 \/ 0.424275 (-0.355844) | 0.011069 \/ 0.007607 (0.003462) | 0.598241 \/ 0.226044 (0.372196) | 5.953927 \/ 2.268929 (3.684999) | 3.007493 \/ 55.444624 (-52.437132) | 2.629399 \/ 6.876477 (-4.247078) | 2.737201 \/ 2.142072 (0.595129) | 0.682456 \/ 4.805227 (-4.122771) | 0.137613 \/ 6.500664 (-6.363051) | 0.067941 \/ 0.075469 (-0.007528) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.306015 \/ 1.841788 (-0.535772) | 14.359240 \/ 8.074308 (6.284932) | 14.187601 \/ 10.191392 (3.996209) | 0.138612 \/ 0.680424 (-0.541812) | 0.016708 \/ 0.534201 (-0.517493) | 0.366365 \/ 0.579283 (-0.212918) | 0.396982 \/ 0.434364 (-0.037382) | 0.426939 \/ 0.540337 (-0.113398) | 0.520064 \/ 1.386936 (-0.866872) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#21d0fd041a5eca02d3ee787396216ac613c662ac \"CML watermark\")\n","They use `token` and emit a deprecation warning if `use_auth_token` is passed instead (see https:\/\/github.com\/huggingface\/transformers\/blob\/78a2b19fc84ed55c65f4bf20a901edb7ceb73c5f\/src\/transformers\/modeling_utils.py#L1933). \r\n\r\nI think we can update the `examples` scripts after merging this PR.","> I think we can update the examples scripts after merging this PR.\r\n\r\nWe should do a release before updated in the examples scripts no ? That's why it's an option to not have a deprecation warning until transformers and co are updated with the `token` arg","> We should do a release before updated in the examples scripts no ? That's why it's an option to not have a deprecation warning until transformers and co are updated with the token arg\r\n\r\nThis would avoid the warning only for the latest `datasets` release. TBH, I don't think this is worth the hassle, considering how simple it is to remove it.","
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007644 \/ 0.011353 (-0.003709) | 0.004667 \/ 0.011008 (-0.006341) | 0.117347 \/ 0.038508 (0.078839) | 0.050620 \/ 0.023109 (0.027510) | 0.415402 \/ 0.275898 (0.139504) | 0.485898 \/ 0.323480 (0.162418) | 0.005848 \/ 0.007986 (-0.002138) | 0.003736 \/ 0.004328 (-0.000592) | 0.089798 \/ 0.004250 (0.085547) | 0.069344 \/ 0.037052 (0.032292) | 0.441684 \/ 0.258489 (0.183195) | 0.468972 \/ 0.293841 (0.175131) | 0.036637 \/ 0.128546 (-0.091909) | 0.010219 \/ 0.075646 (-0.065427) | 0.394293 \/ 0.419271 (-0.024978) | 0.061462 \/ 0.043533 (0.017929) | 0.409448 \/ 0.255139 (0.154309) | 0.431557 \/ 0.283200 (0.148358) | 0.027795 \/ 0.141683 (-0.113888) | 1.837844 \/ 1.452155 (0.385690) | 1.862683 \/ 1.492716 (0.369967) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.230500 \/ 0.018006 (0.212494) | 0.483139 \/ 0.000490 (0.482649) | 0.006517 \/ 0.000200 (0.006317) | 0.000143 \/ 0.000054 (0.000088) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.033152 \/ 0.037411 (-0.004259) | 0.133673 \/ 0.014526 (0.119147) | 0.143853 \/ 0.176557 (-0.032704) | 0.215254 \/ 0.737135 (-0.521882) | 0.150676 \/ 0.296338 (-0.145662) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.503796 \/ 0.215209 (0.288587) | 5.049981 \/ 2.077655 (2.972326) | 2.399427 \/ 1.504120 (0.895307) | 2.167635 \/ 1.541195 (0.626441) | 2.257448 \/ 1.468490 (0.788958) | 0.641298 \/ 4.584777 (-3.943479) | 
4.828676 \/ 3.745712 (1.082964) | 4.346069 \/ 5.269862 (-0.923793) | 2.103890 \/ 4.565676 (-2.461786) | 0.079115 \/ 0.424275 (-0.345160) | 0.013377 \/ 0.007607 (0.005770) | 0.621207 \/ 0.226044 (0.395162) | 6.190939 \/ 2.268929 (3.922011) | 2.920129 \/ 55.444624 (-52.524495) | 2.549225 \/ 6.876477 (-4.327252) | 2.719221 \/ 2.142072 (0.577149) | 0.790949 \/ 4.805227 (-4.014278) | 0.172032 \/ 6.500664 (-6.328632) | 0.077779 \/ 0.075469 (0.002310) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.432572 \/ 1.841788 (-0.409216) | 21.000031 \/ 8.074308 (12.925723) | 17.555093 \/ 10.191392 (7.363701) | 0.166646 \/ 0.680424 (-0.513778) | 0.020451 \/ 0.534201 (-0.513750) | 0.488767 \/ 0.579283 (-0.090516) | 0.737036 \/ 0.434364 (0.302672) | 0.621694 \/ 0.540337 (0.081356) | 0.732074 \/ 1.386936 (-0.654862) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008198 \/ 0.011353 (-0.003155) | 0.004987 \/ 0.011008 (-0.006021) | 0.090714 \/ 0.038508 (0.052206) | 0.053379 \/ 0.023109 (0.030270) | 0.425199 \/ 0.275898 (0.149301) | 0.514036 \/ 0.323480 (0.190556) | 0.006043 \/ 0.007986 (-0.001943) | 0.003888 \/ 0.004328 (-0.000441) | 0.088294 \/ 0.004250 (0.084043) | 0.073024 \/ 0.037052 (0.035971) | 0.435983 \/ 0.258489 (0.177494) | 0.514293 \/ 0.293841 (0.220452) | 0.039451 \/ 0.128546 (-0.089095) | 0.010439 \/ 0.075646 (-0.065207) | 0.096885 \/ 0.419271 (-0.322387) | 0.060165 \/ 0.043533 (0.016632) | 0.421053 \/ 0.255139 (0.165914) | 0.455545 \/ 0.283200 (0.172345) | 0.027234 \/ 0.141683 (-0.114449) | 1.768975 \/ 1.452155 (0.316820) | 1.842853 \/ 1.492716 (0.350137) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.278940 \/ 0.018006 (0.260933) | 0.480709 \/ 0.000490 (0.480219) | 0.000436 \/ 0.000200 (0.000236) | 0.000070 \/ 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.034900 \/ 0.037411 (-0.002511) | 0.144893 \/ 0.014526 (0.130368) | 0.149567 \/ 0.176557 (-0.026989) | 0.213200 \/ 0.737135 (-0.523935) | 0.156735 \/ 0.296338 (-0.139604) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.535897 \/ 0.215209 (0.320687) | 5.336998 \/ 2.077655 (3.259343) | 2.685854 \/ 1.504120 (1.181734) | 2.470177 \/ 1.541195 (0.928983) | 2.547495 \/ 1.468490 (1.079004) | 0.642830 \/ 4.584777 (-3.941947) | 
4.595866 \/ 3.745712 (0.850154) | 2.186696 \/ 5.269862 (-3.083165) | 1.317969 \/ 4.565676 (-3.247708) | 0.079268 \/ 0.424275 (-0.345007) | 0.013792 \/ 0.007607 (0.006185) | 0.662236 \/ 0.226044 (0.436192) | 6.604775 \/ 2.268929 (4.335847) | 3.355888 \/ 55.444624 (-52.088736) | 2.968911 \/ 6.876477 (-3.907565) | 3.121862 \/ 2.142072 (0.979790) | 0.794752 \/ 4.805227 (-4.010475) | 0.170800 \/ 6.500664 (-6.329864) | 0.078393 \/ 0.075469 (0.002924) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.601605 \/ 1.841788 (-0.240183) | 20.743553 \/ 8.074308 (12.669245) | 17.543968 \/ 10.191392 (7.352576) | 0.221884 \/ 0.680424 (-0.458540) | 0.020779 \/ 0.534201 (-0.513422) | 0.479677 \/ 0.579283 (-0.099606) | 0.516207 \/ 0.434364 (0.081843) | 0.564046 \/ 0.540337 (0.023709) | 0.711336 \/ 1.386936 (-0.675600) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#819bb4346434912eb405ce3f3e9f21dc25a2fe85 \"CML watermark\")\n","Yes, sounds great! Thanks","yup"],"created_at":1687969598000,"updated_at":1688570540000,"closed_at":1688400213000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5996","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5996","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5996.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5996.patch","merged_at":1688400213000},"body":"... to be consistent with `transformers` and `huggingface_hub`.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5996\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5996\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5995","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5995\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5995\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5995\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5995","id":1777088925,"node_id":"PR_kwDODunzps5UCvYJ","number":5995,"title":"Support returning dataframe in map 
transform","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009725 \/ 0.011353 (-0.001628) | 0.006014 \/ 0.011008 (-0.004994) | 0.136039 \/ 0.038508 (0.097531) | 0.049685 \/ 0.023109 (0.026576) | 0.492967 \/ 0.275898 (0.217068) | 0.553775 \/ 0.323480 (0.230295) | 0.007421 \/ 0.007986 (-0.000564) | 0.004686 \/ 0.004328 (0.000357) | 0.106639 \/ 0.004250 (0.102389) | 0.073483 \/ 0.037052 (0.036431) | 0.507194 \/ 0.258489 (0.248705) | 0.535760 \/ 0.293841 (0.241919) | 0.049666 \/ 0.128546 (-0.078880) | 0.014139 \/ 0.075646 (-0.061507) | 0.435459 \/ 0.419271 (0.016188) | 0.076026 \/ 0.043533 (0.032493) | 0.454542 \/ 0.255139 (0.199403) | 0.512724 \/ 0.283200 (0.229524) | 0.034969 \/ 0.141683 (-0.106713) | 1.881048 \/ 1.452155 (0.428893) | 1.959915 \/ 1.492716 (0.467199) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.265322 \/ 0.018006 (0.247316) | 0.573963 \/ 0.000490 (0.573474) | 0.017493 \/ 0.000200 (0.017293) | 0.000637 \/ 0.000054 (0.000582) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.028712 \/ 0.037411 (-0.008699) | 0.149554 \/ 0.014526 (0.135029) | 0.130013 \/ 0.176557 (-0.046544) | 0.203408 \/ 0.737135 (-0.533727) | 0.144778 \/ 0.296338 (-0.151561) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.664198 \/ 0.215209 (0.448989) | 6.418054 \/ 2.077655 (4.340399) | 2.602338 \/ 1.504120 (1.098219) | 2.212992 \/ 1.541195 (0.671797) | 2.214309 \/ 1.468490 (0.745819) | 0.914772 \/ 4.584777 (-3.670005) | 
5.824831 \/ 3.745712 (2.079119) | 2.865381 \/ 5.269862 (-2.404481) | 1.906020 \/ 4.565676 (-2.659657) | 0.106947 \/ 0.424275 (-0.317328) | 0.013467 \/ 0.007607 (0.005860) | 0.834556 \/ 0.226044 (0.608512) | 8.237078 \/ 2.268929 (5.968150) | 3.380919 \/ 55.444624 (-52.063705) | 2.656713 \/ 6.876477 (-4.219764) | 2.834941 \/ 2.142072 (0.692869) | 1.151241 \/ 4.805227 (-3.653986) | 0.220860 \/ 6.500664 (-6.279804) | 0.080781 \/ 0.075469 (0.005312) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.655128 \/ 1.841788 (-0.186660) | 18.696108 \/ 8.074308 (10.621800) | 22.882108 \/ 10.191392 (12.690716) | 0.236041 \/ 0.680424 (-0.444383) | 0.031073 \/ 0.534201 (-0.503128) | 0.525263 \/ 0.579283 (-0.054021) | 0.632933 \/ 0.434364 (0.198569) | 0.707228 \/ 0.540337 (0.166890) | 0.753508 \/ 1.386936 (-0.633428) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009875 \/ 0.011353 (-0.001478) | 0.005135 \/ 0.011008 (-0.005873) | 0.101307 \/ 0.038508 (0.062799) | 0.044895 \/ 0.023109 (0.021786) | 0.497824 \/ 0.275898 (0.221926) | 0.573098 \/ 0.323480 (0.249618) | 0.006669 \/ 0.007986 (-0.001317) | 0.004289 \/ 0.004328 (-0.000039) | 0.105824 \/ 0.004250 (0.101573) | 0.061002 \/ 0.037052 (0.023950) | 0.510127 \/ 0.258489 (0.251638) | 0.581387 \/ 0.293841 (0.287546) | 0.052843 \/ 0.128546 (-0.075703) | 0.015506 \/ 0.075646 (-0.060140) | 0.116057 \/ 0.419271 (-0.303215) | 0.063444 \/ 0.043533 (0.019912) | 0.479366 \/ 0.255139 (0.224227) | 0.518419 \/ 0.283200 (0.235220) | 0.034876 \/ 0.141683 (-0.106806) | 2.018446 \/ 1.452155 (0.566292) | 1.960755 \/ 1.492716 (0.468039) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.269077 \/ 0.018006 (0.251070) | 0.606059 \/ 0.000490 (0.605569) | 0.000488 \/ 0.000200 (0.000288) | 0.000093 \/ 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.032465 \/ 0.037411 (-0.004946) | 0.136517 \/ 0.014526 (0.121991) | 0.147740 \/ 0.176557 (-0.028816) | 0.193802 \/ 0.737135 (-0.543334) | 0.151876 \/ 0.296338 (-0.144462) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.709866 \/ 0.215209 (0.494657) | 6.848193 \/ 2.077655 (4.770538) | 3.310853 \/ 1.504120 (1.806733) | 2.940813 \/ 1.541195 (1.399619) | 2.934934 \/ 1.468490 (1.466444) | 0.927104 \/ 4.584777 (-3.657673) | 
5.921607 \/ 3.745712 (2.175895) | 4.926558 \/ 5.269862 (-0.343303) | 2.853269 \/ 4.565676 (-1.712407) | 0.120278 \/ 0.424275 (-0.303998) | 0.015468 \/ 0.007607 (0.007861) | 0.820509 \/ 0.226044 (0.594464) | 8.263136 \/ 2.268929 (5.994208) | 3.780214 \/ 55.444624 (-51.664410) | 3.108482 \/ 6.876477 (-3.767995) | 3.101544 \/ 2.142072 (0.959471) | 1.165539 \/ 4.805227 (-3.639688) | 0.229215 \/ 6.500664 (-6.271449) | 0.079862 \/ 0.075469 (0.004393) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.775071 \/ 1.841788 (-0.066717) | 19.327621 \/ 8.074308 (11.253313) | 23.057537 \/ 10.191392 (12.866145) | 0.250649 \/ 0.680424 (-0.429775) | 0.029767 \/ 0.534201 (-0.504434) | 0.554774 \/ 0.579283 (-0.024509) | 0.651919 \/ 0.434364 (0.217555) | 0.651641 \/ 0.540337 (0.111304) | 0.762386 \/ 1.386936 (-0.624550) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#fdc3ce7060366f480621e8640903c9ab476164e7 \"CML watermark\")\n","
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.005997 \/ 0.011353 (-0.005356) | 0.003892 \/ 0.011008 (-0.007116) | 0.098020 \/ 0.038508 (0.059512) | 0.042584 \/ 0.023109 (0.019475) | 0.317909 \/ 0.275898 (0.042011) | 0.395042 \/ 0.323480 (0.071563) | 0.005358 \/ 0.007986 (-0.002628) | 0.003266 \/ 0.004328 (-0.001062) | 0.076698 \/ 0.004250 (0.072447) | 0.062331 \/ 0.037052 (0.025279) | 0.334900 \/ 0.258489 (0.076411) | 0.379355 \/ 0.293841 (0.085514) | 0.030815 \/ 0.128546 (-0.097731) | 0.008596 \/ 0.075646 (-0.067050) | 0.327739 \/ 0.419271 (-0.091533) | 0.054061 \/ 0.043533 (0.010528) | 0.311044 \/ 0.255139 (0.055905) | 0.336705 \/ 0.283200 (0.053506) | 0.022785 \/ 0.141683 (-0.118898) | 1.516793 \/ 1.452155 (0.064639) | 1.590435 \/ 1.492716 (0.097719) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.289157 \/ 0.018006 (0.271151) | 0.531074 \/ 0.000490 (0.530585) | 0.004672 \/ 0.000200 (0.004472) | 0.000095 \/ 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.026173 \/ 0.037411 (-0.011238) | 0.105723 \/ 0.014526 (0.091197) | 0.118010 \/ 0.176557 (-0.058547) | 0.178062 \/ 0.737135 (-0.559073) | 0.120059 \/ 0.296338 (-0.176279) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.410870 \/ 0.215209 (0.195661) | 4.042183 \/ 2.077655 (1.964528) | 1.830059 \/ 1.504120 (0.325939) | 1.638996 \/ 1.541195 (0.097802) | 1.701368 \/ 1.468490 (0.232878) | 0.529915 \/ 4.584777 (-4.054861) | 
[remainder of auto-generated CML benchmark tables (PyArrow==8.0.0 vs PyArrow==latest) omitted]\n\n![](https:\/\/cml.dev\/watermark.png#504ec0f2e00ee38e0993ed1e4f1e10f1eefaea0d \"CML watermark\")\n","
[auto-generated CML benchmark comment (PyArrow==8.0.0 vs PyArrow==latest performance tables) omitted]\n\n![](https:\/\/cml.dev\/watermark.png#0d2b8854c265b4dc202e480427890f472b34ea15 \"CML watermark\")\n"],"created_at":1687875308000,"updated_at":1687960562000,"closed_at":1687959993000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5995","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5995","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5995.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5995.patch","merged_at":1687959993000},"body":"Allow returning Pandas DataFrames in `map` transforms.\r\n\r\n(Plus, raise an error in the non-batched mode if a returned PyArrow table\/Pandas DataFrame has more than one row)\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5995\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5995\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5994","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5994\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5994\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5994\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5994","id":1776829004,"node_id":"PR_kwDODunzps5UB1cA","number":5994,"title":"Fix select_columns columns 
order","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["
[auto-generated CML benchmark comment (PyArrow==8.0.0 vs PyArrow==latest performance tables) omitted]\n\n![](https:\/\/cml.dev\/watermark.png#42603528d9bd8c3ab287ed0eadc7fa3d1ef4cfd8 \"CML watermark\")\n","_The documentation is not available anymore as the PR was closed or merged._","
[auto-generated CML benchmark comment (PyArrow==8.0.0 vs PyArrow==latest performance tables) omitted]\n\n![](https:\/\/cml.dev\/watermark.png#ca68191900d97b29abb3c2c4ba0502fe30d137d1 \"CML watermark\")\n","
[auto-generated CML benchmark comment (PyArrow==8.0.0 vs PyArrow==latest performance tables) omitted]\n\n![](https:\/\/cml.dev\/watermark.png#aa50937d82256827aee3dbd749c7a23555e05e38 \"CML watermark\")\n"],"created_at":1687869166000,"updated_at":1687880447000,"closed_at":1687879963000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5994","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5994","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5994.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5994.patch","merged_at":1687879963000},"body":"Fix the order of the columns in dataset.features when the order changes with `dataset.select_columns()`.\r\n\r\nI also fixed the same issue for `dataset.flatten()`\r\n\r\nClose https:\/\/github.com\/huggingface\/datasets\/issues\/5993","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5994\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5994\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5993","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5993\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5993\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5993\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5993","id":1776643555,"node_id":"I_kwDODunzps5p5W3j","number":5993,"title":"ValueError: Table schema does not match schema used to create 
file","user":{"login":"exs-avianello","id":128361578,"node_id":"U_kgDOB6akag","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/128361578?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/exs-avianello","html_url":"https:\/\/github.com\/exs-avianello","followers_url":"https:\/\/api.github.com\/users\/exs-avianello\/followers","following_url":"https:\/\/api.github.com\/users\/exs-avianello\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/exs-avianello\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/exs-avianello\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/exs-avianello\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/exs-avianello\/orgs","repos_url":"https:\/\/api.github.com\/users\/exs-avianello\/repos","events_url":"https:\/\/api.github.com\/users\/exs-avianello\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/exs-avianello\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["We'll do a new release of `datasets` soon to make the fix available :)\r\n\r\nIn the meantime you can use `datasets` from source (main)","Thank you very much @lhoestq ! 
\ud83d\ude80 "],"created_at":1687863247000,"updated_at":1687880202000,"closed_at":1687879964000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nSaving a dataset as parquet fails with a `ValueError: Table schema does not match schema used to create file` if the dataset was obtained out of a `.select_columns()` call with columns selected out of order.\n\n### Steps to reproduce the bug\n\n```python\r\nimport datasets\r\n\r\ndataset = datasets.Dataset.from_dict(\r\n {\r\n \"x1\": [1, 2, 3],\r\n \"x2\": [10, 11, 12],\r\n }\r\n)\r\n\r\nds = dataset.select_columns([\"x2\", \"x1\"])\r\n\r\nds.to_parquet(\"demo.parquet\")\r\n```\r\n\r\n```shell\r\n>>>\r\nValueError: Table schema does not match schema used to create file: \r\ntable:\r\nx2: int64\r\nx1: int64\r\n-- schema metadata --\r\nhuggingface: '{\"info\": {\"features\": {\"x2\": {\"dtype\": \"int64\", \"_type\": \"V' + 53 vs. \r\nfile:\r\nx1: int64\r\nx2: int64\r\n-- schema metadata --\r\nhuggingface: '{\"info\": {\"features\": {\"x1\": {\"dtype\": \"int64\", \"_type\": \"V' + 53\r\n```\r\n\r\n--- \r\n\r\nI think this is because after the `.select_columns()` call with out of order columns, the output dataset features' schema ends up being out of sync with the schema of the arrow table backing it. \r\n\r\n```python\r\nds.features.arrow_schema\r\n>>>\r\nx1: int64\r\nx2: int64\r\n-- schema metadata --\r\nhuggingface: '{\"info\": {\"features\": {\"x1\": {\"dtype\": \"int64\", \"_type\": \"V' + 53\r\n\r\nds.data.schema\r\n>>>\r\nx2: int64\r\nx1: int64\r\n-- schema metadata --\r\nhuggingface: '{\"info\": {\"features\": {\"x2\": {\"dtype\": \"int64\", \"_type\": \"V' + 53\r\n```\r\n\r\n\r\nSo when we call `.to_parquet()`, the call behind the scenes to `datasets.io.parquet.ParquetDatasetWriter(...).write()` which initialises the backend `pyarrow.parquet.ParquetWriter` with `schema = self.dataset.features.arrow_schema` triggers `pyarrow` on write when [it checks](https:\/\/github.com\/apache\/arrow\/blob\/11b140a734a516e436adaddaeb35d23f30dcce44\/python\/pyarrow\/parquet\/core.py#L1086-L1090) that the `ParquetWriter` schema matches the schema of the table being written \ud83d\ude4c \r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/6ed837325cb539a5deb99129e5ad181d0269e050\/src\/datasets\/io\/parquet.py#L139-L141\r\n\n\n### Expected behavior\n\nThe dataset gets successfully saved as parquet. 
\r\n\r\n*In the same way as it does if saving it as csv:\r\n\r\n```python\r\nimport datasets\r\n\r\ndataset = datasets.Dataset.from_dict(\r\n {\r\n \"x1\": [1, 2, 3],\r\n \"x2\": [10, 11, 12],\r\n }\r\n)\r\n\r\nds = dataset.select_columns([\"x2\", \"x1\"])\r\n\r\nds.to_csv(\"demo.csv\")\r\n```\n\n### Environment info\n\n`python==3.11`\r\n`datasets==2.13.1`\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5993\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5993\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5992","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5992\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5992\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5992\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5992","id":1776460964,"node_id":"PR_kwDODunzps5UAk3C","number":5992,"title":"speedup","user":{"login":"qgallouedec","id":45557362,"node_id":"MDQ6VXNlcjQ1NTU3MzYy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/45557362?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/qgallouedec","html_url":"https:\/\/github.com\/qgallouedec","followers_url":"https:\/\/api.github.com\/users\/qgallouedec\/followers","following_url":"https:\/\/api.github.com\/users\/qgallouedec\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/qgallouedec\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/qgallouedec\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/qgallouedec\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/qgallouedec\/orgs","repos_url":"https:\/\/api.github.com\/users\/qgallouedec\/repos","events_url":"https:\/\/api.github.com\/users\/qgallouedec\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/qgallouedec\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_5992). 
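Stepping back to the parquet bug above (issue 5993): besides installing `datasets` from source as the maintainer suggests, one possible interim workaround until the fix from https://github.com/huggingface/datasets/pull/5994 reaches a release is to round-trip through pandas, which rebuilds a schema consistent with the actual column order before writing. This is only a sketch and assumes the dataset fits in memory:

```python
import datasets

dataset = datasets.Dataset.from_dict(
    {
        "x1": [1, 2, 3],
        "x2": [10, 11, 12],
    }
)
ds = dataset.select_columns(["x2", "x1"])

# The DataFrame is built from the backing Arrow table, so the schema
# pandas hands to parquet matches the reordered columns.
ds.to_pandas().to_parquet("demo.parquet")
```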
All of your documentation changes will be reflected on that endpoint."],"created_at":1687857478000,"updated_at":1687857787000,"closed_at":1687857484000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":true,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5992","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5992","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5992.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5992.patch","merged_at":null},"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5992\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5992\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5991","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5991\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5991\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5991\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5991","id":1774456518,"node_id":"I_kwDODunzps5pxA7G","number":5991,"title":"`map` with any joblib backend","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1687775622000,"updated_at":1687775622000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"draft":null,"pull_request":null,"body":"We recently enabled the (experimental) parallel backend switch for data download and extraction but not for `map` yet.\r\n\r\nRight now we're using our `iflatmap_unordered` implementation for multiprocessing that uses a shared Queue to gather progress updates from the subprocesses and show a progress bar in the main process.\r\n\r\nIf a Queue implementation that would work on any joblib backend by leveraging the filesystem that is shared among workers, we can have `iflatmap_unordered` for joblib and therefore a `map` with any joblib 
backend with a progress bar !\r\n\r\nNote that the Queue doesn't need to be that optimized though since we can choose a small frequency for progress updates (like 1 update per second).","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5991\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5991\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5989","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5989\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5989\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5989\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5989","id":1774134091,"node_id":"I_kwDODunzps5pvyNL","number":5989,"title":"Set a rule on the config and split names","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["in this case we need to decide what to do with the existing datasets with white space characters (there shouldn't be a lot of them I think)","I imagine that we should stop supporting them, and help the user fix them?"],"created_at":1687764854000,"updated_at":1687785178000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":null,"pull_request":null,"body":"> should we actually allow characters like spaces? 
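On the joblib `map` idea just above (issue 5991): a minimal sketch of the filesystem-backed queue it describes could look like the following, assuming only a directory shared by all workers. Every name here (`FileProgressQueue`, the `.tick` marker files) is hypothetical and not part of the `datasets` or `joblib` API:

```python
import os
import tempfile
import uuid


class FileProgressQueue:
    """Each put() drops a marker file in a shared directory; the main
    process counts markers to drive a progress bar."""

    def __init__(self, root: str):
        self.root = root
        os.makedirs(root, exist_ok=True)

    def put(self, n: int = 1) -> None:
        # One tiny file per update; file creation is atomic, so
        # concurrent workers never clobber each other.
        path = os.path.join(self.root, f"{uuid.uuid4().hex}.tick")
        with open(path, "w") as f:
            f.write(str(n))

    def get_count(self) -> int:
        total = 0
        for name in os.listdir(self.root):
            if name.endswith(".tick"):
                with open(os.path.join(self.root, name)) as f:
                    total += int(f.read() or "1")
        return total


if __name__ == "__main__":
    q = FileProgressQueue(tempfile.mkdtemp(prefix="map_progress_"))
    for _ in range(3):
        q.put()
    print(q.get_count())  # 3
```

A real implementation would batch updates and clean up after itself, but since the issue notes that roughly one progress update per second is enough, even this naive counting stays cheap.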
maybe it's better to add validation for whitespace symbols and directly in datasets and raise\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets-server\/issues\/853\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5989\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5989\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5988","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5988\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5988\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5988\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5988","id":1773257828,"node_id":"I_kwDODunzps5pscRk","number":5988,"title":"ConnectionError: Couldn't reach dataset_infos.json ","user":{"login":"yulingao","id":20674868,"node_id":"MDQ6VXNlcjIwNjc0ODY4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/20674868?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yulingao","html_url":"https:\/\/github.com\/yulingao","followers_url":"https:\/\/api.github.com\/users\/yulingao\/followers","following_url":"https:\/\/api.github.com\/users\/yulingao\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yulingao\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yulingao\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yulingao\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yulingao\/orgs","repos_url":"https:\/\/api.github.com\/users\/yulingao\/repos","events_url":"https:\/\/api.github.com\/users\/yulingao\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yulingao\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Unfortunately, I can't reproduce the error. 
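A sketch of the validation proposed in issue 5989 above, assuming the rule is simply "no whitespace or other special characters"; the regex and function name are illustrative, not the rule datasets actually adopted.

```python
import re

_NAME_RE = re.compile(r"^[\w.-]+$")  # letters, digits, underscore, dot, dash

def validate_config_or_split_name(name: str) -> None:
    # Raise early, as suggested in the thread, instead of letting bad names
    # propagate to the Hub and the dataset viewer.
    if not _NAME_RE.match(name):
        raise ValueError(
            f"Invalid config/split name {name!r}: whitespace and special "
            "characters are not allowed."
        )
```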
What does the following code return for you?\r\n```python\r\nimport requests\r\nfrom huggingface_hub import hf_hub_url\r\nr = requests.get(hf_hub_url(\"codeparrot\/codeparrot-clean-train\", \"dataset_infos.json\", repo_type=\"dataset\"))\r\n```\r\n\r\nAlso, can you provide more info about your network (region, proxies, etc.)?"],"created_at":1687696771000,"updated_at":1688736057000,"closed_at":1688736057000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nI'm trying to load codeparrot\/codeparrot-clean-train, but get the following error:\r\n\r\nConnectionError: Couldn't reach https:\/\/huggingface.co\/datasets\/codeparrot\/codeparrot-clean-train\/resolve\/main\/dataset_infos.json (ConnectionError(ProtocolError('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))))\r\n\r\n\n\n### Steps to reproduce the bug\n\ntrain_data = load_dataset('codeparrot\/codeparrot-clean-train', split='train')\r\n\n\n### Expected behavior\n\ndownload the dataset\n\n### Environment info\n\ncentos7","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5988\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5988\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5987","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5987\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5987\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5987\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5987","id":1773047909,"node_id":"I_kwDODunzps5prpBl","number":5987,"title":"Why max_shard_size is not supported in load_dataset and passed to download_and_prepare","user":{"login":"npuichigo","id":11533479,"node_id":"MDQ6VXNlcjExNTMzNDc5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/11533479?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/npuichigo","html_url":"https:\/\/github.com\/npuichigo","followers_url":"https:\/\/api.github.com\/users\/npuichigo\/followers","following_url":"https:\/\/api.github.com\/users\/npuichigo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/npuichigo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/npuichigo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/npuichigo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/npuichigo\/orgs","repos_url":"https:\/\/api.github.com\/users\/npuichigo\/repos","events_url":"https:\/\/api.github.com\/users\/npuichigo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/npuichigo\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Can you explain your use case for `max_shard_size`? \r\n\r\nOn some systems, there is a limit to the size of a memory-mapped file, so we could consider exposing this parameter in `load_dataset`.","In my use case, users may choose a proper size to balance the cost and benefit of using large shard size. 
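The diagnostic snippet in the comment above stores the response but prints nothing; a version that surfaces something observable could look like this (`hf_hub_url` is the real `huggingface_hub` helper, the print is an addition):

```python
import requests
from huggingface_hub import hf_hub_url

url = hf_hub_url("codeparrot/codeparrot-clean-train", "dataset_infos.json", repo_type="dataset")
r = requests.get(url)
print(r.status_code)  # 200 means the file is reachable from your network
```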
(On Azure Blob or HDFS, which may automatically download the shard in the background)","But `load_dataset` doesn't support caching (and reading) Arrow datasets from remote storage. \r\n\r\n`load_dataset_builder` + `download_and_prepare` is not equal to `load_dataset`. The latter has one more step, `builder.as_dataset`, that memory-maps Arrow files, which only works for local files.","Thanks. So if I want to use `IterableDataset` and control the size of a single Arrow file, how should I organize the data loader? Maybe `load_dataset_builder` + `download_and_prepare` + `builder.as_dataset` + `dataset.to_iterable_dataset`?","Yes, this should work.\r\n\r\nI think we can expose `max_shard_size` in `load_dataset`, so feel free to open a PR."],"created_at":1687666753000,"updated_at":1688054768000,"closed_at":1688054768000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/a8a797cc92e860c8d0df71e0aa826f4d2690713e\/src\/datasets\/load.py#L1809\r\n\r\nWhat I can do is bypass `load_dataset` and use `load_dataset_builder` + `download_and_prepare` instead.\n\n### Steps to reproduce the bug\n\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/a8a797cc92e860c8d0df71e0aa826f4d2690713e\/src\/datasets\/load.py#L1809\n\n### Expected behavior\n\nUsers can define the max shard size.\n\n### Environment info\n\ndatasets==2.13.1","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5987\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5987\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5986","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5986\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5986\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5986\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5986","id":1772233111,"node_id":"PR_kwDODunzps5TygOZ","number":5986,"title":"Make IterableDataset.from_spark more
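The workaround confirmed in the comments of issue 5987 above, spelled out as a sketch; the "500MB" value is an arbitrary example:

```python
from datasets import load_dataset_builder

builder = load_dataset_builder("codeparrot/codeparrot-clean-train")
builder.download_and_prepare(max_shard_size="500MB")  # cap each Arrow shard
ds = builder.as_dataset(split="train")  # memory-maps the local Arrow files
ids = ds.to_iterable_dataset()          # stream over the prepared shards
```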
efficient","user":{"login":"mathewjacob1002","id":134338709,"node_id":"U_kgDOCAHYlQ","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/134338709?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mathewjacob1002","html_url":"https:\/\/github.com\/mathewjacob1002","followers_url":"https:\/\/api.github.com\/users\/mathewjacob1002\/followers","following_url":"https:\/\/api.github.com\/users\/mathewjacob1002\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mathewjacob1002\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mathewjacob1002\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mathewjacob1002\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mathewjacob1002\/orgs","repos_url":"https:\/\/api.github.com\/users\/mathewjacob1002\/repos","events_url":"https:\/\/api.github.com\/users\/mathewjacob1002\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mathewjacob1002\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq would you be able to review this please and also approve the workflow?","Sounds good to me :) feel free to run `make style` to apply code formatting","_The documentation is not available anymore as the PR was closed or merged._","cool ! I think we can merge once all comments have been addressed","@lhoestq I just addressed the comments and I think we can move ahead with this! \r\n","
[Automated CML benchmark comment (PyArrow==8.0.0 vs PyArrow==latest benchmark tables) omitted.]"],"created_at":1687558700000,"updated_at":1688724358000,"closed_at":1688723769000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5986","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5986","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5986.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5986.patch","merged_at":1688723769000},"body":"Moved the code from using collect() to using toLocalIterator, which allows for prefetching partitions that will be selected next, thus allowing for better performance when iterating.
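A minimal PySpark illustration of the change described in the PR body above (not the PR's actual code):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").getOrCreate()
df = spark.range(10_000_000)

# Before: collect() materializes every row in the driver at once.
# rows = df.collect()

# After: toLocalIterator() streams one partition at a time;
# prefetchPartitions=True fetches the next partition while the
# current one is being consumed.
for row in df.toLocalIterator(prefetchPartitions=True):
    pass  # per-row work goes here
```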
","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5986\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5986\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5985","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5985\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5985\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5985\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5985","id":1771588158,"node_id":"I_kwDODunzps5pmEo-","number":5985,"title":"Cannot reuse tokenizer object for dataset map","user":{"login":"vikigenius","id":12724810,"node_id":"MDQ6VXNlcjEyNzI0ODEw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12724810?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vikigenius","html_url":"https:\/\/github.com\/vikigenius","followers_url":"https:\/\/api.github.com\/users\/vikigenius\/followers","following_url":"https:\/\/api.github.com\/users\/vikigenius\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vikigenius\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vikigenius\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vikigenius\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vikigenius\/orgs","repos_url":"https:\/\/api.github.com\/users\/vikigenius\/repos","events_url":"https:\/\/api.github.com\/users\/vikigenius\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vikigenius\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892865,"node_id":"MDU6TGFiZWwxOTM1ODkyODY1","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/duplicate","name":"duplicate","color":"cfd3d7","default":true,"description":"This issue or pull request already exists"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This is a known issue: https:\/\/github.com\/huggingface\/datasets\/issues\/3847.\r\n\r\nFixing this requires significant work - rewriting the `tokenizers` lib to make them immutable.\r\n\r\nThe current solution is to pass `cache_file_name` to `map` to use that file for caching or calling a tokenizer before `map` (with the same set of parameters as the ones in the map transform)"],"created_at":1687531531000,"updated_at":1687782890000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nRelated to https:\/\/github.com\/huggingface\/transformers\/issues\/24441. Not sure if this is a tokenizer issue or caching issue, so filing in both.\r\n\r\nPassing the tokenizer to the dataset map function causes the tokenizer to be fingerprinted weirdly. 
After calling the tokenizer with arguments like padding and truncation, the tokenizer object changes internally, even though the hash remains the same.\r\n\r\nBut dumps is able to detect that internal change, which causes the tokenizer object's fingerprint to change.\r\n\r\n\n\n### Steps to reproduce the bug\n\n```python\r\nfrom transformers import AutoTokenizer\r\nfrom datasets.utils.py_utils import dumps # Huggingface datasets\r\n\r\nt = AutoTokenizer.from_pretrained('bert-base-uncased')\r\nt.save_pretrained(\"tok1\")\r\nth1 = hash(dumps(t))\r\ntext = \"This is an example text\"\r\nttext = t(text, max_length=512, padding=\"max_length\", truncation=True)\r\nt.save_pretrained(\"tok2\")\r\nth2 = hash(dumps(t))\r\n\r\nassert th1 == th2 # Assertion Error\r\n```\r\n\r\nBut if you use just the hash of the object without dumps, the hashes don't change:\r\n\r\n```python\r\nfrom transformers import AutoTokenizer\r\nfrom datasets.utils.py_utils import dumps # Huggingface datasets\r\n\r\nt = AutoTokenizer.from_pretrained('bert-base-uncased')\r\nth1 = hash(t) # Just hash, no dumps\r\ntext = \"This is an example text\"\r\nttext = t(text, max_length=512, padding=\"max_length\", truncation=True)\r\nth2 = hash(t) # Just hash, no dumps\r\n\r\nassert th1 == th2 # This is OK\r\n```\r\n\r\nThis causes situations such as the following:\r\n\r\n1. Create a text file like this: `yes \"This is an example text\" | head -n 10000 > lines.txt`\r\n\r\n```python\r\nfrom transformers import AutoTokenizer\r\nimport datasets\r\n\r\n\r\nclass TokenizeMapper(object):\r\n    \"\"\"Mapper for tokenizer.\r\n\r\n    This is needed because the caching mechanism of HuggingFace does not work on\r\n    lambdas. Each time, a new lambda will be created by a new process, which will\r\n    lead to a different hash.\r\n    This way we can have a universal mapper object in init and reuse it with the same\r\n    hash for each process.\r\n    \"\"\"\r\n\r\n    def __init__(self, tokenizer):\r\n        \"\"\"Initialize the tokenizer.\"\"\"\r\n        self.tokenizer = tokenizer\r\n\r\n    def __call__(self, examples, **kwargs):\r\n        \"\"\"Run the mapper.\"\"\"\r\n        texts = examples[\"text\"]\r\n        tt = self.tokenizer(texts, max_length=256, padding=\"max_length\", truncation=True)\r\n        batch_outputs = {\r\n            \"input_ids\": tt.input_ids,\r\n            \"attention_mask\": tt.attention_mask,\r\n        }\r\n        return batch_outputs\r\n\r\n\r\nt = AutoTokenizer.from_pretrained('bert-base-uncased')\r\nmapper = TokenizeMapper(t)\r\n\r\nds = datasets.load_dataset(\"text\", data_files=\"lines.txt\")\r\n\r\nmds1 = ds.map(\r\n    mapper,\r\n    batched=False,\r\n    remove_columns=[\"text\"],\r\n).with_format(\"torch\")\r\n\r\nmds2 = ds.map(\r\n    mapper,\r\n    batched=False,\r\n    remove_columns=[\"text\"],\r\n).with_format(\"torch\")\r\n```\r\n\r\nThe second call to map should reuse the cached processed dataset from mds1, but instead it redoes the tokenization because of the behavior of dumps.\n\n### Expected behavior\n\nWe should be able to initialize a tokenizer.
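A sketch of the `cache_file_name` workaround mentioned in the comment on issue 5985 above, applied to the reproduction (reusing `ds` and `mapper` from the snippet; the cache file name is hypothetical, and `ds["train"]` is used because `cache_file_name` is a `Dataset.map` argument):

```python
# Pinning the cache file sidesteps the unstable tokenizer fingerprint:
mds1 = ds["train"].map(
    mapper,
    remove_columns=["text"],
    cache_file_name="tokenized-lines.arrow",
).with_format("torch")

# The second call reloads from the same cache file instead of re-tokenizing:
mds2 = ds["train"].map(
    mapper,
    remove_columns=["text"],
    cache_file_name="tokenized-lines.arrow",
).with_format("torch")
```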
And reusing it should let us reuse the same map computation for the same dataset.\r\n\r\nThe second call to map should reuse the cached processed dataset from mds1, but instead it redoes the tokenization because of the behavior of dumps.\n\n### Environment info\n\n- `datasets` version: 2.13.0\r\n- Platform: Linux-6.1.31_1-x86_64-with-glibc2.36\r\n- Python version: 3.9.16\r\n- Huggingface_hub version: 0.15.1\r\n- PyArrow version: 12.0.1\r\n- Pandas version: 2.0.2","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5985\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5985\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5984","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5984\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5984\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5984\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5984","id":1771571458,"node_id":"I_kwDODunzps5pmAkC","number":5984,"title":"AutoSharding IterableDataset's when num_workers > 1","user":{"login":"mathephysicist","id":25594384,"node_id":"MDQ6VXNlcjI1NTk0Mzg0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/25594384?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mathephysicist","html_url":"https:\/\/github.com\/mathephysicist","followers_url":"https:\/\/api.github.com\/users\/mathephysicist\/followers","following_url":"https:\/\/api.github.com\/users\/mathephysicist\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mathephysicist\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mathephysicist\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mathephysicist\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mathephysicist\/orgs","repos_url":"https:\/\/api.github.com\/users\/mathephysicist\/repos","events_url":"https:\/\/api.github.com\/users\/mathephysicist\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mathephysicist\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["For this to be possible, we would have to switch from the \"Streaming\" Arrow format to the \"Random Access\" (IPC\/Feather) format, which allows reading arbitrary record batches (explained [here](https:\/\/arrow.apache.org\/docs\/python\/ipc.html)). We could then use these batches to construct shards.\r\n\r\n@lhoestq @albertvillanova Do you think this use case is worth the switch? Also, we currently shard files, not inner row groups\/chunks. Should we also support sharding row groups (e.g.
if the number of input files is 1)?\r\n\r\nPS: I don't expect significant speed-up for local, uncompressed Arrow files.","Alternatively we could support multiprocessing map for iterable datasets and let the user do the CPU intensive task there ?\r\n\r\nThis way it would work on arrow data but also on any iterable dataset","> For this to be possible, we would have to switch from the \"Streaming\" Arrow format to the \"Random Access\" (IPC\/Feather) format, which allows reading arbitrary record batches (explained [here](https:\/\/arrow.apache.org\/docs\/python\/ipc.html)). We could then use these batches to construct shards.\r\n> \r\n> @lhoestq @albertvillanova Do you think this use case is worth the switch? Also, we currently shard files, not inner row groups\/chunks. Should we also support sharding row groups (e.g. if the number of input files is 1)?\r\n> \r\n> PS: I don't expect significant speed-up for local, uncompressed Arrow files.\r\n\r\nCould you explain why you'd need to change the arrow format?\r\n\r\nWhen we use streaming datasets we simply determine the number of worker shards and then add some modulo logic at the appropriate place. Worst case scenario, you'd skip streaming entries according to the number of shards.\r\n\r\nFor PyTorch, I'd be happy to provide an implementation or a sketch thereof, if you point me toward what the testing requirements would be for such a PR.","> Could you explain why you'd need to change the arrow format?\r\n\r\nThis way workers have random access to the location of the file where its dataset subset starts. Currently we're using the Arrow streaming format which doesn't include the metadata of the record batches offsets. This is needed here to efficiently split a dataset made of one single file.","> > Could you explain why you'd need to change the arrow format?\r\n> \r\n> This way workers have random access to the location of the file where its dataset subset starts. Currently we're using the Arrow streaming format which doesn't include the metadata of the record batches offsets. This is needed here to efficiently split a dataset made of one single file.\r\n\r\nI guess I don't understand why you'd need to subset the dataset in the first place. \r\nIt seems sufficient to figure out how to offset or skip rows.\r\n\r\nFor instance, using pyArrow, you could use RecordBatchStreamReader to zero-copy iterate over records with read_next_batch and then only initiate the next step for records modulo worker shard.\r\nThat's one way to do it, where of course you'd need to account for gpu sharding as well.\r\n\r\n\r\nOtherwise, how did you implement worker\/node\/GPU sharding for iterable\/streaming data where you do not have index information or prior splits (e.g. files)?","> For instance, using pyArrow, you could use RecordBatchStreamReader to zero-copy iterate over records with read_next_batch and then only initiate the next step for records modulo worker shard.\r\n\r\nThat works indeed ! And what we meant is that you can make it even faster to instantiate. Indeed using RecordBatchStreamReader you need to get the list of all the record batches in each worker, whereas you could just get the list of record batches per worker if you use the record batches locations in the Arrow IPC file footer. 
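The two approaches discussed in this thread, sketched with real PyArrow APIs; the file path and worker numbers are placeholders, and this is an illustration of the idea rather than datasets' implementation:

```python
import pyarrow as pa

def iter_shard_streaming(path, worker_id, num_workers):
    # Streaming format: every worker scans the whole stream but keeps only
    # the record batches assigned to it modulo num_workers.
    with pa.OSFile(path, "rb") as f:
        reader = pa.ipc.open_stream(f)
        for i, batch in enumerate(reader):
            if i % num_workers == worker_id:
                yield batch

def iter_shard_random_access(path, worker_id, num_workers):
    # IPC file format: the footer stores record-batch locations, so each
    # worker can jump straight to its batches without scanning the rest.
    with pa.OSFile(path, "rb") as f:
        reader = pa.ipc.open_file(f)
        for i in range(worker_id, reader.num_record_batches, num_workers):
            yield reader.get_batch(i)
```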
This would be especially appreciated for fast instantiation when you have tens of thousands of Arrow files, for example."],"created_at":1687530860000,"updated_at":1688490236000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Feature request\n\n\r\nMinimal Example\r\n\r\n```\r\nimport torch\r\nfrom datasets import IterableDataset\r\n\r\nd = IterableDataset.from_file(\"data.arrow\")  # hypothetical path to the single Arrow file\r\ndl = torch.utils.data.DataLoader(d, num_workers=3)\r\n\r\nfor sample in dl:\r\n    print(sample)\r\n\r\n```\r\n\r\nWarning:\r\nToo many dataloader workers: 2 (max is dataset.n_shards=1). Stopping 1 dataloader workers.\r\nTo parallelize data loading, we give each process some shards (or data sources) to process. Therefore it's unnecessary to have a number of workers greater than dataset.n_shards=1. To enable more parallelism, please split the dataset in more files than 1.\r\n\r\nExpected Behavior:\r\nDataset is sharded; each CPU uses a subset (contiguously, so you can do checkpoint loading\/saving) \n\n### Motivation\n\nI have a lot of unused CPUs and would like to be able to shard iterable datasets with pytorch's dataloader when num_workers > 1. This is for a very large single file. I am aware that we can use `split_dataset_by_node` to ensure that each node (for distributed) gets different shards, but we should extend it so that this also continues for multiple workers. \n\n### Your contribution\n\nIf someone points me to what needs to change, I can create a PR.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5984\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5984\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5983","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5983\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5983\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5983\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5983","id":1770578804,"node_id":"PR_kwDODunzps5TtDdy","number":5983,"title":"replaced PathLike as a variable for save_to_disk for dataset_path
wit\u2026","user":{"login":"benjaminbrown038","id":35114142,"node_id":"MDQ6VXNlcjM1MTE0MTQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/35114142?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/benjaminbrown038","html_url":"https:\/\/github.com\/benjaminbrown038","followers_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/followers","following_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/orgs","repos_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/repos","events_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1687481825000,"updated_at":1687481825000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5983","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5983","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5983.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5983.patch","merged_at":null},"body":"\u2026h str like that of load_from_disk","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5983\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5983\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5982","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5982\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5982\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5982\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5982","id":1770333296,"node_id":"I_kwDODunzps5phSRw","number":5982,"title":"404 on Datasets Documentation 
Page","user":{"login":"kmulka-bloomberg","id":118509387,"node_id":"U_kgDOBxBPSw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/118509387?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/kmulka-bloomberg","html_url":"https:\/\/github.com\/kmulka-bloomberg","followers_url":"https:\/\/api.github.com\/users\/kmulka-bloomberg\/followers","following_url":"https:\/\/api.github.com\/users\/kmulka-bloomberg\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/kmulka-bloomberg\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/kmulka-bloomberg\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/kmulka-bloomberg\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/kmulka-bloomberg\/orgs","repos_url":"https:\/\/api.github.com\/users\/kmulka-bloomberg\/repos","events_url":"https:\/\/api.github.com\/users\/kmulka-bloomberg\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/kmulka-bloomberg\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This wasn\u2019t working for me a bit earlier, but it looks to be back up now","We had a minor issue updating the docs after the latest release. It should work now :)."],"created_at":1687464897000,"updated_at":1687794303000,"closed_at":1687794303000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nGetting a 404 from the Hugging Face Datasets docs page:\r\nhttps:\/\/huggingface.co\/docs\/datasets\/index\r\n\n\n### Steps to reproduce the bug\n\n1. Go to URL https:\/\/huggingface.co\/docs\/datasets\/index\r\n2. Notice 404 not found\n\n### Expected behavior\n\nURL should either show docs or redirect to new location\n\n### Environment info\n\nhugginface.co","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5982\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5982\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5981","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5981\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5981\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5981\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5981","id":1770310087,"node_id":"I_kwDODunzps5phMnH","number":5981,"title":"Only two cores are getting used in sagemaker with pytorch 3.10 
kernel","user":{"login":"mmr-crexi","id":107141022,"node_id":"U_kgDOBmLXng","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/107141022?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mmr-crexi","html_url":"https:\/\/github.com\/mmr-crexi","followers_url":"https:\/\/api.github.com\/users\/mmr-crexi\/followers","following_url":"https:\/\/api.github.com\/users\/mmr-crexi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mmr-crexi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mmr-crexi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mmr-crexi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mmr-crexi\/orgs","repos_url":"https:\/\/api.github.com\/users\/mmr-crexi\/repos","events_url":"https:\/\/api.github.com\/users\/mmr-crexi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mmr-crexi\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I think it's more likely that this issue is related to PyTorch than Datasets, as PyTorch (on import) registers functions to execute when forking a process. Maybe this is the culprit: https:\/\/github.com\/pytorch\/pytorch\/issues\/99625","From reading that ticket, it may be down in mkl? Is it worth hotfixing in the meantime, with the express intention of turning it off? I know that's a horribly crufty solution, but it's also deeply frustrating to be limited to 2 cores for operations as simple as filtration.","This is too specific and unrelated to `datasets`, so this shouldn't be fixed here."],"created_at":1687463851000,"updated_at":1688662414000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nWhen using the newer pytorch 3.10 kernel, only 2 cores are being used by huggingface filter and map functions. The Pytorch 3.9 kernel would use as many cores as specified in the num_proc field.\r\n\r\nWe have solved this in our own code by placing the following snippet in the code that is called inside subprocesses:\r\n\r\n```os.sched_setaffinity(0, {i for i in range(1000)})```\r\n\r\nThe problem, as near as we can tell, us that once upon a time, cpu affinity was set using a bitmask (\"0xfffff\" and the like), and affinity recently changed to a list of processors rather than to using the mask. As such, only processors 1 and 17 are shown to be working in htop.\r\n![Selection_072](https:\/\/github.com\/huggingface\/datasets\/assets\/107141022\/04c5a824-5321-4531-afca-7bc84dff36b4)\r\n\r\n\r\nWhen running functions via `map`, the above resetting of affinity works to spread across the cores. When using `filter`, however, only two cores are active.\n\n### Steps to reproduce the bug\n\nRepro steps:\r\n\r\n1. Create an aws sagemaker instance\r\n2. use the pytorch 3_10 kernel\r\n3. Load a dataset\r\n4. run a filter operation\r\n5. watch as only 2 cores are used when num_proc > 2\r\n6. run a map operation\r\n7. watch as only 2 cores are used when num_proc > 2\r\n8. run a map operation with processor affinity reset inside the function called via map\r\n9. 
Watch as all cores run\r\n\r\n\n\n### Expected behavior\n\nAll specified cores are used via the num_proc argument.\n\n### Environment info\n\nAWS sagemaker with the following init script run in the terminal after instance creation:\r\n\r\nconda init bash\r\nbash\r\nconda activate pytorch_p310\r\npip install Wand PyPDF pytesseract datasets seqeval pdfplumber transformers pymupdf sentencepiece timm donut-python accelerate optimum xgboost\r\npython -m pip install 'git+https:\/\/github.com\/facebookresearch\/detectron2.git'\r\nsudo yum -y install htop\r\nsudo yum -y update\r\nsudo yum -y install wget libstdc++ autoconf automake libtool autoconf-archive pkg-config gcc gcc-c++ make libjpeg-devel libpng-devel libtiff-devel zlib-devel","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5981\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5981\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5980","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5980\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5980\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5980\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5980","id":1770255973,"node_id":"I_kwDODunzps5pg_Zl","number":5980,"title":"Viewing dataset card returns \u201c502 Bad Gateway\u201d","user":{"login":"tbenthompson","id":4241811,"node_id":"MDQ6VXNlcjQyNDE4MTE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4241811?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/tbenthompson","html_url":"https:\/\/github.com\/tbenthompson","followers_url":"https:\/\/api.github.com\/users\/tbenthompson\/followers","following_url":"https:\/\/api.github.com\/users\/tbenthompson\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/tbenthompson\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/tbenthompson\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/tbenthompson\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/tbenthompson\/orgs","repos_url":"https:\/\/api.github.com\/users\/tbenthompson\/repos","events_url":"https:\/\/api.github.com\/users\/tbenthompson\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/tbenthompson\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Can you try again? Maybe there was a minor outage.","Yes, it seems to be working now. In case it's helpful, the outage lasted several days. It was failing as late as yesterday morning. 
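A sketch of the affinity workaround from the issue body above, wrapped into a function usable with `map`/`filter`; the batch field names are hypothetical and the fix assumes a POSIX system:

```python
import os

def process_batch(batch):
    # Undo the inherited 2-core affinity mask before doing the real work.
    os.sched_setaffinity(0, range(os.cpu_count()))
    batch["n_chars"] = [len(t) for t in batch["text"]]  # placeholder work
    return batch

# ds = ds.map(process_batch, batched=True, num_proc=16)
```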
","we fixed something on the server side, glad it's fixed now"],"created_at":1687461288000,"updated_at":1687855099000,"closed_at":1687790565000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"The url is: https:\/\/huggingface.co\/datasets\/Confirm-Labs\/pile_ngrams_trigrams\r\n\r\nI am able to successfully view the \u201cFiles and versions\u201d tab: [Confirm-Labs\/pile_ngrams_trigrams at main](https:\/\/huggingface.co\/datasets\/Confirm-Labs\/pile_ngrams_trigrams\/tree\/main)\r\n\r\nAny help would be appreciated! Thanks! I hope this is the right place to report an issue like this.\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5980\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5980\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5979","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5979\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5979\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5979\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5979","id":1770198250,"node_id":"PR_kwDODunzps5TrxS_","number":5979,"title":"set dev version","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_5979). All of your documentation changes will be reflected on that endpoint.","
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008087 \/ 0.011353 (-0.003266) | 0.004691 \/ 0.011008 (-0.006317) | 0.121545 \/ 0.038508 (0.083037) | 0.057436 \/ 0.023109 (0.034326) | 0.368864 \/ 0.275898 (0.092966) | 0.457199 \/ 0.323480 (0.133719) | 0.006745 \/ 0.007986 (-0.001241) | 0.003689 \/ 0.004328 (-0.000640) | 0.090480 \/ 0.004250 (0.086229) | 0.071368 \/ 0.037052 (0.034316) | 0.372788 \/ 0.258489 (0.114299) | 0.429894 \/ 0.293841 (0.136053) | 0.037544 \/ 0.128546 (-0.091002) | 0.010142 \/ 0.075646 (-0.065505) | 0.420467 \/ 0.419271 (0.001196) | 0.064359 \/ 0.043533 (0.020826) | 0.370345 \/ 0.255139 (0.115206) | 0.405220 \/ 0.283200 (0.122020) | 0.028410 \/ 0.141683 (-0.113273) | 1.824845 \/ 1.452155 (0.372690) | 1.888109 \/ 1.492716 (0.395392) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.234585 \/ 0.018006 (0.216578) | 0.499965 \/ 0.000490 (0.499476) | 0.000461 \/ 0.000200 (0.000261) | 0.000064 \/ 0.000054 (0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.032294 \/ 0.037411 (-0.005117) | 0.131769 \/ 0.014526 (0.117243) | 0.146472 \/ 0.176557 (-0.030085) | 0.210035 \/ 0.737135 (-0.527100) | 0.145600 \/ 0.296338 (-0.150739) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.507455 \/ 0.215209 (0.292246) | 5.080090 \/ 2.077655 (3.002435) | 2.506104 \/ 1.504120 (1.001984) | 2.297655 \/ 1.541195 (0.756460) | 2.324920 \/ 1.468490 (0.856430) | 0.645003 \/ 4.584777 (-3.939774) | 
4.677856 \/ 3.745712 (0.932144) | 2.254179 \/ 5.269862 (-3.015683) | 1.280663 \/ 4.565676 (-3.285013) | 0.078809 \/ 0.424275 (-0.345466) | 0.014059 \/ 0.007607 (0.006452) | 0.628053 \/ 0.226044 (0.402009) | 6.327289 \/ 2.268929 (4.058360) | 2.957918 \/ 55.444624 (-52.486706) | 2.571568 \/ 6.876477 (-4.304909) | 2.708766 \/ 2.142072 (0.566694) | 0.772868 \/ 4.805227 (-4.032360) | 0.164835 \/ 6.500664 (-6.335829) | 0.075334 \/ 0.075469 (-0.000135) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.471930 \/ 1.841788 (-0.369858) | 17.917340 \/ 8.074308 (9.843032) | 15.719327 \/ 10.191392 (5.527935) | 0.191999 \/ 0.680424 (-0.488424) | 0.022464 \/ 0.534201 (-0.511737) | 0.511038 \/ 0.579283 (-0.068245) | 0.512050 \/ 0.434364 (0.077686) | 0.608711 \/ 0.540337 (0.068373) | 0.749660 \/ 1.386936 (-0.637276) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008028 \/ 0.011353 (-0.003325) | 0.004908 \/ 0.011008 (-0.006100) | 0.092294 \/ 0.038508 (0.053786) | 0.053051 \/ 0.023109 (0.029942) | 0.453862 \/ 0.275898 (0.177964) | 0.512548 \/ 0.323480 (0.189068) | 0.004817 \/ 0.007986 (-0.003168) | 0.005330 \/ 0.004328 (0.001002) | 0.095600 \/ 0.004250 (0.091350) | 0.068763 \/ 0.037052 (0.031710) | 0.453654 \/ 0.258489 (0.195165) | 0.504995 \/ 0.293841 (0.211154) | 0.038123 \/ 0.128546 (-0.090423) | 0.010650 \/ 0.075646 (-0.064996) | 0.102854 \/ 0.419271 (-0.316417) | 0.062973 \/ 0.043533 (0.019440) | 0.430420 \/ 0.255139 (0.175281) | 0.465448 \/ 0.283200 (0.182248) | 0.029736 \/ 0.141683 (-0.111947) | 1.844225 \/ 1.452155 (0.392070) | 1.934685 \/ 1.492716 (0.441968) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.227797 \/ 0.018006 (0.209791) | 0.467868 \/ 0.000490 (0.467378) | 0.004531 \/ 0.000200 (0.004331) | 0.000105 \/ 0.000054 (0.000051) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.035632 \/ 0.037411 (-0.001780) | 0.145943 \/ 0.014526 (0.131417) | 0.151944 \/ 0.176557 (-0.024613) | 0.220519 \/ 0.737135 (-0.516616) | 0.159732 \/ 0.296338 (-0.136606) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.520641 \/ 0.215209 (0.305432) | 5.184740 \/ 2.077655 (3.107086) | 2.538751 \/ 1.504120 (1.034631) | 2.316571 \/ 1.541195 (0.775377) | 2.387898 \/ 1.468490 (0.919408) | 0.614515 \/ 4.584777 (-3.970262) | 
4.573142 \/ 3.745712 (0.827430) | 4.657052 \/ 5.269862 (-0.612809) | 2.159664 \/ 4.565676 (-2.406013) | 0.079713 \/ 0.424275 (-0.344562) | 0.014462 \/ 0.007607 (0.006855) | 0.656611 \/ 0.226044 (0.430566) | 6.481630 \/ 2.268929 (4.212702) | 3.135047 \/ 55.444624 (-52.309577) | 2.757502 \/ 6.876477 (-4.118975) | 2.851488 \/ 2.142072 (0.709415) | 0.790795 \/ 4.805227 (-4.014432) | 0.172358 \/ 6.500664 (-6.328306) | 0.080255 \/ 0.075469 (0.004786) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.571391 \/ 1.841788 (-0.270396) | 19.025224 \/ 8.074308 (10.950916) | 17.079230 \/ 10.191392 (6.887838) | 0.172823 \/ 0.680424 (-0.507601) | 0.021845 \/ 0.534201 (-0.512356) | 0.522286 \/ 0.579283 (-0.056998) | 0.510406 \/ 0.434364 (0.076042) | 0.604830 \/ 0.540337 (0.064493) | 0.735466 \/ 1.386936 (-0.651471) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#4084609bdc40d173d1daa74ad2fe98f3ead72f8e \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.010025 \/ 0.011353 (-0.001328) | 0.005699 \/ 0.011008 (-0.005310) | 0.134194 \/ 0.038508 (0.095686) | 0.056154 \/ 0.023109 (0.033045) | 0.470091 \/ 0.275898 (0.194193) | 0.539225 \/ 0.323480 (0.215745) | 0.006659 \/ 0.007986 (-0.001326) | 0.004468 \/ 0.004328 (0.000140) | 0.110040 \/ 0.004250 (0.105790) | 0.074172 \/ 0.037052 (0.037119) | 0.497450 \/ 0.258489 (0.238961) | 0.535048 \/ 0.293841 (0.241207) | 0.051195 \/ 0.128546 (-0.077352) | 0.014926 \/ 0.075646 (-0.060721) | 0.461334 \/ 0.419271 (0.042062) | 0.073773 \/ 0.043533 (0.030240) | 0.450741 \/ 0.255139 (0.195602) | 0.474853 \/ 0.283200 (0.191653) | 0.036372 \/ 0.141683 (-0.105311) | 1.982873 \/ 1.452155 (0.530719) | 1.989912 \/ 1.492716 (0.497196) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.287817 \/ 0.018006 (0.269811) | 0.613415 \/ 0.000490 (0.612926) | 0.007082 \/ 0.000200 (0.006882) | 0.000100 \/ 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.031119 \/ 0.037411 (-0.006292) | 0.129886 \/ 0.014526 (0.115361) | 0.143492 \/ 0.176557 (-0.033065) | 0.208536 \/ 0.737135 (-0.528600) | 0.147081 \/ 0.296338 (-0.149257) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.668312 \/ 0.215209 (0.453103) | 6.568609 \/ 2.077655 (4.490955) | 2.708788 \/ 1.504120 (1.204668) | 2.366737 \/ 1.541195 (0.825542) | 2.392598 \/ 1.468490 (0.924108) | 0.967582 \/ 4.584777 (-3.617195) | 
5.582743 \/ 3.745712 (1.837031) | 3.021607 \/ 5.269862 (-2.248255) | 1.866402 \/ 4.565676 (-2.699275) | 0.115998 \/ 0.424275 (-0.308277) | 0.015571 \/ 0.007607 (0.007964) | 0.820069 \/ 0.226044 (0.594025) | 8.229725 \/ 2.268929 (5.960797) | 3.437068 \/ 55.444624 (-52.007557) | 2.902312 \/ 6.876477 (-3.974164) | 3.025874 \/ 2.142072 (0.883802) | 1.230359 \/ 4.805227 (-3.574868) | 0.237341 \/ 6.500664 (-6.263323) | 0.089923 \/ 0.075469 (0.014453) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.670970 \/ 1.841788 (-0.170818) | 19.667167 \/ 8.074308 (11.592859) | 21.624423 \/ 10.191392 (11.433031) | 0.231683 \/ 0.680424 (-0.448741) | 0.029145 \/ 0.534201 (-0.505056) | 0.543441 \/ 0.579283 (-0.035842) | 0.617510 \/ 0.434364 (0.183146) | 0.612662 \/ 0.540337 (0.072324) | 0.790589 \/ 1.386936 (-0.596347) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.010324 \/ 0.011353 (-0.001029) | 0.005339 \/ 0.011008 (-0.005669) | 0.104762 \/ 0.038508 (0.066254) | 0.052631 \/ 0.023109 (0.029522) | 0.485864 \/ 0.275898 (0.209966) | 0.595768 \/ 0.323480 (0.272288) | 0.007417 \/ 0.007986 (-0.000569) | 0.005229 \/ 0.004328 (0.000900) | 0.100775 \/ 0.004250 (0.096524) | 0.067144 \/ 0.037052 (0.030092) | 0.522269 \/ 0.258489 (0.263780) | 0.592597 \/ 0.293841 (0.298756) | 0.051101 \/ 0.128546 (-0.077446) | 0.015277 \/ 0.075646 (-0.060369) | 0.115530 \/ 0.419271 (-0.303741) | 0.071922 \/ 0.043533 (0.028390) | 0.490208 \/ 0.255139 (0.235069) | 0.578936 \/ 0.283200 (0.295736) | 0.040382 \/ 0.141683 (-0.101301) | 1.986059 \/ 1.452155 (0.533904) | 2.040600 \/ 1.492716 (0.547883) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.300399 \/ 0.018006 (0.282393) | 0.624702 \/ 0.000490 (0.624212) | 0.004908 \/ 0.000200 (0.004708) | 0.000155 \/ 0.000054 (0.000100) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.038031 \/ 0.037411 (0.000619) | 0.140353 \/ 0.014526 (0.125828) | 0.152600 \/ 0.176557 (-0.023956) | 0.219165 \/ 0.737135 (-0.517970) | 0.154232 \/ 0.296338 (-0.142106) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.698855 \/ 0.215209 (0.483646) | 7.125543 \/ 2.077655 (5.047889) | 3.251222 \/ 1.504120 (1.747102) | 2.953404 \/ 1.541195 (1.412209) | 3.051108 \/ 1.468490 (1.582618) | 0.962068 \/ 4.584777 (-3.622709) | 
5.789579 \/ 3.745712 (2.043867) | 5.193271 \/ 5.269862 (-0.076591) | 2.757886 \/ 4.565676 (-1.807790) | 0.111865 \/ 0.424275 (-0.312410) | 0.014684 \/ 0.007607 (0.007077) | 0.875967 \/ 0.226044 (0.649923) | 8.818359 \/ 2.268929 (6.549430) | 4.165216 \/ 55.444624 (-51.279408) | 3.372059 \/ 6.876477 (-3.504418) | 3.486886 \/ 2.142072 (1.344813) | 1.232276 \/ 4.805227 (-3.572951) | 0.238967 \/ 6.500664 (-6.261697) | 0.091584 \/ 0.075469 (0.016115) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.850755 \/ 1.841788 (0.008968) | 20.058756 \/ 8.074308 (11.984448) | 23.761271 \/ 10.191392 (13.569879) | 0.231826 \/ 0.680424 (-0.448598) | 0.030119 \/ 0.534201 (-0.504082) | 0.532614 \/ 0.579283 (-0.046669) | 0.628968 \/ 0.434364 (0.194604) | 0.628403 \/ 0.540337 (0.088066) | 0.745648 \/ 1.386936 (-0.641288) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#a8a797cc92e860c8d0df71e0aa826f4d2690713e \"CML watermark\")\n"],"created_at":1687458734000,"updated_at":1687459342000,"closed_at":1687458742000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5979","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5979","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5979.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5979.patch","merged_at":1687458742000},"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5979\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5979\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5978","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5978\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5978\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5978\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5978","id":1770187053,"node_id":"PR_kwDODunzps5Tru2_","number":5978,"title":"Release: 
2.13.1","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006173 \/ 0.011353 (-0.005180) | 0.003773 \/ 0.011008 (-0.007235) | 0.099499 \/ 0.038508 (0.060991) | 0.037918 \/ 0.023109 (0.014809) | 0.321329 \/ 0.275898 (0.045431) | 0.379739 \/ 0.323480 (0.056259) | 0.004664 \/ 0.007986 (-0.003322) | 0.002943 \/ 0.004328 (-0.001385) | 0.077759 \/ 0.004250 (0.073509) | 0.055271 \/ 0.037052 (0.018219) | 0.329428 \/ 0.258489 (0.070939) | 0.378731 \/ 0.293841 (0.084890) | 0.027737 \/ 0.128546 (-0.100810) | 0.008566 \/ 0.075646 (-0.067081) | 0.313220 \/ 0.419271 (-0.106052) | 0.047101 \/ 0.043533 (0.003568) | 0.316211 \/ 0.255139 (0.061072) | 0.341826 \/ 0.283200 (0.058626) | 0.020838 \/ 0.141683 (-0.120845) | 1.550064 \/ 1.452155 (0.097909) | 1.706518 \/ 1.492716 (0.213801) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.203093 \/ 0.018006 (0.185087) | 0.425345 \/ 0.000490 (0.424856) | 0.004800 \/ 0.000200 (0.004600) | 0.000077 \/ 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.024590 \/ 0.037411 (-0.012821) | 0.098115 \/ 0.014526 (0.083589) | 0.108274 \/ 0.176557 (-0.068282) | 0.170804 \/ 0.737135 (-0.566332) | 0.110560 \/ 0.296338 (-0.185778) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.425251 \/ 0.215209 (0.210042) | 4.239075 \/ 2.077655 (2.161421) | 1.955601 \/ 1.504120 (0.451481) | 1.774796 \/ 1.541195 (0.233602) | 1.826641 \/ 1.468490 (0.358150) | 0.558777 \/ 4.584777 (-4.026000) | 
3.361697 \/ 3.745712 (-0.384015) | 1.764468 \/ 5.269862 (-3.505394) | 1.032280 \/ 4.565676 (-3.533396) | 0.067872 \/ 0.424275 (-0.356403) | 0.010998 \/ 0.007607 (0.003391) | 0.525682 \/ 0.226044 (0.299637) | 5.254356 \/ 2.268929 (2.985427) | 2.384332 \/ 55.444624 (-53.060292) | 2.045578 \/ 6.876477 (-4.830898) | 2.170914 \/ 2.142072 (0.028841) | 0.674782 \/ 4.805227 (-4.130445) | 0.135351 \/ 6.500664 (-6.365314) | 0.066591 \/ 0.075469 (-0.008878) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.209181 \/ 1.841788 (-0.632606) | 14.044518 \/ 8.074308 (5.970210) | 13.184705 \/ 10.191392 (2.993313) | 0.130836 \/ 0.680424 (-0.549588) | 0.016582 \/ 0.534201 (-0.517619) | 0.360005 \/ 0.579283 (-0.219279) | 0.379519 \/ 0.434364 (-0.054845) | 0.422174 \/ 0.540337 (-0.118164) | 0.515546 \/ 1.386936 (-0.871390) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006293 \/ 0.011353 (-0.005060) | 0.003784 \/ 0.011008 (-0.007224) | 0.079248 \/ 0.038508 (0.040739) | 0.038452 \/ 0.023109 (0.015343) | 0.444727 \/ 0.275898 (0.168829) | 0.500535 \/ 0.323480 (0.177055) | 0.003455 \/ 0.007986 (-0.004531) | 0.002873 \/ 0.004328 (-0.001455) | 0.077439 \/ 0.004250 (0.073189) | 0.047855 \/ 0.037052 (0.010803) | 0.448049 \/ 0.258489 (0.189560) | 0.509517 \/ 0.293841 (0.215676) | 0.028359 \/ 0.128546 (-0.100188) | 0.008503 \/ 0.075646 (-0.067143) | 0.084961 \/ 0.419271 (-0.334310) | 0.042880 \/ 0.043533 (-0.000653) | 0.436628 \/ 0.255139 (0.181489) | 0.456574 \/ 0.283200 (0.173375) | 0.019539 \/ 0.141683 (-0.122144) | 1.561273 \/ 1.452155 (0.109118) | 1.572018 \/ 1.492716 (0.079301) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.230250 \/ 0.018006 (0.212244) | 0.415189 \/ 0.000490 (0.414700) | 0.003213 \/ 0.000200 (0.003013) | 0.000080 \/ 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025541 \/ 0.037411 (-0.011871) | 0.102326 \/ 0.014526 (0.087800) | 0.110258 \/ 0.176557 (-0.066298) | 0.162488 \/ 0.737135 (-0.574647) | 0.112782 \/ 0.296338 (-0.183556) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.457936 \/ 0.215209 (0.242727) | 4.581503 \/ 2.077655 (2.503848) | 2.237659 \/ 1.504120 (0.733540) | 2.029960 \/ 1.541195 (0.488765) | 2.082911 \/ 1.468490 (0.614421) | 0.556485 \/ 4.584777 (-4.028292) | 
3.384418 \/ 3.745712 (-0.361295) | 1.748809 \/ 5.269862 (-3.521053) | 1.034759 \/ 4.565676 (-3.530917) | 0.067500 \/ 0.424275 (-0.356776) | 0.011425 \/ 0.007607 (0.003818) | 0.561340 \/ 0.226044 (0.335295) | 5.623629 \/ 2.268929 (3.354701) | 2.733587 \/ 55.444624 (-52.711038) | 2.401578 \/ 6.876477 (-4.474899) | 2.524569 \/ 2.142072 (0.382496) | 0.673170 \/ 4.805227 (-4.132057) | 0.136681 \/ 6.500664 (-6.363983) | 0.068060 \/ 0.075469 (-0.007409) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.318651 \/ 1.841788 (-0.523137) | 14.362123 \/ 8.074308 (6.287815) | 14.385964 \/ 10.191392 (4.194572) | 0.149914 \/ 0.680424 (-0.530510) | 0.016877 \/ 0.534201 (-0.517324) | 0.358406 \/ 0.579283 (-0.220877) | 0.394349 \/ 0.434364 (-0.040015) | 0.422471 \/ 0.540337 (-0.117866) | 0.513807 \/ 1.386936 (-0.873129) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#1b9ce11d1b94e6178df663ff5fcad029849d10fb \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006272 \/ 0.011353 (-0.005080) | 0.003903 \/ 0.011008 (-0.007105) | 0.100180 \/ 0.038508 (0.061672) | 0.037799 \/ 0.023109 (0.014690) | 0.385627 \/ 0.275898 (0.109729) | 0.446518 \/ 0.323480 (0.123038) | 0.004811 \/ 0.007986 (-0.003175) | 0.003032 \/ 0.004328 (-0.001296) | 0.077063 \/ 0.004250 (0.072812) | 0.055564 \/ 0.037052 (0.018512) | 0.397346 \/ 0.258489 (0.138857) | 0.443242 \/ 0.293841 (0.149401) | 0.027904 \/ 0.128546 (-0.100642) | 0.008386 \/ 0.075646 (-0.067260) | 0.315013 \/ 0.419271 (-0.104259) | 0.047943 \/ 0.043533 (0.004410) | 0.378443 \/ 0.255139 (0.123304) | 0.411472 \/ 0.283200 (0.128272) | 0.020465 \/ 0.141683 (-0.121218) | 1.526594 \/ 1.452155 (0.074439) | 1.547018 \/ 1.492716 (0.054301) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.219377 \/ 0.018006 (0.201370) | 0.430254 \/ 0.000490 (0.429764) | 0.003218 \/ 0.000200 (0.003018) | 0.000072 \/ 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.023667 \/ 0.037411 (-0.013744) | 0.099143 \/ 0.014526 (0.084617) | 0.106044 \/ 0.176557 (-0.070513) | 0.166186 \/ 0.737135 (-0.570949) | 0.108736 \/ 0.296338 (-0.187603) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.437971 \/ 0.215209 (0.222762) | 4.363675 \/ 2.077655 (2.286021) | 2.011993 \/ 1.504120 (0.507873) | 1.845189 \/ 1.541195 (0.303994) | 1.831848 \/ 1.468490 (0.363358) | 0.562402 \/ 4.584777 (-4.022375) | 
3.365259 \/ 3.745712 (-0.380453) | 1.781491 \/ 5.269862 (-3.488371) | 1.023454 \/ 4.565676 (-3.542223) | 0.067857 \/ 0.424275 (-0.356418) | 0.011076 \/ 0.007607 (0.003469) | 0.532267 \/ 0.226044 (0.306223) | 5.340344 \/ 2.268929 (3.071415) | 2.388649 \/ 55.444624 (-53.055976) | 2.055373 \/ 6.876477 (-4.821104) | 2.205047 \/ 2.142072 (0.062975) | 0.672909 \/ 4.805227 (-4.132318) | 0.135244 \/ 6.500664 (-6.365420) | 0.066184 \/ 0.075469 (-0.009285) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.206838 \/ 1.841788 (-0.634950) | 13.967075 \/ 8.074308 (5.892767) | 13.143971 \/ 10.191392 (2.952579) | 0.143991 \/ 0.680424 (-0.536433) | 0.016673 \/ 0.534201 (-0.517527) | 0.376180 \/ 0.579283 (-0.203103) | 0.386550 \/ 0.434364 (-0.047814) | 0.440590 \/ 0.540337 (-0.099747) | 0.529974 \/ 1.386936 (-0.856962) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006299 \/ 0.011353 (-0.005054) | 0.003784 \/ 0.011008 (-0.007224) | 0.077875 \/ 0.038508 (0.039367) | 0.038689 \/ 0.023109 (0.015580) | 0.421684 \/ 0.275898 (0.145786) | 0.472649 \/ 0.323480 (0.149169) | 0.003570 \/ 0.007986 (-0.004415) | 0.004448 \/ 0.004328 (0.000120) | 0.077867 \/ 0.004250 (0.073616) | 0.049514 \/ 0.037052 (0.012462) | 0.375983 \/ 0.258489 (0.117494) | 0.470632 \/ 0.293841 (0.176791) | 0.028238 \/ 0.128546 (-0.100308) | 0.008462 \/ 0.075646 (-0.067185) | 0.082452 \/ 0.419271 (-0.336819) | 0.043617 \/ 0.043533 (0.000084) | 0.400874 \/ 0.255139 (0.145735) | 0.426191 \/ 0.283200 (0.142992) | 0.020602 \/ 0.141683 (-0.121081) | 1.567658 \/ 1.452155 (0.115504) | 1.572610 \/ 1.492716 (0.079893) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.246144 \/ 0.018006 (0.228138) | 0.419402 \/ 0.000490 (0.418913) | 0.001691 \/ 0.000200 (0.001491) | 0.000071 \/ 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.026105 \/ 0.037411 (-0.011306) | 0.104734 \/ 0.014526 (0.090208) | 0.110257 \/ 0.176557 (-0.066300) | 0.161429 \/ 0.737135 (-0.575706) | 0.114367 \/ 0.296338 (-0.181972) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.453352 \/ 0.215209 (0.238143) | 4.537924 \/ 2.077655 (2.460269) | 2.196193 \/ 1.504120 (0.692073) | 2.002087 \/ 1.541195 (0.460892) | 2.041722 \/ 1.468490 (0.573231) | 0.561643 \/ 4.584777 (-4.023134) | 
3.449108 \/ 3.745712 (-0.296605) | 2.862800 \/ 5.269862 (-2.407062) | 1.387895 \/ 4.565676 (-3.177782) | 0.068076 \/ 0.424275 (-0.356199) | 0.011568 \/ 0.007607 (0.003961) | 0.559279 \/ 0.226044 (0.333235) | 5.598738 \/ 2.268929 (3.329809) | 2.676649 \/ 55.444624 (-52.767975) | 2.334588 \/ 6.876477 (-4.541889) | 2.376215 \/ 2.142072 (0.234142) | 0.673109 \/ 4.805227 (-4.132118) | 0.137587 \/ 6.500664 (-6.363077) | 0.069131 \/ 0.075469 (-0.006338) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.307332 \/ 1.841788 (-0.534456) | 14.536036 \/ 8.074308 (6.461728) | 14.173734 \/ 10.191392 (3.982342) | 0.145143 \/ 0.680424 (-0.535281) | 0.016662 \/ 0.534201 (-0.517539) | 0.366901 \/ 0.579283 (-0.212383) | 0.394498 \/ 0.434364 (-0.039866) | 0.430546 \/ 0.540337 (-0.109792) | 0.518950 \/ 1.386936 (-0.867986) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#682d21e94ab1e64c11b583de39dc4c93f0101c5a \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008122 \/ 0.011353 (-0.003231) | 0.005585 \/ 0.011008 (-0.005424) | 0.121219 \/ 0.038508 (0.082711) | 0.047616 \/ 0.023109 (0.024507) | 0.440576 \/ 0.275898 (0.164678) | 0.491053 \/ 0.323480 (0.167573) | 0.004774 \/ 0.007986 (-0.003211) | 0.006758 \/ 0.004328 (0.002430) | 0.103852 \/ 0.004250 (0.099602) | 0.071560 \/ 0.037052 (0.034508) | 0.463107 \/ 0.258489 (0.204618) | 0.516904 \/ 0.293841 (0.223063) | 0.048052 \/ 0.128546 (-0.080494) | 0.013679 \/ 0.075646 (-0.061968) | 0.428383 \/ 0.419271 (0.009112) | 0.069468 \/ 0.043533 (0.025936) | 0.432593 \/ 0.255139 (0.177454) | 0.471810 \/ 0.283200 (0.188611) | 0.037541 \/ 0.141683 (-0.104142) | 1.823490 \/ 1.452155 (0.371335) | 1.922558 \/ 1.492716 (0.429842) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.252315 \/ 0.018006 (0.234309) | 0.541757 \/ 0.000490 (0.541267) | 0.000373 \/ 0.000200 (0.000173) | 0.000083 \/ 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030361 \/ 0.037411 (-0.007050) | 0.125928 \/ 0.014526 (0.111402) | 0.145102 \/ 0.176557 (-0.031455) | 0.209798 \/ 0.737135 (-0.527337) | 0.147349 \/ 0.296338 (-0.148990) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.627554 \/ 0.215209 (0.412345) | 5.917422 \/ 2.077655 (3.839767) | 2.491083 \/ 1.504120 (0.986963) | 2.147078 \/ 1.541195 (0.605883) | 2.167511 \/ 1.468490 (0.699021) | 0.903061 \/ 4.584777 (-3.681716) | 
5.518537 \/ 3.745712 (1.772825) | 2.654348 \/ 5.269862 (-2.615514) | 1.645121 \/ 4.565676 (-2.920556) | 0.103782 \/ 0.424275 (-0.320493) | 0.013048 \/ 0.007607 (0.005441) | 0.756732 \/ 0.226044 (0.530687) | 7.622873 \/ 2.268929 (5.353945) | 3.122689 \/ 55.444624 (-52.321936) | 2.537735 \/ 6.876477 (-4.338742) | 2.640090 \/ 2.142072 (0.498018) | 1.128635 \/ 4.805227 (-3.676593) | 0.228089 \/ 6.500664 (-6.272575) | 0.086207 \/ 0.075469 (0.010738) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.561591 \/ 1.841788 (-0.280197) | 18.110299 \/ 8.074308 (10.035991) | 20.718017 \/ 10.191392 (10.526625) | 0.225741 \/ 0.680424 (-0.454682) | 0.031738 \/ 0.534201 (-0.502463) | 0.530789 \/ 0.579283 (-0.048495) | 0.607364 \/ 0.434364 (0.173000) | 0.581593 \/ 0.540337 (0.041256) | 0.726033 \/ 1.386936 (-0.660903) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009323 \/ 0.011353 (-0.002030) | 0.005360 \/ 0.011008 (-0.005649) | 0.103608 \/ 0.038508 (0.065100) | 0.050158 \/ 0.023109 (0.027049) | 0.499906 \/ 0.275898 (0.224008) | 0.561005 \/ 0.323480 (0.237525) | 0.005093 \/ 0.007986 (-0.002892) | 0.008285 \/ 0.004328 (0.003956) | 0.103446 \/ 0.004250 (0.099196) | 0.061478 \/ 0.037052 (0.024426) | 0.494016 \/ 0.258489 (0.235527) | 0.537550 \/ 0.293841 (0.243709) | 0.048829 \/ 0.128546 (-0.079717) | 0.017032 \/ 0.075646 (-0.058614) | 0.107748 \/ 0.419271 (-0.311524) | 0.065607 \/ 0.043533 (0.022074) | 0.488709 \/ 0.255139 (0.233570) | 0.512023 \/ 0.283200 (0.228823) | 0.032067 \/ 0.141683 (-0.109616) | 1.907585 \/ 1.452155 (0.455431) | 1.960994 \/ 1.492716 (0.468278) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.278378 \/ 0.018006 (0.260371) | 0.551474 \/ 0.000490 (0.550985) | 0.006886 \/ 0.000200 (0.006686) | 0.000106 \/ 0.000054 (0.000051) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030674 \/ 0.037411 (-0.006737) | 0.135179 \/ 0.014526 (0.120654) | 0.133703 \/ 0.176557 (-0.042853) | 0.198923 \/ 0.737135 (-0.538212) | 0.155108 \/ 0.296338 (-0.141231) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.690566 \/ 0.215209 (0.475357) | 6.789594 \/ 2.077655 (4.711940) | 2.940668 \/ 1.504120 (1.436549) | 2.562431 \/ 1.541195 (1.021236) | 2.554232 \/ 1.468490 (1.085742) | 0.888470 \/ 4.584777 (-3.696307) | 
5.672318 \/ 3.745712 (1.926606) | 2.741626 \/ 5.269862 (-2.528236) | 1.818336 \/ 4.565676 (-2.747340) | 0.110434 \/ 0.424275 (-0.313841) | 0.014114 \/ 0.007607 (0.006507) | 0.830632 \/ 0.226044 (0.604588) | 8.270787 \/ 2.268929 (6.001859) | 3.723486 \/ 55.444624 (-51.721139) | 2.993671 \/ 6.876477 (-3.882806) | 2.918273 \/ 2.142072 (0.776201) | 1.105337 \/ 4.805227 (-3.699891) | 0.222976 \/ 6.500664 (-6.277688) | 0.085290 \/ 0.075469 (0.009820) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.816027 \/ 1.841788 (-0.025760) | 18.496850 \/ 8.074308 (10.422541) | 20.457032 \/ 10.191392 (10.265640) | 0.243533 \/ 0.680424 (-0.436891) | 0.027044 \/ 0.534201 (-0.507157) | 0.500752 \/ 0.579283 (-0.078531) | 0.620963 \/ 0.434364 (0.186599) | 0.607995 \/ 0.540337 (0.067658) | 0.722915 \/ 1.386936 (-0.664021) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#682d21e94ab1e64c11b583de39dc4c93f0101c5a \"CML watermark\")\n"],"created_at":1687458191000,"updated_at":1687459224000,"closed_at":1687458616000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5978","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5978","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5978.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5978.patch","merged_at":1687458616000},"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5978\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5978\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5976","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5976\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5976\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5976\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5976","id":1768503913,"node_id":"PR_kwDODunzps5TmAFp","number":5976,"title":"Avoid stuck map operation when subprocesses 
crashes","user":{"login":"pappacena","id":1213561,"node_id":"MDQ6VXNlcjEyMTM1NjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1213561?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/pappacena","html_url":"https:\/\/github.com\/pappacena","followers_url":"https:\/\/api.github.com\/users\/pappacena\/followers","following_url":"https:\/\/api.github.com\/users\/pappacena\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/pappacena\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/pappacena\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/pappacena\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/pappacena\/orgs","repos_url":"https:\/\/api.github.com\/users\/pappacena\/repos","events_url":"https:\/\/api.github.com\/users\/pappacena\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/pappacena\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Do you think this can be fixed at the Pool level ? Ideally it should be the Pool responsibility to handle this, not the `map` code. We could even subclass Pool if needed (at least the one from `multiprocess`)","@lhoestq it makes sense to me. Just pushed a refactoring creating a `class ProcessPool(multiprocess.pool.Pool)` to keep track of the PID changes.","_The documentation is not available anymore as the PR was closed or merged._","I managed to raise an error without subclassing Pool with two additions to `iflatmap_unordered`:\r\n\r\n1. at the beggining\r\n```python\r\noriginal_pool = list(pool._pool)\r\n```\r\n\r\n2. in the loop\r\n```python\r\nif any(async_result._pool != original_pool for async_result in async_results) and queue.empty():\r\n raise RuntimeError(\r\n \"One of the subprocesses has abruptly died during map operation.\"\r\n \"To debug the error, disable multiprocessing.\"\r\n )\r\n```\r\n\r\nIt's still a fix that only works for `iflatmap_unordered` (so not for map, imap etc) but is maybe simpler that subclassing. It also works for both multiprocessing.Pool and multiprocess.Pool","@lhoestq sorry for the delay. Busy weeks here. \r\n\r\nI just pushed the change you requested. It looks closer to the original proposal, actually.\r\n\r\nIt seems that `map` actually uses `iflatmap_unordered` ([here](https:\/\/github.com\/huggingface\/datasets\/blob\/819bb4346434912eb405ce3f3e9f21dc25a2fe85\/src\/datasets\/arrow_dataset.py#L1509)). I think this solution works fine for the `map` method (which is the one being tested by the new `tests\/test_arrow_dataset.py::BaseDatasetTest::test_map_crash_subprocess`, right?).","Yes fixing iflatmap_unordered does fix Dataset.map, but it won't fix any Pool.map that we may use elsewhere so we'll have to keep this in mind.","It looks all good to me, feel free to fix code formatting by running `make style` and we can merge :)","> Yes fixing iflatmap_unordered does fix Dataset.map, but it won't fix any Pool.map that we may use elsewhere so we'll have to keep this in mind.\r\n\r\nRight, I agree. The best way moving forward is probably not using the buggy `multiprocess.Pool` anymore, and replace it with `concurrent.futures.ProcessPoolExecutor` as much as possible.\r\n\r\nAnyway, I've run `make style` now. Thanks for the support!","It looks like checking the async_result._pool doesn't always work - sorry about that. We might just go back to your original solution then. 
Would also be cool to open an issue in `multiprocess` to ask if they have a solution or if they plan to fix this.","@lhoestq no problem! Reverted to the previous version.\r\n\r\nTBH, given the discussions [in this python issue](https:\/\/github.com\/python\/cpython\/issues\/66587), I don't think a fix for this error in `multiprocess` will be merged upstream any time soon...","
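For context, the detection idea discussed in this thread can be illustrated with a minimal, self-contained sketch. This is not the code merged in the PR: it polls a snapshot of the pool's worker processes (via the private `pool._pool` attribute that the snippets above also rely on) and raises instead of hanging when one of them is killed. The helper name and polling interval are illustrative assumptions.

```python
import multiprocessing
import os
import time


def work(i):
    # Simulate an abrupt worker death (OOM killer, segfault, SIGKILL, ...).
    if i == 2:
        os.kill(os.getpid(), 9)
    return i * i


def map_with_crash_detection(pool, fn, items, poll_interval=0.1):
    # Snapshot the worker processes: Pool silently replaces dead workers,
    # so comparing against the snapshot is what reveals that an original
    # worker died while its task was still pending.
    original_workers = list(pool._pool)
    async_result = pool.map_async(fn, items)
    while not async_result.ready():
        if any(not p.is_alive() for p in original_workers):
            raise RuntimeError(
                "One of the subprocesses has abruptly died during map operation. "
                "To debug the error, disable multiprocessing."
            )
        time.sleep(poll_interval)
    return async_result.get()


if __name__ == "__main__":
    with multiprocessing.Pool(4) as pool:
        # Without the detection loop, this would hang forever instead of raising.
        print(map_with_crash_detection(pool, work, range(5)))
```

Note that `concurrent.futures.ProcessPoolExecutor` already behaves this way out of the box: it raises `BrokenProcessPool` when a worker dies abruptly, which is why it comes up in this thread as the longer-term replacement.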
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006060 \/ 0.011353 (-0.005293) | 0.003695 \/ 0.011008 (-0.007313) | 0.080484 \/ 0.038508 (0.041976) | 0.061894 \/ 0.023109 (0.038785) | 0.312510 \/ 0.275898 (0.036612) | 0.352398 \/ 0.323480 (0.028918) | 0.004638 \/ 0.007986 (-0.003348) | 0.002918 \/ 0.004328 (-0.001410) | 0.062932 \/ 0.004250 (0.058681) | 0.050859 \/ 0.037052 (0.013807) | 0.316812 \/ 0.258489 (0.058323) | 0.357684 \/ 0.293841 (0.063843) | 0.027622 \/ 0.128546 (-0.100924) | 0.008012 \/ 0.075646 (-0.067634) | 0.260970 \/ 0.419271 (-0.158302) | 0.045807 \/ 0.043533 (0.002275) | 0.321235 \/ 0.255139 (0.066096) | 0.343162 \/ 0.283200 (0.059962) | 0.021136 \/ 0.141683 (-0.120547) | 1.465886 \/ 1.452155 (0.013731) | 1.500216 \/ 1.492716 (0.007500) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.187286 \/ 0.018006 (0.169279) | 0.428724 \/ 0.000490 (0.428235) | 0.003029 \/ 0.000200 (0.002829) | 0.000063 \/ 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.022703 \/ 0.037411 (-0.014708) | 0.072740 \/ 0.014526 (0.058215) | 0.083436 \/ 0.176557 (-0.093120) | 0.144559 \/ 0.737135 (-0.592577) | 0.083958 \/ 0.296338 (-0.212380) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.435729 \/ 0.215209 (0.220520) | 4.351146 \/ 2.077655 (2.273491) | 2.316627 \/ 1.504120 (0.812508) | 2.144587 \/ 1.541195 (0.603393) | 2.209182 \/ 1.468490 (0.740692) | 0.501131 \/ 4.584777 (-4.083646) | 
3.077085 \/ 3.745712 (-0.668627) | 4.353706 \/ 5.269862 (-0.916156) | 2.621523 \/ 4.565676 (-1.944154) | 0.058976 \/ 0.424275 (-0.365299) | 0.006467 \/ 0.007607 (-0.001141) | 0.506690 \/ 0.226044 (0.280646) | 5.085787 \/ 2.268929 (2.816858) | 2.731336 \/ 55.444624 (-52.713289) | 2.419451 \/ 6.876477 (-4.457025) | 2.583649 \/ 2.142072 (0.441577) | 0.589869 \/ 4.805227 (-4.215359) | 0.131040 \/ 6.500664 (-6.369624) | 0.061332 \/ 0.075469 (-0.014137) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.220542 \/ 1.841788 (-0.621245) | 18.169643 \/ 8.074308 (10.095335) | 13.251704 \/ 10.191392 (3.060312) | 0.142952 \/ 0.680424 (-0.537472) | 0.016639 \/ 0.534201 (-0.517562) | 0.334851 \/ 0.579283 (-0.244432) | 0.361865 \/ 0.434364 (-0.072499) | 0.380933 \/ 0.540337 (-0.159404) | 0.527374 \/ 1.386936 (-0.859562) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006319 \/ 0.011353 (-0.005034) | 0.003778 \/ 0.011008 (-0.007231) | 0.062388 \/ 0.038508 (0.023880) | 0.062228 \/ 0.023109 (0.039119) | 0.373727 \/ 0.275898 (0.097829) | 0.399442 \/ 0.323480 (0.075962) | 0.005434 \/ 0.007986 (-0.002551) | 0.003020 \/ 0.004328 (-0.001308) | 0.062774 \/ 0.004250 (0.058524) | 0.052784 \/ 0.037052 (0.015732) | 0.376428 \/ 0.258489 (0.117939) | 0.405039 \/ 0.293841 (0.111198) | 0.027884 \/ 0.128546 (-0.100662) | 0.008086 \/ 0.075646 (-0.067561) | 0.067078 \/ 0.419271 (-0.352194) | 0.042927 \/ 0.043533 (-0.000606) | 0.372142 \/ 0.255139 (0.117003) | 0.389604 \/ 0.283200 (0.106405) | 0.021582 \/ 0.141683 (-0.120101) | 1.473332 \/ 1.452155 (0.021177) | 1.536018 \/ 1.492716 (0.043302) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.184729 \/ 0.018006 (0.166723) | 0.421065 \/ 0.000490 (0.420575) | 0.002681 \/ 0.000200 (0.002481) | 0.000070 \/ 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.026067 \/ 0.037411 (-0.011344) | 0.077138 \/ 0.014526 (0.062612) | 0.085178 \/ 0.176557 (-0.091379) | 0.139681 \/ 0.737135 (-0.597454) | 0.087528 \/ 0.296338 (-0.208810) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.444899 \/ 0.215209 (0.229690) | 4.459168 \/ 2.077655 (2.381513) | 2.408792 \/ 1.504120 (0.904672) | 2.237243 \/ 1.541195 (0.696048) | 2.296298 \/ 1.468490 (0.827808) | 0.498508 \/ 4.584777 (-4.086269) | 
3.067064 \/ 3.745712 (-0.678648) | 4.470577 \/ 5.269862 (-0.799284) | 2.701972 \/ 4.565676 (-1.863705) | 0.057711 \/ 0.424275 (-0.366564) | 0.006443 \/ 0.007607 (-0.001164) | 0.524046 \/ 0.226044 (0.298002) | 5.229928 \/ 2.268929 (2.961000) | 2.862101 \/ 55.444624 (-52.582523) | 2.545972 \/ 6.876477 (-4.330504) | 2.606459 \/ 2.142072 (0.464387) | 0.593285 \/ 4.805227 (-4.211942) | 0.124913 \/ 6.500664 (-6.375751) | 0.061942 \/ 0.075469 (-0.013527) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.322162 \/ 1.841788 (-0.519625) | 18.745796 \/ 8.074308 (10.671488) | 13.955443 \/ 10.191392 (3.764051) | 0.145610 \/ 0.680424 (-0.534814) | 0.016817 \/ 0.534201 (-0.517384) | 0.331180 \/ 0.579283 (-0.248103) | 0.343019 \/ 0.434364 (-0.091345) | 0.379459 \/ 0.540337 (-0.160878) | 0.526403 \/ 1.386936 (-0.860533) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#aca4cdcc79f16ec5157a2a3a665fdef0e3aa176d \"CML watermark\")\n"],"created_at":1687382311000,"updated_at":1688983119000,"closed_at":1688982607000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5976","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5976","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5976.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5976.patch","merged_at":1688982607000},"body":"I've been using Dataset.map() with `num_proc=os.cpu_count()` to leverage multicore processing for my datasets, but from time to time I get stuck processes waiting forever. 
Apparently, when one of the subprocesses is abruptly killed (OOM killer, segfault, SIGKILL, etc), the main process keeps waiting for the async task sent to that child process to finish.\r\n\r\nIt seems to be easy to reproduce the issue with the following script:\r\n\r\n```\r\nimport os\r\nfrom datasets import Dataset, Features, Value\r\n\r\n\r\ndef do_stuck(item):\r\n os.kill(os.getpid(), 9)\r\n\r\ndata = {\r\n \"col1\": list(range(5)),\r\n \"col2\": list(range(5)),\r\n}\r\n\r\nds = Dataset.from_dict(\r\n data,\r\n features=Features({\r\n \"col1\": Value(\"int64\"),\r\n \"col2\": Value(\"int64\"),\r\n }),\r\n)\r\n\r\nprint(ds.map(do_stuck, num_proc=4))\r\n```\r\n\r\nThis is an old behavior in Python, which apparently was fixed a few years ago in `concurrent.futures.ProcessPoolExecutor` ([ref](https:\/\/bugs.python.org\/issue9205)), but not in `multiprocessing.pool.Pool` \/ `multiprocess.pool.Pool`, which is used by `Dataset.map` ([ref](https:\/\/bugs.python.org\/issue22393)).\r\n\r\nThis PR is an idea to try to detect when a child process gets killed, and raises a `RuntimeError` warning the dataset.map() caller.\r\n\r\nEDIT: Related proposal for future improvement: https:\/\/github.com\/huggingface\/datasets\/discussions\/5977","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5976\/reactions","total_count":2,"+1":2,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5976\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5975","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5975\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5975\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5975\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5975","id":1768271343,"node_id":"I_kwDODunzps5pZa3v","number":5975,"title":"Streaming Dataset behind Proxy - FileNotFoundError","user":{"login":"Veluchs","id":135350576,"node_id":"U_kgDOCBFJMA","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/135350576?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Veluchs","html_url":"https:\/\/github.com\/Veluchs","followers_url":"https:\/\/api.github.com\/users\/Veluchs\/followers","following_url":"https:\/\/api.github.com\/users\/Veluchs\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Veluchs\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Veluchs\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Veluchs\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Veluchs\/orgs","repos_url":"https:\/\/api.github.com\/users\/Veluchs\/repos","events_url":"https:\/\/api.github.com\/users\/Veluchs\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Veluchs\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Duplicate of #","Hi ! 
can you try to set the upper case environment variables `HTTP_PROXY` and `HTTPS_PROXY` ?\r\n\r\nWe use `aiohttp` for streaming and it uses case sensitive environment variables","Hi, thanks for the quick reply.\r\n\r\nI set the uppercase env variables with\r\n\r\n`\r\nos.environ['HTTP_PROXY'] = \"http:\/\/example.com:xxxx\" \r\nos.environ['HTTPS_PROXY'] = \"http:\/\/example.com:xxxx\" \r\n`\r\n\r\nHowever, I still get the same error.\r\n\r\nOne thing that could be helpfull: When downloading a dataset without streaming i get the following message:\r\n_HF google storage unreachable. Downloading and preparing it from source_.\r\nThe download does however work as expected.\r\n","Are you able to use `aiohttp` to get the file at `https:\/\/huggingface.co\/datasets\/facebook\/voxpopuli\/resolve\/main\/data\/n_files.json` using your proxy ?","It only works when passing trust_env=True when creating the ClientSession, as well as setting ssl=False.\r\n\r\nWorking Example:\r\n\r\n```\r\nimport os\r\n\r\nos.environ['HTTP_PROXY'] = \"xyz\"\r\nos.environ['HTTPS_PROXY'] = \"xyz\"\r\n\r\nimport asyncio\r\nimport aiohttp\r\n\r\nasync def download_pep(url):\r\n async with aiohttp.ClientSession(trust_env=True) as session:\r\n print(\"1\")\r\n async with session.get(url, ssl=False) as resp:\r\n print(\"2\")\r\n content = await resp.text()\r\n print(content)\r\n return content\r\n\r\nasyncio.run(download_pep(\"https:\/\/huggingface.co\/datasets\/facebook\/voxpopuli\/resolve\/main\/data\/n_files.json\"))\r\n```\r\n\r\n\r\n\r\nSSL Verification has been a problem with other packages as well. Usually I circumvent the problem by setting\r\n```\r\nimport ssl\r\nssl._create_default_https_context = ssl._create_unverified_context\r\n```\r\n(probably not the best idea for security), although here aiohttp does not seem to use this default context.","We do pass `trust_env` as well. Could you share the full stack trace you get when streaming using `datasets` ? That could help locate where we might have forgotten to pass `trust_env`","Is there a way to disable ssl verification when streaming a dataset. 
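(As a rough, untested sketch of the subclass idea mentioned further down in this thread — disabling certificate verification is a security risk, and the helper names here are assumptions:)

```python
import aiohttp
import fsspec
from fsspec.implementations.http import HTTPFileSystem

async def get_unverified_client(**kwargs):
    # trust_env=True honors HTTP_PROXY/HTTPS_PROXY; ssl=False skips
    # certificate verification, which weakens security
    return aiohttp.ClientSession(
        connector=aiohttp.TCPConnector(ssl=False), trust_env=True, **kwargs
    )

class UnverifiedHTTPFileSystem(HTTPFileSystem):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, get_client=get_unverified_client, **kwargs)

# replace the registered handler for https:// URLs before streaming
fsspec.register_implementation("https", UnverifiedHTTPFileSystem, clobber=True)
```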
I suspect this might be the isssue with my proxy.\r\n\r\n\r\nHere you go:\r\n\r\n```\r\nFileNotFoundError Traceback (most recent call last)\r\nCell In[8], line 3\r\n 1 from datasets import load_dataset\r\n----> 3 ds = load_dataset(\"facebook\/voxpopuli\", name=\"de\", streaming=True)\r\n 5 sample = next(iter(ds))\r\n\r\nFile [~\/.conda\/envs\/audio_hf\/lib\/python3.10\/site-packages\/datasets\/load.py:1790](https:\/\/vscode-remote+ssh-002dremote-002bml-002er-002dsoftware-002eat.vscode-resource.vscode-cdn.net\/home\/wrsbri\/projects\/audio_course\/~\/.conda\/envs\/audio_hf\/lib\/python3.10\/site-packages\/datasets\/load.py:1790), in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)\r\n 1788 # Return iterable dataset in case of streaming\r\n 1789 if streaming:\r\n-> 1790 return builder_instance.as_streaming_dataset(split=split)\r\n 1792 # Some datasets are already processed on the HF google storage\r\n 1793 # Don't try downloading from Google storage for the packaged datasets as text, json, csv or pandas\r\n 1794 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES\r\n\r\nFile [~\/.conda\/envs\/audio_hf\/lib\/python3.10\/site-packages\/datasets\/builder.py:1281](https:\/\/vscode-remote+ssh-002dremote-002bml-002er-002dsoftware-002eat.vscode-resource.vscode-cdn.net\/home\/wrsbri\/projects\/audio_course\/~\/.conda\/envs\/audio_hf\/lib\/python3.10\/site-packages\/datasets\/builder.py:1281), in DatasetBuilder.as_streaming_dataset(self, split, base_path)\r\n 1274 dl_manager = StreamingDownloadManager(\r\n 1275 base_path=base_path or self.base_path,\r\n 1276 download_config=DownloadConfig(use_auth_token=self.use_auth_token, storage_options=self.storage_options),\r\n 1277 dataset_name=self.name,\r\n 1278 data_dir=self.config.data_dir,\r\n 1279 )\r\n 1280 self._check_manual_download(dl_manager)\r\n-> 1281 splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}\r\n 1282 # By default, return all splits\r\n 1283 if split is None:\r\n\r\nFile [~\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/facebook--voxpopuli\/b5ff837284f0778eefe0f642734e142d8c3f574eba8c9c8a4b13602297f73604\/voxpopuli.py:120](https:\/\/vscode-remote+ssh-002dremote-002bml-002er-002dsoftware-002eat.vscode-resource.vscode-cdn.net\/home\/wrsbri\/projects\/audio_course\/~\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/facebook--voxpopuli\/b5ff837284f0778eefe0f642734e142d8c3f574eba8c9c8a4b13602297f73604\/voxpopuli.py:120), in Voxpopuli._split_generators(self, dl_manager)\r\n 118 def _split_generators(self, dl_manager):\r\n 119 n_shards_path = dl_manager.download_and_extract(_N_SHARDS_FILE)\r\n--> 120 with open(n_shards_path) as f:\r\n 121 n_shards = json.load(f)\r\n 123 if self.config.name == \"en_accented\":\r\n\r\nFile [~\/.conda\/envs\/audio_hf\/lib\/python3.10\/site-packages\/datasets\/streaming.py:71](https:\/\/vscode-remote+ssh-002dremote-002bml-002er-002dsoftware-002eat.vscode-resource.vscode-cdn.net\/home\/wrsbri\/projects\/audio_course\/~\/.conda\/envs\/audio_hf\/lib\/python3.10\/site-packages\/datasets\/streaming.py:71), in extend_module_for_streaming..wrap_auth..wrapper(*args, **kwargs)\r\n 69 @wraps(function)\r\n 70 def wrapper(*args, **kwargs):\r\n---> 71 return function(*args, use_auth_token=use_auth_token, **kwargs)\r\n\r\nFile 
[~\/.conda\/envs\/audio_hf\/lib\/python3.10\/site-packages\/datasets\/download\/streaming_download_manager.py:517](https:\/\/vscode-remote+ssh-002dremote-002bml-002er-002dsoftware-002eat.vscode-resource.vscode-cdn.net\/home\/wrsbri\/projects\/audio_course\/~\/.conda\/envs\/audio_hf\/lib\/python3.10\/site-packages\/datasets\/download\/streaming_download_manager.py:517), in xopen(file, mode, use_auth_token, *args, **kwargs)\r\n 515 except FileNotFoundError:\r\n 516 if file.startswith(config.HF_ENDPOINT):\r\n--> 517 raise FileNotFoundError(\r\n 518 file + \"\\nIf the repo is private or gated, make sure to log in with `huggingface-cli login`.\"\r\n 519 ) from None\r\n 520 else:\r\n 521 raise\r\n\r\nFileNotFoundError: https:\/\/huggingface.co\/datasets\/facebook\/voxpopuli\/resolve\/main\/data\/n_files.json\r\nIf the repo is private or gated, make sure to log in with `huggingface-cli login`.\r\n```","> Is there a way to disable ssl verification when streaming a dataset.\r\n\r\nI don't think so.\r\n\r\nWe use `fsspec` HTTPFileSystem implementation that is based on `aiohttp`. If you register a subclass of HTTPFileSystem that has SSL disabled by default it could work, but I wouldn't recommended it because it can raise security issues.","Okay thanks for your help! I guess I have to figure out how to improve the proxy environment \/ see if I can make it work with ssl connections."],"created_at":1687374602000,"updated_at":1688104539000,"closed_at":1688104538000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\r\n\r\nWhen trying to stream a dataset i get the following error after a few minutes of waiting.\r\n\r\n```\r\nFileNotFoundError: https:\/\/huggingface.co\/datasets\/facebook\/voxpopuli\/resolve\/main\/data\/n_files.json\r\nIf the repo is private or gated, make sure to log in with `huggingface-cli login`.\r\n```\r\n\r\nI have already set the proxy environment variables. Downloading a Dataset without streaming works as expected.\r\nStill i suspect that this is connected to being behind a proxy.\r\n\r\nIs there a way to set the proxy for streaming datasets? 
Possibly a keyword argument that gets passed to ffspec?\r\n\r\n### Steps to reproduce the bug\r\n\r\nThis is the code i use.\r\n\r\n```\r\nimport os\r\nos.environ['http_proxy'] = \"http:\/\/example.com:xxxx\" \r\nos.environ['https_proxy'] = \"http:\/\/example.com:xxxx\" \r\n\r\n\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(\"facebook\/voxpopuli\", name=\"de\", streaming=True)\r\n```\r\n\r\n### Expected behavior\r\n\r\nI would expect the streaming functionality to use the set proxy settings.\r\n\r\n### Environment info\r\n\r\n\r\n- `datasets` version: 2.13.0\r\n- Platform: Linux-5.15.0-73-generic-x86_64-with-glibc2.35\r\n- Python version: 3.10.11\r\n- Huggingface_hub version: 0.15.1\r\n- PyArrow version: 11.0.0\r\n- Pandas version: 2.0.2\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5975\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5975\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5974","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5974\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5974\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5974\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5974","id":1767981231,"node_id":"PR_kwDODunzps5TkXCb","number":5974,"title":"Deprecate `errors` param in favor of `encoding_errors` in text builder","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
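To illustrate the kind of change the PR title describes, a minimal sketch of the usual keyword-deprecation pattern (names are illustrative assumptions, not the PR's actual code):

```python
import warnings

def read_text(path, encoding_errors="strict", errors="deprecated"):
    # accept the old keyword for now, warn, and forward it to the new one
    if errors != "deprecated":
        warnings.warn(
            "'errors' was deprecated in favor of 'encoding_errors' "
            "and will be removed in a future version",
            FutureWarning,
        )
        encoding_errors = errors
    with open(path, encoding="utf-8", errors=encoding_errors) as f:
        return f.read()
```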
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006518 \/ 0.011353 (-0.004835) | 0.004121 \/ 0.011008 (-0.006887) | 0.103350 \/ 0.038508 (0.064842) | 0.045030 \/ 0.023109 (0.021920) | 0.351670 \/ 0.275898 (0.075772) | 0.408110 \/ 0.323480 (0.084630) | 0.003883 \/ 0.007986 (-0.004102) | 0.003352 \/ 0.004328 (-0.000977) | 0.078786 \/ 0.004250 (0.074535) | 0.063977 \/ 0.037052 (0.026925) | 0.369759 \/ 0.258489 (0.111270) | 0.415103 \/ 0.293841 (0.121262) | 0.033069 \/ 0.128546 (-0.095477) | 0.008863 \/ 0.075646 (-0.066783) | 0.353660 \/ 0.419271 (-0.065611) | 0.055714 \/ 0.043533 (0.012181) | 0.350458 \/ 0.255139 (0.095319) | 0.369505 \/ 0.283200 (0.086305) | 0.022822 \/ 0.141683 (-0.118861) | 1.537588 \/ 1.452155 (0.085433) | 1.590569 \/ 1.492716 (0.097853) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.206826 \/ 0.018006 (0.188819) | 0.471625 \/ 0.000490 (0.471135) | 0.005188 \/ 0.000200 (0.004988) | 0.000316 \/ 0.000054 (0.000261) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.028148 \/ 0.037411 (-0.009263) | 0.111941 \/ 0.014526 (0.097415) | 0.122106 \/ 0.176557 (-0.054451) | 0.181127 \/ 0.737135 (-0.556009) | 0.127534 \/ 0.296338 (-0.168805) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.409520 \/ 0.215209 (0.194311) | 4.098455 \/ 2.077655 (2.020800) | 1.852447 \/ 1.504120 (0.348327) | 1.657036 \/ 1.541195 (0.115842) | 1.709624 \/ 1.468490 (0.241134) | 0.542806 \/ 4.584777 (-4.041970) | 
3.809352 \/ 3.745712 (0.063640) | 1.855412 \/ 5.269862 (-3.414449) | 1.109180 \/ 4.565676 (-3.456497) | 0.066801 \/ 0.424275 (-0.357474) | 0.011832 \/ 0.007607 (0.004225) | 0.518338 \/ 0.226044 (0.292293) | 5.190108 \/ 2.268929 (2.921179) | 2.320602 \/ 55.444624 (-53.124023) | 1.991416 \/ 6.876477 (-4.885060) | 2.106989 \/ 2.142072 (-0.035084) | 0.668914 \/ 4.805227 (-4.136313) | 0.145325 \/ 6.500664 (-6.355340) | 0.065145 \/ 0.075469 (-0.010324) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.254706 \/ 1.841788 (-0.587082) | 14.707264 \/ 8.074308 (6.632956) | 14.615423 \/ 10.191392 (4.424031) | 0.170764 \/ 0.680424 (-0.509659) | 0.017905 \/ 0.534201 (-0.516296) | 0.435606 \/ 0.579283 (-0.143677) | 0.434648 \/ 0.434364 (0.000284) | 0.520813 \/ 0.540337 (-0.019524) | 0.633902 \/ 1.386936 (-0.753034) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007212 \/ 0.011353 (-0.004141) | 0.004301 \/ 0.011008 (-0.006707) | 0.080767 \/ 0.038508 (0.042258) | 0.051949 \/ 0.023109 (0.028840) | 0.398473 \/ 0.275898 (0.122575) | 0.465038 \/ 0.323480 (0.141558) | 0.005580 \/ 0.007986 (-0.002406) | 0.003556 \/ 0.004328 (-0.000773) | 0.080682 \/ 0.004250 (0.076431) | 0.059517 \/ 0.037052 (0.022464) | 0.421171 \/ 0.258489 (0.162682) | 0.459752 \/ 0.293841 (0.165911) | 0.032960 \/ 0.128546 (-0.095586) | 0.009107 \/ 0.075646 (-0.066539) | 0.086382 \/ 0.419271 (-0.332889) | 0.056053 \/ 0.043533 (0.012520) | 0.393357 \/ 0.255139 (0.138218) | 0.412972 \/ 0.283200 (0.129772) | 0.031115 \/ 0.141683 (-0.110568) | 1.576961 \/ 1.452155 (0.124806) | 1.627249 \/ 1.492716 (0.134533) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.227618 \/ 0.018006 (0.209612) | 0.444640 \/ 0.000490 (0.444150) | 0.004376 \/ 0.000200 (0.004176) | 0.000092 \/ 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030826 \/ 0.037411 (-0.006586) | 0.117587 \/ 0.014526 (0.103062) | 0.127467 \/ 0.176557 (-0.049089) | 0.184440 \/ 0.737135 (-0.552695) | 0.133664 \/ 0.296338 (-0.162675) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.443183 \/ 0.215209 (0.227974) | 4.408312 \/ 2.077655 (2.330658) | 2.132487 \/ 1.504120 (0.628367) | 1.923632 \/ 1.541195 (0.382438) | 1.967882 \/ 1.468490 (0.499392) | 0.552954 \/ 4.584777 (-4.031823) | 
3.777701 \/ 3.745712 (0.031989) | 1.857686 \/ 5.269862 (-3.412176) | 1.104847 \/ 4.565676 (-3.460829) | 0.068350 \/ 0.424275 (-0.355925) | 0.012437 \/ 0.007607 (0.004830) | 0.559258 \/ 0.226044 (0.333214) | 5.593258 \/ 2.268929 (3.324330) | 2.648059 \/ 55.444624 (-52.796565) | 2.277428 \/ 6.876477 (-4.599049) | 2.351685 \/ 2.142072 (0.209612) | 0.678750 \/ 4.805227 (-4.126477) | 0.145550 \/ 6.500664 (-6.355114) | 0.066556 \/ 0.075469 (-0.008913) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.327128 \/ 1.841788 (-0.514659) | 15.649079 \/ 8.074308 (7.574771) | 14.478659 \/ 10.191392 (4.287267) | 0.147633 \/ 0.680424 (-0.532791) | 0.018502 \/ 0.534201 (-0.515699) | 0.438556 \/ 0.579283 (-0.140727) | 0.433381 \/ 0.434364 (-0.000983) | 0.514367 \/ 0.540337 (-0.025970) | 0.618347 \/ 1.386936 (-0.768589) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#16aa1c886c5b499641a4bb3d8ce4a4f7de8244b7 \"CML watermark\")\n","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006078 \/ 0.011353 (-0.005275) | 0.003914 \/ 0.011008 (-0.007095) | 0.102039 \/ 0.038508 (0.063531) | 0.037660 \/ 0.023109 (0.014551) | 0.348963 \/ 0.275898 (0.073065) | 0.407284 \/ 0.323480 (0.083804) | 0.004661 \/ 0.007986 (-0.003324) | 0.003253 \/ 0.004328 (-0.001076) | 0.078276 \/ 0.004250 (0.074025) | 0.054144 \/ 0.037052 (0.017091) | 0.376715 \/ 0.258489 (0.118225) | 0.418499 \/ 0.293841 (0.124658) | 0.027627 \/ 0.128546 (-0.100919) | 0.008494 \/ 0.075646 (-0.067152) | 0.316894 \/ 0.419271 (-0.102377) | 0.046560 \/ 0.043533 (0.003027) | 0.339835 \/ 0.255139 (0.084696) | 0.374628 \/ 0.283200 (0.091428) | 0.020729 \/ 0.141683 (-0.120954) | 1.502769 \/ 1.452155 (0.050615) | 1.548756 \/ 1.492716 (0.056040) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.229192 \/ 0.018006 (0.211186) | 0.426245 \/ 0.000490 (0.425756) | 0.005190 \/ 0.000200 (0.004990) | 0.000081 \/ 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.024271 \/ 0.037411 (-0.013140) | 0.098869 \/ 0.014526 (0.084343) | 0.105079 \/ 0.176557 (-0.071477) | 0.164707 \/ 0.737135 (-0.572428) | 0.110337 \/ 0.296338 (-0.186002) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.426593 \/ 0.215209 (0.211383) | 4.293977 \/ 2.077655 (2.216323) | 1.928502 \/ 1.504120 (0.424382) | 1.728623 \/ 1.541195 (0.187428) | 1.792084 \/ 1.468490 (0.323594) | 0.568737 \/ 4.584777 (-4.016040) | 
3.438534 \/ 3.745712 (-0.307178) | 1.797798 \/ 5.269862 (-3.472063) | 1.054078 \/ 4.565676 (-3.511598) | 0.068711 \/ 0.424275 (-0.355564) | 0.011250 \/ 0.007607 (0.003643) | 0.529299 \/ 0.226044 (0.303255) | 5.283965 \/ 2.268929 (3.015037) | 2.358274 \/ 55.444624 (-53.086350) | 2.012818 \/ 6.876477 (-4.863659) | 2.109923 \/ 2.142072 (-0.032149) | 0.679556 \/ 4.805227 (-4.125671) | 0.138346 \/ 6.500664 (-6.362318) | 0.066349 \/ 0.075469 (-0.009120) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.193994 \/ 1.841788 (-0.647794) | 14.073158 \/ 8.074308 (5.998850) | 13.488525 \/ 10.191392 (3.297133) | 0.144536 \/ 0.680424 (-0.535888) | 0.016748 \/ 0.534201 (-0.517453) | 0.362703 \/ 0.579283 (-0.216580) | 0.389511 \/ 0.434364 (-0.044853) | 0.427296 \/ 0.540337 (-0.113041) | 0.513227 \/ 1.386936 (-0.873709) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006215 \/ 0.011353 (-0.005138) | 0.003834 \/ 0.011008 (-0.007174) | 0.078001 \/ 0.038508 (0.039493) | 0.036537 \/ 0.023109 (0.013428) | 0.369724 \/ 0.275898 (0.093826) | 0.426761 \/ 0.323480 (0.103281) | 0.003602 \/ 0.007986 (-0.004383) | 0.003001 \/ 0.004328 (-0.001327) | 0.075989 \/ 0.004250 (0.071739) | 0.048618 \/ 0.037052 (0.011566) | 0.374296 \/ 0.258489 (0.115807) | 0.430330 \/ 0.293841 (0.136489) | 0.028299 \/ 0.128546 (-0.100247) | 0.008537 \/ 0.075646 (-0.067109) | 0.083275 \/ 0.419271 (-0.335997) | 0.043136 \/ 0.043533 (-0.000397) | 0.359072 \/ 0.255139 (0.103933) | 0.387391 \/ 0.283200 (0.104192) | 0.021202 \/ 0.141683 (-0.120481) | 1.520832 \/ 1.452155 (0.068677) | 1.567030 \/ 1.492716 (0.074313) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.230944 \/ 0.018006 (0.212938) | 0.422159 \/ 0.000490 (0.421669) | 0.003447 \/ 0.000200 (0.003247) | 0.000125 \/ 0.000054 (0.000071) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025442 \/ 0.037411 (-0.011969) | 0.103944 \/ 0.014526 (0.089418) | 0.110577 \/ 0.176557 (-0.065979) | 0.161393 \/ 0.737135 (-0.575743) | 0.113482 \/ 0.296338 (-0.182857) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.485765 \/ 0.215209 (0.270556) | 4.845737 \/ 2.077655 (2.768083) | 2.556732 \/ 1.504120 (1.052612) | 2.348638 \/ 1.541195 (0.807443) | 2.379289 \/ 1.468490 (0.910799) | 0.561261 \/ 4.584777 (-4.023516) | 
3.482468 \/ 3.745712 (-0.263244) | 3.061319 \/ 5.269862 (-2.208543) | 1.483938 \/ 4.565676 (-3.081738) | 0.067584 \/ 0.424275 (-0.356691) | 0.011333 \/ 0.007607 (0.003726) | 0.594342 \/ 0.226044 (0.368297) | 5.935477 \/ 2.268929 (3.666548) | 3.025029 \/ 55.444624 (-52.419595) | 2.687032 \/ 6.876477 (-4.189445) | 2.752470 \/ 2.142072 (0.610398) | 0.674470 \/ 4.805227 (-4.130757) | 0.136777 \/ 6.500664 (-6.363887) | 0.068335 \/ 0.075469 (-0.007134) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.336456 \/ 1.841788 (-0.505332) | 14.376007 \/ 8.074308 (6.301699) | 14.171375 \/ 10.191392 (3.979983) | 0.159620 \/ 0.680424 (-0.520804) | 0.016685 \/ 0.534201 (-0.517516) | 0.364344 \/ 0.579283 (-0.214939) | 0.395358 \/ 0.434364 (-0.039006) | 0.424876 \/ 0.540337 (-0.115461) | 0.513267 \/ 1.386936 (-0.873669) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#6ed837325cb539a5deb99129e5ad181d0269e050 \"CML watermark\")\n"],"created_at":1687365098000,"updated_at":1687775683000,"closed_at":1687775260000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5974","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5974","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5974.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5974.patch","merged_at":1687775260000},"body":"For consistency with the JSON builder and Pandas","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5974\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5974\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5972","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5972\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5972\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5972\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5972","id":1767897485,"node_id":"PR_kwDODunzps5TkE7K","number":5972,"title":"Filter unsupported 
extensions","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006983 \/ 0.011353 (-0.004369) | 0.004473 \/ 0.011008 (-0.006535) | 0.105158 \/ 0.038508 (0.066650) | 0.048973 \/ 0.023109 (0.025864) | 0.358771 \/ 0.275898 (0.082873) | 0.432389 \/ 0.323480 (0.108909) | 0.005689 \/ 0.007986 (-0.002297) | 0.003584 \/ 0.004328 (-0.000744) | 0.080852 \/ 0.004250 (0.076601) | 0.066133 \/ 0.037052 (0.029081) | 0.370981 \/ 0.258489 (0.112492) | 0.406942 \/ 0.293841 (0.113101) | 0.032123 \/ 0.128546 (-0.096424) | 0.009313 \/ 0.075646 (-0.066333) | 0.355220 \/ 0.419271 (-0.064051) | 0.055768 \/ 0.043533 (0.012235) | 0.370545 \/ 0.255139 (0.115406) | 0.375619 \/ 0.283200 (0.092419) | 0.024258 \/ 0.141683 (-0.117425) | 1.559073 \/ 1.452155 (0.106918) | 1.616520 \/ 1.492716 (0.123804) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.277893 \/ 0.018006 (0.259887) | 0.535447 \/ 0.000490 (0.534957) | 0.004877 \/ 0.000200 (0.004677) | 0.000092 \/ 0.000054 (0.000037) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029444 \/ 0.037411 (-0.007968) | 0.114366 \/ 0.014526 (0.099841) | 0.130957 \/ 0.176557 (-0.045599) | 0.189604 \/ 0.737135 (-0.547531) | 0.131682 \/ 0.296338 (-0.164656) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.412315 \/ 0.215209 (0.197106) | 4.093879 \/ 2.077655 (2.016225) | 1.856169 \/ 1.504120 (0.352050) | 1.655358 \/ 1.541195 (0.114164) | 1.758190 \/ 1.468490 (0.289699) | 0.545829 \/ 4.584777 (-4.038948) | 
3.871436 \/ 3.745712 (0.125724) | 1.938244 \/ 5.269862 (-3.331618) | 1.122727 \/ 4.565676 (-3.442950) | 0.067107 \/ 0.424275 (-0.357168) | 0.012012 \/ 0.007607 (0.004405) | 0.518868 \/ 0.226044 (0.292824) | 5.235081 \/ 2.268929 (2.966153) | 2.335115 \/ 55.444624 (-53.109509) | 2.013074 \/ 6.876477 (-4.863402) | 2.219808 \/ 2.142072 (0.077735) | 0.674602 \/ 4.805227 (-4.130626) | 0.147051 \/ 6.500664 (-6.353613) | 0.068444 \/ 0.075469 (-0.007025) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.245600 \/ 1.841788 (-0.596188) | 15.537727 \/ 8.074308 (7.463419) | 15.074300 \/ 10.191392 (4.882908) | 0.194217 \/ 0.680424 (-0.486207) | 0.018536 \/ 0.534201 (-0.515665) | 0.437085 \/ 0.579283 (-0.142198) | 0.441123 \/ 0.434364 (0.006759) | 0.530681 \/ 0.540337 (-0.009657) | 0.649154 \/ 1.386936 (-0.737782) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007243 \/ 0.011353 (-0.004110) | 0.004688 \/ 0.011008 (-0.006320) | 0.079809 \/ 0.038508 (0.041301) | 0.046915 \/ 0.023109 (0.023805) | 0.415144 \/ 0.275898 (0.139246) | 0.474867 \/ 0.323480 (0.151388) | 0.004550 \/ 0.007986 (-0.003435) | 0.004585 \/ 0.004328 (0.000257) | 0.080837 \/ 0.004250 (0.076587) | 0.061667 \/ 0.037052 (0.024614) | 0.411321 \/ 0.258489 (0.152832) | 0.464195 \/ 0.293841 (0.170354) | 0.032510 \/ 0.128546 (-0.096037) | 0.009306 \/ 0.075646 (-0.066340) | 0.086637 \/ 0.419271 (-0.332635) | 0.053335 \/ 0.043533 (0.009802) | 0.402302 \/ 0.255139 (0.147163) | 0.424864 \/ 0.283200 (0.141664) | 0.026573 \/ 0.141683 (-0.115110) | 1.566793 \/ 1.452155 (0.114639) | 1.628118 \/ 1.492716 (0.135401) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.317802 \/ 0.018006 (0.299796) | 0.544593 \/ 0.000490 (0.544103) | 0.005690 \/ 0.000200 (0.005490) | 0.000107 \/ 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.033015 \/ 0.037411 (-0.004397) | 0.121940 \/ 0.014526 (0.107414) | 0.132920 \/ 0.176557 (-0.043637) | 0.191481 \/ 0.737135 (-0.545655) | 0.139139 \/ 0.296338 (-0.157199) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.460382 \/ 0.215209 (0.245173) | 4.610046 \/ 2.077655 (2.532392) | 2.296573 \/ 1.504120 (0.792453) | 2.099735 \/ 1.541195 (0.558540) | 2.213913 \/ 1.468490 (0.745423) | 0.544871 \/ 4.584777 (-4.039906) | 
3.814174 \/ 3.745712 (0.068462) | 3.246397 \/ 5.269862 (-2.023464) | 1.480236 \/ 4.565676 (-3.085440) | 0.068464 \/ 0.424275 (-0.355811) | 0.012651 \/ 0.007607 (0.005043) | 0.564989 \/ 0.226044 (0.338944) | 5.639188 \/ 2.268929 (3.370259) | 2.827601 \/ 55.444624 (-52.617023) | 2.473743 \/ 6.876477 (-4.402734) | 2.567413 \/ 2.142072 (0.425340) | 0.674351 \/ 4.805227 (-4.130876) | 0.146248 \/ 6.500664 (-6.354416) | 0.067553 \/ 0.075469 (-0.007916) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.346703 \/ 1.841788 (-0.495085) | 16.494787 \/ 8.074308 (8.420479) | 15.179487 \/ 10.191392 (4.988095) | 0.181864 \/ 0.680424 (-0.498560) | 0.018857 \/ 0.534201 (-0.515344) | 0.437787 \/ 0.579283 (-0.141496) | 0.431770 \/ 0.434364 (-0.002594) | 0.507116 \/ 0.540337 (-0.033221) | 0.608899 \/ 1.386936 (-0.778037) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#0fd5b7412f907675e76b183a6e39ef6d176fdcc0 \"CML watermark\")\n","_The documentation is not available anymore as the PR was closed or merged._","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.005963 \/ 0.011353 (-0.005390) | 0.003743 \/ 0.011008 (-0.007265) | 0.098519 \/ 0.038508 (0.060011) | 0.037392 \/ 0.023109 (0.014283) | 0.322706 \/ 0.275898 (0.046808) | 0.380032 \/ 0.323480 (0.056552) | 0.004694 \/ 0.007986 (-0.003292) | 0.002897 \/ 0.004328 (-0.001432) | 0.078664 \/ 0.004250 (0.074414) | 0.052646 \/ 0.037052 (0.015594) | 0.335523 \/ 0.258489 (0.077034) | 0.375464 \/ 0.293841 (0.081623) | 0.027537 \/ 0.128546 (-0.101010) | 0.008452 \/ 0.075646 (-0.067194) | 0.313844 \/ 0.419271 (-0.105427) | 0.047368 \/ 0.043533 (0.003835) | 0.313833 \/ 0.255139 (0.058694) | 0.342284 \/ 0.283200 (0.059085) | 0.021136 \/ 0.141683 (-0.120547) | 1.544764 \/ 1.452155 (0.092610) | 1.563850 \/ 1.492716 (0.071134) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.188609 \/ 0.018006 (0.170603) | 0.421686 \/ 0.000490 (0.421196) | 0.003336 \/ 0.000200 (0.003136) | 0.000077 \/ 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.023678 \/ 0.037411 (-0.013733) | 0.099191 \/ 0.014526 (0.084665) | 0.105819 \/ 0.176557 (-0.070738) | 0.169654 \/ 0.737135 (-0.567481) | 0.110240 \/ 0.296338 (-0.186099) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.425497 \/ 0.215209 (0.210288) | 4.237165 \/ 2.077655 (2.159510) | 1.902953 \/ 1.504120 (0.398833) | 1.699012 \/ 1.541195 (0.157818) | 1.751107 \/ 1.468490 (0.282617) | 0.563326 \/ 4.584777 (-4.021451) | 
3.394189 \/ 3.745712 (-0.351523) | 2.706129 \/ 5.269862 (-2.563732) | 1.361522 \/ 4.565676 (-3.204155) | 0.067776 \/ 0.424275 (-0.356499) | 0.010959 \/ 0.007607 (0.003352) | 0.530905 \/ 0.226044 (0.304860) | 5.322467 \/ 2.268929 (3.053538) | 2.384356 \/ 55.444624 (-53.060269) | 2.044196 \/ 6.876477 (-4.832281) | 2.119837 \/ 2.142072 (-0.022235) | 0.682236 \/ 4.805227 (-4.122991) | 0.136921 \/ 6.500664 (-6.363743) | 0.066784 \/ 0.075469 (-0.008685) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.210642 \/ 1.841788 (-0.631146) | 13.804572 \/ 8.074308 (5.730264) | 13.309229 \/ 10.191392 (3.117837) | 0.154356 \/ 0.680424 (-0.526068) | 0.016833 \/ 0.534201 (-0.517368) | 0.366503 \/ 0.579283 (-0.212780) | 0.385201 \/ 0.434364 (-0.049163) | 0.426713 \/ 0.540337 (-0.113624) | 0.516795 \/ 1.386936 (-0.870141) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006144 \/ 0.011353 (-0.005209) | 0.003723 \/ 0.011008 (-0.007285) | 0.077427 \/ 0.038508 (0.038919) | 0.037636 \/ 0.023109 (0.014527) | 0.375048 \/ 0.275898 (0.099150) | 0.442254 \/ 0.323480 (0.118774) | 0.003506 \/ 0.007986 (-0.004480) | 0.003751 \/ 0.004328 (-0.000577) | 0.076771 \/ 0.004250 (0.072521) | 0.047915 \/ 0.037052 (0.010862) | 0.378918 \/ 0.258489 (0.120429) | 0.435300 \/ 0.293841 (0.141459) | 0.028317 \/ 0.128546 (-0.100230) | 0.008413 \/ 0.075646 (-0.067233) | 0.082774 \/ 0.419271 (-0.336497) | 0.043211 \/ 0.043533 (-0.000321) | 0.362022 \/ 0.255139 (0.106883) | 0.404928 \/ 0.283200 (0.121728) | 0.020692 \/ 0.141683 (-0.120991) | 1.527303 \/ 1.452155 (0.075148) | 1.596091 \/ 1.492716 (0.103375) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.225537 \/ 0.018006 (0.207530) | 0.399901 \/ 0.000490 (0.399412) | 0.000424 \/ 0.000200 (0.000224) | 0.000058 \/ 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.026483 \/ 0.037411 (-0.010928) | 0.104373 \/ 0.014526 (0.089847) | 0.111271 \/ 0.176557 (-0.065286) | 0.163872 \/ 0.737135 (-0.573264) | 0.113991 \/ 0.296338 (-0.182347) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.456484 \/ 0.215209 (0.241275) | 4.572652 \/ 2.077655 (2.494998) | 2.374908 \/ 1.504120 (0.870788) | 2.207855 \/ 1.541195 (0.666661) | 2.260009 \/ 1.468490 (0.791519) | 0.562678 \/ 4.584777 (-4.022099) | 
3.441778 \/ 3.745712 (-0.303934) | 1.729006 \/ 5.269862 (-3.540855) | 1.024937 \/ 4.565676 (-3.540739) | 0.068707 \/ 0.424275 (-0.355568) | 0.011334 \/ 0.007607 (0.003727) | 0.564293 \/ 0.226044 (0.338248) | 5.638367 \/ 2.268929 (3.369438) | 2.665654 \/ 55.444624 (-52.778970) | 2.320033 \/ 6.876477 (-4.556444) | 2.328706 \/ 2.142072 (0.186634) | 0.677433 \/ 4.805227 (-4.127794) | 0.137190 \/ 6.500664 (-6.363474) | 0.068585 \/ 0.075469 (-0.006885) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.312476 \/ 1.841788 (-0.529312) | 14.206685 \/ 8.074308 (6.132377) | 14.217928 \/ 10.191392 (4.026536) | 0.143416 \/ 0.680424 (-0.537007) | 0.016647 \/ 0.534201 (-0.517554) | 0.361228 \/ 0.579283 (-0.218055) | 0.396185 \/ 0.434364 (-0.038178) | 0.423275 \/ 0.540337 (-0.117063) | 0.512966 \/ 1.386936 (-0.873970) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#b424648fd68bd0b5279eb916cec4836d1220e268 \"CML watermark\")\n","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008913 \/ 0.011353 (-0.002440) | 0.005142 \/ 0.011008 (-0.005866) | 0.133958 \/ 0.038508 (0.095449) | 0.049180 \/ 0.023109 (0.026071) | 0.389169 \/ 0.275898 (0.113270) | 0.481513 \/ 0.323480 (0.158033) | 0.006555 \/ 0.007986 (-0.001430) | 0.003806 \/ 0.004328 (-0.000522) | 0.102056 \/ 0.004250 (0.097806) | 0.083259 \/ 0.037052 (0.046207) | 0.392536 \/ 0.258489 (0.134047) | 0.447503 \/ 0.293841 (0.153662) | 0.047472 \/ 0.128546 (-0.081074) | 0.014748 \/ 0.075646 (-0.060899) | 0.475619 \/ 0.419271 (0.056348) | 0.107306 \/ 0.043533 (0.063773) | 0.421942 \/ 0.255139 (0.166803) | 0.419736 \/ 0.283200 (0.136536) | 0.044195 \/ 0.141683 (-0.097488) | 1.793840 \/ 1.452155 (0.341686) | 1.960204 \/ 1.492716 (0.467488) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.252046 \/ 0.018006 (0.234040) | 0.627725 \/ 0.000490 (0.627236) | 0.007435 \/ 0.000200 (0.007235) | 0.000526 \/ 0.000054 (0.000472) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.034656 \/ 0.037411 (-0.002755) | 0.114534 \/ 0.014526 (0.100008) | 0.135804 \/ 0.176557 (-0.040753) | 0.209309 \/ 0.737135 (-0.527826) | 0.140369 \/ 0.296338 (-0.155969) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.636736 \/ 0.215209 (0.421527) | 6.039985 \/ 2.077655 (3.962330) | 2.640141 \/ 1.504120 (1.136021) | 2.284492 \/ 1.541195 (0.743297) | 2.324956 \/ 1.468490 (0.856466) | 0.934499 \/ 4.584777 (-3.650278) | 
5.673415 \/ 3.745712 (1.927703) | 5.184584 \/ 5.269862 (-0.085278) | 2.661911 \/ 4.565676 (-1.903766) | 0.150420 \/ 0.424275 (-0.273855) | 0.015655 \/ 0.007607 (0.008048) | 0.748290 \/ 0.226044 (0.522246) | 7.579755 \/ 2.268929 (5.310827) | 3.346732 \/ 55.444624 (-52.097892) | 2.708212 \/ 6.876477 (-4.168264) | 2.682423 \/ 2.142072 (0.540351) | 1.170389 \/ 4.805227 (-3.634838) | 0.215775 \/ 6.500664 (-6.284889) | 0.076360 \/ 0.075469 (0.000891) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.516794 \/ 1.841788 (-0.324993) | 18.709117 \/ 8.074308 (10.634809) | 22.492542 \/ 10.191392 (12.301150) | 0.237978 \/ 0.680424 (-0.442446) | 0.027828 \/ 0.534201 (-0.506373) | 0.499968 \/ 0.579283 (-0.079315) | 0.645899 \/ 0.434364 (0.211535) | 0.548599 \/ 0.540337 (0.008262) | 0.675428 \/ 1.386936 (-0.711508) |\n\n<\/details>\nPyArrow==latest\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008469 \/ 0.011353 (-0.002884) | 0.005420 \/ 0.011008 (-0.005589) | 0.093340 \/ 0.038508 (0.054832) | 0.045896 \/ 0.023109 (0.022786) | 0.533267 \/ 0.275898 (0.257369) | 0.596034 \/ 0.323480 (0.272555) | 0.004816 \/ 0.007986 (-0.003170) | 0.004379 \/ 0.004328 (0.000051) | 0.096356 \/ 0.004250 (0.092106) | 0.058339 \/ 0.037052 (0.021287) | 0.574464 \/ 0.258489 (0.315975) | 0.649301 \/ 0.293841 (0.355461) | 0.047599 \/ 0.128546 (-0.080947) | 0.013759 \/ 0.075646 (-0.061887) | 0.104672 \/ 0.419271 (-0.314599) | 0.061658 \/ 0.043533 (0.018125) | 0.560956 \/ 0.255139 (0.305817) | 0.585328 \/ 0.283200 (0.302128) | 0.034137 \/ 0.141683 (-0.107546) | 1.844528 \/ 1.452155 (0.392373) | 1.971398 \/ 1.492716 (0.478682) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.278666 \/ 0.018006 (0.260660) | 0.577342 \/ 0.000490 (0.576853) | 0.005496 \/ 0.000200 (0.005296) | 0.000131 \/ 0.000054 (0.000076) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029863 \/ 0.037411 (-0.007549) | 0.161703 \/ 0.014526 (0.147177) | 0.132279 \/ 0.176557 (-0.044277) | 0.227345 \/ 0.737135 (-0.509791) | 0.138047 \/ 0.296338 (-0.158291) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.651535 \/ 0.215209 (0.436326) | 7.077949 \/ 2.077655 (5.000295) | 2.926990 \/ 1.504120 (1.422871) | 2.598872 \/ 1.541195 (1.057678) | 2.614192 \/ 1.468490 (1.145702) | 0.913845 \/ 4.584777 (-3.670932) | 
5.704301 \/ 3.745712 (1.958589) | 2.796914 \/ 5.269862 (-2.472948) | 1.836096 \/ 4.565676 (-2.729580) | 0.106294 \/ 0.424275 (-0.317981) | 0.012705 \/ 0.007607 (0.005098) | 0.836336 \/ 0.226044 (0.610291) | 8.234079 \/ 2.268929 (5.965150) | 3.836410 \/ 55.444624 (-51.608215) | 3.116752 \/ 6.876477 (-3.759724) | 3.154258 \/ 2.142072 (1.012186) | 1.195794 \/ 4.805227 (-3.609434) | 0.240491 \/ 6.500664 (-6.260173) | 0.087913 \/ 0.075469 (0.012444) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.724723 \/ 1.841788 (-0.117064) | 19.492194 \/ 8.074308 (11.417885) | 21.443341 \/ 10.191392 (11.251949) | 0.245819 \/ 0.680424 (-0.434605) | 0.027024 \/ 0.534201 (-0.507177) | 0.481071 \/ 0.579283 (-0.098212) | 0.596359 \/ 0.434364 (0.161995) | 0.646462 \/ 0.540337 (0.106124) | 0.706380 \/ 1.386936 (-0.680556) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#67ca664e6d5ef137127b238aae1d0aff54e22db2 \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006634 \/ 0.011353 (-0.004719) | 0.004003 \/ 0.011008 (-0.007005) | 0.097874 \/ 0.038508 (0.059365) | 0.043528 \/ 0.023109 (0.020419) | 0.302293 \/ 0.275898 (0.026395) | 0.357041 \/ 0.323480 (0.033561) | 0.003761 \/ 0.007986 (-0.004225) | 0.004312 \/ 0.004328 (-0.000016) | 0.076253 \/ 0.004250 (0.072003) | 0.062807 \/ 0.037052 (0.025755) | 0.316737 \/ 0.258489 (0.058248) | 0.356722 \/ 0.293841 (0.062881) | 0.030816 \/ 0.128546 (-0.097730) | 0.008691 \/ 0.075646 (-0.066955) | 0.328366 \/ 0.419271 (-0.090906) | 0.062299 \/ 0.043533 (0.018766) | 0.293877 \/ 0.255139 (0.038738) | 0.319832 \/ 0.283200 (0.036632) | 0.024996 \/ 0.141683 (-0.116687) | 1.473912 \/ 1.452155 (0.021758) | 1.565439 \/ 1.492716 (0.072723) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.208428 \/ 0.018006 (0.190422) | 0.435618 \/ 0.000490 (0.435128) | 0.000695 \/ 0.000200 (0.000495) | 0.000056 \/ 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.026253 \/ 0.037411 (-0.011158) | 0.106908 \/ 0.014526 (0.092382) | 0.117075 \/ 0.176557 (-0.059482) | 0.177969 \/ 0.737135 (-0.559166) | 0.123400 \/ 0.296338 (-0.172938) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.424970 \/ 0.215209 (0.209761) | 4.203233 \/ 2.077655 (2.125578) | 2.009679 \/ 1.504120 (0.505559) | 1.825691 \/ 1.541195 (0.284496) | 1.870639 \/ 1.468490 (0.402149) | 0.530758 \/ 4.584777 (-4.054019) | 
3.718791 \/ 3.745712 (-0.026921) | 1.800206 \/ 5.269862 (-3.469656) | 1.071651 \/ 4.565676 (-3.494025) | 0.065126 \/ 0.424275 (-0.359149) | 0.011312 \/ 0.007607 (0.003704) | 0.532503 \/ 0.226044 (0.306458) | 5.353950 \/ 2.268929 (3.085021) | 2.463548 \/ 55.444624 (-52.981076) | 2.139832 \/ 6.876477 (-4.736645) | 2.238722 \/ 2.142072 (0.096650) | 0.655736 \/ 4.805227 (-4.149492) | 0.141689 \/ 6.500664 (-6.358975) | 0.063282 \/ 0.075469 (-0.012187) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.183523 \/ 1.841788 (-0.658265) | 14.146428 \/ 8.074308 (6.072120) | 14.312883 \/ 10.191392 (4.121491) | 0.169286 \/ 0.680424 (-0.511138) | 0.017343 \/ 0.534201 (-0.516858) | 0.397934 \/ 0.579283 (-0.181349) | 0.417791 \/ 0.434364 (-0.016573) | 0.463639 \/ 0.540337 (-0.076698) | 0.562787 \/ 1.386936 (-0.824149) |\n\n<\/details>\nPyArrow==latest\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006594 \/ 0.011353 (-0.004759) | 0.004086 \/ 0.011008 (-0.006922) | 0.075122 \/ 0.038508 (0.036614) | 0.041849 \/ 0.023109 (0.018740) | 0.362645 \/ 0.275898 (0.086747) | 0.464350 \/ 0.323480 (0.140870) | 0.003760 \/ 0.007986 (-0.004226) | 0.003327 \/ 0.004328 (-0.001001) | 0.076154 \/ 0.004250 (0.071904) | 0.053232 \/ 0.037052 (0.016180) | 0.407863 \/ 0.258489 (0.149374) | 0.460787 \/ 0.293841 (0.166946) | 0.031917 \/ 0.128546 (-0.096630) | 0.008770 \/ 0.075646 (-0.066876) | 0.082612 \/ 0.419271 (-0.336660) | 0.051311 \/ 0.043533 (0.007779) | 0.354508 \/ 0.255139 (0.099369) | 0.419533 \/ 0.283200 (0.136334) | 0.023980 \/ 0.141683 (-0.117703) | 1.491255 \/ 1.452155 (0.039100) | 1.536101 \/ 1.492716 (0.043384) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.178261 \/ 0.018006 (0.160255) | 0.444680 \/ 0.000490 (0.444190) | 0.013761 \/ 0.000200 (0.013561) | 0.000117 \/ 0.000054 (0.000063) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.027875 \/ 0.037411 (-0.009536) | 0.111269 \/ 0.014526 (0.096744) | 0.121096 \/ 0.176557 (-0.055461) | 0.174387 \/ 0.737135 (-0.562749) | 0.124714 \/ 0.296338 (-0.171624) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.445422 \/ 0.215209 (0.230213) | 4.435877 \/ 2.077655 (2.358222) | 2.221895 \/ 1.504120 (0.717775) | 2.030571 \/ 1.541195 (0.489376) | 2.074863 \/ 1.468490 (0.606373) | 0.543331 \/ 4.584777 (-4.041446) | 
3.753615 \/ 3.745712 (0.007903) | 3.317074 \/ 5.269862 (-1.952787) | 1.630390 \/ 4.565676 (-2.935286) | 0.066726 \/ 0.424275 (-0.357549) | 0.011556 \/ 0.007607 (0.003949) | 0.546985 \/ 0.226044 (0.320941) | 5.460634 \/ 2.268929 (3.191705) | 2.705945 \/ 55.444624 (-52.738679) | 2.373425 \/ 6.876477 (-4.503052) | 2.401472 \/ 2.142072 (0.259399) | 0.663225 \/ 4.805227 (-4.142002) | 0.143694 \/ 6.500664 (-6.356970) | 0.065283 \/ 0.075469 (-0.010186) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.264804 \/ 1.841788 (-0.576983) | 14.803228 \/ 8.074308 (6.728919) | 14.178514 \/ 10.191392 (3.987122) | 0.162651 \/ 0.680424 (-0.517772) | 0.017586 \/ 0.534201 (-0.516615) | 0.398740 \/ 0.579283 (-0.180543) | 0.414478 \/ 0.434364 (-0.019886) | 0.465442 \/ 0.540337 (-0.074895) | 0.563450 \/ 1.386936 (-0.823486) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#76f75a9a3b2aaad05ea0ea5ab77e01fd2ca66760 \"CML watermark\")\n"],"created_at":1687362181000,"updated_at":1687443809000,"closed_at":1687443386000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5972","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5972","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5972.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5972.patch","merged_at":1687443386000},"body":"I used a regex to filter the data files based on their extension for packaged builders.\r\n\r\nI tried and a regex is 10x faster that using `in` to check if the extension is in the list of supported extensions.\r\n\r\nSupersedes https:\/\/github.com\/huggingface\/datasets\/pull\/5850\r\n\r\nClose https:\/\/github.com\/huggingface\/datasets\/issues\/5849\r\n\r\nI also did a small change to favor the parquet module in case of a draw in the extension counter.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5972\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5972\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5971","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5971\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5971\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5971\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5971","id":1767053635,"node_id":"I_kwDODunzps5pUxlD","number":5971,"title":"Docs: make \"repository structure\" easier to 
find","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892861,"node_id":"MDU6TGFiZWwxOTM1ODkyODYx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/documentation","name":"documentation","color":"0075ca","default":true,"description":"Improvements or additions to documentation"}],"state":"open","locked":false,"assignee":{"login":"benjaminbrown038","id":35114142.0,"node_id":"MDQ6VXNlcjM1MTE0MTQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/35114142?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/benjaminbrown038","html_url":"https:\/\/github.com\/benjaminbrown038","followers_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/followers","following_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/orgs","repos_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/repos","events_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/received_events","type":"User","site_admin":false},"assignees":[{"login":"benjaminbrown038","id":35114142,"node_id":"MDQ6VXNlcjM1MTE0MTQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/35114142?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/benjaminbrown038","html_url":"https:\/\/github.com\/benjaminbrown038","followers_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/followers","following_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/orgs","repos_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/repos","events_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/benjaminbrown038\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Loading a local dataset also works the same way when `data_files` are not specified, so I agree we should make this 
info easier to discover \r\n\r\ncc @stevhliu ","Is this issue open? If so, I will self assign. ","@benjaminbrown038 Yes, it is. Maybe @stevhliu can give some pointers on improving this doc page's discoverability.","I think we can add a version of the [Main use-case](https:\/\/huggingface.co\/docs\/datasets\/repository_structure#main-usecase) section to the [Share a dataset to the Hub](https:\/\/huggingface.co\/docs\/datasets\/upload_dataset) tutorial. \r\n\r\nCurrently, it doesn't tell you *how* to structure the repository; it only tells you how to create it. So adding the \"main use-case\" will help bridge the gap and make it easier to find. We should also add a link to the [Structure your repository](https:\/\/huggingface.co\/docs\/datasets\/repository_structure) guide for users who want to learn about the other options.","#self-assign"],"created_at":1687336004000,"updated_at":1688539898000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":null,"pull_request":null,"body":"The page https:\/\/huggingface.co\/docs\/datasets\/repository_structure explains how to create a simple repository structure without a dataset script.\r\nIt's the simplest way to create a dataset and should be easier to find, particularly on the docs' first pages.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5971\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5971\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5970","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5970\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5970\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5970\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5970","id":1766010356,"node_id":"I_kwDODunzps5pQy30","number":5970,"title":"description disappearing from Info when Uploading a Dataset Created with `from_dict`","user":{"login":"balisujohn","id":20377292,"node_id":"MDQ6VXNlcjIwMzc3Mjky","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/20377292?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/balisujohn","html_url":"https:\/\/github.com\/balisujohn","followers_url":"https:\/\/api.github.com\/users\/balisujohn\/followers","following_url":"https:\/\/api.github.com\/users\/balisujohn\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/balisujohn\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/balisujohn\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/balisujohn\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/balisujohn\/orgs","repos_url":"https:\/\/api.github.com\/users\/balisujohn\/repos","events_url":"https:\/\/api.github.com\/users\/balisujohn\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/balisujohn\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Here's a minimal way to reproduce the bug, for the sake of convenience.\r\n````\r\nfrom datasets import Dataset, 
DatasetInfo, load_dataset\r\n\r\n\r\nepisodes_dict = {\"test\":[1,2,3],\"test2\": [1,2,4]}\r\n\r\nhugging_face_dataset = Dataset.from_dict(\r\n episodes_dict, info=DatasetInfo(description=\"test_str\")\r\n)\r\nprint(hugging_face_dataset.info)\r\n\r\nhugging_face_dataset.push_to_hub(\"balisujohn\/minari_test\", private=True)\r\n\r\nredownloaded_dataset= load_dataset(\"balisujohn\/minari_test\")[\"train\"]\r\n\r\n\r\nprint(redownloaded_dataset.info)\r\n````\r\n","Thanks for reporting !\r\n\r\nFor now I would recommend uploading a separate JSON file for your metadata.\r\n\r\nAlternatively you can upload a second configuration of the dataset containing your metadata but this feature is not released yet (though you can already use it from [here](https:\/\/github.com\/huggingface\/datasets\/pull\/5331), it will be released soon)"],"created_at":1687288706000,"updated_at":1687443836000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\r\n\r\nWhen uploading a dataset created locally using `from_dict` with a specified `description` field. It appears before upload, but is missing after upload and re-download.\r\n\r\n\r\n### Steps to reproduce the bug\r\n\r\nI think the most relevant pattern in the code might be the following lines:\r\n\r\n```\r\ndescription_json_str = json.dumps(\r\n {\r\n \"dataset_id\": dataset.spec.dataset_id,\r\n \"env_name\": dataset.spec.env_spec.id,\r\n \"action_space\": serialize_space(dataset.spec.action_space),\r\n \"observation_space\": serialize_space(dataset.spec.observation_space),\r\n }\r\n)\r\n\r\nhugging_face_dataset = Dataset.from_dict(\r\n episodes_dict, info=DatasetInfo(description=description_json_str)\r\n)\r\n\r\n```\r\nWhich comes from this function https:\/\/github.com\/balisujohn\/minarai\/blob\/8e023727f0a8488c4451651d9f7a79b981412c40\/minari\/integrations\/hugging_face.py#L39\r\n\r\n\r\n\r\nTo replicate,\r\nclone this branch of my Minari fork https:\/\/github.com\/balisujohn\/minarai\/tree\/dev-huggingface then run\r\n\r\n```\r\npython3.8 -m venv env\r\nsource env\/bin\/activate\r\npython3 -m pip install -e .\r\npython3 -m pip install pytest\r\n```\r\n\r\nThe change the hugging face repo path in the test called `test_hugging_face_push_and_pull_dataset` in `tests\/integrations\/test_hugging_face.py` to one you have permissions to write to.\r\n\r\nThen run:\r\n\r\n```\r\npytest tests\/integrations\/test_hugging_face.py::test_hugging_face_push_and_pull_dataset\r\n```\r\n\r\n\r\n\r\n\r\n\r\n\r\n### Expected behavior\r\n\r\nDATASET INFO BEFORE UPLOADING\r\nDatasetInfo(description='{\"dataset_id\": \"dummy-combo-test-v0\", \"env_name\": \"DummyComboEnv-v0\", \"action_space\": \"{\\\\\"type\\\\\": \\\\\"Tuple\\\\\", \\\\\"subspaces\\\\\": [{\\\\\"type\\\\\": \\\\\"Box\\\\\", \\\\\"dtype\\\\\": \\\\\"float32\\\\\", \\\\\"shape\\\\\": [1], \\\\\"low\\\\\": [2.0], \\\\\"high\\\\\": [3.0]}, {\\\\\"type\\\\\": \\\\\"Box\\\\\", \\\\\"dtype\\\\\": \\\\\"float32\\\\\", \\\\\"shape\\\\\": [1], \\\\\"low\\\\\": [4.0], \\\\\"high\\\\\": [5.0]}]}\", \"observation_space\": \"{\\\\\"type\\\\\": \\\\\"Tuple\\\\\", \\\\\"subspaces\\\\\": [{\\\\\"type\\\\\": \\\\\"Box\\\\\", \\\\\"dtype\\\\\": \\\\\"float32\\\\\", \\\\\"shape\\\\\": [1], \\\\\"low\\\\\": [2.0], \\\\\"high\\\\\": [3.0]}, {\\\\\"type\\\\\": \\\\\"Tuple\\\\\", \\\\\"subspaces\\\\\": [{\\\\\"type\\\\\": \\\\\"Box\\\\\", \\\\\"dtype\\\\\": \\\\\"float32\\\\\", \\\\\"shape\\\\\": [1], \\\\\"low\\\\\": [2.0], \\\\\"high\\\\\": [3.0]}, 
{\\\\\"type\\\\\": \\\\\"Dict\\\\\", \\\\\"subspaces\\\\\": {\\\\\"component_1\\\\\": {\\\\\"type\\\\\": \\\\\"Box\\\\\", \\\\\"dtype\\\\\": \\\\\"float32\\\\\", \\\\\"shape\\\\\": [1], \\\\\"low\\\\\": [-1.0], \\\\\"high\\\\\": [1.0]}, \\\\\"component_2\\\\\": {\\\\\"type\\\\\": \\\\\"Dict\\\\\", \\\\\"subspaces\\\\\": {\\\\\"subcomponent_1\\\\\": {\\\\\"type\\\\\": \\\\\"Box\\\\\", \\\\\"dtype\\\\\": \\\\\"float32\\\\\", \\\\\"shape\\\\\": [1], \\\\\"low\\\\\": [2.0], \\\\\"high\\\\\": [3.0]}, \\\\\"subcomponent_2\\\\\": {\\\\\"type\\\\\": \\\\\"Tuple\\\\\", \\\\\"subspaces\\\\\": [{\\\\\"type\\\\\": \\\\\"Box\\\\\", \\\\\"dtype\\\\\": \\\\\"float32\\\\\", \\\\\"shape\\\\\": [1], \\\\\"low\\\\\": [4.0], \\\\\"high\\\\\": [5.0]}, {\\\\\"type\\\\\": \\\\\"Discrete\\\\\", \\\\\"dtype\\\\\": \\\\\"int64\\\\\", \\\\\"start\\\\\": 0, \\\\\"n\\\\\": 10}]}}}}}]}]}\"}', citation='', homepage='', license='', features={'observations': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': {'component_1': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), 'component_2': {'subcomponent_1': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), 'subcomponent_2': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': Value(dtype='int64', id=None)}}}}}, 'actions': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None)}, 'rewards': Value(dtype='int64', id=None), 'truncations': Value(dtype='bool', id=None), 'terminations': Value(dtype='bool', id=None), 'episode_ids': Value(dtype='int64', id=None)}, post_processed=None, supervised_keys=None, task_templates=None, builder_name=None, config_name=None, version=None, splits=None, download_checksums=None, download_size=None, post_processing_size=None, dataset_size=None, size_in_bytes=None)\r\n...\r\nDATASET INFO AFTER UPLOADING AND DOWNLOADING\r\nDatasetInfo(description='', citation='', homepage='', license='', features={'observations': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': {'component_1': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), 'component_2': {'subcomponent_1': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), 'subcomponent_2': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': Value(dtype='int64', id=None)}}}}}, 'actions': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None)}, 'rewards': Value(dtype='int64', id=None), 'truncations': Value(dtype='bool', id=None), 'terminations': Value(dtype='bool', id=None), 'episode_ids': Value(dtype='int64', id=None)}, post_processed=None, supervised_keys=None, task_templates=None, builder_name=None, config_name=None, version=None, splits={'train': SplitInfo(name='train', num_bytes=4846, num_examples=60, shard_lengths=None, dataset_name='parquet')}, download_checksums={'https:\/\/huggingface.co\/datasets\/balisujohn\/minari_test\/resolve\/8217b614ff9ba5edc1a30c7df430e92a46f65363\/data\/train-00000-of-00001-7c5900b93b35745e.parquet': {'num_bytes': 
9052, 'checksum': None}}, download_size=9052, post_processing_size=None, dataset_size=4846, size_in_bytes=13898)\r\n...\r\n\r\n\r\n### Environment info\r\n\r\n- `datasets` version: 2.13.0\r\n- Platform: Linux-5.15.0-75-generic-x86_64-with-glibc2.29\r\n- Python version: 3.8.10\r\n- Huggingface_hub version: 0.15.1\r\n- PyArrow version: 12.0.1\r\n- Pandas version: 2.0.2\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5970\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5970\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5969","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5969\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5969\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5969\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5969","id":1765529905,"node_id":"PR_kwDODunzps5Tcgq4","number":5969,"title":"Add `encoding` and `errors` params to JSON loader","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006770 \/ 0.011353 (-0.004583) | 0.004143 \/ 0.011008 (-0.006865) | 0.098928 \/ 0.038508 (0.060420) | 0.044893 \/ 0.023109 (0.021783) | 0.302630 \/ 0.275898 (0.026732) | 0.368173 \/ 0.323480 (0.044693) | 0.005631 \/ 0.007986 (-0.002354) | 0.003397 \/ 0.004328 (-0.000931) | 0.075748 \/ 0.004250 (0.071497) | 0.062582 \/ 0.037052 (0.025530) | 0.329586 \/ 0.258489 (0.071097) | 0.362625 \/ 0.293841 (0.068784) | 0.033250 \/ 0.128546 (-0.095296) | 0.008880 \/ 0.075646 (-0.066766) | 0.329683 \/ 0.419271 (-0.089588) | 0.054426 \/ 0.043533 (0.010893) | 0.297940 \/ 0.255139 (0.042801) | 0.319796 \/ 0.283200 (0.036597) | 0.023296 \/ 0.141683 (-0.118387) | 1.462142 \/ 1.452155 (0.009987) | 1.495796 \/ 1.492716 (0.003079) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.201771 \/ 0.018006 (0.183765) | 0.454514 \/ 0.000490 (0.454024) | 0.003333 \/ 0.000200 (0.003133) | 0.000081 \/ 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.028084 \/ 0.037411 (-0.009327) | 0.109452 \/ 0.014526 (0.094926) | 0.119200 \/ 0.176557 (-0.057357) | 0.180302 \/ 0.737135 (-0.556834) | 0.125653 \/ 0.296338 (-0.170686) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.409819 \/ 0.215209 (0.194610) | 4.055117 \/ 2.077655 (1.977462) | 1.855279 \/ 1.504120 (0.351159) | 1.655281 \/ 1.541195 (0.114086) | 1.687938 \/ 1.468490 (0.219448) | 0.528352 \/ 4.584777 (-4.056425) | 
3.750250 \/ 3.745712 (0.004538) | 3.386741 \/ 5.269862 (-1.883121) | 1.572036 \/ 4.565676 (-2.993640) | 0.065125 \/ 0.424275 (-0.359150) | 0.011259 \/ 0.007607 (0.003652) | 0.513449 \/ 0.226044 (0.287405) | 5.139421 \/ 2.268929 (2.870492) | 2.316973 \/ 55.444624 (-53.127651) | 1.984109 \/ 6.876477 (-4.892368) | 2.127915 \/ 2.142072 (-0.014158) | 0.653238 \/ 4.805227 (-4.151989) | 0.142686 \/ 6.500664 (-6.357978) | 0.063666 \/ 0.075469 (-0.011803) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.185174 \/ 1.841788 (-0.656614) | 14.790282 \/ 8.074308 (6.715974) | 13.089222 \/ 10.191392 (2.897830) | 0.146055 \/ 0.680424 (-0.534369) | 0.017835 \/ 0.534201 (-0.516366) | 0.399598 \/ 0.579283 (-0.179685) | 0.425296 \/ 0.434364 (-0.009068) | 0.478552 \/ 0.540337 (-0.061786) | 0.579702 \/ 1.386936 (-0.807234) |\n\n<\/details>\nPyArrow==latest\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006750 \/ 0.011353 (-0.004603) | 0.004156 \/ 0.011008 (-0.006853) | 0.074948 \/ 0.038508 (0.036440) | 0.043368 \/ 0.023109 (0.020259) | 0.355389 \/ 0.275898 (0.079491) | 0.429167 \/ 0.323480 (0.105687) | 0.003911 \/ 0.007986 (-0.004075) | 0.004340 \/ 0.004328 (0.000012) | 0.075940 \/ 0.004250 (0.071689) | 0.054293 \/ 0.037052 (0.017241) | 0.400317 \/ 0.258489 (0.141827) | 0.432001 \/ 0.293841 (0.138160) | 0.032340 \/ 0.128546 (-0.096206) | 0.008876 \/ 0.075646 (-0.066770) | 0.082284 \/ 0.419271 (-0.336987) | 0.050819 \/ 0.043533 (0.007286) | 0.351994 \/ 0.255139 (0.096855) | 0.375917 \/ 0.283200 (0.092717) | 0.022466 \/ 0.141683 (-0.119217) | 1.538824 \/ 1.452155 (0.086669) | 1.563995 \/ 1.492716 (0.071279) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.227330 \/ 0.018006 (0.209323) | 0.446380 \/ 0.000490 (0.445890) | 0.000408 \/ 0.000200 (0.000208) | 0.000058 \/ 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.028534 \/ 0.037411 (-0.008878) | 0.113467 \/ 0.014526 (0.098941) | 0.123590 \/ 0.176557 (-0.052966) | 0.174309 \/ 0.737135 (-0.562827) | 0.130631 \/ 0.296338 (-0.165707) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.441020 \/ 0.215209 (0.225811) | 4.386564 \/ 2.077655 (2.308909) | 2.100704 \/ 1.504120 (0.596584) | 1.901484 \/ 1.541195 (0.360289) | 1.963494 \/ 1.468490 (0.495004) | 0.536838 \/ 4.584777 (-4.047939) | 
3.739071 \/ 3.745712 (-0.006642) | 3.278981 \/ 5.269862 (-1.990881) | 1.515476 \/ 4.565676 (-3.050201) | 0.066388 \/ 0.424275 (-0.357887) | 0.011857 \/ 0.007607 (0.004250) | 0.545507 \/ 0.226044 (0.319463) | 5.441479 \/ 2.268929 (3.172550) | 2.602144 \/ 55.444624 (-52.842480) | 2.235583 \/ 6.876477 (-4.640894) | 2.293458 \/ 2.142072 (0.151385) | 0.658535 \/ 4.805227 (-4.146692) | 0.141327 \/ 6.500664 (-6.359337) | 0.063726 \/ 0.075469 (-0.011743) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.247819 \/ 1.841788 (-0.593968) | 15.234524 \/ 8.074308 (7.160216) | 14.592700 \/ 10.191392 (4.401308) | 0.141952 \/ 0.680424 (-0.538472) | 0.017747 \/ 0.534201 (-0.516454) | 0.396819 \/ 0.579283 (-0.182465) | 0.415902 \/ 0.434364 (-0.018462) | 0.464619 \/ 0.540337 (-0.075718) | 0.560866 \/ 1.386936 (-0.826070) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#4b7f6c59deb868e21f295917548fa2df10dd0158 \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008278 \/ 0.011353 (-0.003075) | 0.005044 \/ 0.011008 (-0.005964) | 0.123382 \/ 0.038508 (0.084874) | 0.054039 \/ 0.023109 (0.030929) | 0.382338 \/ 0.275898 (0.106440) | 0.453287 \/ 0.323480 (0.129807) | 0.006342 \/ 0.007986 (-0.001644) | 0.003930 \/ 0.004328 (-0.000398) | 0.094039 \/ 0.004250 (0.089789) | 0.076525 \/ 0.037052 (0.039472) | 0.394066 \/ 0.258489 (0.135577) | 0.445600 \/ 0.293841 (0.151759) | 0.039348 \/ 0.128546 (-0.089199) | 0.010485 \/ 0.075646 (-0.065161) | 0.433730 \/ 0.419271 (0.014459) | 0.082671 \/ 0.043533 (0.039138) | 0.375250 \/ 0.255139 (0.120111) | 0.416269 \/ 0.283200 (0.133070) | 0.038397 \/ 0.141683 (-0.103286) | 1.864834 \/ 1.452155 (0.412680) | 2.010453 \/ 1.492716 (0.517737) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.240008 \/ 0.018006 (0.222002) | 0.470975 \/ 0.000490 (0.470485) | 0.004001 \/ 0.000200 (0.003801) | 0.000097 \/ 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.031107 \/ 0.037411 (-0.006304) | 0.129371 \/ 0.014526 (0.114846) | 0.141559 \/ 0.176557 (-0.034997) | 0.205571 \/ 0.737135 (-0.531564) | 0.144611 \/ 0.296338 (-0.151728) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.506972 \/ 0.215209 (0.291763) | 5.055951 \/ 2.077655 (2.978296) | 2.397438 \/ 1.504120 (0.893318) | 2.170435 \/ 1.541195 (0.629240) | 2.240296 \/ 1.468490 (0.771806) | 0.641559 \/ 4.584777 (-3.943218) | 
4.644772 \/ 3.745712 (0.899060) | 4.064200 \/ 5.269862 (-1.205662) | 1.946991 \/ 4.565676 (-2.618685) | 0.086413 \/ 0.424275 (-0.337862) | 0.015082 \/ 0.007607 (0.007475) | 0.670413 \/ 0.226044 (0.444369) | 6.331346 \/ 2.268929 (4.062418) | 2.965813 \/ 55.444624 (-52.478812) | 2.547952 \/ 6.876477 (-4.328524) | 2.718390 \/ 2.142072 (0.576318) | 0.796657 \/ 4.805227 (-4.008571) | 0.173229 \/ 6.500664 (-6.327435) | 0.079606 \/ 0.075469 (0.004137) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.568761 \/ 1.841788 (-0.273026) | 18.485432 \/ 8.074308 (10.411124) | 15.758513 \/ 10.191392 (5.567121) | 0.170427 \/ 0.680424 (-0.509997) | 0.021421 \/ 0.534201 (-0.512780) | 0.518623 \/ 0.579283 (-0.060660) | 0.525887 \/ 0.434364 (0.091523) | 0.640331 \/ 0.540337 (0.099993) | 0.766748 \/ 1.386936 (-0.620188) |\n\n<\/details>\nPyArrow==latest\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007680 \/ 0.011353 (-0.003673) | 0.005289 \/ 0.011008 (-0.005719) | 0.093773 \/ 0.038508 (0.055265) | 0.054997 \/ 0.023109 (0.031888) | 0.456277 \/ 0.275898 (0.180379) | 0.500642 \/ 0.323480 (0.177162) | 0.005935 \/ 0.007986 (-0.002050) | 0.004375 \/ 0.004328 (0.000047) | 0.094131 \/ 0.004250 (0.089881) | 0.063399 \/ 0.037052 (0.026347) | 0.470546 \/ 0.258489 (0.212057) | 0.504989 \/ 0.293841 (0.211148) | 0.038541 \/ 0.128546 (-0.090006) | 0.010403 \/ 0.075646 (-0.065244) | 0.102469 \/ 0.419271 (-0.316802) | 0.063105 \/ 0.043533 (0.019572) | 0.466005 \/ 0.255139 (0.210866) | 0.458677 \/ 0.283200 (0.175477) | 0.028407 \/ 0.141683 (-0.113276) | 1.893829 \/ 1.452155 (0.441675) | 1.917954 \/ 1.492716 (0.425238) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.272760 \/ 0.018006 (0.254754) | 0.476159 \/ 0.000490 (0.475669) | 0.008467 \/ 0.000200 (0.008267) | 0.000146 \/ 0.000054 (0.000091) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.035755 \/ 0.037411 (-0.001656) | 0.145038 \/ 0.014526 (0.130512) | 0.148322 \/ 0.176557 (-0.028235) | 0.210193 \/ 0.737135 (-0.526943) | 0.156547 \/ 0.296338 (-0.139792) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.541204 \/ 0.215209 (0.325995) | 5.382746 \/ 2.077655 (3.305091) | 2.704229 \/ 1.504120 (1.200109) | 2.468422 \/ 1.541195 (0.927227) | 2.522672 \/ 1.468490 (1.054182) | 0.644899 \/ 4.584777 (-3.939878) | 
4.654401 \/ 3.745712 (0.908689) | 2.159223 \/ 5.269862 (-3.110638) | 1.280098 \/ 4.565676 (-3.285578) | 0.080053 \/ 0.424275 (-0.344222) | 0.014383 \/ 0.007607 (0.006776) | 0.662770 \/ 0.226044 (0.436725) | 6.617651 \/ 2.268929 (4.348722) | 3.234347 \/ 55.444624 (-52.210277) | 2.861417 \/ 6.876477 (-4.015059) | 2.888928 \/ 2.142072 (0.746856) | 0.792854 \/ 4.805227 (-4.012374) | 0.172553 \/ 6.500664 (-6.328111) | 0.078402 \/ 0.075469 (0.002933) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.565351 \/ 1.841788 (-0.276436) | 18.681916 \/ 8.074308 (10.607608) | 17.264473 \/ 10.191392 (7.073081) | 0.168461 \/ 0.680424 (-0.511963) | 0.021353 \/ 0.534201 (-0.512848) | 0.517843 \/ 0.579283 (-0.061440) | 0.519907 \/ 0.434364 (0.085543) | 0.623687 \/ 0.540337 (0.083350) | 0.761796 \/ 1.386936 (-0.625140) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#bbf58747f734a46e75937bdbcbc05b06ade0224a \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006750 \/ 0.011353 (-0.004603) | 0.004268 \/ 0.011008 (-0.006741) | 0.098644 \/ 0.038508 (0.060136) | 0.044643 \/ 0.023109 (0.021534) | 0.309420 \/ 0.275898 (0.033522) | 0.379294 \/ 0.323480 (0.055815) | 0.005729 \/ 0.007986 (-0.002256) | 0.003615 \/ 0.004328 (-0.000714) | 0.076086 \/ 0.004250 (0.071835) | 0.068994 \/ 0.037052 (0.031942) | 0.325653 \/ 0.258489 (0.067164) | 0.375187 \/ 0.293841 (0.081347) | 0.032546 \/ 0.128546 (-0.096000) | 0.009089 \/ 0.075646 (-0.066557) | 0.329905 \/ 0.419271 (-0.089366) | 0.066832 \/ 0.043533 (0.023300) | 0.299247 \/ 0.255139 (0.044108) | 0.323460 \/ 0.283200 (0.040260) | 0.034226 \/ 0.141683 (-0.107457) | 1.475659 \/ 1.452155 (0.023505) | 1.556234 \/ 1.492716 (0.063518) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.292305 \/ 0.018006 (0.274299) | 0.542584 \/ 0.000490 (0.542094) | 0.003047 \/ 0.000200 (0.002847) | 0.000082 \/ 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030096 \/ 0.037411 (-0.007315) | 0.112341 \/ 0.014526 (0.097815) | 0.124965 \/ 0.176557 (-0.051591) | 0.183159 \/ 0.737135 (-0.553976) | 0.131885 \/ 0.296338 (-0.164453) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.426437 \/ 0.215209 (0.211228) | 4.260984 \/ 2.077655 (2.183330) | 2.078358 \/ 1.504120 (0.574238) | 1.877644 \/ 1.541195 (0.336449) | 2.044036 \/ 1.468490 (0.575546) | 0.532980 \/ 4.584777 (-4.051797) | 
3.749573 \/ 3.745712 (0.003860) | 1.944155 \/ 5.269862 (-3.325706) | 1.090307 \/ 4.565676 (-3.475370) | 0.065445 \/ 0.424275 (-0.358830) | 0.011237 \/ 0.007607 (0.003630) | 0.521448 \/ 0.226044 (0.295403) | 5.213118 \/ 2.268929 (2.944189) | 2.507829 \/ 55.444624 (-52.936795) | 2.177179 \/ 6.876477 (-4.699297) | 2.351161 \/ 2.142072 (0.209088) | 0.656775 \/ 4.805227 (-4.148452) | 0.141207 \/ 6.500664 (-6.359457) | 0.063286 \/ 0.075469 (-0.012183) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.190281 \/ 1.841788 (-0.651506) | 15.327424 \/ 8.074308 (7.253116) | 13.300695 \/ 10.191392 (3.109303) | 0.190484 \/ 0.680424 (-0.489939) | 0.017984 \/ 0.534201 (-0.516217) | 0.405714 \/ 0.579283 (-0.173569) | 0.435915 \/ 0.434364 (0.001551) | 0.494083 \/ 0.540337 (-0.046254) | 0.600616 \/ 1.386936 (-0.786320) |\n\n<\/details>\nPyArrow==latest\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006740 \/ 0.011353 (-0.004613) | 0.004289 \/ 0.011008 (-0.006719) | 0.076532 \/ 0.038508 (0.038024) | 0.043305 \/ 0.023109 (0.020196) | 0.356111 \/ 0.275898 (0.080213) | 0.434121 \/ 0.323480 (0.110641) | 0.005599 \/ 0.007986 (-0.002387) | 0.003461 \/ 0.004328 (-0.000868) | 0.077097 \/ 0.004250 (0.072847) | 0.055369 \/ 0.037052 (0.018317) | 0.367093 \/ 0.258489 (0.108604) | 0.418801 \/ 0.293841 (0.124960) | 0.032057 \/ 0.128546 (-0.096489) | 0.009048 \/ 0.075646 (-0.066599) | 0.082897 \/ 0.419271 (-0.336374) | 0.050287 \/ 0.043533 (0.006754) | 0.352060 \/ 0.255139 (0.096921) | 0.376278 \/ 0.283200 (0.093078) | 0.023924 \/ 0.141683 (-0.117759) | 1.522780 \/ 1.452155 (0.070626) | 1.578938 \/ 1.492716 (0.086222) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.287317 \/ 0.018006 (0.269311) | 0.508490 \/ 0.000490 (0.508000) | 0.000431 \/ 0.000200 (0.000231) | 0.000056 \/ 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.031139 \/ 0.037411 (-0.006272) | 0.113927 \/ 0.014526 (0.099401) | 0.128147 \/ 0.176557 (-0.048409) | 0.179712 \/ 0.737135 (-0.557424) | 0.134364 \/ 0.296338 (-0.161975) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.452834 \/ 0.215209 (0.237625) | 4.507944 \/ 2.077655 (2.430289) | 2.287758 \/ 1.504120 (0.783638) | 2.091145 \/ 1.541195 (0.549951) | 2.196228 \/ 1.468490 (0.727738) | 0.539306 \/ 4.584777 (-4.045471) | 
3.838941 \/ 3.745712 (0.093228) | 1.908801 \/ 5.269862 (-3.361060) | 1.139235 \/ 4.565676 (-3.426442) | 0.066677 \/ 0.424275 (-0.357599) | 0.011422 \/ 0.007607 (0.003815) | 0.562966 \/ 0.226044 (0.336921) | 5.633712 \/ 2.268929 (3.364784) | 2.788622 \/ 55.444624 (-52.656002) | 2.438465 \/ 6.876477 (-4.438012) | 2.523479 \/ 2.142072 (0.381407) | 0.668730 \/ 4.805227 (-4.136498) | 0.143977 \/ 6.500664 (-6.356687) | 0.064661 \/ 0.075469 (-0.010808) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.291708 \/ 1.841788 (-0.550080) | 15.573316 \/ 8.074308 (7.499008) | 14.435099 \/ 10.191392 (4.243707) | 0.147745 \/ 0.680424 (-0.532679) | 0.017602 \/ 0.534201 (-0.516599) | 0.401560 \/ 0.579283 (-0.177723) | 0.429861 \/ 0.434364 (-0.004502) | 0.469800 \/ 0.540337 (-0.070538) | 0.567515 \/ 1.386936 (-0.819421) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#79c340f5dcfd06340f180f6c6ea2d5ef81f49d98 \"CML watermark\")\n"],"created_at":1687271315000,"updated_at":1687354790000,"closed_at":1687354342000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5969","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5969","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5969.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5969.patch","merged_at":1687354342000},"body":"\"Requested\" in https:\/\/discuss.huggingface.co\/t\/utf-16-for-datasets\/43828\/3.\r\n\r\n`pd.read_json` also has these parameters, so it makes sense to be consistent.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5969\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5969\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5968","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5968\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5968\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5968\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5968","id":1765252561,"node_id":"I_kwDODunzps5pN53R","number":5968,"title":"Common Voice datasets still need 
`use_auth_token=True`","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["cc @pcuenca as well. \r\n\r\nNot super urgent btw","The issue commes from the dataset itself and is not related to the `datasets` lib\r\n\r\nsee https:\/\/huggingface.co\/datasets\/mozilla-foundation\/common_voice_6_1\/blob\/2c475b3b88e0f2e5828f830a4b91618a25ff20b7\/common_voice_6_1.py#L148-L152","Let's remove these lines in the dataset no? cc @anton-l @Vaibhavs10 "],"created_at":1687262317000,"updated_at":1687342117000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nWe don't need to pass `use_auth_token=True` anymore to download gated datasets or models, so the following should work if correctly logged in.\r\n\r\n```py\r\nfrom datasets import load_dataset\r\n\r\nload_dataset(\"mozilla-foundation\/common_voice_6_1\", \"tr\", split=\"train+validation\")\r\n```\r\n\r\nHowever it throws an error - probably because something weird is hardcoded into the dataset loading script.\n\n### Steps to reproduce the bug\n\n1.) \r\n```\r\nhuggingface-cli login\r\n```\r\n\r\n2.) Make sure that you have accepted the license here:\r\nhttps:\/\/huggingface.co\/datasets\/mozilla-foundation\/common_voice_6_1\r\n\r\n3.) Run:\r\n```py\r\nfrom datasets import load_dataset\r\n\r\nload_dataset(\"mozilla-foundation\/common_voice_6_1\", \"tr\", split=\"train+validation\")\r\n```\r\n\r\n4.) 
You'll get:\r\n\r\n```\r\nFile ~\/hf\/lib\/python3.10\/site-packages\/datasets\/builder.py:963, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)\r\n 961 split_dict = SplitDict(dataset_name=self.name)\r\n 962 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)\r\n--> 963 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n 965 # Checksums verification\r\n 966 if verification_mode == VerificationMode.ALL_CHECKS and dl_manager.record_checksums:\r\n\r\nFile ~\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/mozilla-foundation--common_voice_6_1\/f4d7854c466f5bd4908988dbd39044ec4fc634d89e0515ab0c51715c0127ffe3\/common_voice_6_1.py:150, in CommonVoice._split_generators(self, dl_manager)\r\n 148 hf_auth_token = dl_manager.download_config.use_auth_token\r\n 149 if hf_auth_token is None:\r\n--> 150 raise ConnectionError(\r\n 151 \"Please set use_auth_token=True or use_auth_token='' to download this dataset\"\r\n 152 )\r\n 154 bundle_url_template = STATS[\"bundleURLTemplate\"]\r\n 155 bundle_version = bundle_url_template.split(\"\/\")[0]\r\n\r\nConnectionError: Please set use_auth_token=True or use_auth_token='' to download this dataset\r\n```\n\n### Expected behavior\n\nOne should not have to pass `use_auth_token=True`. Also see discussion here: https:\/\/github.com\/huggingface\/blog\/pull\/1243#discussion_r1235131150\n\n### Environment info\n\n```\r\n- `datasets` version: 2.13.0\r\n- Platform: Linux-6.2.0-76060200-generic-x86_64-with-glibc2.35\r\n- Python version: 3.10.6\r\n- Huggingface_hub version: 0.16.0.dev0\r\n- PyArrow version: 11.0.0\r\n- Pandas version: 1.5.3\r\n```","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5968\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5968\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5967","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5967\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5967\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5967\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5967","id":1763926520,"node_id":"I_kwDODunzps5pI2H4","number":5967,"title":"Config name \/ split name lost after map with 
multiproc","user":{"login":"sanchit-gandhi","id":93869735,"node_id":"U_kgDOBZhWpw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/93869735?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sanchit-gandhi","html_url":"https:\/\/github.com\/sanchit-gandhi","followers_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/followers","following_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/orgs","repos_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/repos","events_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This must be due to DatasetInfo.from_merge which drops them and is used in `concatenate_datasets`.\r\n\r\nAnd you're experiencing this issue because multiprocessing does concatenate the resulting datasets from each process.\r\n\r\nMaybe they should be kept if all the subdatasets share the same values for config_name and split","That sounds like a clean workaround!"],"created_at":1687195656000,"updated_at":1687942525000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nPerforming a `.map` method on a dataset loses it's config name \/ split name only if run with multiproc\n\n### Steps to reproduce the bug\n\n```python\r\nfrom datasets import Audio, load_dataset\r\nfrom transformers import AutoFeatureExtractor\r\nimport numpy as np\r\n\r\n# load dummy dataset\r\nlibri = load_dataset(\"hf-internal-testing\/librispeech_asr_dummy\", \"clean\")\r\n\r\n# make train \/ test splits\r\nlibri = libri[\"validation\"].train_test_split(seed=42, shuffle=True, test_size=0.1)\r\n\r\n# example feature extractor\r\nmodel_id = \"ntu-spml\/distilhubert\"\r\nfeature_extractor = AutoFeatureExtractor.from_pretrained(model_id, do_normalize=True, return_attention_mask=True)\r\n\r\nsampling_rate = feature_extractor.sampling_rate\r\n\r\nlibri = libri.cast_column(\"audio\", Audio(sampling_rate=sampling_rate))\r\n\r\nmax_duration = 30.0\r\n\r\ndef preprocess_function(examples):\r\n audio_arrays = [x[\"array\"] for x in examples[\"audio\"]]\r\n inputs = feature_extractor(\r\n audio_arrays,\r\n sampling_rate=feature_extractor.sampling_rate,\r\n max_length=int(feature_extractor.sampling_rate * max_duration),\r\n truncation=True,\r\n return_attention_mask=True,\r\n )\r\n return inputs\r\n\r\n# single proc map\r\nlibri_encoded = libri.map(\r\n preprocess_function, remove_columns=[\"audio\", \"file\"], batched=True, num_proc=1\r\n)\r\n\r\nprint(10 * \"=\" ,\"Single processing\", 10 * \"=\")\r\nprint(\"Config name before: \", libri[\"train\"].config_name, \" Split name before: \", libri[\"train\"].split)\r\nprint(\"Config name after: \", libri_encoded[\"train\"].config_name, \" Split name after: \", libri_encoded[\"train\"].split)\r\n\r\n# multi proc map\r\nlibri_encoded = libri.map(\r\n preprocess_function, remove_columns=[\"audio\", \"file\"], batched=True, num_proc=2\r\n)\r\n\r\nprint(10 * \"=\" 
,\"Multi processing\", 10 * \"=\")\r\nprint(\"Config name before: \", libri[\"train\"].config_name, \" Split name before: \", libri[\"train\"].split)\r\nprint(\"Config name after: \", libri_encoded[\"train\"].config_name, \" Split name after: \", libri_encoded[\"train\"].split)\r\n```\r\n\r\n**Print Output:**\r\n```\r\n========== Single processing ==========\r\nConfig name before: clean Split name before: validation\r\nConfig name after: clean Split name after: validation\r\n========== Multi processing ==========\r\nConfig name before: clean Split name before: validation\r\nConfig name after: None Split name after: None\r\n```\r\n\r\n=> we can see that the config\/split names are lost in the multiprocessing setting\r\n\r\n\r\n\n\n### Expected behavior\n\nShould retain both config \/ split names in the multiproc setting\n\n### Environment info\n\n- `datasets` version: 2.13.1.dev0\r\n- Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.35\r\n- Python version: 3.10.6\r\n- Huggingface_hub version: 0.15.1\r\n- PyArrow version: 12.0.0\r\n- Pandas version: 2.0.2","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5967\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5967\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5966","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5966\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5966\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5966\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5966","id":1763885914,"node_id":"PR_kwDODunzps5TXBLP","number":5966,"title":"Fix JSON generation in benchmarks CI","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006186 \/ 0.011353 (-0.005167) | 0.003744 \/ 0.011008 (-0.007264) | 0.097295 \/ 0.038508 (0.058787) | 0.037106 \/ 0.023109 (0.013997) | 0.424154 \/ 0.275898 (0.148256) | 0.474536 \/ 0.323480 (0.151057) | 0.003454 \/ 0.007986 (-0.004532) | 0.003865 \/ 0.004328 (-0.000463) | 0.077348 \/ 0.004250 (0.073097) | 0.051728 \/ 0.037052 (0.014675) | 0.437120 \/ 0.258489 (0.178631) | 0.478379 \/ 0.293841 (0.184538) | 0.028939 \/ 0.128546 (-0.099608) | 0.008376 \/ 0.075646 (-0.067270) | 0.312002 \/ 0.419271 (-0.107270) | 0.053723 \/ 0.043533 (0.010190) | 0.424815 \/ 0.255139 (0.169676) | 0.446203 \/ 0.283200 (0.163004) | 0.026553 \/ 0.141683 (-0.115130) | 1.479983 \/ 1.452155 (0.027828) | 1.530613 \/ 1.492716 (0.037896) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.196627 \/ 0.018006 (0.178620) | 0.422361 \/ 0.000490 (0.421871) | 0.003442 \/ 0.000200 (0.003242) | 0.000077 \/ 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.022913 \/ 0.037411 (-0.014499) | 0.096011 \/ 0.014526 (0.081485) | 0.104091 \/ 0.176557 (-0.072466) | 0.163273 \/ 0.737135 (-0.573862) | 0.109142 \/ 0.296338 (-0.187197) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.431032 \/ 0.215209 (0.215823) | 4.314391 \/ 2.077655 (2.236737) | 2.003812 \/ 1.504120 (0.499692) | 1.799538 \/ 1.541195 (0.258344) | 1.830026 \/ 1.468490 (0.361536) | 0.560131 \/ 4.584777 (-4.024646) | 
3.368997 \/ 3.745712 (-0.376715) | 1.703032 \/ 5.269862 (-3.566830) | 1.026949 \/ 4.565676 (-3.538727) | 0.067507 \/ 0.424275 (-0.356768) | 0.010910 \/ 0.007607 (0.003303) | 0.532606 \/ 0.226044 (0.306562) | 5.345179 \/ 2.268929 (3.076250) | 2.368077 \/ 55.444624 (-53.076548) | 2.028913 \/ 6.876477 (-4.847564) | 2.147621 \/ 2.142072 (0.005549) | 0.675696 \/ 4.805227 (-4.129531) | 0.134902 \/ 6.500664 (-6.365762) | 0.065004 \/ 0.075469 (-0.010465) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.233412 \/ 1.841788 (-0.608376) | 13.767465 \/ 8.074308 (5.693157) | 13.933653 \/ 10.191392 (3.742261) | 0.129010 \/ 0.680424 (-0.551414) | 0.016708 \/ 0.534201 (-0.517493) | 0.362341 \/ 0.579283 (-0.216942) | 0.390902 \/ 0.434364 (-0.043462) | 0.429156 \/ 0.540337 (-0.111182) | 0.521166 \/ 1.386936 (-0.865770) |\n\n<\/details>\nPyArrow==latest\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006169 \/ 0.011353 (-0.005184) | 0.003839 \/ 0.011008 (-0.007169) | 0.078784 \/ 0.038508 (0.040276) | 0.040218 \/ 0.023109 (0.017109) | 0.360439 \/ 0.275898 (0.084541) | 0.423957 \/ 0.323480 (0.100477) | 0.003456 \/ 0.007986 (-0.004529) | 0.002900 \/ 0.004328 (-0.001428) | 0.078820 \/ 0.004250 (0.074569) | 0.047240 \/ 0.037052 (0.010187) | 0.372081 \/ 0.258489 (0.113592) | 0.424263 \/ 0.293841 (0.130422) | 0.027977 \/ 0.128546 (-0.100569) | 0.008400 \/ 0.075646 (-0.067246) | 0.084399 \/ 0.419271 (-0.334872) | 0.043303 \/ 0.043533 (-0.000230) | 0.361583 \/ 0.255139 (0.106444) | 0.394987 \/ 0.283200 (0.111787) | 0.020006 \/ 0.141683 (-0.121677) | 1.520208 \/ 1.452155 (0.068053) | 1.587335 \/ 1.492716 (0.094619) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.223847 \/ 0.018006 (0.205840) | 0.402194 \/ 0.000490 (0.401704) | 0.000384 \/ 0.000200 (0.000184) | 0.000057 \/ 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.024902 \/ 0.037411 (-0.012509) | 0.099076 \/ 0.014526 (0.084550) | 0.108041 \/ 0.176557 (-0.068516) | 0.159385 \/ 0.737135 (-0.577750) | 0.111442 \/ 0.296338 (-0.184896) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.446232 \/ 0.215209 (0.231023) | 4.464927 \/ 2.077655 (2.387272) | 2.155234 \/ 1.504120 (0.651114) | 1.953645 \/ 1.541195 (0.412450) | 1.965991 \/ 1.468490 (0.497501) | 0.553473 \/ 4.584777 (-4.031304) | 
3.321397 \/ 3.745712 (-0.424315) | 1.693761 \/ 5.269862 (-3.576101) | 1.006299 \/ 4.565676 (-3.559378) | 0.067013 \/ 0.424275 (-0.357262) | 0.011116 \/ 0.007607 (0.003509) | 0.555014 \/ 0.226044 (0.328970) | 5.535694 \/ 2.268929 (3.266765) | 2.598339 \/ 55.444624 (-52.846285) | 2.249298 \/ 6.876477 (-4.627179) | 2.243419 \/ 2.142072 (0.101347) | 0.667603 \/ 4.805227 (-4.137624) | 0.133322 \/ 6.500664 (-6.367343) | 0.065473 \/ 0.075469 (-0.009996) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.293051 \/ 1.841788 (-0.548737) | 14.103731 \/ 8.074308 (6.029423) | 14.215204 \/ 10.191392 (4.023812) | 0.143990 \/ 0.680424 (-0.536434) | 0.016805 \/ 0.534201 (-0.517396) | 0.363264 \/ 0.579283 (-0.216019) | 0.392769 \/ 0.434364 (-0.041594) | 0.425291 \/ 0.540337 (-0.115046) | 0.515479 \/ 1.386936 (-0.871457) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#e03a58f3f5d7e6f07279fb833e62d859a0babaad \"CML watermark\")\n","_The documentation is not available anymore as the PR was closed or merged._","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006346 \/ 0.011353 (-0.005006) | 0.004130 \/ 0.011008 (-0.006878) | 0.096898 \/ 0.038508 (0.058390) | 0.042564 \/ 0.023109 (0.019455) | 0.343748 \/ 0.275898 (0.067850) | 0.412515 \/ 0.323480 (0.089035) | 0.006153 \/ 0.007986 (-0.001833) | 0.003345 \/ 0.004328 (-0.000984) | 0.075314 \/ 0.004250 (0.071064) | 0.061478 \/ 0.037052 (0.024426) | 0.362948 \/ 0.258489 (0.104459) | 0.401533 \/ 0.293841 (0.107692) | 0.032363 \/ 0.128546 (-0.096184) | 0.008780 \/ 0.075646 (-0.066867) | 0.328691 \/ 0.419271 (-0.090580) | 0.054253 \/ 0.043533 (0.010721) | 0.340783 \/ 0.255139 (0.085644) | 0.360705 \/ 0.283200 (0.077505) | 0.023183 \/ 0.141683 (-0.118500) | 1.484078 \/ 1.452155 (0.031924) | 1.528581 \/ 1.492716 (0.035865) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.208732 \/ 0.018006 (0.190726) | 0.452572 \/ 0.000490 (0.452082) | 0.002936 \/ 0.000200 (0.002737) | 0.000082 \/ 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.024616 \/ 0.037411 (-0.012795) | 0.107547 \/ 0.014526 (0.093021) | 0.114492 \/ 0.176557 (-0.062065) | 0.171770 \/ 0.737135 (-0.565365) | 0.122538 \/ 0.296338 (-0.173800) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.406140 \/ 0.215209 (0.190930) | 4.062391 \/ 2.077655 (1.984736) | 1.865962 \/ 1.504120 (0.361842) | 1.682236 \/ 1.541195 (0.141041) | 1.738119 \/ 1.468490 (0.269629) | 0.532244 \/ 4.584777 (-4.052533) | 
3.816421 \/ 3.745712 (0.070709) | 2.981205 \/ 5.269862 (-2.288656) | 1.519497 \/ 4.565676 (-3.046179) | 0.065904 \/ 0.424275 (-0.358371) | 0.011277 \/ 0.007607 (0.003670) | 0.512789 \/ 0.226044 (0.286745) | 5.107618 \/ 2.268929 (2.838690) | 2.419399 \/ 55.444624 (-53.025226) | 2.079262 \/ 6.876477 (-4.797214) | 2.150447 \/ 2.142072 (0.008375) | 0.696737 \/ 4.805227 (-4.108490) | 0.142497 \/ 6.500664 (-6.358167) | 0.063521 \/ 0.075469 (-0.011949) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.180692 \/ 1.841788 (-0.661095) | 14.343084 \/ 8.074308 (6.268776) | 13.303719 \/ 10.191392 (3.112327) | 0.164234 \/ 0.680424 (-0.516190) | 0.017439 \/ 0.534201 (-0.516762) | 0.399712 \/ 0.579283 (-0.179571) | 0.428248 \/ 0.434364 (-0.006115) | 0.471909 \/ 0.540337 (-0.068428) | 0.573853 \/ 1.386936 (-0.813083) |\n\n<\/details>\nPyArrow==latest\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006210 \/ 0.011353 (-0.005143) | 0.004104 \/ 0.011008 (-0.006905) | 0.075140 \/ 0.038508 (0.036632) | 0.044647 \/ 0.023109 (0.021538) | 0.370120 \/ 0.275898 (0.094222) | 0.452936 \/ 0.323480 (0.129457) | 0.003943 \/ 0.007986 (-0.004042) | 0.003285 \/ 0.004328 (-0.001043) | 0.075267 \/ 0.004250 (0.071017) | 0.055517 \/ 0.037052 (0.018465) | 0.396385 \/ 0.258489 (0.137896) | 0.447870 \/ 0.293841 (0.154029) | 0.031342 \/ 0.128546 (-0.097204) | 0.008720 \/ 0.075646 (-0.066926) | 0.082702 \/ 0.419271 (-0.336570) | 0.051010 \/ 0.043533 (0.007477) | 0.350546 \/ 0.255139 (0.095407) | 0.425395 \/ 0.283200 (0.142195) | 0.024483 \/ 0.141683 (-0.117200) | 1.467341 \/ 1.452155 (0.015186) | 1.537187 \/ 1.492716 (0.044471) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.218067 \/ 0.018006 (0.200061) | 0.441603 \/ 0.000490 (0.441114) | 0.003711 \/ 0.000200 (0.003512) | 0.000092 \/ 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.028669 \/ 0.037411 (-0.008742) | 0.112941 \/ 0.014526 (0.098415) | 0.122584 \/ 0.176557 (-0.053972) | 0.176494 \/ 0.737135 (-0.560641) | 0.129369 \/ 0.296338 (-0.166970) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.434543 \/ 0.215209 (0.219334) | 4.344056 \/ 2.077655 (2.266401) | 2.079286 \/ 1.504120 (0.575166) | 1.887264 \/ 1.541195 (0.346069) | 1.910386 \/ 1.468490 (0.441896) | 0.538824 \/ 4.584777 (-4.045953) | 
3.844786 \/ 3.745712 (0.099074) | 2.902091 \/ 5.269862 (-2.367770) | 1.270852 \/ 4.565676 (-3.294824) | 0.066324 \/ 0.424275 (-0.357951) | 0.011346 \/ 0.007607 (0.003739) | 0.537122 \/ 0.226044 (0.311078) | 5.367354 \/ 2.268929 (3.098426) | 2.533672 \/ 55.444624 (-52.910952) | 2.203260 \/ 6.876477 (-4.673217) | 2.224310 \/ 2.142072 (0.082237) | 0.663806 \/ 4.805227 (-4.141422) | 0.142758 \/ 6.500664 (-6.357906) | 0.063870 \/ 0.075469 (-0.011599) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.260487 \/ 1.841788 (-0.581301) | 14.800106 \/ 8.074308 (6.725798) | 13.993488 \/ 10.191392 (3.802096) | 0.165829 \/ 0.680424 (-0.514595) | 0.017347 \/ 0.534201 (-0.516854) | 0.401819 \/ 0.579283 (-0.177464) | 0.424577 \/ 0.434364 (-0.009787) | 0.475161 \/ 0.540337 (-0.065176) | 0.574659 \/ 1.386936 (-0.812277) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#02e1e9ab6df4720f57b2d08c0b800cecac79a7c8 \"CML watermark\")\n"],"created_at":1687193766000,"updated_at":1687195751000,"closed_at":1687195330000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5966","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5966","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5966.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5966.patch","merged_at":1687195330000},"body":"Related to changes made in https:\/\/github.com\/iterative\/dvc\/pull\/9475","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5966\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5966\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5965","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5965\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5965\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5965\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5965","id":1763648540,"node_id":"I_kwDODunzps5pHyQc","number":5965,"title":"\"Couldn't cast array of type\" in complex 
datasets","user":{"login":"piercefreeman","id":1712066,"node_id":"MDQ6VXNlcjE3MTIwNjY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1712066?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/piercefreeman","html_url":"https:\/\/github.com\/piercefreeman","followers_url":"https:\/\/api.github.com\/users\/piercefreeman\/followers","following_url":"https:\/\/api.github.com\/users\/piercefreeman\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/piercefreeman\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/piercefreeman\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/piercefreeman\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/piercefreeman\/orgs","repos_url":"https:\/\/api.github.com\/users\/piercefreeman\/repos","events_url":"https:\/\/api.github.com\/users\/piercefreeman\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/piercefreeman\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":{"login":"mariosasko","id":47462742.0,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"assignees":[{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting! 
\r\n\r\nSpecifying the target features explicitly should avoid this error:\r\n```python\r\ndataset = dataset.map(\r\n batch_process,\r\n batched=True,\r\n batch_size=1,\r\n num_proc=1,\r\n remove_columns=dataset.column_names,\r\n features=datasets.Features({\"texts\": datasets.Sequence(datasets.Value(\"string\"))})\r\n)\r\n```\r\n\r\nThis error stems from our type promotion not handling the nested case. But this promotion\/casting allocates memory in most scenarios, which can be problematic for large datasets, so explicitly passing the features is the optimal solution.","Hi @mariosasko thanks for the context, this is helpful to know. Would it be worth having some logic to generate this explicit feature specification automatically if a type annotation for a .map returns a dataclass that can be inferred?\r\n\r\nFeels like something that would be easy to implement and could save memory \/ deal with this case in a standardized way.","> . Would it be worth having some logic to generate this explicit feature specification automatically if a type annotation for a .map returns a dataclass that can be inferred?\r\n\r\nInteresting proposal! Yes, we could consider doing this if the (return) type hint is `TypedDict`, and raise an error that type hints are incorrect if the cast using the inferred types fails."],"created_at":1687184174000,"updated_at":1687377603000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nWhen doing a map of a dataset with complex types, sometimes `datasets` is unable to interpret the valid schema of a returned datasets.map() function. This often comes from conflicting types, like when both empty lists and filled lists are competing for the same field value.\r\n\r\nThis is prone to happen in batch mapping, when the mapper returns a sequence of null\/empty values and other batches are non-null. A workaround is to manually cast the new batch to a pyarrow table (like implemented in this [workaround](https:\/\/github.com\/piercefreeman\/lassen\/pull\/3)) but it feels like this ideally should be solved at the core library level.\r\n\r\nNote that the reproduction case only throws this error if the first datapoint has the empty list. 
If it is processed later, datasets already detects its representation as list-type and therefore allows the empty list to be provided.\n\n### Steps to reproduce the bug\n\nA trivial reproduction case:\r\n\r\n```python\r\nfrom typing import Iterator, Any\r\nimport pandas as pd\r\nfrom datasets import Dataset\r\n\r\ndef batch_to_examples(batch: dict[str, list[Any]]) -> Iterator[dict[str, Any]]:\r\n for i in range(next(iter(lengths))):\r\n yield {feature: values[i] for feature, values in batch.items()}\r\n\r\ndef examples_to_batch(examples) -> dict[str, list[Any]]:\r\n batch = {}\r\n\r\n for example in examples:\r\n for feature, value in example.items():\r\n if feature not in batch:\r\n batch[feature] = []\r\n batch[feature].append(value)\r\n\r\n return batch\r\n\r\ndef batch_process(examples, explicit_schema: bool):\r\n new_examples = []\r\n for example in batch_to_examples(examples):\r\n new_examples.append(dict(texts=example[\"raw_text\"].split()))\r\n return examples_to_batch(new_examples)\r\n\r\ndf = pd.DataFrame(\r\n [\r\n {\"raw_text\": \"\"},\r\n {\"raw_text\": \"This is a test\"},\r\n {\"raw_text\": \"This is another test\"},\r\n ]\r\n)\r\n\r\ndataset = Dataset.from_pandas(df)\r\n\r\n# datasets won't be able to typehint a dataset that starts with an empty example.\r\nwith pytest.raises(TypeError, match=\"Couldn't cast array of type\"):\r\n dataset = dataset.map(\r\n batch_process,\r\n batched=True,\r\n batch_size=1,\r\n num_proc=1,\r\n remove_columns=dataset.column_names,\r\n )\r\n```\r\n\r\nThis results in crashes like:\r\n\r\n```bash\r\n File \"\/Users\/piercefreeman\/Library\/Caches\/pypoetry\/virtualenvs\/example-9kBqeSPy-py3.11\/lib\/python3.11\/site-packages\/datasets\/table.py\", line 1819, in wrapper\r\n return func(array, *args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"\/Users\/piercefreeman\/Library\/Caches\/pypoetry\/virtualenvs\/example-9kBqeSPy-py3.11\/lib\/python3.11\/site-packages\/datasets\/table.py\", line 2109, in cast_array_to_feature\r\n return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"\/Users\/piercefreeman\/Library\/Caches\/pypoetry\/virtualenvs\/example-9kBqeSPy-py3.11\/lib\/python3.11\/site-packages\/datasets\/table.py\", line 1819, in wrapper\r\n return func(array, *args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"\/Users\/piercefreeman\/Library\/Caches\/pypoetry\/virtualenvs\/example-9kBqeSPy-py3.11\/lib\/python3.11\/site-packages\/datasets\/table.py\", line 1998, in array_cast\r\n raise TypeError(f\"Couldn't cast array of type {array.type} to {pa_type}\")\r\nTypeError: Couldn't cast array of type string to null\r\n```\n\n### Expected behavior\n\nThe code should successfully map and create a new dataset without error.\n\n### Environment info\n\nMac OSX, Linux","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5965\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5965\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} 
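A minimal, self-contained sketch of the workaround suggested in the thread on issue 5965 above: passing an explicit `features` schema to `.map` so that a first batch yielding only empty lists cannot be inferred as a `null`-typed column. The toy dataset and function names here are illustrative, not taken verbatim from the report.

```python
import datasets
import pandas as pd

# Toy dataset whose first row would otherwise be inferred as a null-typed list.
dataset = datasets.Dataset.from_pandas(
    pd.DataFrame([{"raw_text": ""}, {"raw_text": "This is a test"}])
)

def batch_process(batch):
    # "".split() yields [], so the first batch alone carries no type information.
    return {"texts": [text.split() for text in batch["raw_text"]]}

# Declaring the target schema up front avoids the
# "Couldn't cast array of type string to null" error and also skips
# the memory-hungry type-promotion cast mentioned in the thread.
dataset = dataset.map(
    batch_process,
    batched=True,
    batch_size=1,
    remove_columns=dataset.column_names,
    features=datasets.Features(
        {"texts": datasets.Sequence(datasets.Value("string"))}
    ),
)
print(dataset.features)
```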
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5964","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5964\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5964\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5964\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5964","id":1763513574,"node_id":"PR_kwDODunzps5TVweZ","number":5964,"title":"Always return list in `list_datasets`","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006795 \/ 0.011353 (-0.004558) | 0.004170 \/ 0.011008 (-0.006838) | 0.098698 \/ 0.038508 (0.060190) | 0.045393 \/ 0.023109 (0.022284) | 0.309205 \/ 0.275898 (0.033307) | 0.361333 \/ 0.323480 (0.037853) | 0.006009 \/ 0.007986 (-0.001977) | 0.003334 \/ 0.004328 (-0.000995) | 0.075071 \/ 0.004250 (0.070821) | 0.062587 \/ 0.037052 (0.025535) | 0.322395 \/ 0.258489 (0.063906) | 0.360499 \/ 0.293841 (0.066659) | 0.032243 \/ 0.128546 (-0.096303) | 0.008768 \/ 0.075646 (-0.066878) | 0.329799 \/ 0.419271 (-0.089472) | 0.062261 \/ 0.043533 (0.018728) | 0.298112 \/ 0.255139 (0.042973) | 0.322815 \/ 0.283200 (0.039615) | 0.032348 \/ 0.141683 (-0.109335) | 1.445807 \/ 1.452155 (-0.006347) | 1.528768 \/ 1.492716 (0.036051) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.195701 \/ 0.018006 (0.177695) | 0.437042 \/ 0.000490 (0.436552) | 0.003867 \/ 0.000200 (0.003667) | 0.000080 \/ 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.026713 \/ 0.037411 (-0.010698) | 0.109548 \/ 0.014526 (0.095022) | 0.119216 \/ 0.176557 (-0.057341) | 0.178947 \/ 0.737135 (-0.558188) | 0.125224 \/ 0.296338 (-0.171114) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.400885 \/ 0.215209 (0.185676) | 3.991223 \/ 2.077655 (1.913568) | 1.818449 \/ 1.504120 (0.314329) | 1.609285 \/ 1.541195 (0.068090) | 1.666675 \/ 1.468490 (0.198184) | 0.531486 \/ 4.584777 (-4.053291) | 
3.770142 \/ 3.745712 (0.024430) | 3.057189 \/ 5.269862 (-2.212673) | 1.517491 \/ 4.565676 (-3.048186) | 0.065782 \/ 0.424275 (-0.358493) | 0.011251 \/ 0.007607 (0.003644) | 0.504277 \/ 0.226044 (0.278233) | 5.038979 \/ 2.268929 (2.770050) | 2.254717 \/ 55.444624 (-53.189908) | 1.929743 \/ 6.876477 (-4.946734) | 2.080051 \/ 2.142072 (-0.062022) | 0.656831 \/ 4.805227 (-4.148396) | 0.142860 \/ 6.500664 (-6.357804) | 0.063057 \/ 0.075469 (-0.012412) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.208819 \/ 1.841788 (-0.632969) | 14.456966 \/ 8.074308 (6.382658) | 12.839799 \/ 10.191392 (2.648407) | 0.164361 \/ 0.680424 (-0.516063) | 0.017330 \/ 0.534201 (-0.516871) | 0.397384 \/ 0.579283 (-0.181899) | 0.422704 \/ 0.434364 (-0.011660) | 0.472065 \/ 0.540337 (-0.068273) | 0.576960 \/ 1.386936 (-0.809976) |\n\n<\/details>\nPyArrow==latest\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006950 \/ 0.011353 (-0.004403) | 0.004012 \/ 0.011008 (-0.006997) | 0.076050 \/ 0.038508 (0.037542) | 0.046646 \/ 0.023109 (0.023537) | 0.353813 \/ 0.275898 (0.077915) | 0.417111 \/ 0.323480 (0.093631) | 0.005422 \/ 0.007986 (-0.002564) | 0.003356 \/ 0.004328 (-0.000972) | 0.076662 \/ 0.004250 (0.072411) | 0.055018 \/ 0.037052 (0.017966) | 0.371561 \/ 0.258489 (0.113072) | 0.410471 \/ 0.293841 (0.116630) | 0.031860 \/ 0.128546 (-0.096686) | 0.008754 \/ 0.075646 (-0.066893) | 0.083192 \/ 0.419271 (-0.336079) | 0.050479 \/ 0.043533 (0.006946) | 0.351725 \/ 0.255139 (0.096586) | 0.371596 \/ 0.283200 (0.088396) | 0.023042 \/ 0.141683 (-0.118641) | 1.480533 \/ 1.452155 (0.028379) | 1.545970 \/ 1.492716 (0.053254) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.220095 \/ 0.018006 (0.202089) | 0.441550 \/ 0.000490 (0.441061) | 0.000375 \/ 0.000200 (0.000175) | 0.000056 \/ 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029527 \/ 0.037411 (-0.007884) | 0.111645 \/ 0.014526 (0.097119) | 0.125732 \/ 0.176557 (-0.050825) | 0.177322 \/ 0.737135 (-0.559813) | 0.128620 \/ 0.296338 (-0.167718) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.432415 \/ 0.215209 (0.217206) | 4.314381 \/ 2.077655 (2.236726) | 2.079450 \/ 1.504120 (0.575331) | 1.893139 \/ 1.541195 (0.351944) | 1.951363 \/ 1.468490 (0.482873) | 0.531466 \/ 4.584777 (-4.053311) | 
3.716860 \/ 3.745712 (-0.028852) | 1.850111 \/ 5.269862 (-3.419750) | 1.100676 \/ 4.565676 (-3.465000) | 0.066247 \/ 0.424275 (-0.358028) | 0.011503 \/ 0.007607 (0.003896) | 0.537208 \/ 0.226044 (0.311164) | 5.367560 \/ 2.268929 (3.098631) | 2.543697 \/ 55.444624 (-52.900927) | 2.221670 \/ 6.876477 (-4.654806) | 2.252009 \/ 2.142072 (0.109937) | 0.658509 \/ 4.805227 (-4.146718) | 0.142345 \/ 6.500664 (-6.358319) | 0.064701 \/ 0.075469 (-0.010768) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.266442 \/ 1.841788 (-0.575346) | 15.105953 \/ 8.074308 (7.031645) | 14.288229 \/ 10.191392 (4.096837) | 0.161182 \/ 0.680424 (-0.519242) | 0.017074 \/ 0.534201 (-0.517127) | 0.399464 \/ 0.579283 (-0.179819) | 0.419459 \/ 0.434364 (-0.014905) | 0.467553 \/ 0.540337 (-0.072784) | 0.566337 \/ 1.386936 (-0.820599) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#53ac2d9662f9e5923ae7c52199eaa620d82f0043 \"CML watermark\")\n"],"created_at":1687180028000,"updated_at":1687195777000,"closed_at":1687195361000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5964","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5964","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5964.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5964.patch","merged_at":1687195361000},"body":"Fix #5925 \r\n\r\nPlus, deprecate `list_datasets`\/`inspect_dataset` in favor of `huggingface_hub.list_datasets`\/\"git clone workflow\" (downloads data files)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5964\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5964\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5963","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5963\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5963\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5963\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5963","id":1762774457,"node_id":"I_kwDODunzps5pEc25","number":5963,"title":"Got an error _pickle.PicklingError use 
Dataset.from_spark.","user":{"login":"yanzia12138","id":112800614,"node_id":"U_kgDOBrkzZg","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/112800614?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yanzia12138","html_url":"https:\/\/github.com\/yanzia12138","followers_url":"https:\/\/api.github.com\/users\/yanzia12138\/followers","following_url":"https:\/\/api.github.com\/users\/yanzia12138\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yanzia12138\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yanzia12138\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yanzia12138\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yanzia12138\/orgs","repos_url":"https:\/\/api.github.com\/users\/yanzia12138\/repos","events_url":"https:\/\/api.github.com\/users\/yanzia12138\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yanzia12138\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I got this error using the method from_spark on a multi-node Spark cluster. It seems \"from_spark\" can only be used locally?","@lhoestq ","cc @maddiedawson it looks like there is an issue with `_validate_cache_dir` ?\r\n\r\nIt looks like the function passed to mapPartitions has a reference to the Spark dataset builder, and therefore contains the SparkContext itself.\r\n\r\nI think it can be fixed by defining `create_cache_and_write_probe` outside the Spark dataset builder, and passing a `partial(create_cache_and_write_probe, cache_dir=self._cache_dir)` to `mapPartitions`","Just saw this; thanks for flagging! Your proposed solution sounds good. I can prepare a PR","@maddiedawson can you show me the demo, so I can test locally before your PR?"],"created_at":1687152635000,"updated_at":1688003906000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":" Python 3.9.2\r\nGot an error _pickle.PicklingError using Dataset.from_spark.\r\n\r\nI tried to load data from a Spark dataframe using a multi-node Spark cluster:\r\ndf = spark.read.parquet(args.input_data).repartition(50)\r\nds = Dataset.from_spark(df, keep_in_memory=True,\r\n cache_dir=\"\/pnc-data\/data\/nuplan\/t5_spark\/cache_data\")\r\nds.save_to_disk(args.output_data)\r\n\r\nError : \r\n_pickle.PicklingError: Could not serialize object: RuntimeError: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transforma\r\ntion. SparkContext can only be used on the driver, not in code that it run on workers. 
For more information, see SPARK-5063.\r\n23\/06\/16 21:17:20 WARN ExecutorPodsWatchSnapshotSource: Kubernetes client has been closed (this is expected if the application is shutting down.)\r\n\r\n_Originally posted by @yanzia12138 in https:\/\/github.com\/huggingface\/datasets\/issues\/5701#issuecomment-1594674306_\r\n \r\nW\r\nTraceback (most recent call last):\r\n File \"\/home\/work\/main.py\", line 100, in \r\n run(args)\r\n File \"\/home\/work\/main.py\", line 80, in run\r\n ds = Dataset.from_spark(df1, keep_in_memory=True,\r\n File \"\/home\/work\/.local\/lib\/python3.9\/site-packages\/datasets\/arrow_dataset.py\", line 1281, in from_spark\r\n return SparkDatasetReader(\r\n File \"\/home\/work\/.local\/lib\/python3.9\/site-packages\/datasets\/io\/spark.py\", line 53, in read\r\n self.builder.download_and_prepare(\r\n File \"\/home\/work\/.local\/lib\/python3.9\/site-packages\/datasets\/builder.py\", line 909, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/home\/work\/.local\/lib\/python3.9\/site-packages\/datasets\/builder.py\", line 1004, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"\/home\/work\/.local\/lib\/python3.9\/site-packages\/datasets\/packaged_modules\/spark\/spark.py\", line 254, in _prepare_split\r\n self._validate_cache_dir()\r\n File \"\/home\/work\/.local\/lib\/python3.9\/site-packages\/datasets\/packaged_modules\/spark\/spark.py\", line 122, in _validate_cache_dir\r\n self._spark.sparkContext.parallelize(range(1), 1).mapPartitions(create_cache_and_write_probe).collect()\r\n File \"\/home\/work\/.local\/lib\/python3.9\/site-packages\/pyspark\/rdd.py\", line 950, in collect\r\n sock_info = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())\r\n File \"\/home\/work\/.local\/lib\/python3.9\/site-packages\/pyspark\/rdd.py\", line 2951, in _jrdd\r\n wrapped_func = _wrap_function(self.ctx, self.func, self._prev_jrdd_deserializer,\r\n File \"\/home\/work\/.local\/lib\/python3.9\/site-packages\/pyspark\/rdd.py\", line 2830, in _wrap_function\r\n pickled_command, broadcast_vars, env, includes = _prepare_for_python_RDD(sc, command)\r\n File \"\/home\/work\/.local\/lib\/python3.9\/site-packages\/pyspark\/rdd.py\", line 2816, in _prepare_for_python_RDD\r\n pickled_command = ser.dumps(command)\r\n File \"\/home\/work\/.local\/lib\/python3.9\/site-packages\/pyspark\/serializers.py\", line 447, in dumps\r\n raise pickle.PicklingError(msg)\r\n_pickle.PicklingError: Could not serialize object: RuntimeError: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. S\r\nparkContext can only be used on the driver, not in code that it run on workers. 
For more information, see SPARK-5063.\r\n23\/06\/19 13:51:21 WARN ExecutorPodsWatchSnapshotSource: Kubernetes client has been closed (this is expected if the application is shutting down.)\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5963\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5963\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false}
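The fix proposed in the comments above keeps the probe function out of the builder object, so that pickling it for `mapPartitions` no longer drags the builder (and with it the SparkContext) into the closure. A minimal sketch of that idea; `SparkBuilderSketch` is a hypothetical stand-in for the real builder in `datasets/packaged_modules/spark/spark.py`, not its actual code:

```python
import os
from functools import partial


def create_cache_and_write_probe(iterator, cache_dir):
    # Module-level function: serializing it does not capture the builder,
    # and therefore does not capture the SparkContext the builder holds.
    os.makedirs(cache_dir, exist_ok=True)
    probe_file = os.path.join(cache_dir, "fs_probe")  # hypothetical probe file name
    with open(probe_file, "a"):
        pass
    yield os.path.exists(probe_file)


class SparkBuilderSketch:  # hypothetical stand-in for the real Spark builder
    def __init__(self, spark, cache_dir):
        self._spark = spark
        self._cache_dir = cache_dir

    def _validate_cache_dir(self):
        # Binding cache_dir with functools.partial keeps `self` out of the
        # pickled closure shipped to the workers, avoiding the PicklingError.
        probe = partial(create_cache_and_write_probe, cache_dir=self._cache_dir)
        return (
            self._spark.sparkContext.parallelize(range(1), 1)
            .mapPartitions(probe)
            .collect()
        )
```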
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5962","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5962\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5962\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5962\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5962","id":1761589882,"node_id":"I_kwDODunzps5o_7p6","number":5962,"title":"Issue with train_test_split maintaining the same underlying PyArrow Table","user":{"login":"Oziel14","id":70730520,"node_id":"MDQ6VXNlcjcwNzMwNTIw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/70730520?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Oziel14","html_url":"https:\/\/github.com\/Oziel14","followers_url":"https:\/\/api.github.com\/users\/Oziel14\/followers","following_url":"https:\/\/api.github.com\/users\/Oziel14\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Oziel14\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Oziel14\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Oziel14\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Oziel14\/orgs","repos_url":"https:\/\/api.github.com\/users\/Oziel14\/repos","events_url":"https:\/\/api.github.com\/users\/Oziel14\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Oziel14\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1686968398000,"updated_at":1686968398000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nI've been using the train_test_split method in the datasets module to split my HuggingFace Dataset into separate training, validation, and testing subsets. However, I've noticed an issue where the split datasets appear to maintain the same underlying PyArrow Table.\n\n### Steps to reproduce the bug\n\n1. Load any dataset ```dataset = load_dataset(\"lhoestq\/demo1\")``` \r\n2. Try the next code:\r\n```python\r\nfrom datasets import Dataset, DatasetDict\r\n\r\ntrain_size = 0.6\r\n\r\nsplit_train = dataset[\"train\"].train_test_split(\r\n train_size=train_size,\r\n)\r\n\r\nseparate_dataset_dict = DatasetDict({\r\n \"train\": split_train[\"train\"],\r\n \"test\": split_train[\"test\"],\r\n})\r\n```\r\n3. Printing the dataset with ```print(separate_dataset_dict)``` indicates that the splits have 3 and 2 rows respectively.\r\n4. But the next code: \r\n ```python\r\nprint(len(separate_dataset_dict[\"train\"].data['id']))\r\nprint(len(separate_dataset_dict[\"test\"].data['id'])) \r\n```\r\n\r\n indicates that both tables still have 5 rows.\n\n### Expected behavior\n\nI've noticed that train_test_split[\"train\"].data, test_val_split[\"train\"].data, and test_val_split[\"test\"].data are identical, suggesting that they all point to the same underlying PyArrow Table. This means that the split datasets are not independent, as I expected.\r\n\r\nI believe this is a bug in the train_test_split implementation, as I would expect this function to return datasets with separate underlying PyArrow Tables. Could you please help me understand if this is expected behavior, or if there's a workaround to create truly independent split datasets?\r\n\r\nI would appreciate any assistance with this issue. Thank you.\n\n### Environment info\n\nI tried in Colab:\r\n\r\n- `datasets` version: 2.13.0\r\n- Platform: Windows-10-10.0.22621-SP0\r\n- Python version: 3.10.11\r\n- Huggingface_hub version: 0.14.1\r\n- PyArrow version: 12.0.0\r\n- Pandas version: 2.0.1\r\n\r\nand my PC:\r\n\r\n- `datasets` version: 2.13.0\r\n- Platform: Linux-5.15.107+-x86_64-with-glibc2.31\r\n- Python version: 3.10.12\r\n- Huggingface_hub version: 0.15.1\r\n- PyArrow version: 9.0.0\r\n- Pandas version: 1.5.3","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5962\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5962\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false}
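The behavior reported above comes from the indices mapping in `datasets`: `train_test_split` (like `select` and `shuffle`) returns views that select rows from the original Arrow table, so `.data` still exposes the full table for both splits. A short sketch of the symptom and of `Dataset.flatten_indices()` as a workaround to materialize each split into its own table; the toy 5-row dataset is an assumption chosen so that `train_size=0.6` gives the 3/2 split from the report:

```python
from datasets import Dataset

ds = Dataset.from_dict({"id": list(range(5))})
split = ds.train_test_split(train_size=0.6)

# Both splits are views over the same 5-row Arrow table, selected
# through an indices mapping, which is why .data looks identical.
print(split["train"].data.num_rows, split["test"].data.num_rows)  # 5 5

# flatten_indices() materializes each view into its own independent table.
train = split["train"].flatten_indices()
test = split["test"].flatten_indices()
print(train.data.num_rows, test.data.num_rows)  # 3 2
```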
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5961","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5961\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5961\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5961\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5961","id":1758525111,"node_id":"I_kwDODunzps5o0Pa3","number":5961,"title":"IterableDataset: split by node and map may preprocess samples that will be skipped anyway","user":{"login":"johnchienbronci","id":27708347,"node_id":"MDQ6VXNlcjI3NzA4MzQ3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27708347?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/johnchienbronci","html_url":"https:\/\/github.com\/johnchienbronci","followers_url":"https:\/\/api.github.com\/users\/johnchienbronci\/followers","following_url":"https:\/\/api.github.com\/users\/johnchienbronci\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/johnchienbronci\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/johnchienbronci\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/johnchienbronci\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/johnchienbronci\/orgs","repos_url":"https:\/\/api.github.com\/users\/johnchienbronci\/repos","events_url":"https:\/\/api.github.com\/users\/johnchienbronci\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/johnchienbronci\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Does \"number of shards\" refer to the total amount of data?\r\n\r\nMy config:\r\nnproc_per_node=2\r\nds=ds['train'] = load_dataset(streaming=True).take(50000)\r\n\r\nI tested again: in prepare_data(), the data is the same for each GPU\r\n","The number of shards is `ds.n_shards`. It generally corresponds to the number of files the dataset is made of, so that it can be distributed across several nodes.\r\n\r\n**You don't end up with the same data per GPU**. But all the samples go through the preprocessing function you pass to map. They are just skipped afterwards to only keep 1 sample out of n(GPUs)","For each GPU, although it sees the same data in prepare_data(), the actual training data will not be the same in the end. \r\nIs my understanding correct?\r\n\r\nWhere can I print the actual training data for each GPU?","> For each GPU, although it sees the same data in prepare_data(), the actual training data will not be the same in the end.\r\n> Is my understanding correct?\r\n\r\nYes exactly :)\r\n\r\n> Where can I print the actual training data for each GPU?\r\n\r\nYou should call print in the data_collator","I printed out n_shards, and under multiple GPUs, this value is always 1.\r\nIs this value correct?","Yes it's correct, and it explains why you always have the same data passed to your map function (the data can't be split).\r\n\r\nBut after being passed to `map`, each GPU keeps one example out of n(GPUs) so that you don't end up with duplicate data across GPUs","> > For each GPU, although it sees the same data in prepare_data(), the actual training data will not be the same in the end.\r\n> > Is my understanding correct?\r\n> \r\n> Yes exactly :)\r\n> \r\n> > Where can I print the actual training data for each GPU?\r\n> \r\n> You should call print in the data_collator\r\n\r\nOK, when printing the train data in the data collator, each GPU sees different data.\r\n\r\nThanks for your reply"],"created_at":1686824950000,"updated_at":1687224640000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":" There are two ways an iterable dataset can be split by node:\r\n1. if the number of shards is a factor of the number of GPUs: in that case the shards are evenly distributed per GPU\r\n2. otherwise, each GPU iterates over the data and at the end keeps 1 sample out of n(GPUs), skipping the others.\r\n\r\nIn case 2 it's therefore possible to have the same examples passed to `prepare_dataset` for each GPU.\r\n\r\nThis doesn't sound optimal though, because it runs the preprocessing on samples that won't be used in the end.\r\n\r\nCould you open a new issue so that we can discuss this and find a solution?\r\n\r\n_Originally posted by @lhoestq in https:\/\/github.com\/huggingface\/datasets\/issues\/5360#issuecomment-1592729051_\r\n ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5961\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5961\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false}
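A small sketch of case 2 from the issue above, using `datasets.distributed.split_dataset_by_node` on a toy single-shard dataset (an assumption for illustration; with `num_shards` a multiple of `world_size`, case 1 would apply instead and each rank would read only its own shards):

```python
from datasets import Dataset
from datasets.distributed import split_dataset_by_node

# Toy iterable dataset with a single shard (n_shards == 1), i.e. case 2.
ds = Dataset.from_dict({"id": list(range(8))}).to_iterable_dataset(num_shards=1)
ds = ds.map(lambda ex: ex)  # in case 2 this runs on every sample on every rank

world_size = 2
for rank in range(world_size):
    node_ds = split_dataset_by_node(ds, rank=rank, world_size=world_size)
    print(rank, [ex["id"] for ex in node_ds])
# Each rank still iterates over (and preprocesses) all 8 samples, then keeps
# only one out of world_size, e.g. rank 0 -> [0, 2, 4, 6], rank 1 -> [1, 3, 5, 7].
```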
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5959","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5959\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5959\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5959\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5959","id":1757397507,"node_id":"I_kwDODunzps5ov8ID","number":5959,"title":"read metric glue.py from local file ","user":{"login":"JiazhaoLi","id":31148397,"node_id":"MDQ6VXNlcjMxMTQ4Mzk3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/31148397?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/JiazhaoLi","html_url":"https:\/\/github.com\/JiazhaoLi","followers_url":"https:\/\/api.github.com\/users\/JiazhaoLi\/followers","following_url":"https:\/\/api.github.com\/users\/JiazhaoLi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/JiazhaoLi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/JiazhaoLi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/JiazhaoLi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/JiazhaoLi\/orgs","repos_url":"https:\/\/api.github.com\/users\/JiazhaoLi\/repos","events_url":"https:\/\/api.github.com\/users\/JiazhaoLi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/JiazhaoLi\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Sorry, I solved this by calling `evaluate.load('glue_metric.py','sst-2')`\r\n"],"created_at":1686765575000,"updated_at":1686765856000,"closed_at":1686765856000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\r\n\r\nCurrently, the server is offline. I am using the glue metric from the local file downloaded from the hub. \r\nI downloaded \/ cached datasets using `load_dataset('glue','sst2', cache_dir='\/xxx')` to cache them, and then, in offline mode, I use `load_dataset('xxx\/glue.py','sst2', cache_dir='\/xxx')`. I can successfully reuse the cached datasets.\r\n\r\nMy problem is with `load_metric`. 
\r\nWhen I run `load_metric('xxx\/glue_metric.py','sst2',cache_dir='\/xxx')`, it returns \r\n\r\n` File \"xx\/lib64\/python3.9\/site-packages\/datasets\/utils\/deprecation_utils.py\", line 46, in wrapper\r\n return deprecated_function(*args, **kwargs)\r\n File \"xx\/\/lib64\/python3.9\/site-packages\/datasets\/load.py\", line 1392, in load_metric\r\n metric = metric_cls(\r\nTypeError: 'NoneType' object is not callable`\r\n\r\nThanks in advance for the help! \r\n### Steps to reproduce the bug\r\n\r\nN\/A\r\n\r\n### Expected behavior\r\n\r\nN\/A\r\n\r\n### Environment info\r\n\r\n`datasets == 2.12.0`","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5959\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5959\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false}
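A fleshed-out sketch of the resolution from the comment above: load the locally downloaded GLUE metric script through `evaluate` instead of the deprecated `datasets.load_metric`. The `/xxx/glue_metric.py` path mirrors the report's placeholder rather than a real location, and `sst2` is the config name used by the GLUE script:

```python
import os

os.environ["HF_EVALUATE_OFFLINE"] = "1"  # keep evaluate from contacting the Hub

import evaluate  # imported after setting the offline flag

# Load the metric from the local script; "sst2" is the GLUE config name.
metric = evaluate.load("/xxx/glue_metric.py", "sst2")
print(metric.compute(predictions=[0, 1, 1], references=[0, 1, 0]))
```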
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5958","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5958\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5958\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5958\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5958","id":1757265971,"node_id":"PR_kwDODunzps5TA3__","number":5958,"title":"set dev version","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_5958). All of your documentation changes will be reflected on that endpoint.","<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006232 \/ 0.011353 (-0.005121) | 0.003788 \/ 0.011008 (-0.007220) | 0.100014 \/ 0.038508 (0.061506) | 0.036488 \/ 0.023109 (0.013379) | 0.306255 \/ 0.275898 (0.030357) | 0.363337 \/ 0.323480 (0.039857) | 0.004765 \/ 0.007986 (-0.003221) | 0.002935 \/ 0.004328 (-0.001394) | 0.078897 \/ 0.004250 (0.074647) | 0.052221 \/ 0.037052 (0.015169) | 0.315169 \/ 0.258489 (0.056680) | 0.353050 \/ 0.293841 (0.059209) | 0.029059 \/ 0.128546 (-0.099488) | 0.008599 \/ 0.075646 (-0.067047) | 0.318770 \/ 0.419271 (-0.100502) | 0.046631 \/ 0.043533 (0.003098) | 0.303728 \/ 0.255139 (0.048589) | 0.332379 \/ 0.283200 (0.049180) | 0.021164 \/ 0.141683 (-0.120519) | 1.576963 \/ 1.452155 (0.124808) | 1.629575 \/ 1.492716 (0.136859) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.204246 \/ 0.018006 (0.186240) | 0.426600 \/ 0.000490 (0.426110) | 0.004336 \/ 0.000200 (0.004136) | 0.000082 \/ 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.024039 \/ 0.037411 (-0.013372) | 0.098240 \/ 0.014526 (0.083715) | 0.108889 \/ 0.176557 (-0.067668) | 0.170827 \/ 0.737135 (-0.566308) | 0.111288 \/ 0.296338 (-0.185051) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.418103 \/ 0.215209 (0.202894) | 4.190759 \/ 2.077655 (2.113104) | 1.875978 \/ 1.504120 (0.371858) | 1.679198 \/ 1.541195 (0.138003) | 1.737965 \/ 1.468490 (0.269474) | 0.556660 \/ 4.584777 (-4.028117) | 
3.413800 \/ 3.745712 (-0.331912) | 3.004999 \/ 5.269862 (-2.264862) | 1.464030 \/ 4.565676 (-3.101647) | 0.067338 \/ 0.424275 (-0.356937) | 0.011486 \/ 0.007607 (0.003879) | 0.522589 \/ 0.226044 (0.296544) | 5.214653 \/ 2.268929 (2.945724) | 2.316903 \/ 55.444624 (-53.127722) | 1.991941 \/ 6.876477 (-4.884536) | 2.110601 \/ 2.142072 (-0.031471) | 0.665400 \/ 4.805227 (-4.139828) | 0.135755 \/ 6.500664 (-6.364910) | 0.065980 \/ 0.075469 (-0.009489) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.197269 \/ 1.841788 (-0.644519) | 14.085205 \/ 8.074308 (6.010897) | 14.083360 \/ 10.191392 (3.891968) | 0.148054 \/ 0.680424 (-0.532369) | 0.016548 \/ 0.534201 (-0.517653) | 0.371538 \/ 0.579283 (-0.207745) | 0.391068 \/ 0.434364 (-0.043296) | 0.430589 \/ 0.540337 (-0.109748) | 0.529319 \/ 1.386936 (-0.857617) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006214 \/ 0.011353 (-0.005138) | 0.003846 \/ 0.011008 (-0.007162) | 0.078559 \/ 0.038508 (0.040051) | 0.037855 \/ 0.023109 (0.014745) | 0.437479 \/ 0.275898 (0.161581) | 0.497588 \/ 0.323480 (0.174108) | 0.003491 \/ 0.007986 (-0.004494) | 0.003900 \/ 0.004328 (-0.000428) | 0.078443 \/ 0.004250 (0.074193) | 0.048019 \/ 0.037052 (0.010967) | 0.452076 \/ 0.258489 (0.193587) | 0.494597 \/ 0.293841 (0.200756) | 0.028127 \/ 0.128546 (-0.100419) | 0.008549 \/ 0.075646 (-0.067098) | 0.082977 \/ 0.419271 (-0.336295) | 0.043133 \/ 0.043533 (-0.000400) | 0.441342 \/ 0.255139 (0.186203) | 0.464339 \/ 0.283200 (0.181139) | 0.020110 \/ 0.141683 (-0.121573) | 1.485181 \/ 1.452155 (0.033026) | 1.532019 \/ 1.492716 (0.039302) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.228014 \/ 0.018006 (0.210007) | 0.416887 \/ 0.000490 (0.416397) | 0.001133 \/ 0.000200 (0.000933) | 0.000108 \/ 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.026452 \/ 0.037411 (-0.010960) | 0.104328 \/ 0.014526 (0.089802) | 0.110045 \/ 0.176557 (-0.066511) | 0.164725 \/ 0.737135 (-0.572410) | 0.116348 \/ 0.296338 (-0.179990) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.483502 \/ 0.215209 (0.268293) | 4.829814 \/ 2.077655 (2.752159) | 2.505271 \/ 1.504120 (1.001151) | 2.305819 \/ 1.541195 (0.764624) | 2.348633 \/ 1.468490 (0.880143) | 0.562316 \/ 4.584777 (-4.022461) | 
3.426425 \/ 3.745712 (-0.319287) | 1.737934 \/ 5.269862 (-3.531927) | 1.042616 \/ 4.565676 (-3.523061) | 0.068088 \/ 0.424275 (-0.356187) | 0.011735 \/ 0.007607 (0.004128) | 0.586339 \/ 0.226044 (0.360295) | 5.861283 \/ 2.268929 (3.592354) | 2.953956 \/ 55.444624 (-52.490668) | 2.626611 \/ 6.876477 (-4.249865) | 2.687978 \/ 2.142072 (0.545906) | 0.672748 \/ 4.805227 (-4.132479) | 0.137231 \/ 6.500664 (-6.363433) | 0.068149 \/ 0.075469 (-0.007320) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.323139 \/ 1.841788 (-0.518649) | 14.503102 \/ 8.074308 (6.428794) | 14.092102 \/ 10.191392 (3.900710) | 0.165395 \/ 0.680424 (-0.515028) | 0.016898 \/ 0.534201 (-0.517303) | 0.366905 \/ 0.579283 (-0.212378) | 0.396671 \/ 0.434364 (-0.037692) | 0.421831 \/ 0.540337 (-0.118506) | 0.514075 \/ 1.386936 (-0.872861) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#9d4238c132dd44b9a6e1dfe7101228bdeb538d57 \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007778 \/ 0.011353 (-0.003575) | 0.004624 \/ 0.011008 (-0.006384) | 0.123426 \/ 0.038508 (0.084918) | 0.052209 \/ 0.023109 (0.029100) | 0.341084 \/ 0.275898 (0.065186) | 0.421905 \/ 0.323480 (0.098425) | 0.005768 \/ 0.007986 (-0.002217) | 0.003647 \/ 0.004328 (-0.000682) | 0.085569 \/ 0.004250 (0.081319) | 0.070473 \/ 0.037052 (0.033421) | 0.356626 \/ 0.258489 (0.098136) | 0.407413 \/ 0.293841 (0.113572) | 0.038800 \/ 0.128546 (-0.089746) | 0.010289 \/ 0.075646 (-0.065357) | 0.462707 \/ 0.419271 (0.043436) | 0.060390 \/ 0.043533 (0.016858) | 0.349805 \/ 0.255139 (0.094666) | 0.355288 \/ 0.283200 (0.072088) | 0.025364 \/ 0.141683 (-0.116318) | 1.745720 \/ 1.452155 (0.293565) | 1.852764 \/ 1.492716 (0.360048) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.290582 \/ 0.018006 (0.272576) | 0.480044 \/ 0.000490 (0.479554) | 0.007658 \/ 0.000200 (0.007458) | 0.000100 \/ 0.000054 (0.000046) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.031529 \/ 0.037411 (-0.005882) | 0.130441 \/ 0.014526 (0.115915) | 0.147653 \/ 0.176557 (-0.028904) | 0.215935 \/ 0.737135 (-0.521200) | 0.149871 \/ 0.296338 (-0.146467) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.461662 \/ 0.215209 (0.246453) | 4.570353 \/ 2.077655 (2.492698) | 2.104416 \/ 1.504120 (0.600297) | 1.936974 \/ 1.541195 (0.395779) | 2.139167 \/ 1.468490 (0.670677) | 0.645100 \/ 4.584777 (-3.939677) | 
4.361536 \/ 3.745712 (0.615824) | 2.155960 \/ 5.269862 (-3.113902) | 1.207854 \/ 4.565676 (-3.357822) | 0.080162 \/ 0.424275 (-0.344113) | 0.014265 \/ 0.007607 (0.006658) | 0.606294 \/ 0.226044 (0.380250) | 5.928093 \/ 2.268929 (3.659165) | 2.701811 \/ 55.444624 (-52.742813) | 2.344490 \/ 6.876477 (-4.531987) | 2.435997 \/ 2.142072 (0.293925) | 0.761020 \/ 4.805227 (-4.044207) | 0.165860 \/ 6.500664 (-6.334804) | 0.075666 \/ 0.075469 (0.000197) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.427318 \/ 1.841788 (-0.414469) | 17.327468 \/ 8.074308 (9.253160) | 15.323065 \/ 10.191392 (5.131673) | 0.178518 \/ 0.680424 (-0.501905) | 0.020888 \/ 0.534201 (-0.513313) | 0.497891 \/ 0.579283 (-0.081393) | 0.487717 \/ 0.434364 (0.053353) | 0.581430 \/ 0.540337 (0.041093) | 0.703430 \/ 1.386936 (-0.683506) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007954 \/ 0.011353 (-0.003399) | 0.004442 \/ 0.011008 (-0.006566) | 0.090950 \/ 0.038508 (0.052442) | 0.054282 \/ 0.023109 (0.031173) | 0.424474 \/ 0.275898 (0.148576) | 0.531770 \/ 0.323480 (0.208290) | 0.004492 \/ 0.007986 (-0.003493) | 0.004745 \/ 0.004328 (0.000416) | 0.088213 \/ 0.004250 (0.083962) | 0.063967 \/ 0.037052 (0.026914) | 0.454256 \/ 0.258489 (0.195767) | 0.502870 \/ 0.293841 (0.209029) | 0.038203 \/ 0.128546 (-0.090343) | 0.010327 \/ 0.075646 (-0.065319) | 0.097809 \/ 0.419271 (-0.321463) | 0.062136 \/ 0.043533 (0.018604) | 0.426148 \/ 0.255139 (0.171009) | 0.467812 \/ 0.283200 (0.184612) | 0.029148 \/ 0.141683 (-0.112535) | 1.762307 \/ 1.452155 (0.310152) | 1.814238 \/ 1.492716 (0.321521) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.195676 \/ 0.018006 (0.177670) | 0.475382 \/ 0.000490 (0.474892) | 0.003070 \/ 0.000200 (0.002870) | 0.000112 \/ 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.033945 \/ 0.037411 (-0.003466) | 0.134666 \/ 0.014526 (0.120140) | 0.147585 \/ 0.176557 (-0.028971) | 0.209472 \/ 0.737135 (-0.527664) | 0.154471 \/ 0.296338 (-0.141867) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.518132 \/ 0.215209 (0.302923) | 5.103423 \/ 2.077655 (3.025768) | 2.565207 \/ 1.504120 (1.061087) | 2.389454 \/ 1.541195 (0.848259) | 2.391706 \/ 1.468490 (0.923216) | 0.606463 \/ 4.584777 (-3.978314) | 
4.392227 \/ 3.745712 (0.646515) | 2.067121 \/ 5.269862 (-3.202741) | 1.217551 \/ 4.565676 (-3.348125) | 0.074304 \/ 0.424275 (-0.349971) | 0.013418 \/ 0.007607 (0.005811) | 0.623327 \/ 0.226044 (0.397282) | 6.340233 \/ 2.268929 (4.071304) | 3.153948 \/ 55.444624 (-52.290677) | 2.824548 \/ 6.876477 (-4.051929) | 2.938402 \/ 2.142072 (0.796329) | 0.774305 \/ 4.805227 (-4.030922) | 0.170681 \/ 6.500664 (-6.329983) | 0.075895 \/ 0.075469 (0.000426) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.473491 \/ 1.841788 (-0.368296) | 17.372294 \/ 8.074308 (9.297986) | 15.550201 \/ 10.191392 (5.358809) | 0.191402 \/ 0.680424 (-0.489022) | 0.021401 \/ 0.534201 (-0.512800) | 0.484377 \/ 0.579283 (-0.094906) | 0.488844 \/ 0.434364 (0.054480) | 0.563336 \/ 0.540337 (0.022999) | 0.694210 \/ 1.386936 (-0.692726) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#b96da7f51d81e52d7b587685f820b5e55f71e07d \"CML watermark\")\n"],"created_at":1686759994000,"updated_at":1686760495000,"closed_at":1686760011000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5958","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5958","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5958.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5958.patch","merged_at":1686760011000},"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5958\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5958\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5957","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5957\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5957\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5957\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5957","id":1757252466,"node_id":"PR_kwDODunzps5TA1EB","number":5957,"title":"Release: 
2.13.0","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006498 \/ 0.011353 (-0.004855) | 0.003970 \/ 0.011008 (-0.007038) | 0.099242 \/ 0.038508 (0.060734) | 0.044363 \/ 0.023109 (0.021254) | 0.313900 \/ 0.275898 (0.038002) | 0.386562 \/ 0.323480 (0.063082) | 0.003837 \/ 0.007986 (-0.004149) | 0.004203 \/ 0.004328 (-0.000125) | 0.076191 \/ 0.004250 (0.071940) | 0.058823 \/ 0.037052 (0.021771) | 0.333838 \/ 0.258489 (0.075349) | 0.368235 \/ 0.293841 (0.074394) | 0.030774 \/ 0.128546 (-0.097772) | 0.008787 \/ 0.075646 (-0.066860) | 0.326474 \/ 0.419271 (-0.092798) | 0.050903 \/ 0.043533 (0.007370) | 0.303928 \/ 0.255139 (0.048789) | 0.321532 \/ 0.283200 (0.038333) | 0.024162 \/ 0.141683 (-0.117520) | 1.479662 \/ 1.452155 (0.027507) | 1.520300 \/ 1.492716 (0.027584) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.212403 \/ 0.018006 (0.194397) | 0.448019 \/ 0.000490 (0.447529) | 0.005465 \/ 0.000200 (0.005265) | 0.000388 \/ 0.000054 (0.000334) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.027533 \/ 0.037411 (-0.009878) | 0.117477 \/ 0.014526 (0.102952) | 0.121182 \/ 0.176557 (-0.055374) | 0.181150 \/ 0.737135 (-0.555985) | 0.128557 \/ 0.296338 (-0.167782) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.397763 \/ 0.215209 (0.182554) | 3.959460 \/ 2.077655 (1.881805) | 1.822057 \/ 1.504120 (0.317937) | 1.627020 \/ 1.541195 (0.085826) | 1.695394 \/ 1.468490 (0.226904) | 0.536848 \/ 4.584777 (-4.047929) | 
3.765205 \/ 3.745712 (0.019493) | 3.196300 \/ 5.269862 (-2.073561) | 1.623583 \/ 4.565676 (-2.942094) | 0.065823 \/ 0.424275 (-0.358452) | 0.011062 \/ 0.007607 (0.003455) | 0.500428 \/ 0.226044 (0.274384) | 5.008816 \/ 2.268929 (2.739888) | 2.314660 \/ 55.444624 (-53.129965) | 2.007429 \/ 6.876477 (-4.869047) | 2.141438 \/ 2.142072 (-0.000635) | 0.656697 \/ 4.805227 (-4.148530) | 0.143555 \/ 6.500664 (-6.357109) | 0.063928 \/ 0.075469 (-0.011541) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.169038 \/ 1.841788 (-0.672750) | 15.027186 \/ 8.074308 (6.952878) | 13.571484 \/ 10.191392 (3.380092) | 0.166437 \/ 0.680424 (-0.513986) | 0.017656 \/ 0.534201 (-0.516545) | 0.397725 \/ 0.579283 (-0.181558) | 0.451019 \/ 0.434364 (0.016655) | 0.469134 \/ 0.540337 (-0.071203) | 0.575885 \/ 1.386936 (-0.811051) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006887 \/ 0.011353 (-0.004465) | 0.004166 \/ 0.011008 (-0.006842) | 0.077137 \/ 0.038508 (0.038629) | 0.055631 \/ 0.023109 (0.032522) | 0.397658 \/ 0.275898 (0.121760) | 0.473981 \/ 0.323480 (0.150502) | 0.005365 \/ 0.007986 (-0.002621) | 0.003401 \/ 0.004328 (-0.000928) | 0.076481 \/ 0.004250 (0.072231) | 0.056014 \/ 0.037052 (0.018961) | 0.415253 \/ 0.258489 (0.156764) | 0.457620 \/ 0.293841 (0.163779) | 0.031850 \/ 0.128546 (-0.096696) | 0.008869 \/ 0.075646 (-0.066777) | 0.083475 \/ 0.419271 (-0.335796) | 0.049232 \/ 0.043533 (0.005699) | 0.392947 \/ 0.255139 (0.137808) | 0.417243 \/ 0.283200 (0.134043) | 0.024554 \/ 0.141683 (-0.117129) | 1.508081 \/ 1.452155 (0.055926) | 1.541845 \/ 1.492716 (0.049129) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.228470 \/ 0.018006 (0.210464) | 0.450933 \/ 0.000490 (0.450443) | 0.001508 \/ 0.000200 (0.001308) | 0.000083 \/ 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030189 \/ 0.037411 (-0.007222) | 0.118853 \/ 0.014526 (0.104327) | 0.124809 \/ 0.176557 (-0.051747) | 0.175066 \/ 0.737135 (-0.562069) | 0.129819 \/ 0.296338 (-0.166519) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.451830 \/ 0.215209 (0.236621) | 4.505352 \/ 2.077655 (2.427698) | 2.309303 \/ 1.504120 (0.805183) | 2.120983 \/ 1.541195 (0.579789) | 2.198808 \/ 1.468490 (0.730317) | 0.543836 \/ 4.584777 (-4.040940) | 
3.836650 \/ 3.745712 (0.090938) | 1.872293 \/ 5.269862 (-3.397568) | 1.122335 \/ 4.565676 (-3.443342) | 0.067463 \/ 0.424275 (-0.356812) | 0.012143 \/ 0.007607 (0.004536) | 0.553674 \/ 0.226044 (0.327630) | 5.572101 \/ 2.268929 (3.303173) | 2.772151 \/ 55.444624 (-52.672473) | 2.451557 \/ 6.876477 (-4.424920) | 2.521241 \/ 2.142072 (0.379169) | 0.665799 \/ 4.805227 (-4.139428) | 0.143842 \/ 6.500664 (-6.356822) | 0.065373 \/ 0.075469 (-0.010096) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.271013 \/ 1.841788 (-0.570775) | 15.290054 \/ 8.074308 (7.215746) | 14.807044 \/ 10.191392 (4.615652) | 0.163767 \/ 0.680424 (-0.516657) | 0.017383 \/ 0.534201 (-0.516818) | 0.393046 \/ 0.579283 (-0.186237) | 0.423056 \/ 0.434364 (-0.011308) | 0.459193 \/ 0.540337 (-0.081145) | 0.559964 \/ 1.386936 (-0.826972) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#011b75f044ef7fa6b8981ef3496615296aeb315b \"CML watermark\")\n","_The documentation is not available anymore as the PR was closed or merged._","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006112 \/ 0.011353 (-0.005241) | 0.003712 \/ 0.011008 (-0.007297) | 0.099996 \/ 0.038508 (0.061488) | 0.037526 \/ 0.023109 (0.014417) | 0.305834 \/ 0.275898 (0.029936) | 0.361368 \/ 0.323480 (0.037888) | 0.004849 \/ 0.007986 (-0.003136) | 0.002912 \/ 0.004328 (-0.001417) | 0.077729 \/ 0.004250 (0.073479) | 0.053203 \/ 0.037052 (0.016151) | 0.318088 \/ 0.258489 (0.059599) | 0.371745 \/ 0.293841 (0.077904) | 0.029384 \/ 0.128546 (-0.099162) | 0.008504 \/ 0.075646 (-0.067142) | 0.318472 \/ 0.419271 (-0.100799) | 0.046043 \/ 0.043533 (0.002510) | 0.310418 \/ 0.255139 (0.055279) | 0.335044 \/ 0.283200 (0.051844) | 0.020364 \/ 0.141683 (-0.121319) | 1.503201 \/ 1.452155 (0.051047) | 1.556408 \/ 1.492716 (0.063692) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.210245 \/ 0.018006 (0.192239) | 0.418918 \/ 0.000490 (0.418428) | 0.002552 \/ 0.000200 (0.002352) | 0.000084 \/ 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.022295 \/ 0.037411 (-0.015116) | 0.099534 \/ 0.014526 (0.085008) | 0.106432 \/ 0.176557 (-0.070124) | 0.165110 \/ 0.737135 (-0.572026) | 0.109851 \/ 0.296338 (-0.186488) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.423947 \/ 0.215209 (0.208738) | 4.232978 \/ 2.077655 (2.155323) | 2.004849 \/ 1.504120 (0.500729) | 1.814345 \/ 1.541195 (0.273151) | 1.809192 \/ 1.468490 (0.340702) | 0.561146 \/ 4.584777 (-4.023631) | 
3.385043 \/ 3.745712 (-0.360669) | 1.708265 \/ 5.269862 (-3.561597) | 1.030290 \/ 4.565676 (-3.535387) | 0.067095 \/ 0.424275 (-0.357180) | 0.011052 \/ 0.007607 (0.003445) | 0.522416 \/ 0.226044 (0.296371) | 5.207003 \/ 2.268929 (2.938075) | 2.367067 \/ 55.444624 (-53.077558) | 1.998705 \/ 6.876477 (-4.877772) | 2.068633 \/ 2.142072 (-0.073439) | 0.672396 \/ 4.805227 (-4.132831) | 0.135818 \/ 6.500664 (-6.364846) | 0.065229 \/ 0.075469 (-0.010240) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.187079 \/ 1.841788 (-0.654709) | 13.893153 \/ 8.074308 (5.818845) | 13.951328 \/ 10.191392 (3.759936) | 0.142519 \/ 0.680424 (-0.537905) | 0.016546 \/ 0.534201 (-0.517655) | 0.364008 \/ 0.579283 (-0.215275) | 0.385957 \/ 0.434364 (-0.048407) | 0.425218 \/ 0.540337 (-0.115120) | 0.519586 \/ 1.386936 (-0.867350) |\n\n<\/details>\nPyArrow==latest\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.005914 \/ 0.011353 (-0.005439) | 0.003619 \/ 0.011008 (-0.007389) | 0.077806 \/ 0.038508 (0.039298) | 0.037254 \/ 0.023109 (0.014144) | 0.378976 \/ 0.275898 (0.103078) | 0.433620 \/ 0.323480 (0.110140) | 0.003291 \/ 0.007986 (-0.004694) | 0.004523 \/ 0.004328 (0.000194) | 0.077604 \/ 0.004250 (0.073353) | 0.047493 \/ 0.037052 (0.010441) | 0.396027 \/ 0.258489 (0.137538) | 0.453345 \/ 0.293841 (0.159504) | 0.028170 \/ 0.128546 (-0.100376) | 0.008431 \/ 0.075646 (-0.067215) | 0.083985 \/ 0.419271 (-0.335286) | 0.045149 \/ 0.043533 (0.001617) | 0.369364 \/ 0.255139 (0.114225) | 0.407191 \/ 0.283200 (0.123991) | 0.024033 \/ 0.141683 (-0.117649) | 1.516838 \/ 1.452155 (0.064683) | 1.564260 \/ 1.492716 (0.071544) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.200848 \/ 0.018006 (0.182842) | 0.407818 \/ 0.000490 (0.407328) | 0.003971 \/ 0.000200 (0.003771) | 0.000077 \/ 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025033 \/ 0.037411 (-0.012378) | 0.103585 \/ 0.014526 (0.089059) | 0.108741 \/ 0.176557 (-0.067816) | 0.161061 \/ 0.737135 (-0.576075) | 0.112763 \/ 0.296338 (-0.183576) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.479913 \/ 0.215209 (0.264704) | 4.801904 \/ 2.077655 (2.724249) | 2.511433 \/ 1.504120 (1.007313) | 2.307523 \/ 1.541195 (0.766328) | 2.338343 \/ 1.468490 (0.869853) | 0.557731 \/ 4.584777 (-4.027046) | 
3.386261 \/ 3.745712 (-0.359451) | 2.999978 \/ 5.269862 (-2.269883) | 1.463058 \/ 4.565676 (-3.102619) | 0.067645 \/ 0.424275 (-0.356630) | 0.011224 \/ 0.007607 (0.003617) | 0.596854 \/ 0.226044 (0.370810) | 5.940946 \/ 2.268929 (3.672017) | 2.980194 \/ 55.444624 (-52.464430) | 2.634961 \/ 6.876477 (-4.241516) | 2.648160 \/ 2.142072 (0.506088) | 0.669728 \/ 4.805227 (-4.135499) | 0.135536 \/ 6.500664 (-6.365128) | 0.066865 \/ 0.075469 (-0.008604) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.287151 \/ 1.841788 (-0.554637) | 14.491681 \/ 8.074308 (6.417373) | 14.185752 \/ 10.191392 (3.994360) | 0.129391 \/ 0.680424 (-0.551032) | 0.016650 \/ 0.534201 (-0.517551) | 0.380111 \/ 0.579283 (-0.199172) | 0.392877 \/ 0.434364 (-0.041487) | 0.439402 \/ 0.540337 (-0.100935) | 0.530865 \/ 1.386936 (-0.856071) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#9aaee6fd0b2bcbe18e4829602084bcd83d669c5e \"CML watermark\")\n","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.011446 \/ 0.011353 (0.000093) | 0.006623 \/ 0.011008 (-0.004386) | 0.131915 \/ 0.038508 (0.093407) | 0.047364 \/ 0.023109 (0.024255) | 0.369203 \/ 0.275898 (0.093305) | 0.451509 \/ 0.323480 (0.128029) | 0.006265 \/ 0.007986 (-0.001720) | 0.004072 \/ 0.004328 (-0.000257) | 0.098626 \/ 0.004250 (0.094375) | 0.079523 \/ 0.037052 (0.042470) | 0.406038 \/ 0.258489 (0.147549) | 0.450564 \/ 0.293841 (0.156723) | 0.050793 \/ 0.128546 (-0.077753) | 0.014667 \/ 0.075646 (-0.060979) | 0.401359 \/ 0.419271 (-0.017913) | 0.072299 \/ 0.043533 (0.028767) | 0.404456 \/ 0.255139 (0.149317) | 0.396223 \/ 0.283200 (0.113023) | 0.037048 \/ 0.141683 (-0.104635) | 1.869123 \/ 1.452155 (0.416968) | 1.953621 \/ 1.492716 (0.460905) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.237246 \/ 0.018006 (0.219240) | 0.533207 \/ 0.000490 (0.532717) | 0.007392 \/ 0.000200 (0.007192) | 0.000117 \/ 0.000054 (0.000062) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029458 \/ 0.037411 (-0.007954) | 0.112438 \/ 0.014526 (0.097912) | 0.139115 \/ 0.176557 (-0.037441) | 0.215225 \/ 0.737135 (-0.521911) | 0.134440 \/ 0.296338 (-0.161898) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.616783 \/ 0.215209 (0.401574) | 6.113925 \/ 2.077655 (4.036270) | 2.403465 \/ 1.504120 (0.899345) | 1.967523 \/ 1.541195 (0.426329) | 2.042144 \/ 1.468490 (0.573654) | 0.927447 \/ 4.584777 (-3.657330) | 
5.280413 \/ 3.745712 (1.534701) | 2.715335 \/ 5.269862 (-2.554527) | 1.755640 \/ 4.565676 (-2.810036) | 0.114370 \/ 0.424275 (-0.309905) | 0.013583 \/ 0.007607 (0.005976) | 0.761701 \/ 0.226044 (0.535657) | 7.466049 \/ 2.268929 (5.197120) | 3.041943 \/ 55.444624 (-52.402682) | 2.314477 \/ 6.876477 (-4.562000) | 2.469285 \/ 2.142072 (0.327213) | 1.216055 \/ 4.805227 (-3.589172) | 0.214205 \/ 6.500664 (-6.286459) | 0.080901 \/ 0.075469 (0.005432) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.565185 \/ 1.841788 (-0.276603) | 18.387986 \/ 8.074308 (10.313678) | 19.665109 \/ 10.191392 (9.473717) | 0.226670 \/ 0.680424 (-0.453754) | 0.028430 \/ 0.534201 (-0.505771) | 0.510526 \/ 0.579283 (-0.068757) | 0.623178 \/ 0.434364 (0.188814) | 0.592039 \/ 0.540337 (0.051702) | 0.728462 \/ 1.386936 (-0.658474) |\n\n<\/details>\nPyArrow==latest\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009161 \/ 0.011353 (-0.002192) | 0.004891 \/ 0.011008 (-0.006117) | 0.106502 \/ 0.038508 (0.067994) | 0.048234 \/ 0.023109 (0.025125) | 0.451173 \/ 0.275898 (0.175275) | 0.557948 \/ 0.323480 (0.234468) | 0.005350 \/ 0.007986 (-0.002635) | 0.004559 \/ 0.004328 (0.000230) | 0.110393 \/ 0.004250 (0.106142) | 0.060624 \/ 0.037052 (0.023572) | 0.459265 \/ 0.258489 (0.200776) | 0.575302 \/ 0.293841 (0.281461) | 0.051379 \/ 0.128546 (-0.077167) | 0.015576 \/ 0.075646 (-0.060070) | 0.116650 \/ 0.419271 (-0.302621) | 0.065534 \/ 0.043533 (0.022001) | 0.461431 \/ 0.255139 (0.206292) | 0.487677 \/ 0.283200 (0.204477) | 0.037773 \/ 0.141683 (-0.103910) | 1.992416 \/ 1.452155 (0.540261) | 1.991280 \/ 1.492716 (0.498564) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.233607 \/ 0.018006 (0.215601) | 0.507539 \/ 0.000490 (0.507049) | 0.001307 \/ 0.000200 (0.001107) | 0.000096 \/ 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.032897 \/ 0.037411 (-0.004514) | 0.126549 \/ 0.014526 (0.112023) | 0.137893 \/ 0.176557 (-0.038663) | 0.192124 \/ 0.737135 (-0.545012) | 0.147300 \/ 0.296338 (-0.149038) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.679371 \/ 0.215209 (0.464162) | 6.673249 \/ 2.077655 (4.595595) | 2.979141 \/ 1.504120 (1.475022) | 2.568789 \/ 1.541195 (1.027594) | 2.537540 \/ 1.468490 (1.069050) | 0.973555 \/ 4.584777 (-3.611222) | 
5.313536 \/ 3.745712 (1.567824) | 2.693283 \/ 5.269862 (-2.576579) | 1.819483 \/ 4.565676 (-2.746194) | 0.111644 \/ 0.424275 (-0.312631) | 0.013218 \/ 0.007607 (0.005611) | 0.776114 \/ 0.226044 (0.550070) | 7.758907 \/ 2.268929 (5.489978) | 3.417611 \/ 55.444624 (-52.027013) | 2.859502 \/ 6.876477 (-4.016975) | 2.927726 \/ 2.142072 (0.785653) | 1.163671 \/ 4.805227 (-3.641556) | 0.228636 \/ 6.500664 (-6.272028) | 0.082077 \/ 0.075469 (0.006607) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.746150 \/ 1.841788 (-0.095637) | 17.961955 \/ 8.074308 (9.887647) | 21.590545 \/ 10.191392 (11.399153) | 0.210017 \/ 0.680424 (-0.470406) | 0.028435 \/ 0.534201 (-0.505766) | 0.509253 \/ 0.579283 (-0.070030) | 0.606993 \/ 0.434364 (0.172629) | 0.587189 \/ 0.540337 (0.046851) | 0.684023 \/ 1.386936 (-0.702913) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#9aaee6fd0b2bcbe18e4829602084bcd83d669c5e \"CML watermark\")\n"],"created_at":1686759446000,"updated_at":1686760419000,"closed_at":1686759879000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5957","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5957","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5957.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5957.patch","merged_at":1686759879000},"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5957\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5957\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5956","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5956\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5956\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5956\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5956","id":1756959367,"node_id":"PR_kwDODunzps5S_1o2","number":5956,"title":"Fix 
ArrowExamplesIterable.shard_data_sources","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.005893 \/ 0.011353 (-0.005460) | 0.003682 \/ 0.011008 (-0.007327) | 0.098358 \/ 0.038508 (0.059850) | 0.028130 \/ 0.023109 (0.005020) | 0.305960 \/ 0.275898 (0.030062) | 0.334869 \/ 0.323480 (0.011390) | 0.003522 \/ 0.007986 (-0.004463) | 0.003683 \/ 0.004328 (-0.000645) | 0.079418 \/ 0.004250 (0.075168) | 0.037662 \/ 0.037052 (0.000609) | 0.310893 \/ 0.258489 (0.052404) | 0.341347 \/ 0.293841 (0.047506) | 0.027450 \/ 0.128546 (-0.101096) | 0.008381 \/ 0.075646 (-0.067265) | 0.316020 \/ 0.419271 (-0.103252) | 0.045079 \/ 0.043533 (0.001546) | 0.307806 \/ 0.255139 (0.052667) | 0.331804 \/ 0.283200 (0.048604) | 0.091806 \/ 0.141683 (-0.049877) | 1.492611 \/ 1.452155 (0.040457) | 1.551762 \/ 1.492716 (0.059046) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.201640 \/ 0.018006 (0.183634) | 0.422776 \/ 0.000490 (0.422286) | 0.003734 \/ 0.000200 (0.003535) | 0.000080 \/ 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025429 \/ 0.037411 (-0.011982) | 0.104699 \/ 0.014526 (0.090173) | 0.110505 \/ 0.176557 (-0.066051) | 0.171252 \/ 0.737135 (-0.565883) | 0.113131 \/ 0.296338 (-0.183208) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.419914 \/ 0.215209 (0.204705) | 4.184414 \/ 2.077655 (2.106760) | 1.999263 \/ 1.504120 (0.495143) | 1.828669 \/ 1.541195 (0.287474) | 1.940366 \/ 1.468490 (0.471876) | 0.556939 \/ 4.584777 (-4.027838) | 
3.389164 \/ 3.745712 (-0.356548) | 1.796323 \/ 5.269862 (-3.473538) | 1.048843 \/ 4.565676 (-3.516833) | 0.067315 \/ 0.424275 (-0.356960) | 0.011531 \/ 0.007607 (0.003923) | 0.517226 \/ 0.226044 (0.291182) | 5.167255 \/ 2.268929 (2.898326) | 2.431129 \/ 55.444624 (-53.013495) | 2.133913 \/ 6.876477 (-4.742564) | 2.359021 \/ 2.142072 (0.216948) | 0.666390 \/ 4.805227 (-4.138838) | 0.135147 \/ 6.500664 (-6.365517) | 0.064855 \/ 0.075469 (-0.010614) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.166530 \/ 1.841788 (-0.675258) | 14.060551 \/ 8.074308 (5.986242) | 14.171663 \/ 10.191392 (3.980271) | 0.285821 \/ 0.680424 (-0.394603) | 0.016867 \/ 0.534201 (-0.517334) | 0.369102 \/ 0.579283 (-0.210181) | 0.393580 \/ 0.434364 (-0.040784) | 0.423721 \/ 0.540337 (-0.116616) | 0.512559 \/ 1.386936 (-0.874377) |\n\n<\/details>\nPyArrow==latest\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006674 \/ 0.011353 (-0.004679) | 0.004006 \/ 0.011008 (-0.007002) | 0.080160 \/ 0.038508 (0.041652) | 0.032508 \/ 0.023109 (0.009399) | 0.378168 \/ 0.275898 (0.102270) | 0.417796 \/ 0.323480 (0.094316) | 0.003706 \/ 0.007986 (-0.004280) | 0.002995 \/ 0.004328 (-0.001333) | 0.079275 \/ 0.004250 (0.075025) | 0.043690 \/ 0.037052 (0.006638) | 0.377717 \/ 0.258489 (0.119228) | 0.439801 \/ 0.293841 (0.145961) | 0.028438 \/ 0.128546 (-0.100108) | 0.008661 \/ 0.075646 (-0.066985) | 0.085280 \/ 0.419271 (-0.333991) | 0.043716 \/ 0.043533 (0.000183) | 0.370086 \/ 0.255139 (0.114947) | 0.403763 \/ 0.283200 (0.120563) | 0.095022 \/ 0.141683 (-0.046661) | 1.534376 \/ 1.452155 (0.082221) | 1.597658 \/ 1.492716 (0.104942) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.240229 \/ 0.018006 (0.222223) | 0.496281 \/ 0.000490 (0.495792) | 0.002165 \/ 0.000200 (0.001965) | 0.000075 \/ 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025330 \/ 0.037411 (-0.012081) | 0.102414 \/ 0.014526 (0.087888) | 0.112733 \/ 0.176557 (-0.063824) | 0.161181 \/ 0.737135 (-0.575955) | 0.114196 \/ 0.296338 (-0.182143) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.456808 \/ 0.215209 (0.241599) | 4.534937 \/ 2.077655 (2.457283) | 2.318834 \/ 1.504120 (0.814714) | 2.074085 \/ 1.541195 (0.532890) | 2.117409 \/ 1.468490 (0.648919) | 0.559110 \/ 4.584777 (-4.025667) | 
3.371695 \/ 3.745712 (-0.374017) | 2.543154 \/ 5.269862 (-2.726708) | 1.360552 \/ 4.565676 (-3.205125) | 0.067602 \/ 0.424275 (-0.356674) | 0.011396 \/ 0.007607 (0.003789) | 0.561666 \/ 0.226044 (0.335622) | 5.607666 \/ 2.268929 (3.338737) | 2.802775 \/ 55.444624 (-52.641849) | 2.486162 \/ 6.876477 (-4.390315) | 2.390885 \/ 2.142072 (0.248813) | 0.667407 \/ 4.805227 (-4.137820) | 0.135948 \/ 6.500664 (-6.364717) | 0.067272 \/ 0.075469 (-0.008197) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.279664 \/ 1.841788 (-0.562124) | 15.188099 \/ 8.074308 (7.113791) | 14.380355 \/ 10.191392 (4.188963) | 0.140344 \/ 0.680424 (-0.540080) | 0.016832 \/ 0.534201 (-0.517369) | 0.364631 \/ 0.579283 (-0.214652) | 0.400306 \/ 0.434364 (-0.034058) | 0.430793 \/ 0.540337 (-0.109545) | 0.525923 \/ 1.386936 (-0.861013) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#48ca19cf1f4d1c99765a1f847c1f6b849496d99d \"CML watermark\")\n","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008502 \/ 0.011353 (-0.002851) | 0.005946 \/ 0.011008 (-0.005062) | 0.131279 \/ 0.038508 (0.092771) | 0.035400 \/ 0.023109 (0.012291) | 0.423240 \/ 0.275898 (0.147342) | 0.470248 \/ 0.323480 (0.146768) | 0.004949 \/ 0.007986 (-0.003037) | 0.004544 \/ 0.004328 (0.000215) | 0.106856 \/ 0.004250 (0.102605) | 0.046579 \/ 0.037052 (0.009527) | 0.441135 \/ 0.258489 (0.182646) | 0.470401 \/ 0.293841 (0.176561) | 0.047231 \/ 0.128546 (-0.081315) | 0.017278 \/ 0.075646 (-0.058368) | 0.401937 \/ 0.419271 (-0.017335) | 0.067151 \/ 0.043533 (0.023619) | 0.453908 \/ 0.255139 (0.198769) | 0.422171 \/ 0.283200 (0.138971) | 0.123583 \/ 0.141683 (-0.018100) | 1.852895 \/ 1.452155 (0.400740) | 1.827282 \/ 1.492716 (0.334566) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.246419 \/ 0.018006 (0.228413) | 0.576930 \/ 0.000490 (0.576440) | 0.007511 \/ 0.000200 (0.007312) | 0.000165 \/ 0.000054 (0.000111) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.032732 \/ 0.037411 (-0.004680) | 0.130266 \/ 0.014526 (0.115740) | 0.150537 \/ 0.176557 (-0.026019) | 0.218554 \/ 0.737135 (-0.518582) | 0.148572 \/ 0.296338 (-0.147766) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.598611 \/ 0.215209 (0.383402) | 6.181219 \/ 2.077655 (4.103564) | 2.473468 \/ 1.504120 (0.969348) | 2.206374 \/ 1.541195 (0.665179) | 2.216707 \/ 1.468490 (0.748217) | 0.981295 \/ 4.584777 (-3.603482) | 
5.716384 \/ 3.745712 (1.970672) | 5.882327 \/ 5.269862 (0.612466) | 2.761081 \/ 4.565676 (-1.804595) | 0.113544 \/ 0.424275 (-0.310731) | 0.015131 \/ 0.007607 (0.007524) | 0.850939 \/ 0.226044 (0.624894) | 8.046611 \/ 2.268929 (5.777682) | 3.340542 \/ 55.444624 (-52.104083) | 2.673692 \/ 6.876477 (-4.202785) | 2.926330 \/ 2.142072 (0.784257) | 1.176164 \/ 4.805227 (-3.629064) | 0.226745 \/ 6.500664 (-6.273919) | 0.085910 \/ 0.075469 (0.010441) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.483792 \/ 1.841788 (-0.357995) | 18.895009 \/ 8.074308 (10.820701) | 20.982461 \/ 10.191392 (10.791069) | 0.253085 \/ 0.680424 (-0.427339) | 0.031284 \/ 0.534201 (-0.502917) | 0.516569 \/ 0.579283 (-0.062714) | 0.635781 \/ 0.434364 (0.201417) | 0.604359 \/ 0.540337 (0.064022) | 0.725278 \/ 1.386936 (-0.661658) |\n\n<\/details>\nPyArrow==latest\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009220 \/ 0.011353 (-0.002133) | 0.005792 \/ 0.011008 (-0.005216) | 0.099795 \/ 0.038508 (0.061287) | 0.033812 \/ 0.023109 (0.010703) | 0.459386 \/ 0.275898 (0.183488) | 0.518067 \/ 0.323480 (0.194587) | 0.005083 \/ 0.007986 (-0.002902) | 0.004145 \/ 0.004328 (-0.000183) | 0.103506 \/ 0.004250 (0.099255) | 0.050429 \/ 0.037052 (0.013377) | 0.478149 \/ 0.258489 (0.219660) | 0.531280 \/ 0.293841 (0.237440) | 0.047373 \/ 0.128546 (-0.081173) | 0.013647 \/ 0.075646 (-0.061999) | 0.115174 \/ 0.419271 (-0.304098) | 0.061099 \/ 0.043533 (0.017566) | 0.455002 \/ 0.255139 (0.199863) | 0.507765 \/ 0.283200 (0.224565) | 0.112219 \/ 0.141683 (-0.029464) | 1.873591 \/ 1.452155 (0.421436) | 1.952061 \/ 1.492716 (0.459345) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.283587 \/ 0.018006 (0.265581) | 0.587562 \/ 0.000490 (0.587073) | 0.001252 \/ 0.000200 (0.001052) | 0.000095 \/ 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.032706 \/ 0.037411 (-0.004705) | 0.137715 \/ 0.014526 (0.123189) | 0.131932 \/ 0.176557 (-0.044625) | 0.200042 \/ 0.737135 (-0.537094) | 0.159327 \/ 0.296338 (-0.137011) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.624061 \/ 0.215209 (0.408852) | 6.386235 \/ 2.077655 (4.308580) | 2.908786 \/ 1.504120 (1.404666) | 2.589855 \/ 1.541195 (1.048660) | 2.387988 \/ 1.468490 (0.919498) | 0.952625 \/ 4.584777 (-3.632152) | 
5.571641 \/ 3.745712 (1.825929) | 2.711154 \/ 5.269862 (-2.558708) | 1.788015 \/ 4.565676 (-2.777662) | 0.104488 \/ 0.424275 (-0.319787) | 0.015213 \/ 0.007607 (0.007606) | 0.798446 \/ 0.226044 (0.572401) | 8.011614 \/ 2.268929 (5.742686) | 3.711951 \/ 55.444624 (-51.732673) | 2.896881 \/ 6.876477 (-3.979595) | 3.172116 \/ 2.142072 (1.030043) | 1.136816 \/ 4.805227 (-3.668411) | 0.239254 \/ 6.500664 (-6.261410) | 0.081136 \/ 0.075469 (0.005667) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.798246 \/ 1.841788 (-0.043542) | 19.497108 \/ 8.074308 (11.422800) | 23.450258 \/ 10.191392 (13.258866) | 0.250021 \/ 0.680424 (-0.430403) | 0.029138 \/ 0.534201 (-0.505063) | 0.532984 \/ 0.579283 (-0.046299) | 0.638161 \/ 0.434364 (0.203797) | 0.615720 \/ 0.540337 (0.075382) | 0.770621 \/ 1.386936 (-0.616315) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#7d8345c5f8a844ff44cfbb30cbda514ffe89bfd7 \"CML watermark\")\n","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009120 \/ 0.011353 (-0.002233) | 0.005381 \/ 0.011008 (-0.005627) | 0.139719 \/ 0.038508 (0.101211) | 0.037229 \/ 0.023109 (0.014120) | 0.414633 \/ 0.275898 (0.138734) | 0.480313 \/ 0.323480 (0.156833) | 0.005027 \/ 0.007986 (-0.002959) | 0.005015 \/ 0.004328 (0.000687) | 0.108513 \/ 0.004250 (0.104263) | 0.056167 \/ 0.037052 (0.019115) | 0.407588 \/ 0.258489 (0.149099) | 0.518899 \/ 0.293841 (0.225058) | 0.048857 \/ 0.128546 (-0.079689) | 0.013694 \/ 0.075646 (-0.061952) | 0.418035 \/ 0.419271 (-0.001237) | 0.067755 \/ 0.043533 (0.024222) | 0.417740 \/ 0.255139 (0.162601) | 0.478622 \/ 0.283200 (0.195422) | 0.118290 \/ 0.141683 (-0.023393) | 1.901473 \/ 1.452155 (0.449319) | 1.978126 \/ 1.492716 (0.485409) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.271960 \/ 0.018006 (0.253954) | 0.602745 \/ 0.000490 (0.602255) | 0.005371 \/ 0.000200 (0.005171) | 0.000102 \/ 0.000054 (0.000048) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029620 \/ 0.037411 (-0.007791) | 0.122402 \/ 0.014526 (0.107877) | 0.132645 \/ 0.176557 (-0.043911) | 0.212635 \/ 0.737135 (-0.524500) | 0.136901 \/ 0.296338 (-0.159438) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.644017 \/ 0.215209 (0.428808) | 6.597151 \/ 2.077655 (4.519496) | 2.454471 \/ 1.504120 (0.950351) | 2.151357 \/ 1.541195 (0.610163) | 2.290748 \/ 1.468490 (0.822258) | 0.970194 \/ 4.584777 (-3.614583) | 
5.475275 \/ 3.745712 (1.729563) | 2.772658 \/ 5.269862 (-2.497204) | 1.785311 \/ 4.565676 (-2.780366) | 0.114503 \/ 0.424275 (-0.309772) | 0.015374 \/ 0.007607 (0.007767) | 0.768413 \/ 0.226044 (0.542368) | 7.956219 \/ 2.268929 (5.687290) | 3.272138 \/ 55.444624 (-52.172486) | 2.539638 \/ 6.876477 (-4.336839) | 2.713526 \/ 2.142072 (0.571454) | 1.181221 \/ 4.805227 (-3.624006) | 0.236327 \/ 6.500664 (-6.264337) | 0.089815 \/ 0.075469 (0.014345) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.521805 \/ 1.841788 (-0.319983) | 18.196529 \/ 8.074308 (10.122221) | 20.287324 \/ 10.191392 (10.095932) | 0.256959 \/ 0.680424 (-0.423465) | 0.028846 \/ 0.534201 (-0.505355) | 0.522354 \/ 0.579283 (-0.056929) | 0.600216 \/ 0.434364 (0.165852) | 0.607668 \/ 0.540337 (0.067331) | 0.762101 \/ 1.386936 (-0.624835) |\n\n<\/details>\nPyArrow==latest\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009227 \/ 0.011353 (-0.002126) | 0.005398 \/ 0.011008 (-0.005610) | 0.094998 \/ 0.038508 (0.056490) | 0.036633 \/ 0.023109 (0.013524) | 0.493317 \/ 0.275898 (0.217419) | 0.517216 \/ 0.323480 (0.193736) | 0.005510 \/ 0.007986 (-0.002476) | 0.004249 \/ 0.004328 (-0.000079) | 0.107936 \/ 0.004250 (0.103685) | 0.050223 \/ 0.037052 (0.013171) | 0.580275 \/ 0.258489 (0.321786) | 0.551477 \/ 0.293841 (0.257636) | 0.048758 \/ 0.128546 (-0.079788) | 0.013954 \/ 0.075646 (-0.061692) | 0.107021 \/ 0.419271 (-0.312250) | 0.064416 \/ 0.043533 (0.020884) | 0.485225 \/ 0.255139 (0.230086) | 0.513862 \/ 0.283200 (0.230663) | 0.118848 \/ 0.141683 (-0.022835) | 1.755396 \/ 1.452155 (0.303241) | 1.970349 \/ 1.492716 (0.477633) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.290743 \/ 0.018006 (0.272737) | 0.603293 \/ 0.000490 (0.602803) | 0.006814 \/ 0.000200 (0.006614) | 0.000156 \/ 0.000054 (0.000101) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029862 \/ 0.037411 (-0.007550) | 0.136530 \/ 0.014526 (0.122005) | 0.133728 \/ 0.176557 (-0.042829) | 0.194709 \/ 0.737135 (-0.542427) | 0.151080 \/ 0.296338 (-0.145258) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.649202 \/ 0.215209 (0.433993) | 6.637578 \/ 2.077655 (4.559923) | 3.040135 \/ 1.504120 (1.536015) | 2.671308 \/ 1.541195 (1.130113) | 2.722412 \/ 1.468490 (1.253922) | 0.953029 \/ 4.584777 (-3.631748) | 
5.805002 \/ 3.745712 (2.059290) | 5.049939 \/ 5.269862 (-0.219922) | 2.284053 \/ 4.565676 (-2.281623) | 0.130399 \/ 0.424275 (-0.293876) | 0.014726 \/ 0.007607 (0.007119) | 0.932570 \/ 0.226044 (0.706526) | 8.576693 \/ 2.268929 (6.307765) | 4.032738 \/ 55.444624 (-51.411886) | 3.274715 \/ 6.876477 (-3.601762) | 3.513788 \/ 2.142072 (1.371716) | 1.130624 \/ 4.805227 (-3.674603) | 0.219597 \/ 6.500664 (-6.281067) | 0.081425 \/ 0.075469 (0.005956) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.735312 \/ 1.841788 (-0.106476) | 18.438587 \/ 8.074308 (10.364279) | 21.582310 \/ 10.191392 (11.390918) | 0.224040 \/ 0.680424 (-0.456384) | 0.027590 \/ 0.534201 (-0.506611) | 0.503598 \/ 0.579283 (-0.075685) | 0.624379 \/ 0.434364 (0.190015) | 0.571911 \/ 0.540337 (0.031574) | 0.723215 \/ 1.386936 (-0.663721) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#9e40d28f2b0060a429c70827191fa5ff3ce8cf27 \"CML watermark\")\n"],"created_at":1686750638000,"updated_at":1686753792000,"closed_at":1686753225000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5956","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5956","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5956.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5956.patch","merged_at":1686753225000},"body":"ArrowExamplesIterable.shard_data_sources was outdated\r\n\r\nI also fixed a warning message by not using format_type= in with_format()","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5956\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5956\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5955","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5955\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5955\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5955\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5955","id":1756827133,"node_id":"I_kwDODunzps5otw39","number":5955,"title":"Strange bug in loading local JSON files, using 
load_dataset","user":{"login":"Night-Quiet","id":73934131,"node_id":"MDQ6VXNlcjczOTM0MTMx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/73934131?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Night-Quiet","html_url":"https:\/\/github.com\/Night-Quiet","followers_url":"https:\/\/api.github.com\/users\/Night-Quiet\/followers","following_url":"https:\/\/api.github.com\/users\/Night-Quiet\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Night-Quiet\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Night-Quiet\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Night-Quiet\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Night-Quiet\/orgs","repos_url":"https:\/\/api.github.com\/users\/Night-Quiet\/repos","events_url":"https:\/\/api.github.com\/users\/Night-Quiet\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Night-Quiet\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This is the actual error:\r\n```\r\nFailed to read file '\/home\/lakala\/hjc\/code\/pycode\/glm\/temp.json' with error : cannot mix list and non-list, non-null values\r\n```\r\nWhich means some samples are incorrectly formatted.\r\n\r\nPyArrow, a storage backend that we use under the hood, requires that all the list elements have the same level of nesting (same number of dimensions) or are `None`.\r\n```python\r\nimport pyarrow as pa\r\npa.array([[1, 2, 3], 2]) # ArrowInvalid: cannot mix list and non-list, non-null values\r\npa.array([[1, 2, 3], [2]]) # works\r\n``` ","@mariosasko \r\nI used the same operation to check the original data before and after slicing.\r\nThis is reflected in my code.\r\n160000 is not a specific number.\r\nI can also get output using 150000.\r\nThis doesn't seem to align very well with what you said.\r\nBecause if only some sample formats are incorrect.\r\nSo there should be an error in one of the front and back slices.\r\nthank you for your reply.","Our JSON loader does the following in your case:\r\n\r\n```python\r\nimport json\r\nimport pyarrow as pa\r\n\r\nwith open(file, encoding=\"utf-8\") as f:\r\n dataset = json.load(f)\r\nkeys = set().union(*[row.keys() for row in dataset])\r\nmapping = {col: [row.get(col) for row in dataset] for col in keys}\r\npa_table = pa.Table.from_pydict(mapping) # the ArrowInvalid error comes from here\r\n```\r\n\r\nSo if this code throws an error with correctly-formatted JSON, then this is an Arrow bug and should be reported in their repo.\r\n\r\n> I used the same operation to check the original data before and after slicing.\r\nThis is reflected in my code.\r\n160000 is not a specific number.\r\nI can also get output using 150000.\r\nThis doesn't seem to align very well with what you said.\r\nBecause if only some sample formats are incorrect.\r\nSo there should be an error in one of the front and back slices.\r\n\r\nYou should shuffle the data to make sure that's not the case","@mariosasko \r\nThank you.\r\nI will try again."],"created_at":1686746760000,"updated_at":1687358535000,"closed_at":1687358535000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nI am using 'load_dataset 'loads a JSON file, but I found a strange bug: an error will be reported when the length of the JSON file exceeds 160000 (uncertain exact number). 
I have checked the data through the following code and there are no issues. So I cannot determine the true reason for this error. \r\n\r\nThe data is a list containing a dictionary. As follows: \r\n\r\n[\r\n{'input': 'someting...', 'target': 'someting...', 'type': 'someting...', 'history': ['someting...', ...]}, \r\n...\r\n]\n\n### Steps to reproduce the bug\n\n```\r\nimport json\r\nfrom datasets import load_dataset\r\n\r\npath = \"target.json\"\r\ntemp_path = \"temp.json\"\r\n\r\nwith open(path, \"r\") as f:\r\n data = json.load(f)\r\n print(f\"\\n-------the JSON file length is: {len(data)}-------\\n\")\r\n\r\nwith open(temp_path, \"w\") as f:\r\n json.dump(data[:160000], f)\r\ndataset = load_dataset(\"json\", data_files=temp_path)\r\nprint(\"\\n-------This works when the JSON file length is 160000-------\\n\")\r\n\r\nwith open(temp_path, \"w\") as f:\r\n json.dump(data[160000:], f)\r\ndataset = load_dataset(\"json\", data_files=temp_path)\r\nprint(\"\\n-------This works and eliminates data issues-------\\n\")\r\n\r\nwith open(temp_path, \"w\") as f:\r\n json.dump(data[:170000], f)\r\ndataset = load_dataset(\"json\", data_files=temp_path)\r\n```\n\n### Expected behavior\n\n```\r\n-------the JSON file length is: 173049-------\r\n\r\nDownloading and preparing dataset json\/default to \/root\/.cache\/huggingface\/datasets\/json\/default-acf3c7f418c5f4b4\/0.0.0\/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4...\r\nDownloading data files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 3328.81it\/s]\r\nExtracting data files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 639.47it\/s]\r\nDataset json downloaded and prepared to \/root\/.cache\/huggingface\/datasets\/json\/default-acf3c7f418c5f4b4\/0.0.0\/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4. Subsequent calls will reuse this data.\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 265.85it\/s]\r\n\r\n-------This works when the JSON file length is 160000-------\r\n\r\nDownloading and preparing dataset json\/default to \/root\/.cache\/huggingface\/datasets\/json\/default-a42f04b263ceea6a\/0.0.0\/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4...\r\nDownloading data files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 2038.05it\/s]\r\nExtracting data files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 794.83it\/s]\r\nDataset json downloaded and prepared to \/root\/.cache\/huggingface\/datasets\/json\/default-a42f04b263ceea6a\/0.0.0\/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4. 
Subsequent calls will reuse this data.\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 681.00it\/s]\r\n\r\n-------This works and eliminates data issues-------\r\n\r\nDownloading and preparing dataset json\/default to \/root\/.cache\/huggingface\/datasets\/json\/default-63f391c89599c7b0\/0.0.0\/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4...\r\nDownloading data files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 3682.44it\/s]\r\nExtracting data files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 788.70it\/s]\r\nGenerating train split: 0 examples [00:00, ? examples\/s]Failed to read file '\/home\/lakala\/hjc\/code\/pycode\/glm\/temp.json' with error : cannot mix list and non-list, non-null values\r\nTraceback (most recent call last):\r\n File \"\/home\/lakala\/conda\/envs\/glm\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 1858, in _prepare_split_single\r\n for _, table in generator:\r\n File \"\/home\/lakala\/conda\/envs\/glm\/lib\/python3.8\/site-packages\/datasets\/packaged_modules\/json\/json.py\", line 146, in _generate_tables\r\n raise ValueError(f\"Not able to read records in the JSON file at {file}.\") from None\r\nValueError: Not able to read records in the JSON file at \/home\/lakala\/hjc\/code\/pycode\/glm\/temp.json.\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"\/home\/lakala\/hjc\/code\/pycode\/glm\/test.py\", line 22, in \r\n dataset = load_dataset(\"json\", data_files=temp_path)\r\n File \"\/home\/lakala\/conda\/envs\/glm\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 1797, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"\/home\/lakala\/conda\/envs\/glm\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 890, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/home\/lakala\/conda\/envs\/glm\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 985, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"\/home\/lakala\/conda\/envs\/glm\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 1746, in _prepare_split\r\n for job_id, done, content in self._prepare_split_single(\r\n File \"\/home\/lakala\/conda\/envs\/glm\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 1891, in _prepare_split_single\r\n raise DatasetGenerationError(\"An error occurred while generating the dataset\") from e\r\ndatasets.builder.DatasetGenerationError: An error occurred while generating the dataset\r\n```\n\n### Environment info\n\n```\r\nUbuntu==22.04\r\n\r\npython==3.8\r\n\r\npytorch-transformers==1.2.0\r\ntransformers== 
4.27.1\r\ndatasets==2.12.0\r\nnumpy==1.24.3\r\npandas==1.5.3\r\n```","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5955\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5955\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5954","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5954\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5954\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5954\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5954","id":1756572994,"node_id":"PR_kwDODunzps5S-hSP","number":5954,"title":"Better filenotfound for gated","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
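One way to locate the record behind the "cannot mix list and non-list, non-null values" failure reported above is to scan for fields whose Python type varies across records, since PyArrow raises exactly that error when a field is a list in some rows and a scalar in others. A minimal diagnostic sketch, assuming the top-level JSON array layout from that report (`target.json` is the reporter's path):

```py
import json

path = "target.json"  # path taken from the report above

with open(path, "r") as f:
    data = json.load(f)

# Track every type observed per field; a field that is a list in some
# records and a scalar in others is what trips PyArrow's type inference.
field_types = {}
for i, record in enumerate(data):
    for key, value in record.items():
        if value is None:  # nulls may mix with any type, so skip them
            continue
        seen = field_types.setdefault(key, set())
        type_name = type(value).__name__
        if seen and type_name not in seen:
            print(f"record {i}: field {key!r} is {type_name}, "
                  f"previously seen as {sorted(seen)}")
        seen.add(type_name)

print({key: sorted(types) for key, types in field_types.items()})
```

Once every field keeps a single type (for example, `history` always a list), `load_dataset("json", ...)` should succeed on the full file.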
(Automated CI benchmark comment omitted: PyArrow==8.0.0 and PyArrow==latest tables; CML watermark 7e0c1ceab96821c7c6557482d25a9bd2078d716a.)","(Automated CI benchmark comment omitted: PyArrow==8.0.0 and PyArrow==latest tables; CML watermark 7845d4c3c301226b3f8941ac90aaa123bfd7c69e.)"],"created_at":1686738790000,"updated_at":1686746007000,"closed_at":1686745591000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5954","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5954","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5954.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5954.patch","merged_at":1686745591000},"body":"close https:\/\/github.com\/huggingface\/datasets\/issues\/5953\r\n\r\n\"image\"\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5954\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5954\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5953","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5953\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5953\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5953\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5953","id":1756520523,"node_id":"I_kwDODunzps5osmBL","number":5953,"title":"Bad error message when trying to download gated
dataset","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["cc @sanchit-gandhi @Vaibhavs10 @lhoestq - this is mainly for demos that use Common Voice datasets as done here: https:\/\/github.com\/facebookresearch\/fairseq\/tree\/main\/examples\/mms#-transformers\r\n","Hi ! the error for me is\r\n\r\n```\r\nFileNotFoundError: Couldn't find a dataset script at \/content\/mozilla-foundation\/common_voice_13_0\/common_voice_13_0.py or any data file in the same directory. Couldn't find 'mozilla-foundation\/common_voice_13_0' on the Hugging Face Hub either: FileNotFoundError: Dataset 'mozilla-foundation\/common_voice_13_0' doesn't exist on the Hub. If the repo is private or gated, make sure to log in with `huggingface-cli login`.\r\n```\r\n\r\nAnd tbh idk how you managed to get your error. \"n_shards.json\" is not even a thing in `datasets`","Okay, I am able to reproduce @patrickvonplaten's original error: https:\/\/github.com\/Vaibhavs10\/scratchpad\/blob\/main\/cv13_datasets_test.ipynb\r\n\r\nAlso not sure why it looks for `n_shards.json`","Ok I see, this file is downloaded from the CV dataset script - let me investigate","Ok I see: when you log out you no longer have access to the repository.\r\n\r\nTherefore the dataset script is loaded from cache:\r\n```\r\nWARNING:datasets.load:Using the latest cached version of the module from \/root\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/mozilla-foundation--common_voice_13_0\/22809012aac1fc9803eaffc44122e4149043748e93933935d5ea19898587e4d7 (last modified on Wed Jun 14 10:13:17 2023) since it couldn't be found locally at mozilla-foundation\/common_voice_13_0., or remotely on the Hugging Face Hub.\r\n```\r\n\r\nand the script tries to download the n_shards.json but fails","Is this ok for you https:\/\/github.com\/huggingface\/datasets\/pull\/5954 ?\r\n\r\nI'll do a release this afternoon","Cool! ","this is included in the new release 2.13.0"],"created_at":1686737019000,"updated_at":1686760611000,"closed_at":1686745592000,"author_association":"MEMBER","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nWhen I attempt to download a model from the Hub that is gated without being logged in, I get a nice error message. 
E.g.:\r\n\r\n```sh\r\nRepository Not Found for url: https:\/\/huggingface.co\/api\/models\/DeepFloyd\/IF-I-XL-v1.0.\r\nPlease make sure you specified the correct `repo_id` and `repo_type`.\r\nIf you are trying to access a private or gated repo, make sure you are authenticated.\r\nInvalid username or password..\r\nWill try to load from local cache.\r\n```\r\n\r\nIf I do the same for a gated dataset on the Hub, I'm not given a nice error message IMO:\r\n\r\n```sh\r\nFile ~\/hf\/lib\/python3.10\/site-packages\/fsspec\/implementations\/http.py:430, in HTTPFileSystem._info(self, url, **kwargs)\r\n 427 except Exception as exc:\r\n 428 if policy == \"get\":\r\n 429 # If get failed, then raise a FileNotFoundError\r\n--> 430 raise FileNotFoundError(url) from exc\r\n 431 logger.debug(str(exc))\r\n 433 return {\"name\": url, \"size\": None, **info, \"type\": \"file\"}\r\n\r\nFileNotFoundError: https:\/\/huggingface.co\/datasets\/mozilla-foundation\/common_voice_13_0\/resolve\/main\/n_shards.json\r\n```\n\n### Steps to reproduce the bug\n\n```\r\nhuggingface-cli logout\r\n```\r\n\r\nand then:\r\n\r\n```py\r\nfrom datasets import load_dataset, Audio\r\n\r\n# English\r\nstream_data = load_dataset(\"mozilla-foundation\/common_voice_13_0\", \"en\", split=\"test\", streaming=True)\r\nstream_data = stream_data.cast_column(\"audio\", Audio(sampling_rate=16000))\r\nen_sample = next(iter(stream_data))[\"audio\"][\"array\"]\r\n\r\n# Swahili\r\nstream_data = load_dataset(\"mozilla-foundation\/common_voice_13_0\", \"sw\", split=\"test\", streaming=True)\r\nstream_data = stream_data.cast_column(\"audio\", Audio(sampling_rate=16000))\r\nsw_sample = next(iter(stream_data))[\"audio\"][\"array\"]\r\n```\n\n### Expected behavior\n\nBetter error message\n\n### Environment info\n\nCopy-and-paste the text below in your GitHub issue.\r\n\r\n- `datasets` version: 2.12.0\r\n- Platform: Linux-6.2.0-76060200-generic-x86_64-with-glibc2.35\r\n- Python version: 3.10.6\r\n- Huggingface_hub version: 0.16.0.dev0\r\n- PyArrow version: 11.0.0\r\n- Pandas version: 1.5.3\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5953\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5953\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5952","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5952\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5952\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5952\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5952","id":1756481591,"node_id":"PR_kwDODunzps5S-OIh","number":5952,"title":"Add Arrow builder
docs","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
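For reference alongside issues 5953 and 5954 above: the working path for a gated dataset such as Common Voice is still to authenticate before streaming. A sketch, assuming a Hub token that has accepted the gated repo's terms (`use_auth_token` is the argument name in the datasets 2.12 release quoted in the issue; newer releases call it `token`):

```py
from huggingface_hub import login
from datasets import load_dataset, Audio

login()  # prompts for a Hub access token with access to the gated repo

stream_data = load_dataset(
    "mozilla-foundation/common_voice_13_0",
    "en",
    split="test",
    streaming=True,
    use_auth_token=True,  # named `token` in newer datasets releases
)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
en_sample = next(iter(stream_data))["audio"]["array"]
```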
(Automated CI benchmark comment omitted: PyArrow==8.0.0 and PyArrow==latest tables; CML watermark b4ab1b3ed7257b0e0ad075d7271a51835f320a5e.)","(Automated CI benchmark comment omitted: PyArrow==8.0.0 and PyArrow==latest tables; CML watermark f1911ffa5d1f58f509d04fe1ddeb9d00a63f94d5.)"],"created_at":1686735766000,"updated_at":1686753751000,"closed_at":1686753279000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5952","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5952","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5952.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5952.patch","merged_at":1686753279000},"body":"following https:\/\/github.com\/huggingface\/datasets\/pull\/5944","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5952\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5952\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5951","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5951\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5951\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5951\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5951","id":1756363546,"node_id":"I_kwDODunzps5or_sa","number":5951,"title":"What is the Right way to use discofuse
dataset??","user":{"login":"akesh1235","id":125154243,"node_id":"U_kgDOB3Wzww","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/125154243?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/akesh1235","html_url":"https:\/\/github.com\/akesh1235","followers_url":"https:\/\/api.github.com\/users\/akesh1235\/followers","following_url":"https:\/\/api.github.com\/users\/akesh1235\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/akesh1235\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/akesh1235\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/akesh1235\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/akesh1235\/orgs","repos_url":"https:\/\/api.github.com\/users\/akesh1235\/repos","events_url":"https:\/\/api.github.com\/users\/akesh1235\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/akesh1235\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for opening https:\/\/huggingface.co\/datasets\/discofuse\/discussions\/3, let's continue the discussion over there if you don't mind","I have posted there also sir, please check\r\n@lhoestq"],"created_at":1686731919000,"updated_at":1686749106000,"closed_at":1686744616000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"[Click here for Dataset link](https:\/\/huggingface.co\/datasets\/discofuse\/viewer\/discofuse-wikipedia\/train?row=6)\r\n**Below is the approach as per my understanding. Is it correct? :question: :question:**\r\n\r\nThe **columns\/features from the `DiscoFuse` dataset** that will be the **input to the `encoder` and `decoder`** are:\r\n\r\n1. **coherent_first_sentence**\r\n\r\n2. **coherent_second_sentence**\r\n\r\n3. **incoherent_first_sentence**\r\n\r\n4. **incoherent_second_sentence**\r\n\r\nThe **`encoder` will take these four columns as input and encode them into a sequence of hidden states. 
The `decoder` will then take these hidden states as input and decode them into a new sentence that fuses the two original sentences together.**\r\n\r\nThe **discourse type, connective_string, has_coref_type_pronoun, and has_coref_type_nominal columns will not be used as input to the encoder or decoder.** These columns are used to provide additional information about the dataset, but they are not necessary for the task of sentence fusion.\r\n\r\nPlease correct me if I am wrong; otherwise, if this understanding is right, how shall I implement this task practically?","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5951\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5951\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5950","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5950\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5950\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5950\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5950","id":1755197946,"node_id":"I_kwDODunzps5onjH6","number":5950,"title":"Support for data with instance-wise dictionary as features","user":{"login":"richardwth","id":33274336,"node_id":"MDQ6VXNlcjMzMjc0MzM2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33274336?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/richardwth","html_url":"https:\/\/github.com\/richardwth","followers_url":"https:\/\/api.github.com\/users\/richardwth\/followers","following_url":"https:\/\/api.github.com\/users\/richardwth\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/richardwth\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/richardwth\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/richardwth\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/richardwth\/orgs","repos_url":"https:\/\/api.github.com\/users\/richardwth\/repos","events_url":"https:\/\/api.github.com\/users\/richardwth\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/richardwth\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! 
We use the Arrow columnar format under the hood, which doesn't support such dictionaries: each field must have a fixed type and exist in each sample.\r\n\r\nInstead you can restructure your data like\r\n```\r\n{\r\n \"index\": 0,\r\n \"keys\": [\"2 * x + y >= 3\"],\r\n \"values\": [[\"2 * x + y >= 3\", \"4 * x + 2 * y >= 6\"]],\r\n},\r\n...\r\n{\r\n \"index\": 9999,\r\n \"keys\": [\"x >= 6\"],\r\n \"values\": [[\"x >= 6\", \"x >= 0\", \"x >= -1\"]],\r\n},\r\n...\r\n```"],"created_at":1686671340000,"updated_at":1686744818000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Feature request\n\nI notice that when loading data instances whose feature type is a Python dictionary, the dictionary keys are broadcast so that every instance has the same set of keys. Please see an example in the Motivation section.\r\n\r\nIs it possible to avoid this behavior, i.e., to load dictionary features as they are and not broadcast the keys among instances? Please note that these dictionaries would have to be processed dynamically at each training iteration into strings (and tokenized).\n\n### Motivation\n\nI am trying to load a dataset from a JSON file. Each instance of the dataset has a feature that is a dictionary, but its keys depend on the instance; any two instances may have different keys. For example, imagine a dataset that contains a set of math expressions from a bunch of mutually redundant expressions:\r\n```\r\n{\r\n \"index\": 0,\r\n \"feature\": {\r\n \"2 * x + y >= 3\": [\"2 * x + y >= 3\", \"4 * x + 2 * y >= 6\"],\r\n ...\r\n }\r\n},\r\n...\r\n{\r\n \"index\": 9999,\r\n \"feature\": {\r\n \"x >= 6\": [\"x >= 6\", \"x >= 0\", \"x >= -1\"],\r\n ...\r\n }\r\n},\r\n...\r\n```\r\nWhen directly loading the dataset using `data = load_dataset(\"json\", data_files=file_paths, split='train')`, each instance would have all the keys from other instances, with None as values. That is, the instance at index 0 becomes:\r\n```\r\n{\r\n \"index\": 0,\r\n \"feature\": {\r\n \"2 * x + y >= 3\": [\"2 * x + y >= 3\", \"4 * x + 2 * y >= 6\"],\r\n ...\r\n \"x >= 6\": None, # keys from other instances\r\n ...\r\n }\r\n},\r\n```\r\nThis is not desirable. Moreover, an error is raised if I attempt to combine two such datasets using `data = concatenate_datasets(multi_datasets)`, perhaps because their dictionary features contain different keys.\r\n\r\nA solution I can think of is to store the dictionary features as a long string, and evaluate it later. 
Please kindly suggest any other solution using the existing methods of `datasets`.\n\n### Your contribution\n\nN\/A","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5950\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5950\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5949","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5949\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5949\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5949\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5949","id":1754843717,"node_id":"PR_kwDODunzps5S4oPC","number":5949,"title":"Replace metadata utils with `huggingface_hub`'s RepoCard API","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
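For the dictionary-features issue above, here is a minimal sketch of the suggested restructuring, assuming the original records live in a JSON Lines file (the file names are placeholders). It rewrites each instance-wise "feature" dict into parallel "keys"/"values" lists so that every sample shares one fixed schema that Arrow can store.

```python
# Minimal sketch under assumed file names; rewrites the instance-wise dict
# feature into fixed-schema "keys"/"values" columns before loading.
import json

from datasets import load_dataset

with open("data.jsonl") as src, open("data_restructured.jsonl", "w") as dst:
    for line in src:
        record = json.loads(line)
        feature = record.pop("feature")           # instance-wise dict
        record["keys"] = list(feature.keys())     # same columns in every sample
        record["values"] = list(feature.values())
        dst.write(json.dumps(record) + "\n")

# Every sample now has the same fields, so Arrow can infer a single schema.
data = load_dataset("json", data_files="data_restructured.jsonl", split="train")
```

Alternatively, the "long string" workaround from the issue body amounts to storing `json.dumps(feature)` in a single string column and calling `json.loads` on it at each training iteration, which is a safer equivalent of evaluating the string.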
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006635 \/ 0.011353 (-0.004718) | 0.004439 \/ 0.011008 (-0.006570) | 0.107831 \/ 0.038508 (0.069323) | 0.035664 \/ 0.023109 (0.012555) | 0.393733 \/ 0.275898 (0.117835) | 0.418336 \/ 0.323480 (0.094856) | 0.005739 \/ 0.007986 (-0.002247) | 0.005737 \/ 0.004328 (0.001408) | 0.079820 \/ 0.004250 (0.075569) | 0.045402 \/ 0.037052 (0.008349) | 0.396108 \/ 0.258489 (0.137619) | 0.422951 \/ 0.293841 (0.129110) | 0.030506 \/ 0.128546 (-0.098040) | 0.009785 \/ 0.075646 (-0.065861) | 0.375302 \/ 0.419271 (-0.043969) | 0.054355 \/ 0.043533 (0.010823) | 0.399652 \/ 0.255139 (0.144513) | 0.410825 \/ 0.283200 (0.127625) | 0.109238 \/ 0.141683 (-0.032445) | 1.687532 \/ 1.452155 (0.235378) | 1.736829 \/ 1.492716 (0.244113) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.226514 \/ 0.018006 (0.208508) | 0.487010 \/ 0.000490 (0.486520) | 0.006436 \/ 0.000200 (0.006236) | 0.000102 \/ 0.000054 (0.000048) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029097 \/ 0.037411 (-0.008315) | 0.122979 \/ 0.014526 (0.108453) | 0.129454 \/ 0.176557 (-0.047103) | 0.194006 \/ 0.737135 (-0.543129) | 0.137968 \/ 0.296338 (-0.158370) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.466425 \/ 0.215209 (0.251216) | 4.627307 \/ 2.077655 (2.549652) | 2.108840 \/ 1.504120 (0.604720) | 1.882547 \/ 1.541195 (0.341353) | 1.891077 \/ 1.468490 (0.422587) | 0.590646 \/ 4.584777 (-3.994131) | 
4.176918 \/ 3.745712 (0.431205) | 2.071475 \/ 5.269862 (-3.198386) | 1.173815 \/ 4.565676 (-3.391862) | 0.075330 \/ 0.424275 (-0.348945) | 0.012944 \/ 0.007607 (0.005337) | 0.587080 \/ 0.226044 (0.361036) | 5.827053 \/ 2.268929 (3.558125) | 2.694258 \/ 55.444624 (-52.750366) | 2.276997 \/ 6.876477 (-4.599480) | 2.329678 \/ 2.142072 (0.187605) | 0.721860 \/ 4.805227 (-4.083367) | 0.159238 \/ 6.500664 (-6.341426) | 0.073013 \/ 0.075469 (-0.002456) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.345396 \/ 1.841788 (-0.496391) | 16.619283 \/ 8.074308 (8.544975) | 14.754754 \/ 10.191392 (4.563362) | 0.180784 \/ 0.680424 (-0.499639) | 0.020376 \/ 0.534201 (-0.513825) | 0.451010 \/ 0.579283 (-0.128273) | 0.481524 \/ 0.434364 (0.047160) | 0.564777 \/ 0.540337 (0.024440) | 0.683232 \/ 1.386936 (-0.703704) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007243 \/ 0.011353 (-0.004110) | 0.005262 \/ 0.011008 (-0.005746) | 0.084090 \/ 0.038508 (0.045581) | 0.037429 \/ 0.023109 (0.014320) | 0.404038 \/ 0.275898 (0.128140) | 0.445040 \/ 0.323480 (0.121560) | 0.006220 \/ 0.007986 (-0.001766) | 0.004256 \/ 0.004328 (-0.000072) | 0.083794 \/ 0.004250 (0.079544) | 0.052655 \/ 0.037052 (0.015603) | 0.414083 \/ 0.258489 (0.155594) | 0.458190 \/ 0.293841 (0.164349) | 0.032719 \/ 0.128546 (-0.095828) | 0.010063 \/ 0.075646 (-0.065583) | 0.092281 \/ 0.419271 (-0.326990) | 0.053888 \/ 0.043533 (0.010355) | 0.407813 \/ 0.255139 (0.152674) | 0.431692 \/ 0.283200 (0.148493) | 0.119799 \/ 0.141683 (-0.021884) | 1.709853 \/ 1.452155 (0.257698) | 1.771592 \/ 1.492716 (0.278876) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.246540 \/ 0.018006 (0.228534) | 0.483199 \/ 0.000490 (0.482709) | 0.002514 \/ 0.000200 (0.002315) | 0.000096 \/ 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.031576 \/ 0.037411 (-0.005835) | 0.130020 \/ 0.014526 (0.115495) | 0.140285 \/ 0.176557 (-0.036272) | 0.196164 \/ 0.737135 (-0.540972) | 0.143924 \/ 0.296338 (-0.152414) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.488549 \/ 0.215209 (0.273340) | 4.888055 \/ 2.077655 (2.810400) | 2.389163 \/ 1.504120 (0.885043) | 2.184626 \/ 1.541195 (0.643431) | 2.260227 \/ 1.468490 (0.791737) | 0.601331 \/ 4.584777 (-3.983446) | 
4.386159 \/ 3.745712 (0.640447) | 3.345814 \/ 5.269862 (-1.924048) | 1.734360 \/ 4.565676 (-2.831317) | 0.073199 \/ 0.424275 (-0.351076) | 0.012397 \/ 0.007607 (0.004790) | 0.601411 \/ 0.226044 (0.375366) | 6.135000 \/ 2.268929 (3.866072) | 2.930169 \/ 55.444624 (-52.514456) | 2.532631 \/ 6.876477 (-4.343845) | 2.619351 \/ 2.142072 (0.477279) | 0.740954 \/ 4.805227 (-4.064274) | 0.162936 \/ 6.500664 (-6.337728) | 0.073885 \/ 0.075469 (-0.001585) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.502493 \/ 1.841788 (-0.339294) | 17.026756 \/ 8.074308 (8.952448) | 15.880958 \/ 10.191392 (5.689566) | 0.167261 \/ 0.680424 (-0.513163) | 0.020347 \/ 0.534201 (-0.513854) | 0.452902 \/ 0.579283 (-0.126381) | 0.481614 \/ 0.434364 (0.047250) | 0.539893 \/ 0.540337 (-0.000445) | 0.653401 \/ 1.386936 (-0.733535) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#6a5781212e968e2515afdf29370a6eab6f657120 \"CML watermark\")\n","
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008268 \/ 0.011353 (-0.003084) | 0.005538 \/ 0.011008 (-0.005470) | 0.126136 \/ 0.038508 (0.087628) | 0.046100 \/ 0.023109 (0.022991) | 0.366882 \/ 0.275898 (0.090984) | 0.408912 \/ 0.323480 (0.085432) | 0.007090 \/ 0.007986 (-0.000895) | 0.004820 \/ 0.004328 (0.000491) | 0.091432 \/ 0.004250 (0.087181) | 0.058390 \/ 0.037052 (0.021338) | 0.368787 \/ 0.258489 (0.110298) | 0.419429 \/ 0.293841 (0.125588) | 0.034958 \/ 0.128546 (-0.093588) | 0.010526 \/ 0.075646 (-0.065120) | 0.463063 \/ 0.419271 (0.043791) | 0.070544 \/ 0.043533 (0.027011) | 0.366182 \/ 0.255139 (0.111043) | 0.390851 \/ 0.283200 (0.107652) | 0.128377 \/ 0.141683 (-0.013306) | 1.819385 \/ 1.452155 (0.367231) | 1.928834 \/ 1.492716 (0.436117) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.228413 \/ 0.018006 (0.210407) | 0.485511 \/ 0.000490 (0.485021) | 0.005395 \/ 0.000200 (0.005195) | 0.000119 \/ 0.000054 (0.000064) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.035209 \/ 0.037411 (-0.002203) | 0.144492 \/ 0.014526 (0.129967) | 0.150467 \/ 0.176557 (-0.026089) | 0.223861 \/ 0.737135 (-0.513274) | 0.156363 \/ 0.296338 (-0.139975) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.517751 \/ 0.215209 (0.302542) | 5.150438 \/ 2.077655 (3.072783) | 2.483601 \/ 1.504120 (0.979481) | 2.279786 \/ 1.541195 (0.738592) | 2.374510 \/ 1.468490 (0.906020) | 0.637547 \/ 4.584777 (-3.947230) | 
4.845393 \/ 3.745712 (1.099681) | 2.241554 \/ 5.269862 (-3.028307) | 1.290105 \/ 4.565676 (-3.275572) | 0.079791 \/ 0.424275 (-0.344484) | 0.014915 \/ 0.007607 (0.007308) | 0.640468 \/ 0.226044 (0.414423) | 6.394810 \/ 2.268929 (4.125881) | 3.012748 \/ 55.444624 (-52.431876) | 2.625565 \/ 6.876477 (-4.250912) | 2.792435 \/ 2.142072 (0.650363) | 0.782284 \/ 4.805227 (-4.022944) | 0.171628 \/ 6.500664 (-6.329036) | 0.081714 \/ 0.075469 (0.006245) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.592411 \/ 1.841788 (-0.249377) | 18.999604 \/ 8.074308 (10.925295) | 18.469946 \/ 10.191392 (8.278554) | 0.200878 \/ 0.680424 (-0.479546) | 0.021595 \/ 0.534201 (-0.512606) | 0.519247 \/ 0.579283 (-0.060036) | 0.534940 \/ 0.434364 (0.100576) | 0.656325 \/ 0.540337 (0.115987) | 0.789658 \/ 1.386936 (-0.597278) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008093 \/ 0.011353 (-0.003260) | 0.005524 \/ 0.011008 (-0.005484) | 0.092339 \/ 0.038508 (0.053831) | 0.045619 \/ 0.023109 (0.022510) | 0.449376 \/ 0.275898 (0.173478) | 0.478587 \/ 0.323480 (0.155107) | 0.006978 \/ 0.007986 (-0.001007) | 0.004622 \/ 0.004328 (0.000294) | 0.090618 \/ 0.004250 (0.086368) | 0.059321 \/ 0.037052 (0.022269) | 0.450989 \/ 0.258489 (0.192500) | 0.491652 \/ 0.293841 (0.197811) | 0.033308 \/ 0.128546 (-0.095238) | 0.010677 \/ 0.075646 (-0.064969) | 0.099836 \/ 0.419271 (-0.319435) | 0.055937 \/ 0.043533 (0.012404) | 0.440560 \/ 0.255139 (0.185421) | 0.475305 \/ 0.283200 (0.192105) | 0.130829 \/ 0.141683 (-0.010854) | 1.857943 \/ 1.452155 (0.405789) | 1.989534 \/ 1.492716 (0.496818) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.244715 \/ 0.018006 (0.226709) | 0.482866 \/ 0.000490 (0.482377) | 0.001100 \/ 0.000200 (0.000900) | 0.000095 \/ 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.036288 \/ 0.037411 (-0.001124) | 0.147903 \/ 0.014526 (0.133377) | 0.154141 \/ 0.176557 (-0.022416) | 0.221863 \/ 0.737135 (-0.515272) | 0.162319 \/ 0.296338 (-0.134019) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.536972 \/ 0.215209 (0.321763) | 5.382866 \/ 2.077655 (3.305211) | 2.719575 \/ 1.504120 (1.215456) | 2.516596 \/ 1.541195 (0.975401) | 2.699602 \/ 1.468490 (1.231112) | 0.639886 \/ 4.584777 (-3.944891) | 
5.109746 \/ 3.745712 (1.364034) | 2.260206 \/ 5.269862 (-3.009656) | 1.305506 \/ 4.565676 (-3.260170) | 0.080262 \/ 0.424275 (-0.344013) | 0.014801 \/ 0.007607 (0.007194) | 0.661228 \/ 0.226044 (0.435184) | 6.596485 \/ 2.268929 (4.327557) | 3.226114 \/ 55.444624 (-52.218510) | 2.859776 \/ 6.876477 (-4.016701) | 3.059355 \/ 2.142072 (0.917282) | 0.793413 \/ 4.805227 (-4.011814) | 0.176521 \/ 6.500664 (-6.324143) | 0.084062 \/ 0.075469 (0.008593) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.642085 \/ 1.841788 (-0.199703) | 20.355459 \/ 8.074308 (12.281151) | 17.979620 \/ 10.191392 (7.788228) | 0.229329 \/ 0.680424 (-0.451094) | 0.025681 \/ 0.534201 (-0.508520) | 0.534142 \/ 0.579283 (-0.045141) | 0.623439 \/ 0.434364 (0.189075) | 0.621938 \/ 0.540337 (0.081601) | 0.759038 \/ 1.386936 (-0.627898) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#6a98ff43225df344139023a5b7eb9caef610b677 \"CML watermark\")\n","
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007703 \/ 0.011353 (-0.003649) | 0.005362 \/ 0.011008 (-0.005646) | 0.113111 \/ 0.038508 (0.074602) | 0.038891 \/ 0.023109 (0.015782) | 0.348938 \/ 0.275898 (0.073040) | 0.398079 \/ 0.323480 (0.074599) | 0.006707 \/ 0.007986 (-0.001278) | 0.004489 \/ 0.004328 (0.000160) | 0.087194 \/ 0.004250 (0.082943) | 0.054268 \/ 0.037052 (0.017216) | 0.359949 \/ 0.258489 (0.101460) | 0.402959 \/ 0.293841 (0.109118) | 0.032508 \/ 0.128546 (-0.096038) | 0.010224 \/ 0.075646 (-0.065422) | 0.387007 \/ 0.419271 (-0.032264) | 0.058971 \/ 0.043533 (0.015439) | 0.345085 \/ 0.255139 (0.089946) | 0.384306 \/ 0.283200 (0.101107) | 0.122253 \/ 0.141683 (-0.019430) | 1.706353 \/ 1.452155 (0.254199) | 1.840780 \/ 1.492716 (0.348063) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.254374 \/ 0.018006 (0.236368) | 0.497387 \/ 0.000490 (0.496897) | 0.012294 \/ 0.000200 (0.012094) | 0.000108 \/ 0.000054 (0.000054) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030902 \/ 0.037411 (-0.006509) | 0.132098 \/ 0.014526 (0.117573) | 0.140311 \/ 0.176557 (-0.036245) | 0.205887 \/ 0.737135 (-0.531249) | 0.143992 \/ 0.296338 (-0.152347) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.467367 \/ 0.215209 (0.252158) | 4.669936 \/ 2.077655 (2.592281) | 2.155358 \/ 1.504120 (0.651238) | 1.984132 \/ 1.541195 (0.442937) | 2.102352 \/ 1.468490 (0.633861) | 0.607014 \/ 4.584777 (-3.977763) | 
4.396479 \/ 3.745712 (0.650767) | 4.666056 \/ 5.269862 (-0.603806) | 2.176649 \/ 4.565676 (-2.389028) | 0.072657 \/ 0.424275 (-0.351619) | 0.012367 \/ 0.007607 (0.004759) | 0.569706 \/ 0.226044 (0.343661) | 5.749083 \/ 2.268929 (3.480154) | 2.640824 \/ 55.444624 (-52.803801) | 2.310253 \/ 6.876477 (-4.566224) | 2.486748 \/ 2.142072 (0.344676) | 0.737891 \/ 4.805227 (-4.067336) | 0.163507 \/ 6.500664 (-6.337157) | 0.075776 \/ 0.075469 (0.000307) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.362710 \/ 1.841788 (-0.479078) | 17.010705 \/ 8.074308 (8.936396) | 15.084231 \/ 10.191392 (4.892839) | 0.218274 \/ 0.680424 (-0.462150) | 0.019555 \/ 0.534201 (-0.514646) | 0.456013 \/ 0.579283 (-0.123270) | 0.502772 \/ 0.434364 (0.068408) | 0.581480 \/ 0.540337 (0.041142) | 0.686952 \/ 1.386936 (-0.699984) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007976 \/ 0.011353 (-0.003377) | 0.005141 \/ 0.011008 (-0.005868) | 0.086629 \/ 0.038508 (0.048121) | 0.039553 \/ 0.023109 (0.016444) | 0.433028 \/ 0.275898 (0.157130) | 0.463444 \/ 0.323480 (0.139964) | 0.006967 \/ 0.007986 (-0.001018) | 0.005814 \/ 0.004328 (0.001485) | 0.086266 \/ 0.004250 (0.082015) | 0.055384 \/ 0.037052 (0.018332) | 0.428733 \/ 0.258489 (0.170243) | 0.475670 \/ 0.293841 (0.181829) | 0.032872 \/ 0.128546 (-0.095674) | 0.010664 \/ 0.075646 (-0.064983) | 0.094357 \/ 0.419271 (-0.324915) | 0.058386 \/ 0.043533 (0.014854) | 0.431114 \/ 0.255139 (0.175975) | 0.441728 \/ 0.283200 (0.158528) | 0.131942 \/ 0.141683 (-0.009740) | 1.782214 \/ 1.452155 (0.330060) | 1.843185 \/ 1.492716 (0.350469) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.247047 \/ 0.018006 (0.229041) | 0.488931 \/ 0.000490 (0.488441) | 0.002657 \/ 0.000200 (0.002457) | 0.000106 \/ 0.000054 (0.000052) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.033893 \/ 0.037411 (-0.003518) | 0.131021 \/ 0.014526 (0.116495) | 0.142892 \/ 0.176557 (-0.033665) | 0.200955 \/ 0.737135 (-0.536180) | 0.151329 \/ 0.296338 (-0.145010) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.521138 \/ 0.215209 (0.305929) | 5.085207 \/ 2.077655 (3.007552) | 2.652901 \/ 1.504120 (1.148781) | 2.401545 \/ 1.541195 (0.860350) | 2.553461 \/ 1.468490 (1.084971) | 0.615347 \/ 4.584777 (-3.969430) | 
4.448038 \/ 3.745712 (0.702326) | 2.049997 \/ 5.269862 (-3.219865) | 1.190602 \/ 4.565676 (-3.375075) | 0.073356 \/ 0.424275 (-0.350919) | 0.013685 \/ 0.007607 (0.006078) | 0.626705 \/ 0.226044 (0.400660) | 6.391941 \/ 2.268929 (4.123012) | 3.218864 \/ 55.444624 (-52.225760) | 2.858808 \/ 6.876477 (-4.017669) | 3.005808 \/ 2.142072 (0.863736) | 0.740725 \/ 4.805227 (-4.064502) | 0.161904 \/ 6.500664 (-6.338760) | 0.073727 \/ 0.075469 (-0.001742) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.488623 \/ 1.841788 (-0.353164) | 17.584367 \/ 8.074308 (9.510059) | 16.281818 \/ 10.191392 (6.090426) | 0.164482 \/ 0.680424 (-0.515942) | 0.020197 \/ 0.534201 (-0.514003) | 0.456750 \/ 0.579283 (-0.122533) | 0.501156 \/ 0.434364 (0.066792) | 0.549779 \/ 0.540337 (0.009442) | 0.650156 \/ 1.386936 (-0.736780) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#2b6cc63b868ea4ee60502845ebec68abb943958b \"CML watermark\")\n","
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008337 \/ 0.011353 (-0.003016) | 0.005911 \/ 0.011008 (-0.005097) | 0.129037 \/ 0.038508 (0.090529) | 0.046071 \/ 0.023109 (0.022962) | 0.418657 \/ 0.275898 (0.142759) | 0.490340 \/ 0.323480 (0.166860) | 0.006387 \/ 0.007986 (-0.001598) | 0.004724 \/ 0.004328 (0.000396) | 0.097953 \/ 0.004250 (0.093702) | 0.069025 \/ 0.037052 (0.031972) | 0.431178 \/ 0.258489 (0.172689) | 0.458363 \/ 0.293841 (0.164522) | 0.049341 \/ 0.128546 (-0.079205) | 0.014637 \/ 0.075646 (-0.061009) | 0.439800 \/ 0.419271 (0.020529) | 0.069905 \/ 0.043533 (0.026373) | 0.406775 \/ 0.255139 (0.151636) | 0.441989 \/ 0.283200 (0.158790) | 0.046009 \/ 0.141683 (-0.095674) | 1.847630 \/ 1.452155 (0.395475) | 1.904067 \/ 1.492716 (0.411351) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.288305 \/ 0.018006 (0.270299) | 0.594547 \/ 0.000490 (0.594058) | 0.005600 \/ 0.000200 (0.005400) | 0.000106 \/ 0.000054 (0.000052) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.033847 \/ 0.037411 (-0.003564) | 0.125139 \/ 0.014526 (0.110613) | 0.147982 \/ 0.176557 (-0.028574) | 0.208396 \/ 0.737135 (-0.528739) | 0.144005 \/ 0.296338 (-0.152334) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.669175 \/ 0.215209 (0.453966) | 6.605289 \/ 2.077655 (4.527634) | 2.720468 \/ 1.504120 (1.216348) | 2.341355 \/ 1.541195 (0.800160) | 2.402069 \/ 1.468490 (0.933578) | 0.939303 \/ 4.584777 (-3.645474) | 
5.718545 \/ 3.745712 (1.972833) | 2.856235 \/ 5.269862 (-2.413627) | 1.821555 \/ 4.565676 (-2.744121) | 0.105473 \/ 0.424275 (-0.318802) | 0.014490 \/ 0.007607 (0.006883) | 0.774349 \/ 0.226044 (0.548305) | 8.065048 \/ 2.268929 (5.796120) | 3.508482 \/ 55.444624 (-51.936143) | 2.822881 \/ 6.876477 (-4.053596) | 2.962947 \/ 2.142072 (0.820875) | 1.138944 \/ 4.805227 (-3.666284) | 0.248414 \/ 6.500664 (-6.252250) | 0.095665 \/ 0.075469 (0.020196) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.688231 \/ 1.841788 (-0.153557) | 18.673305 \/ 8.074308 (10.598997) | 22.768663 \/ 10.191392 (12.577271) | 0.211238 \/ 0.680424 (-0.469186) | 0.031380 \/ 0.534201 (-0.502821) | 0.517175 \/ 0.579283 (-0.062108) | 0.626437 \/ 0.434364 (0.192073) | 0.624225 \/ 0.540337 (0.083888) | 0.743746 \/ 1.386936 (-0.643191) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008888 \/ 0.011353 (-0.002464) | 0.005491 \/ 0.011008 (-0.005517) | 0.105013 \/ 0.038508 (0.066505) | 0.049456 \/ 0.023109 (0.026347) | 0.528989 \/ 0.275898 (0.253091) | 0.651871 \/ 0.323480 (0.328391) | 0.006683 \/ 0.007986 (-0.001302) | 0.004365 \/ 0.004328 (0.000037) | 0.098161 \/ 0.004250 (0.093911) | 0.075615 \/ 0.037052 (0.038563) | 0.543746 \/ 0.258489 (0.285257) | 0.650855 \/ 0.293841 (0.357014) | 0.050220 \/ 0.128546 (-0.078327) | 0.014471 \/ 0.075646 (-0.061175) | 0.115903 \/ 0.419271 (-0.303368) | 0.065925 \/ 0.043533 (0.022392) | 0.527797 \/ 0.255139 (0.272658) | 0.543834 \/ 0.283200 (0.260634) | 0.043005 \/ 0.141683 (-0.098678) | 1.842846 \/ 1.452155 (0.390691) | 1.970615 \/ 1.492716 (0.477899) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.287350 \/ 0.018006 (0.269343) | 0.591139 \/ 0.000490 (0.590649) | 0.006423 \/ 0.000200 (0.006223) | 0.000107 \/ 0.000054 (0.000052) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.034594 \/ 0.037411 (-0.002818) | 0.137155 \/ 0.014526 (0.122629) | 0.154662 \/ 0.176557 (-0.021894) | 0.217834 \/ 0.737135 (-0.519301) | 0.159642 \/ 0.296338 (-0.136696) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.664288 \/ 0.215209 (0.449079) | 6.926912 \/ 2.077655 (4.849257) | 3.028957 \/ 1.504120 (1.524837) | 2.625178 \/ 1.541195 (1.083983) | 2.725316 \/ 1.468490 (1.256826) | 1.015715 \/ 4.584777 (-3.569062) | 
5.834694 \/ 3.745712 (2.088982) | 5.105269 \/ 5.269862 (-0.164593) | 2.316194 \/ 4.565676 (-2.249483) | 0.113802 \/ 0.424275 (-0.310473) | 0.014079 \/ 0.007607 (0.006472) | 0.893727 \/ 0.226044 (0.667683) | 8.577701 \/ 2.268929 (6.308772) | 3.706907 \/ 55.444624 (-51.737717) | 3.087530 \/ 6.876477 (-3.788947) | 3.295004 \/ 2.142072 (1.152931) | 1.204172 \/ 4.805227 (-3.601055) | 0.248720 \/ 6.500664 (-6.251944) | 0.107208 \/ 0.075469 (0.031739) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.800058 \/ 1.841788 (-0.041730) | 19.253646 \/ 8.074308 (11.179338) | 22.590804 \/ 10.191392 (12.399412) | 0.270687 \/ 0.680424 (-0.409737) | 0.028678 \/ 0.534201 (-0.505522) | 0.534670 \/ 0.579283 (-0.044613) | 0.642881 \/ 0.434364 (0.208518) | 0.615521 \/ 0.540337 (0.075184) | 0.723733 \/ 1.386936 (-0.663203) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#2591cd45a002a06bd551343ec785abf16f1433e2 \"CML watermark\")\n","
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.017236 \/ 0.011353 (0.005883) | 0.005341 \/ 0.011008 (-0.005667) | 0.131471 \/ 0.038508 (0.092963) | 0.048868 \/ 0.023109 (0.025758) | 0.448942 \/ 0.275898 (0.173044) | 0.498721 \/ 0.323480 (0.175241) | 0.006825 \/ 0.007986 (-0.001161) | 0.004587 \/ 0.004328 (0.000259) | 0.104142 \/ 0.004250 (0.099891) | 0.075521 \/ 0.037052 (0.038469) | 0.439538 \/ 0.258489 (0.181049) | 0.498720 \/ 0.293841 (0.204879) | 0.051352 \/ 0.128546 (-0.077194) | 0.015070 \/ 0.075646 (-0.060576) | 0.441752 \/ 0.419271 (0.022480) | 0.089166 \/ 0.043533 (0.045633) | 0.428909 \/ 0.255139 (0.173770) | 0.446648 \/ 0.283200 (0.163448) | 0.042371 \/ 0.141683 (-0.099312) | 1.993948 \/ 1.452155 (0.541793) | 2.065756 \/ 1.492716 (0.573039) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.257279 \/ 0.018006 (0.239273) | 0.575453 \/ 0.000490 (0.574964) | 0.004120 \/ 0.000200 (0.003920) | 0.000114 \/ 0.000054 (0.000060) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.034012 \/ 0.037411 (-0.003399) | 0.141737 \/ 0.014526 (0.127211) | 0.145241 \/ 0.176557 (-0.031316) | 0.226196 \/ 0.737135 (-0.510939) | 0.149526 \/ 0.296338 (-0.146813) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.665762 \/ 0.215209 (0.450553) | 6.683737 \/ 2.077655 (4.606083) | 2.869485 \/ 1.504120 (1.365365) | 2.462808 \/ 1.541195 (0.921613) | 2.526808 \/ 1.468490 (1.058318) | 0.957518 \/ 4.584777 (-3.627259) | 
5.926261 \/ 3.745712 (2.180548) | 5.027822 \/ 5.269862 (-0.242040) | 2.643185 \/ 4.565676 (-1.922491) | 0.117014 \/ 0.424275 (-0.307261) | 0.015142 \/ 0.007607 (0.007535) | 0.835694 \/ 0.226044 (0.609650) | 8.427356 \/ 2.268929 (6.158427) | 3.649597 \/ 55.444624 (-51.795027) | 2.989607 \/ 6.876477 (-3.886870) | 3.043160 \/ 2.142072 (0.901088) | 1.158872 \/ 4.805227 (-3.646355) | 0.240456 \/ 6.500664 (-6.260208) | 0.089196 \/ 0.075469 (0.013726) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.689361 \/ 1.841788 (-0.152427) | 18.842158 \/ 8.074308 (10.767850) | 22.604249 \/ 10.191392 (12.412857) | 0.248487 \/ 0.680424 (-0.431936) | 0.029668 \/ 0.534201 (-0.504533) | 0.536283 \/ 0.579283 (-0.043001) | 0.663253 \/ 0.434364 (0.228890) | 0.622973 \/ 0.540337 (0.082635) | 0.735297 \/ 1.386936 (-0.651639) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009296 \/ 0.011353 (-0.002057) | 0.005955 \/ 0.011008 (-0.005053) | 0.105723 \/ 0.038508 (0.067215) | 0.051184 \/ 0.023109 (0.028074) | 0.527095 \/ 0.275898 (0.251197) | 0.631697 \/ 0.323480 (0.308217) | 0.006577 \/ 0.007986 (-0.001408) | 0.004452 \/ 0.004328 (0.000124) | 0.105921 \/ 0.004250 (0.101670) | 0.071951 \/ 0.037052 (0.034899) | 0.572518 \/ 0.258489 (0.314029) | 0.623957 \/ 0.293841 (0.330116) | 0.050861 \/ 0.128546 (-0.077686) | 0.014897 \/ 0.075646 (-0.060749) | 0.122013 \/ 0.419271 (-0.297258) | 0.067194 \/ 0.043533 (0.023661) | 0.530352 \/ 0.255139 (0.275213) | 0.563912 \/ 0.283200 (0.280712) | 0.034756 \/ 0.141683 (-0.106927) | 1.961580 \/ 1.452155 (0.509425) | 2.052412 \/ 1.492716 (0.559696) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.304996 \/ 0.018006 (0.286990) | 0.584899 \/ 0.000490 (0.584409) | 0.010444 \/ 0.000200 (0.010244) | 0.000134 \/ 0.000054 (0.000080) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.032540 \/ 0.037411 (-0.004871) | 0.137349 \/ 0.014526 (0.122823) | 0.146233 \/ 0.176557 (-0.030323) | 0.206978 \/ 0.737135 (-0.530157) | 0.154380 \/ 0.296338 (-0.141959) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.705438 \/ 0.215209 (0.490229) | 7.042159 \/ 2.077655 (4.964504) | 3.285501 \/ 1.504120 (1.781381) | 2.904710 \/ 1.541195 (1.363515) | 2.952838 \/ 1.468490 (1.484348) | 0.987784 \/ 4.584777 (-3.596993) | 
5.949550 \/ 3.745712 (2.203838) | 2.927148 \/ 5.269862 (-2.342714) | 1.870054 \/ 4.565676 (-2.695622) | 0.119548 \/ 0.424275 (-0.304727) | 0.014565 \/ 0.007607 (0.006958) | 0.858311 \/ 0.226044 (0.632266) | 8.721679 \/ 2.268929 (6.452750) | 4.100825 \/ 55.444624 (-51.343800) | 3.358093 \/ 6.876477 (-3.518383) | 3.499637 \/ 2.142072 (1.357564) | 1.208932 \/ 4.805227 (-3.596295) | 0.232961 \/ 6.500664 (-6.267703) | 0.089727 \/ 0.075469 (0.014258) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.780143 \/ 1.841788 (-0.061645) | 19.074991 \/ 8.074308 (11.000683) | 21.218487 \/ 10.191392 (11.027095) | 0.258690 \/ 0.680424 (-0.421734) | 0.029514 \/ 0.534201 (-0.504687) | 0.541764 \/ 0.579283 (-0.037519) | 0.640603 \/ 0.434364 (0.206239) | 0.635336 \/ 0.540337 (0.094999) | 0.756309 \/ 1.386936 (-0.630627) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#1b525c199e6352aa8aac55f1dcddeb55a80db373 \"CML watermark\")\n","
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009619 \/ 0.011353 (-0.001734) | 0.005683 \/ 0.011008 (-0.005325) | 0.136971 \/ 0.038508 (0.098463) | 0.051607 \/ 0.023109 (0.028497) | 0.439716 \/ 0.275898 (0.163818) | 0.486193 \/ 0.323480 (0.162713) | 0.006304 \/ 0.007986 (-0.001681) | 0.004489 \/ 0.004328 (0.000160) | 0.103837 \/ 0.004250 (0.099587) | 0.082954 \/ 0.037052 (0.045901) | 0.447286 \/ 0.258489 (0.188797) | 0.495434 \/ 0.293841 (0.201593) | 0.049244 \/ 0.128546 (-0.079302) | 0.015176 \/ 0.075646 (-0.060470) | 0.444406 \/ 0.419271 (0.025134) | 0.074766 \/ 0.043533 (0.031233) | 0.438585 \/ 0.255139 (0.183446) | 0.438232 \/ 0.283200 (0.155032) | 0.043372 \/ 0.141683 (-0.098311) | 2.057286 \/ 1.452155 (0.605131) | 2.049540 \/ 1.492716 (0.556824) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.298038 \/ 0.018006 (0.280031) | 0.630771 \/ 0.000490 (0.630281) | 0.008287 \/ 0.000200 (0.008087) | 0.000123 \/ 0.000054 (0.000068) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.033637 \/ 0.037411 (-0.003775) | 0.128327 \/ 0.014526 (0.113801) | 0.150672 \/ 0.176557 (-0.025885) | 0.228521 \/ 0.737135 (-0.508614) | 0.142733 \/ 0.296338 (-0.153606) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.629072 \/ 0.215209 (0.413863) | 6.612047 \/ 2.077655 (4.534392) | 2.715594 \/ 1.504120 (1.211474) | 2.327823 \/ 1.541195 (0.786628) | 2.417508 \/ 1.468490 (0.949018) | 0.959134 \/ 4.584777 (-3.625643) | 
5.669921 \/ 3.745712 (1.924209) | 2.977920 \/ 5.269862 (-2.291941) | 1.814564 \/ 4.565676 (-2.751112) | 0.120233 \/ 0.424275 (-0.304042) | 0.015859 \/ 0.007607 (0.008252) | 0.822618 \/ 0.226044 (0.596574) | 8.440306 \/ 2.268929 (6.171377) | 3.721611 \/ 55.444624 (-51.723013) | 2.954867 \/ 6.876477 (-3.921610) | 3.135364 \/ 2.142072 (0.993292) | 1.226475 \/ 4.805227 (-3.578752) | 0.246658 \/ 6.500664 (-6.254006) | 0.093920 \/ 0.075469 (0.018451) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.665631 \/ 1.841788 (-0.176157) | 19.136369 \/ 8.074308 (11.062061) | 23.659564 \/ 10.191392 (13.468172) | 0.273430 \/ 0.680424 (-0.406994) | 0.028180 \/ 0.534201 (-0.506021) | 0.559588 \/ 0.579283 (-0.019695) | 0.649203 \/ 0.434364 (0.214840) | 0.647113 \/ 0.540337 (0.106776) | 0.737978 \/ 1.386936 (-0.648958) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009104 \/ 0.011353 (-0.002249) | 0.006838 \/ 0.011008 (-0.004171) | 0.104516 \/ 0.038508 (0.066008) | 0.047986 \/ 0.023109 (0.024877) | 0.521849 \/ 0.275898 (0.245951) | 0.586281 \/ 0.323480 (0.262801) | 0.006225 \/ 0.007986 (-0.001760) | 0.005713 \/ 0.004328 (0.001384) | 0.111507 \/ 0.004250 (0.107257) | 0.072320 \/ 0.037052 (0.035267) | 0.551061 \/ 0.258489 (0.292572) | 0.628034 \/ 0.293841 (0.334193) | 0.055417 \/ 0.128546 (-0.073129) | 0.019613 \/ 0.075646 (-0.056034) | 0.123958 \/ 0.419271 (-0.295314) | 0.066132 \/ 0.043533 (0.022600) | 0.504461 \/ 0.255139 (0.249322) | 0.560428 \/ 0.283200 (0.277229) | 0.036098 \/ 0.141683 (-0.105585) | 1.927398 \/ 1.452155 (0.475243) | 2.015952 \/ 1.492716 (0.523235) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.313065 \/ 0.018006 (0.295059) | 0.609174 \/ 0.000490 (0.608684) | 0.008755 \/ 0.000200 (0.008555) | 0.000120 \/ 0.000054 (0.000066) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.040042 \/ 0.037411 (0.002630) | 0.136053 \/ 0.014526 (0.121527) | 0.143406 \/ 0.176557 (-0.033150) | 0.213080 \/ 0.737135 (-0.524055) | 0.154730 \/ 0.296338 (-0.141609) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.692706 \/ 0.215209 (0.477497) | 6.952968 \/ 2.077655 (4.875314) | 3.232023 \/ 1.504120 (1.727903) | 2.835450 \/ 1.541195 (1.294256) | 2.933821 \/ 1.468490 (1.465331) | 0.984712 \/ 4.584777 (-3.600065) | 
6.127651 \/ 3.745712 (2.381939) | 2.956781 \/ 5.269862 (-2.313081) | 1.879928 \/ 4.565676 (-2.685748) | 0.111069 \/ 0.424275 (-0.313206) | 0.014598 \/ 0.007607 (0.006991) | 0.871486 \/ 0.226044 (0.645442) | 8.588500 \/ 2.268929 (6.319572) | 3.910740 \/ 55.444624 (-51.533885) | 3.115781 \/ 6.876477 (-3.760695) | 3.222367 \/ 2.142072 (1.080294) | 1.229680 \/ 4.805227 (-3.575547) | 0.232092 \/ 6.500664 (-6.268572) | 0.097717 \/ 0.075469 (0.022248) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.774193 \/ 1.841788 (-0.067595) | 19.863087 \/ 8.074308 (11.788779) | 24.058856 \/ 10.191392 (13.867464) | 0.214917 \/ 0.680424 (-0.465507) | 0.028771 \/ 0.534201 (-0.505430) | 0.544548 \/ 0.579283 (-0.034735) | 0.655882 \/ 0.434364 (0.221518) | 0.629110 \/ 0.540337 (0.088773) | 0.749246 \/ 1.386936 (-0.637690) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#f4a5ea6a42dcfef1577288b51beeccc0eb124cee \"CML watermark\")\n","
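For context, this PR's body (later in this record) describes migrating dataset-card YAML editing from `DatasetMetadata` to `huggingface_hub`'s RepoCard API. Below is a minimal, hypothetical sketch of that API; the repo id and the edited field are placeholders, not taken from the PR itself:

```python
from huggingface_hub import DatasetCard

# Hypothetical repo id; DatasetCard is part of huggingface_hub's RepoCard API.
card = DatasetCard.load("username/my-dataset")
card.data.license = "mit"  # edit the card's YAML metadata block
card.push_to_hub("username/my-dataset")
```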
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007075 \/ 0.011353 (-0.004278) | 0.005195 \/ 0.011008 (-0.005813) | 0.113043 \/ 0.038508 (0.074535) | 0.038442 \/ 0.023109 (0.015333) | 0.336310 \/ 0.275898 (0.060412) | 0.381888 \/ 0.323480 (0.058409) | 0.005990 \/ 0.007986 (-0.001996) | 0.003893 \/ 0.004328 (-0.000435) | 0.093123 \/ 0.004250 (0.088872) | 0.058449 \/ 0.037052 (0.021397) | 0.359463 \/ 0.258489 (0.100974) | 0.427485 \/ 0.293841 (0.133644) | 0.041454 \/ 0.128546 (-0.087092) | 0.013016 \/ 0.075646 (-0.062630) | 0.372849 \/ 0.419271 (-0.046422) | 0.059386 \/ 0.043533 (0.015853) | 0.381398 \/ 0.255139 (0.126259) | 0.367603 \/ 0.283200 (0.084403) | 0.033907 \/ 0.141683 (-0.107775) | 1.628903 \/ 1.452155 (0.176749) | 1.764131 \/ 1.492716 (0.271415) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.298329 \/ 0.018006 (0.280322) | 0.593030 \/ 0.000490 (0.592540) | 0.007653 \/ 0.000200 (0.007453) | 0.000091 \/ 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025445 \/ 0.037411 (-0.011966) | 0.112062 \/ 0.014526 (0.097536) | 0.119863 \/ 0.176557 (-0.056693) | 0.178389 \/ 0.737135 (-0.558746) | 0.129934 \/ 0.296338 (-0.166404) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.532834 \/ 0.215209 (0.317625) | 5.250908 \/ 2.077655 (3.173253) | 2.086920 \/ 1.504120 (0.582800) | 1.799745 \/ 1.541195 (0.258550) | 1.909648 \/ 1.468490 (0.441158) | 0.825382 \/ 4.584777 (-3.759395) | 
5.268304 \/ 3.745712 (1.522592) | 2.533347 \/ 5.269862 (-2.736515) | 1.730187 \/ 4.565676 (-2.835490) | 0.099824 \/ 0.424275 (-0.324451) | 0.012969 \/ 0.007607 (0.005362) | 0.732234 \/ 0.226044 (0.506189) | 6.989066 \/ 2.268929 (4.720138) | 2.873486 \/ 55.444624 (-52.571138) | 2.274351 \/ 6.876477 (-4.602125) | 2.311060 \/ 2.142072 (0.168987) | 1.125366 \/ 4.805227 (-3.679861) | 0.214522 \/ 6.500664 (-6.286142) | 0.077579 \/ 0.075469 (0.002110) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.670950 \/ 1.841788 (-0.170838) | 18.131528 \/ 8.074308 (10.057220) | 21.277823 \/ 10.191392 (11.086431) | 0.238807 \/ 0.680424 (-0.441617) | 0.032251 \/ 0.534201 (-0.501950) | 0.503859 \/ 0.579283 (-0.075424) | 0.604825 \/ 0.434364 (0.170461) | 0.555623 \/ 0.540337 (0.015286) | 0.647301 \/ 1.386936 (-0.739635) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.010857 \/ 0.011353 (-0.000496) | 0.005581 \/ 0.011008 (-0.005427) | 0.094346 \/ 0.038508 (0.055838) | 0.053084 \/ 0.023109 (0.029975) | 0.457586 \/ 0.275898 (0.181688) | 0.545475 \/ 0.323480 (0.221995) | 0.006761 \/ 0.007986 (-0.001225) | 0.005094 \/ 0.004328 (0.000765) | 0.095509 \/ 0.004250 (0.091258) | 0.077182 \/ 0.037052 (0.040130) | 0.498717 \/ 0.258489 (0.240228) | 0.542433 \/ 0.293841 (0.248592) | 0.051547 \/ 0.128546 (-0.076999) | 0.014633 \/ 0.075646 (-0.061014) | 0.106843 \/ 0.419271 (-0.312428) | 0.068459 \/ 0.043533 (0.024926) | 0.435793 \/ 0.255139 (0.180654) | 0.475484 \/ 0.283200 (0.192285) | 0.039495 \/ 0.141683 (-0.102188) | 1.684906 \/ 1.452155 (0.232751) | 1.798693 \/ 1.492716 (0.305976) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.279853 \/ 0.018006 (0.261847) | 0.601016 \/ 0.000490 (0.600526) | 0.002055 \/ 0.000200 (0.001855) | 0.000219 \/ 0.000054 (0.000165) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030935 \/ 0.037411 (-0.006477) | 0.121197 \/ 0.014526 (0.106671) | 0.143360 \/ 0.176557 (-0.033197) | 0.200862 \/ 0.737135 (-0.536274) | 0.138656 \/ 0.296338 (-0.157683) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.613904 \/ 0.215209 (0.398695) | 6.155422 \/ 2.077655 (4.077767) | 2.777238 \/ 1.504120 (1.273118) | 2.473045 \/ 1.541195 (0.931851) | 2.604470 \/ 1.468490 (1.135980) | 0.898871 \/ 4.584777 (-3.685906) | 
5.739666 \/ 3.745712 (1.993954) | 4.719822 \/ 5.269862 (-0.550040) | 2.727354 \/ 4.565676 (-1.838322) | 0.108232 \/ 0.424275 (-0.316043) | 0.013632 \/ 0.007607 (0.006025) | 0.771802 \/ 0.226044 (0.545757) | 7.987466 \/ 2.268929 (5.718537) | 3.609856 \/ 55.444624 (-51.834768) | 2.974421 \/ 6.876477 (-3.902056) | 2.956567 \/ 2.142072 (0.814495) | 1.093792 \/ 4.805227 (-3.711435) | 0.213369 \/ 6.500664 (-6.287295) | 0.084486 \/ 0.075469 (0.009017) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.693855 \/ 1.841788 (-0.147933) | 18.055027 \/ 8.074308 (9.980719) | 21.397964 \/ 10.191392 (11.206571) | 0.240549 \/ 0.680424 (-0.439875) | 0.031212 \/ 0.534201 (-0.502989) | 0.513657 \/ 0.579283 (-0.065626) | 0.651348 \/ 0.434364 (0.216985) | 0.603740 \/ 0.540337 (0.063402) | 0.752287 \/ 1.386936 (-0.634649) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#6f3f38d00dd40a444ae54c18caa28304ae36b9c3 \"CML watermark\")\n"],"created_at":1686661399000,"updated_at":1687884471000,"closed_at":1687883912000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5949","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5949","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5949.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5949.patch","merged_at":1687883912000},"body":"Use `huggingface_hub`'s RepoCard API instead of `DatasetMetadata` for modifying the card's YAML, and deprecate `datasets.utils.metadata` and `datasets.utils.readme`.\r\n\r\nAfter removing these modules, we can also delete `datasets.utils.resources` since the moon landing repo now stores its own version of these resources for the metadata UI.\r\n\r\nPS: this change requires bumping `huggingface_hub` to 0.13.0 (Transformers requires 0.14.0, so should be ok)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5949\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5949\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5948","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5948\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5948\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5948\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5948","id":1754794611,"node_id":"PR_kwDODunzps5S4dUt","number":5948,"title":"Fix sequence of array support for most 
dtype","user":{"login":"qgallouedec","id":45557362,"node_id":"MDQ6VXNlcjQ1NTU3MzYy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/45557362?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/qgallouedec","html_url":"https:\/\/github.com\/qgallouedec","followers_url":"https:\/\/api.github.com\/users\/qgallouedec\/followers","following_url":"https:\/\/api.github.com\/users\/qgallouedec\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/qgallouedec\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/qgallouedec\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/qgallouedec\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/qgallouedec\/orgs","repos_url":"https:\/\/api.github.com\/users\/qgallouedec\/repos","events_url":"https:\/\/api.github.com\/users\/qgallouedec\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/qgallouedec\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007220 \/ 0.011353 (-0.004133) | 0.004558 \/ 0.011008 (-0.006451) | 0.116647 \/ 0.038508 (0.078139) | 0.046845 \/ 0.023109 (0.023736) | 0.352429 \/ 0.275898 (0.076531) | 0.429739 \/ 0.323480 (0.106259) | 0.006620 \/ 0.007986 (-0.001366) | 0.003731 \/ 0.004328 (-0.000597) | 0.088683 \/ 0.004250 (0.084433) | 0.070583 \/ 0.037052 (0.033530) | 0.366699 \/ 0.258489 (0.108210) | 0.420730 \/ 0.293841 (0.126889) | 0.037342 \/ 0.128546 (-0.091204) | 0.010041 \/ 0.075646 (-0.065605) | 0.383477 \/ 0.419271 (-0.035795) | 0.060279 \/ 0.043533 (0.016746) | 0.349988 \/ 0.255139 (0.094849) | 0.371423 \/ 0.283200 (0.088224) | 0.026725 \/ 0.141683 (-0.114958) | 1.736886 \/ 1.452155 (0.284731) | 1.812874 \/ 1.492716 (0.320157) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.253256 \/ 0.018006 (0.235250) | 0.563470 \/ 0.000490 (0.562980) | 0.010475 \/ 0.000200 (0.010275) | 0.000164 \/ 0.000054 (0.000110) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030518 \/ 0.037411 (-0.006893) | 0.133324 \/ 0.014526 (0.118798) | 0.137095 \/ 0.176557 (-0.039461) | 0.202227 \/ 0.737135 (-0.534909) | 0.144195 \/ 0.296338 (-0.152143) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.480870 \/ 0.215209 (0.265661) | 4.822713 \/ 2.077655 (2.745058) | 2.124183 \/ 1.504120 (0.620064) | 1.910733 \/ 1.541195 (0.369538) | 1.970266 \/ 1.468490 (0.501776) | 0.624695 \/ 4.584777 (-3.960082) | 
4.459659 \/ 3.745712 (0.713947) | 2.210123 \/ 5.269862 (-3.059739) | 1.300520 \/ 4.565676 (-3.265157) | 0.077096 \/ 0.424275 (-0.347180) | 0.013333 \/ 0.007607 (0.005726) | 0.596841 \/ 0.226044 (0.370797) | 5.917397 \/ 2.268929 (3.648469) | 2.699397 \/ 55.444624 (-52.745228) | 2.274833 \/ 6.876477 (-4.601644) | 2.525376 \/ 2.142072 (0.383304) | 0.755718 \/ 4.805227 (-4.049510) | 0.163587 \/ 6.500664 (-6.337077) | 0.072817 \/ 0.075469 (-0.002653) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.524306 \/ 1.841788 (-0.317481) | 18.843312 \/ 8.074308 (10.769004) | 15.694644 \/ 10.191392 (5.503252) | 0.177400 \/ 0.680424 (-0.503024) | 0.020104 \/ 0.534201 (-0.514097) | 0.466421 \/ 0.579283 (-0.112862) | 0.537274 \/ 0.434364 (0.102910) | 0.576920 \/ 0.540337 (0.036583) | 0.718889 \/ 1.386936 (-0.668047) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007671 \/ 0.011353 (-0.003682) | 0.004850 \/ 0.011008 (-0.006158) | 0.090085 \/ 0.038508 (0.051576) | 0.052023 \/ 0.023109 (0.028914) | 0.508575 \/ 0.275898 (0.232677) | 0.590024 \/ 0.323480 (0.266544) | 0.004564 \/ 0.007986 (-0.003422) | 0.005345 \/ 0.004328 (0.001017) | 0.087904 \/ 0.004250 (0.083653) | 0.064446 \/ 0.037052 (0.027394) | 0.525625 \/ 0.258489 (0.267136) | 0.584307 \/ 0.293841 (0.290466) | 0.037221 \/ 0.128546 (-0.091325) | 0.010588 \/ 0.075646 (-0.065059) | 0.098612 \/ 0.419271 (-0.320659) | 0.059597 \/ 0.043533 (0.016064) | 0.488064 \/ 0.255139 (0.232925) | 0.522330 \/ 0.283200 (0.239131) | 0.030004 \/ 0.141683 (-0.111679) | 1.732512 \/ 1.452155 (0.280357) | 1.809027 \/ 1.492716 (0.316310) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.218741 \/ 0.018006 (0.200735) | 0.494946 \/ 0.000490 (0.494456) | 0.004580 \/ 0.000200 (0.004380) | 0.000104 \/ 0.000054 (0.000049) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.034916 \/ 0.037411 (-0.002495) | 0.133695 \/ 0.014526 (0.119169) | 0.147964 \/ 0.176557 (-0.028592) | 0.213210 \/ 0.737135 (-0.523926) | 0.148850 \/ 0.296338 (-0.147488) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.508855 \/ 0.215209 (0.293646) | 5.065088 \/ 2.077655 (2.987433) | 2.473110 \/ 1.504120 (0.968990) | 2.259765 \/ 1.541195 (0.718570) | 2.359189 \/ 1.468490 (0.890699) | 0.639082 \/ 4.584777 (-3.945695) | 
4.768195 \/ 3.745712 (1.022482) | 2.253803 \/ 5.269862 (-3.016059) | 1.442996 \/ 4.565676 (-3.122680) | 0.078761 \/ 0.424275 (-0.345514) | 0.013936 \/ 0.007607 (0.006329) | 0.625977 \/ 0.226044 (0.399933) | 6.260817 \/ 2.268929 (3.991888) | 3.149640 \/ 55.444624 (-52.294985) | 2.753555 \/ 6.876477 (-4.122921) | 2.831872 \/ 2.142072 (0.689799) | 0.781294 \/ 4.805227 (-4.023933) | 0.169109 \/ 6.500664 (-6.331555) | 0.075810 \/ 0.075469 (0.000341) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.533282 \/ 1.841788 (-0.308506) | 19.460579 \/ 8.074308 (11.386271) | 17.250424 \/ 10.191392 (7.059032) | 0.193485 \/ 0.680424 (-0.486939) | 0.020650 \/ 0.534201 (-0.513551) | 0.472110 \/ 0.579283 (-0.107173) | 0.532276 \/ 0.434364 (0.097912) | 0.613152 \/ 0.540337 (0.072814) | 0.684684 \/ 1.386936 (-0.702252) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#650a86ee122209d4a8c8e8068c01ebfd3ba553f5 \"CML watermark\")\n"],"created_at":1686659939000,"updated_at":1686755515000,"closed_at":1686755013000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5948","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5948","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5948.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5948.patch","merged_at":1686755013000},"body":"Fixes #5936 \r\nAlso, a related fix to #5927 ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5948\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5948\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5947","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5947\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5947\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5947\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5947","id":1754359316,"node_id":"I_kwDODunzps5okWYU","number":5947,"title":"Return the audio filename when decoding fails due to corrupt 
files","user":{"login":"wetdog","id":8949105,"node_id":"MDQ6VXNlcjg5NDkxMDU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8949105?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/wetdog","html_url":"https:\/\/github.com\/wetdog","followers_url":"https:\/\/api.github.com\/users\/wetdog\/followers","following_url":"https:\/\/api.github.com\/users\/wetdog\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/wetdog\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/wetdog\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/wetdog\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/wetdog\/orgs","repos_url":"https:\/\/api.github.com\/users\/wetdog\/repos","events_url":"https:\/\/api.github.com\/users\/wetdog\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/wetdog\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! The audio data don't always exist as files on disk - the blobs are often stored in the Arrow files. For now I'd suggest disabling decoding with `.cast_column(\"audio\", Audio(decode=False))` and apply your own decoding that handles corrupted files (maybe to filter them out ?)\r\n\r\ncc @sanchit-gandhi since it's related to our discussion about allowing users to make decoding return `None` and show a warning when there are corrupted files","Thanks @lhoestq, I wasn't aware of the decode flag. It makes more sense as you say to show a warning when there are corrupted files together with some metadata of the file that allows to filter them from the dataset.\r\n\r\nMy workaround was to catch the LibsndfileError and generate a dummy audio with an unsual sample rate to filter it later. However returning `None` seems better. \r\n\r\n`try:\r\n array, sampling_rate = sf.read(file)\r\nexcept sf.LibsndfileError:\r\n print(\"bad file\")\r\n array = np.array([0.0])\r\n sampling_rate = 99.000` \r\n\r\n"],"created_at":1686645849000,"updated_at":1686746701000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Feature request\r\n\r\nReturn the audio filename when the audio decoding fails. Although currently there are some checks for mp3 and opus formats with the library version there are still cases when the audio decoding could fail, eg. Corrupt file. 
\r\n\r\n### Motivation\r\n\r\nWhen you try to load an audio dataset and decoding fails, you can't know which file is corrupt:\r\n```\r\nraise LibsndfileError(err, prefix=\"Error opening {0!r}: \".format(self.name))\r\nsoundfile.LibsndfileError: Error opening <_io.BytesIO object at 0x7f5ab7e38290>: Format not recognised.\r\n```\r\n\r\n### Your contribution\r\n\r\nMake a PR to add exception handling for LibsndfileError so that the audio filename or path is returned when soundfile decoding fails.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5947\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5947\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5946","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5946\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5946\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5946\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5946","id":1754234469,"node_id":"I_kwDODunzps5oj35l","number":5946,"title":"IndexError Not Solving -> IndexError: Invalid key: ?? is out of bounds for size 0 or ??","user":{"login":"syngokhan","id":70565543,"node_id":"MDQ6VXNlcjcwNTY1NTQz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/70565543?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/syngokhan","html_url":"https:\/\/github.com\/syngokhan","followers_url":"https:\/\/api.github.com\/users\/syngokhan\/followers","following_url":"https:\/\/api.github.com\/users\/syngokhan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/syngokhan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/syngokhan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/syngokhan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/syngokhan\/orgs","repos_url":"https:\/\/api.github.com\/users\/syngokhan\/repos","events_url":"https:\/\/api.github.com\/users\/syngokhan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/syngokhan\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["https:\/\/colab.research.google.com\/#scrollTo=AQ_HCYruWIHU&fileId=https%3A\/\/huggingface.co\/dfurman\/falcon-40b-chat-oasst1\/blob\/main\/finetune_falcon40b_oasst1_with_bnb_peft.ipynb\r\n\r\nI ran the exact same notebook, but got the same error","Looks related to https:\/\/discuss.huggingface.co\/t\/indexerror-invalid-key-16-is-out-of-bounds-for-size-0\/14298\/4?u=lhoestq","> Looks related to https:\/\/discuss.huggingface.co\/t\/indexerror-invalid-key-16-is-out-of-bounds-for-size-0\/14298\/4?u=lhoestq\n\nThe problem has not been solved; I have tried this before, but the problem is the same","> \r\n\r\n@syngokhan did you solve it? 
\r\nI am desperate "],"created_at":1686641655000,"updated_at":1689263983000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nin :1 \u2502\r\n\u2502 \u2502\r\n\u2502 \/usr\/local\/lib\/python3.10\/dist-packages\/transformers\/trainer.py:1537 in train \u2502\r\n\u2502 \u2502\r\n\u2502 1534 \u2502 \u2502 inner_training_loop = find_executable_batch_size( \u2502\r\n\u2502 1535 \u2502 \u2502 \u2502 self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size \u2502\r\n\u2502 1536 \u2502 \u2502 ) \u2502\r\n\u2502 \u2771 1537 \u2502 \u2502 return inner_training_loop( \u2502\r\n\u2502 1538 \u2502 \u2502 \u2502 args=args, \u2502\r\n\u2502 1539 \u2502 \u2502 \u2502 resume_from_checkpoint=resume_from_checkpoint, \u2502\r\n\u2502 1540 \u2502 \u2502 \u2502 trial=trial, \u2502\r\n\u2502 \u2502\r\n\u2502 \/usr\/local\/lib\/python3.10\/dist-packages\/transformers\/trainer.py:1789 in _inner_training_loop \u2502\r\n\u2502 \u2502\r\n\u2502 1786 \u2502 \u2502 \u2502 \u2502 rng_to_sync = True \u2502\r\n\u2502 1787 \u2502 \u2502 \u2502 \u2502\r\n\u2502 1788 \u2502 \u2502 \u2502 step = -1 \u2502\r\n\u2502 \u2771 1789 \u2502 \u2502 \u2502 for step, inputs in enumerate(epoch_iterator): \u2502\r\n\u2502 1790 \u2502 \u2502 \u2502 \u2502 total_batched_samples += 1 \u2502\r\n\u2502 1791 \u2502 \u2502 \u2502 \u2502 if rng_to_sync: \u2502\r\n\u2502 1792 \u2502 \u2502 \u2502 \u2502 \u2502 self._load_rng_state(resume_from_checkpoint) \u2502\r\n\u2502 \u2502\r\n\u2502 \/usr\/local\/lib\/python3.10\/dist-packages\/accelerate\/data_loader.py:377 in __iter__ \u2502\r\n\u2502 \u2502\r\n\u2502 374 \u2502 \u2502 dataloader_iter = super().__iter__() \u2502\r\n\u2502 375 \u2502 \u2502 # We iterate one batch ahead to check when we are at the end \u2502\r\n\u2502 376 \u2502 \u2502 try: \u2502\r\n\u2502 \u2771 377 \u2502 \u2502 \u2502 current_batch = next(dataloader_iter) \u2502\r\n\u2502 378 \u2502 \u2502 except StopIteration: \u2502\r\n\u2502 379 \u2502 \u2502 \u2502 yield \u2502\r\n\u2502 380 \u2502\r\n\u2502 \u2502\r\n\u2502 \/usr\/local\/lib\/python3.10\/dist-packages\/torch\/utils\/data\/dataloader.py:633 in __next__ \u2502\r\n\u2502 \u2502\r\n\u2502 630 \u2502 \u2502 \u2502 if self._sampler_iter is None: \u2502\r\n\u2502 631 \u2502 \u2502 \u2502 \u2502 # TODO(https:\/\/github.com\/pytorch\/pytorch\/issues\/76750) \u2502\r\n\u2502 632 \u2502 \u2502 \u2502 \u2502 self._reset() # type: ignore[call-arg] \u2502\r\n\u2502 \u2771 633 \u2502 \u2502 \u2502 data = self._next_data() \u2502\r\n\u2502 634 \u2502 \u2502 \u2502 self._num_yielded += 1 \u2502\r\n\u2502 635 \u2502 \u2502 \u2502 if self._dataset_kind == _DatasetKind.Iterable and \\ \u2502\r\n\u2502 636 \u2502 \u2502 \u2502 \u2502 \u2502 self._IterableDataset_len_called is not None and \\ \u2502\r\n\u2502 \u2502\r\n\u2502 \/usr\/local\/lib\/python3.10\/dist-packages\/torch\/utils\/data\/dataloader.py:677 in _next_data \u2502\r\n\u2502 \u2502\r\n\u2502 674 \u2502 \u2502\r\n\u2502 675 \u2502 def _next_data(self): \u2502\r\n\u2502 676 \u2502 \u2502 index = self._next_index() # may raise StopIteration \u2502\r\n\u2502 \u2771 677 \u2502 \u2502 data = self._dataset_fetcher.fetch(index) # may raise StopIteration \u2502\r\n\u2502 678 \u2502 \u2502 if self._pin_memory: \u2502\r\n\u2502 679 \u2502 \u2502 \u2502 data = _utils.pin_memory.pin_memory(data, self._pin_memory_device) \u2502\r\n\u2502 680 \u2502 \u2502 return data \u2502\r\n\u2502 \u2502\r\n\u2502 
\/usr\/local\/lib\/python3.10\/dist-packages\/torch\/utils\/data\/_utils\/fetch.py:49 in fetch \u2502\r\n\u2502 \u2502\r\n\u2502 46 \u2502 def fetch(self, possibly_batched_index): \u2502\r\n\u2502 47 \u2502 \u2502 if self.auto_collation: \u2502\r\n\u2502 48 \u2502 \u2502 \u2502 if hasattr(self.dataset, \"__getitems__\") and self.dataset.__getitems__: \u2502\r\n\u2502 \u2771 49 \u2502 \u2502 \u2502 \u2502 data = self.dataset.__getitems__(possibly_batched_index) \u2502\r\n\u2502 50 \u2502 \u2502 \u2502 else: \u2502\r\n\u2502 51 \u2502 \u2502 \u2502 \u2502 data = [self.dataset[idx] for idx in possibly_batched_index] \u2502\r\n\u2502 52 \u2502 \u2502 else: \u2502\r\n\u2502 \u2502\r\n\u2502 \/usr\/local\/lib\/python3.10\/dist-packages\/datasets\/arrow_dataset.py:2782 in __getitems__ \u2502\r\n\u2502 \u2502\r\n\u2502 2779 \u2502 \u2502\r\n\u2502 2780 \u2502 def __getitems__(self, keys: List) -> List: \u2502\r\n\u2502 2781 \u2502 \u2502 \"\"\"Can be used to get a batch using a list of integers indices.\"\"\" \u2502\r\n\u2502 \u2771 2782 \u2502 \u2502 batch = self.__getitem__(keys) \u2502\r\n\u2502 2783 \u2502 \u2502 n_examples = len(batch[next(iter(batch))]) \u2502\r\n\u2502 2784 \u2502 \u2502 return [{col: array[i] for col, array in batch.items()} for i in range(n_example \u2502\r\n\u2502 2785 \u2502\r\n\u2502 \u2502\r\n\u2502 \/usr\/local\/lib\/python3.10\/dist-packages\/datasets\/arrow_dataset.py:2778 in __getitem__ \u2502\r\n\u2502 \u2502\r\n\u2502 2775 \u2502 \u2502\r\n\u2502 2776 \u2502 def __getitem__(self, key): # noqa: F811 \u2502\r\n\u2502 2777 \u2502 \u2502 \"\"\"Can be used to index columns (by string names) or rows (by integer index or i \u2502\r\n\u2502 \u2771 2778 \u2502 \u2502 return self._getitem(key) \u2502\r\n\u2502 2779 \u2502 \u2502\r\n\u2502 2780 \u2502 def __getitems__(self, keys: List) -> List: \u2502\r\n\u2502 2781 \u2502 \u2502 \"\"\"Can be used to get a batch using a list of integers indices.\"\"\" \u2502\r\n\u2502 \u2502\r\n\u2502 \/usr\/local\/lib\/python3.10\/dist-packages\/datasets\/arrow_dataset.py:2762 in _getitem \u2502\r\n\u2502 \u2502\r\n\u2502 2759 \u2502 \u2502 format_kwargs = kwargs[\"format_kwargs\"] if \"format_kwargs\" in kwargs else self._ \u2502\r\n\u2502 2760 \u2502 \u2502 format_kwargs = format_kwargs if format_kwargs is not None else {} \u2502\r\n\u2502 2761 \u2502 \u2502 formatter = get_formatter(format_type, features=self._info.features, **format_kw \u2502\r\n\u2502 \u2771 2762 \u2502 \u2502 pa_subtable = query_table(self._data, key, indices=self._indices if self._indice \u2502\r\n\u2502 2763 \u2502 \u2502 formatted_output = format_table( \u2502\r\n\u2502 2764 \u2502 \u2502 \u2502 pa_subtable, key, formatter=formatter, format_columns=format_columns, output \u2502\r\n\u2502 2765 \u2502 \u2502 ) \u2502\r\n\u2502 \u2502\r\n\u2502 \/usr\/local\/lib\/python3.10\/dist-packages\/datasets\/formatting\/formatting.py:578 in query_table \u2502\r\n\u2502 \u2502\r\n\u2502 575 \u2502 \u2502 _check_valid_column_key(key, table.column_names) \u2502\r\n\u2502 576 \u2502 else: \u2502\r\n\u2502 577 \u2502 \u2502 size = indices.num_rows if indices is not None else table.num_rows \u2502\r\n\u2502 \u2771 578 \u2502 \u2502 _check_valid_index_key(key, size) \u2502\r\n\u2502 579 \u2502 # Query the main table \u2502\r\n\u2502 580 \u2502 if indices is None: \u2502\r\n\u2502 581 \u2502 \u2502 pa_subtable = _query_table(table, key) \u2502\r\n\u2502 \u2502\r\n\u2502 \/usr\/local\/lib\/python3.10\/dist-packages\/datasets\/formatting\/formatting.py:531 in \u2502\r\n\u2502 
_check_valid_index_key \u2502\r\n\u2502 \u2502\r\n\u2502 528 \u2502 \u2502 \u2502 _check_valid_index_key(min(key), size=size) \u2502\r\n\u2502 529 \u2502 elif isinstance(key, Iterable): \u2502\r\n\u2502 530 \u2502 \u2502 if len(key) > 0: \u2502\r\n\u2502 \u2771 531 \u2502 \u2502 \u2502 _check_valid_index_key(int(max(key)), size=size) \u2502\r\n\u2502 532 \u2502 \u2502 \u2502 _check_valid_index_key(int(min(key)), size=size) \u2502\r\n\u2502 533 \u2502 else: \u2502\r\n\u2502 534 \u2502 \u2502 _raise_bad_key_type(key) \u2502\r\n\u2502 \u2502\r\n\u2502 \/usr\/local\/lib\/python3.10\/dist-packages\/datasets\/formatting\/formatting.py:521 in \u2502\r\n\u2502 _check_valid_index_key \u2502\r\n\u2502 \u2502\r\n\u2502 518 def _check_valid_index_key(key: Union[int, slice, range, Iterable], size: int) -> None: \u2502\r\n\u2502 519 \u2502 if isinstance(key, int): \u2502\r\n\u2502 520 \u2502 \u2502 if (key < 0 and key + size < 0) or (key >= size): \u2502\r\n\u2502 \u2771 521 \u2502 \u2502 \u2502 raise IndexError(f\"Invalid key: {key} is out of bounds for size {size}\") \u2502\r\n\u2502 522 \u2502 \u2502 return \u2502\r\n\u2502 523 \u2502 elif isinstance(key, slice): \u2502\r\n\u2502 524 \u2502 \u2502 pass \n\n### Steps to reproduce the bug\n\n``\r\nimport json\r\nimport os\r\nfrom pprint import pprint\r\n\r\nimport bitsandbytes as bnb\r\nimport pandas as pd\r\nimport torch\r\nimport torch.nn as nn\r\nimport transformers\r\nfrom datasets import Dataset,load_dataset\r\n\r\nfrom peft import (\r\n LoraConfig,\r\n PeftConfig,\r\n PeftModel,\r\n get_peft_model,\r\n prepare_model_for_kbit_training\r\n)\r\n\r\nfrom transformers import (\r\n AutoConfig,\r\n AutoModelForCausalLM,\r\n AutoTokenizer,\r\n BitsAndBytesConfig,\r\n\r\n)\r\n\r\nos.environ[\"CUDA_VISIBLE_DEVICES\"] = \"0\"\r\n\r\ndef print_trainable_parameters(model):\r\n \"\"\"\r\n Prints the number of trainable parameters in the model.\r\n \"\"\"\r\n trainable_params = 0\r\n all_param = 0\r\n for _, param in model.named_parameters():\r\n all_param += param.numel()\r\n if param.requires_grad:\r\n trainable_params += param.numel()\r\n print(\r\n f\"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params \/ all_param}\"\r\n )\r\n\r\n\r\nMODEL_NAME = \"tiiuae\/falcon-7b\"\r\n\r\nbnb_config = BitsAndBytesConfig(\r\n load_in_4bit = True,\r\n bnb_4bit_use_double_quant=True,\r\n bnb_4bit_quant_type=\"nf4\",\r\n bnb_4bit_compute_dtype=torch.bfloat16,\r\n)\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\r\n MODEL_NAME,\r\n device_map = \"auto\",\r\n trust_remote_code = True,\r\n quantization_config = bnb_config\r\n)\r\n\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)\r\ntokenizer.pad_token = tokenizer.eos_token\r\n\r\nmodel.gradient_checkpointing_enable()\r\nmodel = prepare_model_for_kbit_training(model)\r\n\r\n\r\nconfig = LoraConfig(\r\n r = 16,\r\n lora_alpha = 32,\r\n target_modules = [\"query_key_value\"],\r\n lora_dropout = 0.05,\r\n bias = \"none\",\r\n task_type = \"CASUAL_LM\"\r\n)\r\n\r\nmodel = get_peft_model(model,config)\r\nprint_trainable_parameters(model)\r\n\r\ndef generate_prompt(data_point):\r\n return f\"\"\"\r\n: {data_point[\"question\"]}\r\n: {data_point[\"answer\"]} \r\n\"\"\".strip()\r\n\r\ndef generate_and_tokenize_prompt(data_point):\r\n full_prompt = generate_prompt(data_point)\r\n tokenized_full_prompt = tokenizer(full_prompt, padding = True, truncation = True,return_tensors = None)\r\n return dict({\r\n \"input_ids\" : tokenized_full_prompt[\"input_ids\"],\r\n 
\"attention_mask\" : tokenized_full_prompt[\"attention_mask\"]\r\n\r\n })\r\n\r\n\r\ndata = data[\"train\"].shuffle().map(generate_and_tokenize_prompt, batched = False) \r\n\r\nOUTPUT_DIR = \"experiments\"\r\n\r\ntrainings_args = transformers.TrainingArguments(\r\n per_device_train_batch_size = 1,\r\n gradient_accumulation_steps = 4,\r\n num_train_epochs = 1,\r\n learning_rate = 2e-4,\r\n fp16 = True,\r\n save_total_limit = 3,\r\n logging_steps = 1,\r\n output_dir = OUTPUT_DIR,\r\n max_steps = 80,\r\n optim = \"paged_adamw_8bit\",\r\n lr_scheduler_type = \"cosine\",\r\n warmup_ratio = 0.05,\r\n #remove_unused_columns=True\r\n)\r\n\r\ntrainer = transformers.Trainer(\r\n model = model,\r\n train_dataset = data,\r\n args = trainings_args, \r\n data_collator = transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),\r\n\r\n)\r\n\r\nmodel.config.use_cache = False\r\n\r\ntrainer.train()\r\n\r\n\r\nIndexError: Invalid key: 32 is out of bounds for size 0\r\n\r\nDataSet Format is like : \r\n[{\"question\": \"How can I create an account?\", \"answer\": \"To create an account, click on the 'Sign Up' button on the top right corner of our website and follow the instructions to complete the registration process.\"}, .... ]\n\n### Expected behavior\n\n-\n\n### Environment info\n\n\r\n!pip install -q pip \r\n!pip install -q bitsandbytes==0.39.0 \r\n!pip install -q torch==2.0.1\r\n\r\n!pip install -q git+https:\/\/github.com\/huggingface\/transformers.git \r\n!pip install -q git+https:\/\/github.com\/huggingface\/peft.git \r\n!pip install -q git+https:\/\/github.com\/huggingface\/accelerate.git \r\n\r\n!pip install -q datasets \r\n!pip install -q loralib==0.1.1 \r\n!pip install -q einops==0.6.1 \r\n\r\n\r\nimport json\r\nimport os\r\nfrom pprint import pprint\r\n\r\nimport bitsandbytes as bnb\r\nimport pandas as pd\r\nimport torch\r\nimport torch.nn as nn\r\nimport transformers\r\nfrom datasets import Dataset,load_dataset\r\n\r\nfrom peft import (\r\n LoraConfig,\r\n PeftConfig,\r\n PeftModel,\r\n get_peft_model,\r\n prepare_model_for_kbit_training\r\n)\r\n\r\nfrom transformers import (\r\n AutoConfig,\r\n AutoModelForCausalLM,\r\n AutoTokenizer,\r\n BitsAndBytesConfig,\r\n\r\n)\r\n\r\nos.environ[\"CUDA_VISIBLE_DEVICES\"] = \"0\"\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5946\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5946\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5945","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5945\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5945\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5945\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5945","id":1754084577,"node_id":"I_kwDODunzps5ojTTh","number":5945,"title":"Failing to upload dataset to the 
hub","user":{"login":"Ar770","id":77382661,"node_id":"MDQ6VXNlcjc3MzgyNjYx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/77382661?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Ar770","html_url":"https:\/\/github.com\/Ar770","followers_url":"https:\/\/api.github.com\/users\/Ar770\/followers","following_url":"https:\/\/api.github.com\/users\/Ar770\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Ar770\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Ar770\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Ar770\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Ar770\/orgs","repos_url":"https:\/\/api.github.com\/users\/Ar770\/repos","events_url":"https:\/\/api.github.com\/users\/Ar770\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Ar770\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Feel free to re-run your code later, it will resume automatically where you left","Tried many times in the last 2 weeks, problem remains.","Alternatively you can save your dataset in parquet files locally and upload them to the hub manually\r\n\r\n```python\r\nfrom tqdm import tqdm\r\nnum_shards = 60\r\nfor index in tqdm(range(num_shards)):\r\n ds.shard(num_shards=num_shards, index=index, contiguous=True).to_parquet(f\"{index:05d}.parquet\")\r\n````"],"created_at":1686635206000,"updated_at":1687172095000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nTrying to upload a dataset of hundreds of thousands of audio samples (the total volume is not very large, 60 gb) to the hub with push_to_hub, it doesn't work.\r\nFrom time to time one piece of the data (parquet) gets pushed and then I get RemoteDisconnected even though my internet is stable.\r\nPlease help.\r\nI'm trying to upload the dataset for almost a week.\r\nThanks\n\n### Steps to reproduce the bug\n\nnot relevant \n\n### Expected behavior\n\nBe able to upload thedataset\n\n### Environment info\n\npython: 3.9","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5945\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5945\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5944","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5944\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5944\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5944\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5944","id":1752882200,"node_id":"PR_kwDODunzps5Sx7O4","number":5944,"title":"Arrow dataset builder to be able to load and stream Arrow 
datasets","user":{"login":"mariusz-jachimowicz-83","id":10278877,"node_id":"MDQ6VXNlcjEwMjc4ODc3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10278877?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariusz-jachimowicz-83","html_url":"https:\/\/github.com\/mariusz-jachimowicz-83","followers_url":"https:\/\/api.github.com\/users\/mariusz-jachimowicz-83\/followers","following_url":"https:\/\/api.github.com\/users\/mariusz-jachimowicz-83\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariusz-jachimowicz-83\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariusz-jachimowicz-83\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariusz-jachimowicz-83\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariusz-jachimowicz-83\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariusz-jachimowicz-83\/repos","events_url":"https:\/\/api.github.com\/users\/mariusz-jachimowicz-83\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariusz-jachimowicz-83\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","@lhoestq tips applied. Thanks for a review. :smile: It's a lot of fun to improve this project. ","Let's add some documentation in a subsequent PR :)\r\n\r\nIn particular @mariosasko and I think it's important to note to users that local arrow data are copied to cache according to the way load_dataset works, but if they want they can use Dataset.from_file instead","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006384 \/ 0.011353 (-0.004969) | 0.003788 \/ 0.011008 (-0.007220) | 0.098524 \/ 0.038508 (0.060016) | 0.031786 \/ 0.023109 (0.008677) | 0.307799 \/ 0.275898 (0.031901) | 0.337329 \/ 0.323480 (0.013849) | 0.003650 \/ 0.007986 (-0.004336) | 0.003731 \/ 0.004328 (-0.000598) | 0.076816 \/ 0.004250 (0.072566) | 0.041888 \/ 0.037052 (0.004835) | 0.310702 \/ 0.258489 (0.052213) | 0.343846 \/ 0.293841 (0.050005) | 0.027841 \/ 0.128546 (-0.100705) | 0.008312 \/ 0.075646 (-0.067334) | 0.320230 \/ 0.419271 (-0.099042) | 0.047378 \/ 0.043533 (0.003845) | 0.308683 \/ 0.255139 (0.053544) | 0.335129 \/ 0.283200 (0.051930) | 0.096294 \/ 0.141683 (-0.045389) | 1.485521 \/ 1.452155 (0.033366) | 1.559868 \/ 1.492716 (0.067152) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.197376 \/ 0.018006 (0.179370) | 0.430461 \/ 0.000490 (0.429972) | 0.004152 \/ 0.000200 (0.003953) | 0.000068 \/ 0.000054 (0.000014) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.023660 \/ 0.037411 (-0.013751) | 0.103128 \/ 0.014526 (0.088602) | 0.107549 \/ 0.176557 (-0.069008) | 0.175934 \/ 0.737135 (-0.561201) | 0.112210 \/ 0.296338 (-0.184129) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.415804 \/ 0.215209 (0.200595) | 4.216333 \/ 2.077655 (2.138679) | 1.910354 \/ 1.504120 (0.406234) | 1.712689 \/ 1.541195 (0.171494) | 1.754705 \/ 1.468490 (0.286215) | 0.554647 \/ 4.584777 (-4.030130) | 
3.393592 \/ 3.745712 (-0.352120) | 1.737504 \/ 5.269862 (-3.532358) | 1.021213 \/ 4.565676 (-3.544464) | 0.066908 \/ 0.424275 (-0.357367) | 0.011446 \/ 0.007607 (0.003839) | 0.524630 \/ 0.226044 (0.298585) | 5.243005 \/ 2.268929 (2.974077) | 2.349685 \/ 55.444624 (-53.094939) | 2.027457 \/ 6.876477 (-4.849020) | 2.131053 \/ 2.142072 (-0.011020) | 0.669070 \/ 4.805227 (-4.136157) | 0.136317 \/ 6.500664 (-6.364347) | 0.065924 \/ 0.075469 (-0.009545) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.254102 \/ 1.841788 (-0.587686) | 13.790492 \/ 8.074308 (5.716184) | 14.197772 \/ 10.191392 (4.006380) | 0.143989 \/ 0.680424 (-0.536434) | 0.016577 \/ 0.534201 (-0.517624) | 0.375437 \/ 0.579283 (-0.203846) | 0.398995 \/ 0.434364 (-0.035369) | 0.445287 \/ 0.540337 (-0.095050) | 0.538632 \/ 1.386936 (-0.848304) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006251 \/ 0.011353 (-0.005101) | 0.004019 \/ 0.011008 (-0.006989) | 0.077985 \/ 0.038508 (0.039477) | 0.028705 \/ 0.023109 (0.005596) | 0.417360 \/ 0.275898 (0.141462) | 0.463964 \/ 0.323480 (0.140484) | 0.003489 \/ 0.007986 (-0.004497) | 0.003032 \/ 0.004328 (-0.001296) | 0.077953 \/ 0.004250 (0.073702) | 0.040104 \/ 0.037052 (0.003051) | 0.405242 \/ 0.258489 (0.146753) | 0.475029 \/ 0.293841 (0.181188) | 0.028113 \/ 0.128546 (-0.100433) | 0.008610 \/ 0.075646 (-0.067036) | 0.084847 \/ 0.419271 (-0.334424) | 0.048227 \/ 0.043533 (0.004694) | 0.417235 \/ 0.255139 (0.162096) | 0.450470 \/ 0.283200 (0.167270) | 0.096978 \/ 0.141683 (-0.044705) | 1.514688 \/ 1.452155 (0.062533) | 1.560205 \/ 1.492716 (0.067488) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.235125 \/ 0.018006 (0.217119) | 0.409904 \/ 0.000490 (0.409414) | 0.002474 \/ 0.000200 (0.002275) | 0.000074 \/ 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025152 \/ 0.037411 (-0.012259) | 0.103517 \/ 0.014526 (0.088991) | 0.110154 \/ 0.176557 (-0.066402) | 0.161431 \/ 0.737135 (-0.575704) | 0.114891 \/ 0.296338 (-0.181448) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.456077 \/ 0.215209 (0.240868) | 4.541171 \/ 2.077655 (2.463517) | 2.297912 \/ 1.504120 (0.793792) | 2.079337 \/ 1.541195 (0.538143) | 2.121291 \/ 1.468490 (0.652801) | 0.560172 \/ 4.584777 (-4.024605) | 
3.421122 \/ 3.745712 (-0.324590) | 1.764675 \/ 5.269862 (-3.505186) | 1.043482 \/ 4.565676 (-3.522195) | 0.067652 \/ 0.424275 (-0.356623) | 0.011181 \/ 0.007607 (0.003574) | 0.557232 \/ 0.226044 (0.331188) | 5.607851 \/ 2.268929 (3.338922) | 2.783715 \/ 55.444624 (-52.660909) | 2.380943 \/ 6.876477 (-4.495534) | 2.378316 \/ 2.142072 (0.236244) | 0.674356 \/ 4.805227 (-4.130871) | 0.135912 \/ 6.500664 (-6.364752) | 0.067009 \/ 0.075469 (-0.008460) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.309002 \/ 1.841788 (-0.532786) | 14.464073 \/ 8.074308 (6.389765) | 14.418727 \/ 10.191392 (4.227335) | 0.148486 \/ 0.680424 (-0.531938) | 0.016650 \/ 0.534201 (-0.517551) | 0.368786 \/ 0.579283 (-0.210497) | 0.395026 \/ 0.434364 (-0.039338) | 0.433565 \/ 0.540337 (-0.106772) | 0.526603 \/ 1.386936 (-0.860333) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#443fc92700b4f9e12421e8082e205535314a67d5 \"CML watermark\")\n"],"created_at":1686579709000,"updated_at":1686677762000,"closed_at":1686677341000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5944","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5944","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5944.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5944.patch","merged_at":1686677341000},"body":"This adds a Arrow dataset builder to be able to load and stream from already preprocessed Arrow files.\r\nIt's related to https:\/\/github.com\/huggingface\/datasets\/issues\/3035","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5944\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5944\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5942","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5942\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5942\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5942\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5942","id":1752021681,"node_id":"PR_kwDODunzps5Su-V4","number":5942,"title":"Pass datasets-cli additional args as kwargs to DatasetBuilder in 
`run_beam.py`","user":{"login":"graelo","id":84066822,"node_id":"MDQ6VXNlcjg0MDY2ODIy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/84066822?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/graelo","html_url":"https:\/\/github.com\/graelo","followers_url":"https:\/\/api.github.com\/users\/graelo\/followers","following_url":"https:\/\/api.github.com\/users\/graelo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/graelo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/graelo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/graelo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/graelo\/orgs","repos_url":"https:\/\/api.github.com\/users\/graelo\/repos","events_url":"https:\/\/api.github.com\/users\/graelo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/graelo\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1686552650000,"updated_at":1688116500000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5942","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5942","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5942.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5942.patch","merged_at":null},"body":"Hi,\r\n\r\nFollowing this , here is a simple PR to pass any additional args to datasets-cli as kwargs in the DatasetBuilder in `run_beam.py`.\r\n\r\nI also took the liberty to add missing setup steps to the `beam.mdx` docs in order to help everyone.\r\n\r\n@lhoestq ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5942\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5942\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5941","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5941\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5941\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5941\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5941","id":1751838897,"node_id":"I_kwDODunzps5oavCx","number":5941,"title":"Load Data Sets Too Slow In Train Seq2seq 
Model","user":{"login":"xyx361100238","id":19569322,"node_id":"MDQ6VXNlcjE5NTY5MzIy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19569322?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/xyx361100238","html_url":"https:\/\/github.com\/xyx361100238","followers_url":"https:\/\/api.github.com\/users\/xyx361100238\/followers","following_url":"https:\/\/api.github.com\/users\/xyx361100238\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/xyx361100238\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/xyx361100238\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/xyx361100238\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/xyx361100238\/orgs","repos_url":"https:\/\/api.github.com\/users\/xyx361100238\/repos","events_url":"https:\/\/api.github.com\/users\/xyx361100238\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/xyx361100238\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! you can speed it up using multiprocessing by passing `num_proc=` to `load_dataset()`","already did\uff0cbut not useful for step Generating train split\uff0cit works in step \"Resolving data files\" & \"Downloading data files\" ","@mariosasko some advice \uff0c thanks\uff01","I met the same problem, terrible experience","@mariosasko ","We need more info about the issue to provide help. \r\n\r\nCan you interrupt the process (with `num_proc=None`) after the `load_dataset` call when the slowdown occurs? So we can know what part of the code is causing it.\r\n\r\nThe `audiofolder` \\ `imagefolder` with metadata is not performant for large datasets. Luckily, we can make them much faster if drop the nested metadata files feature (not that useful). 
"created_at":1686542323000,"updated_at":1689275741000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\n\r\nThe 'Generating train split' step in `load_dataset` is too slow:\r\n![image](https:\/\/github.com\/huggingface\/datasets\/assets\/19569322\/d9b08eee-95fe-4741-a346-b70416c948f8)\r\n\n\n### Steps to reproduce the bug\n\nData: own data, 16 kHz 16-bit mono wav\r\nOfficial script: [run_speech_recognition_seq2seq.py](https:\/\/github.com\/huggingface\/transformers\/blob\/main\/examples\/pytorch\/speech-recognition\/run_speech_recognition_seq2seq.py)\r\nAdded code:\r\n\r\n    if data_args.data_path is not None:\r\n        print(data_args.data_path)\r\n        raw_datasets = load_dataset(\"audiofolder\", data_dir=data_args.data_path, cache_dir=model_args.cache_dir)\r\n        raw_datasets = raw_datasets.cast_column(\"audio\", Audio(sampling_rate=16000))\r\n        raw_datasets = raw_datasets[\"train\"].train_test_split(test_size=0.005, shuffle=True)\r\n\r\n(change cache_dir to another path, e.g. \/DATA\/cache)\n\n### Expected behavior\n\nLoad the data fast, at least 1000+ examples\/s:\r\n`Generating train split: 387875 examples [32:24:45, 1154.83 examples\/s]`\n\n### Environment info\n\n\r\n- `transformers` version: 4.28.0.dev0\r\n- Platform: Linux-5.4.0-149-generic-x86_64-with-debian-bullseye-sid\r\n- Python version: 3.7.16\r\n- Huggingface_hub version: 0.13.2\r\n- PyTorch version (GPU?): 1.13.1+cu116 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?\/GPU?\/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: \r\n- Using distributed or parallel set-up in script?: ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5941\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5941\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5990","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5990\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5990\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5990\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5990","id":1774389854,"node_id":"I_kwDODunzps5pwwpe","number":5990,"title":"Pushing a large dataset on the hub consistently
hangs","user":{"login":"AntreasAntoniou","id":10792502,"node_id":"MDQ6VXNlcjEwNzkyNTAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10792502?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/AntreasAntoniou","html_url":"https:\/\/github.com\/AntreasAntoniou","followers_url":"https:\/\/api.github.com\/users\/AntreasAntoniou\/followers","following_url":"https:\/\/api.github.com\/users\/AntreasAntoniou\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/AntreasAntoniou\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/AntreasAntoniou\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/AntreasAntoniou\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/AntreasAntoniou\/orgs","repos_url":"https:\/\/api.github.com\/users\/AntreasAntoniou\/repos","events_url":"https:\/\/api.github.com\/users\/AntreasAntoniou\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/AntreasAntoniou\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @AntreasAntoniou , sorry to know you are facing this issue. To help debugging it, could you tell me:\r\n- What is the total dataset size?\r\n- Is it always failing on the same shard or is the hanging problem happening randomly?\r\n- Were you able to save the dataset as parquet locally? This would help us determine if the problem comes from the upload or the file generation.\r\n\r\nI'm cc-ing @lhoestq who might have some insights from a `datasets` perspective.","One trick that can also help is to check the traceback when you kill your python process: it will show where in the code it was hanging","Right. So I did the trick @lhoestq suggested. Here is where things seem to hang\r\n\r\n```\r\nError while uploading 'data\/train-00120-of-00195-466c2dbab2eb9989.parquet' to the Hub. \r\nPushing split train to the Hub. 
\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 3\/3 [00:03<00:00, 1.15s\/ba]\r\nUpload 1 LFS files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:52<00:00, 52.12s\/it]\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 3\/3 [00:03<00:00, 1.08s\/ba]\r\nUpload 1 LFS files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:45<00:00, 45.54s\/it]\r\nCreating parquet from Arrow format: 
100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 3\/3 [00:03<00:00, 1.08s\/ba]\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 3\/3 [00:03<00:00, 1.03s\/ba^Upload 1 LFS files: 0%| | 0\/1 [\r\n21:27:35 \r\n line for line in self.divide(flatten_spans()) if line.plain != separator \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/rich\/text.py\", line 385, in plain \r\n if len(self._text) != 1: \r\nKeyboardInterrupt \r\n \r\nOriginal exception was: \r\nTraceback (most recent call last): \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/tqdm\/contrib\/concurrent.py\", line 51, in _executor_map \r\n return list(tqdm_class(ex.map(fn, *iterables, chunksize=chunksize), **kwargs)) \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/tqdm\/std.py\", line 1178, in __iter__ \r\n for obj in iterable: \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/concurrent\/futures\/_base.py\", line 621, in result_iterator \r\n yield _result_or_cancel(fs.pop()) \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/concurrent\/futures\/_base.py\", line 319, in _result_or_cancel \r\n return fut.result(timeout) \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/concurrent\/futures\/_base.py\", line 453, in result \r\n self._condition.wait(timeout) \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/threading.py\", line 320, in wait \r\n waiter.acquire() \r\nKeyboardInterrupt \r\n \r\nDuring handling of the above exception, another exception occurred: \r\n \r\nTraceback (most recent call last): \r\n File \"\/TALI\/tali\/scripts\/validate_dataset.py\", line 127, in <module> \r\n train_dataset.push_to_hub(repo_id=\"Antreas\/TALI-base\", max_shard_size=\"5GB\") \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/datasets\/dataset_dict.py\", line 1583, in push_to_hub \r\n repo_id, split, uploaded_size, dataset_nbytes, _, _ = self[split]._push_parquet_shards_to_hub( \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/datasets\/arrow_dataset.py\", line 5275, in _push_parquet_shards_to_hub \r\n _retry( \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/datasets\/utils\/file_utils.py\", line 
282, in _retry \r\n return func(*func_args, **func_kwargs) \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/huggingface_hub\/utils\/_validators.py\", line 118, in _inner_fn \r\n return fn(*args, **kwargs) \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/huggingface_hub\/hf_api.py\", line 826, in _inner \r\n return fn(self, *args, **kwargs) \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/huggingface_hub\/hf_api.py\", line 3205, in upload_file \r\n commit_info = self.create_commit( \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/huggingface_hub\/utils\/_validators.py\", line 118, in _inner_fn \r\n return fn(*args, **kwargs) \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/huggingface_hub\/hf_api.py\", line 826, in _inner \r\n return fn(self, *args, **kwargs) \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/huggingface_hub\/hf_api.py\", line 2680, in create_commit \r\n upload_lfs_files( \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/huggingface_hub\/utils\/_validators.py\", line 118, in _inner_fn \r\n return fn(*args, **kwargs) \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/huggingface_hub\/_commit_api.py\", line 353, in upload_lfs_files \r\n thread_map( \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/tqdm\/contrib\/concurrent.py\", line 69, in thread_map \r\n return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs) \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/tqdm\/contrib\/concurrent.py\", line 49, in _executor_map \r\n with PoolExecutor(max_workers=max_workers, initializer=tqdm_class.set_lock, \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/concurrent\/futures\/_base.py\", line 649, in __exit__ \r\n self.shutdown(wait=True) \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/concurrent\/futures\/thread.py\", line 235, in shutdown \r\n t.join() \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/threading.py\", line 1096, in join \r\n self._wait_for_tstate_lock() \r\n File \"\/opt\/conda\/envs\/main\/lib\/python3.10\/threading.py\", line 1116, in _wait_for_tstate_lock \r\n if lock.acquire(block, timeout): \r\nKeyboardInterrupt \r\n```","@Wauplin \r\n\r\n>What is the total dataset size?\r\n\r\nThere are three variants, and the random hanging happens on all three. The sizes are 2TB, 1TB, and 200GB. \r\n\r\n>Is it always failing on the same shard or is the hanging problem happening randomly?\r\n\r\nIt seems to be very much random, as restarting can help move past the previous hang, only to find a new one, or not. \r\n\r\n>Were you able to save the dataset as parquet locally? This would help us determine if the problem comes from the upload or the file generation.\r\n\r\nYes. The dataset seems to be locally stored as parquet. ","Hmm it looks like an issue with TQDM lock. Maybe you can try updating TQDM ?","I am using the latest version of tqdm\r\n\r\n```\r\n\u2b22 [Docker] \u276f pip install tqdm --upgrade\r\nRequirement already satisfied: tqdm in \/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages (4.65.0)\r\nWARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. 
It is recommended to use a virtual environment instead: https:\/\/pip.pypa.io\/warnings\/venv\r\n```","I tried trying to catch the hanging issue in action again\r\n\r\n```\r\nPushing dataset shards to the dataset hub: 65%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258a | 127\/195 [2:28:02<1:19:15, 69.94s\/it] \r\nError while uploading 'data\/train-00127-of-00195-3f8d036ade107c27.parquet' to the Hub. \r\nPushing split train to the Hub. \r\nPushing dataset shards to the dataset hub: 64%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258f | 124\/195 [2:06:10<1:12:14, 61.05s\/it]C^[^C^C^C \r\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 Traceback (most recent call last) \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e \r\n\u2502 \/TALI\/tali\/scripts\/validate_dataset.py:127 in \u2502 \r\n\u2502 \u2502 \r\n\u2502 124 \u2502 \u2502 \r\n\u2502 125 \u2502 while not succesful_competion: \u2502 \r\n\u2502 126 \u2502 \u2502 try: \u2502 \r\n\u2502 \u2771 127 \u2502 \u2502 \u2502 train_dataset.push_to_hub(repo_id=\"Antreas\/TALI-base\", max_shard_size=\"5GB\") \u2502 \r\n\u2502 128 \u2502 \u2502 \u2502 succesful_competion = True \u2502 \r\n\u2502 129 \u2502 \u2502 except Exception as e: \u2502 \r\n\u2502 130 \u2502 \u2502 \u2502 print(e) \u2502 \r\n\u2502 \u2502 \r\n\u2502 \/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/datasets\/dataset_dict.py:1583 in push_to_hub \u2502 \r\n\u2502 \u2502 \r\n\u2502 1580 \u2502 \u2502 for split in self.keys(): \u2502 \r\n\u2502 1581 \u2502 \u2502 \u2502 logger.warning(f\"Pushing split {split} to the Hub.\") \u2502 \r\n\u2502 1582 \u2502 \u2502 \u2502 # The split=key needs to be removed before merging \u2502 \r\n\u2502 \u2771 1583 \u2502 \u2502 \u2502 repo_id, split, uploaded_size, dataset_nbytes, _, _ = self[split]._push_parq \u2502 \r\n\u2502 1584 \u2502 \u2502 \u2502 \u2502 repo_id, \u2502 \r\n\u2502 1585 \u2502 \u2502 \u2502 \u2502 split=split, \u2502 \r\n\u2502 1586 \u2502 \u2502 \u2502 \u2502 private=private, \u2502 \r\n\u2502 \u2502 \r\n\u2502 \/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/datasets\/arrow_dataset.py:5263 in \u2502 \r\n\u2502 _push_parquet_shards_to_hub \u2502 \r\n\u2502 \u2502 \r\n\u2502 5260 \u2502 \u2502 \u2502 \r\n\u2502 5261 \u2502 \u2502 uploaded_size = 0 \u2502 \r\n\u2502 5262 \u2502 \u2502 shards_path_in_repo = [] \u2502 \r\n\u2502 \u2771 5263 \u2502 \u2502 for index, shard in logging.tqdm( \u2502 \r\n\u2502 5264 \u2502 \u2502 \u2502 enumerate(itertools.chain([first_shard], shards_iter)), \u2502 \r\n\u2502 5265 \u2502 \u2502 \u2502 desc=\"Pushing 
dataset shards to the dataset hub\", \u2502 \r\n\u2502 5266 \u2502 \u2502 \u2502 total=num_shards, \u2502 \r\n\u2502 \u2502 \r\n\u2502 \/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/tqdm\/std.py:1178 in __iter__ \u2502 \r\n\u2502 \u2502 \r\n\u2502 1175 \u2502 \u2502 time = self._time \u2502 \r\n\u2502 1176 \u2502 \u2502 \u2502 \r\n\u2502 1177 \u2502 \u2502 try: \u2502\r\n\u2502 \u2771 1178 \u2502 \u2502 \u2502 for obj in iterable: \u2502\r\n\u2502 1179 \u2502 \u2502 \u2502 \u2502 yield obj \u2502\r\n\u2502 1180 \u2502 \u2502 \u2502 \u2502 # Update and possibly print the progressbar. \u2502\r\n\u2502 1181 \u2502 \u2502 \u2502 \u2502 # Note: does not call self.update(1) for speed optimisation. \u2502\r\n\u2502 \u2502\r\n\u2502 \/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/datasets\/arrow_dataset.py:5238 in \u2502\r\n\u2502 shards_with_embedded_external_files \u2502\r\n\u2502 \u2502\r\n\u2502 5235 \u2502 \u2502 \u2502 \u2502 for shard in shards: \u2502\r\n\u2502 5236 \u2502 \u2502 \u2502 \u2502 \u2502 format = shard.format \u2502\r\n\u2502 5237 \u2502 \u2502 \u2502 \u2502 \u2502 shard = shard.with_format(\"arrow\") \u2502\r\n\u2502 \u2771 5238 \u2502 \u2502 \u2502 \u2502 \u2502 shard = shard.map( \u2502\r\n\u2502 5239 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 embed_table_storage, \u2502\r\n\u2502 5240 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 batched=True, \u2502\r\n\u2502 5241 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 batch_size=1000, \u2502\r\n\u2502 \u2502\r\n\u2502 \/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/datasets\/arrow_dataset.py:578 in wrapper \u2502\r\n\u2502 \u2502\r\n\u2502 575 \u2502 \u2502 else: \u2502\r\n\u2502 576 \u2502 \u2502 \u2502 self: \"Dataset\" = kwargs.pop(\"self\") \u2502\r\n\u2502 577 \u2502 \u2502 # apply actual function \u2502\r\n\u2502 \u2771 578 \u2502 \u2502 out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs) \u2502 \r\n\u2502 579 \u2502 \u2502 datasets: List[\"Dataset\"] = list(out.values()) if isinstance(out, dict) else [ou \u2502 \r\n\u2502 580 \u2502 \u2502 for dataset in datasets: \u2502 \r\n\u2502 581 \u2502 \u2502 \u2502 # Remove task templates if a column mapping of the template is no longer val \u2502 \r\n\u2502 \u2502 \r\n\u2502 \/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/datasets\/arrow_dataset.py:543 in wrapper \u2502 \r\n\u2502 \u2502 \r\n\u2502 540 \u2502 \u2502 \u2502 \"output_all_columns\": self._output_all_columns, \u2502 \r\n\u2502 541 \u2502 \u2502 } \u2502 \r\n\u2502 542 \u2502 \u2502 # apply actual function \u2502 \r\n\u2502 \u2771 543 \u2502 \u2502 out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs) \u2502 \r\n\u2502 544 \u2502 \u2502 datasets: List[\"Dataset\"] = list(out.values()) if isinstance(out, dict) else [ou \u2502 \r\n\u2502 545 \u2502 \u2502 # re-apply format to the output \u2502 \r\n\u2502 546 \u2502 \u2502 for dataset in datasets: \u2502 \r\n\u2502 \u2502 \r\n\u2502 \/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/datasets\/arrow_dataset.py:3073 in map \u2502 \r\n\u2502 \u2502 \r\n\u2502 3070 \u2502 \u2502 \u2502 \u2502 \u2502 leave=False, \u2502 \r\n\u2502 3071 \u2502 \u2502 \u2502 \u2502 \u2502 desc=desc or \"Map\", \u2502 \r\n\u2502 3072 \u2502 \u2502 \u2502 \u2502 ) as pbar: \u2502 \r\n\u2502 \u2771 3073 \u2502 \u2502 \u2502 \u2502 \u2502 for rank, done, content in Dataset._map_single(**dataset_kwargs): \u2502 \r\n\u2502 3074 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 if done: \u2502 \r\n\u2502 3075 \u2502 \u2502 \u2502 \u2502 
\u2502 \u2502 \u2502 shards_done += 1 \u2502 \r\n\u2502 3076 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 logger.debug(f\"Finished processing shard number {rank} of {n \u2502 \r\n\u2502 \u2502 \r\n\u2502 \/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/datasets\/arrow_dataset.py:3464 in _map_single \u2502 \r\n\u2502 \u2502 \r\n\u2502 3461 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 buf_writer, writer, tmp_file = init_buffer_and_writer() \u2502 \r\n\u2502 3462 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 stack.enter_context(writer) \u2502 \r\n\u2502 3463 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 if isinstance(batch, pa.Table): \u2502 \r\n\u2502 \u2771 3464 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 writer.write_table(batch) \u2502 \r\n\u2502 3465 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 else: \u2502 \r\n\u2502 3466 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 writer.write_batch(batch) \u2502 \r\n\u2502 3467 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 num_examples_progress_update += num_examples_in_batch \u2502 \r\n\u2502 \u2502 \r\n\u2502 \/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/datasets\/arrow_writer.py:567 in write_table \u2502 \r\n\u2502 \u2502 \r\n\u2502 564 \u2502 \u2502 \u2502 writer_batch_size = self.writer_batch_size \u2502 \r\n\u2502 565 \u2502 \u2502 if self.pa_writer is None: \u2502 \r\n\u2502 566 \u2502 \u2502 \u2502 self._build_writer(inferred_schema=pa_table.schema) \u2502 \r\n\u2502 \u2771 567 \u2502 \u2502 pa_table = pa_table.combine_chunks() \u2502 \r\n\u2502 568 \u2502 \u2502 pa_table = table_cast(pa_table, self._schema) \u2502 \r\n\u2502 569 \u2502 \u2502 if self.embed_local_files: \u2502 \r\n\u2502 570 \u2502 \u2502 \u2502 pa_table = embed_table_storage(pa_table) \u2502 \r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f \r\nKeyboardInterrupt \r\n```","I'm on my phone so can't help that much. What I'd advice to do is to [save_to_disk](https:\/\/huggingface.co\/docs\/datasets\/package_reference\/main_classes#save_to_disk) if it's not already done and then upload the files\/folder to the Hub separately. You can find what you need in the [upload guide](https:\/\/huggingface.co\/docs\/huggingface_hub\/guides\/upload). It might not help finding the exact issue for now but at least it can unblock you. ","In your last stacktrace it interrupted while embedding external content - in case your dataset in made of images or audio files that live on your disk. Is it the case ?","Yeah, the dataset has images, audio, video and text. ","It's maybe related to https:\/\/github.com\/apache\/arrow\/issues\/34455: are you using ArrayND features ?\r\n\r\nAlso what's your `pyarrow` version ? Could you try updating to >= 12.0.1 ?","I was using pyarrow == 12.0.0\r\n\r\nI am not explicitly using ArrayND features, unless the hub API automatically converts my files to such. 
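As a quick aside, a one-line check for the pyarrow fix discussed just above; the version threshold comes from the comment recommending an update to `pyarrow >= 12.0.1`:

```python
import pyarrow as pa
from packaging import version

# The Arrow issue referenced above is reported fixed in pyarrow 12.0.1.
assert version.parse(pa.__version__) >= version.parse("12.0.1"), pa.__version__
```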
","I have now updated to pyarrow == 12.0.1 and retrying","You can also try to reduce the `max_shard_size` - Sometimes parquet has a hard time working with data bigger than 2GB","So, updating the pyarrow seems to help. It can still throw errors here and there but I can retry when that happens. It's better than hanging. \r\n\r\nHowever, I am a bit confused about something. I have uploaded my datasets, but while earlier I could see all three sets, now I can only see 1. What's going on? \r\nhttps:\/\/huggingface.co\/datasets\/Antreas\/TALI-base\r\n\r\nI have seen this happen before as well, so I deleted and reuploaded, but this dataset is way too large for me to do this. ","It's a bug on our side, I'll update the dataset viewer ;)\r\n\r\nThanks for reporting !","Apparently this happened because of bad modifications in the README.md split metadata.\r\n\r\nI fixed them in this PR: https:\/\/huggingface.co\/datasets\/Antreas\/TALI-base\/discussions\/1","@lhoestq It's a bit odd that when uploading a dataset, one set at a time \"train\", \"val\", \"test\", the push_to_hub function overwrites the readme and removes differently named sets from previous commits. i.e., you push \"val\", all is well. Then you push \"test\", and the \"val\" entry disappears from the readme, while the data remain intact. ","Also, just found another related issue. One of the many that make things hang or fail when pushing to hub. \r\n\r\nIn the following code:\r\n\r\n```python\r\ntrain_generator = lambda: data_generator(\"train\", percentage=1.0)\r\n val_generator = lambda: data_generator(\"val\")\r\n test_generator = lambda: data_generator(\"test\")\r\n\r\n train_data = datasets.Dataset.from_generator(\r\n train_generator,\r\n num_proc=mp.cpu_count(),\r\n writer_batch_size=5000,\r\n cache_dir=tali_dataset_dir,\r\n )\r\n\r\n val_data = datasets.Dataset.from_generator(\r\n val_generator,\r\n writer_batch_size=5000,\r\n num_proc=mp.cpu_count(),\r\n cache_dir=tali_dataset_dir,\r\n )\r\n\r\n test_data = datasets.Dataset.from_generator(\r\n test_generator,\r\n writer_batch_size=5000,\r\n num_proc=mp.cpu_count(),\r\n cache_dir=tali_dataset_dir,\r\n )\r\n\r\n print(f\"Pushing TALI-large to hub\")\r\n\r\n dataset = datasets.DatasetDict(\r\n {\"train\": train_data, \"val\": val_data, \"test\": test_data}\r\n )\r\n succesful_competion = False\r\n\r\n while not succesful_competion:\r\n try:\r\n dataset.push_to_hub(repo_id=\"Antreas\/TALI-large\", max_shard_size=\"2GB\")\r\n succesful_competion = True\r\n except Exception as e:\r\n print(e)\r\n ```\r\n \r\n \r\n Things keep failing in the push_to_repo step, at random places, with the following error:\r\n \r\n ```bash\r\n Pushing dataset shards to the dataset hub: 7%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258b | 67\/950 [42:41<9:22:37, 38.23s\/it]\r\nError while uploading 'data\/train-00067-of-00950-a4d179ed5a593486.parquet' to the Hub.\r\nPushing split train to the Hub.\r\nCreating parquet from Arrow format: 
"Also, just found another related issue. One of the many that make things hang or fail when pushing to hub. \r\n\r\nIn the following code:\r\n\r\n```python\r\ntrain_generator = lambda: data_generator(\"train\", percentage=1.0)\r\nval_generator = lambda: data_generator(\"val\")\r\ntest_generator = lambda: data_generator(\"test\")\r\n\r\ntrain_data = datasets.Dataset.from_generator(\r\n    train_generator,\r\n    num_proc=mp.cpu_count(),\r\n    writer_batch_size=5000,\r\n    cache_dir=tali_dataset_dir,\r\n)\r\n\r\nval_data = datasets.Dataset.from_generator(\r\n    val_generator,\r\n    writer_batch_size=5000,\r\n    num_proc=mp.cpu_count(),\r\n    cache_dir=tali_dataset_dir,\r\n)\r\n\r\ntest_data = datasets.Dataset.from_generator(\r\n    test_generator,\r\n    writer_batch_size=5000,\r\n    num_proc=mp.cpu_count(),\r\n    cache_dir=tali_dataset_dir,\r\n)\r\n\r\nprint(f\"Pushing TALI-large to hub\")\r\n\r\ndataset = datasets.DatasetDict(\r\n    {\"train\": train_data, \"val\": val_data, \"test\": test_data}\r\n)\r\nsuccessful_completion = False\r\n\r\nwhile not successful_completion:\r\n    try:\r\n        dataset.push_to_hub(repo_id=\"Antreas\/TALI-large\", max_shard_size=\"2GB\")\r\n        successful_completion = True\r\n    except Exception as e:\r\n        print(e)\r\n```\r\n\r\nThings keep failing in the push_to_hub step, at random places, with the following error:\r\n\r\n```bash\r\nPushing dataset shards to the dataset hub: 7%|\u258a | 67\/950 [42:41<9:22:37, 38.23s\/it]\r\nError while uploading 'data\/train-00067-of-00950-a4d179ed5a593486.parquet' to the Hub.\r\nPushing split train to the Hub.\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2\/2 [00:01<00:00, 1.81ba\/s]\r\nUpload 1 LFS files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:11<00:00, 11.20s\/it]\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2\/2 [00:00<00:00, 2.48ba\/s]\r\nUpload 1 LFS files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:15<00:00, 15.30s\/it]\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2\/2 [00:00<00:00, 2.39ba\/s]\r\nUpload 1 LFS files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:11<00:00, 11.52s\/it]\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2\/2 [00:00<00:00, 2.47ba\/s]\r\nUpload 1 LFS files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:10<00:00, 10.39s\/it]\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2\/2 [00:00<00:00, 2.26ba\/s]\r\nUpload 1 LFS files: 0%| | 0\/1 [16:38\r\n```\r\n\r\nI have a while loop that forces retries, but it seems that the progress itself is randomly getting lost as well. Any ideas on how to improve this? It has been blocking me for way too long. Should I build the parquet manually and then push manually as well? If I do things manually, how can I ensure my dataset works properly with \"stream=True\"?","> @lhoestq It's a bit odd that when uploading a dataset, one set at a time \"train\", \"val\", \"test\", the push_to_hub function overwrites the readme and removes differently named sets from previous commits. i.e., you push \"val\", all is well. 
Then you push \"test\", and the \"val\" entry disappears from the readme, while the data remain intact.\r\n\r\nHmm this shouldn't happen. What code did you run exactly ? Using which version of `datasets` ?","> I have a while loop that forces retries, but it seems that the progress itself is randomly getting lost as well. Any ideas on how to improve this? It has been blocking me for way too long.\r\n\r\nCould you also print the cause of the error (`e.__cause__`) ? Or show the full stack trace when the error happens ?\r\nThis would give more details about why it failed and would help investigate.","> Should I build the parquet manually and then push manually as well? If I do things manually, how can I ensure my dataset works properly with \"stream=True\"?\r\n\r\nParquet is supported out of the box ^^\r\n\r\nIf you want to make sure it works as expected you can try locally first:\r\n```python\r\nds = load_dataset(\"path\/to\/local\", streaming=True)\r\n```","@lhoestq @AntreasAntoniou I transferred this issue to the `datasets` repository as the questions and answers are more related to this repo. Hope it can help other users find the bug and fixes more easily (like updating [tqdm](https:\/\/github.com\/huggingface\/datasets\/issues\/5990#issuecomment-1607120204) and [pyarrow](https:\/\/github.com\/huggingface\/datasets\/issues\/5990#issuecomment-1607120278) or [setting a lower `max_shard_size`](https:\/\/github.com\/huggingface\/datasets\/issues\/5990#issuecomment-1607120328)).\r\n\r\n~For the initial \"pushing large dataset consistently hangs\"-issue, I still think it's best to try to `save_to_disk` first and then upload it manually\/with a script (see [upload_folder](https:\/\/huggingface.co\/docs\/huggingface_hub\/guides\/upload#upload-a-folder)). It's not the most satisfying solution but at least it would confirm from where the problem comes from.~\r\n\r\n**EDIT:** removed suggestion about saving to disk first (see https:\/\/github.com\/huggingface\/datasets\/issues\/5990#issuecomment-1607186914).","> @lhoestq @AntreasAntoniou I transferred this issue to the datasets repository as the questions and answers are more related to this repo. Hope it can help other users find the bug and fixes more easily (like updating https:\/\/github.com\/huggingface\/datasets\/issues\/5990#issuecomment-1607120204 and https:\/\/github.com\/huggingface\/datasets\/issues\/5990#issuecomment-1607120278 or https:\/\/github.com\/huggingface\/datasets\/issues\/5990#issuecomment-1607120328).\r\n\r\nthanks :)\r\n\r\n> For the initial \"pushing large dataset consistently hangs\"-issue, I still think it's best to try to save_to_disk first and then upload it manually\/with a script (see [upload_folder](https:\/\/huggingface.co\/docs\/huggingface_hub\/guides\/upload#upload-a-folder)). It's not the most satisfying solution but at least it would confirm from where the problem comes from.\r\n\r\nAs I've already said in other discussions, I would not recommend pushing files saved with `save_to_disk` to the Hub but save to parquet shards and upload them instead. The Hub does not support datasets saved with `save_to_disk`, which is meant for disk only.","> As I've already said in other discussions, I would not recommend pushing files saved with save_to_disk to the Hub but save to parquet shards and upload them instead. The Hub does not support datasets saved with save_to_disk, which is meant for disk only.\r\n\r\nWell noted, thanks. That part was not clear to me :)","Sorry for not replying in a few days, I was on leave. 
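For reference, a hedged sketch of the "save parquet shards, then upload the folder" route recommended in the comment above; the shard count, local paths, and the assumed existing `train_dataset` are illustrative:

```python
import os

from huggingface_hub import HfApi

num_shards = 8  # illustrative; choose so each shard stays near the target size
os.makedirs("parquet_out/data", exist_ok=True)

for index in range(num_shards):
    shard = train_dataset.shard(num_shards=num_shards, index=index)  # assumes a Dataset in scope
    shard.to_parquet(f"parquet_out/data/train-{index:05d}-of-{num_shards:05d}.parquet")

# Upload everything in one call (see the upload guide linked earlier in the thread).
HfApi().upload_folder(folder_path="parquet_out", repo_id="Antreas/TALI-base", repo_type="dataset")
```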
:) \r\n\r\nSo, here is more information on the error that causes some of the delay:\r\n\r\n```bash\r\nPushing Antreas\/TALI-tiny to hub\r\nAttempting to push to hub\r\nPushing split train to the Hub.\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 6\/6 [00:24<00:00, 4.06s\/ba]\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 6\/6 [00:24<00:00, 4.15s\/ba]\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 6\/6 [00:26<00:00, 4.45s\/ba]\r\n\/opt\/conda\/envs\/main\/lib\/python3.10\/site-packages\/huggingface_hub\/lfs.py:310: UserWarning: hf_transfer is enabled but does not support uploading from bytes or BinaryIO, falling back to regular upload\r\n warnings.warn(\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 6\/6 [00:25<00:00, 4.26s\/ba]\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 6\/6 [00:27<00:00, 4.58s\/ba]\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 6\/6 [00:24<00:00, 4.10s\/ba]\r\nPushing dataset shards to the dataset hub: 22%|\u2588\u2588\u258e | 5\/23 [52:23<3:08:37, 628.74s\/it]\r\nException: Error while uploading 'data\/train-00005-of-00023-e224d901fd65e062.parquet' to the Hub., with stacktrace: , and type: , and \r\ncause: HTTPSConnectionPool(host='s3.us-east-1.amazonaws.com', port=443): Max retries exceeded with url: \r\n\/lfs.huggingface.co\/repos\/7c\/d3\/7cd385d9324302dc13e3986331d72d9be6fa0174c63dcfe0e08cd474f7f1e8b7\/3415166ae28c0beccbbc692f38742b8dea2c197f5c805321104e888d21d7eb90?X-Amz-Algorithm=AWS4-HMAC-SHA256\r\n&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA4N7VTDGO27GPWFUO%2F20230627%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230627T003349Z&X-Amz-Expires=86400&X-Amz-Signature=5a12ff96f2\r\n91f644134170992a6628e5f3c4e7b2e7fc3e940b4378fe11ae5390&X-Amz-SignedHeaders=host&partNumber=1&uploadId=JSsK8r63XSF.VlKQx3Vf8OW4DEVp5YIIY7LPnuapNIegsxs5EHgM1p4u0.Nn6_wlPlQnvxm8HKMxZhczKE9KB74t0etB\r\noLcxqBIvsgey3uXBTZMAEGwU6y7CDUADiEIO&x-id=UploadPart (Caused by 
SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:2426)')))\r\nPush failed, retrying\r\nAttempting to push to hub\r\nPushing split train to the Hub.\r\n```\r\n\r\nOne issue is that the uploading does not continue from the chunk it failed on. It often continues from a very old chunk, e.g. if it failed on chunk 192\/250, it will continue from, say, 53\/250, and this behaviour appears almost random. ","Are you using a proxy of some sort ?","I am using a kubernetes cluster built into a university VPN. ",
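For reference, a hedged sketch of the retry loop used in this thread, extended to back off between attempts and to print `e.__cause__` as requested earlier; the backoff policy and helper name are illustrative assumptions:

```python
import time

def push_with_retries(dataset_dict, repo_id, max_shard_size="2GB", max_retries=10):
    """Retry DatasetDict.push_to_hub, surfacing the low-level cause of each failure."""
    for attempt in range(1, max_retries + 1):
        try:
            dataset_dict.push_to_hub(repo_id=repo_id, max_shard_size=max_shard_size)
            return
        except Exception as e:
            # e.__cause__ carries the underlying error (e.g. the SSLError above)
            print(f"Attempt {attempt} failed: {e!r}, cause: {e.__cause__!r}")
            time.sleep(min(60, 2 ** attempt))  # simple exponential backoff
    raise RuntimeError(f"Push failed after {max_retries} attempts")
```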
100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 28\/28 [00:02<00:00, 13.04ba\/s]\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 28\/28 [00:02<00:00, 13.52ba\/s]\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 28\/28 [00:02<00:00, 12.28ba\/s]\r\nPushing dataset shards to the dataset hub: 20%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588 | 75\/381 [1:34:39<6:26:11, 75.72s\/it]\r\nException: Error while uploading 'data\/train-00075-of-00381-1614bc251b778766.parquet' to the Hub., with stacktrace: , and type: , and \r\ncause: HTTPSConnectionPool(host='s3.us-east-1.amazonaws.com', port=443): Max retries exceeded with url: 
\r\n\/lfs.huggingface.co\/repos\/3b\/31\/3b311464573d8d63b137fcd5b40af1e7a5b1306843c88e80372d0117157504e5\/ed8dae933fb79ae1ef5fb1f698f5125d3e1c02977ac69438631f152bb3bfdd1e?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-\r\nAmz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA4N7VTDGO27GPWFUO%2F20230629%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230629T053004Z&X-Amz-Expires=86400&X-Amz-Signature=da2b26270edfd6d0\r\nd069c015a5a432031107a8664c3f0917717e5e40c688183c&X-Amz-SignedHeaders=host&partNumber=1&uploadId=2erWGHTh3ICqBLU_QvHfnygZ2tkMWbL0rEqpJdYohCKHUHnfwMjvoBIg0TI_KSGn4rSKxUxOyqSIzFUFSRSzixZeLeneaXJOw.Qx8\r\nzLKSV5xV7HRQDj4RBesNve6cSoo&x-id=UploadPart (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:2426)')))\r\nPush failed, retrying\r\nAttempting to push to hub\r\nPushing split train to the Hub.\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 28\/28 [00:02<00:00, 12.09ba\/s]\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 28\/28 [00:02<00:00, 11.51ba\/s]\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 28\/28 [00:02<00:00, 10.77ba\/s]\r\nPushing dataset shards to the dataset hub: 20%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258b | 77\/381 [1:32:50<6:06:34, 72.35s\/it]\r\nException: 
Error while uploading 'data\/train-00077-of-00381-368b2327a9908aab.parquet' to the Hub., with stacktrace: , and type: , and \r\ncause: HTTPSConnectionPool(host='s3.us-east-1.amazonaws.com', port=443): Max retries exceeded with url: \r\n\/lfs.huggingface.co\/repos\/3b\/31\/3b311464573d8d63b137fcd5b40af1e7a5b1306843c88e80372d0117157504e5\/9462ff2c5e61283b53b091984a22de2f41a2f6e37b681171e2eca4a998f979cb?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-\r\nAmz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA4N7VTDGO27GPWFUO%2F20230629%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230629T070510Z&X-Amz-Expires=86400&X-Amz-Signature=9ab8487b93d443cd\r\n21f05476405855d46051a0771b4986bbb20f770ded21b1a4&X-Amz-SignedHeaders=host&partNumber=1&uploadId=UiHX1B.DcoAO2QmIHpWpCuNPwhXU_o1dsTkTGPqZt1P51o9k0yz.EsFD9eKpQMwgAST3jOatRG78I_JWRBeLBDYYVNp8r0TpIdeSg\r\neUg8uwPZOCPw9y5mWOw8MWJrnBo&x-id=UploadPart (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:2426)')))\r\nPush failed, retrying\r\nAttempting to push to hub\r\nPushing split train to the Hub.\r\nPushing dataset shards to the dataset hub: 8%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258b | 29\/381 [27:39<5:50:03, 59.67s\/it]\r\nMap: 36%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588 | 1000\/2764 [00:35<00:34, 51.63 examples\/Map: 72%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258f | 2000\/2764 [00:40<00:15, 49.06 examples\/Map: 72%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258f | 2000\/2764 [00:55<00:15, 49.06 examples\/Map: 
100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2764\/2764 [00:56<00:00, 48.82 examples\/Pushing dataset shards to the dataset hub: 8%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2589 | 30\/381 [28:35<5:43:03, 58.64s\/iPushing dataset shards to the dataset hub: 8%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258e | 31\/381 [29:40<5:52:18, 60.40s\/iPushing dataset shards to the dataset hub: 8%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258c | 32\/381 [30:46<6:02:20, 62.29s\/it] \r\nMap: 36%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258e \r\n```\r\n\r\nThis is actually the issue that wastes the most time for me, and I need it fixed. Please advice on how I can go about it.\r\n\r\nNotice how the progress goes from \r\n| 77\/381 to 30\/381","If the any shard is missing on the Hub, it will re-upload it. It looks like the 30th shard was missing on the Hub in your case. \r\n\r\nIt also means that the other files up to the 77th that were successfully uploaded won't be uploaded again.\r\n\r\ncc @mariosasko who might know better"],"created_at":1686408407000,"updated_at":1688133460000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nOnce I have locally built a large dataset that I want to push to hub, I use the recommended approach of .push_to_hub to get the dataset on the hub, and after pushing a few shards, it consistently hangs. This has happened over 40 times over the past week, and despite my best efforts to try and catch this happening and kill a process and restart, it seems to be extremely time wasting -- so I came to you to report this and to seek help. 
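\n\nFor what it's worth, a quick way to check which shards actually made it to the Hub between retries is to list the repo files (a minimal sketch, assuming `huggingface_hub` is installed; the repo id is the one used in the reproduction below):\n\n```python\r\nfrom huggingface_hub import HfApi\r\n\r\n# Shards already present in the dataset repo are skipped on the next push attempt,\r\n# so this list shows what will not be re-uploaded.\r\napi = HfApi()\r\nfiles = api.list_repo_files(\"Antreas\/TALI-small\", repo_type=\"dataset\")\r\nuploaded_shards = sorted(f for f in files if f.startswith(\"data\/\") and f.endswith(\".parquet\"))\r\nprint(f\"{len(uploaded_shards)} parquet shards already on the Hub\")\r\n```\n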
\r\n\r\nI already tried installing hf_transfer, but it doesn't support byte file uploads, so I uninstalled it.\n\n### Reproduction\n\n```python\r\nimport multiprocessing as mp\r\nimport pathlib\r\nfrom math import ceil\r\n\r\nimport datasets\r\nimport numpy as np\r\nfrom tqdm.auto import tqdm\r\n\r\nfrom tali.data.data import select_subtitles_between_timestamps\r\nfrom tali.utils import load_json\r\n\r\ntali_dataset_dir = \"\/data\/\"\r\n\r\nif __name__ == \"__main__\":\r\n    full_dataset = datasets.load_dataset(\r\n        \"Antreas\/TALI\", num_proc=mp.cpu_count(), cache_dir=tali_dataset_dir\r\n    )\r\n\r\n    def data_generator(set_name, percentage: float = 1.0):\r\n        dataset = full_dataset[set_name]\r\n\r\n        for item in tqdm(dataset):\r\n            video_list = item[\"youtube_content_video\"]\r\n            video_list = np.random.choice(\r\n                video_list, int(ceil(len(video_list) * percentage))\r\n            )\r\n            if len(video_list) == 0:\r\n                continue\r\n            captions = item[\"youtube_subtitle_text\"]\r\n            captions = select_subtitles_between_timestamps(\r\n                subtitle_dict=load_json(\r\n                    captions.replace(\r\n                        \"\/data\/\",\r\n                        tali_dataset_dir,\r\n                    )\r\n                ),\r\n                starting_timestamp=0,\r\n                ending_timestamp=100000000,\r\n            )\r\n\r\n            for video_path in video_list:\r\n                temp_path = video_path.replace(\"\/data\/\", tali_dataset_dir)\r\n                video_path_actual: pathlib.Path = pathlib.Path(temp_path)\r\n\r\n                if video_path_actual.exists():\r\n                    # Read via a context manager so the file handle is closed promptly\r\n                    with open(video_path_actual, \"rb\") as video_file:\r\n                        item[\"youtube_content_video\"] = video_file.read()\r\n                    item[\"youtube_subtitle_text\"] = captions\r\n                    yield item\r\n\r\n    train_generator = lambda: data_generator(\"train\", percentage=0.1)\r\n    val_generator = lambda: data_generator(\"val\")\r\n    test_generator = lambda: data_generator(\"test\")\r\n\r\n    train_data = datasets.Dataset.from_generator(\r\n        train_generator,\r\n        num_proc=mp.cpu_count(),\r\n        writer_batch_size=5000,\r\n        cache_dir=tali_dataset_dir,\r\n    )\r\n\r\n    val_data = datasets.Dataset.from_generator(\r\n        val_generator,\r\n        writer_batch_size=5000,\r\n        num_proc=mp.cpu_count(),\r\n        cache_dir=tali_dataset_dir,\r\n    )\r\n\r\n    test_data = datasets.Dataset.from_generator(\r\n        test_generator,\r\n        writer_batch_size=5000,\r\n        num_proc=mp.cpu_count(),\r\n        cache_dir=tali_dataset_dir,\r\n    )\r\n\r\n    dataset = datasets.DatasetDict(\r\n        {\r\n            \"train\": train_data,\r\n            \"val\": val_data,\r\n            \"test\": test_data,\r\n        }\r\n    )\r\n    successful_completion = False\r\n    while not successful_completion:\r\n        try:\r\n            dataset.push_to_hub(repo_id=\"Antreas\/TALI-small\", max_shard_size=\"5GB\")\r\n            successful_completion = True\r\n        except Exception as e:\r\n            print(e)\r\n```\n\n### Logs\n\n```shell\nPushing dataset shards to the dataset hub: 33%|\u2588\u2588\u2588\u258e      | 7\/21 [24:33<49:06, 210.45s\/it]\r\nError while uploading 'data\/val-00007-of-00021-6b216a984af1a4c8.parquet' to the Hub. \r\nPushing split train to the Hub. \r\nResuming upload of the dataset shards. 
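# [annotation] 'Resuming upload' here means shards already present in the Hub repo are\r\n# skipped; only shards still missing from the repo are re-created and pushed.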
\r\nPushing dataset shards to the dataset hub: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 46\/46 [42:10<00:00, 55.01s\/it]\r\nPushing split val to the Hub. \r\nResuming upload of the dataset shards. \r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 3\/3 [00:01<00:00, 1.55ba\/s]\r\nUpload 1 LFS files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:23<00:00, 23.51s\/it]\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 3\/3 [00:02<00:00, 1.39ba\/s]\r\nUpload 1 LFS files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:30<00:00, 30.19s\/it]\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 3\/3 [00:02<00:00, 1.28ba\/s]\r\nUpload 1 LFS files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:24<00:00, 24.08s\/it]\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 3\/3 [00:02<00:00, 1.42ba\/s]\r\nUpload 1 LFS files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:23<00:00, 23.97s\/it]\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 3\/3 [00:02<00:00, 1.49ba\/s]\r\nCreating parquet from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 3\/3 [00:02<00:00, 1.54ba\/s^\r\nUpload 1 LFS files: 0%| | 0\/1 [04:42\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007241 \/ 0.011353 (-0.004112) | 0.004574 \/ 0.011008 (-0.006434) | 0.120481 \/ 0.038508 (0.081973) | 0.040492 \/ 0.023109 (0.017383) | 0.391399 \/ 0.275898 (0.115501) | 0.422844 \/ 0.323480 (0.099365) | 0.004441 \/ 0.007986 (-0.003545) | 0.004544 \/ 0.004328 (0.000216) | 0.089482 \/ 0.004250 (0.085231) | 0.052939 \/ 0.037052 (0.015887) | 0.393649 \/ 0.258489 (0.135160) | 0.433852 \/ 0.293841 (0.140011) | 0.035882 \/ 0.128546 (-0.092664) | 0.010172 \/ 0.075646 (-0.065474) | 0.410331 \/ 0.419271 (-0.008940) | 0.061481 \/ 0.043533 (0.017948) | 0.405066 \/ 0.255139 (0.149927) | 0.417732 \/ 0.283200 (0.134532) | 0.121647 \/ 0.141683 (-0.020035) | 1.790624 \/ 1.452155 (0.338469) | 1.863398 \/ 1.492716 (0.370681) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.250650 \/ 0.018006 (0.232644) | 0.489044 \/ 0.000490 (0.488554) | 0.010421 \/ 0.000200 (0.010222) | 0.000106 \/ 0.000054 (0.000051) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030340 \/ 0.037411 (-0.007071) | 0.128318 \/ 0.014526 (0.113792) | 0.140463 \/ 0.176557 (-0.036093) | 0.205762 \/ 0.737135 (-0.531373) | 0.147996 \/ 0.296338 (-0.148342) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.493158 \/ 0.215209 (0.277949) | 4.858346 \/ 2.077655 (2.780691) | 2.242942 \/ 1.504120 (0.738822) | 2.010092 \/ 1.541195 (0.468897) | 2.076765 \/ 1.468490 (0.608275) | 0.636669 \/ 4.584777 (-3.948108) | 
4.478027 \/ 3.745712 (0.732314) | 2.157843 \/ 5.269862 (-3.112019) | 1.305133 \/ 4.565676 (-3.260543) | 0.079220 \/ 0.424275 (-0.345055) | 0.013858 \/ 0.007607 (0.006251) | 0.604501 \/ 0.226044 (0.378457) | 5.950071 \/ 2.268929 (3.681143) | 2.738373 \/ 55.444624 (-52.706251) | 2.380275 \/ 6.876477 (-4.496201) | 2.517108 \/ 2.142072 (0.375035) | 0.772249 \/ 4.805227 (-4.032979) | 0.169874 \/ 6.500664 (-6.330790) | 0.078026 \/ 0.075469 (0.002557) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.450200 \/ 1.841788 (-0.391588) | 17.810965 \/ 8.074308 (9.736657) | 15.518998 \/ 10.191392 (5.327606) | 0.200469 \/ 0.680424 (-0.479954) | 0.020777 \/ 0.534201 (-0.513424) | 0.504556 \/ 0.579283 (-0.074727) | 0.518493 \/ 0.434364 (0.084129) | 0.615335 \/ 0.540337 (0.074998) | 0.754065 \/ 1.386936 (-0.632871) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007224 \/ 0.011353 (-0.004129) | 0.004663 \/ 0.011008 (-0.006345) | 0.092151 \/ 0.038508 (0.053643) | 0.038359 \/ 0.023109 (0.015250) | 0.486413 \/ 0.275898 (0.210515) | 0.521596 \/ 0.323480 (0.198116) | 0.004207 \/ 0.007986 (-0.003778) | 0.003745 \/ 0.004328 (-0.000583) | 0.089840 \/ 0.004250 (0.085589) | 0.050996 \/ 0.037052 (0.013943) | 0.498090 \/ 0.258489 (0.239601) | 0.533647 \/ 0.293841 (0.239806) | 0.035151 \/ 0.128546 (-0.093395) | 0.010293 \/ 0.075646 (-0.065354) | 0.099056 \/ 0.419271 (-0.320215) | 0.057365 \/ 0.043533 (0.013833) | 0.470652 \/ 0.255139 (0.215513) | 0.509801 \/ 0.283200 (0.226602) | 0.115650 \/ 0.141683 (-0.026033) | 1.810860 \/ 1.452155 (0.358705) | 1.896775 \/ 1.492716 (0.404059) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.261887 \/ 0.018006 (0.243880) | 0.489919 \/ 0.000490 (0.489430) | 0.006117 \/ 0.000200 (0.005917) | 0.000134 \/ 0.000054 (0.000079) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.035033 \/ 0.037411 (-0.002378) | 0.141093 \/ 0.014526 (0.126567) | 0.152613 \/ 0.176557 (-0.023943) | 0.218351 \/ 0.737135 (-0.518785) | 0.158366 \/ 0.296338 (-0.137972) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.542219 \/ 0.215209 (0.327010) | 5.479358 \/ 2.077655 (3.401703) | 2.749586 \/ 1.504120 (1.245466) | 2.537686 \/ 1.541195 (0.996491) | 2.582351 \/ 1.468490 (1.113861) | 0.636750 \/ 4.584777 (-3.948027) | 
4.537501 \/ 3.745712 (0.791789) | 2.141392 \/ 5.269862 (-3.128469) | 1.279711 \/ 4.565676 (-3.285965) | 0.079227 \/ 0.424275 (-0.345048) | 0.014141 \/ 0.007607 (0.006534) | 0.662070 \/ 0.226044 (0.436025) | 6.572144 \/ 2.268929 (4.303215) | 3.321349 \/ 55.444624 (-52.123275) | 2.928219 \/ 6.876477 (-3.948258) | 3.002732 \/ 2.142072 (0.860659) | 0.773808 \/ 4.805227 (-4.031419) | 0.166017 \/ 6.500664 (-6.334647) | 0.076424 \/ 0.075469 (0.000955) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.584325 \/ 1.841788 (-0.257463) | 18.359247 \/ 8.074308 (10.284938) | 16.977875 \/ 10.191392 (6.786483) | 0.195381 \/ 0.680424 (-0.485043) | 0.021048 \/ 0.534201 (-0.513153) | 0.512237 \/ 0.579283 (-0.067047) | 0.511435 \/ 0.434364 (0.077071) | 0.592856 \/ 0.540337 (0.052518) | 0.711905 \/ 1.386936 (-0.675031) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#d536e37b21a6dd5c122b6d8113994ec50846c5b5 \"CML watermark\")\n"],"created_at":1686301273000,"updated_at":1686749738000,"closed_at":1686749244000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5938","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5938","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5938.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5938.patch","merged_at":1686749244000},"body":"This PR ensures that the temporary filename created is the same as the one that is locked, while writing to the cache.\r\n\r\nThis PR stops using `tempfile` to generate the temporary filename.\r\n\r\nAdditionally, the behavior now is aligned for both `resume_download` `True` and `False`.\r\n\r\nRefactor temp_file_manager so that it uses the filename that is locked: \r\n- Use: `cache_path + \".incomplete\"`, when the locked one is `cache_path + \".lock\"`\r\n\r\nBefore it was using `tempfile` inside `cache_dir`, which was not locked: although very improbable name collision (8 random characters), this was not impossible when huge number of multiple processes.\r\n\r\nMaybe related to \"Stale file handle\" issues caused by `tempfile`: \r\n- [ ] https:\/\/huggingface.co\/datasets\/tapaco\/discussions\/4\r\n- [ ] https:\/\/huggingface.co\/datasets\/xcsr\/discussions\/1\r\n- [ ] https:\/\/huggingface.co\/datasets\/covost2\/discussions\/3\r\n```\r\nError code: ConfigNamesError\r\nException: OSError\r\nMessage: [Errno 116] Stale file handle\r\nTraceback: Traceback (most recent call last):\r\n File \"\/src\/services\/worker\/src\/worker\/job_runners\/dataset\/config_names.py\", line 61, in compute_config_names_response\r\n for config in sorted(get_dataset_config_names(path=dataset, use_auth_token=use_auth_token))\r\n File \"\/src\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/inspect.py\", line 323, in get_dataset_config_names\r\n dataset_module = dataset_module_factory(\r\n File \"\/src\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/load.py\", line 1219, in dataset_module_factory\r\n raise e1 from None\r\n File \"\/src\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/load.py\", line 1188, in dataset_module_factory\r\n return 
HubDatasetModuleFactoryWithScript(\r\n File \"\/src\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/load.py\", line 907, in get_module\r\n dataset_readme_path = self.download_dataset_readme_file()\r\n File \"\/src\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/load.py\", line 896, in download_dataset_readme_file\r\n return cached_path(\r\n File \"\/src\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/utils\/file_utils.py\", line 183, in cached_path\r\n output_path = get_from_cache(\r\n File \"\/src\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/utils\/file_utils.py\", line 611, in get_from_cache\r\n http_get(\r\n File \"\/usr\/local\/lib\/python3.9\/tempfile.py\", line 496, in __exit__\r\n result = self.file.__exit__(exc, value, tb)\r\n OSError: [Errno 116] Stale file handle\r\n```\r\n- the stale file handle error can be raised when `tempfile` tries to close (when exiting its context manager) a filename that has been already closed by other process\r\n - note that `tempfile` filenames are randomly generated but not locked in our code\r\n\r\nCC: @severo ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5938\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5938\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5937","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5937\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5937\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5937\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5937","id":1749388597,"node_id":"PR_kwDODunzps5SmLIs","number":5937,"title":"Avoid parallel redownload in cache","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006157 \/ 0.011353 (-0.005196) | 0.003790 \/ 0.011008 (-0.007219) | 0.097889 \/ 0.038508 (0.059381) | 0.029038 \/ 0.023109 (0.005929) | 0.306918 \/ 0.275898 (0.031020) | 0.339637 \/ 0.323480 (0.016157) | 0.003526 \/ 0.007986 (-0.004460) | 0.003102 \/ 0.004328 (-0.001227) | 0.076908 \/ 0.004250 (0.072658) | 0.039254 \/ 0.037052 (0.002201) | 0.309197 \/ 0.258489 (0.050708) | 0.345635 \/ 0.293841 (0.051794) | 0.027954 \/ 0.128546 (-0.100593) | 0.008510 \/ 0.075646 (-0.067136) | 0.314674 \/ 0.419271 (-0.104598) | 0.057102 \/ 0.043533 (0.013569) | 0.307495 \/ 0.255139 (0.052356) | 0.329501 \/ 0.283200 (0.046302) | 0.098450 \/ 0.141683 (-0.043233) | 1.480102 \/ 1.452155 (0.027948) | 1.550554 \/ 1.492716 (0.057838) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.207440 \/ 0.018006 (0.189434) | 0.426560 \/ 0.000490 (0.426071) | 0.003250 \/ 0.000200 (0.003050) | 0.000074 \/ 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.023777 \/ 0.037411 (-0.013634) | 0.103905 \/ 0.014526 (0.089379) | 0.108324 \/ 0.176557 (-0.068233) | 0.167223 \/ 0.737135 (-0.569913) | 0.113529 \/ 0.296338 (-0.182810) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.426770 \/ 0.215209 (0.211561) | 4.251806 \/ 2.077655 (2.174151) | 2.010426 \/ 1.504120 (0.506306) | 1.858630 \/ 1.541195 (0.317435) | 1.941318 \/ 1.468490 (0.472828) | 0.558056 \/ 4.584777 (-4.026721) | 
3.399107 \/ 3.745712 (-0.346606) | 1.758386 \/ 5.269862 (-3.511476) | 1.036305 \/ 4.565676 (-3.529372) | 0.067094 \/ 0.424275 (-0.357182) | 0.011167 \/ 0.007607 (0.003560) | 0.526705 \/ 0.226044 (0.300661) | 5.250319 \/ 2.268929 (2.981390) | 2.496723 \/ 55.444624 (-52.947902) | 2.154013 \/ 6.876477 (-4.722464) | 2.394724 \/ 2.142072 (0.252652) | 0.669723 \/ 4.805227 (-4.135504) | 0.136367 \/ 6.500664 (-6.364297) | 0.067080 \/ 0.075469 (-0.008389) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.269700 \/ 1.841788 (-0.572088) | 14.099775 \/ 8.074308 (6.025467) | 14.422936 \/ 10.191392 (4.231544) | 0.132344 \/ 0.680424 (-0.548080) | 0.016744 \/ 0.534201 (-0.517457) | 0.378286 \/ 0.579283 (-0.200997) | 0.392282 \/ 0.434364 (-0.042082) | 0.437648 \/ 0.540337 (-0.102689) | 0.528554 \/ 1.386936 (-0.858382) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006086 \/ 0.011353 (-0.005267) | 0.003769 \/ 0.011008 (-0.007239) | 0.077414 \/ 0.038508 (0.038906) | 0.027806 \/ 0.023109 (0.004697) | 0.360333 \/ 0.275898 (0.084434) | 0.404725 \/ 0.323480 (0.081245) | 0.003443 \/ 0.007986 (-0.004543) | 0.004434 \/ 0.004328 (0.000106) | 0.077309 \/ 0.004250 (0.073059) | 0.040441 \/ 0.037052 (0.003388) | 0.358627 \/ 0.258489 (0.100138) | 0.415246 \/ 0.293841 (0.121405) | 0.027718 \/ 0.128546 (-0.100829) | 0.008495 \/ 0.075646 (-0.067151) | 0.082874 \/ 0.419271 (-0.336397) | 0.042323 \/ 0.043533 (-0.001210) | 0.354895 \/ 0.255139 (0.099756) | 0.390032 \/ 0.283200 (0.106832) | 0.092377 \/ 0.141683 (-0.049306) | 1.492817 \/ 1.452155 (0.040662) | 1.551859 \/ 1.492716 (0.059143) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.198921 \/ 0.018006 (0.180915) | 0.417699 \/ 0.000490 (0.417209) | 0.001349 \/ 0.000200 (0.001149) | 0.000071 \/ 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.026349 \/ 0.037411 (-0.011062) | 0.105712 \/ 0.014526 (0.091186) | 0.111792 \/ 0.176557 (-0.064765) | 0.163677 \/ 0.737135 (-0.573459) | 0.116864 \/ 0.296338 (-0.179474) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.447532 \/ 0.215209 (0.232323) | 4.468770 \/ 2.077655 (2.391116) | 2.403820 \/ 1.504120 (0.899700) | 2.273640 \/ 1.541195 (0.732445) | 2.337505 \/ 1.468490 (0.869015) | 0.560729 \/ 4.584777 (-4.024048) | 
3.389165 \/ 3.745712 (-0.356547) | 2.697614 \/ 5.269862 (-2.572247) | 1.351909 \/ 4.565676 (-3.213768) | 0.068089 \/ 0.424275 (-0.356186) | 0.011639 \/ 0.007607 (0.004032) | 0.555277 \/ 0.226044 (0.329233) | 5.559291 \/ 2.268929 (3.290363) | 2.657609 \/ 55.444624 (-52.787015) | 2.346667 \/ 6.876477 (-4.529809) | 2.615823 \/ 2.142072 (0.473751) | 0.668662 \/ 4.805227 (-4.136566) | 0.136593 \/ 6.500664 (-6.364071) | 0.068384 \/ 0.075469 (-0.007085) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.312089 \/ 1.841788 (-0.529699) | 14.477510 \/ 8.074308 (6.403202) | 14.231432 \/ 10.191392 (4.040040) | 0.132015 \/ 0.680424 (-0.548409) | 0.016908 \/ 0.534201 (-0.517293) | 0.368315 \/ 0.579283 (-0.210968) | 0.397964 \/ 0.434364 (-0.036400) | 0.432446 \/ 0.540337 (-0.107891) | 0.526349 \/ 1.386936 (-0.860587) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#78b4d55c3cfc60e309eb033d3ed0aba5e796b6ce \"CML watermark\")\n"],"created_at":1686298716000,"updated_at":1686745859000,"closed_at":1686745437000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5937","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5937","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5937.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5937.patch","merged_at":1686745437000},"body":"Avoid parallel redownload in cache by retrying inside the lock if path exists.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5937\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5937\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5936","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5936\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5936\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5936\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5936","id":1748424388,"node_id":"I_kwDODunzps5oNtbE","number":5936,"title":"Sequence of array not supported for most 
dtype","user":{"login":"qgallouedec","id":45557362,"node_id":"MDQ6VXNlcjQ1NTU3MzYy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/45557362?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/qgallouedec","html_url":"https:\/\/github.com\/qgallouedec","followers_url":"https:\/\/api.github.com\/users\/qgallouedec\/followers","following_url":"https:\/\/api.github.com\/users\/qgallouedec\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/qgallouedec\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/qgallouedec\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/qgallouedec\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/qgallouedec\/orgs","repos_url":"https:\/\/api.github.com\/users\/qgallouedec\/repos","events_url":"https:\/\/api.github.com\/users\/qgallouedec\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/qgallouedec\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Related, `float16` is the only dtype not supported by `Array2D` (probably by every `ArrayND`):\r\n\r\n```python\r\nfrom datasets import Array2D, Features, Dataset\r\n\r\nimport numpy as np\r\n\r\nfor dtype in [\r\n \"bool\", # ok\r\n \"int8\", # ok\r\n \"int16\", # ok\r\n \"int32\", # ok\r\n \"int64\", # ok\r\n \"uint8\", # ok\r\n \"uint16\", # ok\r\n \"uint32\", # ok\r\n \"uint64\", # ok\r\n \"float16\", # failed\r\n \"float32\", # ok\r\n \"float64\", # ok\r\n]:\r\n features = Features({\"foo\": Array2D(dtype=dtype, shape=(3, 4))})\r\n array = np.zeros((3, 4), dtype=dtype)\r\n try:\r\n dataset = Dataset.from_dict({\"foo\": [array]}, features=features)\r\n except Exception as e:\r\n print(f\"Failed for dtype={dtype}\")\r\n```","Here's something I can't explain:\r\n\r\nWhen an array is encoded in the `from_dict` method, the numpy array is converted to a list (thus losing the original dtype, which is transfromed to the nearest builtin Python type)\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/6ee61e6e695b1df9f232d47faf3a5e2b30b33737\/src\/datasets\/features\/features.py#L524-L525\r\n\r\nHowever, later on, this same data is written to memory, and it seems authorized that the data is an array (or in this case, a list of arrays). \r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/6ee61e6e695b1df9f232d47faf3a5e2b30b33737\/src\/datasets\/arrow_writer.py#L185-L186\r\n\r\nSo the question is: why convert it to a Python list? This seems to be quite expensive both in terms of write time (all data is copied) and memory (e.g., an int8 is converted to an int64).\r\n\r\nFinally, if I try to remove this step, it solves all the previous problems, and it seems to me that it doesn't break anything (the CI passes without problem).","Arrow only support 1d numpy arrays, so we convert multidim arrays to lists of 1s arrays (and keep the dtype).\r\n\r\nThough you noticed that it's concerting to lists and lose the dtype. If it's the case then it's a bug.","Ok the conversion to list shouldn't be there indeed ! 
Could you open a PR to remove it ?"],"created_at":1686248287000,"updated_at":1686755014000,"closed_at":1686755014000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nCreating a dataset composed of sequences of arrays fails for most dtypes (see the code below).\n\n### Steps to reproduce the bug\n\n```python\r\nfrom datasets import Sequence, Array2D, Features, Dataset\r\n\r\nimport numpy as np\r\n\r\nfor dtype in [\r\n    \"bool\",  # ok\r\n    \"int8\",  # failed\r\n    \"int16\",  # failed\r\n    \"int32\",  # failed\r\n    \"int64\",  # ok\r\n    \"uint8\",  # failed\r\n    \"uint16\",  # failed\r\n    \"uint32\",  # failed\r\n    \"uint64\",  # failed\r\n    \"float16\",  # failed\r\n    \"float32\",  # failed\r\n    \"float64\",  # ok\r\n]:\r\n    features = Features({\"foo\": Sequence(Array2D(dtype=dtype, shape=(2, 2)))})\r\n    sequence = [\r\n        [[1.0, 2.0], [3.0, 4.0]],\r\n        [[5.0, 6.0], [7.0, 8.0]],\r\n    ]\r\n    array = np.array(sequence, dtype=dtype)\r\n    try:\r\n        dataset = Dataset.from_dict({\"foo\": [array]}, features=features)\r\n    except Exception as e:\r\n        print(f\"Failed for dtype={dtype}\")\r\n```\r\n\r\nTraceback for `dtype=\"int8\"`:\r\n\r\n```\r\nTraceback (most recent call last):\r\n  File \"\/home\/qgallouedec\/datasets\/a.py\", line 29, in <module>\r\n    raise e\r\n  File \"\/home\/qgallouedec\/datasets\/a.py\", line 26, in <module>\r\n    dataset = Dataset.from_dict({\"foo\": [array]}, features=features)\r\n  File \"\/home\/qgallouedec\/env\/lib\/python3.10\/site-packages\/datasets\/arrow_dataset.py\", line 899, in from_dict\r\n    pa_table = InMemoryTable.from_pydict(mapping=mapping)\r\n  File \"\/home\/qgallouedec\/env\/lib\/python3.10\/site-packages\/datasets\/table.py\", line 799, in from_pydict\r\n    return cls(pa.Table.from_pydict(*args, **kwargs))\r\n  File \"pyarrow\/table.pxi\", line 3725, in pyarrow.lib.Table.from_pydict\r\n  File \"pyarrow\/table.pxi\", line 5254, in pyarrow.lib._from_pydict\r\n  File \"pyarrow\/array.pxi\", line 350, in pyarrow.lib.asarray\r\n  File \"pyarrow\/array.pxi\", line 236, in pyarrow.lib.array\r\n  File \"pyarrow\/array.pxi\", line 110, in pyarrow.lib._handle_arrow_array_protocol\r\n  File \"\/home\/qgallouedec\/env\/lib\/python3.10\/site-packages\/datasets\/arrow_writer.py\", line 204, in __arrow_array__\r\n    out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type)\r\n  File \"\/home\/qgallouedec\/env\/lib\/python3.10\/site-packages\/datasets\/table.py\", line 1833, in wrapper\r\n    return func(array, *args, **kwargs)\r\n  File \"\/home\/qgallouedec\/env\/lib\/python3.10\/site-packages\/datasets\/table.py\", line 2091, in cast_array_to_feature\r\n    casted_values = _c(array.values, feature.feature)\r\n  File \"\/home\/qgallouedec\/env\/lib\/python3.10\/site-packages\/datasets\/table.py\", line 1833, in wrapper\r\n    return func(array, *args, **kwargs)\r\n  File \"\/home\/qgallouedec\/env\/lib\/python3.10\/site-packages\/datasets\/table.py\", line 2139, in cast_array_to_feature\r\n    return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)\r\n  File \"\/home\/qgallouedec\/env\/lib\/python3.10\/site-packages\/datasets\/table.py\", line 1833, in wrapper\r\n    return func(array, *args, **kwargs)\r\n  File \"\/home\/qgallouedec\/env\/lib\/python3.10\/site-packages\/datasets\/table.py\", line 1967, in array_cast\r\n    return pa_type.wrap_array(array)\r\n  File \"pyarrow\/types.pxi\", line 879, in pyarrow.lib.BaseExtensionType.wrap_array\r\nTypeError: Incompatible storage type for extension>: expected list>, got list>\r\n```\n\n### 
Expected behavior\n\nNot to fail.\n\n### Environment info\n\n\r\n- Python 3.10.6\r\n- datasets: master branch\r\n- Numpy: 1.23.4","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5936\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5936\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5935","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5935\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5935\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5935\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5935","id":1748090220,"node_id":"PR_kwDODunzps5Sh9Mg","number":5935,"title":"Better row group size in push_to_hub","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007489 \/ 0.011353 (-0.003864) | 0.004914 \/ 0.011008 (-0.006095) | 0.111626 \/ 0.038508 (0.073117) | 0.037920 \/ 0.023109 (0.014811) | 0.350571 \/ 0.275898 (0.074673) | 0.389667 \/ 0.323480 (0.066187) | 0.006309 \/ 0.007986 (-0.001676) | 0.005488 \/ 0.004328 (0.001160) | 0.083962 \/ 0.004250 (0.079712) | 0.050728 \/ 0.037052 (0.013675) | 0.360997 \/ 0.258489 (0.102508) | 0.392736 \/ 0.293841 (0.098895) | 0.031975 \/ 0.128546 (-0.096571) | 0.009941 \/ 0.075646 (-0.065705) | 0.379840 \/ 0.419271 (-0.039432) | 0.056522 \/ 0.043533 (0.012989) | 0.359379 \/ 0.255139 (0.104240) | 0.384487 \/ 0.283200 (0.101287) | 0.117523 \/ 0.141683 (-0.024160) | 1.683639 \/ 1.452155 (0.231485) | 1.791645 \/ 1.492716 (0.298929) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.236862 \/ 0.018006 (0.218856) | 0.481208 \/ 0.000490 (0.480719) | 0.007455 \/ 0.000200 (0.007255) | 0.000111 \/ 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030854 \/ 0.037411 (-0.006557) | 0.126892 \/ 0.014526 (0.112367) | 0.139207 \/ 0.176557 (-0.037350) | 0.206447 \/ 0.737135 (-0.530689) | 0.143095 \/ 0.296338 (-0.153244) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.474677 \/ 0.215209 (0.259468) | 4.699534 \/ 2.077655 (2.621879) | 2.152102 \/ 1.504120 (0.647983) | 1.934815 \/ 1.541195 (0.393620) | 1.986448 \/ 1.468490 (0.517958) | 0.607184 \/ 4.584777 (-3.977593) | 
4.480385 \/ 3.745712 (0.734673) | 2.074729 \/ 5.269862 (-3.195132) | 1.182383 \/ 4.565676 (-3.383294) | 0.075624 \/ 0.424275 (-0.348651) | 0.014046 \/ 0.007607 (0.006439) | 0.598859 \/ 0.226044 (0.372814) | 5.959551 \/ 2.268929 (3.690622) | 2.700851 \/ 55.444624 (-52.743773) | 2.303775 \/ 6.876477 (-4.572702) | 2.456441 \/ 2.142072 (0.314369) | 0.747185 \/ 4.805227 (-4.058042) | 0.165787 \/ 6.500664 (-6.334878) | 0.075817 \/ 0.075469 (0.000348) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.411859 \/ 1.841788 (-0.429928) | 17.375495 \/ 8.074308 (9.301187) | 15.187098 \/ 10.191392 (4.995706) | 0.169953 \/ 0.680424 (-0.510471) | 0.020204 \/ 0.534201 (-0.513997) | 0.461424 \/ 0.579283 (-0.117859) | 0.494443 \/ 0.434364 (0.060080) | 0.544583 \/ 0.540337 (0.004246) | 0.648231 \/ 1.386936 (-0.738705) |\n\n<\/details>\nPyArrow==latest\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007785 \/ 0.011353 (-0.003568) | 0.005314 \/ 0.011008 (-0.005694) | 0.087273 \/ 0.038508 (0.048765) | 0.037810 \/ 0.023109 (0.014701) | 0.425473 \/ 0.275898 (0.149575) | 0.459976 \/ 0.323480 (0.136497) | 0.007270 \/ 0.007986 (-0.000716) | 0.004631 \/ 0.004328 (0.000303) | 0.087063 \/ 0.004250 (0.082812) | 0.052630 \/ 0.037052 (0.015578) | 0.432384 \/ 0.258489 (0.173895) | 0.500291 \/ 0.293841 (0.206450) | 0.033144 \/ 0.128546 (-0.095402) | 0.010101 \/ 0.075646 (-0.065545) | 0.096068 \/ 0.419271 (-0.323204) | 0.062750 \/ 0.043533 (0.019217) | 0.419308 \/ 0.255139 (0.164169) | 0.437099 \/ 0.283200 (0.153900) | 0.122289 \/ 0.141683 (-0.019394) | 1.737829 \/ 1.452155 (0.285674) | 1.851481 \/ 1.492716 (0.358765) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.014277 \/ 0.018006 (-0.003729) | 0.489835 \/ 0.000490 (0.489345) | 0.008423 \/ 0.000200 (0.008223) | 0.000188 \/ 0.000054 (0.000134) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.032966 \/ 0.037411 (-0.004445) | 0.130069 \/ 0.014526 (0.115544) | 0.144372 \/ 0.176557 (-0.032185) | 0.200400 \/ 0.737135 (-0.536735) | 0.149384 \/ 0.296338 (-0.146954) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.511542 \/ 0.215209 (0.296333) | 5.093879 \/ 2.077655 (3.016225) | 2.572088 \/ 1.504120 (1.067968) | 2.339118 \/ 1.541195 (0.797923) | 2.441637 \/ 1.468490 (0.973147) | 0.614818 \/ 4.584777 (-3.969959) | 
4.724441 \/ 3.745712 (0.978729) | 5.431978 \/ 5.269862 (0.162116) | 2.257794 \/ 4.565676 (-2.307883) | 0.078109 \/ 0.424275 (-0.346166) | 0.013821 \/ 0.007607 (0.006214) | 0.639232 \/ 0.226044 (0.413188) | 6.424623 \/ 2.268929 (4.155694) | 3.163018 \/ 55.444624 (-52.281606) | 2.756786 \/ 6.876477 (-4.119690) | 2.808655 \/ 2.142072 (0.666583) | 0.745843 \/ 4.805227 (-4.059385) | 0.165562 \/ 6.500664 (-6.335102) | 0.076610 \/ 0.075469 (0.001141) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.738630 \/ 1.841788 (-0.103158) | 18.073573 \/ 8.074308 (9.999265) | 16.482820 \/ 10.191392 (6.291428) | 0.213233 \/ 0.680424 (-0.467191) | 0.022839 \/ 0.534201 (-0.511362) | 0.487043 \/ 0.579283 (-0.092240) | 0.512518 \/ 0.434364 (0.078154) | 0.549365 \/ 0.540337 (0.009028) | 0.656612 \/ 1.386936 (-0.730324) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#288e92b03bd4ec91c10c8a529b32631cfaba9fb7 \"CML watermark\")\n","Good idea!\r\n\r\nI was wondering: if we want to optimize the balance between the size of downloading a row group, and the number of rows in the group, would it make sense to compute the row group size by checking the average size of the rows?\r\n\r\neg. 32x32 images could have a larger row group size than full HD images, no? Relying on the size would even remove the need to check the column types.\r\n\r\n(in this proposal, we could use the computed row group size, eg 837, or use the nearest row group size in a list of values: 10, 100, 1000, 10000)","Probably, but I would go for a simpler solution first :p","Sure! I wanted to understand if the idea made sense or not, but it's not for this PR.","I think it will be more useful for people who use the viewer and won't impact sequential io that much.","DuckDB [paragraph](https:\/\/duckdb.org\/docs\/data\/parquet\/tips.html#selecting-a-row_group_size) that explains how to choose the `row_group_size`. Our default shard size is 500 MB in `push_to_hub`, so, ideally, we should aim for 64 MB row groups (and make this part configurable for power users \ud83d\ude42).\r\n\r\nSo, before merging this PR, let's add a TODO or open an issue as a reminder that this can be improved.","I moved the config values, improved the features check and mentioned the improvements we could do in the docstring :)","_The documentation is not available anymore as the PR was closed or merged._","
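For readers who want to experiment with the idea above, here is a rough sketch (not the PR's actual implementation) of deriving a per-shard row group size from the average in-memory row size, targeting the ~64 MB row groups suggested by the DuckDB tip; the 64 MB constant and the toy table are assumptions for illustration.

```python
import pyarrow as pa
import pyarrow.parquet as pq

# Toy stand-in for a shard: ~10 kB of binary payload per row.
table = pa.table({"image": [b"\x00" * 10_000] * 2_000})

# Hypothetical heuristic: aim for ~64 MB row groups based on average row size.
TARGET_ROW_GROUP_BYTES = 64 * 1024 * 1024
avg_row_bytes = table.nbytes / max(table.num_rows, 1)
rows_per_group = max(1, int(TARGET_ROW_GROUP_BYTES // avg_row_bytes))

# pyarrow's row_group_size is expressed in rows, not bytes.
pq.write_table(table, "shard.parquet", row_group_size=rows_per_group)
print(rows_per_group)  # a few thousand rows for ~10 kB rows
```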
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006211 \/ 0.011353 (-0.005141) | 0.004244 \/ 0.011008 (-0.006764) | 0.097941 \/ 0.038508 (0.059433) | 0.028564 \/ 0.023109 (0.005455) | 0.299651 \/ 0.275898 (0.023753) | 0.340694 \/ 0.323480 (0.017214) | 0.005161 \/ 0.007986 (-0.002824) | 0.004764 \/ 0.004328 (0.000435) | 0.075505 \/ 0.004250 (0.071255) | 0.039656 \/ 0.037052 (0.002603) | 0.309242 \/ 0.258489 (0.050753) | 0.350783 \/ 0.293841 (0.056942) | 0.025145 \/ 0.128546 (-0.103401) | 0.008498 \/ 0.075646 (-0.067148) | 0.317657 \/ 0.419271 (-0.101615) | 0.043926 \/ 0.043533 (0.000394) | 0.305915 \/ 0.255139 (0.050776) | 0.331630 \/ 0.283200 (0.048430) | 0.088564 \/ 0.141683 (-0.053119) | 1.533175 \/ 1.452155 (0.081021) | 1.581017 \/ 1.492716 (0.088301) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.206032 \/ 0.018006 (0.188025) | 0.433446 \/ 0.000490 (0.432956) | 0.003955 \/ 0.000200 (0.003755) | 0.000095 \/ 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.023468 \/ 0.037411 (-0.013943) | 0.103292 \/ 0.014526 (0.088766) | 0.107234 \/ 0.176557 (-0.069322) | 0.168525 \/ 0.737135 (-0.568610) | 0.113218 \/ 0.296338 (-0.183120) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.431085 \/ 0.215209 (0.215875) | 4.302082 \/ 2.077655 (2.224427) | 2.068290 \/ 1.504120 (0.564171) | 1.850718 \/ 1.541195 (0.309523) | 1.964261 \/ 1.468490 (0.495771) | 0.547562 \/ 4.584777 (-4.037215) | 
3.410739 \/ 3.745712 (-0.334974) | 1.779640 \/ 5.269862 (-3.490221) | 1.005466 \/ 4.565676 (-3.560210) | 0.066250 \/ 0.424275 (-0.358025) | 0.011877 \/ 0.007607 (0.004270) | 0.525185 \/ 0.226044 (0.299141) | 5.234786 \/ 2.268929 (2.965857) | 2.398045 \/ 55.444624 (-53.046580) | 2.073020 \/ 6.876477 (-4.803457) | 2.210753 \/ 2.142072 (0.068680) | 0.654897 \/ 4.805227 (-4.150331) | 0.134639 \/ 6.500664 (-6.366025) | 0.067050 \/ 0.075469 (-0.008419) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.180210 \/ 1.841788 (-0.661577) | 13.613091 \/ 8.074308 (5.538783) | 13.441837 \/ 10.191392 (3.250445) | 0.146048 \/ 0.680424 (-0.534376) | 0.016505 \/ 0.534201 (-0.517696) | 0.363210 \/ 0.579283 (-0.216073) | 0.405484 \/ 0.434364 (-0.028880) | 0.428712 \/ 0.540337 (-0.111625) | 0.522300 \/ 1.386936 (-0.864636) |\n\n<\/details>\nPyArrow==latest\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006147 \/ 0.011353 (-0.005206) | 0.004161 \/ 0.011008 (-0.006847) | 0.075861 \/ 0.038508 (0.037353) | 0.027948 \/ 0.023109 (0.004839) | 0.362466 \/ 0.275898 (0.086568) | 0.398227 \/ 0.323480 (0.074747) | 0.005014 \/ 0.007986 (-0.002972) | 0.004772 \/ 0.004328 (0.000444) | 0.075674 \/ 0.004250 (0.071423) | 0.039158 \/ 0.037052 (0.002106) | 0.363567 \/ 0.258489 (0.105078) | 0.410378 \/ 0.293841 (0.116537) | 0.025510 \/ 0.128546 (-0.103036) | 0.008528 \/ 0.075646 (-0.067118) | 0.081803 \/ 0.419271 (-0.337468) | 0.040954 \/ 0.043533 (-0.002579) | 0.358492 \/ 0.255139 (0.103353) | 0.381345 \/ 0.283200 (0.098145) | 0.092347 \/ 0.141683 (-0.049336) | 1.567695 \/ 1.452155 (0.115540) | 1.668412 \/ 1.492716 (0.175696) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.203367 \/ 0.018006 (0.185360) | 0.424642 \/ 0.000490 (0.424152) | 0.002451 \/ 0.000200 (0.002251) | 0.000071 \/ 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.026129 \/ 0.037411 (-0.011282) | 0.102564 \/ 0.014526 (0.088039) | 0.110583 \/ 0.176557 (-0.065973) | 0.164332 \/ 0.737135 (-0.572804) | 0.115706 \/ 0.296338 (-0.180632) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.468925 \/ 0.215209 (0.253716) | 4.657266 \/ 2.077655 (2.579612) | 2.423280 \/ 1.504120 (0.919160) | 2.236284 \/ 1.541195 (0.695089) | 2.323019 \/ 1.468490 (0.854529) | 0.548120 \/ 4.584777 (-4.036657) | 
3.455602 \/ 3.745712 (-0.290110) | 1.730421 \/ 5.269862 (-3.539441) | 1.006089 \/ 4.565676 (-3.559588) | 0.067478 \/ 0.424275 (-0.356797) | 0.011465 \/ 0.007607 (0.003857) | 0.574235 \/ 0.226044 (0.348190) | 5.744404 \/ 2.268929 (3.475475) | 2.882225 \/ 55.444624 (-52.562400) | 2.618246 \/ 6.876477 (-4.258231) | 2.642920 \/ 2.142072 (0.500847) | 0.661441 \/ 4.805227 (-4.143787) | 0.137358 \/ 6.500664 (-6.363306) | 0.070372 \/ 0.075469 (-0.005097) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.333815 \/ 1.841788 (-0.507973) | 14.689667 \/ 8.074308 (6.615359) | 14.362294 \/ 10.191392 (4.170902) | 0.152011 \/ 0.680424 (-0.528413) | 0.016869 \/ 0.534201 (-0.517332) | 0.370433 \/ 0.579283 (-0.208851) | 0.399642 \/ 0.434364 (-0.034722) | 0.433759 \/ 0.540337 (-0.106578) | 0.525443 \/ 1.386936 (-0.861493) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#09e9f9a88edd9055b5c540e3d83b5a11d48f8ba8 \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006564 \/ 0.011353 (-0.004789) | 0.004350 \/ 0.011008 (-0.006658) | 0.096277 \/ 0.038508 (0.057769) | 0.032956 \/ 0.023109 (0.009847) | 0.303675 \/ 0.275898 (0.027777) | 0.336384 \/ 0.323480 (0.012904) | 0.005789 \/ 0.007986 (-0.002197) | 0.003957 \/ 0.004328 (-0.000371) | 0.073990 \/ 0.004250 (0.069740) | 0.050974 \/ 0.037052 (0.013922) | 0.321754 \/ 0.258489 (0.063265) | 0.349489 \/ 0.293841 (0.055648) | 0.031138 \/ 0.128546 (-0.097409) | 0.009000 \/ 0.075646 (-0.066646) | 0.325445 \/ 0.419271 (-0.093826) | 0.070173 \/ 0.043533 (0.026640) | 0.304706 \/ 0.255139 (0.049567) | 0.321803 \/ 0.283200 (0.038603) | 0.109405 \/ 0.141683 (-0.032278) | 1.489812 \/ 1.452155 (0.037657) | 1.577729 \/ 1.492716 (0.085013) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.287187 \/ 0.018006 (0.269181) | 0.527625 \/ 0.000490 (0.527135) | 0.006533 \/ 0.000200 (0.006333) | 0.000090 \/ 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.026659 \/ 0.037411 (-0.010752) | 0.106236 \/ 0.014526 (0.091710) | 0.118615 \/ 0.176557 (-0.057941) | 0.173156 \/ 0.737135 (-0.563979) | 0.122883 \/ 0.296338 (-0.173456) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.407189 \/ 0.215209 (0.191980) | 4.055732 \/ 2.077655 (1.978078) | 1.865594 \/ 1.504120 (0.361474) | 1.664325 \/ 1.541195 (0.123130) | 1.668961 \/ 1.468490 (0.200471) | 0.521207 \/ 4.584777 (-4.063570) | 
3.740424 \/ 3.745712 (-0.005288) | 3.431973 \/ 5.269862 (-1.837889) | 1.636669 \/ 4.565676 (-2.929008) | 0.065271 \/ 0.424275 (-0.359005) | 0.012151 \/ 0.007607 (0.004544) | 0.514233 \/ 0.226044 (0.288189) | 5.110150 \/ 2.268929 (2.841222) | 2.264340 \/ 55.444624 (-53.180284) | 1.940428 \/ 6.876477 (-4.936049) | 2.042286 \/ 2.142072 (-0.099787) | 0.639200 \/ 4.805227 (-4.166028) | 0.139537 \/ 6.500664 (-6.361127) | 0.063195 \/ 0.075469 (-0.012274) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.179501 \/ 1.841788 (-0.662286) | 14.600133 \/ 8.074308 (6.525825) | 14.902137 \/ 10.191392 (4.710745) | 0.144509 \/ 0.680424 (-0.535915) | 0.017449 \/ 0.534201 (-0.516752) | 0.393135 \/ 0.579283 (-0.186148) | 0.413103 \/ 0.434364 (-0.021261) | 0.459897 \/ 0.540337 (-0.080440) | 0.552602 \/ 1.386936 (-0.834334) |\n\n<\/details>\nPyArrow==latest\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006891 \/ 0.011353 (-0.004462) | 0.004633 \/ 0.011008 (-0.006375) | 0.073093 \/ 0.038508 (0.034585) | 0.032509 \/ 0.023109 (0.009399) | 0.348332 \/ 0.275898 (0.072434) | 0.381920 \/ 0.323480 (0.058440) | 0.005978 \/ 0.007986 (-0.002007) | 0.005360 \/ 0.004328 (0.001032) | 0.074307 \/ 0.004250 (0.070056) | 0.049668 \/ 0.037052 (0.012615) | 0.354713 \/ 0.258489 (0.096224) | 0.398521 \/ 0.293841 (0.104681) | 0.032013 \/ 0.128546 (-0.096534) | 0.008890 \/ 0.075646 (-0.066756) | 0.080013 \/ 0.419271 (-0.339259) | 0.051820 \/ 0.043533 (0.008288) | 0.349730 \/ 0.255139 (0.094591) | 0.369267 \/ 0.283200 (0.086067) | 0.103874 \/ 0.141683 (-0.037809) | 1.484148 \/ 1.452155 (0.031993) | 1.573927 \/ 1.492716 (0.081211) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.009699 \/ 0.018006 (-0.008307) | 0.511176 \/ 0.000490 (0.510686) | 0.002938 \/ 0.000200 (0.002738) | 0.000109 \/ 0.000054 (0.000054) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.027847 \/ 0.037411 (-0.009564) | 0.111565 \/ 0.014526 (0.097039) | 0.120625 \/ 0.176557 (-0.055932) | 0.172130 \/ 0.737135 (-0.565006) | 0.125949 \/ 0.296338 (-0.170389) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.430634 \/ 0.215209 (0.215424) | 4.315377 \/ 2.077655 (2.237722) | 2.070764 \/ 1.504120 (0.566644) | 1.881962 \/ 1.541195 (0.340767) | 1.904053 \/ 1.468490 (0.435563) | 0.524973 \/ 4.584777 (-4.059804) | 
3.718359 \/ 3.745712 (-0.027353) | 3.415344 \/ 5.269862 (-1.854518) | 1.224568 \/ 4.565676 (-3.341108) | 0.065593 \/ 0.424275 (-0.358682) | 0.011643 \/ 0.007607 (0.004036) | 0.537050 \/ 0.226044 (0.311006) | 5.352155 \/ 2.268929 (3.083226) | 2.557361 \/ 55.444624 (-52.887263) | 2.217770 \/ 6.876477 (-4.658707) | 2.194975 \/ 2.142072 (0.052902) | 0.635142 \/ 4.805227 (-4.170085) | 0.140642 \/ 6.500664 (-6.360022) | 0.064690 \/ 0.075469 (-0.010779) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.266125 \/ 1.841788 (-0.575663) | 14.836413 \/ 8.074308 (6.762105) | 14.446870 \/ 10.191392 (4.255478) | 0.191545 \/ 0.680424 (-0.488878) | 0.017433 \/ 0.534201 (-0.516768) | 0.392296 \/ 0.579283 (-0.186987) | 0.420698 \/ 0.434364 (-0.013666) | 0.463225 \/ 0.540337 (-0.077112) | 0.556127 \/ 1.386936 (-0.830809) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#7fcbe5b1575c8d162b65b9397b3dfda995a4e048 \"CML watermark\")\n"],"created_at":1686236475000,"updated_at":1686332857000,"closed_at":1686332409000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5935","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5935","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5935.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5935.patch","merged_at":1686332409000},"body":"This is a very simple change that improves `to_parquet` to use a more reasonable row group size for image and audio datasets.\r\n\r\nThis is especially useful for `push_to_hub` and will provide a better experience with the dataset viewer on HF","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5935\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5935\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5934","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5934\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5934\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5934\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5934","id":1747904840,"node_id":"PR_kwDODunzps5ShUxQ","number":5934,"title":"Modify levels of some logging 
messages","user":{"login":"Laurent2916","id":21087104,"node_id":"MDQ6VXNlcjIxMDg3MTA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/21087104?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Laurent2916","html_url":"https:\/\/github.com\/Laurent2916","followers_url":"https:\/\/api.github.com\/users\/Laurent2916\/followers","following_url":"https:\/\/api.github.com\/users\/Laurent2916\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Laurent2916\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Laurent2916\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Laurent2916\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Laurent2916\/orgs","repos_url":"https:\/\/api.github.com\/users\/Laurent2916\/repos","events_url":"https:\/\/api.github.com\/users\/Laurent2916\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Laurent2916\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I've addressed this as part of #6019, so feel free to close this PR. ","Thanks !"],"created_at":1686231104000,"updated_at":1689186063000,"closed_at":1689186062000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5934","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5934","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5934.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5934.patch","merged_at":null},"body":"Some warning messages didn't quite sound like warnings so I modified their logging levels to info.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5934\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5934\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5933","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5933\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5933\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5933\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5933","id":1747382500,"node_id":"PR_kwDODunzps5Sfi5J","number":5933,"title":"Fix `to_numpy` when None values in the 
sequence","user":{"login":"qgallouedec","id":45557362,"node_id":"MDQ6VXNlcjQ1NTU3MzYy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/45557362?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/qgallouedec","html_url":"https:\/\/github.com\/qgallouedec","followers_url":"https:\/\/api.github.com\/users\/qgallouedec\/followers","following_url":"https:\/\/api.github.com\/users\/qgallouedec\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/qgallouedec\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/qgallouedec\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/qgallouedec\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/qgallouedec\/orgs","repos_url":"https:\/\/api.github.com\/users\/qgallouedec\/repos","events_url":"https:\/\/api.github.com\/users\/qgallouedec\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/qgallouedec\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I just added the same test with dynamic shape","_The documentation is not available anymore as the PR was closed or merged._","Awesome ! I'm merging now if you don't mind :)\r\nWe should probably give you permissions to merge your own PRs when you have an approval","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009980 \/ 0.011353 (-0.001373) | 0.005709 \/ 0.011008 (-0.005300) | 0.132185 \/ 0.038508 (0.093677) | 0.039299 \/ 0.023109 (0.016190) | 0.400168 \/ 0.275898 (0.124270) | 0.470582 \/ 0.323480 (0.147102) | 0.007753 \/ 0.007986 (-0.000233) | 0.005196 \/ 0.004328 (0.000868) | 0.093698 \/ 0.004250 (0.089448) | 0.052631 \/ 0.037052 (0.015579) | 0.430347 \/ 0.258489 (0.171858) | 0.460162 \/ 0.293841 (0.166321) | 0.057511 \/ 0.128546 (-0.071035) | 0.013944 \/ 0.075646 (-0.061702) | 0.459008 \/ 0.419271 (0.039737) | 0.075532 \/ 0.043533 (0.031999) | 0.405165 \/ 0.255139 (0.150026) | 0.456142 \/ 0.283200 (0.172942) | 0.117309 \/ 0.141683 (-0.024374) | 1.945787 \/ 1.452155 (0.493633) | 2.067162 \/ 1.492716 (0.574446) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.285755 \/ 0.018006 (0.267749) | 0.619965 \/ 0.000490 (0.619476) | 0.005071 \/ 0.000200 (0.004871) | 0.000114 \/ 0.000054 (0.000059) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.031112 \/ 0.037411 (-0.006299) | 0.128514 \/ 0.014526 (0.113988) | 0.137161 \/ 0.176557 (-0.039396) | 0.211363 \/ 0.737135 (-0.525772) | 0.151045 \/ 0.296338 (-0.145293) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.609361 \/ 0.215209 (0.394152) | 6.124844 \/ 2.077655 (4.047189) | 2.440757 \/ 1.504120 (0.936637) | 2.034495 \/ 1.541195 (0.493300) | 2.047192 \/ 1.468490 (0.578702) | 0.883171 \/ 4.584777 (-3.701606) | 
5.470552 \/ 3.745712 (1.724840) | 4.401696 \/ 5.269862 (-0.868165) | 2.378674 \/ 4.565676 (-2.187003) | 0.108065 \/ 0.424275 (-0.316210) | 0.013239 \/ 0.007607 (0.005632) | 0.830957 \/ 0.226044 (0.604913) | 8.090659 \/ 2.268929 (5.821731) | 3.289203 \/ 55.444624 (-52.155422) | 2.500777 \/ 6.876477 (-4.375700) | 2.561440 \/ 2.142072 (0.419367) | 1.064893 \/ 4.805227 (-3.740334) | 0.220486 \/ 6.500664 (-6.280178) | 0.079507 \/ 0.075469 (0.004038) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.544334 \/ 1.841788 (-0.297454) | 17.878997 \/ 8.074308 (9.804689) | 18.952191 \/ 10.191392 (8.760799) | 0.245166 \/ 0.680424 (-0.435258) | 0.028022 \/ 0.534201 (-0.506179) | 0.517828 \/ 0.579283 (-0.061455) | 0.618988 \/ 0.434364 (0.184624) | 0.589742 \/ 0.540337 (0.049405) | 0.670902 \/ 1.386936 (-0.716034) |\n\n<\/details>\nPyArrow==latest\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009616 \/ 0.011353 (-0.001737) | 0.006098 \/ 0.011008 (-0.004911) | 0.100301 \/ 0.038508 (0.061793) | 0.037792 \/ 0.023109 (0.014683) | 0.484667 \/ 0.275898 (0.208769) | 0.519286 \/ 0.323480 (0.195806) | 0.007427 \/ 0.007986 (-0.000558) | 0.007172 \/ 0.004328 (0.002844) | 0.104429 \/ 0.004250 (0.100179) | 0.056567 \/ 0.037052 (0.019515) | 0.502641 \/ 0.258489 (0.244152) | 0.549629 \/ 0.293841 (0.255788) | 0.049574 \/ 0.128546 (-0.078972) | 0.015223 \/ 0.075646 (-0.060424) | 0.113947 \/ 0.419271 (-0.305324) | 0.064585 \/ 0.043533 (0.021053) | 0.512962 \/ 0.255139 (0.257823) | 0.507218 \/ 0.283200 (0.224019) | 0.122194 \/ 0.141683 (-0.019488) | 1.927821 \/ 1.452155 (0.475667) | 2.051161 \/ 1.492716 (0.558445) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.291350 \/ 0.018006 (0.273344) | 0.588099 \/ 0.000490 (0.587610) | 0.001368 \/ 0.000200 (0.001168) | 0.000153 \/ 0.000054 (0.000099) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030604 \/ 0.037411 (-0.006807) | 0.126810 \/ 0.014526 (0.112285) | 0.139309 \/ 0.176557 (-0.037248) | 0.208030 \/ 0.737135 (-0.529105) | 0.138985 \/ 0.296338 (-0.157353) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.681254 \/ 0.215209 (0.466045) | 6.753856 \/ 2.077655 (4.676201) | 2.780704 \/ 1.504120 (1.276585) | 2.475205 \/ 1.541195 (0.934010) | 2.486784 \/ 1.468490 (1.018294) | 0.879223 \/ 4.584777 (-3.705554) | 
5.662294 \/ 3.745712 (1.916582) | 2.698705 \/ 5.269862 (-2.571156) | 1.660620 \/ 4.565676 (-2.905057) | 0.112218 \/ 0.424275 (-0.312057) | 0.014211 \/ 0.007607 (0.006604) | 0.796957 \/ 0.226044 (0.570913) | 8.180897 \/ 2.268929 (5.911969) | 3.540419 \/ 55.444624 (-51.904205) | 2.899467 \/ 6.876477 (-3.977010) | 2.870306 \/ 2.142072 (0.728233) | 1.069537 \/ 4.805227 (-3.735690) | 0.211281 \/ 6.500664 (-6.289383) | 0.078898 \/ 0.075469 (0.003429) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.666790 \/ 1.841788 (-0.174998) | 18.302127 \/ 8.074308 (10.227819) | 21.317546 \/ 10.191392 (11.126153) | 0.242795 \/ 0.680424 (-0.437629) | 0.026754 \/ 0.534201 (-0.507447) | 0.493375 \/ 0.579283 (-0.085908) | 0.605400 \/ 0.434364 (0.171036) | 0.586888 \/ 0.540337 (0.046550) | 0.722809 \/ 1.386936 (-0.664127) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#ce2328e7b1d62998b22510492530af55d4493b73 \"CML watermark\")\n"],"created_at":1686213536000,"updated_at":1686318581000,"closed_at":1686317028000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5933","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5933","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5933.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5933.patch","merged_at":1686317028000},"body":"Closes #5927 \r\nI've realized that the error was overlooked during testing due to the presence of only one None value in the sequence.\r\nUnfortunately, it was the only case where the function works as expected. When the sequence contained more than one None value, the function failed. 
Consequently, I've updated the tests to include sequences with multiple None values.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5933\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5933\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5932","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5932\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5932\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5932\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5932","id":1746249161,"node_id":"PR_kwDODunzps5Sbrzo","number":5932,"title":"[doc build] Use secrets","user":{"login":"mishig25","id":11827707,"node_id":"MDQ6VXNlcjExODI3NzA3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/11827707?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mishig25","html_url":"https:\/\/github.com\/mishig25","followers_url":"https:\/\/api.github.com\/users\/mishig25\/followers","following_url":"https:\/\/api.github.com\/users\/mishig25\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mishig25\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mishig25\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mishig25\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mishig25\/orgs","repos_url":"https:\/\/api.github.com\/users\/mishig25\/repos","events_url":"https:\/\/api.github.com\/users\/mishig25\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mishig25\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008499 \/ 0.011353 (-0.002854) | 0.006155 \/ 0.011008 (-0.004853) | 0.124032 \/ 0.038508 (0.085524) | 0.037337 \/ 0.023109 (0.014228) | 0.389274 \/ 0.275898 (0.113376) | 0.427736 \/ 0.323480 (0.104257) | 0.006929 \/ 0.007986 (-0.001057) | 0.005017 \/ 0.004328 (0.000689) | 0.096356 \/ 0.004250 (0.092105) | 0.055694 \/ 0.037052 (0.018642) | 0.391417 \/ 0.258489 (0.132928) | 0.448098 \/ 0.293841 (0.154257) | 0.042442 \/ 0.128546 (-0.086105) | 0.013456 \/ 0.075646 (-0.062190) | 0.423502 \/ 0.419271 (0.004230) | 0.062919 \/ 0.043533 (0.019386) | 0.384317 \/ 0.255139 (0.129178) | 0.410851 \/ 0.283200 (0.127652) | 0.112807 \/ 0.141683 (-0.028875) | 1.746050 \/ 1.452155 (0.293895) | 1.977974 \/ 1.492716 (0.485257) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.306382 \/ 0.018006 (0.288375) | 0.620310 \/ 0.000490 (0.619820) | 0.009309 \/ 0.000200 (0.009109) | 0.000106 \/ 0.000054 (0.000052) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.026900 \/ 0.037411 (-0.010511) | 0.140125 \/ 0.014526 (0.125599) | 0.136295 \/ 0.176557 (-0.040261) | 0.207721 \/ 0.737135 (-0.529414) | 0.146328 \/ 0.296338 (-0.150011) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.616712 \/ 0.215209 (0.401503) | 6.237820 \/ 2.077655 (4.160166) | 2.503809 \/ 1.504120 (0.999689) | 2.129739 \/ 1.541195 (0.588544) | 2.160768 \/ 1.468490 (0.692277) | 0.971273 \/ 4.584777 (-3.613504) | 
5.687161 \/ 3.745712 (1.941449) | 2.738148 \/ 5.269862 (-2.531713) | 1.692695 \/ 4.565676 (-2.872981) | 0.113701 \/ 0.424275 (-0.310574) | 0.014809 \/ 0.007607 (0.007202) | 0.774795 \/ 0.226044 (0.548750) | 7.660012 \/ 2.268929 (5.391083) | 3.253036 \/ 55.444624 (-52.191588) | 2.607498 \/ 6.876477 (-4.268979) | 2.681678 \/ 2.142072 (0.539606) | 1.095275 \/ 4.805227 (-3.709952) | 0.239078 \/ 6.500664 (-6.261586) | 0.081034 \/ 0.075469 (0.005565) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.574547 \/ 1.841788 (-0.267240) | 18.323566 \/ 8.074308 (10.249258) | 19.274482 \/ 10.191392 (9.083090) | 0.210275 \/ 0.680424 (-0.470149) | 0.031843 \/ 0.534201 (-0.502358) | 0.514843 \/ 0.579283 (-0.064440) | 0.633782 \/ 0.434364 (0.199418) | 0.588569 \/ 0.540337 (0.048232) | 0.721401 \/ 1.386936 (-0.665535) |\n\n<\/details>\nPyArrow==latest\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008866 \/ 0.011353 (-0.002487) | 0.006460 \/ 0.011008 (-0.004548) | 0.121337 \/ 0.038508 (0.082829) | 0.033896 \/ 0.023109 (0.010786) | 0.455702 \/ 0.275898 (0.179804) | 0.509685 \/ 0.323480 (0.186205) | 0.007650 \/ 0.007986 (-0.000336) | 0.005578 \/ 0.004328 (0.001250) | 0.098505 \/ 0.004250 (0.094255) | 0.056122 \/ 0.037052 (0.019069) | 0.478483 \/ 0.258489 (0.219994) | 0.560008 \/ 0.293841 (0.266167) | 0.044926 \/ 0.128546 (-0.083620) | 0.014562 \/ 0.075646 (-0.061085) | 0.115027 \/ 0.419271 (-0.304244) | 0.066494 \/ 0.043533 (0.022961) | 0.463434 \/ 0.255139 (0.208296) | 0.513856 \/ 0.283200 (0.230656) | 0.126436 \/ 0.141683 (-0.015247) | 1.874729 \/ 1.452155 (0.422575) | 1.925080 \/ 1.492716 (0.432364) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.012672 \/ 0.018006 (-0.005334) | 0.615797 \/ 0.000490 (0.615307) | 0.001606 \/ 0.000200 (0.001406) | 0.000118 \/ 0.000054 (0.000064) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.031104 \/ 0.037411 (-0.006307) | 0.130107 \/ 0.014526 (0.115581) | 0.140587 \/ 0.176557 (-0.035970) | 0.205081 \/ 0.737135 (-0.532054) | 0.144068 \/ 0.296338 (-0.152270) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.646549 \/ 0.215209 (0.431340) | 6.403962 \/ 2.077655 (4.326307) | 2.812594 \/ 1.504120 (1.308474) | 2.478480 \/ 1.541195 (0.937285) | 2.552385 \/ 1.468490 (1.083895) | 0.991987 \/ 4.584777 (-3.592790) | 
5.777917 \/ 3.745712 (2.032205) | 5.697830 \/ 5.269862 (0.427969) | 2.370583 \/ 4.565676 (-2.195094) | 0.109905 \/ 0.424275 (-0.314370) | 0.013801 \/ 0.007607 (0.006193) | 0.799932 \/ 0.226044 (0.573888) | 8.155672 \/ 2.268929 (5.886743) | 3.711662 \/ 55.444624 (-51.732963) | 3.042164 \/ 6.876477 (-3.834312) | 3.073549 \/ 2.142072 (0.931477) | 1.137515 \/ 4.805227 (-3.667712) | 0.231266 \/ 6.500664 (-6.269398) | 0.080893 \/ 0.075469 (0.005424) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.669210 \/ 1.841788 (-0.172577) | 18.747144 \/ 8.074308 (10.672836) | 21.084589 \/ 10.191392 (10.893197) | 0.241379 \/ 0.680424 (-0.439045) | 0.029473 \/ 0.534201 (-0.504728) | 0.524605 \/ 0.579283 (-0.054678) | 0.622852 \/ 0.434364 (0.188488) | 0.604941 \/ 0.540337 (0.064604) | 0.715978 \/ 1.386936 (-0.670958) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#142484a60b1330359d7713e906fc9e5e30aa9f64 \"CML watermark\")\n","Cool ! what about `.github\/workflows\/build_pr_documentation.yml` and `.github\/workflows\/delete_doc_comment.yml` ?","
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.005973 \/ 0.011353 (-0.005380) | 0.004389 \/ 0.011008 (-0.006620) | 0.096076 \/ 0.038508 (0.057568) | 0.031569 \/ 0.023109 (0.008460) | 0.328300 \/ 0.275898 (0.052402) | 0.359356 \/ 0.323480 (0.035876) | 0.005378 \/ 0.007986 (-0.002607) | 0.003703 \/ 0.004328 (-0.000625) | 0.075251 \/ 0.004250 (0.071000) | 0.042340 \/ 0.037052 (0.005287) | 0.346103 \/ 0.258489 (0.087614) | 0.379896 \/ 0.293841 (0.086055) | 0.027493 \/ 0.128546 (-0.101053) | 0.009033 \/ 0.075646 (-0.066613) | 0.327829 \/ 0.419271 (-0.091442) | 0.064074 \/ 0.043533 (0.020541) | 0.337703 \/ 0.255139 (0.082564) | 0.355335 \/ 0.283200 (0.072136) | 0.101179 \/ 0.141683 (-0.040504) | 1.471738 \/ 1.452155 (0.019584) | 1.539031 \/ 1.492716 (0.046315) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.194097 \/ 0.018006 (0.176091) | 0.434190 \/ 0.000490 (0.433701) | 0.005730 \/ 0.000200 (0.005530) | 0.000088 \/ 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025634 \/ 0.037411 (-0.011778) | 0.105080 \/ 0.014526 (0.090555) | 0.116508 \/ 0.176557 (-0.060049) | 0.173867 \/ 0.737135 (-0.563269) | 0.117749 \/ 0.296338 (-0.178590) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.401566 \/ 0.215209 (0.186357) | 4.003558 \/ 2.077655 (1.925903) | 1.802756 \/ 1.504120 (0.298636) | 1.604222 \/ 1.541195 (0.063027) | 1.656617 \/ 1.468490 (0.188127) | 0.523385 \/ 4.584777 (-4.061392) | 
3.744292 \/ 3.745712 (-0.001420) | 1.794295 \/ 5.269862 (-3.475567) | 1.044690 \/ 4.565676 (-3.520987) | 0.064992 \/ 0.424275 (-0.359284) | 0.011542 \/ 0.007607 (0.003935) | 0.507830 \/ 0.226044 (0.281785) | 5.061574 \/ 2.268929 (2.792645) | 2.252896 \/ 55.444624 (-53.191729) | 1.912551 \/ 6.876477 (-4.963926) | 2.073510 \/ 2.142072 (-0.068562) | 0.642148 \/ 4.805227 (-4.163079) | 0.140151 \/ 6.500664 (-6.360513) | 0.062623 \/ 0.075469 (-0.012846) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.180367 \/ 1.841788 (-0.661421) | 14.263475 \/ 8.074308 (6.189167) | 12.917251 \/ 10.191392 (2.725859) | 0.143815 \/ 0.680424 (-0.536608) | 0.017286 \/ 0.534201 (-0.516915) | 0.388411 \/ 0.579283 (-0.190872) | 0.430512 \/ 0.434364 (-0.003851) | 0.466595 \/ 0.540337 (-0.073742) | 0.564545 \/ 1.386936 (-0.822391) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006059 \/ 0.011353 (-0.005294) | 0.004419 \/ 0.011008 (-0.006590) | 0.074206 \/ 0.038508 (0.035697) | 0.031180 \/ 0.023109 (0.008071) | 0.380031 \/ 0.275898 (0.104133) | 0.410373 \/ 0.323480 (0.086893) | 0.005397 \/ 0.007986 (-0.002589) | 0.003952 \/ 0.004328 (-0.000376) | 0.074426 \/ 0.004250 (0.070176) | 0.046256 \/ 0.037052 (0.009203) | 0.385543 \/ 0.258489 (0.127054) | 0.430724 \/ 0.293841 (0.136883) | 0.028052 \/ 0.128546 (-0.100494) | 0.008810 \/ 0.075646 (-0.066836) | 0.080749 \/ 0.419271 (-0.338522) | 0.046746 \/ 0.043533 (0.003214) | 0.380325 \/ 0.255139 (0.125186) | 0.398901 \/ 0.283200 (0.115701) | 0.099607 \/ 0.141683 (-0.042076) | 1.433343 \/ 1.452155 (-0.018812) | 1.520447 \/ 1.492716 (0.027730) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.202232 \/ 0.018006 (0.184225) | 0.431342 \/ 0.000490 (0.430852) | 0.001020 \/ 0.000200 (0.000820) | 0.000089 \/ 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.028762 \/ 0.037411 (-0.008649) | 0.111777 \/ 0.014526 (0.097251) | 0.119283 \/ 0.176557 (-0.057273) | 0.168151 \/ 0.737135 (-0.568985) | 0.126093 \/ 0.296338 (-0.170245) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.442689 \/ 0.215209 (0.227480) | 4.369202 \/ 2.077655 (2.291547) | 2.167703 \/ 1.504120 (0.663583) | 1.960580 \/ 1.541195 (0.419385) | 2.001459 \/ 1.468490 (0.532969) | 0.527169 \/ 4.584777 (-4.057608) | 
3.738987 \/ 3.745712 (-0.006726) | 1.819002 \/ 5.269862 (-3.450860) | 1.082786 \/ 4.565676 (-3.482891) | 0.066209 \/ 0.424275 (-0.358066) | 0.011549 \/ 0.007607 (0.003942) | 0.545959 \/ 0.226044 (0.319915) | 5.466655 \/ 2.268929 (3.197727) | 2.671448 \/ 55.444624 (-52.773176) | 2.340968 \/ 6.876477 (-4.535509) | 2.358805 \/ 2.142072 (0.216733) | 0.649456 \/ 4.805227 (-4.155771) | 0.142009 \/ 6.500664 (-6.358655) | 0.064199 \/ 0.075469 (-0.011270) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.259819 \/ 1.841788 (-0.581969) | 14.456988 \/ 8.074308 (6.382680) | 14.478982 \/ 10.191392 (4.287590) | 0.163156 \/ 0.680424 (-0.517268) | 0.017090 \/ 0.534201 (-0.517111) | 0.391339 \/ 0.579283 (-0.187944) | 0.422021 \/ 0.434364 (-0.012343) | 0.465340 \/ 0.540337 (-0.074997) | 0.564517 \/ 1.386936 (-0.822419) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#97358c88f996a65f49923ec215358044e4146a95 \"CML watermark\")\n","> .github\/workflows\/delete_doc_comment.yml \r\n\r\nis already updated https:\/\/github.com\/huggingface\/datasets\/pull\/5932\/files\r\n\r\n> .github\/workflows\/build_pr_documentation.yml\r\n\r\nindeed no changes are needed"],"created_at":1686154179000,"updated_at":1686305818000,"closed_at":1686304396000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5932","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5932","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5932.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5932.patch","merged_at":1686304396000},"body":"Companion pr to https:\/\/github.com\/huggingface\/doc-builder\/pull\/379","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5932\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5932\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5931","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5931\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5931\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5931\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5931","id":1745408784,"node_id":"I_kwDODunzps5oCNMQ","number":5931,"title":"`datasets.map` not reusing cached copy by 
default","user":{"login":"bhavitvyamalik","id":19718818,"node_id":"MDQ6VXNlcjE5NzE4ODE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19718818?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhavitvyamalik","html_url":"https:\/\/github.com\/bhavitvyamalik","followers_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/followers","following_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/repos","events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This can happen when a map transform cannot be hashed deterministically (e.g., an object referenced by the transform changes its state after the first call - an issue with fast tokenizers). The solution is to provide `cache_file_name` in the `map` call to check this file for the cached result instead of relying on the default caching mechanism."],"created_at":1686128613000,"updated_at":1687364140000,"closed_at":1687364140000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\r\n\r\nWhen I load the dataset from local directory, it's cached copy is picked up after first time. However, for `map` operation, the operation is applied again and cached copy is not picked up. Is there any way to pick cached copy instead of processing it again? The only solution I could think of was to use `save_to_disk` after my last transform and then use that in my DataLoader pipeline. 
"],"created_at":1686128613000,"updated_at":1687364140000,"closed_at":1687364140000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\r\n\r\nWhen I load the dataset from a local directory, its cached copy is picked up after the first time. However, for the `map` operation, the operation is applied again and the cached copy is not picked up. Is there any way to pick the cached copy instead of processing it again? The only solution I could think of was to use `save_to_disk` after my last transform and then use that in my DataLoader pipeline. Are there any other solutions for this?\r\n\r\nOne more thing: my dataset occupies 6 GB of storage after I use `map`; is there any way I can reduce that memory usage?\r\n\r\n\r\n### Steps to reproduce the bug\r\n\r\n```\r\n# make sure that dataset decodes audio with correct sampling rate\r\ndataset_sampling_rate = next(iter(self.raw_datasets.values())).features[\"audio\"].sampling_rate\r\nif dataset_sampling_rate != self.feature_extractor.sampling_rate:\r\n self.raw_datasets = self.raw_datasets.cast_column(\r\n \"audio\", datasets.features.Audio(sampling_rate=self.feature_extractor.sampling_rate)\r\n )\r\n\r\nvectorized_datasets = self.raw_datasets.map(\r\n self.prepare_dataset,\r\n remove_columns=next(iter(self.raw_datasets.values())).column_names,\r\n num_proc=self.num_workers,\r\n desc=\"preprocess datasets\",\r\n)\r\n# filter data that is longer than max_input_length\r\nself.vectorized_datasets = vectorized_datasets.filter(\r\n self.is_audio_in_length_range,\r\n num_proc=self.num_workers,\r\n input_columns=[\"input_length\"],\r\n )\r\n\r\ndef prepare_dataset(self, batch):\r\n # load audio\r\n sample = batch[\"audio\"]\r\n inputs = self.feature_extractor(sample[\"array\"], sampling_rate=sample[\"sampling_rate\"])\r\n batch[\"input_values\"] = inputs.input_values[0]\r\n batch[\"input_length\"] = len(batch[\"input_values\"])\r\n\r\n batch[\"labels\"] = self.tokenizer(batch[\"target_text\"]).input_ids\r\n return batch\r\n\r\n```\r\n\r\n### Expected behavior\r\n\r\n`map` to use the cached copy and, if possible, an alternative technique to reduce memory usage after using `map`\r\n\r\n### Environment info\r\n\r\n\r\n- `datasets` version: 2.12.0\r\n- Platform: Linux-3.10.0-1160.71.1.el7.x86_64-x86_64-with-glibc2.17\r\n- Python version: 3.8.16\r\n- Huggingface_hub version: 0.15.1\r\n- PyArrow version: 12.0.0\r\n- Pandas version: 2.0.2\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5931\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5931\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5930","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5930\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5930\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5930\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5930","id":1745184395,"node_id":"I_kwDODunzps5oBWaL","number":5930,"title":"loading private custom dataset script - authentication 
error","user":{"login":"flckv","id":103381497,"node_id":"U_kgDOBil5-Q","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/103381497?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/flckv","html_url":"https:\/\/github.com\/flckv","followers_url":"https:\/\/api.github.com\/users\/flckv\/followers","following_url":"https:\/\/api.github.com\/users\/flckv\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/flckv\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/flckv\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/flckv\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/flckv\/orgs","repos_url":"https:\/\/api.github.com\/users\/flckv\/repos","events_url":"https:\/\/api.github.com\/users\/flckv\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/flckv\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This issue seems to have been resolved, so I'm closing it."],"created_at":1686121103000,"updated_at":1686840561000,"closed_at":1686840560000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nTrain model with my custom dataset stored in HuggingFace and loaded with the loading script requires authentication but I am not sure how ?\r\n\r\n\r\nI am logged in in the terminal, in the browser. I receive this error: \r\n\r\n\r\n\/python3.8\/site-packages\/datasets\/utils\/file_utils.py\", line 566, in get_from_cache\r\n raise ConnectionError(f\"Couldn't reach {url} ({repr(head_error)})\")\r\nConnectionError: Couldn't reach https:\/\/huggingface.co\/datasets\/fkov\/s\/blob\/main\/data\/s\/train\/labels `(ConnectionError('Unauthorized for URL `https:\/\/huggingface.co\/datasets\/fkov\/s\/blob\/main\/data\/s\/train\/labels. Please use the parameter `**`use_auth_token=True`**` after logging in with `**`huggingface-cli login`**`'))\r\n\r\nwhen I added: `use_auth_token=True` and logged in via terminal then I received error:\r\n\r\nor the same error in different format: \r\nraise ConnectionError(f\"`Couldn't reach {url} (error {response.status_code}`)\")\r\nConnectionError: Couldn't reach https:\/\/huggingface.co\/datasets\/fkov\/s\/blob\/main\/data\/s\/train\/labels (`error 401`)\r\n\r\n\r\n\n\n### Steps to reproduce the bug\n\n1. cloned transformers library locally:\r\nhttps:\/\/huggingface.co\/docs\/transformers\/v4.15.0\/examples :\r\n\r\n> git clone https:\/\/github.com\/huggingface\/transformers\r\n> cd transformers\r\n> pip install .\r\n> cd \/transformers\/examples\/pytorch\/audio-classification\r\n> pip install -r requirements.txt\r\n\r\n2. created **loading script** \r\n> https:\/\/huggingface.co\/docs\/datasets\/dataset_script added next to dataset:\r\n\r\n3. uploaded **private custom dataset** with loading script to HuggingFace\r\n> https:\/\/huggingface.co\/docs\/datasets\/dataset_script\r\n\r\n4. added dataset loading script to **local directory** in the above cloned transformers library:\r\n> cd \/transformers\/examples\/pytorch\/audio-classification\r\n\r\n5. logged in to HuggingFace on local terminal with :\r\n> **huggingface-cli login**\r\n\r\n6. 
\n\n### Steps to reproduce the bug\n\n1. cloned transformers library locally:\r\nhttps:\/\/huggingface.co\/docs\/transformers\/v4.15.0\/examples :\r\n\r\n> git clone https:\/\/github.com\/huggingface\/transformers\r\n> cd transformers\r\n> pip install .\r\n> cd \/transformers\/examples\/pytorch\/audio-classification\r\n> pip install -r requirements.txt\r\n\r\n2. created **loading script** \r\n> https:\/\/huggingface.co\/docs\/datasets\/dataset_script added next to dataset:\r\n\r\n3. uploaded **private custom dataset** with loading script to HuggingFace\r\n> https:\/\/huggingface.co\/docs\/datasets\/dataset_script\r\n\r\n4. added dataset loading script to **local directory** in the above cloned transformers library:\r\n> cd \/transformers\/examples\/pytorch\/audio-classification\r\n\r\n5. logged in to HuggingFace on the local terminal with:\r\n> **huggingface-cli login**\r\n\r\n6. ran the model with the custom dataset stored on HuggingFace, following: https:\/\/github.com\/huggingface\/transformers\/blob\/main\/examples\/pytorch\/audio-classification\/README.md\r\n\r\n cd \/transformers\/examples\/pytorch\/audio-classification\r\n> python run_audio_classification.py \\\r\n> --model_name_or_path facebook\/wav2vec2-base \\\r\n> --output_dir l\/users\/flck\/outputs\/wav2vec2-base-s \\\r\n> --overwrite_output_dir \\\r\n> --dataset_name s \\\r\n> --dataset_config_name s \\\r\n> --remove_unused_columns False \\\r\n> --do_train \\\r\n> --do_eval \\\r\n> --fp16 \\\r\n> --learning_rate 3e-5 \\\r\n> --max_length_seconds 1 \\\r\n> --attention_mask False \\\r\n> --warmup_ratio 0.1 \\\r\n> --num_train_epochs 5 \\\r\n> --per_device_train_batch_size 32 \\\r\n> --gradient_accumulation_steps 4 \\\r\n> --per_device_eval_batch_size 32 \\\r\n> --dataloader_num_workers 4 \\\r\n> --logging_strategy steps \\\r\n> --logging_steps 10 \\\r\n> --evaluation_strategy epoch \\\r\n> --save_strategy epoch \\\r\n> --load_best_model_at_end True \\\r\n> --metric_for_best_model accuracy \\\r\n> --save_total_limit 3 \\\r\n> --seed 0 \\\r\n> --push_to_hub \\\r\n> **--use_auth_token=True** \r\n\r\n\n\n### Expected behavior\n\nBe able to train a model with the https:\/\/github.com\/huggingface\/transformers\/blob\/main\/examples\/pytorch\/audio-classification\/run_audio_classification.py script on a private custom dataset stored on HuggingFace.\n\n### Environment info\n\n- datasets version: 2.12.0 \r\n- `transformers` version: 4.30.0.dev0\r\n- Platform: Linux-5.4.204-ql-generic-12.0-19-x86_64-with-glibc2.17\r\n- Python version: 3.8.12\r\n- Huggingface_hub version: 0.15.1\r\n- Safetensors version: 0.3.1\r\n- PyTorch version (GPU?): 2.0.1+cu117 (True)\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.24.3\r\n[pip3] torch==2.0.1\r\n[pip3] torchaudio==2.0.2\r\n[conda] numpy 1.24.3 pypi_0 pypi\r\n[conda] torch 2.0.1 pypi_0 pypi\r\n[conda] torchaudio 2.0.2 pypi_0 pypi\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5930\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5930\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5929","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5929\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5929\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5929\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5929","id":1744478456,"node_id":"I_kwDODunzps5n-qD4","number":5929,"title":"Importing PyTorch reduces multiprocessing performance for 
map","user":{"login":"Maxscha","id":12814709,"node_id":"MDQ6VXNlcjEyODE0NzA5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12814709?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Maxscha","html_url":"https:\/\/github.com\/Maxscha","followers_url":"https:\/\/api.github.com\/users\/Maxscha\/followers","following_url":"https:\/\/api.github.com\/users\/Maxscha\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Maxscha\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Maxscha\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Maxscha\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Maxscha\/orgs","repos_url":"https:\/\/api.github.com\/users\/Maxscha\/repos","events_url":"https:\/\/api.github.com\/users\/Maxscha\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Maxscha\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! The times match when I run this code locally or on Colab.\r\n\r\nAlso, we use `multiprocess`, not `multiprocessing`, for parallelization, and torch's `__init__.py` (executed on `import torch` ) slightly modifies the latter.","Hey Mariosasko,\r\n\r\nThanks for looking into it. We further did some investigations after your comment and figured out it's only affecting some hardware\/software configurations with the `pytorch` installation of `conda-forge`. Based on this we found the following issue in PyTorch: https:\/\/github.com\/pytorch\/pytorch\/issues\/102269 with a quick fix for now.\r\n\r\nSince it seems to be a deeper issue with forking processes, the difference between`multiprocess` and `multiprocessing` didn't make a difference.\r\n\r\nClosing this, since the issue comes from `pytorch` not `dataset`. 
\r\n"],"created_at":1686080545000,"updated_at":1686920952000,"closed_at":1686920952000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\r\n\r\nI noticed that the performance of my dataset preprocessing with `map(...,num_proc=32)` decreases when PyTorch is imported.\r\n\r\n### Steps to reproduce the bug\r\n\r\nI created two example scripts to reproduce this behavior:\r\n\r\n```\r\nimport datasets\r\ndatasets.disable_caching()\r\n\r\nfrom datasets import Dataset\r\nimport time\r\n \r\nPROC=32\r\n\r\nif __name__ == \"__main__\":\r\n dataset = [True] * 10000000\r\n dataset = Dataset.from_dict({'train': dataset})\r\n \r\n\r\n start = time.time()\r\n dataset.map(lambda x: x, num_proc=PROC)\r\n end = time.time()\r\n print(end - start)\r\n```\r\nTakes around 4 seconds on my machine.\r\n\r\nWhile the same code, but with an `import torch`:\r\n```\r\nimport datasets\r\ndatasets.disable_caching()\r\n\r\nfrom datasets import Dataset\r\nimport time\r\nimport torch\r\n \r\nPROC=32\r\n\r\nif __name__ == \"__main__\":\r\n dataset = [True] * 10000000\r\n dataset = Dataset.from_dict({'train': dataset})\r\n \r\n\r\n start = time.time()\r\n dataset.map(lambda x: x, num_proc=PROC)\r\n end = time.time()\r\n print(end - start)\r\n```\r\ntakes around 22 seconds.\r\n\r\n\r\n\r\n### Expected behavior\r\n\r\nI would expect that the import of torch to not have such a significant effect on the performance of map using multiprocessing.\r\n\r\n\r\n### Environment info\r\n\r\n- `datasets` version: 2.12.0\r\n- Platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.35\r\n- Python version: 3.11.3\r\n- Huggingface_hub version: 0.15.1\r\n- PyArrow version: 12.0.0\r\n- Pandas version: 2.0.2\r\n- torch: 2.0.1","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5929\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5929\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5928","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5928\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5928\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5928\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5928","id":1744098371,"node_id":"PR_kwDODunzps5SUXPC","number":5928,"title":"Fix link to quickstart docs in 
README.md","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006693 \/ 0.011353 (-0.004660) | 0.004331 \/ 0.011008 (-0.006677) | 0.098022 \/ 0.038508 (0.059514) | 0.032764 \/ 0.023109 (0.009654) | 0.295812 \/ 0.275898 (0.019914) | 0.325029 \/ 0.323480 (0.001550) | 0.005779 \/ 0.007986 (-0.002206) | 0.005381 \/ 0.004328 (0.001052) | 0.075785 \/ 0.004250 (0.071535) | 0.048759 \/ 0.037052 (0.011707) | 0.308986 \/ 0.258489 (0.050497) | 0.348000 \/ 0.293841 (0.054159) | 0.027686 \/ 0.128546 (-0.100860) | 0.008839 \/ 0.075646 (-0.066807) | 0.328389 \/ 0.419271 (-0.090883) | 0.062173 \/ 0.043533 (0.018640) | 0.312257 \/ 0.255139 (0.057119) | 0.325024 \/ 0.283200 (0.041824) | 0.103886 \/ 0.141683 (-0.037797) | 1.440215 \/ 1.452155 (-0.011940) | 1.528665 \/ 1.492716 (0.035948) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.210082 \/ 0.018006 (0.192076) | 0.442480 \/ 0.000490 (0.441990) | 0.006559 \/ 0.000200 (0.006359) | 0.000092 \/ 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.026774 \/ 0.037411 (-0.010637) | 0.108362 \/ 0.014526 (0.093837) | 0.117631 \/ 0.176557 (-0.058926) | 0.176657 \/ 0.737135 (-0.560478) | 0.124154 \/ 0.296338 (-0.172184) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.428136 \/ 0.215209 (0.212927) | 4.270287 \/ 2.077655 (2.192632) | 2.014728 \/ 1.504120 (0.510608) | 1.806772 \/ 1.541195 (0.265577) | 1.946284 \/ 1.468490 (0.477794) | 0.525542 \/ 4.584777 (-4.059235) | 
3.667025 \/ 3.745712 (-0.078687) | 1.878751 \/ 5.269862 (-3.391111) | 1.048321 \/ 4.565676 (-3.517356) | 0.065550 \/ 0.424275 (-0.358725) | 0.011881 \/ 0.007607 (0.004274) | 0.529873 \/ 0.226044 (0.303829) | 5.289641 \/ 2.268929 (3.020712) | 2.489403 \/ 55.444624 (-52.955221) | 2.141037 \/ 6.876477 (-4.735440) | 2.230735 \/ 2.142072 (0.088662) | 0.639781 \/ 4.805227 (-4.165447) | 0.141410 \/ 6.500664 (-6.359254) | 0.064374 \/ 0.075469 (-0.011095) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.159462 \/ 1.841788 (-0.682325) | 14.524730 \/ 8.074308 (6.450422) | 13.578070 \/ 10.191392 (3.386678) | 0.152138 \/ 0.680424 (-0.528286) | 0.017255 \/ 0.534201 (-0.516946) | 0.387607 \/ 0.579283 (-0.191676) | 0.413652 \/ 0.434364 (-0.020712) | 0.453644 \/ 0.540337 (-0.086693) | 0.550051 \/ 1.386936 (-0.836885) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006668 \/ 0.011353 (-0.004685) | 0.004677 \/ 0.011008 (-0.006331) | 0.075950 \/ 0.038508 (0.037442) | 0.032439 \/ 0.023109 (0.009329) | 0.381839 \/ 0.275898 (0.105941) | 0.419411 \/ 0.323480 (0.095931) | 0.005813 \/ 0.007986 (-0.002172) | 0.004090 \/ 0.004328 (-0.000238) | 0.075052 \/ 0.004250 (0.070802) | 0.048453 \/ 0.037052 (0.011401) | 0.388076 \/ 0.258489 (0.129587) | 0.431793 \/ 0.293841 (0.137952) | 0.028408 \/ 0.128546 (-0.100138) | 0.009028 \/ 0.075646 (-0.066618) | 0.082569 \/ 0.419271 (-0.336702) | 0.046772 \/ 0.043533 (0.003239) | 0.380182 \/ 0.255139 (0.125043) | 0.401828 \/ 0.283200 (0.118629) | 0.105388 \/ 0.141683 (-0.036294) | 1.453356 \/ 1.452155 (0.001201) | 1.561483 \/ 1.492716 (0.068767) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.008922 \/ 0.018006 (-0.009084) | 0.444112 \/ 0.000490 (0.443623) | 0.002756 \/ 0.000200 (0.002556) | 0.000104 \/ 0.000054 (0.000050) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030408 \/ 0.037411 (-0.007003) | 0.112924 \/ 0.014526 (0.098399) | 0.124625 \/ 0.176557 (-0.051932) | 0.176915 \/ 0.737135 (-0.560220) | 0.129141 \/ 0.296338 (-0.167198) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.448197 \/ 0.215209 (0.232987) | 4.476548 \/ 2.077655 (2.398893) | 2.243977 \/ 1.504120 (0.739857) | 2.054060 \/ 1.541195 (0.512865) | 2.130680 \/ 1.468490 (0.662190) | 0.526815 \/ 4.584777 (-4.057962) | 
3.759312 \/ 3.745712 (0.013600) | 3.333618 \/ 5.269862 (-1.936244) | 1.579611 \/ 4.565676 (-2.986065) | 0.065714 \/ 0.424275 (-0.358561) | 0.011939 \/ 0.007607 (0.004332) | 0.550313 \/ 0.226044 (0.324269) | 5.476946 \/ 2.268929 (3.208018) | 2.726521 \/ 55.444624 (-52.718104) | 2.364977 \/ 6.876477 (-4.511499) | 2.450624 \/ 2.142072 (0.308551) | 0.647174 \/ 4.805227 (-4.158053) | 0.141265 \/ 6.500664 (-6.359399) | 0.065493 \/ 0.075469 (-0.009976) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.249702 \/ 1.841788 (-0.592085) | 15.205647 \/ 8.074308 (7.131338) | 14.678310 \/ 10.191392 (4.486918) | 0.141539 \/ 0.680424 (-0.538884) | 0.017323 \/ 0.534201 (-0.516878) | 0.387602 \/ 0.579283 (-0.191681) | 0.415106 \/ 0.434364 (-0.019258) | 0.458146 \/ 0.540337 (-0.082192) | 0.553318 \/ 1.386936 (-0.833618) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#55127d7bf399fd2f3a8713db9822e8cb47cdbbed \"CML watermark\")\n","
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008567 \/ 0.011353 (-0.002786) | 0.005245 \/ 0.011008 (-0.005763) | 0.115074 \/ 0.038508 (0.076566) | 0.032567 \/ 0.023109 (0.009458) | 0.352297 \/ 0.275898 (0.076399) | 0.393403 \/ 0.323480 (0.069923) | 0.006402 \/ 0.007986 (-0.001583) | 0.004353 \/ 0.004328 (0.000025) | 0.087903 \/ 0.004250 (0.083653) | 0.048424 \/ 0.037052 (0.011372) | 0.370078 \/ 0.258489 (0.111588) | 0.410192 \/ 0.293841 (0.116351) | 0.042396 \/ 0.128546 (-0.086150) | 0.014426 \/ 0.075646 (-0.061220) | 0.411358 \/ 0.419271 (-0.007914) | 0.059546 \/ 0.043533 (0.016013) | 0.364721 \/ 0.255139 (0.109582) | 0.385100 \/ 0.283200 (0.101901) | 0.100572 \/ 0.141683 (-0.041111) | 1.741457 \/ 1.452155 (0.289302) | 1.933134 \/ 1.492716 (0.440418) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.217177 \/ 0.018006 (0.199171) | 0.510399 \/ 0.000490 (0.509909) | 0.005542 \/ 0.000200 (0.005342) | 0.000120 \/ 0.000054 (0.000065) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.026852 \/ 0.037411 (-0.010559) | 0.125580 \/ 0.014526 (0.111054) | 0.132164 \/ 0.176557 (-0.044392) | 0.189073 \/ 0.737135 (-0.548063) | 0.135980 \/ 0.296338 (-0.160358) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.601924 \/ 0.215209 (0.386715) | 5.891397 \/ 2.077655 (3.813743) | 2.389494 \/ 1.504120 (0.885375) | 2.044013 \/ 1.541195 (0.502818) | 2.019367 \/ 1.468490 (0.550877) | 0.883807 \/ 4.584777 (-3.700970) | 
5.141349 \/ 3.745712 (1.395636) | 2.607415 \/ 5.269862 (-2.662446) | 1.567268 \/ 4.565676 (-2.998409) | 0.102738 \/ 0.424275 (-0.321537) | 0.013480 \/ 0.007607 (0.005873) | 0.744979 \/ 0.226044 (0.518934) | 7.404182 \/ 2.268929 (5.135254) | 2.983406 \/ 55.444624 (-52.461219) | 2.331847 \/ 6.876477 (-4.544630) | 2.465119 \/ 2.142072 (0.323047) | 1.106725 \/ 4.805227 (-3.698502) | 0.205779 \/ 6.500664 (-6.294885) | 0.081019 \/ 0.075469 (0.005550) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.527840 \/ 1.841788 (-0.313947) | 16.989487 \/ 8.074308 (8.915179) | 18.016123 \/ 10.191392 (7.824731) | 0.216157 \/ 0.680424 (-0.464266) | 0.025393 \/ 0.534201 (-0.508808) | 0.496743 \/ 0.579283 (-0.082540) | 0.575365 \/ 0.434364 (0.141002) | 0.559978 \/ 0.540337 (0.019641) | 0.677474 \/ 1.386936 (-0.709462) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008913 \/ 0.011353 (-0.002440) | 0.005540 \/ 0.011008 (-0.005469) | 0.100001 \/ 0.038508 (0.061493) | 0.034432 \/ 0.023109 (0.011323) | 0.419824 \/ 0.275898 (0.143926) | 0.443566 \/ 0.323480 (0.120086) | 0.006372 \/ 0.007986 (-0.001614) | 0.004405 \/ 0.004328 (0.000077) | 0.094927 \/ 0.004250 (0.090677) | 0.050300 \/ 0.037052 (0.013248) | 0.424806 \/ 0.258489 (0.166317) | 0.480793 \/ 0.293841 (0.186952) | 0.050869 \/ 0.128546 (-0.077677) | 0.015899 \/ 0.075646 (-0.059747) | 0.111413 \/ 0.419271 (-0.307859) | 0.058093 \/ 0.043533 (0.014560) | 0.430575 \/ 0.255139 (0.175436) | 0.483786 \/ 0.283200 (0.200586) | 0.106878 \/ 0.141683 (-0.034805) | 1.763576 \/ 1.452155 (0.311422) | 1.837750 \/ 1.492716 (0.345033) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.011565 \/ 0.018006 (-0.006441) | 0.484411 \/ 0.000490 (0.483922) | 0.004869 \/ 0.000200 (0.004669) | 0.000111 \/ 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030706 \/ 0.037411 (-0.006706) | 0.126901 \/ 0.014526 (0.112375) | 0.130367 \/ 0.176557 (-0.046190) | 0.206568 \/ 0.737135 (-0.530567) | 0.146505 \/ 0.296338 (-0.149834) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.627266 \/ 0.215209 (0.412057) | 6.314049 \/ 2.077655 (4.236394) | 2.582920 \/ 1.504120 (1.078800) | 2.249401 \/ 1.541195 (0.708206) | 2.244960 \/ 1.468490 (0.776470) | 0.907770 \/ 4.584777 (-3.677007) | 
5.349622 \/ 3.745712 (1.603910) | 4.591244 \/ 5.269862 (-0.678618) | 2.301612 \/ 4.565676 (-2.264064) | 0.108813 \/ 0.424275 (-0.315462) | 0.013187 \/ 0.007607 (0.005580) | 0.806071 \/ 0.226044 (0.580027) | 7.843903 \/ 2.268929 (5.574974) | 3.405968 \/ 55.444624 (-52.038656) | 2.564301 \/ 6.876477 (-4.312176) | 2.652208 \/ 2.142072 (0.510135) | 1.168142 \/ 4.805227 (-3.637086) | 0.218551 \/ 6.500664 (-6.282113) | 0.078120 \/ 0.075469 (0.002651) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.562517 \/ 1.841788 (-0.279271) | 17.519325 \/ 8.074308 (9.445017) | 20.727083 \/ 10.191392 (10.535691) | 0.207135 \/ 0.680424 (-0.473288) | 0.028208 \/ 0.534201 (-0.505993) | 0.496157 \/ 0.579283 (-0.083126) | 0.569239 \/ 0.434364 (0.134875) | 0.566137 \/ 0.540337 (0.025799) | 0.704208 \/ 1.386936 (-0.682728) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#8eb3f34d876da98e722d866be90d7f26135ea9e3 \"CML watermark\")\n"],"created_at":1686064981000,"updated_at":1686066754000,"closed_at":1686066233000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5928","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5928","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5928.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5928.patch","merged_at":1686066233000},"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5928\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5928\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5927","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5927\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5927\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5927\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5927","id":1744009032,"node_id":"I_kwDODunzps5n83dI","number":5927,"title":"`IndexError` when indexing `Sequence` of `Array2D` with `None` 
values","user":{"login":"qgallouedec","id":45557362,"node_id":"MDQ6VXNlcjQ1NTU3MzYy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/45557362?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/qgallouedec","html_url":"https:\/\/github.com\/qgallouedec","followers_url":"https:\/\/api.github.com\/users\/qgallouedec\/followers","following_url":"https:\/\/api.github.com\/users\/qgallouedec\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/qgallouedec\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/qgallouedec\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/qgallouedec\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/qgallouedec\/orgs","repos_url":"https:\/\/api.github.com\/users\/qgallouedec\/repos","events_url":"https:\/\/api.github.com\/users\/qgallouedec\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/qgallouedec\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Easy fix would be to add:\r\n\r\n```python\r\nnull_indices -= np.arange(len(null_indices))\r\n```\r\n\r\nbefore L279, but I'm not sure it's the most intuitive way to fix it.","Same issue here:\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/7fcbe5b1575c8d162b65b9397b3dfda995a4e048\/src\/datasets\/features\/features.py#L1398\r\n\r\nFixed in #5948 "],"created_at":1686062182000,"updated_at":1686659979000,"closed_at":1686317030000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nHaving `None` values in a `Sequence` of `ArrayND` fails.\r\n\n\n### Steps to reproduce the bug\n\n```python\r\nfrom datasets import Array2D, Dataset, Features, Sequence\r\n\r\ndata = [\r\n [\r\n [[0]],\r\n None,\r\n None,\r\n ]\r\n]\r\nfeature = Sequence(Array2D((1, 1), dtype=\"int64\"))\r\ndataset = Dataset.from_dict({\"a\": data}, features=Features({\"a\": feature}))\r\n\r\ndataset[0] # error raised only when indexing\r\n```\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"\/Users\/quentingallouedec\/gia\/c.py\", line 13, in \r\n dataset[0] # error raised only when indexing\r\n File \"\/Users\/quentingallouedec\/gia\/env\/lib\/python3.10\/site-packages\/datasets\/arrow_dataset.py\", line 2658, in __getitem__\r\n return self._getitem(key)\r\n File \"\/Users\/quentingallouedec\/gia\/env\/lib\/python3.10\/site-packages\/datasets\/arrow_dataset.py\", line 2643, in _getitem\r\n formatted_output = format_table(\r\n File \"\/Users\/quentingallouedec\/gia\/env\/lib\/python3.10\/site-packages\/datasets\/formatting\/formatting.py\", line 634, in format_table\r\n return formatter(pa_table, query_type=query_type)\r\n File \"\/Users\/quentingallouedec\/gia\/env\/lib\/python3.10\/site-packages\/datasets\/formatting\/formatting.py\", line 406, in __call__\r\n return self.format_row(pa_table)\r\n File \"\/Users\/quentingallouedec\/gia\/env\/lib\/python3.10\/site-packages\/datasets\/formatting\/formatting.py\", line 441, in format_row\r\n row = self.python_arrow_extractor().extract_row(pa_table)\r\n File \"\/Users\/quentingallouedec\/gia\/env\/lib\/python3.10\/site-packages\/datasets\/formatting\/formatting.py\", line 144, in extract_row\r\n return _unnest(pa_table.to_pydict())\r\n File \"pyarrow\/table.pxi\", line 4146, in pyarrow.lib.Table.to_pydict\r\n File \"pyarrow\/table.pxi\", line 1312, in 
pyarrow.lib.ChunkedArray.to_pylist\r\n File \"pyarrow\/array.pxi\", line 1521, in pyarrow.lib.Array.to_pylist\r\n File \"pyarrow\/scalar.pxi\", line 675, in pyarrow.lib.ListScalar.as_py\r\n File \"\/Users\/quentingallouedec\/gia\/env\/lib\/python3.10\/site-packages\/datasets\/features\/features.py\", line 760, in to_pylist\r\n return self.to_numpy(zero_copy_only=zero_copy_only).tolist()\r\n File \"\/Users\/quentingallouedec\/gia\/env\/lib\/python3.10\/site-packages\/datasets\/features\/features.py\", line 725, in to_numpy\r\n numpy_arr = np.insert(numpy_arr.astype(np.float64), null_indices, np.nan, axis=0)\r\n File \"<__array_function__ internals>\", line 200, in insert\r\n File \"\/Users\/quentingallouedec\/gia\/env\/lib\/python3.10\/site-packages\/numpy\/lib\/function_base.py\", line 5426, in insert\r\n old_mask[indices] = False\r\nIndexError: index 3 is out of bounds for axis 0 with size 3\r\n```\r\n\r\nAFAIK, the problem only occurs when you use a `Sequence` of `ArrayND`.\r\n\r\nI strongly suspect that the problem comes from this line, where `np.insert` is misused:\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/02ee418831aba68d0be93227bce8b3f42ef8980f\/src\/datasets\/features\/features.py#L729\r\n\r\nTo put it simply, you want something that does this:\r\n\r\n```python\r\nimport numpy as np\r\nnumpy_arr = np.zeros((1, 1, 1))\r\nnull_indices = np.array([1, 2])\r\nnp.insert(numpy_arr, null_indices, np.nan, axis=0)\r\n# raises an error, instead of outputting\r\n# array([[[ 0.]],\r\n# [[nan]],\r\n# [[nan]]])\r\n```
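\r\n\r\nFor instance, the offset trick from the first comment above makes `np.insert` behave as intended, because it converts positions in the final (post-insertion) array into valid pre-insertion indices (a minimal sketch):\r\n\r\n```python\r\nimport numpy as np\r\nnumpy_arr = np.zeros((1, 1, 1))\r\nnull_indices = np.array([1, 2])  # positions of the None values in the final array\r\nnull_indices = null_indices - np.arange(len(null_indices))  # -> array([1, 1]), pre-insertion positions\r\nnp.insert(numpy_arr, null_indices, np.nan, axis=0)\r\n# array([[[ 0.]],\r\n# [[nan]],\r\n# [[nan]]])\r\n```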
data","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":{"login":"albertvillanova","id":8515462.0,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting, @severo.\r\n\r\nThis is a known issue with `fsspec`:\r\n- #5862\r\n- https:\/\/github.com\/fsspec\/filesystem_spec\/issues\/1265"],"created_at":1686059461000,"updated_at":1686124396000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nDataset 
https:\/\/huggingface.co\/datasets\/blog_authorship_corpus has an issue with its hosting platform, since https:\/\/drive.google.com\/u\/0\/uc?id=1cGy4RNDV87ZHEXbiozABr9gsSrZpPaPz&export=download returns a 404 error.\r\n\r\nBut when trying to generate the split names, we get an exception which is not correctly caught.\r\n\r\nSeen originally in https:\/\/github.com\/huggingface\/datasets-server\/blob\/adbdcd6710ffed4e2eb2e4cd905b5e0dff530a15\/services\/worker\/src\/worker\/job_runners\/config\/parquet_and_info.py#L435\n\n### Steps to reproduce the bug\n\n```python\r\n>>> from datasets import StreamingDownloadManager, load_dataset_builder\r\n>>> builder = load_dataset_builder(path=\"blog_authorship_corpus\")\r\nDownloading builder script: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 5.60k\/5.60k [00:00<00:00, 23.1MB\/s]\r\nDownloading metadata: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2.81k\/2.81k [00:00<00:00, 14.7MB\/s]\r\nDownloading readme: 
100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 7.30k\/7.30k [00:00<00:00, 30.8MB\/s]\r\n>>> dl_manager = StreamingDownloadManager(base_path=builder.base_path)\r\n>>> builder._split_generators(dl_manager)\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/home\/slesage\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/blog_authorship_corpus\/6f5d78241afd8313111956f877a57db7a0e9fc6718255dc85df0928197feb683\/blog_authorship_corpus.py\", line 79, in _split_generators\r\n data = dl_manager.download_and_extract(_DATA_URL)\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/download\/streaming_download_manager.py\", line 1087, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/download\/streaming_download_manager.py\", line 1039, in extract\r\n urlpaths = map_nested(self._extract, url_or_urls, map_tuple=True)\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/utils\/py_utils.py\", line 435, in map_nested\r\n return function(data_struct)\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/download\/streaming_download_manager.py\", line 1044, in _extract\r\n protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/download\/streaming_download_manager.py\", line 433, in _get_extraction_protocol\r\n with fsspec.open(urlpath, **kwargs) as f:\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/fsspec\/core.py\", line 439, in open\r\n return open_files(\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/fsspec\/core.py\", line 194, in __getitem__\r\n out = super().__getitem__(item)\r\nIndexError: list index out of range\r\n```\n\n### Expected behavior\n\nWe should have an Exception raised by the datasets library.\n\n### Environment info\n\n\r\n- `datasets` version: 2.12.0\r\n- Platform: 
Linux-5.19.0-1026-aws-x86_64-with-glibc2.35\r\n- Python version: 3.9.15\r\n- Huggingface_hub version: 0.15.1\r\n- PyArrow version: 11.0.0\r\n- Pandas version: 2.0.2","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5926\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5926\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false}
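A minimal sketch of the expected handling, assuming a hypothetical wrapper around the `fsspec.open` call made in `_get_extraction_protocol` (the helper name is illustrative, not the actual fix that landed):

```python
import fsspec

def open_for_protocol_detection(urlpath: str, **kwargs):
    # Turn the low-level IndexError that fsspec raises on a dead link
    # (e.g. the Google Drive 404 above) into an explicit, catchable error.
    try:
        return fsspec.open(urlpath, **kwargs)
    except IndexError as err:
        raise FileNotFoundError(f"Couldn't open {urlpath}") from err
```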
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5925","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5925\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5925\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5925\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5925","id":1741941436,"node_id":"I_kwDODunzps5n0-q8","number":5925,"title":"Breaking API change in datasets.list_datasets caused by change in HfApi.list_datasets","user":{"login":"mtkinit","id":78868366,"node_id":"MDQ6VXNlcjc4ODY4MzY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/78868366?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mtkinit","html_url":"https:\/\/github.com\/mtkinit","followers_url":"https:\/\/api.github.com\/users\/mtkinit\/followers","following_url":"https:\/\/api.github.com\/users\/mtkinit\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mtkinit\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mtkinit\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mtkinit\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mtkinit\/orgs","repos_url":"https:\/\/api.github.com\/users\/mtkinit\/repos","events_url":"https:\/\/api.github.com\/users\/mtkinit\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mtkinit\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1685976364000,"updated_at":1687195363000,"closed_at":1687195363000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nHi all,\r\n\r\nafter an update of the `datasets` library, we observed crashes in our code. We relied on `datasets.list_datasets` returning a `list`. Now, after the API of `HfApi.list_datasets` was changed and it returns an `Iterable` instead of a `list`, `datasets.list_datasets` now sometimes returns a `list` and sometimes an `Iterable`.\r\n\r\nIt would be helpful to indicate that in the return type of the `datasets.list_datasets` function.\r\n\r\nThanks,\r\nMartin \n\n### Steps to reproduce the bug\n\nHere, the code crashed after we updated the `datasets` library:\r\n\r\n```python\r\n# list_datasets no longer returns a list, which leads to an error when one tries to slice it\r\nfor dataset in datasets.list_datasets(with_details=True)[:limit]:\r\n ...\r\n```\n\n### Expected behavior\n\nIt would be helpful to indicate that in the return type of the `datasets.list_datasets` function.\n\n### Environment info\n\nUbuntu 22.04\r\ndatasets 2.12.0","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5925\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5925\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false}
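A defensive pattern for callers in the meantime (a sketch; `limit` is a stand-in variable): iterate instead of slicing, which works whether `list_datasets` returns a `list` or a lazy iterable.

```python
from itertools import islice

import datasets

limit = 10
# islice works on lists and generators alike, so this is robust to the
# return type changing between datasets/huggingface_hub versions.
for ds_info in islice(datasets.list_datasets(with_details=True), limit):
    ...
```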
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5924","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5924\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5924\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5924\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5924","id":1738889236,"node_id":"PR_kwDODunzps5SCiFv","number":5924,"title":"Add parallel module using joblib for Spark","user":{"login":"es94129","id":12763339,"node_id":"MDQ6VXNlcjEyNzYzMzM5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12763339?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/es94129","html_url":"https:\/\/github.com\/es94129","followers_url":"https:\/\/api.github.com\/users\/es94129\/followers","following_url":"https:\/\/api.github.com\/users\/es94129\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/es94129\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/es94129\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/es94129\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/es94129\/orgs","repos_url":"https:\/\/api.github.com\/users\/es94129\/repos","events_url":"https:\/\/api.github.com\/users\/es94129\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/es94129\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @lhoestq, I added the `parallel` part according to the discussion we had. Could you take a look to see if this is aligned with your proposal?\r\n\r\nMeanwhile I'm working on adding a `parallel_backend` parameter to `load_dataset` so that it can be used like:\r\n```python\r\nwith parallel_backend('spark', steps=['downloading']) as backend:\r\n ds = load_dataset(..., parallel_backend=backend)\r\n```\r\nwhere `parallel_backend` is a `ParallelBackend` class.","_The documentation is not available anymore as the PR was closed or merged._","@lhoestq Thanks for the comments!\r\nWith your suggestion, no changes were made to `load_dataset`, and I validated that downloading with Spark now works with this:\r\n```py\r\nwith parallel_backend('spark', steps=[\"download\"]):\r\n dataset = load_dataset(..., num_proc=2)\r\n```","@lhoestq Can a maintainer help trigger the tests again?\r\n> One idea is to decorate the download method to set the current global step to \"download\", and then only use joblib if the current step is one of the steps provided in parallel_backend.\r\n\r\nYes I think this is doable in a subsequent PR.\r\nFor throwing `NotImplementedError` I also think it can be done in a subsequent PR, because I'm not sure if `Dataset.map` is the only function that a user would expect to run using `with parallel_backend`.","Just triggered the tests :)\r\n\r\n> Yes I think this is doable in a subsequent PR.\r\n> For throwing NotImplementedError I also think it can be done in a subsequent PR, because I'm not sure if Dataset.map is the only function that a user would expect to run using with parallel_backend.\r\n\r\nI think any Dataset method that has a `num_proc` argument: Dataset.map (the other methods like filter or cast are based on map), and later we can see for the to_xxx methods (to_csv, to_parquet, etc.)","Hi maintainers, I've just addressed most of the comments, please take another look, thank you.",
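For reference, the generic joblib pattern this PR builds on (a sketch using joblib's built-in "loky" backend; the PR registers a "spark" backend that plugs in the same way, which is our reading of the discussion rather than the PR's exact code):

```python
from joblib import Parallel, delayed, parallel_backend

def square(x):
    return x * x

# The backend is selected with a context manager; the Parallel(...) call
# itself stays unchanged, which is what makes the approach pluggable.
with parallel_backend("loky", n_jobs=2):
    results = Parallel()(delayed(square)(i) for i in range(8))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```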
"<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008422 \/ 0.011353 (-0.002931) | 0.005658 \/ 0.011008 (-0.005350) | 0.135372 \/ 0.038508 (0.096864) | 0.044766 \/ 0.023109 (0.021657) | 0.417876 \/ 0.275898 (0.141978) | 0.462785 \/ 0.323480 (0.139305) | 0.005485 \/ 0.007986 (-0.002501) | 0.005640 \/ 0.004328 (0.001311) | 0.105020 \/ 0.004250 (0.100770) | 0.049114 \/ 0.037052 (0.012062) | 0.490450 \/ 0.258489 (0.231961) | 0.467693 \/ 0.293841 (0.173852) | 0.050929 \/ 0.128546 (-0.077617) | 0.014644 \/ 0.075646 (-0.061002) | 0.452373 \/ 0.419271 (0.033101) | 0.074897 \/ 0.043533 (0.031364) | 0.425816 \/ 0.255139 (0.170677) | 0.420415 \/ 0.283200 (0.137215) | 0.134121 \/ 0.141683 (-0.007561) | 1.927744 \/ 1.452155 (0.475589) | 2.014417 \/ 1.492716 (0.521701) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.254811 \/ 0.018006 (0.236805) | 0.550011 \/ 0.000490 (0.549521) | 0.004913 \/ 0.000200 (0.004714) | 0.000117 \/ 0.000054 (0.000062) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.032644 \/ 0.037411 (-0.004768) | 0.135672 \/ 0.014526 (0.121146) | 0.158984 \/ 0.176557 (-0.017572) | 0.218267 \/ 0.737135 (-0.518869) | 0.150348 \/ 0.296338 (-0.145991) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.625723 \/ 0.215209 (0.410514) | 6.247559 \/ 2.077655 (4.169905) | 2.626785 \/ 1.504120 (1.122666) | 2.195224 \/ 1.541195 (0.654030) | 2.232140 \/ 1.468490 (0.763650) | 0.943082 \/ 4.584777 (-3.641695) | 
5.799262 \/ 3.745712 (2.053550) | 2.849411 \/ 5.269862 (-2.420450) | 1.744160 \/ 4.565676 (-2.821516) | 0.119056 \/ 0.424275 (-0.305219) | 0.014233 \/ 0.007607 (0.006626) | 0.795238 \/ 0.226044 (0.569194) | 7.569586 \/ 2.268929 (5.300657) | 3.179481 \/ 55.444624 (-52.265143) | 2.519772 \/ 6.876477 (-4.356704) | 2.714570 \/ 2.142072 (0.572498) | 1.107197 \/ 4.805227 (-3.698030) | 0.229986 \/ 6.500664 (-6.270678) | 0.087993 \/ 0.075469 (0.012524) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.535610 \/ 1.841788 (-0.306178) | 18.639369 \/ 8.074308 (10.565061) | 21.081844 \/ 10.191392 (10.890452) | 0.253247 \/ 0.680424 (-0.427177) | 0.026711 \/ 0.534201 (-0.507490) | 0.503790 \/ 0.579283 (-0.075493) | 0.600124 \/ 0.434364 (0.165760) | 0.617944 \/ 0.540337 (0.077607) | 0.766947 \/ 1.386936 (-0.619989) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007885 \/ 0.011353 (-0.003468) | 0.004761 \/ 0.011008 (-0.006248) | 0.097995 \/ 0.038508 (0.059487) | 0.033624 \/ 0.023109 (0.010515) | 0.504307 \/ 0.275898 (0.228409) | 0.534803 \/ 0.323480 (0.211323) | 0.006048 \/ 0.007986 (-0.001937) | 0.005042 \/ 0.004328 (0.000714) | 0.102288 \/ 0.004250 (0.098038) | 0.048695 \/ 0.037052 (0.011643) | 0.559086 \/ 0.258489 (0.300597) | 0.553233 \/ 0.293841 (0.259392) | 0.044596 \/ 0.128546 (-0.083950) | 0.013696 \/ 0.075646 (-0.061950) | 0.109875 \/ 0.419271 (-0.309397) | 0.059993 \/ 0.043533 (0.016460) | 0.485579 \/ 0.255139 (0.230440) | 0.519835 \/ 0.283200 (0.236635) | 0.123504 \/ 0.141683 (-0.018179) | 1.820506 \/ 1.452155 (0.368351) | 1.963448 \/ 1.492716 (0.470732) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.292663 \/ 0.018006 (0.274656) | 0.557783 \/ 0.000490 (0.557293) | 0.001330 \/ 0.000200 (0.001130) | 0.000112 \/ 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.036890 \/ 0.037411 (-0.000522) | 0.140373 \/ 0.014526 (0.125847) | 0.140176 \/ 0.176557 (-0.036381) | 0.237378 \/ 0.737135 (-0.499757) | 0.160186 \/ 0.296338 (-0.136152) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.673599 \/ 0.215209 (0.458390) | 6.510280 \/ 2.077655 (4.432625) | 2.981617 \/ 1.504120 (1.477497) | 2.684664 \/ 1.541195 (1.143469) | 2.760471 \/ 1.468490 (1.291981) | 0.975413 \/ 4.584777 (-3.609364) | 
5.708933 \/ 3.745712 (1.963220) | 2.772069 \/ 5.269862 (-2.497793) | 1.763627 \/ 4.565676 (-2.802049) | 0.111632 \/ 0.424275 (-0.312643) | 0.013223 \/ 0.007607 (0.005616) | 0.791545 \/ 0.226044 (0.565500) | 8.063287 \/ 2.268929 (5.794359) | 3.671920 \/ 55.444624 (-51.772704) | 3.057248 \/ 6.876477 (-3.819229) | 3.083569 \/ 2.142072 (0.941497) | 1.118136 \/ 4.805227 (-3.687092) | 0.214655 \/ 6.500664 (-6.286009) | 0.083074 \/ 0.075469 (0.007605) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.761731 \/ 1.841788 (-0.080056) | 18.874200 \/ 8.074308 (10.799892) | 22.383693 \/ 10.191392 (12.192301) | 0.240292 \/ 0.680424 (-0.440132) | 0.028850 \/ 0.534201 (-0.505351) | 0.557334 \/ 0.579283 (-0.021949) | 0.627732 \/ 0.434364 (0.193369) | 0.634484 \/ 0.540337 (0.094146) | 0.767372 \/ 1.386936 (-0.619564) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#accaaf2e69fbb5dc5e50229d2eb1591b8ad982b6 \"CML watermark\")\n"],"created_at":1685744725000,"updated_at":1686738310000,"closed_at":1686737746000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5924","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5924","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5924.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5924.patch","merged_at":1686737746000},"body":"Discussion in https:\/\/github.com\/huggingface\/datasets\/issues\/5798","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5924\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5924\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5923","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5923\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5923\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5923\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5923","id":1737436227,"node_id":"I_kwDODunzps5njyxD","number":5923,"title":"Cannot import datasets - ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary 
incompatibility","user":{"login":"ehuangc","id":71412682,"node_id":"MDQ6VXNlcjcxNDEyNjgy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/71412682?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ehuangc","html_url":"https:\/\/github.com\/ehuangc","followers_url":"https:\/\/api.github.com\/users\/ehuangc\/followers","following_url":"https:\/\/api.github.com\/users\/ehuangc\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ehuangc\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ehuangc\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ehuangc\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ehuangc\/orgs","repos_url":"https:\/\/api.github.com\/users\/ehuangc\/repos","events_url":"https:\/\/api.github.com\/users\/ehuangc\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ehuangc\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Based on https:\/\/github.com\/rapidsai\/cudf\/issues\/10187, this probably means your `pyarrow` installation is not compatible with `datasets`.\r\n\r\nCan you please execute the following commands in the terminal and paste the output here?\r\n```\r\nconda list | grep arrow\r\n``` \r\n```\r\npython -c \"import pyarrow; print(pyarrow.__file__)\"\r\n```\r\n\r\n\r\n","> Based on [rapidsai\/cudf#10187](https:\/\/github.com\/rapidsai\/cudf\/issues\/10187), this probably means your `pyarrow` installation is not compatible with `datasets`.\r\n> \r\n> Can you please execute the following commands in the terminal and paste the output here?\r\n> \r\n> ```\r\n> conda list | grep arrow\r\n> ```\r\n> \r\n> ```\r\n> python -c \"import pyarrow; print(pyarrow.__file__)\"\r\n> ```\r\n\r\n\r\nHere is the output to the first command:\r\n```\r\narrow-cpp 11.0.0 py39h7f74497_0 \r\npyarrow 12.0.0 pypi_0 pypi\r\n```\r\nand the second:\r\n```\r\n\/Users\/edward\/opt\/anaconda3\/envs\/cs235\/lib\/python3.9\/site-packages\/pyarrow\/__init__.py\r\n```\r\nThanks!\r\n\r\n\r\n\r\n","after installing pytesseract 0.3.10, I got the above error. FYI ","RuntimeError: Failed to import transformers.trainer because of the following error (look up to see its traceback):\r\npyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility. Expected 88 from C header, got 72 from PyObject","I got the same error, pyarrow 12.0.0 released May\/2023 (https:\/\/pypi.org\/project\/pyarrow\/) is not compatible, running `pip install pyarrow==11.0.0` to force install the previous version solved the problem.\r\n\r\nDo we need to update dependencies? ","Please note that our CI properly passes all tests with `pyarrow-12.0.0`, for Python 3.7 and Python 3.10, for Ubuntu and Windows: see for example https:\/\/github.com\/huggingface\/datasets\/actions\/runs\/5157324334\/jobs\/9289582291","For conda with python3.8.16 this solved my problem! thanks!\r\n\r\n> I got the same error, pyarrow 12.0.0 released May\/2023 (https:\/\/pypi.org\/project\/pyarrow\/) is not compatible, running `pip install pyarrow==11.0.0` to force install the previous version solved the problem.\r\n> \r\n> Do we need to update dependencies? I can work on that if no one else is working on it.\r\n\r\n","Thanks for replying. I am not sure about those environments but it seems like pyarrow-12.0.0 does not work for conda with python 3.8.16. 
\r\n\r\n> Please note that our CI properly passes all tests with `pyarrow-12.0.0`, for Python 3.7 and Python 3.10, for Ubuntu and Windows: see for example https:\/\/github.com\/huggingface\/datasets\/actions\/runs\/5157324334\/jobs\/9289582291\r\n\r\n","Got the same error with:\r\n\r\n```\r\narrow-cpp 11.0.0 py310h7516544_0 \r\npyarrow 12.0.0 pypi_0 pypi\r\n\r\npython 3.10.11 h7a1cb2a_2 \r\n\r\ndatasets 2.13.0 pyhd8ed1ab_0 conda-forge\r\n```","> I got the same error, pyarrow 12.0.0 released May\/2023 (https:\/\/pypi.org\/project\/pyarrow\/) is not compatible, running `pip install pyarrow==11.0.0` to force install the previous version solved the problem.\r\n> \r\n> Do we need to update dependencies?\r\n\r\nThis solved the issue for me as well.","> I got the same error, pyarrow 12.0.0 released May\/2023 (https:\/\/pypi.org\/project\/pyarrow\/) is not compatible, running `pip install pyarrow==11.0.0` to force install the previous version solved the problem.\r\n> \r\n> Do we need to update dependencies?\r\n\r\nSolved it for me also","> Based on [rapidsai\/cudf#10187](https:\/\/github.com\/rapidsai\/cudf\/issues\/10187), this probably means your `pyarrow` installation is not compatible with `datasets`.\r\n> \r\n> Could you execute the following commands in the terminal and paste the output here?\r\n> \r\n> ```\r\n> conda list | grep arrow\r\n> ```\r\n> \r\n> ```\r\n> python -c \"import pyarrow; print(pyarrow.__file__)\"\r\n> ```\r\n\r\narrow-cpp 11.0.0 py310h7516544_0 \r\npyarrow 12.0.1 pypi_0 pypi\r\n\r\n\/root\/miniconda3\/lib\/python3.10\/site-packages\/pyarrow\/__init__.py"],"created_at":1685679392000,"updated_at":1689075437000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\r\n\r\nWhen trying to import datasets, I get a pyarrow ValueError:\r\n\r\nTraceback (most recent call last):\r\n File \"\/Users\/edward\/test\/test.py\", line 1, in <module>\r\n import datasets\r\n File \"\/Users\/edward\/opt\/anaconda3\/envs\/cs235\/lib\/python3.9\/site-packages\/datasets\/__init__.py\", line 43, in <module>\r\n from .arrow_dataset import Dataset\r\n File \"\/Users\/edward\/opt\/anaconda3\/envs\/cs235\/lib\/python3.9\/site-packages\/datasets\/arrow_dataset.py\", line 65, in <module>\r\n from .arrow_reader import ArrowReader\r\n File \"\/Users\/edward\/opt\/anaconda3\/envs\/cs235\/lib\/python3.9\/site-packages\/datasets\/arrow_reader.py\", line 28, in <module>\r\n import pyarrow.parquet as pq\r\n File \"\/Users\/edward\/opt\/anaconda3\/envs\/cs235\/lib\/python3.9\/site-packages\/pyarrow\/parquet\/__init__.py\", line 20, in <module>\r\n from .core import *\r\n File \"\/Users\/edward\/opt\/anaconda3\/envs\/cs235\/lib\/python3.9\/site-packages\/pyarrow\/parquet\/core.py\", line 45, in <module>\r\n from pyarrow.fs import (LocalFileSystem, FileSystem, FileType,\r\n File \"\/Users\/edward\/opt\/anaconda3\/envs\/cs235\/lib\/python3.9\/site-packages\/pyarrow\/fs.py\", line 49, in <module>\r\n from pyarrow._gcsfs import GcsFileSystem # noqa\r\n File \"pyarrow\/_gcsfs.pyx\", line 1, in init pyarrow._gcsfs\r\nValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility. 
Expected 88 from C header, got 72 from PyObject\r\n\r\n### Steps to reproduce the bug\r\n\r\n`import datasets`\r\n\r\n### Expected behavior\r\n\r\nSuccessful import\r\n\r\n### Environment info\r\n\r\nConda environment, macOS\r\npython 3.9.12\r\ndatasets 2.12.0\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5923\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5923\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false}
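The thread's diagnosis is a conda-installed `arrow-cpp` 11.x mixed with a pip-installed `pyarrow` 12.x; the reported workaround was `pip install pyarrow==11.0.0`. A quick way to spot the mismatch from Python (a sketch):

```python
import pyarrow

# A version/location check: a pip-installed pyarrow sitting next to a
# conda-provided arrow-cpp of a different major version produces the
# binary incompatibility reported above.
print(pyarrow.__version__)  # e.g. '12.0.0'
print(pyarrow.__file__)     # the path reveals whether it came from pip or conda
```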
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5922","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5922\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5922\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5922\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5922","id":1736898953,"node_id":"I_kwDODunzps5nhvmJ","number":5922,"title":"Length of table does not accurately reflect the split","user":{"login":"amogkam","id":8068268,"node_id":"MDQ6VXNlcjgwNjgyNjg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8068268?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/amogkam","html_url":"https:\/\/github.com\/amogkam","followers_url":"https:\/\/api.github.com\/users\/amogkam\/followers","following_url":"https:\/\/api.github.com\/users\/amogkam\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/amogkam\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/amogkam\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/amogkam\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/amogkam\/orgs","repos_url":"https:\/\/api.github.com\/users\/amogkam\/repos","events_url":"https:\/\/api.github.com\/users\/amogkam\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/amogkam\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892913,"node_id":"MDU6TGFiZWwxOTM1ODkyOTEz","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/wontfix","name":"wontfix","color":"ffffff","default":true,"description":"This will not be worked on"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["As already replied by @lhoestq (private channel):\r\n> `.train_test_split` (as well as `.shard`, `.select`) doesn't create a new arrow table to save time and disk space. Instead, it uses an indices mapping on top of the table that locates which examples are part of train or test.","This is an optimization that we don't plan to \"fix\", so I'm closing this issue."],"created_at":1685645786000,"updated_at":1685722411000,"closed_at":1685722411000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nI load a Huggingface Dataset and do `train_test_split`. I'm expecting the underlying table for the dataset to also be split, but it's not.\n\n### Steps to reproduce the bug\n\n![image](https:\/\/github.com\/huggingface\/datasets\/assets\/8068268\/83e5768f-8b4c-422a-945c-832a7585afff)\r\n\n\n### Expected behavior\n\n`len(hf_dataset[\"train\"].data)` should match the length of the train split, not the size of the entire unsplit dataset.\n\n### Environment info\n\ndatasets 2.10.1\r\npython 3.10.11","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5922\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5922\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false}
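For readers who need the table itself to match the split, a small sketch (not from the thread) using the public `Dataset.flatten_indices` method to materialize the indices mapping:

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(100))})
train = ds.train_test_split(test_size=0.2)["train"]
print(len(train), train.data.num_rows)  # 80 100 -> the arrow table is still shared

# flatten_indices() rewrites the arrow table so that it contains only the
# rows of the split, trading time and space for an exact table length.
train = train.flatten_indices()
print(len(train), train.data.num_rows)  # 80 80
```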
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5921","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5921\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5921\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5921\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5921","id":1736563023,"node_id":"PR_kwDODunzps5R6j-y","number":5921,"title":"Fix streaming parquet with image feature in schema","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007088 \/ 0.011353 (-0.004265) | 0.005216 \/ 0.011008 (-0.005793) | 0.097572 \/ 0.038508 (0.059064) | 0.036510 \/ 0.023109 (0.013401) | 0.316885 \/ 0.275898 (0.040987) | 0.348541 \/ 0.323480 (0.025061) | 0.006513 \/ 0.007986 (-0.001473) | 0.004579 \/ 0.004328 (0.000251) | 0.073779 \/ 0.004250 (0.069529) | 0.057500 \/ 0.037052 (0.020448) | 0.329840 \/ 0.258489 (0.071351) | 0.357530 \/ 0.293841 (0.063690) | 0.028515 \/ 0.128546 (-0.100031) | 0.009156 \/ 0.075646 (-0.066491) | 0.328340 \/ 0.419271 (-0.090932) | 0.068400 \/ 0.043533 (0.024867) | 0.313692 \/ 0.255139 (0.058553) | 0.329170 \/ 0.283200 (0.045971) | 0.111969 \/ 0.141683 (-0.029714) | 1.422096 \/ 1.452155 (-0.030059) | 1.550042 \/ 1.492716 (0.057326) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.285113 \/ 0.018006 (0.267107) | 0.546788 \/ 0.000490 (0.546298) | 0.006992 \/ 0.000200 (0.006792) | 0.000097 \/ 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.026841 \/ 0.037411 (-0.010570) | 0.108413 \/ 0.014526 (0.093887) | 0.118375 \/ 0.176557 (-0.058181) | 0.174889 \/ 0.737135 (-0.562246) | 0.122781 \/ 0.296338 (-0.173558) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.404187 \/ 0.215209 (0.188978) | 4.039673 \/ 2.077655 (1.962019) | 1.894616 \/ 1.504120 (0.390496) | 1.729182 \/ 1.541195 (0.187987) | 1.772917 \/ 1.468490 (0.304427) | 0.524046 \/ 4.584777 (-4.060731) | 
3.628111 \/ 3.745712 (-0.117601) | 1.866075 \/ 5.269862 (-3.403787) | 1.026435 \/ 4.565676 (-3.539242) | 0.065328 \/ 0.424275 (-0.358947) | 0.012717 \/ 0.007607 (0.005110) | 0.505821 \/ 0.226044 (0.279777) | 5.049518 \/ 2.268929 (2.780589) | 2.338486 \/ 55.444624 (-53.106139) | 2.002874 \/ 6.876477 (-4.873602) | 2.193049 \/ 2.142072 (0.050976) | 0.664638 \/ 4.805227 (-4.140589) | 0.151323 \/ 6.500664 (-6.349341) | 0.063774 \/ 0.075469 (-0.011695) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.168168 \/ 1.841788 (-0.673620) | 15.289200 \/ 8.074308 (7.214891) | 13.614249 \/ 10.191392 (3.422857) | 0.167950 \/ 0.680424 (-0.512474) | 0.017522 \/ 0.534201 (-0.516679) | 0.393480 \/ 0.579283 (-0.185803) | 0.420549 \/ 0.434364 (-0.013815) | 0.461425 \/ 0.540337 (-0.078912) | 0.563583 \/ 1.386936 (-0.823353) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006859 \/ 0.011353 (-0.004493) | 0.004864 \/ 0.011008 (-0.006144) | 0.075084 \/ 0.038508 (0.036576) | 0.033989 \/ 0.023109 (0.010880) | 0.372512 \/ 0.275898 (0.096614) | 0.394725 \/ 0.323480 (0.071246) | 0.006382 \/ 0.007986 (-0.001604) | 0.004521 \/ 0.004328 (0.000193) | 0.076422 \/ 0.004250 (0.072172) | 0.055383 \/ 0.037052 (0.018331) | 0.400974 \/ 0.258489 (0.142485) | 0.411570 \/ 0.293841 (0.117729) | 0.028264 \/ 0.128546 (-0.100282) | 0.009123 \/ 0.075646 (-0.066523) | 0.081257 \/ 0.419271 (-0.338015) | 0.048147 \/ 0.043533 (0.004614) | 0.390735 \/ 0.255139 (0.135596) | 0.376426 \/ 0.283200 (0.093226) | 0.108164 \/ 0.141683 (-0.033518) | 1.429667 \/ 1.452155 (-0.022488) | 1.556291 \/ 1.492716 (0.063575) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.289514 \/ 0.018006 (0.271508) | 0.532860 \/ 0.000490 (0.532370) | 0.003810 \/ 0.000200 (0.003611) | 0.000121 \/ 0.000054 (0.000066) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.031292 \/ 0.037411 (-0.006119) | 0.116530 \/ 0.014526 (0.102005) | 0.127624 \/ 0.176557 (-0.048932) | 0.178276 \/ 0.737135 (-0.558859) | 0.133742 \/ 0.296338 (-0.162597) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.431505 \/ 0.215209 (0.216296) | 4.309206 \/ 2.077655 (2.231551) | 2.174779 \/ 1.504120 (0.670659) | 1.998122 \/ 1.541195 (0.456927) | 2.126478 \/ 1.468490 (0.657988) | 0.528971 \/ 4.584777 (-4.055806) | 
3.797608 \/ 3.745712 (0.051895) | 1.876275 \/ 5.269862 (-3.393586) | 1.087458 \/ 4.565676 (-3.478218) | 0.066940 \/ 0.424275 (-0.357335) | 0.012432 \/ 0.007607 (0.004825) | 0.538346 \/ 0.226044 (0.312301) | 5.370968 \/ 2.268929 (3.102039) | 2.613718 \/ 55.444624 (-52.830906) | 2.246585 \/ 6.876477 (-4.629892) | 2.375695 \/ 2.142072 (0.233622) | 0.652227 \/ 4.805227 (-4.153001) | 0.143246 \/ 6.500664 (-6.357418) | 0.066163 \/ 0.075469 (-0.009306) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.291263 \/ 1.841788 (-0.550524) | 16.532281 \/ 8.074308 (8.457973) | 15.038471 \/ 10.191392 (4.847079) | 0.168139 \/ 0.680424 (-0.512285) | 0.017724 \/ 0.534201 (-0.516477) | 0.391636 \/ 0.579283 (-0.187648) | 0.429690 \/ 0.434364 (-0.004674) | 0.474941 \/ 0.540337 (-0.065396) | 0.579461 \/ 1.386936 (-0.807475) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#db690affa0373b08f7cef04e25fe2113ee831ef5 \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006083 \/ 0.011353 (-0.005269) | 0.004085 \/ 0.011008 (-0.006923) | 0.098337 \/ 0.038508 (0.059829) | 0.027573 \/ 0.023109 (0.004464) | 0.305688 \/ 0.275898 (0.029790) | 0.341767 \/ 0.323480 (0.018287) | 0.005143 \/ 0.007986 (-0.002842) | 0.003396 \/ 0.004328 (-0.000932) | 0.076925 \/ 0.004250 (0.072674) | 0.041027 \/ 0.037052 (0.003975) | 0.307877 \/ 0.258489 (0.049388) | 0.346559 \/ 0.293841 (0.052718) | 0.025183 \/ 0.128546 (-0.103363) | 0.008575 \/ 0.075646 (-0.067071) | 0.319449 \/ 0.419271 (-0.099823) | 0.043378 \/ 0.043533 (-0.000154) | 0.304563 \/ 0.255139 (0.049424) | 0.332019 \/ 0.283200 (0.048819) | 0.087725 \/ 0.141683 (-0.053958) | 1.484904 \/ 1.452155 (0.032749) | 1.582780 \/ 1.492716 (0.090064) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.197503 \/ 0.018006 (0.179497) | 0.410370 \/ 0.000490 (0.409880) | 0.003840 \/ 0.000200 (0.003640) | 0.000067 \/ 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.024179 \/ 0.037411 (-0.013232) | 0.098876 \/ 0.014526 (0.084350) | 0.106189 \/ 0.176557 (-0.070367) | 0.168964 \/ 0.737135 (-0.568171) | 0.109723 \/ 0.296338 (-0.186616) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.429453 \/ 0.215209 (0.214244) | 4.295584 \/ 2.077655 (2.217929) | 2.014330 \/ 1.504120 (0.510210) | 1.841119 \/ 1.541195 (0.299924) | 1.928378 \/ 1.468490 (0.459888) | 0.554571 \/ 4.584777 (-4.030206) | 
3.431769 \/ 3.745712 (-0.313943) | 1.716204 \/ 5.269862 (-3.553658) | 0.995054 \/ 4.565676 (-3.570622) | 0.067374 \/ 0.424275 (-0.356902) | 0.012557 \/ 0.007607 (0.004950) | 0.533785 \/ 0.226044 (0.307740) | 5.363360 \/ 2.268929 (3.094431) | 2.535190 \/ 55.444624 (-52.909434) | 2.191646 \/ 6.876477 (-4.684831) | 2.400799 \/ 2.142072 (0.258727) | 0.663961 \/ 4.805227 (-4.141266) | 0.135992 \/ 6.500664 (-6.364672) | 0.067378 \/ 0.075469 (-0.008092) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.235110 \/ 1.841788 (-0.606678) | 13.820695 \/ 8.074308 (5.746387) | 13.667202 \/ 10.191392 (3.475810) | 0.143025 \/ 0.680424 (-0.537399) | 0.016757 \/ 0.534201 (-0.517444) | 0.356262 \/ 0.579283 (-0.223021) | 0.401871 \/ 0.434364 (-0.032493) | 0.423928 \/ 0.540337 (-0.116410) | 0.514598 \/ 1.386936 (-0.872338) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006260 \/ 0.011353 (-0.005093) | 0.004159 \/ 0.011008 (-0.006850) | 0.076780 \/ 0.038508 (0.038272) | 0.027899 \/ 0.023109 (0.004789) | 0.412756 \/ 0.275898 (0.136858) | 0.455145 \/ 0.323480 (0.131665) | 0.005029 \/ 0.007986 (-0.002956) | 0.003482 \/ 0.004328 (-0.000847) | 0.076148 \/ 0.004250 (0.071898) | 0.038969 \/ 0.037052 (0.001917) | 0.429975 \/ 0.258489 (0.171486) | 0.465880 \/ 0.293841 (0.172039) | 0.025555 \/ 0.128546 (-0.102991) | 0.008612 \/ 0.075646 (-0.067034) | 0.082604 \/ 0.419271 (-0.336667) | 0.039690 \/ 0.043533 (-0.003842) | 0.403644 \/ 0.255139 (0.148505) | 0.440438 \/ 0.283200 (0.157238) | 0.090984 \/ 0.141683 (-0.050699) | 1.465915 \/ 1.452155 (0.013760) | 1.564227 \/ 1.492716 (0.071511) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.010502 \/ 0.018006 (-0.007504) | 0.410573 \/ 0.000490 (0.410083) | 0.000384 \/ 0.000200 (0.000184) | 0.000059 \/ 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025726 \/ 0.037411 (-0.011686) | 0.101760 \/ 0.014526 (0.087235) | 0.110102 \/ 0.176557 (-0.066454) | 0.161321 \/ 0.737135 (-0.575815) | 0.112507 \/ 0.296338 (-0.183832) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.469925 \/ 0.215209 (0.254716) | 4.718740 \/ 2.077655 (2.641085) | 2.466272 \/ 1.504120 (0.962152) | 2.267357 \/ 1.541195 (0.726162) | 2.331343 \/ 1.468490 (0.862853) | 0.553448 \/ 4.584777 (-4.031329) | 
3.464228 \/ 3.745712 (-0.281484) | 3.060957 \/ 5.269862 (-2.208905) | 1.387261 \/ 4.565676 (-3.178415) | 0.067989 \/ 0.424275 (-0.356286) | 0.012349 \/ 0.007607 (0.004741) | 0.575046 \/ 0.226044 (0.349001) | 5.740322 \/ 2.268929 (3.471394) | 2.925666 \/ 55.444624 (-52.518958) | 2.606535 \/ 6.876477 (-4.269942) | 2.658144 \/ 2.142072 (0.516072) | 0.655157 \/ 4.805227 (-4.150071) | 0.138520 \/ 6.500664 (-6.362144) | 0.069442 \/ 0.075469 (-0.006027) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.306523 \/ 1.841788 (-0.535265) | 14.400380 \/ 8.074308 (6.326072) | 14.231519 \/ 10.191392 (4.040127) | 0.146194 \/ 0.680424 (-0.534230) | 0.016632 \/ 0.534201 (-0.517569) | 0.361151 \/ 0.579283 (-0.218132) | 0.388838 \/ 0.434364 (-0.045526) | 0.419337 \/ 0.540337 (-0.121001) | 0.500483 \/ 1.386936 (-0.886453) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#c0429e9806bf7065d03dc5858c039a30c5af716c \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009430 \/ 0.011353 (-0.001923) | 0.006673 \/ 0.011008 (-0.004335) | 0.125151 \/ 0.038508 (0.086643) | 0.038258 \/ 0.023109 (0.015149) | 0.426383 \/ 0.275898 (0.150485) | 0.432327 \/ 0.323480 (0.108847) | 0.006964 \/ 0.007986 (-0.001022) | 0.005140 \/ 0.004328 (0.000811) | 0.100767 \/ 0.004250 (0.096517) | 0.058663 \/ 0.037052 (0.021610) | 0.424709 \/ 0.258489 (0.166220) | 0.453049 \/ 0.293841 (0.159208) | 0.051042 \/ 0.128546 (-0.077505) | 0.015291 \/ 0.075646 (-0.060355) | 0.456549 \/ 0.419271 (0.037278) | 0.067106 \/ 0.043533 (0.023573) | 0.408959 \/ 0.255139 (0.153820) | 0.445067 \/ 0.283200 (0.161867) | 0.115590 \/ 0.141683 (-0.026092) | 1.929439 \/ 1.452155 (0.477284) | 2.045709 \/ 1.492716 (0.552992) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.250726 \/ 0.018006 (0.232720) | 0.598976 \/ 0.000490 (0.598486) | 0.007542 \/ 0.000200 (0.007342) | 0.000101 \/ 0.000054 (0.000046) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030317 \/ 0.037411 (-0.007094) | 0.133177 \/ 0.014526 (0.118651) | 0.152761 \/ 0.176557 (-0.023795) | 0.233708 \/ 0.737135 (-0.503428) | 0.147303 \/ 0.296338 (-0.149036) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.633562 \/ 0.215209 (0.418353) | 6.235021 \/ 2.077655 (4.157366) | 2.652573 \/ 1.504120 (1.148454) | 2.223363 \/ 1.541195 (0.682168) | 2.231022 \/ 1.468490 (0.762531) | 0.942218 \/ 4.584777 (-3.642559) | 
6.068661 \/ 3.745712 (2.322949) | 2.778604 \/ 5.269862 (-2.491257) | 1.787939 \/ 4.565676 (-2.777737) | 0.117749 \/ 0.424275 (-0.306526) | 0.015613 \/ 0.007607 (0.008006) | 0.810222 \/ 0.226044 (0.584177) | 7.931509 \/ 2.268929 (5.662581) | 3.260679 \/ 55.444624 (-52.183945) | 2.609085 \/ 6.876477 (-4.267391) | 2.867838 \/ 2.142072 (0.725766) | 1.144672 \/ 4.805227 (-3.660555) | 0.224379 \/ 6.500664 (-6.276285) | 0.084490 \/ 0.075469 (0.009021) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.650608 \/ 1.841788 (-0.191179) | 18.919748 \/ 8.074308 (10.845440) | 20.163162 \/ 10.191392 (9.971770) | 0.229427 \/ 0.680424 (-0.450997) | 0.033090 \/ 0.534201 (-0.501111) | 0.535549 \/ 0.579283 (-0.043734) | 0.658629 \/ 0.434364 (0.224265) | 0.631526 \/ 0.540337 (0.091189) | 0.748701 \/ 1.386936 (-0.638235) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009157 \/ 0.011353 (-0.002196) | 0.006153 \/ 0.011008 (-0.004856) | 0.106294 \/ 0.038508 (0.067786) | 0.040947 \/ 0.023109 (0.017837) | 0.493242 \/ 0.275898 (0.217344) | 0.563525 \/ 0.323480 (0.240045) | 0.007256 \/ 0.007986 (-0.000730) | 0.006757 \/ 0.004328 (0.002429) | 0.105151 \/ 0.004250 (0.100901) | 0.056262 \/ 0.037052 (0.019209) | 0.573341 \/ 0.258489 (0.314852) | 0.591125 \/ 0.293841 (0.297284) | 0.047935 \/ 0.128546 (-0.080611) | 0.015385 \/ 0.075646 (-0.060262) | 0.119457 \/ 0.419271 (-0.299814) | 0.066510 \/ 0.043533 (0.022977) | 0.485622 \/ 0.255139 (0.230483) | 0.540929 \/ 0.283200 (0.257730) | 0.132619 \/ 0.141683 (-0.009064) | 1.916905 \/ 1.452155 (0.464750) | 2.152722 \/ 1.492716 (0.660006) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.294823 \/ 0.018006 (0.276817) | 0.569371 \/ 0.000490 (0.568882) | 0.000642 \/ 0.000200 (0.000442) | 0.000091 \/ 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.034321 \/ 0.037411 (-0.003090) | 0.134165 \/ 0.014526 (0.119639) | 0.157871 \/ 0.176557 (-0.018685) | 0.210753 \/ 0.737135 (-0.526382) | 0.152961 \/ 0.296338 (-0.143377) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.686810 \/ 0.215209 (0.471601) | 6.890432 \/ 2.077655 (4.812778) | 3.182875 \/ 1.504120 (1.678755) | 2.770836 \/ 1.541195 (1.229641) | 2.790785 \/ 1.468490 (1.322295) | 0.938145 \/ 4.584777 (-3.646632) | 
5.861093 \/ 3.745712 (2.115381) | 2.719862 \/ 5.269862 (-2.550000) | 1.760834 \/ 4.565676 (-2.804842) | 0.111317 \/ 0.424275 (-0.312958) | 0.015722 \/ 0.007607 (0.008115) | 0.863032 \/ 0.226044 (0.636988) | 8.482433 \/ 2.268929 (6.213504) | 3.892621 \/ 55.444624 (-51.552003) | 3.207370 \/ 6.876477 (-3.669106) | 3.344412 \/ 2.142072 (1.202339) | 1.133903 \/ 4.805227 (-3.671324) | 0.223456 \/ 6.500664 (-6.277209) | 0.084335 \/ 0.075469 (0.008866) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.794116 \/ 1.841788 (-0.047672) | 19.077447 \/ 8.074308 (11.003139) | 23.102309 \/ 10.191392 (12.910917) | 0.268806 \/ 0.680424 (-0.411617) | 0.027709 \/ 0.534201 (-0.506492) | 0.540488 \/ 0.579283 (-0.038796) | 0.658478 \/ 0.434364 (0.224114) | 0.604769 \/ 0.540337 (0.064431) | 0.722768 \/ 1.386936 (-0.664168) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#7e52021c66666e6953d5be0bd45a079e3ddb8c3f \"CML watermark\")\n"],"created_at":1685632990000,"updated_at":1685700174000,"closed_at":1685699591000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5921","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5921","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5921.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5921.patch","merged_at":1685699591000},"body":"It was not reading the feature type from the parquet arrow schema","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5921\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5921\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5920","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5920\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5920\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5920\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5920","id":1736196991,"node_id":"PR_kwDODunzps5R5TRB","number":5920,"title":"Optimize IterableDataset.from_file using 
ArrowExamplesIterable","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
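PR #5920's title refers to `IterableDataset.from_file`, which streams examples from an on-disk Arrow file; the optimization swaps its backing iterable for `ArrowExamplesIterable`. A minimal usage sketch under stated assumptions (the `data-00000-of-00001.arrow` shard name that `save_to_disk` typically produces in this era is an assumption, not taken from the PR):

```py
# Hedged sketch: stream an on-disk Arrow file lazily instead of loading it
# fully into memory. The shard filename below is assumed, not guaranteed.
from datasets import Dataset, IterableDataset

Dataset.from_dict({"id": list(range(10))}).save_to_disk("tmp_ds")
ids = IterableDataset.from_file("tmp_ds/data-00000-of-00001.arrow")
for example in ids.take(3):  # lazily yields examples one by one
    print(example)
```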
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007439 \/ 0.011353 (-0.003914) | 0.004884 \/ 0.011008 (-0.006124) | 0.098750 \/ 0.038508 (0.060242) | 0.040723 \/ 0.023109 (0.017613) | 0.347242 \/ 0.275898 (0.071344) | 0.381202 \/ 0.323480 (0.057722) | 0.006814 \/ 0.007986 (-0.001171) | 0.004543 \/ 0.004328 (0.000215) | 0.075338 \/ 0.004250 (0.071088) | 0.058976 \/ 0.037052 (0.021924) | 0.344746 \/ 0.258489 (0.086257) | 0.406761 \/ 0.293841 (0.112920) | 0.028961 \/ 0.128546 (-0.099585) | 0.009531 \/ 0.075646 (-0.066115) | 0.337324 \/ 0.419271 (-0.081947) | 0.051071 \/ 0.043533 (0.007538) | 0.341251 \/ 0.255139 (0.086112) | 0.362773 \/ 0.283200 (0.079573) | 0.109423 \/ 0.141683 (-0.032260) | 1.457420 \/ 1.452155 (0.005266) | 1.588824 \/ 1.492716 (0.096108) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.288620 \/ 0.018006 (0.270614) | 0.568975 \/ 0.000490 (0.568485) | 0.003350 \/ 0.000200 (0.003150) | 0.000088 \/ 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.028732 \/ 0.037411 (-0.008680) | 0.117820 \/ 0.014526 (0.103294) | 0.120180 \/ 0.176557 (-0.056376) | 0.178736 \/ 0.737135 (-0.558399) | 0.126399 \/ 0.296338 (-0.169939) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.428357 \/ 0.215209 (0.213148) | 4.251989 \/ 2.077655 (2.174334) | 2.005239 \/ 1.504120 (0.501119) | 1.784009 \/ 1.541195 (0.242815) | 1.883763 \/ 1.468490 (0.415272) | 0.555429 \/ 4.584777 (-4.029348) | 
3.868146 \/ 3.745712 (0.122434) | 2.081896 \/ 5.269862 (-3.187965) | 1.126047 \/ 4.565676 (-3.439629) | 0.069496 \/ 0.424275 (-0.354779) | 0.012926 \/ 0.007607 (0.005318) | 0.536989 \/ 0.226044 (0.310944) | 5.256052 \/ 2.268929 (2.987124) | 2.526802 \/ 55.444624 (-52.917822) | 2.233346 \/ 6.876477 (-4.643131) | 2.389063 \/ 2.142072 (0.246990) | 0.677107 \/ 4.805227 (-4.128120) | 0.147212 \/ 6.500664 (-6.353452) | 0.067061 \/ 0.075469 (-0.008408) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.210651 \/ 1.841788 (-0.631137) | 17.236898 \/ 8.074308 (9.162589) | 14.427301 \/ 10.191392 (4.235909) | 0.207194 \/ 0.680424 (-0.473229) | 0.018079 \/ 0.534201 (-0.516122) | 0.398355 \/ 0.579283 (-0.180929) | 0.462453 \/ 0.434364 (0.028089) | 0.484544 \/ 0.540337 (-0.055794) | 0.590119 \/ 1.386936 (-0.796817) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007392 \/ 0.011353 (-0.003961) | 0.005614 \/ 0.011008 (-0.005394) | 0.075587 \/ 0.038508 (0.037079) | 0.040429 \/ 0.023109 (0.017320) | 0.389901 \/ 0.275898 (0.114003) | 0.429466 \/ 0.323480 (0.105986) | 0.006790 \/ 0.007986 (-0.001196) | 0.006627 \/ 0.004328 (0.002299) | 0.075227 \/ 0.004250 (0.070976) | 0.060298 \/ 0.037052 (0.023246) | 0.391905 \/ 0.258489 (0.133416) | 0.449385 \/ 0.293841 (0.155544) | 0.028794 \/ 0.128546 (-0.099753) | 0.009461 \/ 0.075646 (-0.066185) | 0.083386 \/ 0.419271 (-0.335886) | 0.057968 \/ 0.043533 (0.014435) | 0.377327 \/ 0.255139 (0.122188) | 0.402825 \/ 0.283200 (0.119626) | 0.125477 \/ 0.141683 (-0.016206) | 1.462986 \/ 1.452155 (0.010832) | 1.595959 \/ 1.492716 (0.103243) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.304179 \/ 0.018006 (0.286173) | 0.543113 \/ 0.000490 (0.542623) | 0.004136 \/ 0.000200 (0.003936) | 0.000109 \/ 0.000054 (0.000054) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.032617 \/ 0.037411 (-0.004794) | 0.123596 \/ 0.014526 (0.109070) | 0.128714 \/ 0.176557 (-0.047842) | 0.176344 \/ 0.737135 (-0.560792) | 0.132525 \/ 0.296338 (-0.163813) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.446041 \/ 0.215209 (0.230832) | 4.438799 \/ 2.077655 (2.361144) | 2.210815 \/ 1.504120 (0.706695) | 2.052025 \/ 1.541195 (0.510830) | 2.204687 \/ 1.468490 (0.736197) | 0.535219 \/ 4.584777 (-4.049558) | 
3.858407 \/ 3.745712 (0.112695) | 3.826043 \/ 5.269862 (-1.443819) | 1.334149 \/ 4.565676 (-3.231527) | 0.067454 \/ 0.424275 (-0.356821) | 0.012566 \/ 0.007607 (0.004958) | 0.551597 \/ 0.226044 (0.325553) | 5.520054 \/ 2.268929 (3.251126) | 2.817976 \/ 55.444624 (-52.626649) | 2.528074 \/ 6.876477 (-4.348403) | 2.622391 \/ 2.142072 (0.480319) | 0.657632 \/ 4.805227 (-4.147595) | 0.147039 \/ 6.500664 (-6.353625) | 0.069603 \/ 0.075469 (-0.005866) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.300140 \/ 1.841788 (-0.541648) | 17.303907 \/ 8.074308 (9.229599) | 15.657887 \/ 10.191392 (5.466495) | 0.168991 \/ 0.680424 (-0.511433) | 0.021332 \/ 0.534201 (-0.512869) | 0.487261 \/ 0.579283 (-0.092022) | 0.450073 \/ 0.434364 (0.015709) | 0.465865 \/ 0.540337 (-0.074473) | 0.565501 \/ 1.386936 (-0.821435) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#f1723ab75a6b3a5e156ea0a41651e80e91fa9cc6 \"CML watermark\")\n","
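For readers decoding these CML tables: each cell appears to follow the convention new-run value / reference value (difference), e.g. 0.006260 / 0.011353 (-0.005093). A tiny sketch of that formatting convention; the function name is ours, not CML's:

```py
# Illustrative only: reproduce the "new / old (diff)" cell convention seen
# in the benchmark tables above. Not taken from the CML codebase.
def format_cell(new: float, old: float) -> str:
    return f"{new:.6f} / {old:.6f} ({new - old:.6f})"

print(format_cell(0.006260, 0.011353))  # -> 0.006260 / 0.011353 (-0.005093)
```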
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006536 \/ 0.011353 (-0.004817) | 0.004254 \/ 0.011008 (-0.006755) | 0.095387 \/ 0.038508 (0.056878) | 0.032885 \/ 0.023109 (0.009776) | 0.298580 \/ 0.275898 (0.022682) | 0.319771 \/ 0.323480 (-0.003709) | 0.005510 \/ 0.007986 (-0.002476) | 0.003891 \/ 0.004328 (-0.000437) | 0.073763 \/ 0.004250 (0.069513) | 0.041625 \/ 0.037052 (0.004573) | 0.294896 \/ 0.258489 (0.036407) | 0.341308 \/ 0.293841 (0.047467) | 0.027898 \/ 0.128546 (-0.100648) | 0.008837 \/ 0.075646 (-0.066809) | 0.325055 \/ 0.419271 (-0.094216) | 0.050652 \/ 0.043533 (0.007119) | 0.298756 \/ 0.255139 (0.043617) | 0.318261 \/ 0.283200 (0.035061) | 0.098927 \/ 0.141683 (-0.042756) | 1.450356 \/ 1.452155 (-0.001798) | 1.508034 \/ 1.492716 (0.015318) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.209009 \/ 0.018006 (0.191003) | 0.439154 \/ 0.000490 (0.438665) | 0.004299 \/ 0.000200 (0.004099) | 0.000142 \/ 0.000054 (0.000087) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025938 \/ 0.037411 (-0.011473) | 0.105954 \/ 0.014526 (0.091429) | 0.113858 \/ 0.176557 (-0.062698) | 0.168887 \/ 0.737135 (-0.568249) | 0.121292 \/ 0.296338 (-0.175046) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.402050 \/ 0.215209 (0.186841) | 4.002310 \/ 2.077655 (1.924655) | 1.816190 \/ 1.504120 (0.312070) | 1.634404 \/ 1.541195 (0.093209) | 1.713632 \/ 1.468490 (0.245142) | 0.519633 \/ 4.584777 (-4.065144) | 
3.740291 \/ 3.745712 (-0.005421) | 1.787602 \/ 5.269862 (-3.482260) | 1.038844 \/ 4.565676 (-3.526833) | 0.064973 \/ 0.424275 (-0.359302) | 0.012475 \/ 0.007607 (0.004868) | 0.498152 \/ 0.226044 (0.272108) | 4.970941 \/ 2.268929 (2.702013) | 2.287429 \/ 55.444624 (-53.157195) | 1.998050 \/ 6.876477 (-4.878427) | 2.091903 \/ 2.142072 (-0.050169) | 0.630363 \/ 4.805227 (-4.174864) | 0.138623 \/ 6.500664 (-6.362041) | 0.063293 \/ 0.075469 (-0.012176) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.201802 \/ 1.841788 (-0.639986) | 14.073836 \/ 8.074308 (5.999528) | 12.968665 \/ 10.191392 (2.777273) | 0.144653 \/ 0.680424 (-0.535771) | 0.017613 \/ 0.534201 (-0.516588) | 0.392067 \/ 0.579283 (-0.187216) | 0.416955 \/ 0.434364 (-0.017409) | 0.471492 \/ 0.540337 (-0.068845) | 0.554576 \/ 1.386936 (-0.832360) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006408 \/ 0.011353 (-0.004945) | 0.004452 \/ 0.011008 (-0.006556) | 0.073648 \/ 0.038508 (0.035140) | 0.032536 \/ 0.023109 (0.009427) | 0.358546 \/ 0.275898 (0.082648) | 0.387330 \/ 0.323480 (0.063850) | 0.005542 \/ 0.007986 (-0.002444) | 0.003882 \/ 0.004328 (-0.000447) | 0.073867 \/ 0.004250 (0.069617) | 0.044798 \/ 0.037052 (0.007746) | 0.362303 \/ 0.258489 (0.103814) | 0.400496 \/ 0.293841 (0.106655) | 0.028244 \/ 0.128546 (-0.100302) | 0.008931 \/ 0.075646 (-0.066715) | 0.080617 \/ 0.419271 (-0.338654) | 0.046575 \/ 0.043533 (0.003043) | 0.364283 \/ 0.255139 (0.109145) | 0.373215 \/ 0.283200 (0.090015) | 0.100080 \/ 0.141683 (-0.041603) | 1.430047 \/ 1.452155 (-0.022108) | 1.530957 \/ 1.492716 (0.038240) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.221061 \/ 0.018006 (0.203055) | 0.441753 \/ 0.000490 (0.441263) | 0.003626 \/ 0.000200 (0.003426) | 0.000088 \/ 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029509 \/ 0.037411 (-0.007902) | 0.109578 \/ 0.014526 (0.095053) | 0.121009 \/ 0.176557 (-0.055548) | 0.168950 \/ 0.737135 (-0.568185) | 0.124475 \/ 0.296338 (-0.171864) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.431355 \/ 0.215209 (0.216146) | 4.295507 \/ 2.077655 (2.217852) | 2.167514 \/ 1.504120 (0.663394) | 2.013073 \/ 1.541195 (0.471879) | 1.973730 \/ 1.468490 (0.505240) | 0.529778 \/ 4.584777 (-4.054999) | 
3.794702 \/ 3.745712 (0.048989) | 3.062940 \/ 5.269862 (-2.206922) | 1.503426 \/ 4.565676 (-3.062251) | 0.066692 \/ 0.424275 (-0.357583) | 0.011682 \/ 0.007607 (0.004075) | 0.539311 \/ 0.226044 (0.313266) | 5.406342 \/ 2.268929 (3.137414) | 2.652709 \/ 55.444624 (-52.791916) | 2.260066 \/ 6.876477 (-4.616410) | 2.295752 \/ 2.142072 (0.153680) | 0.647199 \/ 4.805227 (-4.158029) | 0.142981 \/ 6.500664 (-6.357683) | 0.065082 \/ 0.075469 (-0.010387) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.279788 \/ 1.841788 (-0.562000) | 14.982845 \/ 8.074308 (6.908536) | 14.277166 \/ 10.191392 (4.085774) | 0.145082 \/ 0.680424 (-0.535342) | 0.017885 \/ 0.534201 (-0.516316) | 0.392071 \/ 0.579283 (-0.187212) | 0.420425 \/ 0.434364 (-0.013939) | 0.461244 \/ 0.540337 (-0.079093) | 0.559956 \/ 1.386936 (-0.826980) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#651d96c1c4083a206c65f11602712d75f1f0453d \"CML watermark\")\n"],"created_at":1685621676000,"updated_at":1685623330000,"closed_at":1685622914000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5920","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5920","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5920.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5920.patch","merged_at":1685622914000},"body":"following https:\/\/github.com\/huggingface\/datasets\/pull\/5893","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5920\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5920\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5919","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5919\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5919\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5919\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5919","id":1735519227,"node_id":"PR_kwDODunzps5R2_EK","number":5919,"title":"add support for storage_options for load_dataset 
API","user":{"login":"janineguo","id":59083384,"node_id":"MDQ6VXNlcjU5MDgzMzg0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59083384?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/janineguo","html_url":"https:\/\/github.com\/janineguo","followers_url":"https:\/\/api.github.com\/users\/janineguo\/followers","following_url":"https:\/\/api.github.com\/users\/janineguo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/janineguo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/janineguo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/janineguo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/janineguo\/orgs","repos_url":"https:\/\/api.github.com\/users\/janineguo\/repos","events_url":"https:\/\/api.github.com\/users\/janineguo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/janineguo\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["hi @lhoestq,\r\nI saw some errors in my test and found all the failed reasons are `FileNotFoundError` about `test_load_streaming_private_dataset_with_zipped_data` and `test_load_dataset_private_zipped_images` in `test_load.py `, I run pytest on my own Wins and Ubuntu system all the test in `test_load.py ` are succeed. could you help me to check the test environment of our server?\r\n\r\n`2023-06-08T16:50:48.0828281Z FAILED tests\/test_load.py::test_load_streaming_private_dataset_with_zipped_data - FileNotFoundError: Couldn't find a dataset script at D:\\a\\datasets\\datasets\\__DUMMY_TRANSFORMERS_USER__\\repo_zipped_txt_data-16862429577813\\repo_zipped_txt_data-16862429577813.py or any data file in the same directory. Couldn't find '__DUMMY_TRANSFORMERS_USER__\/repo_zipped_txt_data-16862429577813' on the Hugging Face Hub either: FileNotFoundError: No (supported) data files or dataset script found in __DUMMY_TRANSFORMERS_USER__\/repo_zipped_txt_data-16862429577813`\r\n`2023-06-08T16:50:48.0830602Z FAILED tests\/test_load.py::test_load_dataset_private_zipped_images[False-False] - FileNotFoundError: Couldn't find a dataset script at D:\\a\\datasets\\datasets\\__DUMMY_TRANSFORMERS_USER__\\repo_zipped_img_data-16862429594168\\repo_zipped_img_data-16862429594168.py or any data file in the same directory. Couldn't find '__DUMMY_TRANSFORMERS_USER__\/repo_zipped_img_data-16862429594168' on the Hugging Face Hub either: FileNotFoundError: No (supported) data files or dataset script found in __DUMMY_TRANSFORMERS_USER__\/repo_zipped_img_data-16862429594168`","I just re-ran the CI, hopefully it's fixed","_The documentation is not available anymore as the PR was closed or merged._","> I just re-ran the CI, hopefully it's fixed\r\n\r\nI just checked, still has the same error, maybe need someone to fix it","I think the issue comes from this PR somehow, since the CI fail is related to loading private repositories and this PR touches authentication related code. Let me check what's the issue, and I'll also review your PR later (sorry I don't have a ton of bandwidth atm)","The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_5919). 
All of your documentation changes will be reflected on that endpoint.","@lhoestq Hi sorry to bother you, the CI check_code_quality failed and it said `would reformat \/home\/runner\/work\/datasets\/datasets\/src\/datasets\/download\/streaming_download_manager.py` but I cant see any changes when I run `python3 -m black --check tests src benchmarks metrics` and `python3 -m ruff tests src benchmarks metrics` on my own computer, is there any version requirements on the tools? I didn't specific the version.","I just ran `make style` and pushed the changes.\r\nYou can install the right versions of black and ruff using `pip install -e .[quality]` ;)","I am working on this issue right now https:\/\/github.com\/huggingface\/datasets\/issues\/6017 which is strongly connected to your PR, and I might end up cherry-picking some of your commits (keeping attribution of course !). Would you be ok with that ?","it's totally ok for me, I just wish the S3 File system could support streaming too.\r\n","\r\nI already adjust the code and test on my local Mac, you can check it now, and you can make any changes to it."],"created_at":1685598752000,"updated_at":1689227553000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5919","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5919","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5919.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5919.patch","merged_at":null},"body":"to solve the issue in #5880 \r\n\r\n1. add s3 support in the link check step, previous we only check `http` and `https`,\r\n\r\n2. change the parameter of `use_auth_token` to `download_config` to support both `storage_options` and `use_auth_token` parameter when trying to handle(list, open, read, etc,.) the remote files.\r\n\r\n3. 
integrate the check part's duplicate code to make adding or deleting other sources easier.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5919\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5919\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5918","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5918\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5918\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5918\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5918","id":1735313549,"node_id":"I_kwDODunzps5nbsiN","number":5918,"title":"File not found for audio dataset","user":{"login":"RobertBaruch","id":1783950,"node_id":"MDQ6VXNlcjE3ODM5NTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1783950?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/RobertBaruch","html_url":"https:\/\/github.com\/RobertBaruch","followers_url":"https:\/\/api.github.com\/users\/RobertBaruch\/followers","following_url":"https:\/\/api.github.com\/users\/RobertBaruch\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/RobertBaruch\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/RobertBaruch\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/RobertBaruch\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/RobertBaruch\/orgs","repos_url":"https:\/\/api.github.com\/users\/RobertBaruch\/repos","events_url":"https:\/\/api.github.com\/users\/RobertBaruch\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/RobertBaruch\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["load_dataset () did not work for loading local files either "],"created_at":1685585729000,"updated_at":1686463345000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nAfter loading an audio dataset, and looking at a sample entry, the `path` element, which is supposed to be the path to the audio file, doesn't actually exist.\r\n\n\n### Steps to reproduce the bug\n\nRun bug.py:\r\n\r\n```py\r\nimport os.path\r\n\r\nfrom datasets import load_dataset\r\n\r\ndef run() -> None:\r\n cv13 = load_dataset(\r\n \"mozilla-foundation\/common_voice_13_0\",\r\n \"hi\",\r\n split=\"train\",\r\n )\r\n\r\n print(cv13[0])\r\n audio_file = cv13[0][\"path\"]\r\n if not os.path.exists(audio_file):\r\n raise ValueError(f'File {audio_file} does not exist.')\r\n\r\nif __name__ == \"__main__\":\r\n run()\r\n```\r\n\r\nThe result (on my machine):\r\n\r\n```json\r\n{'client_id': '0f018a99663f33afbb7d38aee281fb1afcfd07f9e7acd00383f604e1e17c38d6ed8adf1bd2ccbf927a52c5adefb8ac4b158ce27a7c2ed9581e71202eb302dfb3', 'path': 'C:\\\\Users\\\\rober\\\\.cache\\\\huggingface\\\\datasets\\\\downloads\\\\extracted\\\\8d1479bc09b4609bc2675bd02d6869a4d5e09f7e6616f540bd55eacef46c6e2b\\\\common_voice_hi_26008353.mp3', 'audio': {'path': 
'C:\\\\Users\\\\rober\\\\.cache\\\\huggingface\\\\datasets\\\\downloads\\\\extracted\\\\8d1479bc09b4609bc2675bd02d6869a4d5e09f7e6616f540bd55eacef46c6e2b\\\\common_voice_hi_26008353.mp3', 'array': array([ 6.46234854e-26, -1.35709319e-25, -8.07793567e-26, ...,\r\n 1.06425944e-07, 4.46417090e-08, 2.61451660e-09]), 'sampling_rate': 48000}, 'sentence': '\u0939\u092e\u0928\u0947 \u0909\u0938\u0915\u093e \u091c\u0928\u094d\u092e\u0926\u093f\u0928 \u092e\u0928\u093e\u092f\u093e\u0964', 'up_votes': 2, 'down_votes': 0, 'age': '', 'gender': '', 'accent': '', 'locale': 'hi', 'segment': '' ', 'variant': ''}\r\n```\r\n\r\n```txt\r\nTraceback (most recent call last):\r\n File \"F:\\eo-reco\\bug.py\", line 18, in \r\n run()\r\n File \"F:\\eo-reco\\bug.py\", line 15, in run\r\n raise ValueError(f'File {audio_file} does not exist.')\r\nValueError: File C:\\Users\\rober\\.cache\\huggingface\\datasets\\downloads\\extracted\\8d1479bc09b4609bc2675bd02d6869a4d5e09f7e6616f540bd55eacef46c6e2b\\common_voice_hi_26008353.mp3 does not exist.\r\n```\r\n\n\n### Expected behavior\n\nThe `path` element points to the correct file, which happens to be:\r\n\r\n```\r\nC:\\Users\\rober\\.cache\\huggingface\\datasets\\downloads\\extracted\\8d1479bc09b4609bc2675bd02d6869a4d5e09f7e6616f540bd55eacef46c6e2b\\hi_train_0\\common_voice_hi_26008353.mp3\r\n```\r\n\r\nThat is, there's an extra directory `hi_train_0` that is not in the `path` element.\r\n\n\n### Environment info\n\n- `datasets` version: 2.12.0\r\n- Platform: Windows-10-10.0.22621-SP0\r\n- Python version: 3.11.3\r\n- Huggingface_hub version: 0.14.1\r\n- PyArrow version: 12.0.0\r\n- Pandas version: 2.0.1\r\n- ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5918\/reactions","total_count":2,"+1":2,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5918\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5917","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5917\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5917\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5917\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5917","id":1733661588,"node_id":"PR_kwDODunzps5RwoRU","number":5917,"title":"Refactor 
extensions","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008358 \/ 0.011353 (-0.002995) | 0.005673 \/ 0.011008 (-0.005335) | 0.124034 \/ 0.038508 (0.085526) | 0.037550 \/ 0.023109 (0.014441) | 0.331301 \/ 0.275898 (0.055403) | 0.383542 \/ 0.323480 (0.060062) | 0.006940 \/ 0.007986 (-0.001046) | 0.005959 \/ 0.004328 (0.001631) | 0.084670 \/ 0.004250 (0.080419) | 0.054214 \/ 0.037052 (0.017162) | 0.359897 \/ 0.258489 (0.101408) | 0.383260 \/ 0.293841 (0.089419) | 0.047642 \/ 0.128546 (-0.080904) | 0.013902 \/ 0.075646 (-0.061744) | 0.380232 \/ 0.419271 (-0.039040) | 0.077790 \/ 0.043533 (0.034257) | 0.376648 \/ 0.255139 (0.121509) | 0.387536 \/ 0.283200 (0.104336) | 0.104644 \/ 0.141683 (-0.037038) | 1.618560 \/ 1.452155 (0.166406) | 1.742569 \/ 1.492716 (0.249853) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.257218 \/ 0.018006 (0.239212) | 0.636801 \/ 0.000490 (0.636311) | 0.000634 \/ 0.000200 (0.000434) | 0.000101 \/ 0.000054 (0.000047) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.037874 \/ 0.037411 (0.000462) | 0.107454 \/ 0.014526 (0.092928) | 0.117855 \/ 0.176557 (-0.058702) | 0.204067 \/ 0.737135 (-0.533068) | 0.134029 \/ 0.296338 (-0.162310) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.583657 \/ 0.215209 (0.368447) | 5.761289 \/ 2.077655 (3.683635) | 2.280201 \/ 1.504120 (0.776081) | 2.033442 \/ 1.541195 (0.492247) | 2.035343 \/ 1.468490 (0.566853) | 0.868122 \/ 4.584777 (-3.716655) | 
5.352591 \/ 3.745712 (1.606879) | 2.432814 \/ 5.269862 (-2.837047) | 1.560765 \/ 4.565676 (-3.004911) | 0.098793 \/ 0.424275 (-0.325482) | 0.017327 \/ 0.007607 (0.009720) | 0.734676 \/ 0.226044 (0.508631) | 7.070318 \/ 2.268929 (4.801390) | 2.972701 \/ 55.444624 (-52.471924) | 2.442189 \/ 6.876477 (-4.434288) | 2.604379 \/ 2.142072 (0.462307) | 1.028853 \/ 4.805227 (-3.776374) | 0.210390 \/ 6.500664 (-6.290274) | 0.069329 \/ 0.075469 (-0.006140) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.469586 \/ 1.841788 (-0.372202) | 16.570305 \/ 8.074308 (8.495997) | 19.187845 \/ 10.191392 (8.996453) | 0.219162 \/ 0.680424 (-0.461262) | 0.026356 \/ 0.534201 (-0.507845) | 0.447370 \/ 0.579283 (-0.131913) | 0.555893 \/ 0.434364 (0.121529) | 0.574958 \/ 0.540337 (0.034621) | 0.639166 \/ 1.386936 (-0.747770) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008166 \/ 0.011353 (-0.003187) | 0.005577 \/ 0.011008 (-0.005431) | 0.103578 \/ 0.038508 (0.065070) | 0.040563 \/ 0.023109 (0.017454) | 0.441996 \/ 0.275898 (0.166098) | 0.483594 \/ 0.323480 (0.160114) | 0.007329 \/ 0.007986 (-0.000657) | 0.004546 \/ 0.004328 (0.000218) | 0.090471 \/ 0.004250 (0.086220) | 0.052740 \/ 0.037052 (0.015688) | 0.442197 \/ 0.258489 (0.183708) | 0.524310 \/ 0.293841 (0.230469) | 0.042487 \/ 0.128546 (-0.086060) | 0.012917 \/ 0.075646 (-0.062730) | 0.103992 \/ 0.419271 (-0.315280) | 0.060570 \/ 0.043533 (0.017037) | 0.441956 \/ 0.255139 (0.186817) | 0.477084 \/ 0.283200 (0.193885) | 0.103815 \/ 0.141683 (-0.037868) | 1.696963 \/ 1.452155 (0.244809) | 1.747849 \/ 1.492716 (0.255132) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.292465 \/ 0.018006 (0.274458) | 0.571518 \/ 0.000490 (0.571028) | 0.000476 \/ 0.000200 (0.000276) | 0.000077 \/ 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.028697 \/ 0.037411 (-0.008714) | 0.111671 \/ 0.014526 (0.097145) | 0.138826 \/ 0.176557 (-0.037731) | 0.189697 \/ 0.737135 (-0.547439) | 0.125454 \/ 0.296338 (-0.170884) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.619273 \/ 0.215209 (0.404064) | 6.138669 \/ 2.077655 (4.061015) | 2.558622 \/ 1.504120 (1.054502) | 2.201550 \/ 1.541195 (0.660356) | 2.279034 \/ 1.468490 (0.810544) | 0.850752 \/ 4.584777 (-3.734025) | 
5.438185 \/ 3.745712 (1.692473) | 2.529343 \/ 5.269862 (-2.740518) | 1.572178 \/ 4.565676 (-2.993499) | 0.100768 \/ 0.424275 (-0.323507) | 0.013902 \/ 0.007607 (0.006295) | 0.726660 \/ 0.226044 (0.500616) | 7.794918 \/ 2.268929 (5.525990) | 3.311695 \/ 55.444624 (-52.132930) | 2.729167 \/ 6.876477 (-4.147310) | 2.630984 \/ 2.142072 (0.488911) | 1.018534 \/ 4.805227 (-3.786693) | 0.194602 \/ 6.500664 (-6.306062) | 0.070876 \/ 0.075469 (-0.004593) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.573005 \/ 1.841788 (-0.268783) | 17.042710 \/ 8.074308 (8.968401) | 19.615320 \/ 10.191392 (9.423928) | 0.229405 \/ 0.680424 (-0.451019) | 0.027560 \/ 0.534201 (-0.506641) | 0.447984 \/ 0.579283 (-0.131299) | 0.598392 \/ 0.434364 (0.164028) | 0.571769 \/ 0.540337 (0.031431) | 0.653025 \/ 1.386936 (-0.733911) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#9dca2ff89a8589595313e9535d16597ce10e3700 \"CML watermark\")\n"],"created_at":1685521982000,"updated_at":1685540075000,"closed_at":1685539557000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5917","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5917","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5917.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5917.patch","merged_at":1685539557000},"body":"Related to:\r\n- #5850","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5917\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5917\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5916","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5916\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5916\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5916\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5916","id":1732456392,"node_id":"PR_kwDODunzps5RskTb","number":5916,"title":"Unpin 
responses","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006113 \/ 0.011353 (-0.005239) | 0.004195 \/ 0.011008 (-0.006813) | 0.098103 \/ 0.038508 (0.059595) | 0.027970 \/ 0.023109 (0.004860) | 0.300992 \/ 0.275898 (0.025094) | 0.335402 \/ 0.323480 (0.011922) | 0.005079 \/ 0.007986 (-0.002906) | 0.003516 \/ 0.004328 (-0.000813) | 0.077311 \/ 0.004250 (0.073061) | 0.037863 \/ 0.037052 (0.000810) | 0.302638 \/ 0.258489 (0.044149) | 0.346554 \/ 0.293841 (0.052713) | 0.025218 \/ 0.128546 (-0.103328) | 0.008630 \/ 0.075646 (-0.067017) | 0.319748 \/ 0.419271 (-0.099523) | 0.049182 \/ 0.043533 (0.005650) | 0.306233 \/ 0.255139 (0.051094) | 0.331040 \/ 0.283200 (0.047840) | 0.089203 \/ 0.141683 (-0.052480) | 1.496104 \/ 1.452155 (0.043949) | 1.567878 \/ 1.492716 (0.075162) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.215774 \/ 0.018006 (0.197768) | 0.436810 \/ 0.000490 (0.436320) | 0.000307 \/ 0.000200 (0.000107) | 0.000059 \/ 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.024102 \/ 0.037411 (-0.013310) | 0.095459 \/ 0.014526 (0.080933) | 0.106564 \/ 0.176557 (-0.069992) | 0.169894 \/ 0.737135 (-0.567241) | 0.109152 \/ 0.296338 (-0.187186) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.429066 \/ 0.215209 (0.213857) | 4.297385 \/ 2.077655 (2.219730) | 2.054854 \/ 1.504120 (0.550734) | 1.846844 \/ 1.541195 (0.305649) | 1.840807 \/ 1.468490 (0.372317) | 0.553193 \/ 4.584777 (-4.031584) | 
3.366788 \/ 3.745712 (-0.378924) | 1.727337 \/ 5.269862 (-3.542525) | 0.994357 \/ 4.565676 (-3.571319) | 0.067790 \/ 0.424275 (-0.356485) | 0.012002 \/ 0.007607 (0.004395) | 0.533335 \/ 0.226044 (0.307291) | 5.341341 \/ 2.268929 (3.072412) | 2.543581 \/ 55.444624 (-52.901043) | 2.220374 \/ 6.876477 (-4.656103) | 2.321656 \/ 2.142072 (0.179583) | 0.654408 \/ 4.805227 (-4.150819) | 0.134693 \/ 6.500664 (-6.365971) | 0.066926 \/ 0.075469 (-0.008544) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.209463 \/ 1.841788 (-0.632325) | 13.568221 \/ 8.074308 (5.493913) | 13.965418 \/ 10.191392 (3.774026) | 0.145049 \/ 0.680424 (-0.535375) | 0.016936 \/ 0.534201 (-0.517265) | 0.371587 \/ 0.579283 (-0.207696) | 0.386363 \/ 0.434364 (-0.048001) | 0.437137 \/ 0.540337 (-0.103201) | 0.514779 \/ 1.386936 (-0.872157) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006245 \/ 0.011353 (-0.005108) | 0.004232 \/ 0.011008 (-0.006776) | 0.075682 \/ 0.038508 (0.037174) | 0.027858 \/ 0.023109 (0.004749) | 0.425325 \/ 0.275898 (0.149427) | 0.466732 \/ 0.323480 (0.143253) | 0.005240 \/ 0.007986 (-0.002745) | 0.003506 \/ 0.004328 (-0.000823) | 0.075294 \/ 0.004250 (0.071044) | 0.041677 \/ 0.037052 (0.004624) | 0.426552 \/ 0.258489 (0.168063) | 0.469452 \/ 0.293841 (0.175611) | 0.025443 \/ 0.128546 (-0.103104) | 0.008526 \/ 0.075646 (-0.067120) | 0.082190 \/ 0.419271 (-0.337081) | 0.040906 \/ 0.043533 (-0.002626) | 0.428406 \/ 0.255139 (0.173267) | 0.446795 \/ 0.283200 (0.163595) | 0.093837 \/ 0.141683 (-0.047846) | 1.518639 \/ 1.452155 (0.066484) | 1.620214 \/ 1.492716 (0.127498) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.223259 \/ 0.018006 (0.205253) | 0.425077 \/ 0.000490 (0.424588) | 0.001980 \/ 0.000200 (0.001780) | 0.000077 \/ 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025813 \/ 0.037411 (-0.011599) | 0.103062 \/ 0.014526 (0.088536) | 0.108958 \/ 0.176557 (-0.067598) | 0.161591 \/ 0.737135 (-0.575544) | 0.112130 \/ 0.296338 (-0.184209) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.472843 \/ 0.215209 (0.257634) | 4.713281 \/ 2.077655 (2.635626) | 2.458216 \/ 1.504120 (0.954096) | 2.272467 \/ 1.541195 (0.731273) | 2.324456 \/ 1.468490 (0.855965) | 0.554686 \/ 4.584777 (-4.030091) | 
3.445079 \/ 3.745712 (-0.300634) | 3.451896 \/ 5.269862 (-1.817966) | 1.431065 \/ 4.565676 (-3.134612) | 0.067868 \/ 0.424275 (-0.356407) | 0.012093 \/ 0.007607 (0.004486) | 0.573571 \/ 0.226044 (0.347526) | 5.820452 \/ 2.268929 (3.551523) | 2.934858 \/ 55.444624 (-52.509767) | 2.602719 \/ 6.876477 (-4.273758) | 2.645999 \/ 2.142072 (0.503927) | 0.660688 \/ 4.805227 (-4.144540) | 0.137490 \/ 6.500664 (-6.363174) | 0.068311 \/ 0.075469 (-0.007158) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.321709 \/ 1.841788 (-0.520079) | 14.592346 \/ 8.074308 (6.518038) | 14.520748 \/ 10.191392 (4.329356) | 0.132689 \/ 0.680424 (-0.547735) | 0.016422 \/ 0.534201 (-0.517779) | 0.370071 \/ 0.579283 (-0.209212) | 0.397091 \/ 0.434364 (-0.037273) | 0.431979 \/ 0.540337 (-0.108358) | 0.509965 \/ 1.386936 (-0.876971) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#8bcd061ab2082a0862f30329bc52f6e0d321805c \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006182 \/ 0.011353 (-0.005171) | 0.004153 \/ 0.011008 (-0.006855) | 0.095715 \/ 0.038508 (0.057207) | 0.032457 \/ 0.023109 (0.009347) | 0.314961 \/ 0.275898 (0.039063) | 0.353696 \/ 0.323480 (0.030216) | 0.005256 \/ 0.007986 (-0.002729) | 0.004870 \/ 0.004328 (0.000541) | 0.072442 \/ 0.004250 (0.068192) | 0.046102 \/ 0.037052 (0.009050) | 0.324410 \/ 0.258489 (0.065921) | 0.366861 \/ 0.293841 (0.073020) | 0.027088 \/ 0.128546 (-0.101458) | 0.008572 \/ 0.075646 (-0.067075) | 0.325988 \/ 0.419271 (-0.093284) | 0.049494 \/ 0.043533 (0.005961) | 0.311221 \/ 0.255139 (0.056082) | 0.359720 \/ 0.283200 (0.076521) | 0.095101 \/ 0.141683 (-0.046581) | 1.472821 \/ 1.452155 (0.020667) | 1.516157 \/ 1.492716 (0.023441) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.210456 \/ 0.018006 (0.192450) | 0.439440 \/ 0.000490 (0.438950) | 0.003764 \/ 0.000200 (0.003564) | 0.000087 \/ 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.024076 \/ 0.037411 (-0.013335) | 0.104886 \/ 0.014526 (0.090360) | 0.114164 \/ 0.176557 (-0.062393) | 0.167289 \/ 0.737135 (-0.569847) | 0.116457 \/ 0.296338 (-0.179882) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.400039 \/ 0.215209 (0.184830) | 3.973243 \/ 2.077655 (1.895588) | 1.801991 \/ 1.504120 (0.297871) | 1.592017 \/ 1.541195 (0.050822) | 1.612564 \/ 1.468490 (0.144074) | 0.527475 \/ 4.584777 (-4.057302) | 
3.676246 \/ 3.745712 (-0.069466) | 1.806423 \/ 5.269862 (-3.463438) | 1.176921 \/ 4.565676 (-3.388756) | 0.065902 \/ 0.424275 (-0.358373) | 0.012245 \/ 0.007607 (0.004638) | 0.490883 \/ 0.226044 (0.264838) | 4.905270 \/ 2.268929 (2.636341) | 2.218694 \/ 55.444624 (-53.225930) | 1.903074 \/ 6.876477 (-4.973403) | 1.979505 \/ 2.142072 (-0.162567) | 0.644415 \/ 4.805227 (-4.160812) | 0.142433 \/ 6.500664 (-6.358231) | 0.063564 \/ 0.075469 (-0.011905) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.193756 \/ 1.841788 (-0.648032) | 14.673103 \/ 8.074308 (6.598795) | 13.410951 \/ 10.191392 (3.219559) | 0.159175 \/ 0.680424 (-0.521249) | 0.017076 \/ 0.534201 (-0.517125) | 0.388880 \/ 0.579283 (-0.190403) | 0.409974 \/ 0.434364 (-0.024390) | 0.454494 \/ 0.540337 (-0.085844) | 0.556873 \/ 1.386936 (-0.830063) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006107 \/ 0.011353 (-0.005246) | 0.004433 \/ 0.011008 (-0.006575) | 0.073892 \/ 0.038508 (0.035384) | 0.032386 \/ 0.023109 (0.009277) | 0.370339 \/ 0.275898 (0.094441) | 0.388996 \/ 0.323480 (0.065516) | 0.005438 \/ 0.007986 (-0.002548) | 0.003875 \/ 0.004328 (-0.000454) | 0.073867 \/ 0.004250 (0.069617) | 0.048350 \/ 0.037052 (0.011298) | 0.380328 \/ 0.258489 (0.121839) | 0.411373 \/ 0.293841 (0.117532) | 0.028183 \/ 0.128546 (-0.100363) | 0.008924 \/ 0.075646 (-0.066723) | 0.082484 \/ 0.419271 (-0.336787) | 0.047321 \/ 0.043533 (0.003788) | 0.371702 \/ 0.255139 (0.116563) | 0.380535 \/ 0.283200 (0.097335) | 0.100772 \/ 0.141683 (-0.040911) | 1.475038 \/ 1.452155 (0.022883) | 1.564293 \/ 1.492716 (0.071577) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.214589 \/ 0.018006 (0.196583) | 0.437193 \/ 0.000490 (0.436703) | 0.003676 \/ 0.000200 (0.003476) | 0.000094 \/ 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.027991 \/ 0.037411 (-0.009421) | 0.111154 \/ 0.014526 (0.096628) | 0.120365 \/ 0.176557 (-0.056191) | 0.173601 \/ 0.737135 (-0.563535) | 0.126244 \/ 0.296338 (-0.170094) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.442848 \/ 0.215209 (0.227639) | 4.398336 \/ 2.077655 (2.320681) | 2.217058 \/ 1.504120 (0.712938) | 2.011155 \/ 1.541195 (0.469960) | 2.123086 \/ 1.468490 (0.654596) | 0.525857 \/ 4.584777 (-4.058920) | 
3.730191 \/ 3.745712 (-0.015521) | 3.517680 \/ 5.269862 (-1.752181) | 1.557940 \/ 4.565676 (-3.007736) | 0.066309 \/ 0.424275 (-0.357967) | 0.011788 \/ 0.007607 (0.004181) | 0.548506 \/ 0.226044 (0.322462) | 5.483615 \/ 2.268929 (3.214687) | 2.663784 \/ 55.444624 (-52.780840) | 2.325744 \/ 6.876477 (-4.550732) | 2.344179 \/ 2.142072 (0.202106) | 0.644217 \/ 4.805227 (-4.161010) | 0.141546 \/ 6.500664 (-6.359118) | 0.063730 \/ 0.075469 (-0.011739) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.296032 \/ 1.841788 (-0.545756) | 14.903729 \/ 8.074308 (6.829421) | 14.505409 \/ 10.191392 (4.314017) | 0.170478 \/ 0.680424 (-0.509946) | 0.017876 \/ 0.534201 (-0.516325) | 0.401047 \/ 0.579283 (-0.178236) | 0.417855 \/ 0.434364 (-0.016509) | 0.472138 \/ 0.540337 (-0.068200) | 0.570859 \/ 1.386936 (-0.816077) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#5a4d530965eb35c66955ef89df79210c66b7f5e6 \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008495 \/ 0.011353 (-0.002858) | 0.005322 \/ 0.011008 (-0.005686) | 0.125471 \/ 0.038508 (0.086962) | 0.034604 \/ 0.023109 (0.011495) | 0.419831 \/ 0.275898 (0.143933) | 0.415707 \/ 0.323480 (0.092227) | 0.007471 \/ 0.007986 (-0.000515) | 0.005441 \/ 0.004328 (0.001112) | 0.095412 \/ 0.004250 (0.091162) | 0.053865 \/ 0.037052 (0.016812) | 0.375257 \/ 0.258489 (0.116768) | 0.438114 \/ 0.293841 (0.144273) | 0.046183 \/ 0.128546 (-0.082363) | 0.013663 \/ 0.075646 (-0.061984) | 0.438317 \/ 0.419271 (0.019045) | 0.065665 \/ 0.043533 (0.022133) | 0.387640 \/ 0.255139 (0.132501) | 0.431350 \/ 0.283200 (0.148150) | 0.112841 \/ 0.141683 (-0.028842) | 1.778639 \/ 1.452155 (0.326484) | 1.891948 \/ 1.492716 (0.399232) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.284371 \/ 0.018006 (0.266365) | 0.598247 \/ 0.000490 (0.597758) | 0.013674 \/ 0.000200 (0.013474) | 0.000483 \/ 0.000054 (0.000428) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.032437 \/ 0.037411 (-0.004974) | 0.120547 \/ 0.014526 (0.106021) | 0.129845 \/ 0.176557 (-0.046711) | 0.203455 \/ 0.737135 (-0.533680) | 0.140039 \/ 0.296338 (-0.156300) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.596549 \/ 0.215209 (0.381340) | 6.138766 \/ 2.077655 (4.061111) | 2.515506 \/ 1.504120 (1.011386) | 2.124472 \/ 1.541195 (0.583277) | 2.160812 \/ 1.468490 (0.692322) | 0.898965 \/ 4.584777 (-3.685812) | 
5.588152 \/ 3.745712 (1.842440) | 2.717580 \/ 5.269862 (-2.552282) | 1.683641 \/ 4.565676 (-2.882036) | 0.108045 \/ 0.424275 (-0.316230) | 0.014089 \/ 0.007607 (0.006481) | 0.749567 \/ 0.226044 (0.523523) | 7.518051 \/ 2.268929 (5.249123) | 3.198238 \/ 55.444624 (-52.246386) | 2.575156 \/ 6.876477 (-4.301321) | 2.725818 \/ 2.142072 (0.583745) | 1.149338 \/ 4.805227 (-3.655889) | 0.220443 \/ 6.500664 (-6.280221) | 0.081452 \/ 0.075469 (0.005983) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.624462 \/ 1.841788 (-0.217325) | 18.204963 \/ 8.074308 (10.130655) | 21.379169 \/ 10.191392 (11.187777) | 0.248520 \/ 0.680424 (-0.431903) | 0.030121 \/ 0.534201 (-0.504080) | 0.499542 \/ 0.579283 (-0.079741) | 0.599783 \/ 0.434364 (0.165419) | 0.597642 \/ 0.540337 (0.057305) | 0.681948 \/ 1.386936 (-0.704988) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008431 \/ 0.011353 (-0.002921) | 0.006143 \/ 0.011008 (-0.004865) | 0.107531 \/ 0.038508 (0.069023) | 0.036308 \/ 0.023109 (0.013199) | 0.480555 \/ 0.275898 (0.204657) | 0.556407 \/ 0.323480 (0.232927) | 0.007614 \/ 0.007986 (-0.000372) | 0.004749 \/ 0.004328 (0.000421) | 0.105734 \/ 0.004250 (0.101484) | 0.051619 \/ 0.037052 (0.014567) | 0.514821 \/ 0.258489 (0.256332) | 0.562143 \/ 0.293841 (0.268302) | 0.042957 \/ 0.128546 (-0.085589) | 0.015142 \/ 0.075646 (-0.060505) | 0.143161 \/ 0.419271 (-0.276111) | 0.061910 \/ 0.043533 (0.018377) | 0.496923 \/ 0.255139 (0.241784) | 0.556302 \/ 0.283200 (0.273102) | 0.136700 \/ 0.141683 (-0.004983) | 1.886184 \/ 1.452155 (0.434029) | 2.004087 \/ 1.492716 (0.511371) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.235530 \/ 0.018006 (0.217523) | 0.600796 \/ 0.000490 (0.600306) | 0.009074 \/ 0.000200 (0.008874) | 0.000203 \/ 0.000054 (0.000149) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.036345 \/ 0.037411 (-0.001066) | 0.126112 \/ 0.014526 (0.111586) | 0.143369 \/ 0.176557 (-0.033188) | 0.211381 \/ 0.737135 (-0.525755) | 0.151095 \/ 0.296338 (-0.145243) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.695022 \/ 0.215209 (0.479813) | 6.685981 \/ 2.077655 (4.608326) | 3.104521 \/ 1.504120 (1.600401) | 2.758323 \/ 1.541195 (1.217128) | 2.706286 \/ 1.468490 (1.237796) | 0.941182 \/ 4.584777 (-3.643595) | 
5.715839 \/ 3.745712 (1.970127) | 5.089636 \/ 5.269862 (-0.180226) | 2.594739 \/ 4.565676 (-1.970937) | 0.112621 \/ 0.424275 (-0.311655) | 0.014001 \/ 0.007607 (0.006394) | 0.812990 \/ 0.226044 (0.586945) | 8.060890 \/ 2.268929 (5.791961) | 3.832506 \/ 55.444624 (-51.612119) | 3.148051 \/ 6.876477 (-3.728425) | 3.110096 \/ 2.142072 (0.968023) | 1.105050 \/ 4.805227 (-3.700178) | 0.219835 \/ 6.500664 (-6.280829) | 0.078600 \/ 0.075469 (0.003131) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.707551 \/ 1.841788 (-0.134237) | 19.238194 \/ 8.074308 (11.163885) | 22.167076 \/ 10.191392 (11.975684) | 0.233458 \/ 0.680424 (-0.446966) | 0.025131 \/ 0.534201 (-0.509070) | 0.525241 \/ 0.579283 (-0.054042) | 0.649666 \/ 0.434364 (0.215303) | 0.602941 \/ 0.540337 (0.062603) | 0.718472 \/ 1.386936 (-0.668464) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#ac3a42c525d91cb630273702a0c110a71c9bf54b \"CML watermark\")\n"],"created_at":1685458788000,"updated_at":1685469790000,"closed_at":1685469209000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5916","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5916","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5916.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5916.patch","merged_at":1685469209000},"body":"Fix #5906","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5916\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5916\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5915","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5915\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5915\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5915\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5915","id":1732389984,"node_id":"PR_kwDODunzps5RsVzj","number":5915,"title":"Raise error in `DatasetBuilder.as_dataset` when `file_format` is not 
`\"arrow\"`","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006416 \/ 0.011353 (-0.004937) | 0.004278 \/ 0.011008 (-0.006731) | 0.097562 \/ 0.038508 (0.059054) | 0.029488 \/ 0.023109 (0.006379) | 0.308648 \/ 0.275898 (0.032750) | 0.339879 \/ 0.323480 (0.016399) | 0.005288 \/ 0.007986 (-0.002697) | 0.005033 \/ 0.004328 (0.000704) | 0.074666 \/ 0.004250 (0.070416) | 0.034888 \/ 0.037052 (-0.002164) | 0.309960 \/ 0.258489 (0.051471) | 0.344276 \/ 0.293841 (0.050435) | 0.025564 \/ 0.128546 (-0.102982) | 0.008579 \/ 0.075646 (-0.067067) | 0.319796 \/ 0.419271 (-0.099476) | 0.044786 \/ 0.043533 (0.001253) | 0.308888 \/ 0.255139 (0.053749) | 0.334001 \/ 0.283200 (0.050802) | 0.089917 \/ 0.141683 (-0.051766) | 1.456696 \/ 1.452155 (0.004541) | 1.542273 \/ 1.492716 (0.049557) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.213236 \/ 0.018006 (0.195230) | 0.425139 \/ 0.000490 (0.424650) | 0.008831 \/ 0.000200 (0.008631) | 0.000209 \/ 0.000054 (0.000155) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.023990 \/ 0.037411 (-0.013421) | 0.096787 \/ 0.014526 (0.082261) | 0.105783 \/ 0.176557 (-0.070774) | 0.167182 \/ 0.737135 (-0.569954) | 0.108896 \/ 0.296338 (-0.187442) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.419844 \/ 0.215209 (0.204635) | 4.201909 \/ 2.077655 (2.124254) | 1.910784 \/ 1.504120 (0.406664) | 1.685183 \/ 1.541195 (0.143988) | 1.716927 \/ 1.468490 (0.248437) | 0.548261 \/ 4.584777 (-4.036516) | 
3.414168 \/ 3.745712 (-0.331544) | 1.695446 \/ 5.269862 (-3.574415) | 0.989668 \/ 4.565676 (-3.576008) | 0.067328 \/ 0.424275 (-0.356948) | 0.012084 \/ 0.007607 (0.004477) | 0.523799 \/ 0.226044 (0.297754) | 5.240589 \/ 2.268929 (2.971661) | 2.331618 \/ 55.444624 (-53.113007) | 1.996094 \/ 6.876477 (-4.880383) | 2.105450 \/ 2.142072 (-0.036623) | 0.654614 \/ 4.805227 (-4.150613) | 0.134721 \/ 6.500664 (-6.365943) | 0.066227 \/ 0.075469 (-0.009242) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.196266 \/ 1.841788 (-0.645521) | 13.990045 \/ 8.074308 (5.915737) | 13.928126 \/ 10.191392 (3.736734) | 0.142600 \/ 0.680424 (-0.537824) | 0.016462 \/ 0.534201 (-0.517739) | 0.363113 \/ 0.579283 (-0.216170) | 0.428590 \/ 0.434364 (-0.005773) | 0.452594 \/ 0.540337 (-0.087743) | 0.551678 \/ 1.386936 (-0.835258) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.005992 \/ 0.011353 (-0.005361) | 0.004161 \/ 0.011008 (-0.006847) | 0.076098 \/ 0.038508 (0.037589) | 0.028559 \/ 0.023109 (0.005450) | 0.411696 \/ 0.275898 (0.135798) | 0.444519 \/ 0.323480 (0.121040) | 0.004965 \/ 0.007986 (-0.003021) | 0.003452 \/ 0.004328 (-0.000876) | 0.075107 \/ 0.004250 (0.070857) | 0.037305 \/ 0.037052 (0.000252) | 0.429728 \/ 0.258489 (0.171239) | 0.444313 \/ 0.293841 (0.150472) | 0.025278 \/ 0.128546 (-0.103268) | 0.008527 \/ 0.075646 (-0.067120) | 0.081502 \/ 0.419271 (-0.337770) | 0.041237 \/ 0.043533 (-0.002296) | 0.417848 \/ 0.255139 (0.162709) | 0.426615 \/ 0.283200 (0.143415) | 0.094641 \/ 0.141683 (-0.047041) | 1.525141 \/ 1.452155 (0.072987) | 1.615608 \/ 1.492716 (0.122892) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.192867 \/ 0.018006 (0.174861) | 0.414979 \/ 0.000490 (0.414490) | 0.000815 \/ 0.000200 (0.000615) | 0.000068 \/ 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025354 \/ 0.037411 (-0.012058) | 0.102085 \/ 0.014526 (0.087559) | 0.107930 \/ 0.176557 (-0.068626) | 0.160483 \/ 0.737135 (-0.576652) | 0.112341 \/ 0.296338 (-0.183997) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.446938 \/ 0.215209 (0.231728) | 4.480057 \/ 2.077655 (2.402402) | 2.154825 \/ 1.504120 (0.650705) | 1.942774 \/ 1.541195 (0.401580) | 1.996418 \/ 1.468490 (0.527928) | 0.556728 \/ 4.584777 (-4.028049) | 
3.441228 \/ 3.745712 (-0.304484) | 3.004179 \/ 5.269862 (-2.265683) | 1.314104 \/ 4.565676 (-3.251573) | 0.068670 \/ 0.424275 (-0.355606) | 0.011972 \/ 0.007607 (0.004365) | 0.556604 \/ 0.226044 (0.330560) | 5.561783 \/ 2.268929 (3.292855) | 2.631262 \/ 55.444624 (-52.813363) | 2.262143 \/ 6.876477 (-4.614333) | 2.364243 \/ 2.142072 (0.222170) | 0.660621 \/ 4.805227 (-4.144607) | 0.137371 \/ 6.500664 (-6.363293) | 0.069104 \/ 0.075469 (-0.006365) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.305706 \/ 1.841788 (-0.536081) | 14.015932 \/ 8.074308 (5.941624) | 14.353580 \/ 10.191392 (4.162187) | 0.146172 \/ 0.680424 (-0.534251) | 0.016699 \/ 0.534201 (-0.517502) | 0.357970 \/ 0.579283 (-0.221313) | 0.389067 \/ 0.434364 (-0.045297) | 0.415470 \/ 0.540337 (-0.124867) | 0.501359 \/ 1.386936 (-0.885577) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#b2b837b4e7267db9e32d2613d8bf8d70d2ce0b47 \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006800 \/ 0.011353 (-0.004552) | 0.004721 \/ 0.011008 (-0.006287) | 0.097760 \/ 0.038508 (0.059252) | 0.034192 \/ 0.023109 (0.011083) | 0.298240 \/ 0.275898 (0.022342) | 0.331119 \/ 0.323480 (0.007639) | 0.005826 \/ 0.007986 (-0.002160) | 0.003968 \/ 0.004328 (-0.000360) | 0.073833 \/ 0.004250 (0.069582) | 0.046288 \/ 0.037052 (0.009236) | 0.303018 \/ 0.258489 (0.044529) | 0.342163 \/ 0.293841 (0.048322) | 0.028504 \/ 0.128546 (-0.100042) | 0.009031 \/ 0.075646 (-0.066615) | 0.331617 \/ 0.419271 (-0.087655) | 0.060911 \/ 0.043533 (0.017379) | 0.304044 \/ 0.255139 (0.048905) | 0.328959 \/ 0.283200 (0.045759) | 0.113174 \/ 0.141683 (-0.028509) | 1.424652 \/ 1.452155 (-0.027502) | 1.531392 \/ 1.492716 (0.038676) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.206175 \/ 0.018006 (0.188169) | 0.435916 \/ 0.000490 (0.435426) | 0.002587 \/ 0.000200 (0.002387) | 0.000083 \/ 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.026996 \/ 0.037411 (-0.010415) | 0.106722 \/ 0.014526 (0.092196) | 0.117655 \/ 0.176557 (-0.058902) | 0.176969 \/ 0.737135 (-0.560166) | 0.122577 \/ 0.296338 (-0.173762) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.396086 \/ 0.215209 (0.180877) | 3.972465 \/ 2.077655 (1.894811) | 1.800798 \/ 1.504120 (0.296678) | 1.616747 \/ 1.541195 (0.075552) | 1.680711 \/ 1.468490 (0.212221) | 0.526479 \/ 4.584777 (-4.058298) | 
3.791528 \/ 3.745712 (0.045816) | 2.989518 \/ 5.269862 (-2.280344) | 1.463221 \/ 4.565676 (-3.102455) | 0.065649 \/ 0.424275 (-0.358626) | 0.012155 \/ 0.007607 (0.004548) | 0.500241 \/ 0.226044 (0.274197) | 5.008895 \/ 2.268929 (2.739966) | 2.315288 \/ 55.444624 (-53.129336) | 1.959409 \/ 6.876477 (-4.917067) | 2.102371 \/ 2.142072 (-0.039701) | 0.639611 \/ 4.805227 (-4.165617) | 0.140101 \/ 6.500664 (-6.360563) | 0.063599 \/ 0.075469 (-0.011870) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.206729 \/ 1.841788 (-0.635059) | 15.127250 \/ 8.074308 (7.052942) | 14.397228 \/ 10.191392 (4.205836) | 0.148802 \/ 0.680424 (-0.531622) | 0.017628 \/ 0.534201 (-0.516573) | 0.396150 \/ 0.579283 (-0.183133) | 0.435826 \/ 0.434364 (0.001462) | 0.471215 \/ 0.540337 (-0.069122) | 0.559413 \/ 1.386936 (-0.827523) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006479 \/ 0.011353 (-0.004874) | 0.004520 \/ 0.011008 (-0.006488) | 0.074395 \/ 0.038508 (0.035887) | 0.033400 \/ 0.023109 (0.010291) | 0.388411 \/ 0.275898 (0.112513) | 0.396714 \/ 0.323480 (0.073234) | 0.005736 \/ 0.007986 (-0.002250) | 0.004038 \/ 0.004328 (-0.000291) | 0.073595 \/ 0.004250 (0.069345) | 0.045207 \/ 0.037052 (0.008155) | 0.378096 \/ 0.258489 (0.119607) | 0.417830 \/ 0.293841 (0.123989) | 0.028365 \/ 0.128546 (-0.100181) | 0.008887 \/ 0.075646 (-0.066760) | 0.080766 \/ 0.419271 (-0.338505) | 0.046923 \/ 0.043533 (0.003390) | 0.376190 \/ 0.255139 (0.121051) | 0.385875 \/ 0.283200 (0.102675) | 0.107542 \/ 0.141683 (-0.034141) | 1.409257 \/ 1.452155 (-0.042898) | 1.518475 \/ 1.492716 (0.025759) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.223299 \/ 0.018006 (0.205292) | 0.440640 \/ 0.000490 (0.440150) | 0.000397 \/ 0.000200 (0.000197) | 0.000056 \/ 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.031388 \/ 0.037411 (-0.006024) | 0.113078 \/ 0.014526 (0.098552) | 0.124398 \/ 0.176557 (-0.052159) | 0.173802 \/ 0.737135 (-0.563333) | 0.129555 \/ 0.296338 (-0.166783) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.440220 \/ 0.215209 (0.225011) | 4.398052 \/ 2.077655 (2.320398) | 2.188396 \/ 1.504120 (0.684276) | 1.997811 \/ 1.541195 (0.456616) | 2.093338 \/ 1.468490 (0.624847) | 0.519597 \/ 4.584777 (-4.065180) | 
3.885795 \/ 3.745712 (0.140083) | 2.896327 \/ 5.269862 (-2.373534) | 1.245785 \/ 4.565676 (-3.319891) | 0.065675 \/ 0.424275 (-0.358600) | 0.011729 \/ 0.007607 (0.004121) | 0.541526 \/ 0.226044 (0.315482) | 5.406763 \/ 2.268929 (3.137834) | 2.722914 \/ 55.444624 (-52.721711) | 2.471111 \/ 6.876477 (-4.405366) | 2.541488 \/ 2.142072 (0.399415) | 0.633566 \/ 4.805227 (-4.171661) | 0.139622 \/ 6.500664 (-6.361042) | 0.064220 \/ 0.075469 (-0.011249) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.296097 \/ 1.841788 (-0.545690) | 15.095320 \/ 8.074308 (7.021012) | 14.300821 \/ 10.191392 (4.109429) | 0.145470 \/ 0.680424 (-0.534954) | 0.017496 \/ 0.534201 (-0.516705) | 0.400589 \/ 0.579283 (-0.178694) | 0.423091 \/ 0.434364 (-0.011273) | 0.468258 \/ 0.540337 (-0.072079) | 0.570873 \/ 1.386936 (-0.816063) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#aee6c67034d6ff298b2153a2fcdab97f14ee6d66 \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.005918 \/ 0.011353 (-0.005435) | 0.004393 \/ 0.011008 (-0.006615) | 0.091677 \/ 0.038508 (0.053169) | 0.033546 \/ 0.023109 (0.010437) | 0.344682 \/ 0.275898 (0.068784) | 0.388906 \/ 0.323480 (0.065426) | 0.005412 \/ 0.007986 (-0.002574) | 0.004909 \/ 0.004328 (0.000580) | 0.082589 \/ 0.004250 (0.078339) | 0.045242 \/ 0.037052 (0.008190) | 0.339191 \/ 0.258489 (0.080702) | 0.349673 \/ 0.293841 (0.055832) | 0.026805 \/ 0.128546 (-0.101742) | 0.007529 \/ 0.075646 (-0.068117) | 0.319108 \/ 0.419271 (-0.100164) | 0.049482 \/ 0.043533 (0.005949) | 0.320013 \/ 0.255139 (0.064874) | 0.342059 \/ 0.283200 (0.058859) | 0.096623 \/ 0.141683 (-0.045060) | 1.458204 \/ 1.452155 (0.006049) | 1.571172 \/ 1.492716 (0.078455) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.235171 \/ 0.018006 (0.217165) | 0.479678 \/ 0.000490 (0.479188) | 0.006627 \/ 0.000200 (0.006427) | 0.000257 \/ 0.000054 (0.000202) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025716 \/ 0.037411 (-0.011696) | 0.107730 \/ 0.014526 (0.093204) | 0.111595 \/ 0.176557 (-0.064962) | 0.171316 \/ 0.737135 (-0.565819) | 0.118962 \/ 0.296338 (-0.177377) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.376318 \/ 0.215209 (0.161109) | 4.039484 \/ 2.077655 (1.961829) | 1.811548 \/ 1.504120 (0.307428) | 1.646728 \/ 1.541195 (0.105533) | 1.688071 \/ 1.468490 (0.219581) | 0.551256 \/ 4.584777 (-4.033520) | 
4.153931 \/ 3.745712 (0.408218) | 3.424154 \/ 5.269862 (-1.845707) | 1.734860 \/ 4.565676 (-2.830816) | 0.067753 \/ 0.424275 (-0.356522) | 0.012699 \/ 0.007607 (0.005092) | 0.505722 \/ 0.226044 (0.279677) | 4.997321 \/ 2.268929 (2.728392) | 2.258755 \/ 55.444624 (-53.185869) | 1.954382 \/ 6.876477 (-4.922095) | 1.967545 \/ 2.142072 (-0.174527) | 0.630489 \/ 4.805227 (-4.174738) | 0.138738 \/ 6.500664 (-6.361926) | 0.064907 \/ 0.075469 (-0.010562) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.209634 \/ 1.841788 (-0.632154) | 15.055062 \/ 8.074308 (6.980754) | 12.721606 \/ 10.191392 (2.530214) | 0.164908 \/ 0.680424 (-0.515516) | 0.019528 \/ 0.534201 (-0.514673) | 0.400136 \/ 0.579283 (-0.179147) | 0.451640 \/ 0.434364 (0.017276) | 0.466272 \/ 0.540337 (-0.074065) | 0.553258 \/ 1.386936 (-0.833679) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006341 \/ 0.011353 (-0.005011) | 0.004617 \/ 0.011008 (-0.006391) | 0.077953 \/ 0.038508 (0.039445) | 0.031104 \/ 0.023109 (0.007995) | 0.360328 \/ 0.275898 (0.084430) | 0.408403 \/ 0.323480 (0.084923) | 0.005704 \/ 0.007986 (-0.002282) | 0.003588 \/ 0.004328 (-0.000741) | 0.071441 \/ 0.004250 (0.067190) | 0.043520 \/ 0.037052 (0.006468) | 0.375798 \/ 0.258489 (0.117309) | 0.400955 \/ 0.293841 (0.107114) | 0.028166 \/ 0.128546 (-0.100381) | 0.008578 \/ 0.075646 (-0.067068) | 0.086673 \/ 0.419271 (-0.332598) | 0.046424 \/ 0.043533 (0.002891) | 0.367276 \/ 0.255139 (0.112137) | 0.414550 \/ 0.283200 (0.131351) | 0.097355 \/ 0.141683 (-0.044328) | 1.465191 \/ 1.452155 (0.013036) | 1.555028 \/ 1.492716 (0.062312) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.196642 \/ 0.018006 (0.178636) | 0.464221 \/ 0.000490 (0.463731) | 0.002726 \/ 0.000200 (0.002526) | 0.000110 \/ 0.000054 (0.000055) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.028078 \/ 0.037411 (-0.009333) | 0.110762 \/ 0.014526 (0.096236) | 0.122212 \/ 0.176557 (-0.054344) | 0.164758 \/ 0.737135 (-0.572377) | 0.133969 \/ 0.296338 (-0.162370) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.448134 \/ 0.215209 (0.232925) | 4.339335 \/ 2.077655 (2.261680) | 2.129209 \/ 1.504120 (0.625089) | 1.957805 \/ 1.541195 (0.416611) | 1.994038 \/ 1.468490 (0.525548) | 0.497101 \/ 4.584777 (-4.087676) | 
4.114432 \/ 3.745712 (0.368720) | 3.437305 \/ 5.269862 (-1.832556) | 1.692810 \/ 4.565676 (-2.872866) | 0.071077 \/ 0.424275 (-0.353198) | 0.012735 \/ 0.007607 (0.005128) | 0.534393 \/ 0.226044 (0.308348) | 5.217445 \/ 2.268929 (2.948517) | 2.594858 \/ 55.444624 (-52.849766) | 2.317464 \/ 6.876477 (-4.559012) | 2.337974 \/ 2.142072 (0.195902) | 0.622291 \/ 4.805227 (-4.182936) | 0.144934 \/ 6.500664 (-6.355730) | 0.068524 \/ 0.075469 (-0.006945) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.310601 \/ 1.841788 (-0.531187) | 15.771527 \/ 8.074308 (7.697219) | 13.952032 \/ 10.191392 (3.760640) | 0.212473 \/ 0.680424 (-0.467951) | 0.017963 \/ 0.534201 (-0.516238) | 0.400755 \/ 0.579283 (-0.178528) | 0.439817 \/ 0.434364 (0.005453) | 0.472614 \/ 0.540337 (-0.067724) | 0.558410 \/ 1.386936 (-0.828526) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#1b51429d02a0da1ff798873afe655309136c5689 \"CML watermark\")\n"],"created_at":1685456875000,"updated_at":1685539881000,"closed_at":1685539434000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5915","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5915","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5915.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5915.patch","merged_at":1685539434000},"body":"Raise an error in `DatasetBuilder.as_dataset` when `file_format != \"arrow\"` (and fix the docstring)\r\n\r\nFix #5874 ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5915\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5915\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5914","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5914\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5914\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5914\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5914","id":1731483996,"node_id":"I_kwDODunzps5nNFlc","number":5914,"title":"array is too big; `arr.size * arr.dtype.itemsize` is larger than the maximum possible size in 
Datasets","user":{"login":"ravenouse","id":85110830,"node_id":"MDQ6VXNlcjg1MTEwODMw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/85110830?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ravenouse","html_url":"https:\/\/github.com\/ravenouse","followers_url":"https:\/\/api.github.com\/users\/ravenouse\/followers","following_url":"https:\/\/api.github.com\/users\/ravenouse\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ravenouse\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ravenouse\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ravenouse\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ravenouse\/orgs","repos_url":"https:\/\/api.github.com\/users\/ravenouse\/repos","events_url":"https:\/\/api.github.com\/users\/ravenouse\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ravenouse\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1685420700000,"updated_at":1685420700000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nWhen using the `filter` or `map` function to preprocess a dataset, a ValueError is encountered with the error message \"array is too big; arr.size * arr.dtype.itemsize is larger than the maximum possible size.\" \r\n\r\nDetailed error message:\r\nTraceback (most recent call last):\r\n File \"data_processing.py\", line 26, in \r\n processed_dataset[split] = samromur_children[split].map(prepare_dataset, cache_file_name=cache_dict[split],writer_batch_size = 50)\r\n File \"\/projects\/zhwa3087\/software\/anaconda\/envs\/mycustomenv\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 2405, in map\r\n desc=desc,\r\n File \"\/projects\/zhwa3087\/software\/anaconda\/envs\/mycustomenv\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 557, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"\/projects\/zhwa3087\/software\/anaconda\/envs\/mycustomenv\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 524, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"\/projects\/zhwa3087\/software\/anaconda\/envs\/mycustomenv\/lib\/python3.7\/site-packages\/datasets\/fingerprint.py\", line 480, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \"\/projects\/zhwa3087\/software\/anaconda\/envs\/mycustomenv\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 2756, in _map_single\r\n example = apply_function_on_filtered_inputs(example, i, offset=offset)\r\n File \"\/projects\/zhwa3087\/software\/anaconda\/envs\/mycustomenv\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 2655, in apply_function_on_filtered_inputs\r\n processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)\r\n File \"\/projects\/zhwa3087\/software\/anaconda\/envs\/mycustomenv\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 2347, in decorated\r\n result = f(decorated_item, *args, **kwargs)\r\n File \"data_processing.py\", line 11, in prepare_dataset\r\n audio = batch[\"audio\"]\r\n File \"\/projects\/zhwa3087\/software\/anaconda\/envs\/mycustomenv\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 123, in __getitem__\r\n value = 
decode_nested_example(self.features[key], value) if value is not None else None\r\n File \"\/projects\/zhwa3087\/software\/anaconda\/envs\/mycustomenv\/lib\/python3.7\/site-packages\/datasets\/features\/features.py\", line 1260, in decode_nested_example\r\n return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None\r\n File \"\/projects\/zhwa3087\/software\/anaconda\/envs\/mycustomenv\/lib\/python3.7\/site-packages\/datasets\/features\/audio.py\", line 156, in decode_example\r\n array, sampling_rate = self._decode_non_mp3_path_like(path, token_per_repo_id=token_per_repo_id)\r\n File \"\/projects\/zhwa3087\/software\/anaconda\/envs\/mycustomenv\/lib\/python3.7\/site-packages\/datasets\/features\/audio.py\", line 257, in _decode_non_mp3_path_like\r\n array, sampling_rate = librosa.load(f, sr=self.sampling_rate, mono=self.mono)\r\n File \"\/projects\/zhwa3087\/software\/anaconda\/envs\/mycustomenv\/lib\/python3.7\/site-packages\/librosa\/core\/audio.py\", line 176, in load\r\n y, sr_native = __soundfile_load(path, offset, duration, dtype)\r\n File \"\/projects\/zhwa3087\/software\/anaconda\/envs\/mycustomenv\/lib\/python3.7\/site-packages\/librosa\/core\/audio.py\", line 222, in __soundfile_load\r\n y = sf_desc.read(frames=frame_duration, dtype=dtype, always_2d=False).T\r\n File \"\/projects\/zhwa3087\/software\/anaconda\/envs\/mycustomenv\/lib\/python3.7\/site-packages\/soundfile.py\", line 891, in read\r\n out = self._create_empty_array(frames, always_2d, dtype)\r\n File \"\/projects\/zhwa3087\/software\/anaconda\/envs\/mycustomenv\/lib\/python3.7\/site-packages\/soundfile.py\", line 1323, in _create_empty_array\r\n return np.empty(shape, dtype, order='C')\r\nValueError: array is too big; `arr.size * arr.dtype.itemsize` is larger than the maximum possible size.\r\n\n\n### Steps to reproduce the bug\n\n```python\r\nfrom datasets import load_dataset, DatasetDict\r\nfrom transformers import WhisperFeatureExtractor\r\nfrom transformers import WhisperTokenizer\r\n\r\nsamromur_children= load_dataset(\"language-and-voice-lab\/samromur_children\")\r\nfeature_extractor = WhisperFeatureExtractor.from_pretrained(\"openai\/whisper-small\")\r\ntokenizer = WhisperTokenizer.from_pretrained(\"openai\/whisper-small\", language=\"icelandic\", task=\"transcribe\")\r\n\r\ndef prepare_dataset(batch):\r\n # load and resample audio data from 48 to 16kHz\r\n audio = batch[\"audio\"]\r\n\r\n # compute log-Mel input features from input audio array \r\n batch[\"input_features\"] = feature_extractor(audio[\"array\"], sampling_rate=16000).input_features[0]\r\n\r\n # encode target text to label ids \r\n batch[\"labels\"] = tokenizer(batch[\"normalized_text\"]).input_ids\r\n return batch\r\n\r\ncache_dict = {\"train\": \".\/cache\/audio_train.cache\", \\\r\n \"validation\": \".\/cache\/audio_validation.cache\", \\\r\n \"test\": \".\/cache\/audio_test.cache\"}\r\nfilter_cache_dict = {\"train\": \".\/cache\/filter_train.arrow\", \\\r\n \"validation\": \".\/cache\/filter_validation.arrow\", \\\r\n \"test\": \".\/cache\/filter_test.arrow\"}\r\n\r\nprint(\"before filtering\")\r\nprint(samromur_children)\r\n#filter the dataset to only include examples with more than 2 seconds of audio\r\nsamromur_children = samromur_children.filter(lambda example: example[\"audio\"][\"array\"].shape[0] > 16000*2, cache_file_names=filter_cache_dict) \r\nprint(\"after filtering\")\r\nprint(samromur_children)\r\nprocessed_dataset = DatasetDict()\r\n# processed_dataset = 
samromur_children.map(prepare_dataset, cache_file_names=cache_dict, num_proc=10,)\r\nfor split in [\"train\", \"validation\", \"test\"]:\r\n processed_dataset[split] = samromur_children[split].map(prepare_dataset, cache_file_name=cache_dict[split])\r\n```\n\n### Expected behavior\n\nThe dataset is successfully processed and ready to train the model.\n\n### Environment info\n\nPython version: 3.7.13\r\ndatasets package version: 2.4.0\r\nlibrosa package version: 0.10.0.post2","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5914\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5914\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5913","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5913\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5913\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5913\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5913","id":1731427484,"node_id":"I_kwDODunzps5nM3yc","number":5913,"title":"I tried to load a custom dataset using the following statement: dataset = load_dataset('json', data_files=data_files). The dataset contains 50 million text-image pairs, but an error occurred.","user":{"login":"cjt222","id":17508662,"node_id":"MDQ6VXNlcjE3NTA4NjYy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17508662?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cjt222","html_url":"https:\/\/github.com\/cjt222","followers_url":"https:\/\/api.github.com\/users\/cjt222\/followers","following_url":"https:\/\/api.github.com\/users\/cjt222\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cjt222\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cjt222\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cjt222\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cjt222\/orgs","repos_url":"https:\/\/api.github.com\/users\/cjt222\/repos","events_url":"https:\/\/api.github.com\/users\/cjt222\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cjt222\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting, @cjt222.\r\n\r\nWhat is the structure of your JSON files. Please note that it is normally simpler if the data file format is JSON-Lines instead. ","> Thanks for reporting, @cjt222.\r\n> \r\n> What is the structure of your JSON files. Please note that it is normally simpler if the data file format is JSON-Lines instead.\r\n\r\nThanks! I have encountered similar problems. 
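A rough sketch of that list-to-lines conversion (the file names are hypothetical, not from the issue): rewriting the top-level JSON array as one record per line gives the JSON-Lines layout that `load_dataset("json", data_files=...)` handles much more simply than a single giant list.

```python
import json

# Hypothetical paths; the point is one JSON object per line (JSON Lines).
# Note: json.load still reads the whole top-level list into memory once.
with open("pairs.json", encoding="utf-8") as src, \
     open("pairs.jsonl", "w", encoding="utf-8") as dst:
    for record in json.load(src):
        dst.write(json.dumps(record, ensure_ascii=False) + "\n")
```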
I modify the json format from list to line and works!"],"created_at":1685415326000,"updated_at":1686385512000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nFile \"\/home\/kas\/.conda\/envs\/diffusers\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 1858, in _prepare_split_single\r\nDownloading and preparing dataset json\/default to \/home\/kas\/diffusers\/examples\/dreambooth\/cache_data\/datasets\/json\/default-acf423d8c6ef99d0\/0.0.0\/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4...\r\nDownloading data files: 0%| | 0\/1 [00:00 using set_format\/set_transform for the 1st transform and then passing the transformed example\/batch to the 2nd transform\r\n\r\nHow would that go, I thought you can't chain them?\r\n\r\nAs for the custom formatter, is it possible to reference an existing formatter, in my case `torch_formatter` inside of my custom formatter?\r\n\r\nmaybe I can inherit from it and just call `super.recursive_tensorize()`?","> How would that go, I thought you can't chain them?\r\n\r\nYes, they cannot be chained. This is what I meant:\r\n```python\r\nds.set_transform(first_transform)\r\n# calling the 2nd transform on each accessed batch\r\nsecond_transform(ds[2:3])\r\n```\r\n\r\n> As for the custom formatter, is it possible to reference an existing formatter, in my case torch_formatter inside of my custom formatter?\r\n>\r\n>maybe I can inherit from it and just call super.recursive_tensorize()?\r\n\r\nYes, subclassing makes the most sense.","Great, thank you for the details.","https:\/\/github.com\/huggingface\/datasets\/issues\/6012"],"created_at":1685215343000,"updated_at":1688938854000,"closed_at":1686926484000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\r\n\r\nI need to process some data using the set_transform method but I also need the data to be formatted for pytorch before processing it.\r\n\r\nI don't see anywhere in the documentation something that says that both methods cannot be used at the same time.\r\n\r\n### Steps to reproduce the bug\r\n\r\n```\r\nfrom datasets import load_dataset\r\nds = load_dataset(\"mnist\", split=\"train\")\r\nds.set_format(type=\"torch\")\r\ndef transform(entry):\r\n return entry[\"image\"].double()\r\nds.set_transform(transform)\r\n\r\nprint(ds[0])\r\n```\r\n\r\n### Expected behavior\r\n\r\nIt should print the pytorch tensor image as a double, but it errors because \"entry\" in the transform function doesn't receive a pytorch tensor to begin with, it receives a PIL Image -> entry.double() errors because entry isn't a pytorch tensor.\r\n\r\n### Environment info\r\nLatest versions.\r\n\r\n\r\n### Note:\r\nIt would be at least handy to have access to a function that can do the dataset.set_format in the set_transform function.\r\n\r\nSomething like:\r\n```\r\nfrom datasets import load_dataset, do_format\r\nds = load_dataset(\"mnist\", split=\"train\")\r\ndef transform(entry):\r\n entry = do_format(entry, type=\"torch\")\r\n return 
entry[\"image\"].double()\r\nds.set_transform(transform)\r\n\r\nprint(ds[0])\r\n```","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5910\/reactions","total_count":2,"+1":2,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5910\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5909","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5909\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5909\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5909\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5909","id":1728900068,"node_id":"PR_kwDODunzps5Rgga6","number":5909,"title":"Use more efficient and idiomatic way to construct list.","user":{"login":"ttsugriy","id":172294,"node_id":"MDQ6VXNlcjE3MjI5NA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/172294?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ttsugriy","html_url":"https:\/\/github.com\/ttsugriy","followers_url":"https:\/\/api.github.com\/users\/ttsugriy\/followers","following_url":"https:\/\/api.github.com\/users\/ttsugriy\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ttsugriy\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ttsugriy\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ttsugriy\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ttsugriy\/orgs","repos_url":"https:\/\/api.github.com\/users\/ttsugriy\/repos","events_url":"https:\/\/api.github.com\/users\/ttsugriy\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ttsugriy\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008156 \/ 0.011353 (-0.003197) | 0.005563 \/ 0.011008 (-0.005445) | 0.118319 \/ 0.038508 (0.079810) | 0.044305 \/ 0.023109 (0.021195) | 0.366221 \/ 0.275898 (0.090323) | 0.407585 \/ 0.323480 (0.084105) | 0.006961 \/ 0.007986 (-0.001024) | 0.004841 \/ 0.004328 (0.000513) | 0.089949 \/ 0.004250 (0.085698) | 0.062197 \/ 0.037052 (0.025144) | 0.360721 \/ 0.258489 (0.102232) | 0.415332 \/ 0.293841 (0.121491) | 0.035709 \/ 0.128546 (-0.092837) | 0.010617 \/ 0.075646 (-0.065030) | 0.397454 \/ 0.419271 (-0.021817) | 0.063490 \/ 0.043533 (0.019958) | 0.374289 \/ 0.255139 (0.119150) | 0.382827 \/ 0.283200 (0.099628) | 0.121014 \/ 0.141683 (-0.020669) | 1.729933 \/ 1.452155 (0.277779) | 1.896222 \/ 1.492716 (0.403506) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.254030 \/ 0.018006 (0.236023) | 0.491225 \/ 0.000490 (0.490736) | 0.018933 \/ 0.000200 (0.018734) | 0.000413 \/ 0.000054 (0.000358) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.033085 \/ 0.037411 (-0.004327) | 0.132837 \/ 0.014526 (0.118311) | 0.143275 \/ 0.176557 (-0.033282) | 0.215800 \/ 0.737135 (-0.521335) | 0.149802 \/ 0.296338 (-0.146536) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.474688 \/ 0.215209 (0.259479) | 4.743223 \/ 2.077655 (2.665569) | 2.163107 \/ 1.504120 (0.658988) | 1.946396 \/ 1.541195 (0.405201) | 2.057538 \/ 1.468490 (0.589047) | 0.618836 \/ 4.584777 (-3.965941) | 
4.605934 \/ 3.745712 (0.860222) | 2.201537 \/ 5.269862 (-3.068324) | 1.275758 \/ 4.565676 (-3.289919) | 0.077782 \/ 0.424275 (-0.346493) | 0.014830 \/ 0.007607 (0.007223) | 0.593372 \/ 0.226044 (0.367328) | 5.927000 \/ 2.268929 (3.658072) | 2.687293 \/ 55.444624 (-52.757331) | 2.301797 \/ 6.876477 (-4.574679) | 2.489928 \/ 2.142072 (0.347856) | 0.756779 \/ 4.805227 (-4.048449) | 0.168065 \/ 6.500664 (-6.332600) | 0.077276 \/ 0.075469 (0.001807) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.608169 \/ 1.841788 (-0.233619) | 19.048790 \/ 8.074308 (10.974482) | 16.100228 \/ 10.191392 (5.908836) | 0.215346 \/ 0.680424 (-0.465077) | 0.022293 \/ 0.534201 (-0.511907) | 0.535899 \/ 0.579283 (-0.043384) | 0.533729 \/ 0.434364 (0.099365) | 0.562697 \/ 0.540337 (0.022360) | 0.764082 \/ 1.386936 (-0.622854) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.010087 \/ 0.011353 (-0.001266) | 0.005357 \/ 0.011008 (-0.005651) | 0.092678 \/ 0.038508 (0.054170) | 0.041207 \/ 0.023109 (0.018098) | 0.437464 \/ 0.275898 (0.161566) | 0.527867 \/ 0.323480 (0.204387) | 0.006861 \/ 0.007986 (-0.001125) | 0.006131 \/ 0.004328 (0.001802) | 0.093741 \/ 0.004250 (0.089490) | 0.064142 \/ 0.037052 (0.027090) | 0.433577 \/ 0.258489 (0.175088) | 0.537148 \/ 0.293841 (0.243307) | 0.035339 \/ 0.128546 (-0.093207) | 0.010432 \/ 0.075646 (-0.065214) | 0.102838 \/ 0.419271 (-0.316434) | 0.057905 \/ 0.043533 (0.014372) | 0.437956 \/ 0.255139 (0.182817) | 0.509562 \/ 0.283200 (0.226362) | 0.120620 \/ 0.141683 (-0.021063) | 1.798686 \/ 1.452155 (0.346531) | 2.013290 \/ 1.492716 (0.520574) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.249067 \/ 0.018006 (0.231061) | 0.462219 \/ 0.000490 (0.461729) | 0.000476 \/ 0.000200 (0.000276) | 0.000068 \/ 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.033988 \/ 0.037411 (-0.003424) | 0.135863 \/ 0.014526 (0.121337) | 0.144082 \/ 0.176557 (-0.032474) | 0.201715 \/ 0.737135 (-0.535421) | 0.152079 \/ 0.296338 (-0.144259) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.522820 \/ 0.215209 (0.307611) | 5.216723 \/ 2.077655 (3.139068) | 2.582355 \/ 1.504120 (1.078235) | 2.352799 \/ 1.541195 (0.811604) | 2.451943 \/ 1.468490 (0.983453) | 0.620381 \/ 4.584777 (-3.964396) | 
4.537841 \/ 3.745712 (0.792129) | 2.206431 \/ 5.269862 (-3.063431) | 1.269865 \/ 4.565676 (-3.295811) | 0.078744 \/ 0.424275 (-0.345531) | 0.014375 \/ 0.007607 (0.006768) | 0.648215 \/ 0.226044 (0.422171) | 6.482809 \/ 2.268929 (4.213881) | 3.210670 \/ 55.444624 (-52.233954) | 2.847485 \/ 6.876477 (-4.028992) | 2.820946 \/ 2.142072 (0.678873) | 0.762711 \/ 4.805227 (-4.042516) | 0.171235 \/ 6.500664 (-6.329429) | 0.080230 \/ 0.075469 (0.004761) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.646840 \/ 1.841788 (-0.194948) | 19.400451 \/ 8.074308 (11.326142) | 16.758845 \/ 10.191392 (6.567453) | 0.171377 \/ 0.680424 (-0.509046) | 0.020400 \/ 0.534201 (-0.513801) | 0.467675 \/ 0.579283 (-0.111608) | 0.529745 \/ 0.434364 (0.095381) | 0.605989 \/ 0.540337 (0.065652) | 0.694659 \/ 1.386936 (-0.692277) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#006bf33ac5c308f9c70f4df4868abd539eb6c366 \"CML watermark\")\n","It's faster because all the items are the same object, but this also means modifying one of them will alter each unless these items are immutable, and they are in this case (tuples). So we should be careful when using this idiom."],"created_at":1685213687000,"updated_at":1685547431000,"closed_at":1685539709000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5909","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5909","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5909.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5909.patch","merged_at":1685539708000},"body":"Using `*` is ~2X faster according to [benchmark](https:\/\/colab.research.google.com\/gist\/ttsugriy\/c964a2604edf70c41911b10335729b6a\/for-vs-mult.ipynb) with just 4 patterns. 
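To make the caveat quoted above concrete, here is a standalone illustration (not code from the PR) of why `[item] * n` is only safe when the repeated item is immutable:

```python
shared = [[]] * 3            # three references to one and the same list
shared[0].append(1)
print(shared)                # [[1], [1], [1]] -- mutation shows up everywhere

independent = [[] for _ in range(3)]  # a comprehension builds distinct lists
independent[0].append(1)
print(independent)           # [[1], [], []]

pairs = [(0, 0)] * 3         # tuples are immutable, so sharing is harmless
```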
This doesn't matter much since this tiny difference is not going to be noticeable, but why not?","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5909\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5909\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5908","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5908\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5908\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5908\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5908","id":1728653935,"node_id":"I_kwDODunzps5nCSpv","number":5908,"title":"Unbearably slow sorting on big mapped datasets","user":{"login":"maximxlss","id":29152154,"node_id":"MDQ6VXNlcjI5MTUyMTU0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29152154?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/maximxlss","html_url":"https:\/\/github.com\/maximxlss","followers_url":"https:\/\/api.github.com\/users\/maximxlss\/followers","following_url":"https:\/\/api.github.com\/users\/maximxlss\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/maximxlss\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/maximxlss\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/maximxlss\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/maximxlss\/orgs","repos_url":"https:\/\/api.github.com\/users\/maximxlss\/repos","events_url":"https:\/\/api.github.com\/users\/maximxlss\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/maximxlss\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! `shard` currently returns a slow dataset by default, with examples evenly distributed in the dataset.\r\n\r\nYou can get a fast dataset using `contiguous=True` (which should be the default imo):\r\n\r\n```python\r\ndataset = dataset.shard(10, 0, contiguous=True)\r\n```\r\n\r\nThis way you don't need to flatten_indices() and sort should be fast as well","@lhoestq \r\n\r\n> contiguous=True (which should be the default imo)\r\n\r\nFor `IterableDataset`, it's not possible to implement contiguous sharding without knowing the number of examples in advance, so setting the default value to `contiguous=True` would result in an inconsistency between `Dataset` and `IterableDataset` (when we add `IterableDataset.shard`)","Actually sharded iterable datasets are made of sub iterables that generally yield contiguous data no ? So in a way it's possible to shard an iterable dataset contiguously.\r\n\r\nIf the dataset is made of one shard it's indeed not possible to shard it contiguously though","> Actually sharded iterable datasets are made of sub iterables that generally yield contiguous data no ? So in a way it's possible to shard an iterable dataset contiguously.\r\n\r\nBut sharding an iterable dataset by sharding its `gen_kwargs` would still yield approximate shards(not equal to `Dataset.shard`), no? 
","Yes indeed !","I understand the issue doesn't exist with non-mapped datasets, but if flattening is so much more efficient than sorting the indices, that's an issue in itself.\n\nThere are plenty of issues people posted for which the root cause turns out to be the same. It seems like mapped datasets are terribly inefficient. I think I saw some issue like that somewhere (about the mapped datasets in general), but can't find it now.\n\nMaybe indices should be flattened before any additional processing, then."],"created_at":1685185712000,"updated_at":1686678310000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nFor me, with ~40k lines, sorting took 3.5 seconds on a flattened dataset (including the flatten operation) and 22.7 seconds on a mapped dataset (right after sharding), which is about x5 slowdown. Moreover, it seems like it slows down exponentially with bigger datasets (wasn't able to sort 700k lines at all, with flattening takes about a minute).\n\n### Steps to reproduce the bug\n\n```Python\r\nfrom datasets import load_dataset\r\nimport time\r\n\r\ndataset = load_dataset(\"xnli\", \"en\", split=\"train\")\r\n\r\ndataset = dataset.shard(10, 0)\r\n\r\nprint(len(dataset))\r\n\r\nt = time.time()\r\n\r\n# dataset = dataset.flatten_indices() # uncomment this line and it's fast\r\n\r\ndataset = dataset.sort(\"label\", reverse=True, load_from_cache_file=False)\r\n\r\nprint(f\"finished in {time.time() - t:.4f} seconds\")\r\n\r\n```\n\n### Expected behavior\n\nExpect sorting to take the same or less time than flattening and then sorting.\n\n### Environment info\n\n- `datasets` version: 2.12.1.dev0 (same with 2.12.0 too)\r\n- Platform: Windows-10-10.0.22621-SP0\r\n- Python version: 3.10.10\r\n- Huggingface_hub version: 0.14.1\r\n- PyArrow version: 12.0.0\r\n- Pandas version: 2.0.1","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5908\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5908\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5907","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5907\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5907\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5907\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5907","id":1728648560,"node_id":"PR_kwDODunzps5RfqUU","number":5907,"title":"Add `flatten_indices` to 
`DatasetDict`","user":{"login":"maximxlss","id":29152154,"node_id":"MDQ6VXNlcjI5MTUyMTU0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29152154?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/maximxlss","html_url":"https:\/\/github.com\/maximxlss","followers_url":"https:\/\/api.github.com\/users\/maximxlss\/followers","following_url":"https:\/\/api.github.com\/users\/maximxlss\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/maximxlss\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/maximxlss\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/maximxlss\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/maximxlss\/orgs","repos_url":"https:\/\/api.github.com\/users\/maximxlss\/repos","events_url":"https:\/\/api.github.com\/users\/maximxlss\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/maximxlss\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006192 \/ 0.011353 (-0.005161) | 0.004410 \/ 0.011008 (-0.006598) | 0.095990 \/ 0.038508 (0.057482) | 0.032662 \/ 0.023109 (0.009553) | 0.322827 \/ 0.275898 (0.046929) | 0.352542 \/ 0.323480 (0.029062) | 0.005398 \/ 0.007986 (-0.002588) | 0.003926 \/ 0.004328 (-0.000403) | 0.075131 \/ 0.004250 (0.070880) | 0.046205 \/ 0.037052 (0.009153) | 0.330957 \/ 0.258489 (0.072468) | 0.360166 \/ 0.293841 (0.066325) | 0.027880 \/ 0.128546 (-0.100666) | 0.008813 \/ 0.075646 (-0.066833) | 0.327316 \/ 0.419271 (-0.091955) | 0.050071 \/ 0.043533 (0.006539) | 0.319939 \/ 0.255139 (0.064800) | 0.331593 \/ 0.283200 (0.048393) | 0.096745 \/ 0.141683 (-0.044938) | 1.445165 \/ 1.452155 (-0.006990) | 1.515538 \/ 1.492716 (0.022821) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.209365 \/ 0.018006 (0.191358) | 0.437007 \/ 0.000490 (0.436518) | 0.003207 \/ 0.000200 (0.003007) | 0.000088 \/ 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.027261 \/ 0.037411 (-0.010151) | 0.105101 \/ 0.014526 (0.090575) | 0.117163 \/ 0.176557 (-0.059394) | 0.176237 \/ 0.737135 (-0.560898) | 0.122559 \/ 0.296338 (-0.173779) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.406792 \/ 0.215209 (0.191583) | 4.060831 \/ 2.077655 (1.983176) | 1.829691 \/ 1.504120 (0.325571) | 1.633155 \/ 1.541195 (0.091960) | 1.704817 \/ 1.468490 (0.236327) | 0.525325 \/ 4.584777 (-4.059452) | 
3.752907 \/ 3.745712 (0.007194) | 1.857513 \/ 5.269862 (-3.412349) | 1.222237 \/ 4.565676 (-3.343439) | 0.065941 \/ 0.424275 (-0.358334) | 0.012498 \/ 0.007607 (0.004891) | 0.495009 \/ 0.226044 (0.268965) | 4.968074 \/ 2.268929 (2.699145) | 2.277898 \/ 55.444624 (-53.166727) | 1.936656 \/ 6.876477 (-4.939821) | 1.970698 \/ 2.142072 (-0.171374) | 0.635221 \/ 4.805227 (-4.170006) | 0.140539 \/ 6.500664 (-6.360125) | 0.064111 \/ 0.075469 (-0.011358) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.238151 \/ 1.841788 (-0.603637) | 14.681262 \/ 8.074308 (6.606954) | 13.405525 \/ 10.191392 (3.214133) | 0.163225 \/ 0.680424 (-0.517199) | 0.017282 \/ 0.534201 (-0.516918) | 0.395526 \/ 0.579283 (-0.183757) | 0.429156 \/ 0.434364 (-0.005208) | 0.470806 \/ 0.540337 (-0.069531) | 0.571290 \/ 1.386936 (-0.815646) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006444 \/ 0.011353 (-0.004909) | 0.004388 \/ 0.011008 (-0.006621) | 0.075004 \/ 0.038508 (0.036496) | 0.032904 \/ 0.023109 (0.009795) | 0.375360 \/ 0.275898 (0.099462) | 0.413684 \/ 0.323480 (0.090204) | 0.005854 \/ 0.007986 (-0.002132) | 0.005504 \/ 0.004328 (0.001175) | 0.075049 \/ 0.004250 (0.070799) | 0.047973 \/ 0.037052 (0.010920) | 0.377943 \/ 0.258489 (0.119454) | 0.427039 \/ 0.293841 (0.133198) | 0.028248 \/ 0.128546 (-0.100298) | 0.008972 \/ 0.075646 (-0.066674) | 0.081848 \/ 0.419271 (-0.337424) | 0.047935 \/ 0.043533 (0.004402) | 0.377980 \/ 0.255139 (0.122841) | 0.407856 \/ 0.283200 (0.124656) | 0.103454 \/ 0.141683 (-0.038229) | 1.469051 \/ 1.452155 (0.016896) | 1.590657 \/ 1.492716 (0.097941) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.192380 \/ 0.018006 (0.174374) | 0.440995 \/ 0.000490 (0.440505) | 0.004082 \/ 0.000200 (0.003882) | 0.000096 \/ 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029584 \/ 0.037411 (-0.007828) | 0.110051 \/ 0.014526 (0.095525) | 0.121196 \/ 0.176557 (-0.055361) | 0.172249 \/ 0.737135 (-0.564886) | 0.125380 \/ 0.296338 (-0.170958) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.435218 \/ 0.215209 (0.220009) | 4.354811 \/ 2.077655 (2.277156) | 2.102050 \/ 1.504120 (0.597930) | 1.913454 \/ 1.541195 (0.372260) | 1.974624 \/ 1.468490 (0.506134) | 0.529975 \/ 4.584777 (-4.054802) | 
3.801605 \/ 3.745712 (0.055893) | 3.162408 \/ 5.269862 (-2.107454) | 1.599576 \/ 4.565676 (-2.966101) | 0.066710 \/ 0.424275 (-0.357565) | 0.012158 \/ 0.007607 (0.004551) | 0.549187 \/ 0.226044 (0.323142) | 5.489930 \/ 2.268929 (3.221002) | 2.646787 \/ 55.444624 (-52.797837) | 2.311915 \/ 6.876477 (-4.564562) | 2.335645 \/ 2.142072 (0.193572) | 0.641067 \/ 4.805227 (-4.164160) | 0.142227 \/ 6.500664 (-6.358437) | 0.065303 \/ 0.075469 (-0.010166) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.283209 \/ 1.841788 (-0.558579) | 15.241809 \/ 8.074308 (7.167501) | 14.131471 \/ 10.191392 (3.940079) | 0.143921 \/ 0.680424 (-0.536503) | 0.017497 \/ 0.534201 (-0.516704) | 0.402236 \/ 0.579283 (-0.177047) | 0.418917 \/ 0.434364 (-0.015447) | 0.461745 \/ 0.540337 (-0.078593) | 0.560212 \/ 1.386936 (-0.826724) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#7098922130cabfbfa6b8a3885ff2e6f032d6203d \"CML watermark\")\n"],"created_at":1685184944000,"updated_at":1685619995000,"closed_at":1685619576000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5907","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5907","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5907.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5907.patch","merged_at":1685619575000},"body":"Add `flatten_indices` to `DatasetDict` for convinience","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5907\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5907\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5906","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5906\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5906\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5906\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5906","id":1728171113,"node_id":"I_kwDODunzps5nAcxp","number":5906,"title":"Could you unpin responses 
version?","user":{"login":"kenimou","id":47789026,"node_id":"MDQ6VXNlcjQ3Nzg5MDI2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47789026?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/kenimou","html_url":"https:\/\/github.com\/kenimou","followers_url":"https:\/\/api.github.com\/users\/kenimou\/followers","following_url":"https:\/\/api.github.com\/users\/kenimou\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/kenimou\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/kenimou\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/kenimou\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/kenimou\/orgs","repos_url":"https:\/\/api.github.com\/users\/kenimou\/repos","events_url":"https:\/\/api.github.com\/users\/kenimou\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/kenimou\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1685131334000,"updated_at":1685469211000,"closed_at":1685469211000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nCould you unpin [this](https:\/\/github.com\/huggingface\/datasets\/blob\/main\/setup.py#L139) or move it to test requirements? This is a testing library and we also use it for our tests as well. We do not want to use a very outdated version.\n\n### Steps to reproduce the bug\n\ncould not install this library due to dependency conflict.\n\n### Expected behavior\n\ncan install datasets\n\n### Environment info\n\nlinux 64","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5906\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5906\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5905","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5905\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5905\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5905\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5905","id":1727541392,"node_id":"I_kwDODunzps5m-DCQ","number":5905,"title":"Offer an alternative to Iterable Dataset that allows lazy loading and processing while skipping batches 
efficiently","user":{"login":"Hubert-Bonisseur","id":48770768,"node_id":"MDQ6VXNlcjQ4NzcwNzY4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/48770768?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Hubert-Bonisseur","html_url":"https:\/\/github.com\/Hubert-Bonisseur","followers_url":"https:\/\/api.github.com\/users\/Hubert-Bonisseur\/followers","following_url":"https:\/\/api.github.com\/users\/Hubert-Bonisseur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Hubert-Bonisseur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Hubert-Bonisseur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Hubert-Bonisseur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Hubert-Bonisseur\/orgs","repos_url":"https:\/\/api.github.com\/users\/Hubert-Bonisseur\/repos","events_url":"https:\/\/api.github.com\/users\/Hubert-Bonisseur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Hubert-Bonisseur\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["We plan to improve this eventually (see https:\/\/github.com\/huggingface\/datasets\/issues\/5454 and https:\/\/github.com\/huggingface\/datasets\/issues\/5380).\r\n\r\n> Is it possible to lazily load samples of a mapped dataset ? I'm used to [dataset scripts](https:\/\/huggingface.co\/docs\/datasets\/dataset_script), maybe something can be done there.\r\nIf not, I could do it using a plain Pytorch dataset. Then I would need to convert it to a datasets' dataset to get all the features of datasets. Is it something possible ?\r\n\r\nYes, by creating a mapped dataset that stores audio URLs. Indexing a dataset in such format only downloads and decodes the bytes of the accessed samples (without storing them on disk).\r\n\r\nYou can do the following to create this dataset:\r\n```python\r\n\r\ndef gen():\r\n # Generator that yields (audio URL, text) pairs as dict\r\n ...\r\n yield {\"audio\": \"audio_url\", \"text\": \"some text\"}\r\n\r\nfeatures = Features({\"audio\": datasets.Audio(), \"text\": datasets.Value(\"string\")})\r\nds = Dataset.from_generator(gen, features=features)\r\nds[2:5] # downloads and decodes the samples each time they are accessed\r\n```"],"created_at":1685104382000,"updated_at":1686836058000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Feature request\r\n\r\nI would like a way to resume training from a checkpoint without waiting for a very long time when using an iterable dataset.\r\n\r\n### Motivation\r\n\r\nI am training models on the speech-recognition task. I have very large datasets that I can't comfortably store on a disk and also quite computationally intensive audio processing to do. As a result I want to load data from my remote when it is needed and perform all processing on the fly.\r\n\r\nI am currently using the iterable dataset feature of _datasets_. It does everything I need with one exception. My issue is that when resuming training at a step n, we have to download all the data and perform the processing of steps < n, just to get the iterable at the right step. 
In my case it takes almost as long as training for the same steps, which make resuming training from a checkpoint useless in practice.\r\n\r\nI understand that the nature of iterators make it probably nearly impossible to quickly resume training.\r\n\r\nI thought about a possible solution nonetheless : \r\n\r\nI could in fact index my large dataset and make it a mapped dataset. Then I could use set_transform to perform the processing on the fly. Finally, if I'm not mistaken, the _accelerate_ package allows to [skip steps efficiently](https:\/\/github.com\/huggingface\/accelerate\/blob\/a73898027a211c3f6dc4460351b0ec246aa824aa\/src\/accelerate\/data_loader.py#L827) for a mapped dataset.\r\n\r\nIs it possible to lazily load samples of a mapped dataset ? I'm used to [dataset scripts](https:\/\/huggingface.co\/docs\/datasets\/dataset_script), maybe something can be done there.\r\nIf not, I could do it using a plain _Pytorch_ dataset. Then I would need to convert it to a _datasets_' dataset to get all the features of _datasets_. Is it something possible ?\r\n\r\n### Your contribution\r\n\r\nI could provide a PR to allow lazy loading of mapped dataset or the conversion of a mapped _Pytorch_ dataset into a _Datasets_ dataset if you think it is an useful new feature.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5905\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5905\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5904","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5904\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5904\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5904\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5904","id":1727415626,"node_id":"PR_kwDODunzps5Rbfks","number":5904,"title":"Validate name parameter in make_file_instructions","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
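Building on the `Dataset.from_generator` suggestion in #5905 above, here is a sketch of doing the audio processing on the fly with `set_transform` over a URL-backed mapped dataset; the generator and the Whisper feature extractor are illustrative assumptions, not part of the issue's resolution.

```python
from datasets import Audio, Dataset, Features, Value
from transformers import WhisperFeatureExtractor

feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-small")

def gen():
    # Hypothetical source of (audio URL, transcript) pairs.
    yield {"audio": "https://example.com/clip.wav", "text": "some text"}

features = Features({"audio": Audio(sampling_rate=16000), "text": Value("string")})
ds = Dataset.from_generator(gen, features=features)

def on_the_fly(batch):
    # Audio bytes are fetched and decoded only for the accessed rows;
    # feature extraction then runs here, per access, with nothing precomputed.
    batch["input_features"] = [
        feature_extractor(a["array"], sampling_rate=a["sampling_rate"]).input_features[0]
        for a in batch["audio"]
    ]
    return batch

ds.set_transform(on_the_fly)
```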
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5904","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5904\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5904\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5904\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5904","id":1727415626,"node_id":"PR_kwDODunzps5Rbfks","number":5904,"title":"Validate name parameter in make_file_instructions","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","<details><summary>
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details><summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007401 \/ 0.011353 (-0.003952) | 0.005198 \/ 0.011008 (-0.005810) | 0.112317 \/ 0.038508 (0.073809) | 0.038406 \/ 0.023109 (0.015297) | 0.358008 \/ 0.275898 (0.082110) | 0.395350 \/ 0.323480 (0.071870) | 0.006201 \/ 0.007986 (-0.001785) | 0.004368 \/ 0.004328 (0.000039) | 0.087718 \/ 0.004250 (0.083467) | 0.055299 \/ 0.037052 (0.018247) | 0.350481 \/ 0.258489 (0.091992) | 0.419876 \/ 0.293841 (0.126035) | 0.032459 \/ 0.128546 (-0.096087) | 0.010635 \/ 0.075646 (-0.065011) | 0.383282 \/ 0.419271 (-0.035989) | 0.059241 \/ 0.043533 (0.015708) | 0.365101 \/ 0.255139 (0.109962) | 0.378144 \/ 0.283200 (0.094944) | 0.114287 \/ 0.141683 (-0.027396) | 1.680870 \/ 1.452155 (0.228715) | 1.788183 \/ 1.492716 (0.295467) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.242919 \/ 0.018006 (0.224913) | 0.489850 \/ 0.000490 (0.489360) | 0.011408 \/ 0.000200 (0.011208) | 0.000444 \/ 0.000054 (0.000389) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030742 \/ 0.037411 (-0.006669) | 0.123092 \/ 0.014526 (0.108566) | 0.138246 \/ 0.176557 (-0.038311) | 0.207299 \/ 0.737135 (-0.529836) | 0.142647 \/ 0.296338 (-0.153691) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.472553 \/ 0.215209 (0.257344) | 4.671763 \/ 2.077655 (2.594108) | 2.119986 \/ 1.504120 (0.615866) | 1.891851 \/ 1.541195 (0.350656) | 1.979094 \/ 1.468490 (0.510604) | 0.617956 \/ 4.584777 (-3.966821) | 
4.969418 \/ 3.745712 (1.223706) | 4.672083 \/ 5.269862 (-0.597779) | 2.119049 \/ 4.565676 (-2.446627) | 0.077466 \/ 0.424275 (-0.346809) | 0.014434 \/ 0.007607 (0.006827) | 0.580746 \/ 0.226044 (0.354701) | 5.805458 \/ 2.268929 (3.536530) | 2.622498 \/ 55.444624 (-52.822126) | 2.259499 \/ 6.876477 (-4.616978) | 2.362078 \/ 2.142072 (0.220006) | 0.719911 \/ 4.805227 (-4.085317) | 0.164939 \/ 6.500664 (-6.335725) | 0.074762 \/ 0.075469 (-0.000707) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.496709 \/ 1.841788 (-0.345079) | 18.247499 \/ 8.074308 (10.173191) | 15.397075 \/ 10.191392 (5.205683) | 0.181163 \/ 0.680424 (-0.499261) | 0.022604 \/ 0.534201 (-0.511597) | 0.462791 \/ 0.579283 (-0.116492) | 0.504473 \/ 0.434364 (0.070109) | 0.582254 \/ 0.540337 (0.041917) | 0.673849 \/ 1.386936 (-0.713087) |\n\n<\/details>\nPyArrow==latest\n\n
<details><summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007633 \/ 0.011353 (-0.003720) | 0.004859 \/ 0.011008 (-0.006149) | 0.091194 \/ 0.038508 (0.052686) | 0.038255 \/ 0.023109 (0.015146) | 0.460972 \/ 0.275898 (0.185074) | 0.470441 \/ 0.323480 (0.146961) | 0.006482 \/ 0.007986 (-0.001504) | 0.004500 \/ 0.004328 (0.000172) | 0.089998 \/ 0.004250 (0.085748) | 0.055470 \/ 0.037052 (0.018418) | 0.459188 \/ 0.258489 (0.200699) | 0.491255 \/ 0.293841 (0.197414) | 0.032200 \/ 0.128546 (-0.096346) | 0.010372 \/ 0.075646 (-0.065274) | 0.097429 \/ 0.419271 (-0.321843) | 0.052469 \/ 0.043533 (0.008936) | 0.452492 \/ 0.255139 (0.197353) | 0.475210 \/ 0.283200 (0.192010) | 0.116976 \/ 0.141683 (-0.024707) | 1.752742 \/ 1.452155 (0.300587) | 1.849535 \/ 1.492716 (0.356819) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.229822 \/ 0.018006 (0.211816) | 0.472259 \/ 0.000490 (0.471770) | 0.000455 \/ 0.000200 (0.000255) | 0.000067 \/ 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.033796 \/ 0.037411 (-0.003615) | 0.136151 \/ 0.014526 (0.121625) | 0.144015 \/ 0.176557 (-0.032542) | 0.199337 \/ 0.737135 (-0.537798) | 0.150024 \/ 0.296338 (-0.146315) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.522737 \/ 0.215209 (0.307528) | 5.165223 \/ 2.077655 (3.087568) | 2.630334 \/ 1.504120 (1.126214) | 2.392383 \/ 1.541195 (0.851188) | 2.488966 \/ 1.468490 (1.020476) | 0.608981 \/ 4.584777 (-3.975796) | 
4.711545 \/ 3.745712 (0.965833) | 2.121537 \/ 5.269862 (-3.148325) | 1.205477 \/ 4.565676 (-3.360199) | 0.078277 \/ 0.424275 (-0.345998) | 0.014175 \/ 0.007607 (0.006568) | 0.640720 \/ 0.226044 (0.414675) | 6.391173 \/ 2.268929 (4.122245) | 3.265131 \/ 55.444624 (-52.179493) | 2.939188 \/ 6.876477 (-3.937289) | 2.919217 \/ 2.142072 (0.777145) | 0.745095 \/ 4.805227 (-4.060132) | 0.164065 \/ 6.500664 (-6.336599) | 0.076993 \/ 0.075469 (0.001524) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.539971 \/ 1.841788 (-0.301817) | 18.597296 \/ 8.074308 (10.522988) | 16.899330 \/ 10.191392 (6.707938) | 0.169005 \/ 0.680424 (-0.511419) | 0.020447 \/ 0.534201 (-0.513754) | 0.465862 \/ 0.579283 (-0.113421) | 0.522819 \/ 0.434364 (0.088455) | 0.547111 \/ 0.540337 (0.006773) | 0.657777 \/ 1.386936 (-0.729159) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#56aff9ecb4e565eb95faad525558914648cc22f1 \"CML watermark\")\n"],"created_at":1685099566000,"updated_at":1685519012000,"closed_at":1685518497000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5904","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5904","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5904.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5904.patch","merged_at":1685518497000},"body":"Validate `name` parameter in `make_file_instructions`.\r\n\r\nThis way users get more informative error messages, instead of:\r\n```stacktrace\r\n...\/huggingface\/datasets\/src\/datasets\/arrow_reader.py in make_file_instructions(name, split_infos, instruction, filetype_suffix, prefix_path)\r\n 110 name2len = {info.name: info.num_examples for info in split_infos}\r\n 111 name2shard_lengths = {info.name: info.shard_lengths for info in split_infos}\r\n--> 112 name2filenames = {\r\n 113 info.name: filenames_for_dataset_split(\r\n 114 path=prefix_path,\r\n\r\n...\/huggingface\/datasets\/src\/datasets\/arrow_reader.py in (.0)\r\n 111 name2shard_lengths = {info.name: info.shard_lengths for info in split_infos}\r\n 112 name2filenames = {\r\n--> 113 info.name: filenames_for_dataset_split(\r\n 114 path=prefix_path,\r\n 115 dataset_name=name,\r\n\r\n...\/huggingface\/datasets\/src\/datasets\/naming.py in filenames_for_dataset_split(path, dataset_name, split, filetype_suffix, shard_lengths)\r\n 68 \r\n 69 def filenames_for_dataset_split(path, dataset_name, split, filetype_suffix=None, shard_lengths=None):\r\n---> 70 prefix = filename_prefix_for_split(dataset_name, split)\r\n 71 prefix = os.path.join(path, prefix)\r\n 72 \r\n\r\n...\/huggingface\/datasets\/src\/datasets\/naming.py in filename_prefix_for_split(name, split)\r\n 52 \r\n 53 def filename_prefix_for_split(name, split):\r\n---> 54 if os.path.basename(name) != name:\r\n 55 raise ValueError(f\"Should be a dataset name, not a path: {name}\")\r\n 56 if not re.match(_split_re, split):\r\n\r\n...\/lib\/python3.9\/posixpath.py in basename(p)\r\n 140 def basename(p):\r\n 141 \"\"\"Returns the final component of a pathname\"\"\"\r\n--> 142 p = os.fspath(p)\r\n 143 sep = _get_sep(p)\r\n 144 i = p.rfind(sep) + 1\r\n\r\nTypeError: expected str, bytes or 
os.PathLike object, not NoneType\r\n```\r\n\r\nRelated to #5895.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5904\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5904\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true}
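The validation this PR adds can be pictured with a short, hypothetical sketch (the parameter names are taken from the traceback above; the body is elided, and this is not the actual patch):

```python
# A minimal sketch of the idea: fail fast with an informative message
# instead of letting os.fspath(None) raise deep inside naming.py.
def make_file_instructions(name, split_infos, instruction, filetype_suffix=None, prefix_path=None):
    if not isinstance(name, str):
        raise TypeError(f"Expected str 'name', but got: {name}")
    ...
```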
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5903","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5903\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5903\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5903\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5903","id":1727372549,"node_id":"PR_kwDODunzps5RbV82","number":5903,"title":"Relax `ci.yml` trigger for `pull_request` based on modified paths","user":{"login":"alvarobartt","id":36760800,"node_id":"MDQ6VXNlcjM2NzYwODAw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36760800?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/alvarobartt","html_url":"https:\/\/github.com\/alvarobartt","followers_url":"https:\/\/api.github.com\/users\/alvarobartt\/followers","following_url":"https:\/\/api.github.com\/users\/alvarobartt\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/alvarobartt\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/alvarobartt\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/alvarobartt\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/alvarobartt\/orgs","repos_url":"https:\/\/api.github.com\/users\/alvarobartt\/repos","events_url":"https:\/\/api.github.com\/users\/alvarobartt\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/alvarobartt\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Also this could be extended to the rest of the GitHub Action `yml` files, so let me know whether you want me to have a look into it! \ud83e\udd17","The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_5903). All of your documentation changes will be reflected on that endpoint."],"created_at":1685098012000,"updated_at":1685098297000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5903","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5903","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5903.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5903.patch","merged_at":null},"body":"## What's in this PR?\r\n\r\nAs of a previous PR at #5902, I've seen that the CI was automatically triggered on any file change, in that case when modifying a Jupyter Notebook (.ipynb), which IMO could be skipped, as modifications to a Jupyter Notebook have no effect\/impact on the `ci.yml` outcome. So this PR restricts the paths that trigger the `ci.yml`, to avoid wasting resources when not needed.\r\n\r\n## What's pending in this PR?\r\n\r\nI would like to confirm whether this should affect both `push` and `pull_request`, since modifications to just those files won't change the `ci.yml` outcome, so maybe it's worth skipping it in the `push` trigger too.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5903\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5903\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5902","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5902\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5902\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5902\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5902","id":1727342194,"node_id":"PR_kwDODunzps5RbPS9","number":5902,"title":"Fix `Overview.ipynb` & detach Jupyter Notebooks from `datasets` repository","user":{"login":"alvarobartt","id":36760800,"node_id":"MDQ6VXNlcjM2NzYwODAw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36760800?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/alvarobartt","html_url":"https:\/\/github.com\/alvarobartt","followers_url":"https:\/\/api.github.com\/users\/alvarobartt\/followers","following_url":"https:\/\/api.github.com\/users\/alvarobartt\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/alvarobartt\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/alvarobartt\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/alvarobartt\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/alvarobartt\/orgs","repos_url":"https:\/\/api.github.com\/users\/alvarobartt\/repos","events_url":"https:\/\/api.github.com\/users\/alvarobartt\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/alvarobartt\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Random fact: previous run was showing that the Hub was hosting 13336 datasets, while the most recent run shows 36662 \ud83d\udc40\ud83c\udf89","The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_5902). All of your documentation changes will be reflected on that endpoint.","Thanks! \r\n\r\nHowever, I think we should stop linking this notebook and use the notebook version of the Quickstart doc page instead for easier maintenance (we would have the \"Open in Colab\" button in the Quickstart doc as Transformers [does](https:\/\/huggingface.co\/docs\/transformers\/quicktour)). \r\n\r\n@stevhliu should be able to help with this. 
If I'm not mistaken, this can be done by adding the `[[open in colab]]` marker to the doc page.\r\n\r\nAlso, if some useful info from the Overview notebook is not in the docs, feel free to add it so we don't lose it \ud83d\ude42.","Cool, makes sense @mariosasko, then I'll check both notebooks and see whether there's something in `Overview.ipynb` worth including in the `docs\/source\/quickstart.mdx`, then remove `Overview.ipynb` and update references in favour of `docs\/source\/quickstart.mdx`.\r\n\r\nAre you OK if I do that @stevhliu @mariosasko? Thanks \ud83e\udd17 ","For the moment I've just updated the `quickstart.mdx` to be more similar to [quicktour.mdx](https:\/\/github.com\/huggingface\/transformers\/blob\/main\/docs\/source\/en\/quicktour.mdx), but regarding the `Overview.ipynb` notebook I was planning to create a PR in https:\/\/github.com\/huggingface\/notebooks to add it there, does that make sense @stevhliu? And then to create a `README.md` in this repository in `notebooks\/`, as `transformers` does, to point to the related notebooks hosted in https:\/\/github.com\/huggingface\/notebooks, WDYT? \ud83e\udd17 ","Hi @stevhliu thanks for the feedback! Already applied your suggestions, I'll also add the pointers to both audio and image datasets in the \"What's next\" section.\r\n\r\nBesides that, let me know if I can help with the notebook being hosted in `huggingface\/notebooks` instead, and I'll happily do so!","Thanks a lot for the detailed feedback @mariosasko, I'll apply the changes today!","> Besides that, let me know if I can help with the notebook being hosted in `huggingface\/notebooks` instead, and I'll happily do so!\r\n\r\nAwesome! If you're up for it, I think you can go ahead and open a PR with the changes I've outlined [here](https:\/\/github.com\/huggingface\/datasets\/pull\/5902#pullrequestreview-1475236887) to add the notebook building workflow. ","Hi @stevhliu @mariosasko, sorry for the delay, I had a busy week. I'll tackle this either today or tomorrow to ideally close it before the weekend, thanks again for the help and guidance \ud83d\ude04 "],"created_at":1685096701000,"updated_at":1688022466000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5902","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5902","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5902.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5902.patch","merged_at":null},"body":"## What's in this PR?\r\n\r\nThis PR solves #5887: there was a mismatch between the tokenizer and the model used, as the tokenizer was `bert-base-cased` while the model was `distilbert-base-cased`, both for the PyTorch and TensorFlow alternatives. 
Since DistilBERT doesn't use\/need `token_type_ids`, the `**batch` unpacking was failing, as the batch contained `input_ids`, `attention_mask`, `token_type_ids`, `start_positions` and `end_positions`, but `token_type_ids` is not accepted by the model.\r\n\r\nBesides that, at the end `seqeval` was being used to evaluate the model predictions, while just `evaluate` was being installed, so I've also included the `seqeval` installation.\r\n\r\nFinally, I've re-run everything in Google Colab, and every cell was successfully executed!\r\n\r\n## What was done on top of the original PR?\r\n\r\nBased on the comments from @mariosasko and @stevhliu, I've updated the contents of this PR to also review the `quickstart.mdx` and update what was needed; besides that, we may eventually move the `Overview.ipynb` notebook to `huggingface\/notebooks` following @stevhliu's suggestions.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5902\/reactions","total_count":2,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":1,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5902\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true}
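The tokenizer\/model mismatch described above is easy to see directly; a minimal sketch (the checkpoint names are the ones from the PR, the rest is illustrative):

```python
from transformers import AutoTokenizer

# A BERT tokenizer emits token_type_ids, which a DistilBERT model does not
# accept, so unpacking such a batch with **batch fails.
bert_tok = AutoTokenizer.from_pretrained("bert-base-cased")
distil_tok = AutoTokenizer.from_pretrained("distilbert-base-cased")

print(bert_tok("hello world").keys())    # includes 'token_type_ids'
print(distil_tok("hello world").keys())  # only 'input_ids' and 'attention_mask'
```

Loading the tokenizer from the same checkpoint as the model keeps the batch keys aligned with what the model's forward signature accepts.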
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5901","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5901\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5901\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5901\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5901","id":1727179016,"node_id":"PR_kwDODunzps5Rarux","number":5901,"title":"Make prepare_split more robust if errors in metadata dataset_info splits","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","<details><summary>
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details><summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008809 \/ 0.011353 (-0.002544) | 0.005641 \/ 0.011008 (-0.005367) | 0.124986 \/ 0.038508 (0.086477) | 0.037311 \/ 0.023109 (0.014202) | 0.388915 \/ 0.275898 (0.113017) | 0.430123 \/ 0.323480 (0.106643) | 0.007447 \/ 0.007986 (-0.000538) | 0.009593 \/ 0.004328 (0.005264) | 0.099148 \/ 0.004250 (0.094898) | 0.052393 \/ 0.037052 (0.015341) | 0.399779 \/ 0.258489 (0.141290) | 0.439109 \/ 0.293841 (0.145268) | 0.043409 \/ 0.128546 (-0.085137) | 0.016286 \/ 0.075646 (-0.059360) | 0.431198 \/ 0.419271 (0.011927) | 0.064932 \/ 0.043533 (0.021400) | 0.390650 \/ 0.255139 (0.135511) | 0.432883 \/ 0.283200 (0.149684) | 0.110978 \/ 0.141683 (-0.030705) | 1.796121 \/ 1.452155 (0.343967) | 1.960097 \/ 1.492716 (0.467381) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.286292 \/ 0.018006 (0.268286) | 0.659495 \/ 0.000490 (0.659005) | 0.008294 \/ 0.000200 (0.008094) | 0.000485 \/ 0.000054 (0.000431) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029325 \/ 0.037411 (-0.008086) | 0.125454 \/ 0.014526 (0.110928) | 0.136459 \/ 0.176557 (-0.040097) | 0.221075 \/ 0.737135 (-0.516060) | 0.140281 \/ 0.296338 (-0.156058) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.602401 \/ 0.215209 (0.387192) | 6.124553 \/ 2.077655 (4.046898) | 2.453141 \/ 1.504120 (0.949021) | 2.038611 \/ 1.541195 (0.497416) | 2.073611 \/ 1.468490 (0.605121) | 0.938040 \/ 4.584777 (-3.646737) | 
5.755972 \/ 3.745712 (2.010260) | 4.450935 \/ 5.269862 (-0.818926) | 2.337219 \/ 4.565676 (-2.228457) | 0.107118 \/ 0.424275 (-0.317157) | 0.015201 \/ 0.007607 (0.007594) | 0.785833 \/ 0.226044 (0.559788) | 7.732984 \/ 2.268929 (5.464055) | 3.236892 \/ 55.444624 (-52.207733) | 2.696402 \/ 6.876477 (-4.180074) | 2.805036 \/ 2.142072 (0.662964) | 1.108612 \/ 4.805227 (-3.696616) | 0.221067 \/ 6.500664 (-6.279597) | 0.085538 \/ 0.075469 (0.010068) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.600311 \/ 1.841788 (-0.241476) | 18.528118 \/ 8.074308 (10.453810) | 21.107199 \/ 10.191392 (10.915807) | 0.219489 \/ 0.680424 (-0.460934) | 0.028927 \/ 0.534201 (-0.505274) | 0.503446 \/ 0.579283 (-0.075837) | 0.619833 \/ 0.434364 (0.185469) | 0.582454 \/ 0.540337 (0.042117) | 0.709154 \/ 1.386936 (-0.677782) |\n\n<\/details>\nPyArrow==latest\n\n
<details><summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008516 \/ 0.011353 (-0.002837) | 0.006090 \/ 0.011008 (-0.004918) | 0.104574 \/ 0.038508 (0.066066) | 0.042676 \/ 0.023109 (0.019566) | 0.458623 \/ 0.275898 (0.182725) | 0.568479 \/ 0.323480 (0.244999) | 0.008374 \/ 0.007986 (0.000389) | 0.004677 \/ 0.004328 (0.000349) | 0.105946 \/ 0.004250 (0.101695) | 0.055256 \/ 0.037052 (0.018204) | 0.511036 \/ 0.258489 (0.252547) | 0.598383 \/ 0.293841 (0.304542) | 0.043612 \/ 0.128546 (-0.084934) | 0.014707 \/ 0.075646 (-0.060940) | 0.116350 \/ 0.419271 (-0.302921) | 0.061413 \/ 0.043533 (0.017880) | 0.477785 \/ 0.255139 (0.222646) | 0.542643 \/ 0.283200 (0.259443) | 0.120431 \/ 0.141683 (-0.021252) | 1.994083 \/ 1.452155 (0.541928) | 2.100600 \/ 1.492716 (0.607883) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.298480 \/ 0.018006 (0.280474) | 0.601921 \/ 0.000490 (0.601432) | 0.000445 \/ 0.000200 (0.000245) | 0.000086 \/ 0.000054 (0.000032) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.034784 \/ 0.037411 (-0.002627) | 0.133555 \/ 0.014526 (0.119029) | 0.138541 \/ 0.176557 (-0.038015) | 0.203114 \/ 0.737135 (-0.534021) | 0.153477 \/ 0.296338 (-0.142861) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.780484 \/ 0.215209 (0.565275) | 7.150876 \/ 2.077655 (5.073222) | 3.168590 \/ 1.504120 (1.664470) | 2.698746 \/ 1.541195 (1.157552) | 2.695678 \/ 1.468490 (1.227188) | 1.037706 \/ 4.584777 (-3.547071) | 
5.672631 \/ 3.745712 (1.926918) | 2.798137 \/ 5.269862 (-2.471725) | 1.738588 \/ 4.565676 (-2.827088) | 0.111160 \/ 0.424275 (-0.313115) | 0.013878 \/ 0.007607 (0.006271) | 0.800191 \/ 0.226044 (0.574146) | 8.546676 \/ 2.268929 (6.277748) | 4.116852 \/ 55.444624 (-51.327773) | 3.331271 \/ 6.876477 (-3.545206) | 3.307410 \/ 2.142072 (1.165337) | 1.191019 \/ 4.805227 (-3.614208) | 0.248953 \/ 6.500664 (-6.251711) | 0.086632 \/ 0.075469 (0.011162) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.795057 \/ 1.841788 (-0.046730) | 18.038785 \/ 8.074308 (9.964476) | 21.865566 \/ 10.191392 (11.674174) | 0.211058 \/ 0.680424 (-0.469366) | 0.026956 \/ 0.534201 (-0.507245) | 0.518855 \/ 0.579283 (-0.060428) | 0.618105 \/ 0.434364 (0.183741) | 0.569227 \/ 0.540337 (0.028889) | 0.705431 \/ 1.386936 (-0.681505) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#074925b9b7c1dfd33b8675aa99c07cc26375665c \"CML watermark\")\n","
<details><summary>
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
<details><summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008900 \/ 0.011353 (-0.002453) | 0.005726 \/ 0.011008 (-0.005283) | 0.131747 \/ 0.038508 (0.093239) | 0.040585 \/ 0.023109 (0.017476) | 0.420531 \/ 0.275898 (0.144633) | 0.459430 \/ 0.323480 (0.135950) | 0.007642 \/ 0.007986 (-0.000344) | 0.006750 \/ 0.004328 (0.002421) | 0.099147 \/ 0.004250 (0.094897) | 0.055852 \/ 0.037052 (0.018799) | 0.423653 \/ 0.258489 (0.165164) | 0.453304 \/ 0.293841 (0.159463) | 0.045247 \/ 0.128546 (-0.083300) | 0.016034 \/ 0.075646 (-0.059612) | 0.443115 \/ 0.419271 (0.023843) | 0.078853 \/ 0.043533 (0.035320) | 0.417508 \/ 0.255139 (0.162369) | 0.440936 \/ 0.283200 (0.157736) | 0.115603 \/ 0.141683 (-0.026080) | 1.844610 \/ 1.452155 (0.392456) | 1.998497 \/ 1.492716 (0.505781) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.272622 \/ 0.018006 (0.254616) | 0.598045 \/ 0.000490 (0.597556) | 0.007088 \/ 0.000200 (0.006888) | 0.000159 \/ 0.000054 (0.000105) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.032976 \/ 0.037411 (-0.004436) | 0.143970 \/ 0.014526 (0.129444) | 0.142172 \/ 0.176557 (-0.034384) | 0.216747 \/ 0.737135 (-0.520389) | 0.146004 \/ 0.296338 (-0.150334) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.687507 \/ 0.215209 (0.472298) | 6.549524 \/ 2.077655 (4.471870) | 2.924142 \/ 1.504120 (1.420022) | 2.504471 \/ 1.541195 (0.963277) | 2.496280 \/ 1.468490 (1.027790) | 0.959054 \/ 4.584777 (-3.625723) | 
5.851742 \/ 3.745712 (2.106030) | 4.983357 \/ 5.269862 (-0.286504) | 2.627403 \/ 4.565676 (-1.938274) | 0.112955 \/ 0.424275 (-0.311320) | 0.016206 \/ 0.007607 (0.008599) | 0.819158 \/ 0.226044 (0.593114) | 8.416949 \/ 2.268929 (6.148020) | 3.776765 \/ 55.444624 (-51.667859) | 3.002397 \/ 6.876477 (-3.874080) | 3.158852 \/ 2.142072 (1.016779) | 1.197099 \/ 4.805227 (-3.608129) | 0.280654 \/ 6.500664 (-6.220010) | 0.099471 \/ 0.075469 (0.024002) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.687007 \/ 1.841788 (-0.154781) | 19.411976 \/ 8.074308 (11.337668) | 22.053482 \/ 10.191392 (11.862090) | 0.228038 \/ 0.680424 (-0.452386) | 0.028226 \/ 0.534201 (-0.505975) | 0.527695 \/ 0.579283 (-0.051588) | 0.635911 \/ 0.434364 (0.201547) | 0.618205 \/ 0.540337 (0.077868) | 0.735164 \/ 1.386936 (-0.651772) |\n\n<\/details>\nPyArrow==latest\n\n
<details><summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009450 \/ 0.011353 (-0.001903) | 0.006566 \/ 0.011008 (-0.004442) | 0.108919 \/ 0.038508 (0.070411) | 0.050010 \/ 0.023109 (0.026900) | 0.505168 \/ 0.275898 (0.229270) | 0.552190 \/ 0.323480 (0.228710) | 0.007569 \/ 0.007986 (-0.000417) | 0.006807 \/ 0.004328 (0.002478) | 0.116621 \/ 0.004250 (0.112371) | 0.060374 \/ 0.037052 (0.023321) | 0.515165 \/ 0.258489 (0.256676) | 0.572125 \/ 0.293841 (0.278284) | 0.046561 \/ 0.128546 (-0.081986) | 0.016159 \/ 0.075646 (-0.059487) | 0.114568 \/ 0.419271 (-0.304704) | 0.064689 \/ 0.043533 (0.021157) | 0.497870 \/ 0.255139 (0.242731) | 0.567332 \/ 0.283200 (0.284132) | 0.126254 \/ 0.141683 (-0.015429) | 1.954074 \/ 1.452155 (0.501919) | 2.057682 \/ 1.492716 (0.564966) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.013857 \/ 0.018006 (-0.004149) | 0.601561 \/ 0.000490 (0.601071) | 0.002897 \/ 0.000200 (0.002697) | 0.000108 \/ 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.038480 \/ 0.037411 (0.001069) | 0.142480 \/ 0.014526 (0.127954) | 0.160479 \/ 0.176557 (-0.016077) | 0.217942 \/ 0.737135 (-0.519194) | 0.159908 \/ 0.296338 (-0.136431) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.697926 \/ 0.215209 (0.482717) | 6.869754 \/ 2.077655 (4.792100) | 3.125463 \/ 1.504120 (1.621343) | 2.729123 \/ 1.541195 (1.187928) | 2.855747 \/ 1.468490 (1.387257) | 1.015345 \/ 4.584777 (-3.569432) | 
5.839176 \/ 3.745712 (2.093463) | 5.019678 \/ 5.269862 (-0.250184) | 2.080489 \/ 4.565676 (-2.485187) | 0.118884 \/ 0.424275 (-0.305391) | 0.021381 \/ 0.007607 (0.013774) | 0.877847 \/ 0.226044 (0.651803) | 8.714561 \/ 2.268929 (6.445633) | 3.933399 \/ 55.444624 (-51.511226) | 3.281809 \/ 6.876477 (-3.594668) | 3.330342 \/ 2.142072 (1.188269) | 1.235005 \/ 4.805227 (-3.570222) | 0.239686 \/ 6.500664 (-6.260978) | 0.093546 \/ 0.075469 (0.018077) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.787916 \/ 1.841788 (-0.053872) | 20.094828 \/ 8.074308 (12.020520) | 22.902101 \/ 10.191392 (12.710709) | 0.249315 \/ 0.680424 (-0.431109) | 0.028058 \/ 0.534201 (-0.506143) | 0.524960 \/ 0.579283 (-0.054323) | 0.643881 \/ 0.434364 (0.209517) | 0.621203 \/ 0.540337 (0.080866) | 0.723337 \/ 1.386936 (-0.663599) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#074925b9b7c1dfd33b8675aa99c07cc26375665c \"CML watermark\")\n"],"created_at":1685090902000,"updated_at":1685685998000,"closed_at":1685626780000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5901","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5901","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5901.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5901.patch","merged_at":1685626779000},"body":"This PR uses `split_generator.split_info` as default value for `split_info` if any exception is raised while trying to get `split_generator.name` from `self.info.splits` (this may happen if there is any error in the metadata dataset_info splits).\r\n\r\nPlease note that `split_info` is only used by the logger.\r\n\r\nFix #5895 if passed `verification_mode=\"no_checks\"`:\r\n```python\r\nds = load_dataset(\r\n \"ArmelR\/stack-exchange-instruction\", \r\n data_dir=\"data\/finetune\", \r\n split=\"train\", \r\n verification_mode=\"no_checks\", \r\n revision=\"c609f1caade5cfbf3b9fe9cfa17d7cb000b457bd\",\r\n)\r\n```","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5901\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5901\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5900","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5900\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5900\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5900\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5900","id":1727129617,"node_id":"PR_kwDODunzps5RahTR","number":5900,"title":"Fix minor typo in docs 
loading.mdx","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
<details><summary>
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
<details><summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006763 \/ 0.011353 (-0.004589) | 0.004548 \/ 0.011008 (-0.006460) | 0.095631 \/ 0.038508 (0.057123) | 0.034046 \/ 0.023109 (0.010936) | 0.298064 \/ 0.275898 (0.022166) | 0.330391 \/ 0.323480 (0.006911) | 0.006058 \/ 0.007986 (-0.001928) | 0.004163 \/ 0.004328 (-0.000165) | 0.073260 \/ 0.004250 (0.069010) | 0.048885 \/ 0.037052 (0.011832) | 0.304651 \/ 0.258489 (0.046162) | 0.345882 \/ 0.293841 (0.052042) | 0.028061 \/ 0.128546 (-0.100485) | 0.008823 \/ 0.075646 (-0.066823) | 0.325620 \/ 0.419271 (-0.093651) | 0.064480 \/ 0.043533 (0.020948) | 0.303373 \/ 0.255139 (0.048234) | 0.321672 \/ 0.283200 (0.038472) | 0.116353 \/ 0.141683 (-0.025330) | 1.442327 \/ 1.452155 (-0.009827) | 1.567553 \/ 1.492716 (0.074837) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.213042 \/ 0.018006 (0.195035) | 0.457646 \/ 0.000490 (0.457156) | 0.003989 \/ 0.000200 (0.003789) | 0.000078 \/ 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.028068 \/ 0.037411 (-0.009344) | 0.114791 \/ 0.014526 (0.100265) | 0.120870 \/ 0.176557 (-0.055686) | 0.183006 \/ 0.737135 (-0.554130) | 0.126772 \/ 0.296338 (-0.169567) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.406438 \/ 0.215209 (0.191229) | 4.041890 \/ 2.077655 (1.964235) | 1.839967 \/ 1.504120 (0.335847) | 1.646857 \/ 1.541195 (0.105662) | 1.729372 \/ 1.468490 (0.260882) | 0.525540 \/ 4.584777 (-4.059237) | 
3.809996 \/ 3.745712 (0.064284) | 1.842598 \/ 5.269862 (-3.427263) | 1.062815 \/ 4.565676 (-3.502862) | 0.065301 \/ 0.424275 (-0.358974) | 0.012027 \/ 0.007607 (0.004420) | 0.505459 \/ 0.226044 (0.279415) | 5.051177 \/ 2.268929 (2.782248) | 2.354368 \/ 55.444624 (-53.090256) | 2.035482 \/ 6.876477 (-4.840995) | 2.120493 \/ 2.142072 (-0.021579) | 0.642233 \/ 4.805227 (-4.162994) | 0.141690 \/ 6.500664 (-6.358974) | 0.063933 \/ 0.075469 (-0.011536) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.186261 \/ 1.841788 (-0.655527) | 14.919653 \/ 8.074308 (6.845345) | 14.534003 \/ 10.191392 (4.342611) | 0.183165 \/ 0.680424 (-0.497259) | 0.017581 \/ 0.534201 (-0.516620) | 0.397284 \/ 0.579283 (-0.181999) | 0.431363 \/ 0.434364 (-0.003001) | 0.510774 \/ 0.540337 (-0.029564) | 0.614421 \/ 1.386936 (-0.772516) |\n\n<\/details>\nPyArrow==latest\n\n
<details><summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006682 \/ 0.011353 (-0.004671) | 0.004558 \/ 0.011008 (-0.006450) | 0.076272 \/ 0.038508 (0.037764) | 0.034285 \/ 0.023109 (0.011176) | 0.395594 \/ 0.275898 (0.119696) | 0.402702 \/ 0.323480 (0.079222) | 0.006093 \/ 0.007986 (-0.001893) | 0.005538 \/ 0.004328 (0.001209) | 0.075797 \/ 0.004250 (0.071547) | 0.051638 \/ 0.037052 (0.014585) | 0.396071 \/ 0.258489 (0.137582) | 0.409282 \/ 0.293841 (0.115441) | 0.028193 \/ 0.128546 (-0.100354) | 0.008827 \/ 0.075646 (-0.066819) | 0.083182 \/ 0.419271 (-0.336089) | 0.047605 \/ 0.043533 (0.004072) | 0.391148 \/ 0.255139 (0.136009) | 0.386784 \/ 0.283200 (0.103584) | 0.115303 \/ 0.141683 (-0.026380) | 1.463666 \/ 1.452155 (0.011512) | 1.566147 \/ 1.492716 (0.073431) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.213846 \/ 0.018006 (0.195839) | 0.454769 \/ 0.000490 (0.454279) | 0.004767 \/ 0.000200 (0.004567) | 0.000099 \/ 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030369 \/ 0.037411 (-0.007042) | 0.115585 \/ 0.014526 (0.101059) | 0.125181 \/ 0.176557 (-0.051376) | 0.179247 \/ 0.737135 (-0.557888) | 0.129336 \/ 0.296338 (-0.167003) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.446040 \/ 0.215209 (0.230831) | 4.462644 \/ 2.077655 (2.384989) | 2.254511 \/ 1.504120 (0.750392) | 2.062679 \/ 1.541195 (0.521484) | 2.180766 \/ 1.468490 (0.712276) | 0.530928 \/ 4.584777 (-4.053849) | 
3.781392 \/ 3.745712 (0.035680) | 3.522539 \/ 5.269862 (-1.747322) | 1.506960 \/ 4.565676 (-3.058717) | 0.067101 \/ 0.424275 (-0.357174) | 0.012011 \/ 0.007607 (0.004404) | 0.546407 \/ 0.226044 (0.320362) | 5.429894 \/ 2.268929 (3.160965) | 2.702244 \/ 55.444624 (-52.742381) | 2.367559 \/ 6.876477 (-4.508917) | 2.556032 \/ 2.142072 (0.413960) | 0.639690 \/ 4.805227 (-4.165538) | 0.144538 \/ 6.500664 (-6.356126) | 0.067822 \/ 0.075469 (-0.007647) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.284977 \/ 1.841788 (-0.556811) | 15.546489 \/ 8.074308 (7.472181) | 14.747519 \/ 10.191392 (4.556127) | 0.160044 \/ 0.680424 (-0.520380) | 0.017746 \/ 0.534201 (-0.516454) | 0.390140 \/ 0.579283 (-0.189143) | 0.420342 \/ 0.434364 (-0.014021) | 0.459788 \/ 0.540337 (-0.080549) | 0.556360 \/ 1.386936 (-0.830576) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#d646afbac7ea3dc0996fa2cb6ffd8a98e158e742 \"CML watermark\")\n","
<details><summary>
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
<details><summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006493 \/ 0.011353 (-0.004860) | 0.004532 \/ 0.011008 (-0.006476) | 0.096509 \/ 0.038508 (0.058001) | 0.033084 \/ 0.023109 (0.009974) | 0.297802 \/ 0.275898 (0.021904) | 0.345880 \/ 0.323480 (0.022400) | 0.005461 \/ 0.007986 (-0.002525) | 0.005282 \/ 0.004328 (0.000954) | 0.073719 \/ 0.004250 (0.069469) | 0.045035 \/ 0.037052 (0.007983) | 0.295504 \/ 0.258489 (0.037015) | 0.345400 \/ 0.293841 (0.051559) | 0.027880 \/ 0.128546 (-0.100666) | 0.008804 \/ 0.075646 (-0.066842) | 0.328017 \/ 0.419271 (-0.091255) | 0.050169 \/ 0.043533 (0.006637) | 0.299642 \/ 0.255139 (0.044503) | 0.313573 \/ 0.283200 (0.030374) | 0.103359 \/ 0.141683 (-0.038323) | 1.482145 \/ 1.452155 (0.029990) | 1.554584 \/ 1.492716 (0.061867) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.212860 \/ 0.018006 (0.194853) | 0.444823 \/ 0.000490 (0.444334) | 0.003014 \/ 0.000200 (0.002815) | 0.000108 \/ 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.026906 \/ 0.037411 (-0.010506) | 0.108056 \/ 0.014526 (0.093530) | 0.118721 \/ 0.176557 (-0.057835) | 0.176646 \/ 0.737135 (-0.560489) | 0.123285 \/ 0.296338 (-0.173053) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.430157 \/ 0.215209 (0.214948) | 4.279362 \/ 2.077655 (2.201707) | 1.999732 \/ 1.504120 (0.495612) | 1.803787 \/ 1.541195 (0.262592) | 1.868322 \/ 1.468490 (0.399832) | 0.529314 \/ 4.584777 (-4.055463) | 
3.785101 \/ 3.745712 (0.039389) | 2.812608 \/ 5.269862 (-2.457254) | 1.373460 \/ 4.565676 (-3.192216) | 0.066208 \/ 0.424275 (-0.358067) | 0.012173 \/ 0.007607 (0.004566) | 0.528716 \/ 0.226044 (0.302672) | 5.295003 \/ 2.268929 (3.026074) | 2.450188 \/ 55.444624 (-52.994437) | 2.114560 \/ 6.876477 (-4.761917) | 2.268468 \/ 2.142072 (0.126395) | 0.651706 \/ 4.805227 (-4.153521) | 0.142185 \/ 6.500664 (-6.358479) | 0.064862 \/ 0.075469 (-0.010607) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.184933 \/ 1.841788 (-0.656854) | 14.503903 \/ 8.074308 (6.429595) | 13.928965 \/ 10.191392 (3.737573) | 0.156788 \/ 0.680424 (-0.523636) | 0.017320 \/ 0.534201 (-0.516881) | 0.391366 \/ 0.579283 (-0.187918) | 0.416261 \/ 0.434364 (-0.018103) | 0.461951 \/ 0.540337 (-0.078387) | 0.553496 \/ 1.386936 (-0.833440) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006623 \/ 0.011353 (-0.004730) | 0.004617 \/ 0.011008 (-0.006392) | 0.075579 \/ 0.038508 (0.037071) | 0.033863 \/ 0.023109 (0.010754) | 0.357097 \/ 0.275898 (0.081199) | 0.396177 \/ 0.323480 (0.072697) | 0.005712 \/ 0.007986 (-0.002274) | 0.004232 \/ 0.004328 (-0.000097) | 0.074669 \/ 0.004250 (0.070418) | 0.048253 \/ 0.037052 (0.011201) | 0.362453 \/ 0.258489 (0.103964) | 0.405423 \/ 0.293841 (0.111582) | 0.028709 \/ 0.128546 (-0.099837) | 0.008884 \/ 0.075646 (-0.066763) | 0.083042 \/ 0.419271 (-0.336230) | 0.048074 \/ 0.043533 (0.004541) | 0.355314 \/ 0.255139 (0.100175) | 0.372536 \/ 0.283200 (0.089336) | 0.111548 \/ 0.141683 (-0.030135) | 1.466353 \/ 1.452155 (0.014198) | 1.555077 \/ 1.492716 (0.062361) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.217016 \/ 0.018006 (0.199010) | 0.450145 \/ 0.000490 (0.449655) | 0.001910 \/ 0.000200 (0.001711) | 0.000098 \/ 0.000054 (0.000044) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029787 \/ 0.037411 (-0.007624) | 0.115282 \/ 0.014526 (0.100756) | 0.121962 \/ 0.176557 (-0.054595) | 0.173424 \/ 0.737135 (-0.563711) | 0.127519 \/ 0.296338 (-0.168819) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.438211 \/ 0.215209 (0.223002) | 4.346352 \/ 2.077655 (2.268697) | 2.140197 \/ 1.504120 (0.636077) | 1.957890 \/ 1.541195 (0.416696) | 2.044300 \/ 1.468490 (0.575810) | 0.527958 \/ 4.584777 (-4.056819) | 
3.805079 \/ 3.745712 (0.059367) | 2.601763 \/ 5.269862 (-2.668098) | 1.359469 \/ 4.565676 (-3.206208) | 0.065358 \/ 0.424275 (-0.358917) | 0.011571 \/ 0.007607 (0.003964) | 0.538513 \/ 0.226044 (0.312469) | 5.363508 \/ 2.268929 (3.094580) | 2.640495 \/ 55.444624 (-52.804129) | 2.335930 \/ 6.876477 (-4.540547) | 2.407782 \/ 2.142072 (0.265710) | 0.641637 \/ 4.805227 (-4.163590) | 0.142196 \/ 6.500664 (-6.358468) | 0.065041 \/ 0.075469 (-0.010428) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.296031 \/ 1.841788 (-0.545757) | 14.950424 \/ 8.074308 (6.876115) | 14.371304 \/ 10.191392 (4.179912) | 0.148157 \/ 0.680424 (-0.532267) | 0.017506 \/ 0.534201 (-0.516695) | 0.392037 \/ 0.579283 (-0.187246) | 0.423238 \/ 0.434364 (-0.011126) | 0.464608 \/ 0.540337 (-0.075730) | 0.563876 \/ 1.386936 (-0.823060) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#04b1d0371408beb0c7bc587a69c382bd8d0bec36 \"CML watermark\")\n"],"created_at":1685088654000,"updated_at":1685093655000,"closed_at":1685093112000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5900","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5900","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5900.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5900.patch","merged_at":1685093112000},"body":"Minor fix.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5900\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5900\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5899","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5899\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5899\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5899\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5899","id":1726279011,"node_id":"PR_kwDODunzps5RXods","number":5899,"title":"canonicalize data dir in config ID 
hash","user":{"login":"kylrth","id":5044802,"node_id":"MDQ6VXNlcjUwNDQ4MDI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5044802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/kylrth","html_url":"https:\/\/github.com\/kylrth","followers_url":"https:\/\/api.github.com\/users\/kylrth\/followers","following_url":"https:\/\/api.github.com\/users\/kylrth\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/kylrth\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/kylrth\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/kylrth\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/kylrth\/orgs","repos_url":"https:\/\/api.github.com\/users\/kylrth\/repos","events_url":"https:\/\/api.github.com\/users\/kylrth\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/kylrth\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
(CI benchmark comment omitted; CML watermark 02ee418831aba68d0be93227bce8b3f42ef8980f)"],"created_at":1685038630000,"updated_at":1685721735000,"closed_at":1685721124000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5899","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5899","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5899.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5899.patch","merged_at":1685721124000},"body":"fixes #5871 \r\n\r\nThe second commit is optional but improves readability.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5899\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5899\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true}
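PR #5899 above canonicalizes the `data_dir` string before it enters the config ID hash, so that different spellings of the same directory resolve to one cache entry instead of duplicating work. A minimal sketch of the idea, assuming a sha256-based suffix; the helper name and hashing scheme are illustrative, not the PR's actual code:

```python
import hashlib
import os

def config_id_suffix(data_dir: str) -> str:
    # Canonicalize first, so "data", "./data" and "data/" all resolve to
    # the same absolute path and therefore the same cache config ID.
    canonical = os.path.realpath(data_dir)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Different spellings of one directory now hash identically.
assert config_id_suffix("data") == config_id_suffix("./data/")
```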
language","user":{"login":"106AbdulBasit","id":36159918,"node_id":"MDQ6VXNlcjM2MTU5OTE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36159918?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/106AbdulBasit","html_url":"https:\/\/github.com\/106AbdulBasit","followers_url":"https:\/\/api.github.com\/users\/106AbdulBasit\/followers","following_url":"https:\/\/api.github.com\/users\/106AbdulBasit\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/106AbdulBasit\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/106AbdulBasit\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/106AbdulBasit\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/106AbdulBasit\/orgs","repos_url":"https:\/\/api.github.com\/users\/106AbdulBasit\/repos","events_url":"https:\/\/api.github.com\/users\/106AbdulBasit\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/106AbdulBasit\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["got that the syntax is like this\r\n\r\ndataset = load_dataset(\"facebook\/flores\", \"ace_Arab\")"],"created_at":1685034535000,"updated_at":1685035298000,"closed_at":1685035297000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nI am trying to load the Flores data set\r\n\r\nthe code which is given is\r\n```\r\n\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"facebook\/flores\")\r\n\r\n```\r\nThis gives the error of config name \r\n\r\n\"\"ValueError: Config name is missing\"\r\n\r\nNow if I add some config it gives me the some error\r\n\r\n\"HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: 'facebook\/flores, 'ace_Arab''.\r\n\"\r\n\r\nHow I can load the data of the specific language ?\r\n\r\nCouldn't find any tutorial\r\n\r\nany one can help me out?\n\n### Steps to reproduce the bug\n\nstep one load the data set \r\n\r\n`from datasets import load_dataset\r\n\r\ndataset = load_dataset(\"facebook\/flores\")`\r\n\r\nit gives the error of config\r\n\r\nonce config is given \r\n\r\nit gives the error of\r\n\r\n\"HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' 
cannot start or end the name, max length is 96: 'facebook\/flores, 'ace_Arab''.\r\n\"\n\n### Expected behavior\n\nData set should be loaded but I am receiving error\n\n### Environment info\n\nDatasets , python , ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5898\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5898\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5897","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5897\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5897\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5897\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5897","id":1726135494,"node_id":"PR_kwDODunzps5RXJaY","number":5897,"title":"Fix `FixedSizeListArray` casting","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["
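The resolution in issue #5898 above is that the language goes in as a separate config-name argument rather than being appended to the repo id. A short sketch of the working call; `get_dataset_config_names` is the usual way to discover which configs a repo exposes:

```python
from datasets import get_dataset_config_names, load_dataset

# Each flores language variant is a separate config of the repo.
configs = get_dataset_config_names("facebook/flores")
print(configs[:5])

# Pass the config as its own argument, not inside the repo id string.
dataset = load_dataset("facebook/flores", "ace_Arab")
```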
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5897","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5897\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5897\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5897\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5897","id":1726135494,"node_id":"PR_kwDODunzps5RXJaY","number":5897,"title":"Fix `FixedSizeListArray` casting","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["(two CI benchmark comments omitted; CML watermarks bf764819ba6754cb7edf15899db517be0548676f and ba5f81357b53099b1bedfbb277211dba3952257b)","_The documentation is not available anymore as the PR was closed or merged._","
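For context on this PR's fix: slicing a pyarrow `FixedSizeListArray` is zero-copy, and per the pyarrow docs `.values` returns the backing child array while ignoring the slice offset, so any cast built on `.values` has to re-apply the offset itself. A minimal sketch of the offset-aware reconstruction (illustrative, not the PR's exact diff):

```python
import pyarrow as pa

# fixed_size_list<int64, 2> with three entries, then a zero-copy slice.
arr = pa.array([[1, 2], [3, 4], [5, 6]], type=pa.list_(pa.int64(), 2))
sliced = arr.slice(1, 2)  # logically [[3, 4], [5, 6]]

# .values exposes the full flat buffer [1, 2, 3, 4, 5, 6], so cut out
# the sliced region using the array's offset and the fixed list size.
size = sliced.type.list_size
flat = sliced.values[sliced.offset * size : (sliced.offset + len(sliced)) * size]
print(flat.to_pylist())  # [3, 4, 5, 6]
```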
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006303 \/ 0.011353 (-0.005050) | 0.004043 \/ 0.011008 (-0.006965) | 0.096239 \/ 0.038508 (0.057731) | 0.029608 \/ 0.023109 (0.006498) | 0.321058 \/ 0.275898 (0.045160) | 0.367066 \/ 0.323480 (0.043587) | 0.005236 \/ 0.007986 (-0.002749) | 0.003342 \/ 0.004328 (-0.000987) | 0.074407 \/ 0.004250 (0.070157) | 0.038810 \/ 0.037052 (0.001757) | 0.332597 \/ 0.258489 (0.074108) | 0.363562 \/ 0.293841 (0.069721) | 0.025460 \/ 0.128546 (-0.103086) | 0.008426 \/ 0.075646 (-0.067221) | 0.316998 \/ 0.419271 (-0.102273) | 0.043621 \/ 0.043533 (0.000088) | 0.338043 \/ 0.255139 (0.082904) | 0.366441 \/ 0.283200 (0.083241) | 0.092061 \/ 0.141683 (-0.049622) | 1.461531 \/ 1.452155 (0.009376) | 1.538047 \/ 1.492716 (0.045331) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.206796 \/ 0.018006 (0.188790) | 0.517959 \/ 0.000490 (0.517469) | 0.002745 \/ 0.000200 (0.002545) | 0.000070 \/ 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.022902 \/ 0.037411 (-0.014510) | 0.097901 \/ 0.014526 (0.083375) | 0.103664 \/ 0.176557 (-0.072893) | 0.163516 \/ 0.737135 (-0.573619) | 0.108561 \/ 0.296338 (-0.187778) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.418964 \/ 0.215209 (0.203755) | 4.159113 \/ 2.077655 (2.081458) | 1.843946 \/ 1.504120 (0.339827) | 1.641083 \/ 1.541195 (0.099888) | 1.686848 \/ 1.468490 (0.218358) | 0.554583 \/ 4.584777 (-4.030194) | 
3.409862 \/ 3.745712 (-0.335850) | 2.647904 \/ 5.269862 (-2.621958) | 1.355424 \/ 4.565676 (-3.210253) | 0.068229 \/ 0.424275 (-0.356046) | 0.012217 \/ 0.007607 (0.004610) | 0.515895 \/ 0.226044 (0.289851) | 5.144920 \/ 2.268929 (2.875991) | 2.298046 \/ 55.444624 (-53.146579) | 1.964735 \/ 6.876477 (-4.911741) | 2.075580 \/ 2.142072 (-0.066492) | 0.657104 \/ 4.805227 (-4.148123) | 0.134759 \/ 6.500664 (-6.365905) | 0.067545 \/ 0.075469 (-0.007924) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.233075 \/ 1.841788 (-0.608713) | 13.896762 \/ 8.074308 (5.822454) | 14.055143 \/ 10.191392 (3.863751) | 0.145507 \/ 0.680424 (-0.534917) | 0.016702 \/ 0.534201 (-0.517499) | 0.365157 \/ 0.579283 (-0.214126) | 0.385842 \/ 0.434364 (-0.048522) | 0.459993 \/ 0.540337 (-0.080344) | 0.547115 \/ 1.386936 (-0.839821) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006174 \/ 0.011353 (-0.005179) | 0.004191 \/ 0.011008 (-0.006817) | 0.078311 \/ 0.038508 (0.039803) | 0.028038 \/ 0.023109 (0.004928) | 0.360056 \/ 0.275898 (0.084158) | 0.398081 \/ 0.323480 (0.074602) | 0.005069 \/ 0.007986 (-0.002916) | 0.003464 \/ 0.004328 (-0.000864) | 0.077858 \/ 0.004250 (0.073608) | 0.039420 \/ 0.037052 (0.002367) | 0.361743 \/ 0.258489 (0.103254) | 0.404829 \/ 0.293841 (0.110988) | 0.025604 \/ 0.128546 (-0.102943) | 0.008573 \/ 0.075646 (-0.067074) | 0.084944 \/ 0.419271 (-0.334328) | 0.042652 \/ 0.043533 (-0.000881) | 0.368549 \/ 0.255139 (0.113410) | 0.385682 \/ 0.283200 (0.102482) | 0.099085 \/ 0.141683 (-0.042598) | 1.495815 \/ 1.452155 (0.043661) | 1.548168 \/ 1.492716 (0.055452) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.193737 \/ 0.018006 (0.175730) | 0.421871 \/ 0.000490 (0.421381) | 0.002306 \/ 0.000200 (0.002106) | 0.000073 \/ 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025928 \/ 0.037411 (-0.011483) | 0.103410 \/ 0.014526 (0.088885) | 0.107931 \/ 0.176557 (-0.068626) | 0.157127 \/ 0.737135 (-0.580008) | 0.111892 \/ 0.296338 (-0.184446) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.477562 \/ 0.215209 (0.262353) | 4.772711 \/ 2.077655 (2.695056) | 2.458725 \/ 1.504120 (0.954605) | 2.269871 \/ 1.541195 (0.728676) | 2.365502 \/ 1.468490 (0.897012) | 0.556182 \/ 4.584777 (-4.028595) | 
3.408016 \/ 3.745712 (-0.337697) | 1.730639 \/ 5.269862 (-3.539222) | 1.000973 \/ 4.565676 (-3.564704) | 0.068293 \/ 0.424275 (-0.355982) | 0.012119 \/ 0.007607 (0.004512) | 0.581281 \/ 0.226044 (0.355236) | 5.811930 \/ 2.268929 (3.543001) | 2.890337 \/ 55.444624 (-52.554288) | 2.592156 \/ 6.876477 (-4.284321) | 2.687764 \/ 2.142072 (0.545691) | 0.664282 \/ 4.805227 (-4.140946) | 0.136029 \/ 6.500664 (-6.364635) | 0.067493 \/ 0.075469 (-0.007976) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.330723 \/ 1.841788 (-0.511064) | 14.379172 \/ 8.074308 (6.304864) | 14.153286 \/ 10.191392 (3.961894) | 0.142942 \/ 0.680424 (-0.537482) | 0.016698 \/ 0.534201 (-0.517503) | 0.361044 \/ 0.579283 (-0.218239) | 0.393174 \/ 0.434364 (-0.041190) | 0.423107 \/ 0.540337 (-0.117231) | 0.514299 \/ 1.386936 (-0.872637) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#1cb02285358ab4be6386e0a2aae40d267ff561fc \"CML watermark\")\n"],"created_at":1685031993000,"updated_at":1685103724000,"closed_at":1685102236000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5897","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5897","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5897.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5897.patch","merged_at":1685102236000},"body":"Fix cast on sliced `FixedSizeListArray`s.\r\n\r\nFix #5866","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5897\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5897\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5896","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5896\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5896\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5896\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5896","id":1726022500,"node_id":"I_kwDODunzps5m4QNk","number":5896,"title":"HuggingFace does not cache downloaded files aggressively\/early 
enough","user":{"login":"geajack","id":2124157,"node_id":"MDQ6VXNlcjIxMjQxNTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2124157?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/geajack","html_url":"https:\/\/github.com\/geajack","followers_url":"https:\/\/api.github.com\/users\/geajack\/followers","following_url":"https:\/\/api.github.com\/users\/geajack\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/geajack\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/geajack\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/geajack\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/geajack\/orgs","repos_url":"https:\/\/api.github.com\/users\/geajack\/repos","events_url":"https:\/\/api.github.com\/users\/geajack\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/geajack\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1685027676000,"updated_at":1685027676000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nI wrote the following script:\r\n\r\n```\r\nimport datasets\r\n\r\ndataset = datasets.load.load_dataset(\"wikipedia\", \"20220301.en\", split=\"train[:10000]\")\r\n```\r\n\r\nI ran it and spent 90 minutes downloading a 20GB file. Then I saw:\r\n\r\n```\r\nDownloading: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 20.3G\/20.3G [1:30:29<00:00, 3.73MB\/s]\r\nTraceback (most recent call last):\r\n File \"\/home\/jack\/Code\/Projects\/Transformers\/Codebase\/main.py\", line 5, in \r\n dataset = datasets.load.load_dataset(\"wikipedia\", \"20220301.en\", split=\"train[:10000]\")\r\n File \"\/home\/jack\/.local\/lib\/python3.10\/site-packages\/datasets\/load.py\", line 1782, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"\/home\/jack\/.local\/lib\/python3.10\/site-packages\/datasets\/builder.py\", line 883, in download_and_prepare\r\n self._save_info()\r\n File \"\/home\/jack\/.local\/lib\/python3.10\/site-packages\/datasets\/builder.py\", line 2037, in _save_info\r\n import apache_beam as beam\r\nModuleNotFoundError: No module named 'apache_beam'\r\n```\r\n\r\nAnd the 20GB of data was seemingly instantly gone forever, because when I ran the script again, it had to do the download again.\n\n### Steps to reproduce the bug\n\nSee above\n\n### Expected behavior\n\nSee above\n\n### Environment info\n\ndatasets 
2.10.1\r\nPython 3.10","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5896\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5896\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5895","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5895\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5895\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5895\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5895","id":1725467252,"node_id":"I_kwDODunzps5m2Ip0","number":5895,"title":"The dir name and split strings are confused when loading ArmelR\/stack-exchange-instruction dataset","user":{"login":"DongHande","id":45357817,"node_id":"MDQ6VXNlcjQ1MzU3ODE3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/45357817?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/DongHande","html_url":"https:\/\/github.com\/DongHande","followers_url":"https:\/\/api.github.com\/users\/DongHande\/followers","following_url":"https:\/\/api.github.com\/users\/DongHande\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/DongHande\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/DongHande\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/DongHande\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/DongHande\/orgs","repos_url":"https:\/\/api.github.com\/users\/DongHande\/repos","events_url":"https:\/\/api.github.com\/users\/DongHande\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/DongHande\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting, @DongHande.\r\n\r\nI think the issue is caused by the metadata in the dataset card: in the header of the `README.md`, they state that the dataset has 4 splits (\"finetune\", \"reward\", \"rl\", \"evaluation\"). \r\n```yaml\r\n splits:\r\n - name: finetune\r\n num_bytes: 6674567576\r\n num_examples: 3000000\r\n - name: reward\r\n num_bytes: 6674341521\r\n num_examples: 3000000\r\n - name: rl\r\n num_bytes: 6679279968\r\n num_examples: 3000000\r\n - name: evaluation\r\n num_bytes: 4022714493\r\n num_examples: 1807695\r\n```\r\n\r\n\r\nI guess the user wanted to define these as configs, instead of splits. This is not yet supported for no-script datasets, but will be soon supported. See:\r\n- #5331\r\n\r\nI think we should contact the dataset author to inform about the issue with the split names, as you already did: https:\/\/huggingface.co\/datasets\/ArmelR\/stack-exchange-instruction\/discussions\/1\r\nLet's continue the discussion there!","Thank you! It has been fixed. "],"created_at":1685007546000,"updated_at":1685327532000,"closed_at":1685327532000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\r\n\r\nWhen I load the ArmelR\/stack-exchange-instruction dataset, I encounter a bug that may be raised by confusing the dir name string and the split string about the dataset. 
\r\n\r\nWhen I use the script \"datasets.load_dataset('ArmelR\/stack-exchange-instruction', data_dir=\"data\/finetune\", split=\"train\", use_auth_token=True)\", it fails. But it succeeds when I add the \"streaming = True\" parameter. \r\n\r\nThe website of the dataset is https:\/\/huggingface.co\/datasets\/ArmelR\/stack-exchange-instruction\/ .\r\n\r\nThe traceback logs are as below: \r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/home\/xxx\/miniconda3\/envs\/code\/lib\/python3.9\/site-packages\/datasets\/load.py\", line 1797, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"\/home\/xxx\/miniconda3\/envs\/code\/lib\/python3.9\/site-packages\/datasets\/builder.py\", line 890, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/home\/xxx\/miniconda3\/envs\/code\/lib\/python3.9\/site-packages\/datasets\/builder.py\", line 985, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"\/home\/xxx\/miniconda3\/envs\/code\/lib\/python3.9\/site-packages\/datasets\/builder.py\", line 1706, in _prepare_split\r\n split_info = self.info.splits[split_generator.name]\r\n File \"\/home\/xxx\/miniconda3\/envs\/code\/lib\/python3.9\/site-packages\/datasets\/splits.py\", line 530, in __getitem__\r\n instructions = make_file_instructions(\r\n File \"\/home\/xxx\/miniconda3\/envs\/code\/lib\/python3.9\/site-packages\/datasets\/arrow_reader.py\", line 112, in make_file_instructions\r\n name2filenames = {\r\n File \"\/home\/xxx\/miniconda3\/envs\/code\/lib\/python3.9\/site-packages\/datasets\/arrow_reader.py\", line 113, in <dictcomp>\r\n info.name: filenames_for_dataset_split(\r\n File \"\/home\/xxx\/miniconda3\/envs\/code\/lib\/python3.9\/site-packages\/datasets\/naming.py\", line 70, in filenames_for_dataset_split\r\n prefix = filename_prefix_for_split(dataset_name, split)\r\n File \"\/home\/xxx\/miniconda3\/envs\/code\/lib\/python3.9\/site-packages\/datasets\/naming.py\", line 54, in filename_prefix_for_split\r\n if os.path.basename(name) != name:\r\n File \"\/home\/xxx\/miniconda3\/envs\/code\/lib\/python3.9\/posixpath.py\", line 142, in basename\r\n p = os.fspath(p)\r\nTypeError: expected str, bytes or os.PathLike object, not NoneType\r\n\r\n\r\n\r\n\r\n\r\n### Steps to reproduce the bug\r\n\r\n1. Import the datasets library function: ```from datasets import load_dataset```\r\n2. Load the dataset: ```ds=load_dataset('ArmelR\/stack-exchange-instruction', data_dir=\"data\/finetune\", split=\"train\", use_auth_token=True)```\r\n\r\n### Expected behavior\r\n\r\nThe dataset should load successfully even without the streaming setting. 
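For reference, a minimal sketch contrasting the failing call with the reported workaround (both assume access to the repository has been granted):

```python
from datasets import load_dataset

# Fails during split-name resolution, per the traceback above:
# ds = load_dataset("ArmelR/stack-exchange-instruction", data_dir="data/finetune",
#                   split="train", use_auth_token=True)

# Reportedly succeeds: in streaming mode download_and_prepare() is never
# called, so the split-name lookup that raises the TypeError is skipped.
ds = load_dataset(
    "ArmelR/stack-exchange-instruction",
    data_dir="data/finetune",
    split="train",
    streaming=True,
    use_auth_token=True,
)
print(next(iter(ds)))
```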
\r\n\r\n### Environment info\r\n\r\nLinux, \r\npython=3.9\r\ndatasets=2.12.0","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5895\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5895\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5894","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5894\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5894\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5894\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5894","id":1724774910,"node_id":"PR_kwDODunzps5RSjot","number":5894,"title":"Force overwrite existing filesystem protocol","user":{"login":"baskrahmer","id":24520725,"node_id":"MDQ6VXNlcjI0NTIwNzI1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/24520725?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/baskrahmer","html_url":"https:\/\/github.com\/baskrahmer","followers_url":"https:\/\/api.github.com\/users\/baskrahmer\/followers","following_url":"https:\/\/api.github.com\/users\/baskrahmer\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/baskrahmer\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/baskrahmer\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/baskrahmer\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/baskrahmer\/orgs","repos_url":"https:\/\/api.github.com\/users\/baskrahmer\/repos","events_url":"https:\/\/api.github.com\/users\/baskrahmer\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/baskrahmer\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009139 \/ 0.011353 (-0.002214) | 0.005634 \/ 0.011008 (-0.005374) | 0.129587 \/ 0.038508 (0.091079) | 0.038298 \/ 0.023109 (0.015189) | 0.428149 \/ 0.275898 (0.152251) | 0.443744 \/ 0.323480 (0.120264) | 0.007501 \/ 0.007986 (-0.000485) | 0.005999 \/ 0.004328 (0.001671) | 0.100796 \/ 0.004250 (0.096546) | 0.053236 \/ 0.037052 (0.016184) | 0.423868 \/ 0.258489 (0.165379) | 0.460110 \/ 0.293841 (0.166269) | 0.041255 \/ 0.128546 (-0.087291) | 0.013790 \/ 0.075646 (-0.061856) | 0.438398 \/ 0.419271 (0.019127) | 0.063086 \/ 0.043533 (0.019553) | 0.414826 \/ 0.255139 (0.159687) | 0.460652 \/ 0.283200 (0.177453) | 0.121223 \/ 0.141683 (-0.020460) | 1.754430 \/ 1.452155 (0.302275) | 1.900037 \/ 1.492716 (0.407320) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.027222 \/ 0.018006 (0.009216) | 0.617666 \/ 0.000490 (0.617176) | 0.022443 \/ 0.000200 (0.022243) | 0.000820 \/ 0.000054 (0.000766) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030397 \/ 0.037411 (-0.007014) | 0.125732 \/ 0.014526 (0.111206) | 0.149805 \/ 0.176557 (-0.026752) | 0.234048 \/ 0.737135 (-0.503087) | 0.143108 \/ 0.296338 (-0.153231) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.631189 \/ 0.215209 (0.415980) | 6.182871 \/ 2.077655 (4.105216) | 2.635730 \/ 1.504120 (1.131610) | 2.231429 \/ 1.541195 (0.690235) | 2.438360 \/ 1.468490 (0.969870) | 0.861170 \/ 4.584777 (-3.723607) | 
5.785984 \/ 3.745712 (2.040272) | 2.758358 \/ 5.269862 (-2.511504) | 1.678095 \/ 4.565676 (-2.887582) | 0.105961 \/ 0.424275 (-0.318314) | 0.013659 \/ 0.007607 (0.006052) | 0.762943 \/ 0.226044 (0.536898) | 7.774399 \/ 2.268929 (5.505471) | 3.319027 \/ 55.444624 (-52.125598) | 2.700248 \/ 6.876477 (-4.176229) | 3.008581 \/ 2.142072 (0.866509) | 1.122522 \/ 4.805227 (-3.682705) | 0.214832 \/ 6.500664 (-6.285832) | 0.085281 \/ 0.075469 (0.009811) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.647610 \/ 1.841788 (-0.194177) | 18.178316 \/ 8.074308 (10.104008) | 21.199177 \/ 10.191392 (11.007785) | 0.247063 \/ 0.680424 (-0.433361) | 0.030443 \/ 0.534201 (-0.503758) | 0.512527 \/ 0.579283 (-0.066757) | 0.640758 \/ 0.434364 (0.206394) | 0.639986 \/ 0.540337 (0.099649) | 0.760113 \/ 1.386936 (-0.626823) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008293 \/ 0.011353 (-0.003060) | 0.005360 \/ 0.011008 (-0.005648) | 0.102932 \/ 0.038508 (0.064424) | 0.037457 \/ 0.023109 (0.014347) | 0.444114 \/ 0.275898 (0.168216) | 0.512855 \/ 0.323480 (0.189375) | 0.007030 \/ 0.007986 (-0.000956) | 0.004954 \/ 0.004328 (0.000625) | 0.095757 \/ 0.004250 (0.091507) | 0.051239 \/ 0.037052 (0.014187) | 0.471118 \/ 0.258489 (0.212629) | 0.517764 \/ 0.293841 (0.223923) | 0.041953 \/ 0.128546 (-0.086593) | 0.013748 \/ 0.075646 (-0.061898) | 0.118089 \/ 0.419271 (-0.301182) | 0.060159 \/ 0.043533 (0.016626) | 0.466011 \/ 0.255139 (0.210872) | 0.489180 \/ 0.283200 (0.205980) | 0.123250 \/ 0.141683 (-0.018433) | 1.714738 \/ 1.452155 (0.262584) | 1.838571 \/ 1.492716 (0.345855) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.267792 \/ 0.018006 (0.249785) | 0.624313 \/ 0.000490 (0.623824) | 0.007315 \/ 0.000200 (0.007115) | 0.000136 \/ 0.000054 (0.000082) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.033751 \/ 0.037411 (-0.003661) | 0.122819 \/ 0.014526 (0.108293) | 0.148270 \/ 0.176557 (-0.028286) | 0.198581 \/ 0.737135 (-0.538554) | 0.144845 \/ 0.296338 (-0.151494) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.620631 \/ 0.215209 (0.405422) | 6.224665 \/ 2.077655 (4.147010) | 2.856592 \/ 1.504120 (1.352473) | 2.525089 \/ 1.541195 (0.983894) | 2.600198 \/ 1.468490 (1.131708) | 0.872038 \/ 4.584777 (-3.712739) | 
5.571650 \/ 3.745712 (1.825937) | 5.907643 \/ 5.269862 (0.637782) | 2.348770 \/ 4.565676 (-2.216906) | 0.111665 \/ 0.424275 (-0.312610) | 0.013886 \/ 0.007607 (0.006278) | 0.762154 \/ 0.226044 (0.536109) | 7.792686 \/ 2.268929 (5.523758) | 3.601122 \/ 55.444624 (-51.843503) | 2.939412 \/ 6.876477 (-3.937064) | 2.973430 \/ 2.142072 (0.831358) | 1.065016 \/ 4.805227 (-3.740211) | 0.221701 \/ 6.500664 (-6.278963) | 0.088157 \/ 0.075469 (0.012688) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.771061 \/ 1.841788 (-0.070727) | 18.826926 \/ 8.074308 (10.752618) | 21.283830 \/ 10.191392 (11.092438) | 0.239233 \/ 0.680424 (-0.441191) | 0.026159 \/ 0.534201 (-0.508042) | 0.487074 \/ 0.579283 (-0.092209) | 0.623241 \/ 0.434364 (0.188877) | 0.600506 \/ 0.540337 (0.060169) | 0.691271 \/ 1.386936 (-0.695665) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#1bbe2c3496498a6415765b517ac4bc600a02ad06 \"CML watermark\")\n"],"created_at":1684964513000,"updated_at":1684997528000,"closed_at":1684996953000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5894","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5894","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5894.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5894.patch","merged_at":1684996953000},"body":"Fix #5876","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5894\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5894\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5893","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5893\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5893\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5893\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5893","id":1722519056,"node_id":"PR_kwDODunzps5RK40K","number":5893,"title":"Load cached dataset as iterable 
","user":{"login":"mariusz-jachimowicz-83","id":10278877,"node_id":"MDQ6VXNlcjEwMjc4ODc3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10278877?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariusz-jachimowicz-83","html_url":"https:\/\/github.com\/mariusz-jachimowicz-83","followers_url":"https:\/\/api.github.com\/users\/mariusz-jachimowicz-83\/followers","following_url":"https:\/\/api.github.com\/users\/mariusz-jachimowicz-83\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariusz-jachimowicz-83\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariusz-jachimowicz-83\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariusz-jachimowicz-83\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariusz-jachimowicz-83\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariusz-jachimowicz-83\/repos","events_url":"https:\/\/api.github.com\/users\/mariusz-jachimowicz-83\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariusz-jachimowicz-83\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq Could you please look into that and review?","_The documentation is not available anymore as the PR was closed or merged._","@lhoestq I refactored the code. Could you please check is it what you requested?","@lhoestq Thanks for a review. Excellent tips. All tips applied. ","I think there is just PythonFormatter that needs to be imported in the test file and we should be good to merge","@lhoestq that is weird. I have linter error when I do it.","@lhoestq Now it should work properly.","
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006152 \/ 0.011353 (-0.005201) | 0.004169 \/ 0.011008 (-0.006839) | 0.097968 \/ 0.038508 (0.059460) | 0.028325 \/ 0.023109 (0.005216) | 0.308958 \/ 0.275898 (0.033060) | 0.341832 \/ 0.323480 (0.018352) | 0.005098 \/ 0.007986 (-0.002887) | 0.004721 \/ 0.004328 (0.000393) | 0.075067 \/ 0.004250 (0.070817) | 0.040514 \/ 0.037052 (0.003462) | 0.308355 \/ 0.258489 (0.049866) | 0.351063 \/ 0.293841 (0.057222) | 0.025261 \/ 0.128546 (-0.103285) | 0.008483 \/ 0.075646 (-0.067163) | 0.321219 \/ 0.419271 (-0.098052) | 0.058258 \/ 0.043533 (0.014725) | 0.312572 \/ 0.255139 (0.057433) | 0.330667 \/ 0.283200 (0.047467) | 0.091047 \/ 0.141683 (-0.050635) | 1.536541 \/ 1.452155 (0.084387) | 1.606566 \/ 1.492716 (0.113850) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.213234 \/ 0.018006 (0.195228) | 0.494801 \/ 0.000490 (0.494311) | 0.003764 \/ 0.000200 (0.003564) | 0.000074 \/ 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.023653 \/ 0.037411 (-0.013758) | 0.097176 \/ 0.014526 (0.082650) | 0.102961 \/ 0.176557 (-0.073595) | 0.164285 \/ 0.737135 (-0.572851) | 0.107586 \/ 0.296338 (-0.188753) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.421402 \/ 0.215209 (0.206193) | 4.195828 \/ 2.077655 (2.118174) | 1.884664 \/ 1.504120 (0.380544) | 1.679750 \/ 1.541195 (0.138556) | 1.719725 \/ 1.468490 (0.251235) | 0.552290 \/ 4.584777 (-4.032486) | 
3.386337 \/ 3.745712 (-0.359375) | 1.771527 \/ 5.269862 (-3.498334) | 1.133327 \/ 4.565676 (-3.432349) | 0.067911 \/ 0.424275 (-0.356364) | 0.012572 \/ 0.007607 (0.004965) | 0.518004 \/ 0.226044 (0.291960) | 5.192381 \/ 2.268929 (2.923453) | 2.316032 \/ 55.444624 (-53.128592) | 1.993264 \/ 6.876477 (-4.883212) | 2.071009 \/ 2.142072 (-0.071063) | 0.655062 \/ 4.805227 (-4.150165) | 0.135488 \/ 6.500664 (-6.365177) | 0.067273 \/ 0.075469 (-0.008196) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.217731 \/ 1.841788 (-0.624056) | 13.812927 \/ 8.074308 (5.738619) | 13.137886 \/ 10.191392 (2.946494) | 0.143102 \/ 0.680424 (-0.537322) | 0.016884 \/ 0.534201 (-0.517317) | 0.370106 \/ 0.579283 (-0.209178) | 0.392349 \/ 0.434364 (-0.042015) | 0.424501 \/ 0.540337 (-0.115837) | 0.509830 \/ 1.386936 (-0.877106) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006210 \/ 0.011353 (-0.005142) | 0.004215 \/ 0.011008 (-0.006793) | 0.076129 \/ 0.038508 (0.037621) | 0.027825 \/ 0.023109 (0.004716) | 0.403973 \/ 0.275898 (0.128075) | 0.441089 \/ 0.323480 (0.117609) | 0.005420 \/ 0.007986 (-0.002566) | 0.004870 \/ 0.004328 (0.000542) | 0.075558 \/ 0.004250 (0.071308) | 0.039464 \/ 0.037052 (0.002411) | 0.404329 \/ 0.258489 (0.145840) | 0.447213 \/ 0.293841 (0.153372) | 0.025877 \/ 0.128546 (-0.102669) | 0.008660 \/ 0.075646 (-0.066987) | 0.081849 \/ 0.419271 (-0.337422) | 0.044551 \/ 0.043533 (0.001018) | 0.379102 \/ 0.255139 (0.123963) | 0.403104 \/ 0.283200 (0.119905) | 0.094754 \/ 0.141683 (-0.046929) | 1.460772 \/ 1.452155 (0.008617) | 1.569531 \/ 1.492716 (0.076815) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.183923 \/ 0.018006 (0.165917) | 0.420708 \/ 0.000490 (0.420219) | 0.002091 \/ 0.000200 (0.001891) | 0.000080 \/ 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.026180 \/ 0.037411 (-0.011231) | 0.101529 \/ 0.014526 (0.087003) | 0.108739 \/ 0.176557 (-0.067818) | 0.160702 \/ 0.737135 (-0.576433) | 0.111739 \/ 0.296338 (-0.184600) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.448671 \/ 0.215209 (0.233462) | 4.469287 \/ 2.077655 (2.391632) | 2.244335 \/ 1.504120 (0.740215) | 2.107495 \/ 1.541195 (0.566301) | 2.224763 \/ 1.468490 (0.756272) | 0.554006 \/ 4.584777 (-4.030771) | 
3.390109 \/ 3.745712 (-0.355603) | 1.744189 \/ 5.269862 (-3.525673) | 1.008515 \/ 4.565676 (-3.557161) | 0.067904 \/ 0.424275 (-0.356371) | 0.012243 \/ 0.007607 (0.004636) | 0.557635 \/ 0.226044 (0.331590) | 5.610383 \/ 2.268929 (3.341454) | 2.687326 \/ 55.444624 (-52.757298) | 2.405262 \/ 6.876477 (-4.471214) | 2.527300 \/ 2.142072 (0.385227) | 0.662282 \/ 4.805227 (-4.142945) | 0.136225 \/ 6.500664 (-6.364439) | 0.068136 \/ 0.075469 (-0.007334) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.310791 \/ 1.841788 (-0.530997) | 14.370381 \/ 8.074308 (6.296072) | 14.122675 \/ 10.191392 (3.931283) | 0.152302 \/ 0.680424 (-0.528122) | 0.016624 \/ 0.534201 (-0.517577) | 0.359395 \/ 0.579283 (-0.219888) | 0.392131 \/ 0.434364 (-0.042233) | 0.423796 \/ 0.540337 (-0.116542) | 0.511387 \/ 1.386936 (-0.875549) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#d6a61a1af1502677a6f2333896a6ffeede9ca21b \"CML watermark\")\n"],"created_at":1684863635000,"updated_at":1685620704000,"closed_at":1685620289000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5893","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5893","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5893.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5893.patch","merged_at":1685620289000},"body":"To be used to train models it allows to load an IterableDataset from the cached Arrow file. 
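A minimal sketch of the kind of usage this targets, assuming the public `datasets` API; `Dataset.to_iterable_dataset()` is the existing conversion from a cached, memory-mapped `Dataset` to an `IterableDataset` (whether this PR adds a different entry point is not shown in this excerpt):

```python
from datasets import load_dataset

# The first call prepares and caches the Arrow file; later calls reuse the cache.
ds = load_dataset("imdb", split="train")

# Iterate lazily over the cached Arrow data, e.g. inside a training loop;
# sharding lets a DataLoader with multiple workers split the stream.
iterable_ds = ds.to_iterable_dataset(num_shards=4)
for example in iterable_ds:
    break
```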
\r\nSee https:\/\/github.com\/huggingface\/datasets\/issues\/5481","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5893\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5893\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5892","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5892\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5892\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5892\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5892","id":1722503824,"node_id":"I_kwDODunzps5mq1KQ","number":5892,"title":"User access requests with manual review do not notify the dataset owner","user":{"login":"leondz","id":121934,"node_id":"MDQ6VXNlcjEyMTkzNA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/121934?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/leondz","html_url":"https:\/\/github.com\/leondz","followers_url":"https:\/\/api.github.com\/users\/leondz\/followers","following_url":"https:\/\/api.github.com\/users\/leondz\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/leondz\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/leondz\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/leondz\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/leondz\/orgs","repos_url":"https:\/\/api.github.com\/users\/leondz\/repos","events_url":"https:\/\/api.github.com\/users\/leondz\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/leondz\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["cc @SBrandeis"],"created_at":1684862866000,"updated_at":1684864489000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nWhen a user access requests are enabled, and new requests are set to Manual Review, the dataset owner should be notified of the pending requests. However, instead, currently nothing happens, and so the dataset request can go unanswered for quite some time until the owner happens to check that particular dataset's Settings pane.\n\n### Steps to reproduce the bug\n\n1. Enable a dataset's user access requests\r\n2. Set to Manual Review\r\n3. Ask another HF user to request access to the dataset\r\n4. 
Dataset owner is not notified\n\n### Expected behavior\n\nThe dataset owner should receive some kind of notification, perhaps in their HF site inbox, or by email, when a dataset access request is made and manual review is enabled.\n\n### Environment info\n\nn\/a","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5892\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5892\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5891","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5891\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5891\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5891\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5891","id":1722384135,"node_id":"PR_kwDODunzps5RKchn","number":5891,"title":"Make split slicing consistent with list slicing","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_5891). All of your documentation changes will be reflected on that endpoint.","
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006916 \/ 0.011353 (-0.004437) | 0.004749 \/ 0.011008 (-0.006259) | 0.096086 \/ 0.038508 (0.057578) | 0.035448 \/ 0.023109 (0.012338) | 0.299645 \/ 0.275898 (0.023747) | 0.331279 \/ 0.323480 (0.007799) | 0.006018 \/ 0.007986 (-0.001968) | 0.004210 \/ 0.004328 (-0.000118) | 0.072998 \/ 0.004250 (0.068747) | 0.050082 \/ 0.037052 (0.013030) | 0.297714 \/ 0.258489 (0.039225) | 0.365523 \/ 0.293841 (0.071682) | 0.028081 \/ 0.128546 (-0.100465) | 0.009072 \/ 0.075646 (-0.066574) | 0.327628 \/ 0.419271 (-0.091643) | 0.051165 \/ 0.043533 (0.007633) | 0.295091 \/ 0.255139 (0.039952) | 0.320052 \/ 0.283200 (0.036852) | 0.109841 \/ 0.141683 (-0.031842) | 1.467867 \/ 1.452155 (0.015712) | 1.572600 \/ 1.492716 (0.079884) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.281490 \/ 0.018006 (0.263484) | 0.499259 \/ 0.000490 (0.498770) | 0.000691 \/ 0.000200 (0.000491) | 0.000062 \/ 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.027548 \/ 0.037411 (-0.009863) | 0.106592 \/ 0.014526 (0.092066) | 0.118654 \/ 0.176557 (-0.057902) | 0.174313 \/ 0.737135 (-0.562822) | 0.124491 \/ 0.296338 (-0.171848) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.399674 \/ 0.215209 (0.184465) | 3.984092 \/ 2.077655 (1.906437) | 1.790935 \/ 1.504120 (0.286815) | 1.593612 \/ 1.541195 (0.052417) | 1.694595 \/ 1.468490 (0.226105) | 0.517588 \/ 4.584777 (-4.067189) | 
3.724353 \/ 3.745712 (-0.021359) | 3.244807 \/ 5.269862 (-2.025054) | 1.602929 \/ 4.565676 (-2.962748) | 0.065334 \/ 0.424275 (-0.358941) | 0.012259 \/ 0.007607 (0.004652) | 0.501355 \/ 0.226044 (0.275311) | 4.996546 \/ 2.268929 (2.727618) | 2.279333 \/ 55.444624 (-53.165291) | 1.940126 \/ 6.876477 (-4.936351) | 2.122945 \/ 2.142072 (-0.019128) | 0.626104 \/ 4.805227 (-4.179123) | 0.141278 \/ 6.500664 (-6.359386) | 0.064522 \/ 0.075469 (-0.010947) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.195351 \/ 1.841788 (-0.646436) | 15.258932 \/ 8.074308 (7.184624) | 14.627623 \/ 10.191392 (4.436231) | 0.266897 \/ 0.680424 (-0.413527) | 0.017557 \/ 0.534201 (-0.516644) | 0.392932 \/ 0.579283 (-0.186351) | 0.416409 \/ 0.434364 (-0.017955) | 0.469100 \/ 0.540337 (-0.071237) | 0.556247 \/ 1.386936 (-0.830689) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006880 \/ 0.011353 (-0.004473) | 0.004837 \/ 0.011008 (-0.006171) | 0.074518 \/ 0.038508 (0.036010) | 0.034204 \/ 0.023109 (0.011095) | 0.365100 \/ 0.275898 (0.089202) | 0.394976 \/ 0.323480 (0.071496) | 0.006364 \/ 0.007986 (-0.001621) | 0.004269 \/ 0.004328 (-0.000060) | 0.073531 \/ 0.004250 (0.069281) | 0.051334 \/ 0.037052 (0.014281) | 0.373904 \/ 0.258489 (0.115415) | 0.413662 \/ 0.293841 (0.119821) | 0.028779 \/ 0.128546 (-0.099767) | 0.009292 \/ 0.075646 (-0.066354) | 0.081574 \/ 0.419271 (-0.337698) | 0.046531 \/ 0.043533 (0.002998) | 0.368995 \/ 0.255139 (0.113856) | 0.376938 \/ 0.283200 (0.093739) | 0.112576 \/ 0.141683 (-0.029107) | 1.458880 \/ 1.452155 (0.006725) | 1.550918 \/ 1.492716 (0.058202) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.319521 \/ 0.018006 (0.301515) | 0.510146 \/ 0.000490 (0.509656) | 0.000438 \/ 0.000200 (0.000238) | 0.000059 \/ 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.033082 \/ 0.037411 (-0.004329) | 0.118009 \/ 0.014526 (0.103483) | 0.127108 \/ 0.176557 (-0.049448) | 0.176600 \/ 0.737135 (-0.560535) | 0.133790 \/ 0.296338 (-0.162549) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.437360 \/ 0.215209 (0.222151) | 4.367426 \/ 2.077655 (2.289771) | 2.193646 \/ 1.504120 (0.689526) | 2.025002 \/ 1.541195 (0.483808) | 2.142347 \/ 1.468490 (0.673856) | 0.525497 \/ 4.584777 (-4.059280) | 
3.751275 \/ 3.745712 (0.005563) | 1.912271 \/ 5.269862 (-3.357590) | 1.087286 \/ 4.565676 (-3.478390) | 0.066328 \/ 0.424275 (-0.357947) | 0.011904 \/ 0.007607 (0.004297) | 0.545870 \/ 0.226044 (0.319825) | 5.434481 \/ 2.268929 (3.165552) | 2.719745 \/ 55.444624 (-52.724880) | 2.445001 \/ 6.876477 (-4.431476) | 2.500205 \/ 2.142072 (0.358133) | 0.645735 \/ 4.805227 (-4.159492) | 0.144210 \/ 6.500664 (-6.356455) | 0.065688 \/ 0.075469 (-0.009781) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.273522 \/ 1.841788 (-0.568265) | 15.771778 \/ 8.074308 (7.697470) | 14.685261 \/ 10.191392 (4.493869) | 0.176523 \/ 0.680424 (-0.503900) | 0.017877 \/ 0.534201 (-0.516324) | 0.392687 \/ 0.579283 (-0.186596) | 0.449992 \/ 0.434364 (0.015628) | 0.462851 \/ 0.540337 (-0.077487) | 0.560178 \/ 1.386936 (-0.826758) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#0fa3ef6eba906ee1214e0596d15a78fc358909f4 \"CML watermark\")\n"],"created_at":1684857873000,"updated_at":1684858272000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":true,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5891","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5891","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5891.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5891.patch","merged_at":null},"body":"Fix #1774, fix #5875 \r\n\r\nTODO: a test","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5891\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5891\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5889","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5889\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5889\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5889\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5889","id":1722373618,"node_id":"I_kwDODunzps5mqVXy","number":5889,"title":"Token Alignment for input and output data over train and test 
batch\/dataset.","user":{"login":"akesh1235","id":125154243,"node_id":"U_kgDOB3Wzww","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/125154243?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/akesh1235","html_url":"https:\/\/github.com\/akesh1235","followers_url":"https:\/\/api.github.com\/users\/akesh1235\/followers","following_url":"https:\/\/api.github.com\/users\/akesh1235\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/akesh1235\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/akesh1235\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/akesh1235\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/akesh1235\/orgs","repos_url":"https:\/\/api.github.com\/users\/akesh1235\/repos","events_url":"https:\/\/api.github.com\/users\/akesh1235\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/akesh1235\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1684857535000,"updated_at":1684857535000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"`data`\r\n> DatasetDict({\r\n train: Dataset({\r\n features: ['input', 'output'],\r\n num_rows: 4500\r\n })\r\n test: Dataset({\r\n features: ['input', 'output'],\r\n num_rows: 500\r\n })\r\n})\r\n\r\n**# input (in-correct sentence)**\r\n`data['train'][0]['input']`\r\n**>>** 'We are meet sunday 10am12pmET in Crown Heights Brooklyn New York'\r\n**# output (correct sentence)**\r\n`data['train'][0]['output']`\r\n**>>** 'We meet Sundays 10am-12pmET in Crown Heights, Brooklyn, New York.'\r\n\r\n**I Want to align the output tokens with input**\r\n\r\n```\r\n`# tokenize both inputs and targets\r\ndef tokenize_fn(batch):\r\n # tokenize the input sequence first\r\n # this populates input_ids, attention_mask, etc.\r\n\r\n tokenized_inputs = tokenizer(\r\n batch['input']\r\n )\r\n \r\n labels_batch = tokenizer.tokenize(batch['output']) # original targets\r\n\r\n aligned_labels_batch = []\r\n for i, labels in enumerate(labels_batch):\r\n word_ids = tokenized_inputs[i].word_ids()\r\n aligned_labels_batch.append(align_targets(labels, word_ids)) # align_targets is another user defined function which is been called here\r\n\r\n # recall: the 'target' must be stored in key called 'labels'\r\n tokenized_inputs['labels'] = aligned_labels_batch\r\n\r\n return tokenized_inputs`\r\n```\r\n```\r\ndata.map(\r\n tokenize_fn,\r\n batched=True,\r\n remove_columns=data['train'].column_names,\r\n)\r\n```\r\n\r\nWhen this user defined function is mapped to every records of train and test batch am getting following error:\r\n\r\n**1.** **raise DatasetTransformationNotAllowedError(\r\n 3457 \"Using `.map` in batched mode on a dataset with attached indexes is allowed only if it doesn't create or remove existing examples. 
You can first run `.drop_index() to remove your index and then re-add it.\"**\r\n\r\n**2.** **TypeError: TextEncodeInput must be Union[TextInputSequence, Tuple[InputSequence, InputSequence]]**","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5889\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5889\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5887","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5887\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5887\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5887\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5887","id":1722166382,"node_id":"I_kwDODunzps5mpixu","number":5887,"title":"HuggingFace dataset example gives error","user":{"login":"donhuvy","id":1328316,"node_id":"MDQ6VXNlcjEzMjgzMTY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1328316?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/donhuvy","html_url":"https:\/\/github.com\/donhuvy","followers_url":"https:\/\/api.github.com\/users\/donhuvy\/followers","following_url":"https:\/\/api.github.com\/users\/donhuvy\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/donhuvy\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/donhuvy\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/donhuvy\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/donhuvy\/orgs","repos_url":"https:\/\/api.github.com\/users\/donhuvy\/repos","events_url":"https:\/\/api.github.com\/users\/donhuvy\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/donhuvy\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":{"login":"alvarobartt","id":36760800.0,"node_id":"MDQ6VXNlcjM2NzYwODAw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36760800?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/alvarobartt","html_url":"https:\/\/github.com\/alvarobartt","followers_url":"https:\/\/api.github.com\/users\/alvarobartt\/followers","following_url":"https:\/\/api.github.com\/users\/alvarobartt\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/alvarobartt\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/alvarobartt\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/alvarobartt\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/alvarobartt\/orgs","repos_url":"https:\/\/api.github.com\/users\/alvarobartt\/repos","events_url":"https:\/\/api.github.com\/users\/alvarobartt\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/alvarobartt\/received_events","type":"User","site_admin":false},"assignees":[{"login":"alvarobartt","id":36760800,"node_id":"MDQ6VXNlcjM2NzYwODAw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36760800?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/alvarobartt","html_url":"https:\/\/github.com\/alvarobartt","followers_url":"https:\/\/api.github.com\/users\/alvarobartt\/followers","f
ollowing_url":"https:\/\/api.github.com\/users\/alvarobartt\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/alvarobartt\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/alvarobartt\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/alvarobartt\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/alvarobartt\/orgs","repos_url":"https:\/\/api.github.com\/users\/alvarobartt\/repos","events_url":"https:\/\/api.github.com\/users\/alvarobartt\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/alvarobartt\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Nice catch @donhuvy, that's because some models don't need the `token_type_ids`, as in this case, as the example is using `distilbert-base-cased`, and according to the DistilBert documentation at https:\/\/huggingface.co\/transformers\/v3.0.2\/model_doc\/distilbert.html, `DistilBert doesn\u2019t have token_type_ids, you don\u2019t need to indicate which token belongs to which segment. Just separate your segments with the separation token tokenizer.sep_token (or [SEP])`. `token_type_ids` are neither required in some other well known models such as RoBERTa. \r\n\r\nHere the issue comes due to a mismatch between the tokenizer and the model, as the Colab is using a BERT tokenizer (`bert-base-cased`), while the model is a DistilBERT (`distilbert-base-cased`), so aligning the tokenizer and the model solves it!","#self-assign","@donhuvy I've created https:\/\/github.com\/huggingface\/datasets\/pull\/5902 to solve it! \ud83e\udd17"],"created_at":1684850945000,"updated_at":1685096874000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\r\n\r\n![image](https:\/\/github.com\/huggingface\/datasets\/assets\/1328316\/1f4f0086-3db9-4c79-906b-05a375357cce)\r\n\r\n\r\n![image](https:\/\/github.com\/huggingface\/datasets\/assets\/1328316\/733ebd3d-89b9-4ece-b80a-00ab5b0a4122)\r\n\r\n\r\n### Steps to reproduce the bug\r\n\r\nUse link as reference document written https:\/\/colab.research.google.com\/github\/huggingface\/datasets\/blob\/main\/notebooks\/Overview.ipynb#scrollTo=biqDH9vpvSVz\r\n\r\n```python\r\n# Now let's train our model\r\ndevice = 'cuda' if torch.cuda.is_available() else 'cpu'\r\n\r\nmodel.train().to(device)\r\nfor i, batch in enumerate(dataloader):\r\n batch.to(device)\r\n outputs = model(**batch)\r\n loss = outputs.loss\r\n loss.backward()\r\n optimizer.step()\r\n model.zero_grad()\r\n print(f'Step {i} - loss: {loss:.3}')\r\n if i > 5:\r\n break\r\n```\r\n\r\nError\r\n\r\n```python\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n[](https:\/\/localhost:8080\/#) in ()\r\n 5 for i, batch in enumerate(dataloader):\r\n 6 batch.to(device)\r\n----> 7 outputs = model(**batch)\r\n 8 loss = outputs.loss\r\n 9 loss.backward()\r\n\r\n[\/usr\/local\/lib\/python3.10\/dist-packages\/torch\/nn\/modules\/module.py](https:\/\/localhost:8080\/#) in _call_impl(self, *args, **kwargs)\r\n 1499 or _global_backward_pre_hooks or _global_backward_hooks\r\n 1500 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1501 return forward_call(*args, **kwargs)\r\n 1502 # Do not call functions when jit is used\r\n 1503 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\nTypeError: DistilBertForQuestionAnswering.forward() got an unexpected keyword argument 
'token_type_ids'\r\n```\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/assets\/1328316\/5d8b1d61-9337-4d59-8423-4f37f834c156\r\n\r\n\r\n\r\n### Expected behavior\r\n\r\nRuns successfully on Google Colab (free)\r\n\r\n### Environment info\r\n\r\nWindows 11 x64, Google Colab free (my Google Drive has only about 200 MB free, but I don't think that causes the problem)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5887\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5887\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5886","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5886\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5886\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5886\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5886","id":1721070225,"node_id":"I_kwDODunzps5mlXKR","number":5886,"title":"Use a work-stealing algorithm for parallel computing","user":{"login":"1014661165","id":46060451,"node_id":"MDQ6VXNlcjQ2MDYwNDUx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/46060451?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/1014661165","html_url":"https:\/\/github.com\/1014661165","followers_url":"https:\/\/api.github.com\/users\/1014661165\/followers","following_url":"https:\/\/api.github.com\/users\/1014661165\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/1014661165\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/1014661165\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/1014661165\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/1014661165\/orgs","repos_url":"https:\/\/api.github.com\/users\/1014661165\/repos","events_url":"https:\/\/api.github.com\/users\/1014661165\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/1014661165\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Alternatively we could set the number of shards to be a multiple of the number of processes (currently they're equal) - this way it will be less likely to end up with a shard that is significantly slower than all the other ones."],"created_at":1684811324000,"updated_at":1684942209000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Feature request\r\n\r\nWhen I used the Dataset.map API to process data concurrently, I found that it gets slower and slower as it gets closer to completion. 
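A minimal sketch of the scheduling difference being described, using plain `multiprocessing` (illustrative only, not how `datasets` implements `map`; the toy workload, chunk sizes, and worker count are assumptions):

```python
import time
from multiprocessing import Pool

def process(chunk):
    # Toy workload whose cost varies with the chunk's first element,
    # standing in for shards of uneven processing cost.
    time.sleep(0.001 * (chunk[0] % 5))
    return [x * 2 for x in chunk]

if __name__ == "__main__":
    data = list(range(10_000))
    num_workers = 4

    # Static sharding (one big shard per worker): the slowest shard
    # determines the total runtime.
    shards = [data[i::num_workers] for i in range(num_workers)]
    with Pool(num_workers) as pool:
        results = pool.map(process, shards)

    # Dynamic dispatch over many small chunks: idle workers keep pulling
    # work, which approximates work stealing and avoids a straggler shard.
    chunks = [data[i : i + 256] for i in range(0, len(data), 256)]
    with Pool(num_workers) as pool:
        results = list(pool.imap_unordered(process, chunks))
```

With static sharding the pool waits on the slowest shard; with many small chunks, `imap_unordered` keeps every worker busy until the queue drains, which is the effect work stealing aims for.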
Then I read the source code of arrow_dataset.py and found that it shards the dataset and uses a multiprocessing pool to execute each shard. This may cause the slowest task to drag out the entire program's execution time, especially when processing a huge dataset.\r\n\r\n### Motivation\r\n\r\nUsing a work-stealing algorithm instead of shard-based parallel computing to optimize performance. \r\n\r\n### Your contribution\r\n\r\nJust an idea.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5886\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5886\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5885","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5885\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5885\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5885\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5885","id":1720954440,"node_id":"PR_kwDODunzps5RFjTL","number":5885,"title":"Modify `is_remote_filesystem` to return True for FUSE-mounted paths","user":{"login":"maddiedawson","id":106995444,"node_id":"U_kgDOBmCe9A","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/106995444?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/maddiedawson","html_url":"https:\/\/github.com\/maddiedawson","followers_url":"https:\/\/api.github.com\/users\/maddiedawson\/followers","following_url":"https:\/\/api.github.com\/users\/maddiedawson\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/maddiedawson\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/maddiedawson\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/maddiedawson\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/maddiedawson\/orgs","repos_url":"https:\/\/api.github.com\/users\/maddiedawson\/repos","events_url":"https:\/\/api.github.com\/users\/maddiedawson\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/maddiedawson\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_5885). All of your documentation changes will be reflected on that endpoint.","@lhoestq would you or another maintainer be able to review please? :)","Why do you need to support FUSE-mounted paths?\r\n\r\n`datasets` uses data that lives on disk for fast lookups - FUSE-mounted disks would lead to poor performance and I wouldn't recommend using them.","FUSE is commonly used to mount remote file systems (e.g. S3, DBFS) as a local directory. Since it's slower than using an actual local device, it's better to treat it as remote to reduce latency.","I think people would be confused if the dataset behavior differed depending on the disk type.\r\n\r\nIf they want to use a remote bucket they should use the remote URI instead, e.g. `s3:\/\/...`. 
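A sketch of the remote-URI pattern being suggested (the bucket path and credentials are hypothetical, and routing `storage_options` through `load_dataset` this way assumes the fsspec integration tracked in #5281):

```python
from datasets import load_dataset

# Hypothetical S3 credentials, forwarded to s3fs via fsspec.
storage_options = {"key": "<aws_access_key_id>", "secret": "<aws_secret_access_key>"}

ds = load_dataset(
    "parquet",
    data_files="s3://my-bucket/train/*.parquet",  # hypothetical remote URI
    storage_options=storage_options,
)
```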
Advancements on this are tracked at #5281 "],"created_at":1684803894000,"updated_at":1685004648000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5885","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5885","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5885.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5885.patch","merged_at":null},"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5885\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5885\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5888","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5888\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5888\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5888\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5888","id":1722290363,"node_id":"I_kwDODunzps5mqBC7","number":5888,"title":"A way to upload and visualize .mp4 files (millions of them) as part of a dataset","user":{"login":"AntreasAntoniou","id":10792502,"node_id":"MDQ6VXNlcjEwNzkyNTAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10792502?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/AntreasAntoniou","html_url":"https:\/\/github.com\/AntreasAntoniou","followers_url":"https:\/\/api.github.com\/users\/AntreasAntoniou\/followers","following_url":"https:\/\/api.github.com\/users\/AntreasAntoniou\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/AntreasAntoniou\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/AntreasAntoniou\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/AntreasAntoniou\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/AntreasAntoniou\/orgs","repos_url":"https:\/\/api.github.com\/users\/AntreasAntoniou\/repos","events_url":"https:\/\/api.github.com\/users\/AntreasAntoniou\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/AntreasAntoniou\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! \r\n\r\nYou want to use `push_to_hub` (creates Parquet files) instead of `save_to_disk` (creates Arrow files) when creating a Hub dataset. Parquet is designed for long-term storage and takes less space than the Arrow format, and, most importantly, `load_dataset` can parse it, which should fix the viewer. 
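A minimal sketch of that flow (the file list and repo id are hypothetical; raw MP4 bytes are embedded as a `binary` column and `push_to_hub` writes Parquet):

```python
from datasets import Dataset, Features, Value

video_paths = ["clips/0001.mp4", "clips/0002.mp4"]  # hypothetical local files

def gen():
    for path in video_paths:
        with open(path, "rb") as f:
            # Embed the raw MP4 bytes alongside the original path.
            yield {"video": f.read(), "path": path}

features = Features({"video": Value("binary"), "path": Value("string")})
ds = Dataset.from_generator(gen, features=features)
ds.push_to_hub("username/my-video-dataset")  # hypothetical repo id
```

The `writer_batch_size` caveat in the rest of this comment applies to the `from_generator` step here as well.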
\r\n\r\nRegarding the dataset generation, `Dataset.from_generator` with the video data represented as `datasets.Value(\"binary\")` followed by `push_to_hub` should work (if the `push_to_hub` step times out, restart it to resume uploading)\r\n\r\nPS: Once the dataset is uploaded, to make working with the dataset easier, it's a good idea to add a [transform](https:\/\/huggingface.co\/docs\/datasets\/main\/en\/process#format-transform) to the README that shows how to decode the binary video data into something a model can understand. Also, if you get an `ArrowInvalid` error (can happen when working with large binary data) in `Dataset.from_generator`, reduce the value of `writer_batch_size` (the default is 1000) to fix it.","One issue here is that Dataset.from_generator can work well for the non 'infinite sampling' version of the dataset. The training set for example is often sampled dynamically given the video files that I have uploaded. I worry that storing the video data as binary means that I'll end up duplicating a lot of the data. Furthermore, storing video data as anything but .mp4 would quickly grow the dataset size from 1.9 TB to 1 PB. ","> storing video data as anything but .mp4\r\n\r\nWhat I mean by storing as `datasets.Value(\"binary\")` is embedding raw MP4 bytes in the Arrow table, but, indeed, this would waste a lot of space if there are duplicates.\r\n\r\nSo I see two options:\r\n* if one video is not mapped to too many samples, you can embed the video bytes and do \"group by\" on the rest of the columns (this would turn them into lists) to avoid duplicating them (then, it should be easy to define a `map` in the README that samples the video data to \"unpack\" the samples)\r\n* you can create a dataset script that downloads the video files and embeds their file paths into the Arrow file\r\n\r\nAlso, I misread MP4 as MP3. We need to add a `Video` feature to the `datasets` lib to support MP4 files in the viewer (a bit trickier to implement than the `Image` feature due to the Arrow limitations).","I'm transferring this issue to the `datasets` repo, as it's not related to `huggingface_hub`","@mariosasko Right. If I want my dataset to be streamable, what are the necessary requirements to achieve that within the context of .mp4 binaries like we have here? I guess your second point here would not support that, right?","The streaming would work, but the video paths would require using `fsspec.open` to get the content.","Are there any plans to make video playable on the hub?","Not yet. The (open source) tooling for video is not great in terms of ease of use\/performance, so we are discussing internally the best way to support it (one option is creating a new library for video IO, but this will require a lot of work)","True. I spent a good 4 months just mixing and matching existing solutions so I could get performance that would not make my model training IO-bound. \r\n\r\nThis is what I ended up with, in case it's useful\r\n\r\nhttps:\/\/github.com\/AntreasAntoniou\/TALI\/blob\/045cf9e5aa75b1bf2c6d5351fb910fa10e3ff32c\/tali\/data\/data_plus.py#L85"],"created_at":1684778726000,"updated_at":1687491436000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"**Is your feature request related to a problem? Please describe.**\r\nI recently chose to use huggingface hub as the home for a large multimodal dataset I've been building. https:\/\/huggingface.co\/datasets\/Antreas\/TALI\r\n\r\nIt combines images, text, audio and video. 
Now, I could very easily upload a dataset made via datasets.Dataset.from_generator, as long as it did not include video files. I found that including .mp4 files in the entries would not auto-upload those files. \r\n\r\nHence I tried to upload them myself. I quickly found out that uploading many small files is a very bad way to use git lfs, and that it would take ages, so, I resorted to using 7z to pack them all up. But then I had a new problem.\r\n\r\nMy dataset had a size of 1.9TB. Trying to upload such a large file with the default huggingface_hub API always resulted in time outs etc. So I decided to split the large files into chunks of 5GB each and reupload. \r\n\r\nSo, eventually it all worked out. But now the dataset can't be properly and natively used by the datasets API because of all the needed preprocessing -- and furthermore the hub is unable to visualize things. \r\n\r\n**Describe the solution you'd like**\r\nA native way to upload large datasets that include .mp4 or other video types.\r\n\r\n**Describe alternatives you've considered**\r\nAlready explained earlier\r\n\r\n**Additional context**\r\nhttps:\/\/huggingface.co\/datasets\/Antreas\/TALI\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5888\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5888\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5884","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5884\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5884\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5884\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5884","id":1719548172,"node_id":"I_kwDODunzps5mfjkM","number":5884,"title":"`Dataset.to_tf_dataset` fails when strings cannot be encoded as 
`np.bytes_`","user":{"login":"alvarobartt","id":36760800,"node_id":"MDQ6VXNlcjM2NzYwODAw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36760800?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/alvarobartt","html_url":"https:\/\/github.com\/alvarobartt","followers_url":"https:\/\/api.github.com\/users\/alvarobartt\/followers","following_url":"https:\/\/api.github.com\/users\/alvarobartt\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/alvarobartt\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/alvarobartt\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/alvarobartt\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/alvarobartt\/orgs","repos_url":"https:\/\/api.github.com\/users\/alvarobartt\/repos","events_url":"https:\/\/api.github.com\/users\/alvarobartt\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/alvarobartt\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"alvarobartt","id":36760800.0,"node_id":"MDQ6VXNlcjM2NzYwODAw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36760800?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/alvarobartt","html_url":"https:\/\/github.com\/alvarobartt","followers_url":"https:\/\/api.github.com\/users\/alvarobartt\/followers","following_url":"https:\/\/api.github.com\/users\/alvarobartt\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/alvarobartt\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/alvarobartt\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/alvarobartt\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/alvarobartt\/orgs","repos_url":"https:\/\/api.github.com\/users\/alvarobartt\/repos","events_url":"https:\/\/api.github.com\/users\/alvarobartt\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/alvarobartt\/received_events","type":"User","site_admin":false},"assignees":[{"login":"alvarobartt","id":36760800,"node_id":"MDQ6VXNlcjM2NzYwODAw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36760800?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/alvarobartt","html_url":"https:\/\/github.com\/alvarobartt","followers_url":"https:\/\/api.github.com\/users\/alvarobartt\/followers","following_url":"https:\/\/api.github.com\/users\/alvarobartt\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/alvarobartt\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/alvarobartt\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/alvarobartt\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/alvarobartt\/orgs","repos_url":"https:\/\/api.github.com\/users\/alvarobartt\/repos","events_url":"https:\/\/api.github.com\/users\/alvarobartt\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/alvarobartt\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["May eventually be solved in #5883 ","#self-assign"],"created_at":1684756986000,"updated_at":1686326696000,"closed_at":1686326695000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nWhen loading any dataset that contains a column with strings that are not ASCII-compatible, looping over those records raises the following exception e.g. 
for `\u00e9` character `UnicodeEncodeError: 'ascii' codec can't encode character '\\xe9' in position 0: ordinal not in range(128)`.\n\n### Steps to reproduce the bug\n\nRunning the following script will eventually fail, when reaching to the batch that contains non-ASCII compatible strings.\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(\"imdb\", split=\"train\")\r\ntfds = ds.to_tf_dataset(batch_size=16)\r\n\r\nfor batch in tfds:\r\n print(batch)\r\n>>> UnicodeEncodeError: 'ascii' codec can't encode character '\\xe9' in position 0: ordinal not in range(128)\r\n```\n\n### Expected behavior\n\nThe following script to run properly, making sure that the strings are either `numpy.unicode_` or `numpy.string` instead of `numpy.bytes_` since some characters are not ASCII compatible and that would lead to an issue when applying the `map`.\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(\"imdb\", split=\"train\")\r\ntfds = ds.to_tf_dataset(batch_size=16)\r\n\r\nfor batch in tfds:\r\n print(batch)\r\n```\n\n### Environment info\n\n- `datasets` version: 2.12.1.dev0\r\n- Platform: macOS-13.3.1-arm64-arm-64bit\r\n- Python version: 3.10.11\r\n- Huggingface_hub version: 0.14.1\r\n- PyArrow version: 11.0.0\r\n- Pandas version: 1.5.3","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5884\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5884\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5883","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5883\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5883\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5883\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5883","id":1719527597,"node_id":"PR_kwDODunzps5RAkYi","number":5883,"title":"Fix string-encoding, make `batch_size` optional, and minor improvements in `Dataset.to_tf_dataset`","user":{"login":"alvarobartt","id":36760800,"node_id":"MDQ6VXNlcjM2NzYwODAw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36760800?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/alvarobartt","html_url":"https:\/\/github.com\/alvarobartt","followers_url":"https:\/\/api.github.com\/users\/alvarobartt\/followers","following_url":"https:\/\/api.github.com\/users\/alvarobartt\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/alvarobartt\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/alvarobartt\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/alvarobartt\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/alvarobartt\/orgs","repos_url":"https:\/\/api.github.com\/users\/alvarobartt\/repos","events_url":"https:\/\/api.github.com\/users\/alvarobartt\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/alvarobartt\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","To 
showcase the current issue, here's a Colab Gist that shows that the `imdb` dataset cannot be read\/iterated, since one or more samples contain a non-ASCII character that is being converted to `numpy.bytes_`, and so it fails.\r\n\r\nColab Gist at https:\/\/gist.github.com\/alvarobartt\/1746959d1abb9a33e0c593f3bd82a2fb\r\n\r\nAlso, here's a quick sample of what's happening:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(\"imdb\", split=\"train\")\r\ntfds = ds.to_tf_dataset(batch_size=16)\r\nfor batch in tfds:\r\n print(batch)\r\n>>> UnicodeEncodeError: 'ascii' codec can't encode character '\\xe9' in position 0: ordinal not in range(128)\r\n```\r\n\r\nA more detailed version of it:\r\n\r\n```python\r\nfrom datasets import Dataset\r\n\r\nds = Dataset.from_dict(\r\n {\r\n \"a\": [1],\r\n \"b\": [\"\u00e9\"],\r\n }\r\n)\r\ntfds = ds.to_tf_dataset(batch_size=1)\r\nfor batch in tfds:\r\n print(batch)\r\n>>> UnicodeEncodeError: 'ascii' codec can't encode character '\\xe9' in position 0: ordinal not in range(128)\r\n```\r\n\r\nThe original issue comes from https:\/\/github.com\/tensorflow\/tensorflow\/blob\/388d952114e59a1aeda440ed4737b29f8b7c6e8a\/tensorflow\/python\/ops\/script_ops.py#LL234C4-L234C4, which could easily be solved by replacing that line with `return result.astype(np.unicode_)` but they are mentioning that it may lead to issues.\r\n\r\nEven the following fails in `numpy`:\r\n\r\n```python\r\nimport numpy as np\r\n\r\nx = np.array([\"\u00e9\"]).astype(np.bytes_)\r\n```","cc. @lhoestq :hugs:","cc @Rocketknight1 ","> Nice ! Could you add some tests to make sure that batch_size=None works as expected ?\r\n\r\nSure, I'll add the tests for everything, including the string-encoding issue to make sure it's solved!","Thanks for the review @lhoestq and @Rocketknight1! I do understand that processing it in batches is always more efficient than processing it one-by-one, it was just to make `batch_size` optional. What we can do is default it to a certain batch size e.g. 16 as before, and that's it, but I think it can still remain optional.","@Rocketknight1 then I'll add the integration tests for the optional `batch_size` as well as for the encoding of non-ASCII compatible characters \ud83d\ude04 Do we set the default `batch_size` to 16 instead of `None`?","@alvarobartt I think 16 is a reasonable default, yep!","I think default should be None, not 16.\r\nUsers won't expect to have it batched by default.","Then I'll leave it as is, and add the unit\/integration tests, thanks @Rocketknight1 and @lhoestq ","Hi @Rocketknight1 @lhoestq! So the string-encoding issue is already solved, but I've got one doubt about the `batch_size` being optional in the multiprocessing approach: in that case I assume the `batch_size` should be mandatory. For the moment I'm assuming it is, but let me know if you want me to add a check to disallow `batch_size=None` when `num_workers>1`. Thanks!","> To showcase the current issue, here's a Colab Gist that shows that the `imdb` dataset cannot be read\/iterated, since one or more samples contain a non-ASCII character that is being converted to `numpy.bytes_`, and so it fails.\r\n> \r\n> Colab Gist at https:\/\/gist.github.com\/alvarobartt\/1746959d1abb9a33e0c593f3bd82a2fb\r\n\r\nI've used the Colab shared above for testing purposes, and it works fine, plus the unit\/integration tests are passing. 
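For reference, a minimal sketch of the behaviour after the fix (TensorFlow must be installed; the unbatched mode reflects the optional `batch_size` introduced in this PR):

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2], "b": ["é", "ü"]})

# Batched: yields dicts of tensors with a leading batch dimension;
# non-ASCII strings no longer raise UnicodeEncodeError.
for batch in ds.to_tf_dataset(batch_size=2):
    print(batch["b"])

# Unbatched (batch_size=None): yields one example per element.
for example in ds.to_tf_dataset(batch_size=None):
    print(example["b"])
```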
I've also trained a `KerasNLP` model with incoming data from \ud83e\udd17`datasets` with no issue at all!","> in the multiprocessing approach, since in that case I assume the batch_size should be mandatory,\r\n\r\nNo I think they're quite orthogonal, no need to have it mandatory","> No I think they're quite orthogonal, no need to have it mandatory\r\n\r\nBut it will break if `batch_size=None` as the multiprocessing approach will aim to prepare batches and distribute those to every worker, and assuming `batch_size=1` when `batch_size=None` I guess is not a good assumption, right?","Ah I see. Multiprocessing should support batch_size=None indeed. If you have ideas you can do it in this PR, or raise a NotImplementedError and we can see later","Sure @lhoestq, I can add a `NotImplementedError` for the moment, and prepare the next PR straight away to tackle the multiprocessing approach with `batch_size=None`, but I'm not sure if that may eventually collide with @Rocketknight1's PR at https:\/\/github.com\/huggingface\/datasets\/pull\/5863","Yes, let me merge the PR at #5863 after this one, and then we can open another to improve the behaviour with multiprocessing and `batch_size=None`!","Sure @Rocketknight1 makes complete sense to me! Do you want me to add the `raise NotImplementedError` and then we merge this PR? Or do you prefer to directly merge the current one?","`raise NotImplementedError` for now with an error telling the user that multiprocessing needs them to specify a batch size, I think!","Since you recently approved @Rocketknight1, are we ready to merge? Thanks \ud83e\udd17","Ah actually it looks like `minimal_tf_collate_fn` doesn't support batch_size=None","Hi @lhoestq so I didn't include the call to `collate_fn`, as we won't need to collate the incoming data e.g. \"str\" should remain a \"str\" not a [\"str\"], and the `minimal_collate_fn` was indeed putting everything into a list, so the output was not un-batched, but batched with size 1","What if the user passes a collate_fn? The torch DataLoader still applies it if batch_size=None for example.\r\n\r\nDoes my last change look OK to you? If so I think we can merge","> What if the user passes a collate_fn? The torch DataLoader still applies it if batch_size=None for example.\r\n> \r\n> Does my last change look OK to you? If so I think we can merge\r\n\r\nI think we're good, since it won't batch it under the scenario of `str` being provided instead of `List[str]`, and the unit\/integration tests are passing, so I'm OK to merge. Maybe we can double check with Matt? cc @Rocketknight1 ","Yes, and sorry for the delay! I'm happy to merge.","
[Automated CML benchmark comment: new\/old timing tables for PyArrow==8.0.0 and PyArrow==latest.]","Will you eventually need help with your PR @Rocketknight1? I'll be happy to help if needed \ud83d\ude04 ","
[Automated CML benchmark comment: updated new\/old timing tables for PyArrow==8.0.0 and PyArrow==latest.]","@alvarobartt Yes, I'll ping you for a review once it's ready!"],"created_at":1684756267000,"updated_at":1686222543000,"closed_at":1686070155000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5883","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5883","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5883.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5883.patch","merged_at":1686070155000},"body":"## What's in this PR?\r\n\r\nThis PR addresses some minor fixes and general improvements in the `to_tf_dataset` method of `datasets.Dataset`, to convert a \ud83e\udd17HuggingFace Dataset as a TensorFlow Dataset.\r\n\r\nThe main bug solved in this PR comes with the string-encoding, since for safety purposes the internal conversion of `numpy.arrays` when `dtype` is unicode\/string, is to convert it into `numpy.bytes`, more information in the docstring of https:\/\/github.com\/tensorflow\/tensorflow\/blob\/388d952114e59a1aeda440ed4737b29f8b7c6e8a\/tensorflow\/python\/ops\/script_ops.py#L210. That's triggered when using `tensorflow.numpy_function` as it's applying another type cast besides the one that `datasets` does, so the casting is applied at least twice per entry\/batch. 
So this means that the definition of the `numpy.unicode_` dtype when the data in the batch is a string, is ignored, and replaced by `numpy.bytes_`.\r\n\r\nBesides that, some other minor things have been fixed:\r\n\r\n* Made `batch_size` an optional parameter in `to_tf_dataset`\r\n* Map the `tensorflow` output dtypes just once, and not in every `tf.function` call during `map`\r\n* Keep `numpy` formatting in the `datasets.Dataset` if already formatted like it, no need to format it again as `numpy`\r\n* Docstring indentation in `dataset_to_tf` and `multiprocess_dataset_to_tf`\r\n\r\n## What's missing in this PR?\r\n\r\nI can include some integration tests if needed, to validate that `batch_size` is optional, and that the tensors in the TF-Dataset can be looped over with no issues as before.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5883\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5883\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5881","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5881\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5881\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5881\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5881","id":1719402643,"node_id":"I_kwDODunzps5mfACT","number":5881,"title":"Split dataset by node: index error when sharding iterable dataset","user":{"login":"sanchit-gandhi","id":93869735,"node_id":"U_kgDOBZhWpw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/93869735?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sanchit-gandhi","html_url":"https:\/\/github.com\/sanchit-gandhi","followers_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/followers","following_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/orgs","repos_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/repos","events_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["cc @lhoestq in case you have any ideas here! 
Might need a multi-host set-up to debug (can give you access to a JAX one if you need)"],"created_at":1684751773000,"updated_at":1684830734000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\r\n\r\nContext: we're splitting an iterable dataset by node and then passing it to a torch data loader with multiple workers\r\n\r\nWhen we iterate over it for 5 steps, we don't get an error\r\n\r\nWhen we instead iterate over it for 8 steps, we get an `IndexError` when fetching the data if we have too many workers\r\n\r\n### Steps to reproduce the bug\r\n\r\nHere, we have 2 JAX processes (`jax.process_count() = 2`) which we split the dataset over. The dataset loading script can be found here: https:\/\/huggingface.co\/datasets\/distil-whisper\/librispeech_asr\/blob\/c6a1e805cbfeed5057400ac5937327d7e30281b8\/librispeech_asr.py#L310\r\n\r\n
\r\n\r\n Code to reproduce <\/summary>\r\n\r\n```python\r\nfrom datasets import load_dataset\r\nimport jax\r\nfrom datasets.distributed import split_dataset_by_node\r\nfrom torch.utils.data import DataLoader\r\nfrom tqdm import tqdm\r\n\r\n# load an example dataset (https:\/\/huggingface.co\/datasets\/distil-whisper\/librispeech_asr)\r\ndataset = load_dataset(\"distil-whisper\/librispeech_asr\", \"all\", split=\"train.clean.100\", streaming=True)\r\n# just keep the text column -> no need to define a collator\r\ndataset_text = dataset.remove_columns(set(dataset.features.keys()) - {\"text\"})\r\n\r\n# define some constants\r\nbatch_size = 256\r\nnum_examples = 5 # works for 5 examples, doesn't for 8\r\nnum_workers = dataset_text.n_shards\r\n\r\n# try with multiple workers\r\ndataloader = DataLoader(dataset_text, batch_size=batch_size, num_workers=num_workers, drop_last=True)\r\n\r\nfor i, batch in tqdm(enumerate(dataloader), total=num_examples, desc=\"Multiple workers\"):\r\n if i == num_examples:\r\n break\r\n\r\n# try splitting by node (we can't do this with `dataset_text` since `split_dataset_by_node` expects the Audio column for an ASR dataset)\r\ndataset = split_dataset_by_node(dataset, rank=jax.process_index(), world_size=jax.process_count())\r\n# remove the text column again\r\ndataset_text = dataset.remove_columns(set(dataset.features.keys()) - {\"text\"})\r\ndataloader = DataLoader(dataset_text, batch_size=16, num_workers=num_workers \/\/ 2, drop_last=True)\r\n\r\nfor i, batch in tqdm(enumerate(dataloader), total=num_examples, desc=\"Split by node\"):\r\n if i == num_examples:\r\n break\r\n\r\n# too many workers\r\ndataloader = DataLoader(dataset_text, batch_size=256, num_workers=num_workers, drop_last=True)\r\nfor i, batch in tqdm(enumerate(dataloader), total=num_examples, desc=\"Too many workers\"):\r\n if i == num_examples:\r\n break\r\n```\r\n\r\n<\/details>\r\n\r\n
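As a side note, the "too many workers" warning visible in the logs below suggests a workaround that may avoid the crash: cap the DataLoader worker count at the per-node shard count (a sketch reusing the variable names from the repro above):

```python
# dataset.n_shards is the number of shards this node can parallelize over;
# workers beyond that receive no shards, and here they hit the IndexError.
safe_num_workers = min(num_workers, dataset_text.n_shards)
dataloader = DataLoader(
    dataset_text, batch_size=256, num_workers=safe_num_workers, drop_last=True
)
```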
\r\n\r\n With 5 examples: <\/summary>\r\n\r\n```\r\nMultiple workers: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 5\/5 [00:16<00:00, 3.33s\/it]\r\nAssigning 7 shards (or data sources) of the dataset to each node. \r\nSplit by node: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 5\/5 [00:13<00:00, 2.76s\/it]\r\nAssigning 7 shards (or data sources) of the dataset to each node. \r\nToo many dataloader workers: 14 (max is dataset.n_shards=7). Stopping 7 dataloader workers. \r\nTo parallelize data loading, we give each process some shards (or data sources) to process. Therefore it's unnecessary t\r\no have a number of workers greater than dataset.n_shards=7. To enable more parallelism, please split the dataset in more\r\n files than 7. \r\nToo many workers: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 5\/5 [00:15<00:00, 3.03s\/it]\r\n```\r\n\r\n<\/details>\r\n\r\n
\r\n\r\n With 7 examples: <\/summary>\r\n\r\n```\r\nMultiple workers: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 8\/8 [00:13<00:00, 1.71s\/it]\r\nAssigning 7 shards (or data sources) of the dataset to each node.\r\nSplit by node: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 8\/8 [00:11<00:00, 1.38s\/it]\r\nAssigning 7 shards (or data sources) of the dataset to each node.\r\nToo many dataloader workers: 14 (max is dataset.n_shards=7). Stopping 7 dataloader workers.\r\nTo parallelize data loading, we give each process some shards (or data sources) to process. Therefore it's unnecessary to have a number of workers greater than dataset.n_shards=7. To enable more parallelism, please split the dataset in more files than 7.\r\nToo many workers: 88%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258b | 7\/8 [00:13<00:01, 1.89s\/it]\r\nTraceback (most recent call last):\r\n File \"distil-whisper\/test_librispeech.py\", line 36, in \r\n for i, batch in tqdm(enumerate(dataloader), total=num_examples, desc=\"Too many workers\"):\r\n File \"\/home\/sanchitgandhi\/hf\/lib\/python3.8\/site-packages\/tqdm\/std.py\", line 1178, in __iter__\r\n for obj in iterable:\r\n File \"\/home\/sanchitgandhi\/hf\/lib\/python3.8\/site-packages\/torch\/utils\/data\/dataloader.py\", line 633, in __next__\r\n data = self._next_data()\r\n File \"\/home\/sanchitgandhi\/hf\/lib\/python3.8\/site-packages\/torch\/utils\/data\/dataloader.py\", line 1325, in _next_data\r\n return self._process_data(data)\r\n File \"\/home\/sanchitgandhi\/hf\/lib\/python3.8\/site-packages\/torch\/utils\/data\/dataloader.py\", line 1371, in _process_data\r\n data.reraise()\r\n File \"\/home\/sanchitgandhi\/hf\/lib\/python3.8\/site-packages\/torch\/_utils.py\", line 644, in reraise\r\n raise exception\r\nIndexError: Caught IndexError in DataLoader worker process 7.\r\nOriginal Traceback (most recent call last):\r\n File \"\/home\/sanchitgandhi\/hf\/lib\/python3.8\/site-packages\/torch\/utils\/data\/_utils\/worker.py\", line 308, in _worker_loop\r\n data = fetcher.fetch(index)\r\n File \"\/home\/sanchitgandhi\/hf\/lib\/python3.8\/site-packages\/torch\/utils\/data\/_utils\/fetch.py\", line 32, in fetch\r\n data.append(next(self.dataset_iter))\r\n File \"\/home\/sanchitgandhi\/datasets\/src\/datasets\/iterable_dataset.py\", line 986, in __iter__\r\n yield from self._iter_pytorch(ex_iterable)\r\n File \"\/home\/sanchitgandhi\/datasets\/src\/datasets\/iterable_dataset.py\", line 920, in _iter_pytorch\r\n for key, example 
in ex_iterable.shard_data_sources(worker_info.id, worker_info.num_workers):\r\n File \"\/home\/sanchitgandhi\/datasets\/src\/datasets\/iterable_dataset.py\", line 540, in shard_data_sources\r\n self.ex_iterable.shard_data_sources(worker_id, num_workers),\r\n File \"\/home\/sanchitgandhi\/datasets\/src\/datasets\/iterable_dataset.py\", line 796, in shard_data_sources\r\n self.ex_iterable.shard_data_sources(worker_id, num_workers),\r\n File \"\/home\/sanchitgandhi\/datasets\/src\/datasets\/iterable_dataset.py\", line 126, in shard_data_sources\r\n requested_gen_kwargs = _merge_gen_kwargs([gen_kwargs_list[i] for i in shard_indices])\r\n File \"\/home\/sanchitgandhi\/datasets\/src\/datasets\/utils\/sharding.py\", line 76, in _merge_gen_kwargs\r\n for key in gen_kwargs_list[0]\r\nIndexError: list index out of range\r\n```\r\n\r\n<\/details>\r\n\r\n### Expected behavior\r\n\r\nShould pass for both 5 and 7 examples\r\n\r\n### Environment info\r\n\r\n- `datasets` version: 2.12.1.dev0\r\n- Platform: Linux-5.13.0-1023-gcp-x86_64-with-glibc2.29\r\n- Python version: 3.8.10\r\n- Huggingface_hub version: 0.14.1\r\n- PyArrow version: 12.0.0\r\n- Pandas version: 2.0.1","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5881\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5881\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5880","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5880\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5880\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5880\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5880","id":1719090101,"node_id":"I_kwDODunzps5mdzu1","number":5880,"title":"load_dataset from s3 file system through streaming can't iterate data","user":{"login":"janineguo","id":59083384,"node_id":"MDQ6VXNlcjU5MDgzMzg0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59083384?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/janineguo","html_url":"https:\/\/github.com\/janineguo","followers_url":"https:\/\/api.github.com\/users\/janineguo\/followers","following_url":"https:\/\/api.github.com\/users\/janineguo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/janineguo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/janineguo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/janineguo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/janineguo\/orgs","repos_url":"https:\/\/api.github.com\/users\/janineguo\/repos","events_url":"https:\/\/api.github.com\/users\/janineguo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/janineguo\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This sounds related to #5281.\r\n\r\nCan you try passing `storage_options=s3_client.storage_options` instead of passing it to `use_auth_token=`?","I tried `storage_options` before, but it doesn't work. I checked our source code and found that we don't even pass
this parameter to the following process. If I use `storage_options` instead of `use_auth_token`, then I also need to change another place in the code: the last line of `streaming_download_manager.py`. Our code only passes the `use_auth_token` to the following handler, but does nothing with the `storage_options`\r\n\"image\"\r\n","Cloud storage support is still experimental indeed and you can expect some bugs.\r\n\r\nI think we need to pass the storage options anywhere use_auth_token is passed in, indeed. Let me know if you'd be interested in contributing a fix!","Oh, that's great, I would really like to fix it, because datasets is really useful and most of our projects need to use it, but we can't store our data on the internet due to security reasons. Fixing it would not only make our own work more efficient but also benefit others who use it."],"created_at":1684741227000,"updated_at":1685105528000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nI have a JSON file in my s3 file system (minio). I can use load_dataset to get the file link, but I can't iterate it\r\n\"image\"\r\n\"image\"\r\n\r\nWe can change 4 lines to fix this bug; you can check whether it is ok for us.\r\n\"image\"\n\n### Steps to reproduce the bug\n\n1. store a file in your s3 file system\r\n2. use load_dataset to read it through streaming\r\n3. iterate it\n\n### Expected behavior\n\ncan iterate it successfully\n\n### Environment info\n\n- `datasets` version: 2.12.0\r\n- Platform: macOS-10.16-x86_64-i386-64bit\r\n- Python version: 3.8.16\r\n- Huggingface_hub version: 0.14.1\r\n- PyArrow version: 12.0.0\r\n- Pandas version: 2.0.1\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5880\/reactions","total_count":1,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":1},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5880\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5878","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5878\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5878\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5878\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5878","id":1718203843,"node_id":"I_kwDODunzps5mabXD","number":5878,"title":"Prefetching for 
IterableDataset","user":{"login":"vyeevani","id":30946190,"node_id":"MDQ6VXNlcjMwOTQ2MTkw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/30946190?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vyeevani","html_url":"https:\/\/github.com\/vyeevani","followers_url":"https:\/\/api.github.com\/users\/vyeevani\/followers","following_url":"https:\/\/api.github.com\/users\/vyeevani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vyeevani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vyeevani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vyeevani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vyeevani\/orgs","repos_url":"https:\/\/api.github.com\/users\/vyeevani\/repos","events_url":"https:\/\/api.github.com\/users\/vyeevani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vyeevani\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Very cool! Do you have a link to the code that you're using to eagerly fetch the data? Would also be interested in hacking around something here for pre-fetching iterable datasets","I ended up just switching back to the pytorch dataloader and using it's multiprocessing functionality to handle this :(. I'm just not that familiar with python multiprocessing to get something to work in jupyter (kept having weird behaviors happening with zombies living after the cell finished).","Ultimately settled on using webdataset to circumvent huggingface datasets entirely. Would definitely switch back if: https:\/\/github.com\/huggingface\/datasets\/issues\/5337 was resolved.","Hi! You can combine `datasets` with `torchdata` to prefetch `IterableDataset`'s samples:\r\n```python\r\nfrom datasets import load_dataset\r\nfrom torchdata.datapipes.iter import IterableWrapper, HuggingFaceHubReader\r\nfrom torch.utils.data import DataLoader\r\n\r\nds = load_dataset(\"sst\", split=\"train\", streaming=True)\r\n# processing...\r\ndp = IterableWrapper(ds)\r\ndp = dp.prefetch(100)\r\ndl = DataLoader(dp, batch_size=8)\r\n\r\ni = iter(dl)\r\nnext(i)\r\n```","Hey @mariosasko! Thanks for the tip here - introducing prefetch with `torchdata` didn't really give me any performance difference vs not prefetching, but the concept is definitely one that could be really beneficial. Are there any benchmarks that show the speed-up you can get with `torchdata`'s prefetch just for comparison?"],"created_at":1684596340000,"updated_at":1685641200000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Feature request\r\n\r\nAdd support for prefetching the next n batches through iterabledataset to reduce batch loading bottleneck in training loop.\r\n\r\n### Motivation\r\n\r\nThe primary motivation behind this is to use hardware accelerators alongside a streaming dataset. This is required when you are in a low ram or low disk space setting as well as quick iteration where you're iterating though different accelerator environments (e.x changing ec2 instances quickly to figure out batch\/sec for a particular architecture). 
\r\n\r\nCurrently, using the IterableDataset results in accelerators becoming basically useless due to the massive bottleneck induced by the dataset lazy loading\/transform\/mapping.\r\n\r\nI've considered two alternatives:\r\n1. The PyTorch DataLoader, which handles this. However, I'm using jax, and I believe this is a piece of functionality that should live in the stream class.\r\n2. Replicating the \"num_workers\" part of the PyTorch DataLoader to eagerly load batches and apply the transform, so Arrow caching will automatically cache results and make them accessible.\r\n\r\n### Your contribution\r\n\r\nI may or may not have time to do this. Currently, I've written a basic multiprocessing approach to handle the eager DataLoader for my own use case, with code that's not integrated with datasets. I'd definitely see this as being the default over the regular Dataset for most people, given that they wouldn't have to wait on the dataset while also not worrying about performance.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5878\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5878\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5877","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5877\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5877\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5877\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5877","id":1717983961,"node_id":"I_kwDODunzps5mZlrZ","number":5877,"title":"Request for text deduplication feature","user":{"login":"SupreethRao99","id":55043035,"node_id":"MDQ6VXNlcjU1MDQzMDM1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/55043035?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/SupreethRao99","html_url":"https:\/\/github.com\/SupreethRao99","followers_url":"https:\/\/api.github.com\/users\/SupreethRao99\/followers","following_url":"https:\/\/api.github.com\/users\/SupreethRao99\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/SupreethRao99\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/SupreethRao99\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/SupreethRao99\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/SupreethRao99\/orgs","repos_url":"https:\/\/api.github.com\/users\/SupreethRao99\/repos","events_url":"https:\/\/api.github.com\/users\/SupreethRao99\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/SupreethRao99\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The \"exact match\" deduplication will be possible when we resolve https:\/\/github.com\/huggingface\/datasets\/issues\/2514 (first, https:\/\/github.com\/apache\/arrow\/issues\/30950 needs to be addressed on the 
Arrow side). In the meantime, you can use Polars or DuckDB (e.g., via [datasets-sql](https:\/\/github.com\/mariosasko\/datasets_sql)).\r\n\r\nFuzzy deduplication is out-of-scope for now ([splink](https:\/\/github.com\/moj-analytical-services\/splink) is probably the best tool for it).","This library can be an intermediate solution: https:\/\/github.com\/ChenghaoMou\/text-dedup\/tree\/main"],"created_at":1684547760000,"updated_at":1685651178000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Feature request\n\nIt would be great if there were support for high-performance, highly scalable text deduplication algorithms as part of the datasets library.\n\n### Motivation\n\nMotivated by this blog post https:\/\/huggingface.co\/blog\/dedup and this library https:\/\/github.com\/google-research\/deduplicate-text-datasets, but slightly frustrated by how it's not very easy to work with these tools, I am proposing this feature.\n\n### Your contribution\n\nI would be happy to contribute to the development effort of this feature, and would love to collaborate with others on it. ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5877\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5877\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5876","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5876\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5876\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5876\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5876","id":1717978985,"node_id":"I_kwDODunzps5mZkdp","number":5876,"title":"Incompatibility with DataLab","user":{"login":"helpmefindaname","id":26192135,"node_id":"MDQ6VXNlcjI2MTkyMTM1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26192135?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/helpmefindaname","html_url":"https:\/\/github.com\/helpmefindaname","followers_url":"https:\/\/api.github.com\/users\/helpmefindaname\/followers","following_url":"https:\/\/api.github.com\/users\/helpmefindaname\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/helpmefindaname\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/helpmefindaname\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/helpmefindaname\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/helpmefindaname\/orgs","repos_url":"https:\/\/api.github.com\/users\/helpmefindaname\/repos","events_url":"https:\/\/api.github.com\/users\/helpmefindaname\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/helpmefindaname\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892877,"node_id":"MDU6TGFiZWwxOTM1ODkyODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/good%20first%20issue","name":"good first issue","color":"7057ff","default":true,"description":"Good for 
newcomers"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Indeed, `clobber=True` (with a warning if the existing protocol will be overwritten) should fix the issue, but maybe a better solution is to register our compression filesystem before the script is executed and unregister them afterward. WDYT @lhoestq @albertvillanova?","I think we should use clobber and show a warning if it overwrote a registered filesystem indeed ! This way the user can re-register the filesystems if needed. Though they should probably be compatible (and maybe do the exact same thing) so I wouldn't de-register the `datasets` filesystems"],"created_at":1684546751000,"updated_at":1684996954000,"closed_at":1684996954000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nHello,\r\nI am currently working on a project where both [DataLab](https:\/\/github.com\/ExpressAI\/DataLab) and [datasets](https:\/\/github.com\/huggingface\/datasets) are subdependencies.\r\nI noticed that I cannot import both libraries, as they both register FileSystems in `fsspec`, expecting the FileSystems not being registered before.\r\n\r\nWhen running the code below, I get the following error:\r\n```\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"C:\\Users\\Bened\\anaconda3\\envs\\ner-eval-dashboard2\\lib\\site-packages\\datalabs\\__init__.py\", line 28, in \r\n from datalabs.arrow_dataset import concatenate_datasets, Dataset\r\n File \"C:\\Users\\Bened\\anaconda3\\envs\\ner-eval-dashboard2\\lib\\site-packages\\datalabs\\arrow_dataset.py\", line 60, in \r\n from datalabs.arrow_writer import ArrowWriter, OptimizedTypedSequence\r\n File \"C:\\Users\\Bened\\anaconda3\\envs\\ner-eval-dashboard2\\lib\\site-packages\\datalabs\\arrow_writer.py\", line 28, in \r\n from datalabs.features import (\r\n File \"C:\\Users\\Bened\\anaconda3\\envs\\ner-eval-dashboard2\\lib\\site-packages\\datalabs\\features\\__init__.py\", line 2, in \r\n from datalabs.features.audio import Audio\r\n File \"C:\\Users\\Bened\\anaconda3\\envs\\ner-eval-dashboard2\\lib\\site-packages\\datalabs\\features\\audio.py\", line 21, in \r\n from datalabs.utils.streaming_download_manager import xopen\r\n File \"C:\\Users\\Bened\\anaconda3\\envs\\ner-eval-dashboard2\\lib\\site-packages\\datalabs\\utils\\streaming_download_manager.py\", line 16, in \r\n from datalabs.filesystems import COMPRESSION_FILESYSTEMS\r\n File \"C:\\Users\\Bened\\anaconda3\\envs\\ner-eval-dashboard2\\lib\\site-packages\\datalabs\\filesystems\\__init__.py\", line 37, in \r\n fsspec.register_implementation(fs_class.protocol, fs_class)\r\n File \"C:\\Users\\Bened\\anaconda3\\envs\\ner-eval-dashboard2\\lib\\site-packages\\fsspec\\registry.py\", line 51, in register_implementation\r\n raise ValueError(\r\nValueError: Name (bz2) already in the registry and clobber is False\r\n```\r\n\r\nI think as simple solution would be to just set `clobber=True` in https:\/\/github.com\/huggingface\/datasets\/blob\/main\/src\/datasets\/filesystems\/__init__.py#L28. This allows the register to discard previous registrations. This should work, as the datalabs FileSystems are copies of the datasets FileSystems. However, I don't know if it is guaranteed to be compatible with other libraries that might use the same protocols.\r\n\r\n\r\nI am linking the symmetric issue on [DataLab](https:\/\/github.com\/ExpressAI\/DataLab\/issues\/425) as ideally the issue is solved in both libraries the same way. 
Otherwise, it could lead to different behaviors depending on which library gets imported first.\r\n\n\n### Steps to reproduce the bug\n\n1. Run `pip install datalabs==0.4.15 datasets==2.12.0`\r\n2. Run the following python code:\r\n ```\r\n import datalabs\r\n import datasets\r\n ```\n\n### Expected behavior\n\nIt should be possible to import both libraries without getting a Value Error\n\n### Environment info\n\ndatalabs==0.4.15\r\ndatasets==2.12.0\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5876\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5876\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5875","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5875\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5875\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5875\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5875","id":1716770394,"node_id":"I_kwDODunzps5mU9Za","number":5875,"title":"Why split slicing doesn't behave like list slicing ?","user":{"login":"astariul","id":43774355,"node_id":"MDQ6VXNlcjQzNzc0MzU1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/43774355?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/astariul","html_url":"https:\/\/github.com\/astariul","followers_url":"https:\/\/api.github.com\/users\/astariul\/followers","following_url":"https:\/\/api.github.com\/users\/astariul\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/astariul\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/astariul\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/astariul\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/astariul\/orgs","repos_url":"https:\/\/api.github.com\/users\/astariul\/repos","events_url":"https:\/\/api.github.com\/users\/astariul\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/astariul\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892865,"node_id":"MDU6TGFiZWwxOTM1ODkyODY1","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/duplicate","name":"duplicate","color":"cfd3d7","default":true,"description":"This issue or pull request already exists"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["A duplicate of https:\/\/github.com\/huggingface\/datasets\/issues\/1774"],"created_at":1684480870000,"updated_at":1684857734000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nIf I want to get the first 10 samples of my dataset, I can do :\r\n\r\n```\r\nds = datasets.load_dataset('mnist', split='train[:10]')\r\n```\r\n\r\nBut if I exceed the number of samples in the dataset, an exception is raised : \r\n\r\n```\r\nds = datasets.load_dataset('mnist', split='train[:999999999]')\r\n```\r\n\r\n> ValueError: Requested slice [:999999999] incompatible with 60000 examples.\n\n### Steps to reproduce the bug\n\n```\r\nds = datasets.load_dataset('mnist', 
split='train[:999999999]')\r\n```\n\n### Expected behavior\n\nI would expect it to behave like python lists (no exception raised, the whole list is kept) : \r\n\r\n```\r\nd = list(range(1000))[:999999]\r\nprint(len(d)) # > 1000\r\n```\n\n### Environment info\n\n- `datasets` version: 2.9.0\r\n- Platform: macOS-12.6-arm64-arm-64bit\r\n- Python version: 3.9.12\r\n- PyArrow version: 11.0.0\r\n- Pandas version: 1.5.3","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5875\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5875\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5874","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5874\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5874\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5874\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5874","id":1715708930,"node_id":"I_kwDODunzps5mQ6QC","number":5874,"title":"Using as_dataset on a \"parquet\" builder ","user":{"login":"rems75","id":9039058,"node_id":"MDQ6VXNlcjkwMzkwNTg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9039058?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rems75","html_url":"https:\/\/github.com\/rems75","followers_url":"https:\/\/api.github.com\/users\/rems75\/followers","following_url":"https:\/\/api.github.com\/users\/rems75\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rems75\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rems75\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rems75\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rems75\/orgs","repos_url":"https:\/\/api.github.com\/users\/rems75\/repos","events_url":"https:\/\/api.github.com\/users\/rems75\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rems75\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! You can refer to [this doc](https:\/\/huggingface.co\/docs\/datasets\/filesystems#load-and-save-your-datasets-using-your-cloud-storage-filesystem) to see the intended usage (basically, it skips the Arrow -> Parquet conversion step in `ds = load_dataset(...); ds.to_parquet(\"path\/to\/parquet\")`) and allows writing Parquet to remote storage unlike `to_parquet`).\r\n\r\n> I guess I'd expect as_dataset to generate the dataset in arrow format if it has to, or to suggest an alternative way to load the dataset (I've also tried other methods with load_dataset to no avail, probably due to misunderstandings on my part).\r\n\r\n`as_dataset` does not work with `file_format=\"parquet\"` files as Parquet files cannot be memory-mapped, so I think we should just raise an error in that case.\r\n"],"created_at":1684418943000,"updated_at":1685539435000,"closed_at":1685539435000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nI used a custom builder to ``download_and_prepare`` a dataset. 
The first (very minor) issue is that the doc seems to suggest ``download_and_prepare`` will return the dataset, while it does not ([builder.py](https:\/\/github.com\/huggingface\/datasets\/blob\/main\/src\/datasets\/builder.py#L718-L738)).\r\n```\r\n >>> from datasets import load_dataset_builder\r\n >>> builder = load_dataset_builder(\"rotten_tomatoes\")\r\n >>> ds = builder.download_and_prepare(\".\/output_dir\", file_format=\"parquet\")\r\n```\r\n\r\nThe main issue I am facing is loading the dataset from those parquet files. I used the `as_dataset` method suggested by the doc, however it returns:\r\n`\r\nFileNotFoundError: [Errno 2] Failed to open local file 'output_dir\/__main__-train-00000-of-00245.arrow'. Detail:\r\n[errno 2] No such file or directory.\r\n` \n\n### Steps to reproduce the bug\n\n1. Create a custom builder of some sort: `builder = CustomBuilder()`.\r\n2. Run `download_and_prepare` with the parquet format: `builder.download_and_prepare(\".\/output_dir\", file_format=\"parquet\")`.\r\n3. Run `dataset = builder.as_dataset()`. \n\n### Expected behavior\n\nI guess I'd expect `as_dataset` to generate the dataset in arrow format if it has to, or to suggest an alternative way to load the dataset (I've also tried other methods with `load_dataset` to no avail, probably due to misunderstandings on my part). \n\n### Environment info\n\n```\r\n- `datasets` version: 2.12.0\r\n- Platform: Linux-5.15.0-1027-gcp-x86_64-with-glibc2.31\r\n- Python version: 3.10.0\r\n- Huggingface_hub version: 0.14.1\r\n- PyArrow version: 8.0.0\r\n- Pandas version: 1.5.3\r\n```","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5874\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5874\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5873","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5873\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5873\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5873\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5873","id":1713269724,"node_id":"I_kwDODunzps5mHmvc","number":5873,"title":"Allow setting the environment variable for the lock file 
path","user":{"login":"xin3he","id":83260933,"node_id":"MDQ6VXNlcjgzMjYwOTMz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/83260933?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/xin3he","html_url":"https:\/\/github.com\/xin3he","followers_url":"https:\/\/api.github.com\/users\/xin3he\/followers","following_url":"https:\/\/api.github.com\/users\/xin3he\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/xin3he\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/xin3he\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/xin3he\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/xin3he\/orgs","repos_url":"https:\/\/api.github.com\/users\/xin3he\/repos","events_url":"https:\/\/api.github.com\/users\/xin3he\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/xin3he\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1684307402000,"updated_at":1684307465000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Feature request\r\n\r\nAdd an environment variable to replace the default lock file path.\r\n\r\n### Motivation\r\n\r\nUsually, dataset path is a read-only path while the lock file needs to be modified each time. It would be convenient if the path can be reset individually.\r\n\r\n### Your contribution\r\n\r\n```\/src\/datasets\/utils\/filelock.py\r\nclass UnixFileLock(BaseFileLock):\r\n def __init__(self, lock_file, timeout=-1, max_filename_length=None):\r\n #-------------------\r\n if os.getenv('DS_TMP_PATH'):\r\n file_name = str(lock_file).split('\/')[-1]\r\n dataset_tmp_path = os.getenv('DS_TMP_PATH')\r\n lock_file = os.path.join(dataset_tmp_path, file_name)\r\n #-------------------\r\n max_filename_length = os.statvfs(os.path.dirname(lock_file)).f_namemax\r\n super().__init__(lock_file, timeout=timeout, max_filename_length=max_filename_length)\r\n```\r\nA simple demo is as upper. 
Thanks.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5873\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5873\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5872","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5872\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5872\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5872\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5872","id":1713174662,"node_id":"PR_kwDODunzps5QrQ5o","number":5872,"title":"Fix infer module for uppercase extensions","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007049 \/ 0.011353 (-0.004304) | 0.005034 \/ 0.011008 (-0.005974) | 0.097737 \/ 0.038508 (0.059229) | 0.033280 \/ 0.023109 (0.010170) | 0.301017 \/ 0.275898 (0.025119) | 0.336593 \/ 0.323480 (0.013113) | 0.005567 \/ 0.007986 (-0.002419) | 0.005384 \/ 0.004328 (0.001056) | 0.072980 \/ 0.004250 (0.068730) | 0.045030 \/ 0.037052 (0.007978) | 0.303280 \/ 0.258489 (0.044791) | 0.367528 \/ 0.293841 (0.073687) | 0.034131 \/ 0.128546 (-0.094415) | 0.012118 \/ 0.075646 (-0.063528) | 0.331677 \/ 0.419271 (-0.087594) | 0.049211 \/ 0.043533 (0.005678) | 0.297535 \/ 0.255139 (0.042396) | 0.318136 \/ 0.283200 (0.034936) | 0.101574 \/ 0.141683 (-0.040109) | 1.472769 \/ 1.452155 (0.020615) | 1.541724 \/ 1.492716 (0.049007) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.014646 \/ 0.018006 (-0.003360) | 0.439050 \/ 0.000490 (0.438560) | 0.008575 \/ 0.000200 (0.008375) | 0.000297 \/ 0.000054 (0.000242) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.027591 \/ 0.037411 (-0.009820) | 0.111639 \/ 0.014526 (0.097113) | 0.117098 \/ 0.176557 (-0.059458) | 0.173281 \/ 0.737135 (-0.563855) | 0.123197 \/ 0.296338 (-0.173141) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.397507 \/ 0.215209 (0.182298) | 3.971457 \/ 2.077655 (1.893803) | 1.781158 \/ 1.504120 (0.277038) | 1.590419 \/ 1.541195 (0.049224) | 1.716374 \/ 1.468490 (0.247884) | 0.687150 \/ 4.584777 (-3.897627) | 
3.691009 \/ 3.745712 (-0.054703) | 2.050900 \/ 5.269862 (-3.218961) | 1.304893 \/ 4.565676 (-3.260784) | 0.084507 \/ 0.424275 (-0.339768) | 0.012231 \/ 0.007607 (0.004624) | 0.493033 \/ 0.226044 (0.266988) | 4.929957 \/ 2.268929 (2.661028) | 2.209069 \/ 55.444624 (-53.235555) | 1.885992 \/ 6.876477 (-4.990485) | 2.007004 \/ 2.142072 (-0.135069) | 0.827265 \/ 4.805227 (-3.977963) | 0.168225 \/ 6.500664 (-6.332439) | 0.064988 \/ 0.075469 (-0.010481) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.182341 \/ 1.841788 (-0.659447) | 14.691983 \/ 8.074308 (6.617674) | 14.350720 \/ 10.191392 (4.159328) | 0.164307 \/ 0.680424 (-0.516117) | 0.017480 \/ 0.534201 (-0.516720) | 0.421843 \/ 0.579283 (-0.157441) | 0.417481 \/ 0.434364 (-0.016883) | 0.496587 \/ 0.540337 (-0.043751) | 0.581208 \/ 1.386936 (-0.805728) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007070 \/ 0.011353 (-0.004283) | 0.005083 \/ 0.011008 (-0.005926) | 0.075009 \/ 0.038508 (0.036500) | 0.032343 \/ 0.023109 (0.009234) | 0.366788 \/ 0.275898 (0.090890) | 0.392273 \/ 0.323480 (0.068794) | 0.005512 \/ 0.007986 (-0.002474) | 0.003999 \/ 0.004328 (-0.000329) | 0.073743 \/ 0.004250 (0.069492) | 0.046203 \/ 0.037052 (0.009151) | 0.367874 \/ 0.258489 (0.109385) | 0.409154 \/ 0.293841 (0.115313) | 0.035227 \/ 0.128546 (-0.093319) | 0.012223 \/ 0.075646 (-0.063424) | 0.087149 \/ 0.419271 (-0.332122) | 0.045648 \/ 0.043533 (0.002115) | 0.362414 \/ 0.255139 (0.107275) | 0.379970 \/ 0.283200 (0.096770) | 0.100631 \/ 0.141683 (-0.041052) | 1.439733 \/ 1.452155 (-0.012422) | 1.506266 \/ 1.492716 (0.013550) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.227071 \/ 0.018006 (0.209065) | 0.451243 \/ 0.000490 (0.450753) | 0.000406 \/ 0.000200 (0.000206) | 0.000060 \/ 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.028952 \/ 0.037411 (-0.008459) | 0.111934 \/ 0.014526 (0.097408) | 0.124080 \/ 0.176557 (-0.052477) | 0.174022 \/ 0.737135 (-0.563113) | 0.126811 \/ 0.296338 (-0.169527) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.436423 \/ 0.215209 (0.221214) | 4.331959 \/ 2.077655 (2.254304) | 2.111914 \/ 1.504120 (0.607794) | 1.921338 \/ 1.541195 (0.380143) | 1.994425 \/ 1.468490 (0.525935) | 0.699164 \/ 4.584777 (-3.885613) | 
3.722143 \/ 3.745712 (-0.023569) | 3.516538 \/ 5.269862 (-1.753323) | 1.867245 \/ 4.565676 (-2.698431) | 0.085923 \/ 0.424275 (-0.338352) | 0.012059 \/ 0.007607 (0.004452) | 0.586147 \/ 0.226044 (0.360102) | 5.395823 \/ 2.268929 (3.126894) | 2.594430 \/ 55.444624 (-52.850194) | 2.275021 \/ 6.876477 (-4.601456) | 2.347810 \/ 2.142072 (0.205737) | 0.835118 \/ 4.805227 (-3.970109) | 0.167089 \/ 6.500664 (-6.333575) | 0.064893 \/ 0.075469 (-0.010576) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.291423 \/ 1.841788 (-0.550365) | 14.992696 \/ 8.074308 (6.918388) | 13.307842 \/ 10.191392 (3.116450) | 0.163799 \/ 0.680424 (-0.516625) | 0.017315 \/ 0.534201 (-0.516886) | 0.461319 \/ 0.579283 (-0.117965) | 0.430474 \/ 0.434364 (-0.003889) | 0.568115 \/ 0.540337 (0.027777) | 0.647909 \/ 1.386936 (-0.739027) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#a5161c9ecdcdde9cc99c7f212da13523d5ba6bdb \"CML watermark\")\n"],"created_at":1684303005000,"updated_at":1684333619000,"closed_at":1684333158000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5872","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5872","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5872.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5872.patch","merged_at":1684333158000},"body":"Fix the `infer_module_for_data_files` and `infer_module_for_data_files_in_archives` functions when passed a data file name with uppercase extension, e.g. 
`filename.TXT`.\r\n\r\nBefore, `None` module was returned.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5872\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5872\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5871","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5871\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5871\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5871\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5871","id":1712573073,"node_id":"I_kwDODunzps5mE8qR","number":5871,"title":"data configuration hash suffix depends on uncanonicalized data_dir","user":{"login":"kylrth","id":5044802,"node_id":"MDQ6VXNlcjUwNDQ4MDI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5044802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/kylrth","html_url":"https:\/\/github.com\/kylrth","followers_url":"https:\/\/api.github.com\/users\/kylrth\/followers","following_url":"https:\/\/api.github.com\/users\/kylrth\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/kylrth\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/kylrth\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/kylrth\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/kylrth\/orgs","repos_url":"https:\/\/api.github.com\/users\/kylrth\/repos","events_url":"https:\/\/api.github.com\/users\/kylrth\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/kylrth\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892877,"node_id":"MDU6TGFiZWwxOTM1ODkyODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/good%20first%20issue","name":"good first issue","color":"7057ff","default":true,"description":"Good for 
newcomers"}],"state":"closed","locked":false,"assignee":{"login":"kylrth","id":5044802.0,"node_id":"MDQ6VXNlcjUwNDQ4MDI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5044802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/kylrth","html_url":"https:\/\/github.com\/kylrth","followers_url":"https:\/\/api.github.com\/users\/kylrth\/followers","following_url":"https:\/\/api.github.com\/users\/kylrth\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/kylrth\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/kylrth\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/kylrth\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/kylrth\/orgs","repos_url":"https:\/\/api.github.com\/users\/kylrth\/repos","events_url":"https:\/\/api.github.com\/users\/kylrth\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/kylrth\/received_events","type":"User","site_admin":false},"assignees":[{"login":"kylrth","id":5044802,"node_id":"MDQ6VXNlcjUwNDQ4MDI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5044802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/kylrth","html_url":"https:\/\/github.com\/kylrth","followers_url":"https:\/\/api.github.com\/users\/kylrth\/followers","following_url":"https:\/\/api.github.com\/users\/kylrth\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/kylrth\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/kylrth\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/kylrth\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/kylrth\/orgs","repos_url":"https:\/\/api.github.com\/users\/kylrth\/repos","events_url":"https:\/\/api.github.com\/users\/kylrth\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/kylrth\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["It could even use `os.path.realpath` to resolve symlinks.","Indeed, it makes sense to normalize `data_dir`. Feel free to submit a PR (this can be \"fixed\" [here](https:\/\/github.com\/huggingface\/datasets\/blob\/89f775226321ba94e5bf4670a323c0fb44f5f65c\/src\/datasets\/builder.py#L173))","#self-assign"],"created_at":1684263364000,"updated_at":1685721125000,"closed_at":1685721125000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\r\n\r\nI am working with the `recipe_nlg` dataset, which requires manual download. Once it's downloaded, I've noticed that the hash in the custom data configuration is different if I add a trailing `\/` to my `data_dir`. It took me a while to notice that the hashes were different, and to understand that that was the cause of my dataset being processed anew instead of the cached version being used.\r\n\r\n### Steps to reproduce the bug\r\n\r\n1. Follow the steps to manually download the `recipe_nlg` dataset to `\/data\/recipenlg`.\r\n2. 
Load it using `load_dataset`, once without a trailing slash and once with one:\r\n\r\n ```python\r\n >>> ds = load_dataset(\"recipe_nlg\", data_dir=\"\/data\/recipenlg\")\r\n Using custom data configuration default-082278caeea85765\r\n Downloading and preparing dataset recipe_nlg\/default to \/home\/kyle\/.cache\/huggingface\/datasets\/recipe_nlg\/default-082278caeea85765\/1.0.0\/aa4f120223637bedf7360cecb70a9bd108acfd64e38207ca90c9f385d21e5e74...\r\n Dataset recipe_nlg downloaded and prepared to \/home\/kyle\/.cache\/huggingface\/datasets\/recipe_nlg\/default-082278caeea85765\/1.0.0\/aa4f120223637bedf7360cecb70a9bd108acfd64e38207ca90c9f385d21e5e74. Subsequent calls will reuse this data.\r\n 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:01<00:00, 1.10s\/it]\r\n DatasetDict({\r\n train: Dataset({\r\n features: ['id', 'title', 'ingredients', 'directions', 'link', 'source', 'ner'],\r\n num_rows: 2231142\r\n })\r\n })\r\n >>> ds = load_dataset(\"recipe_nlg\", data_dir=\"\/data\/recipenlg\/\")\r\n Using custom data configuration default-83e87680785d0493\r\n Downloading and preparing dataset recipe_nlg\/default to \/home\/user\/.cache\/huggingface\/datasets\/recipe_nlg\/default-83e87680785d0493\/1.0.0\/aa4f120223637bedf7360cecb70a9bd108acfd64e38207ca90c9f385d21e5e74...\r\n Generating train split: 1%| | 12701\/2231142 [00:04<13:15, 2790.25 examples\/s\r\n ^C\r\n ```\r\n\r\n3. Observe that the hash suffix in the custom data configuration changes due to the altered string.\r\n\r\n### Expected behavior\r\n\r\nI think I would expect the hash to remain constant if it actually points to the same location on disk. 
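\r\n\r\nFor example, normalizing the two spellings makes them identical (a quick check):\r\n\r\n```python\r\nimport os\r\n\r\nfor raw in (\"\/data\/recipenlg\", \"\/data\/recipenlg\/\"):\r\n    print(os.path.normpath(raw))  # both print \/data\/recipenlg\r\n```\r\n\r\n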
Using `os.path.normpath` (or even `os.path.realpath`, which also resolves symlinks) to canonicalize the paths would accomplish this.\r\n\r\n### Environment info\r\n\r\n- `datasets` version: 2.8.0\r\n- Platform: Linux-5.4.0-147-generic-x86_64-with-glibc2.31\r\n- Python version: 3.10.8\r\n- PyArrow version: 10.0.1\r\n- Pandas version: 1.5.2","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5871\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5871\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5870","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5870\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5870\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5870\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5870","id":1712156282,"node_id":"I_kwDODunzps5mDW56","number":5870,"title":"Behaviour difference between datasets.map and IterableDatasets.map","user":{"login":"llStringll","id":30209072,"node_id":"MDQ6VXNlcjMwMjA5MDcy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/30209072?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/llStringll","html_url":"https:\/\/github.com\/llStringll","followers_url":"https:\/\/api.github.com\/users\/llStringll\/followers","following_url":"https:\/\/api.github.com\/users\/llStringll\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/llStringll\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/llStringll\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/llStringll\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/llStringll\/orgs","repos_url":"https:\/\/api.github.com\/users\/llStringll\/repos","events_url":"https:\/\/api.github.com\/users\/llStringll\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/llStringll\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["PS - some work is definitely needed for 'special cases' docs, not explanations, just usages of 'functions' under a mixture of special cases, like a combination of custom databuilder + iterable dataset for large size + dynamic .map() application."],"created_at":1684247577000,"updated_at":1684247765000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nAll the examples in the docs throughout huggingface datasets correspond to the Dataset object, not the IterableDataset object. At one point in time they might have been in sync, but the code for datasets version >=2.9.0 is very different compared to the docs. 
\r\nI basically need to .map() a transform on images in an iterable dataset, which was made using a custom databuilder config.\r\nThis works very well in map-style datasets, but `.map()` fails in IterableDatasets, showing behaviour as such:\r\nthe \"pixel_values\" key is not found (KeyError) in the examples object\/dict passed into the transform function for map, which works fine with map style, even as a batch.\r\nIn iterable style, the object\/dict passed into the `.map()` callable function is completely different from what is mentioned in all the examples.\r\nPlease look into this. Thank you\r\n\r\nMy databuilder class is implemented as follows:\r\n\r\n def _info(self):\r\n print (\"Config: \",self.config.__dict__.keys())\r\n return datasets.DatasetInfo(\r\n description=_DESCRIPTION,\r\n features=datasets.Features(\r\n {\r\n \"labels\": datasets.Sequence(datasets.Value(\"uint16\")),\r\n # \"labels_name\": datasets.Value(\"string\"),\r\n # \"pixel_values\": datasets.Array3D(shape=(3, 1280, 960), dtype=\"float32\"),\r\n \"pixel_values\": datasets.Array3D(shape=(1280, 960, 3), dtype=\"uint8\"),\r\n \"image_s3_path\": datasets.Value(\"string\"),\r\n }\r\n ),\r\n supervised_keys=None,\r\n homepage=\"none\",\r\n citation=\"\",\r\n )\r\n\r\n def _split_generators(self, dl_manager):\r\n records_train = list(db.mini_set.find({'split':'train'},{'image_s3_path':1, 'ocwen_template_name':1}))[:10000]\r\n records_val = list(db.mini_set.find({'split':'val'},{'image_s3_path':1, 'ocwen_template_name':1}))[:1000]\r\n # print (len(records),self.config.num_shards)\r\n # shard_size_train = len(records_train)\/\/self.config.num_shards\r\n # sharded_records_train = [records_train[i:i+shard_size_train] for i in range(0,len(records_train),shard_size_train)]\r\n # shard_size_val = len(records_val)\/\/self.config.num_shards\r\n # sharded_records_val = [records_val[i:i+shard_size_val] for i in range(0,len(records_val),shard_size_val)]\r\n return [\r\n datasets.SplitGenerator(\r\n name=datasets.Split.TRAIN, gen_kwargs={\"records\":records_train} # passing list of records, for sharding to take over\r\n ),\r\n datasets.SplitGenerator(\r\n name=datasets.Split.VALIDATION, gen_kwargs={\"records\":records_val} # passing list of records, for sharding to take over\r\n ),\r\n ]\r\n\r\n def _generate_examples(self, records):\r\n # print (\"Generating examples for [{}] shards\".format(len(shards)))\r\n # initiate_db_connection()\r\n # records = list(db.mini_set.find({'split':split},{'image_s3_path':1, 'ocwen_template_name':1}))[:10]\r\n id_ = 0\r\n # for records in shards:\r\n for i,rec in enumerate(records):\r\n img_local_path = fetch_file(rec['image_s3_path'],self.config.buffer_dir)\r\n # t = self.config.processor(Image.open(img_local_path), random_padding=True, return_tensors=\"np\").pixel_values.squeeze()\r\n # print (t.shape, type(t),type(t[0][0][0]))\r\n # sys.exit()\r\n pvs = np.array(Image.open(img_local_path).resize((1280,960))) # image object is wxh, so resize as per that, numpy array of it is hxwxc, transposing to cxwxh\r\n # pvs = self.config.processor(Image.open(img_local_path), random_padding=True, return_tensors=\"np\").pixel_values.astype(np.float16).squeeze()\r\n # print (type(pvs[0][0][0]))\r\n lblids = self.config.processor.tokenizer('<s_class>'+rec['ocwen_template_name']+'<\/s_class>'+'<\/s>', add_special_tokens=False, padding=False, truncation=False, return_tensors=\"np\")[\"input_ids\"].squeeze(0) # take padding later, as per batch collating\r\n # print (len(lblids),type(lblids[0]))\r\n # print (type(pvs),pvs.shape,type(pvs[0][0][0]), 
type(lblids))\r\n yield id_, {\"labels\":lblids,\"pixel_values\":pvs,\"image_s3_path\":rec['image_s3_path']}\r\n id_+=1\r\n os.remove(img_local_path)\r\n\r\nand I load it inside my trainer script as such:\r\n`ds = load_dataset(\"\/tmp\/DonutDS\/dataset\/\", split=\"train\", streaming=True) # iterable dataset, where .map() fails`\r\nor also as\r\n`ds = load_from_disk('\/tmp\/DonutDS\/dataset\/') # map-style dataset`\r\n\r\nThank you to the team for having such a great library, and for this bug fix in advance!\n\n### Steps to reproduce the bug\n\nThe above config allows one to reproduce the said bug.\n\n### Expected behavior\n\n.map() should show some consistency between map-style and iterable-style datasets, or at least the docs should address iterable-style dataset behaviour with examples. As they stand, I honestly do not see the use of such docs.\n\n### Environment info\n\ndatasets==2.9.0\r\ntransformers==4.26.0","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5870\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5870\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5869","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5869\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5869\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5869\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5869","id":1711990003,"node_id":"I_kwDODunzps5mCuTz","number":5869,"title":"Image Encoding Issue when submitting a Parquet Dataset","user":{"login":"PhilippeMoussalli","id":47530815,"node_id":"MDQ6VXNlcjQ3NTMwODE1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47530815?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/PhilippeMoussalli","html_url":"https:\/\/github.com\/PhilippeMoussalli","followers_url":"https:\/\/api.github.com\/users\/PhilippeMoussalli\/followers","following_url":"https:\/\/api.github.com\/users\/PhilippeMoussalli\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/PhilippeMoussalli\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/PhilippeMoussalli\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/PhilippeMoussalli\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/PhilippeMoussalli\/orgs","repos_url":"https:\/\/api.github.com\/users\/PhilippeMoussalli\/repos","events_url":"https:\/\/api.github.com\/users\/PhilippeMoussalli\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/PhilippeMoussalli\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @PhilippeMoussalli thanks for opening a detailed issue. It seems the issue is more related to the `datasets` library so I'll ping @lhoestq @mariosasko on this one :) \n\n(edit: also can one of you move the issue to the datasets repo? 
Thanks in advance \ud83d\ude4f)","Hi ! The `Image()` info is stored in the **schema metadata**. More precisely, there should be a \"huggingface\" field in the schema metadata that contains the `datasets` feature type of each column.\r\n\r\nTo fix your issue, you can use the same schema as the original Parquet files to write the new ones. You can also get the schema with metadata from a `Features` object, e.g.\r\n\r\n```python\r\nfrom datasets import Features, Image, Value\r\n\r\nfeatures = Features({\"image\": Image(), \"text\": Value(\"string\")})\r\nschema = features.arrow_schema\r\nprint(schema.metadata)\r\n# {b'huggingface': b'{\"info\": {\"features\": {\"image\": {\"_type\": \"Image\"}, \"text\": {\"dtype\": \"string\", \"_type\": \"Value\"}}}}'}\r\n```","It appears that the parquet files at `hf:\/\/datasets\/lambdalabs\/pokemon-blip-captions` don't have this metadata, and it is defined in the dataset_infos.json instead (legacy).\r\n\r\nYou can get the right schema with the HF metadata this way:\r\n\r\n```python\r\nfrom datasets import load_dataset_builder\r\n\r\nfeatures = load_dataset_builder(\"lambdalabs\/pokemon-blip-captions\").info.features\r\nschema = features.arrow_schema\r\n```","Btw in the future we might add support for a dedicated Image extension type in Arrow so that you won't need to add the schema metadata anymore ;)","Thanks @Wauplin @lhoestq for the quick reply :)! \r\n\r\nI tried your approach by passing the huggingface schema to the dask writer \r\n\r\n```\r\nfrom datasets import Features, Image, Value\r\ndf = dd.read_parquet(f\"hf:\/\/datasets\/lambdalabs\/pokemon-blip-captions\",index=False)\r\nfeatures = Features({\"image\": Image(), \"text\": Value(\"string\")})\r\nschema = features.arrow_schema\r\ndd.to_parquet(df, path = \"hf:\/\/datasets\/philippemo\/dummy_dataset\/data\", schema=schema)\r\n```\r\nAt first it didn't work as I was not able to visualize the images, so then I manually added the `dataset_infos.json` from the example dataset and it worked :)\r\n\r\nHowever, it's not very ideal, since there is some metadata in that file that needs to be computed in order to load the data properly, such as `num_of_bytes` and `num_examples`, which might be unknown in my use case. \r\n\r\n![Screenshot from 2023-05-16 16-54-55](https:\/\/github.com\/huggingface\/datasets\/assets\/47530815\/b2b448d2-d3d8-43a7-9682-9c0187a5192b)\r\n\r\nDo you have any pointers there? You mentioned that `datasets_info.json` will be deprecated\/legacy. Could you point me to some example image datasets on the hub that are stored as parquet and don't have the `datasets_info.json`?\r\n\r\n","You don't need the dataset_infos.json file as long as you have the schema with HF metadata ;)\r\nI could also check that it works fine myself on the git revision without the dataset_infos.json file.\r\n\r\nWhat made you think it didn't work ?","> You don't need the dataset_infos.json file as long as you have the schema with HF metadata ;) I could also check that it works fine myself on the git revision without the dataset_infos.json file.\r\n> \r\n> What made you think it didn't work ?\r\n\r\nThose are two identical dataset repos, both pushed with dask with the specified schema you mentioned above. I then uploaded the `dataset_infos.json` manually, taken from the original example dataset, into one of them. 
\r\n\r\n* **With schema**: https:\/\/huggingface.co\/datasets\/philippemo\/dummy_dataset_with_schema\r\n* **Without schema**: https:\/\/huggingface.co\/datasets\/philippemo\/dummy_dataset_without_schema\r\n\r\nYou can see that in the examples without schema the images fail to render properly. When loaded with `datasets` they return a dict and not a Pillow Image. ","I see ! I think it's a bug on our side - it should work without the metadata - let me investigate","Alright, it's fixed: https:\/\/huggingface.co\/datasets\/philippemo\/dummy_dataset_without_schema\r\n\r\nIt shows the image correctly now - even without the extra metadata :)","Thanks @lhoestq! \r\nI tested pushing a dataset again without the metadata and it works perfectly! \r\nI appreciate the help","Hi @lhoestq, \r\n\r\nI've tried pushing another dataset and I think the issue has reappeared: \r\n\r\n```\r\ndf = dd.read_parquet(f\"hf:\/\/datasets\/lambdalabs\/pokemon-blip-captions\")\r\nfeatures = datasets.Features({\"image\": datasets.Image(), \"text\": datasets.Value(\"string\")})\r\nschema = features.arrow_schema\r\ndd.to_parquet(df, path = \"hf:\/\/datasets\/philippemo\/dummy_dataset_without_schema_12_06\/data\", schema=schema)\r\n```\r\n\r\nHere is the dataset: \r\n https:\/\/huggingface.co\/datasets\/philippemo\/dummy_dataset_without_schema_12_06\r\nThe one that was working 2 weeks ago still seems to be intact though, it might be that it rendered properly when it was initially submitted and after this something was reverted on your side:\r\nhttps:\/\/huggingface.co\/datasets\/philippemo\/dummy_dataset_without_schema\r\n\r\nIt's weird because nothing really changed in the implementation, so it might be another issue in the hub backend. Do you have any pointers on how to resolve this? ","We're doing some changes in the way we're handling image parquet datasets right now. We'll include the fix from https:\/\/github.com\/huggingface\/datasets\/pull\/5921 in the new datasets-server version in the coming days","alright thanks for the update :), would that be part of the new release of datasets or is it something separate? if so, where can I track it? ","Once the new version of `datasets` is released (tomorrow probably) we'll open an issue on https:\/\/github.com\/huggingface\/datasets-server to update to this version :)","Alright we did the update :) This is fixed for good now","Yes thanks \ud83c\udf89\ud83c\udf89\ud83c\udf89"],"created_at":1684230178000,"updated_at":1686919718000,"closed_at":1686907848000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\r\n\r\nHello,\r\n\r\nI'd like to report an issue related to pushing a dataset represented as a Parquet file to a dataset repository using Dask. Here are the details:\r\n\r\nWe attempted to load an example dataset in Parquet format from the Hugging Face (HF) filesystem using Dask with the following code snippet:\r\n```\r\nimport dask.dataframe as dd\r\ndf = dd.read_parquet(\"hf:\/\/datasets\/lambdalabs\/pokemon-blip-captions\",index=False)\r\n```\r\nIn this dataset, the \"image\" column is represented as a dictionary\/struct with the format:\r\n\r\n```\r\ndf = df.compute()\r\ndf[\"image\"].iloc[0].keys()\r\n-> dict_keys(['bytes', 'path'])\r\n```\r\nI think this is the format encoded by the [`Image`](https:\/\/huggingface.co\/docs\/datasets\/v2.0.0\/en\/package_reference\/main_classes#datasets.Image) feature extractor from datasets into a format suitable for Arrow. 
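Decoding one of these structs manually supports that reading (a small sketch using the public `decode_example` helper of the `Image` feature, applied to the `df` computed above):\r\n\r\n```python\r\nfrom datasets import Image\r\n\r\nencoded = df[\"image\"].iloc[0]  # {'bytes': ..., 'path': ...}\r\npil_image = Image().decode_example(encoded)  # decodes back into a PIL.Image\r\n```\r\n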
\r\n\r\nThe next step was to push the dataset to a repository that I created:\r\n```\r\ndd.to_parquet(dask_df, path = \"hf:\/\/datasets\/philippemo\/dummy_dataset\/data\")\r\n```\r\n\r\nHowever, after pushing the dataset using Dask, the \"image\" column is now represented as the encoded dictionary `(['bytes', 'path'])`, and the images are not properly visualized. You can find the dataset here: [Link to the problematic dataset](https:\/\/huggingface.co\/datasets\/philippemo\/dummy_dataset).\r\n\r\nIt's worth noting that both the original dataset and the one submitted with Dask have the same schema, with minor alterations related to metadata:\r\n\r\n**[ Schema of original dummy example.](https:\/\/huggingface.co\/datasets\/lambdalabs\/pokemon-blip-captions\/blob\/main\/data\/train-00000-of-00001-566cc9b19d7203f8.parquet)** \r\n```\r\nimage: struct<bytes: binary, path: null>\r\n child 0, bytes: binary\r\n child 1, path: null\r\ntext: string\r\n```\r\n**[ Schema of pushed dataset with dask](https:\/\/huggingface.co\/datasets\/philippemo\/dummy_dataset\/blob\/main\/data\/part.0.parquet)**\r\n```\r\nimage: struct<bytes: binary, path: null>\r\n child 0, bytes: binary\r\n child 1, path: null\r\ntext: string\r\n```\r\n\r\nThis issue seems to be related to an encoding step that occurs when pushing a dataset to the hub. Normally, data should be represented as an HF dataset before pushing, but we are working with an example where we need to push large datasets using Dask.\r\n\r\nCould you please provide clarification on how to resolve this issue?\r\n\r\nThank you!\r\n\r\n\r\n### Reproduction\r\n\r\nTo get the schema, I downloaded the parquet files and used pyarrow.parquet to read the schema:\r\n```\r\nimport pyarrow.parquet\r\npyarrow.parquet.read_schema(<path to the parquet file>, memory_map=True)\r\n```\r\n\r\n### Logs\r\n\r\n_No response_\r\n\r\n### System info\r\n\r\n```shell\r\n- huggingface_hub version: 0.14.1\r\n- Platform: Linux-5.19.0-41-generic-x86_64-with-glibc2.35\r\n- Python version: 3.10.6\r\n- Running in iPython ?: No\r\n- Running in notebook ?: No\r\n- Running in Google Colab ?: No\r\n- Token path ?: \/home\/philippe\/.cache\/huggingface\/token\r\n- Has saved token ?: True\r\n- Who am I ?: philippemo\r\n- Configured git credential helpers: cache\r\n- FastAI: N\/A\r\n- Tensorflow: N\/A\r\n- Torch: N\/A\r\n- Jinja2: 3.1.2\r\n- Graphviz: N\/A\r\n- Pydot: N\/A\r\n- Pillow: 9.4.0\r\n- hf_transfer: N\/A\r\n- gradio: N\/A\r\n- ENDPOINT: https:\/\/huggingface.co\r\n- HUGGINGFACE_HUB_CACHE: \/home\/philippe\/.cache\/huggingface\/hub\r\n- HUGGINGFACE_ASSETS_CACHE: \/home\/philippe\/.cache\/huggingface\/assets\r\n- HF_TOKEN_PATH: \/home\/philippe\/.cache\/huggingface\/token\r\n- HF_HUB_OFFLINE: False\r\n- HF_HUB_DISABLE_TELEMETRY: False\r\n- HF_HUB_DISABLE_PROGRESS_BARS: None\r\n- HF_HUB_DISABLE_SYMLINKS_WARNING: False\r\n- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False\r\n- HF_HUB_DISABLE_IMPLICIT_TOKEN: False\r\n- HF_HUB_ENABLE_HF_TRANSFER: False\r\n```\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5869\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5869\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5868","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5868\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5868\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5868\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5868","id":1711173098,"node_id":"I_kwDODunzps5l_m3q","number":5868,"title":"Is it possible to change a cached file and 're-cache' it instead of re-generating?","user":{"login":"zyh3826","id":31238754,"node_id":"MDQ6VXNlcjMxMjM4NzU0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/31238754?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/zyh3826","html_url":"https:\/\/github.com\/zyh3826","followers_url":"https:\/\/api.github.com\/users\/zyh3826\/followers","following_url":"https:\/\/api.github.com\/users\/zyh3826\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/zyh3826\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/zyh3826\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/zyh3826\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/zyh3826\/orgs","repos_url":"https:\/\/api.github.com\/users\/zyh3826\/repos","events_url":"https:\/\/api.github.com\/users\/zyh3826\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/zyh3826\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Arrow files\/primitives (tables and arrays) are immutable, so re-generating them is the only option, I'm afraid.","> \r\n\r\nGot it, thanks for your reply"],"created_at":1684208742000,"updated_at":1684322496000,"closed_at":1684322496000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Feature request\n\nHi,\r\nI have a huge cached file using `map`(over 500GB), and I want to change an attribution of each element, is there possible to do it using some method instead of re-generating, because `map` takes over 24 hours\n\n### Motivation\n\nFor large datasets, I think it is very important because we always face the problem which is changing something in the original cache without re-generating it.\n\n### Your contribution\n\nFor now, I can't help, sorry.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5868\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5868\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5867","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5867\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5867\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5867\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5867","id":1710656067,"node_id":"PR_kwDODunzps5QizOn","number":5867,"title":"Add logic for hashing modules\/functions optimized with `torch.compile`","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006598 \/ 0.011353 (-0.004755) | 0.004565 \/ 0.011008 (-0.006443) | 0.099063 \/ 0.038508 (0.060555) | 0.028334 \/ 0.023109 (0.005225) | 0.323539 \/ 0.275898 (0.047641) | 0.372462 \/ 0.323480 (0.048982) | 0.005120 \/ 0.007986 (-0.002865) | 0.004797 \/ 0.004328 (0.000468) | 0.076862 \/ 0.004250 (0.072611) | 0.038021 \/ 0.037052 (0.000968) | 0.337801 \/ 0.258489 (0.079312) | 0.374601 \/ 0.293841 (0.080760) | 0.031158 \/ 0.128546 (-0.097389) | 0.011672 \/ 0.075646 (-0.063974) | 0.324913 \/ 0.419271 (-0.094359) | 0.051702 \/ 0.043533 (0.008169) | 0.339440 \/ 0.255139 (0.084301) | 0.372502 \/ 0.283200 (0.089303) | 0.097590 \/ 0.141683 (-0.044093) | 1.534238 \/ 1.452155 (0.082083) | 1.599701 \/ 1.492716 (0.106985) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.204101 \/ 0.018006 (0.186095) | 0.416981 \/ 0.000490 (0.416491) | 0.003436 \/ 0.000200 (0.003236) | 0.000071 \/ 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.023527 \/ 0.037411 (-0.013885) | 0.095748 \/ 0.014526 (0.081222) | 0.104498 \/ 0.176557 (-0.072059) | 0.164000 \/ 0.737135 (-0.573135) | 0.109170 \/ 0.296338 (-0.187168) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.418239 \/ 0.215209 (0.203030) | 4.153959 \/ 2.077655 (2.076305) | 1.856687 \/ 1.504120 (0.352567) | 1.657818 \/ 1.541195 (0.116623) | 1.715146 \/ 1.468490 (0.246656) | 0.700673 \/ 4.584777 (-3.884103) | 
3.401060 \/ 3.745712 (-0.344652) | 2.891045 \/ 5.269862 (-2.378816) | 1.519433 \/ 4.565676 (-3.046243) | 0.083151 \/ 0.424275 (-0.341124) | 0.012352 \/ 0.007607 (0.004745) | 0.523901 \/ 0.226044 (0.297856) | 5.288871 \/ 2.268929 (3.019943) | 2.322806 \/ 55.444624 (-53.121818) | 1.982223 \/ 6.876477 (-4.894253) | 2.074883 \/ 2.142072 (-0.067189) | 0.812400 \/ 4.805227 (-3.992827) | 0.152183 \/ 6.500664 (-6.348481) | 0.066538 \/ 0.075469 (-0.008931) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.223220 \/ 1.841788 (-0.618567) | 14.024391 \/ 8.074308 (5.950083) | 14.166657 \/ 10.191392 (3.975265) | 0.146017 \/ 0.680424 (-0.534407) | 0.016698 \/ 0.534201 (-0.517503) | 0.380779 \/ 0.579283 (-0.198504) | 0.387113 \/ 0.434364 (-0.047251) | 0.446329 \/ 0.540337 (-0.094009) | 0.523819 \/ 1.386936 (-0.863118) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006803 \/ 0.011353 (-0.004549) | 0.004554 \/ 0.011008 (-0.006454) | 0.077406 \/ 0.038508 (0.038897) | 0.028495 \/ 0.023109 (0.005386) | 0.358847 \/ 0.275898 (0.082949) | 0.393256 \/ 0.323480 (0.069776) | 0.005317 \/ 0.007986 (-0.002669) | 0.004690 \/ 0.004328 (0.000362) | 0.075842 \/ 0.004250 (0.071592) | 0.041985 \/ 0.037052 (0.004933) | 0.367546 \/ 0.258489 (0.109057) | 0.408019 \/ 0.293841 (0.114178) | 0.030712 \/ 0.128546 (-0.097834) | 0.011756 \/ 0.075646 (-0.063891) | 0.086002 \/ 0.419271 (-0.333269) | 0.038949 \/ 0.043533 (-0.004583) | 0.361045 \/ 0.255139 (0.105906) | 0.381728 \/ 0.283200 (0.098528) | 0.090692 \/ 0.141683 (-0.050991) | 1.493251 \/ 1.452155 (0.041097) | 1.584566 \/ 1.492716 (0.091850) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.217470 \/ 0.018006 (0.199463) | 0.429955 \/ 0.000490 (0.429465) | 0.000394 \/ 0.000200 (0.000194) | 0.000078 \/ 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.026223 \/ 0.037411 (-0.011189) | 0.102570 \/ 0.014526 (0.088045) | 0.110848 \/ 0.176557 (-0.065709) | 0.162413 \/ 0.737135 (-0.574722) | 0.114579 \/ 0.296338 (-0.181760) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.464957 \/ 0.215209 (0.249748) | 4.656597 \/ 2.077655 (2.578942) | 2.279755 \/ 1.504120 (0.775636) | 2.230263 \/ 1.541195 (0.689068) | 2.341540 \/ 1.468490 (0.873050) | 0.699505 \/ 4.584777 (-3.885272) | 
3.389003 \/ 3.745712 (-0.356709) | 1.867526 \/ 5.269862 (-3.402336) | 1.167171 \/ 4.565676 (-3.398506) | 0.083451 \/ 0.424275 (-0.340824) | 0.012348 \/ 0.007607 (0.004741) | 0.584205 \/ 0.226044 (0.358161) | 5.853623 \/ 2.268929 (3.584694) | 2.646650 \/ 55.444624 (-52.797974) | 2.286504 \/ 6.876477 (-4.589973) | 2.327536 \/ 2.142072 (0.185464) | 0.811209 \/ 4.805227 (-3.994018) | 0.151842 \/ 6.500664 (-6.348822) | 0.067783 \/ 0.075469 (-0.007686) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.330427 \/ 1.841788 (-0.511360) | 14.668981 \/ 8.074308 (6.594673) | 13.321154 \/ 10.191392 (3.129762) | 0.164383 \/ 0.680424 (-0.516040) | 0.016667 \/ 0.534201 (-0.517534) | 0.383439 \/ 0.579283 (-0.195844) | 0.392988 \/ 0.434364 (-0.041376) | 0.443318 \/ 0.540337 (-0.097020) | 0.537849 \/ 1.386936 (-0.849087) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#e99bd4583bd636074b1826e2d0581161807480f1 \"CML watermark\")\n","
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006379 \/ 0.011353 (-0.004974) | 0.004691 \/ 0.011008 (-0.006317) | 0.098047 \/ 0.038508 (0.059539) | 0.028126 \/ 0.023109 (0.005017) | 0.327143 \/ 0.275898 (0.051245) | 0.362482 \/ 0.323480 (0.039002) | 0.004953 \/ 0.007986 (-0.003033) | 0.003386 \/ 0.004328 (-0.000943) | 0.076222 \/ 0.004250 (0.071971) | 0.037583 \/ 0.037052 (0.000531) | 0.329661 \/ 0.258489 (0.071172) | 0.365945 \/ 0.293841 (0.072104) | 0.030455 \/ 0.128546 (-0.098091) | 0.011397 \/ 0.075646 (-0.064249) | 0.323889 \/ 0.419271 (-0.095383) | 0.043719 \/ 0.043533 (0.000186) | 0.331499 \/ 0.255139 (0.076360) | 0.359357 \/ 0.283200 (0.076158) | 0.088904 \/ 0.141683 (-0.052779) | 1.458584 \/ 1.452155 (0.006429) | 1.549375 \/ 1.492716 (0.056658) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.195808 \/ 0.018006 (0.177802) | 0.411148 \/ 0.000490 (0.410659) | 0.003602 \/ 0.000200 (0.003402) | 0.000070 \/ 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.023278 \/ 0.037411 (-0.014133) | 0.097317 \/ 0.014526 (0.082791) | 0.102669 \/ 0.176557 (-0.073888) | 0.168203 \/ 0.737135 (-0.568933) | 0.105205 \/ 0.296338 (-0.191133) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.424800 \/ 0.215209 (0.209591) | 4.228444 \/ 2.077655 (2.150790) | 1.895544 \/ 1.504120 (0.391424) | 1.698793 \/ 1.541195 (0.157598) | 1.717931 \/ 1.468490 (0.249441) | 0.702251 \/ 4.584777 (-3.882526) | 
3.407013 \/ 3.745712 (-0.338699) | 2.784634 \/ 5.269862 (-2.485228) | 1.491317 \/ 4.565676 (-3.074359) | 0.082926 \/ 0.424275 (-0.341350) | 0.012320 \/ 0.007607 (0.004713) | 0.524188 \/ 0.226044 (0.298143) | 5.249798 \/ 2.268929 (2.980870) | 2.358953 \/ 55.444624 (-53.085672) | 1.985922 \/ 6.876477 (-4.890555) | 2.034293 \/ 2.142072 (-0.107779) | 0.815671 \/ 4.805227 (-3.989556) | 0.152583 \/ 6.500664 (-6.348081) | 0.066687 \/ 0.075469 (-0.008782) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.210901 \/ 1.841788 (-0.630886) | 13.621765 \/ 8.074308 (5.547457) | 14.213215 \/ 10.191392 (4.021823) | 0.143346 \/ 0.680424 (-0.537078) | 0.016904 \/ 0.534201 (-0.517297) | 0.379795 \/ 0.579283 (-0.199489) | 0.381287 \/ 0.434364 (-0.053077) | 0.449086 \/ 0.540337 (-0.091251) | 0.538792 \/ 1.386936 (-0.848144) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006207 \/ 0.011353 (-0.005146) | 0.004404 \/ 0.011008 (-0.006604) | 0.076363 \/ 0.038508 (0.037854) | 0.027335 \/ 0.023109 (0.004226) | 0.370967 \/ 0.275898 (0.095069) | 0.401936 \/ 0.323480 (0.078456) | 0.004835 \/ 0.007986 (-0.003151) | 0.004559 \/ 0.004328 (0.000231) | 0.074964 \/ 0.004250 (0.070713) | 0.038254 \/ 0.037052 (0.001202) | 0.374799 \/ 0.258489 (0.116310) | 0.425191 \/ 0.293841 (0.131350) | 0.035290 \/ 0.128546 (-0.093256) | 0.011379 \/ 0.075646 (-0.064267) | 0.085911 \/ 0.419271 (-0.333360) | 0.043073 \/ 0.043533 (-0.000460) | 0.373557 \/ 0.255139 (0.118418) | 0.395179 \/ 0.283200 (0.111979) | 0.098602 \/ 0.141683 (-0.043081) | 1.467234 \/ 1.452155 (0.015079) | 1.571868 \/ 1.492716 (0.079152) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.221848 \/ 0.018006 (0.203842) | 0.394943 \/ 0.000490 (0.394454) | 0.002983 \/ 0.000200 (0.002783) | 0.000078 \/ 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.024385 \/ 0.037411 (-0.013027) | 0.100087 \/ 0.014526 (0.085561) | 0.104897 \/ 0.176557 (-0.071660) | 0.156150 \/ 0.737135 (-0.580985) | 0.109113 \/ 0.296338 (-0.187226) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.441995 \/ 0.215209 (0.226786) | 4.415423 \/ 2.077655 (2.337769) | 2.148791 \/ 1.504120 (0.644671) | 1.947061 \/ 1.541195 (0.405866) | 1.954807 \/ 1.468490 (0.486317) | 0.690245 \/ 4.584777 (-3.894532) | 
3.372766 \/ 3.745712 (-0.372946) | 1.851073 \/ 5.269862 (-3.418789) | 1.155558 \/ 4.565676 (-3.410118) | 0.082796 \/ 0.424275 (-0.341479) | 0.012845 \/ 0.007607 (0.005238) | 0.548173 \/ 0.226044 (0.322129) | 5.530984 \/ 2.268929 (3.262056) | 2.665360 \/ 55.444624 (-52.779264) | 2.324266 \/ 6.876477 (-4.552211) | 2.329397 \/ 2.142072 (0.187324) | 0.801481 \/ 4.805227 (-4.003746) | 0.152145 \/ 6.500664 (-6.348519) | 0.067915 \/ 0.075469 (-0.007554) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.291488 \/ 1.841788 (-0.550299) | 13.912143 \/ 8.074308 (5.837835) | 12.975493 \/ 10.191392 (2.784101) | 0.129915 \/ 0.680424 (-0.550509) | 0.016516 \/ 0.534201 (-0.517685) | 0.386979 \/ 0.579283 (-0.192304) | 0.389163 \/ 0.434364 (-0.045201) | 0.443324 \/ 0.540337 (-0.097014) | 0.533744 \/ 1.386936 (-0.853192) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#eb48834fc2aa45cad73fe70a7ecaa0dd6015b8d0 \"CML watermark\")\n","The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_5867). All of your documentation changes will be reflected on that endpoint.","
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008635 \/ 0.011353 (-0.002717) | 0.006014 \/ 0.011008 (-0.004995) | 0.116314 \/ 0.038508 (0.077806) | 0.041113 \/ 0.023109 (0.018004) | 0.358564 \/ 0.275898 (0.082666) | 0.397547 \/ 0.323480 (0.074067) | 0.007012 \/ 0.007986 (-0.000974) | 0.004638 \/ 0.004328 (0.000310) | 0.086509 \/ 0.004250 (0.082259) | 0.056731 \/ 0.037052 (0.019678) | 0.358859 \/ 0.258489 (0.100370) | 0.425339 \/ 0.293841 (0.131498) | 0.041780 \/ 0.128546 (-0.086767) | 0.014203 \/ 0.075646 (-0.061443) | 0.398240 \/ 0.419271 (-0.021031) | 0.060180 \/ 0.043533 (0.016647) | 0.352887 \/ 0.255139 (0.097748) | 0.381793 \/ 0.283200 (0.098594) | 0.148578 \/ 0.141683 (0.006895) | 1.749483 \/ 1.452155 (0.297328) | 1.869765 \/ 1.492716 (0.377049) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.244435 \/ 0.018006 (0.226428) | 0.499545 \/ 0.000490 (0.499055) | 0.004576 \/ 0.000200 (0.004376) | 0.000147 \/ 0.000054 (0.000093) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.031163 \/ 0.037411 (-0.006249) | 0.131082 \/ 0.014526 (0.116556) | 0.137442 \/ 0.176557 (-0.039114) | 0.203783 \/ 0.737135 (-0.533352) | 0.144068 \/ 0.296338 (-0.152270) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.503587 \/ 0.215209 (0.288378) | 5.011953 \/ 2.077655 (2.934299) | 2.366968 \/ 1.504120 (0.862848) | 2.130914 \/ 1.541195 (0.589719) | 2.243560 \/ 1.468490 (0.775070) | 0.856719 \/ 4.584777 (-3.728058) | 
4.707445 \/ 3.745712 (0.961733) | 2.506166 \/ 5.269862 (-2.763696) | 1.590400 \/ 4.565676 (-2.975277) | 0.102075 \/ 0.424275 (-0.322200) | 0.014499 \/ 0.007607 (0.006892) | 0.624966 \/ 0.226044 (0.398922) | 6.197671 \/ 2.268929 (3.928742) | 2.898481 \/ 55.444624 (-52.546143) | 2.499590 \/ 6.876477 (-4.376886) | 2.649690 \/ 2.142072 (0.507617) | 1.012542 \/ 4.805227 (-3.792685) | 0.202833 \/ 6.500664 (-6.297831) | 0.078033 \/ 0.075469 (0.002564) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.448321 \/ 1.841788 (-0.393467) | 18.084909 \/ 8.074308 (10.010601) | 17.383027 \/ 10.191392 (7.191635) | 0.212167 \/ 0.680424 (-0.468256) | 0.020754 \/ 0.534201 (-0.513447) | 0.514653 \/ 0.579283 (-0.064630) | 0.543307 \/ 0.434364 (0.108944) | 0.653066 \/ 0.540337 (0.112728) | 0.745773 \/ 1.386936 (-0.641164) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008576 \/ 0.011353 (-0.002777) | 0.005834 \/ 0.011008 (-0.005174) | 0.089842 \/ 0.038508 (0.051334) | 0.040035 \/ 0.023109 (0.016926) | 0.449329 \/ 0.275898 (0.173431) | 0.471572 \/ 0.323480 (0.148092) | 0.006771 \/ 0.007986 (-0.001215) | 0.006129 \/ 0.004328 (0.001800) | 0.090370 \/ 0.004250 (0.086119) | 0.056924 \/ 0.037052 (0.019872) | 0.455134 \/ 0.258489 (0.196645) | 0.502670 \/ 0.293841 (0.208829) | 0.041689 \/ 0.128546 (-0.086857) | 0.014447 \/ 0.075646 (-0.061200) | 0.104528 \/ 0.419271 (-0.314744) | 0.055535 \/ 0.043533 (0.012003) | 0.450667 \/ 0.255139 (0.195528) | 0.453108 \/ 0.283200 (0.169908) | 0.119296 \/ 0.141683 (-0.022387) | 1.747359 \/ 1.452155 (0.295204) | 1.839421 \/ 1.492716 (0.346705) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.314910 \/ 0.018006 (0.296904) | 0.495575 \/ 0.000490 (0.495085) | 0.054702 \/ 0.000200 (0.054503) | 0.000505 \/ 0.000054 (0.000450) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.033991 \/ 0.037411 (-0.003420) | 0.133268 \/ 0.014526 (0.118742) | 0.142286 \/ 0.176557 (-0.034271) | 0.200562 \/ 0.737135 (-0.536573) | 0.147161 \/ 0.296338 (-0.149178) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.520288 \/ 0.215209 (0.305079) | 5.227684 \/ 2.077655 (3.150029) | 2.553330 \/ 1.504120 (1.049210) | 2.324338 \/ 1.541195 (0.783143) | 2.406790 \/ 1.468490 (0.938300) | 0.850404 \/ 4.584777 (-3.734373) | 
4.612156 \/ 3.745712 (0.866444) | 2.592546 \/ 5.269862 (-2.677316) | 1.708984 \/ 4.565676 (-2.856692) | 0.103751 \/ 0.424275 (-0.320524) | 0.014379 \/ 0.007607 (0.006772) | 0.634661 \/ 0.226044 (0.408616) | 6.344939 \/ 2.268929 (4.076010) | 3.179807 \/ 55.444624 (-52.264817) | 2.831856 \/ 6.876477 (-4.044621) | 2.866729 \/ 2.142072 (0.724656) | 0.994519 \/ 4.805227 (-3.810708) | 0.201566 \/ 6.500664 (-6.299098) | 0.078902 \/ 0.075469 (0.003433) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.538738 \/ 1.841788 (-0.303049) | 18.746367 \/ 8.074308 (10.672059) | 16.504763 \/ 10.191392 (6.313371) | 0.197898 \/ 0.680424 (-0.482526) | 0.020469 \/ 0.534201 (-0.513732) | 0.529106 \/ 0.579283 (-0.050177) | 0.536891 \/ 0.434364 (0.102527) | 0.600947 \/ 0.540337 (0.060610) | 0.701713 \/ 1.386936 (-0.685223) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#3054f66b4765a520e6fe165c44a4307d40775229 \"CML watermark\")\n"],"created_at":1684177415000,"updated_at":1684330908000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5867","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5867","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5867.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5867.patch","merged_at":null},"body":"Fix https:\/\/github.com\/huggingface\/datasets\/issues\/5839\r\n\r\nPS: The `Pickler.save` method is becoming a bit messy, so I plan to refactor the pickler a bit at some point.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5867\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5867\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5866","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5866\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5866\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5866\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5866","id":1710496993,"node_id":"I_kwDODunzps5l9Bzh","number":5866,"title":"Issue with Sequence 
features","user":{"login":"alialamiidrissi","id":14365168,"node_id":"MDQ6VXNlcjE0MzY1MTY4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/14365168?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/alialamiidrissi","html_url":"https:\/\/github.com\/alialamiidrissi","followers_url":"https:\/\/api.github.com\/users\/alialamiidrissi\/followers","following_url":"https:\/\/api.github.com\/users\/alialamiidrissi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/alialamiidrissi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/alialamiidrissi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/alialamiidrissi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/alialamiidrissi\/orgs","repos_url":"https:\/\/api.github.com\/users\/alialamiidrissi\/repos","events_url":"https:\/\/api.github.com\/users\/alialamiidrissi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/alialamiidrissi\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting! I've opened a PR with a fix."],"created_at":1684170809000,"updated_at":1685102237000,"closed_at":1685102237000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\r\n\r\nSequences features sometimes causes errors when the specified length is not -1\r\n\r\n### Steps to reproduce the bug\r\n\r\n```python\r\nimport numpy as np\r\nfrom datasets import Features, ClassLabel, Sequence, Value, Dataset\r\nfeats = Features(**{'target': ClassLabel(names=[0, 1]),'x': Sequence(feature=Value(dtype='float64',id=None), length=2, id=None)})\r\nDataset.from_dict({\"target\": np.ones(2000).astype(int), \"x\": np.random.rand(2000,2)},features = feats).flatten_indices()\r\n```\r\nThrows:\r\n```\r\n TypeError: Couldn't cast array of type\r\n fixed_size_list[2]\r\n to\r\n Sequence(feature=Value(dtype='float64', id=None), length=2, id=None)\r\n```\r\nThe same code works without any issues when `length = -1`\r\n\r\nEDIT: The error seems to happen only when the length of the dataset is bigger than 1000 for some reason\r\n### Expected behavior\r\n\r\nNo exception\r\n\r\n### Environment info\r\n\r\n- `datasets` version: 2.10.1\r\n- Python version: 3.9.5\r\n- PyArrow version: 11.0.0\r\n- Pandas version: 1.4.1","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5866\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5866\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5865","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5865\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5865\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5865\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5865","id":1710455738,"node_id":"PR_kwDODunzps5QiHnw","number":5865,"title":"Deprecate task 
api","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","If it's easy to keep supporting it we can keep it no ? There are many datasets on the hub that implement the tasks templates in dataset scripts and it's maybe easier to keep task templates than opening PRs to those datasets.","do we know if people use the tasks api?\r\n\r\nedit: i mean, i'm fine with removing it if it's not used much, especially considering that it's not documented well.","@lhoestq \r\n\r\nLess than 80 public datasets (all canonical) implement `task_templates`, so updating them should be easy.\r\n\r\nPS: I skipped gated datasets when checking for the presence of `task_templates`, but it's safe to assume their contribution to the total count is insignificant.","
[CML bot benchmark report omitted]","Ok ! I also know https:\/\/huggingface.co\/datasets\/hf-internal-testing\/cats_vs_dogs_sample\/blob\/main\/cats_vs_dogs_sample.py that needs to be updated as well","
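For context, the dataset scripts mentioned in this thread declare their tasks via `task_templates` in `DatasetInfo`. A minimal sketch of what such a declaration looks like (using the pre-deprecation `datasets.tasks` API; the builder name, column names, and labels here are illustrative, and the split/generation methods are omitted):

```python
import datasets
from datasets.tasks import ImageClassification


class CatsVsDogsSample(datasets.GeneratorBasedBuilder):
    """Hypothetical builder showing the task API this PR deprecates."""

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features(
                {
                    "image": datasets.Image(),
                    "labels": datasets.ClassLabel(names=["cat", "dog"]),
                }
            ),
            # The deprecated part: a task template declared in the script.
            task_templates=[
                ImageClassification(image_column="image", label_column="labels")
            ],
        )
```

Because the template lives inside the script, every dataset that declares one needs a follow-up PR once the API is removed, which is the maintenance cost being weighed in this thread.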
[CML bot benchmark report omitted]","
[CML bot benchmark report omitted]","
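On the consumer side, the pattern that goes away with this deprecation is roughly the following (a sketch, assuming the `prepare_for_task` method that applied a declared template by renaming and casting columns; the dataset name is illustrative):

```python
from datasets import load_dataset

ds = load_dataset("cats_vs_dogs", split="train")  # illustrative dataset
# Deprecated: align the columns to the standard task schema in one call.
ds = ds.prepare_for_task("image-classification")
```

Nothing about the data changes here; the template only standardizes column names and types, which is why explicit `rename_column` / `cast_column` calls can replace it.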
[CML bot benchmark report omitted]"],"created_at":1684169304000,"updated_at":1688992439000,"closed_at":1688991841000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5865","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5865","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5865.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5865.patch","merged_at":1688991841000},"body":"The task API is not well adopted in the ecosystem, so this PR deprecates it. 
The `train_eval_index` is a newer, more flexible solution that should be used instead (I think?).\r\n\r\nThese are the projects that still use the task API:\r\n* the image classification example in Transformers: [here](https:\/\/github.com\/huggingface\/transformers\/blob\/8f76dc8e5aaad58f2df7748b6d6970376f315a9a\/examples\/pytorch\/image-classification\/run_image_classification_no_trainer.py#L262) and [here](https:\/\/github.com\/huggingface\/transformers\/blob\/8f76dc8e5aaad58f2df7748b6d6970376f315a9a\/examples\/tensorflow\/image-classification\/run_image_classification.py#L277)\r\n* autotrain: [here](https:\/\/github.com\/huggingface\/autotrain-backend\/blob\/455e274004b56f9377d64db4ab03671508fcc4cd\/zeus\/zeus\/run\/utils.py#L666)\r\n* api-inference-community: [here](https:\/\/github.com\/huggingface\/api-inference-community\/blob\/fb8fb29d577a5bf01c82944db745489a6d6ed3d4\/manage.py#L64) (but the rest of the code does not call the `resolve_dataset` function)\r\n\r\nSo we need to update these files after the merge.\r\n\r\ncc @lewtun ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5865\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5865\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5864","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5864\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5864\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5864\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5864","id":1710450047,"node_id":"I_kwDODunzps5l82V_","number":5864,"title":"Slow iteration over Torch tensors","user":{"login":"crisostomi","id":51738205,"node_id":"MDQ6VXNlcjUxNzM4MjA1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/51738205?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/crisostomi","html_url":"https:\/\/github.com\/crisostomi","followers_url":"https:\/\/api.github.com\/users\/crisostomi\/followers","following_url":"https:\/\/api.github.com\/users\/crisostomi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/crisostomi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/crisostomi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/crisostomi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/crisostomi\/orgs","repos_url":"https:\/\/api.github.com\/users\/crisostomi\/repos","events_url":"https:\/\/api.github.com\/users\/crisostomi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/crisostomi\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I am highly interested in the performance of `datasets`, so I ran your example as a curious user.\r\n```python\r\ntrain_dataset.cast_column(\"x\", Array3D(shape=img_shape, dtype=\"float32\"))\r\n```\r\nreturns a new dataset rather than modifying in place, and \"x\" is a new column, so it should be\r\n```python\r\nds = train_dataset.cast_column(\"img\", Array3D(shape=(3,32,32), dtype=\"float32\"))\r\n```\r\nI rewrote your example 
as\r\n```python\r\ntrain_dataset = load_dataset(\r\n 'cifar100',\r\n split='train',\r\n use_auth_token=True,\r\n)\r\ntransform_func = torchvision.transforms.Compose([\r\n ToTensor(),\r\n Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),\r\n])\r\n\r\ntrain_dataset = train_dataset.map(\r\n desc=\"Preprocessing samples\",\r\n function=lambda x: {\"img\": transform_func(x[\"img\"])},\r\n)\r\nds = train_dataset.cast_column(\"img\", Array3D(shape=(3,32,32), dtype=\"float32\"))\r\nfor i in tqdm(ds):\r\n pass\r\n```\r\nwhich requires ~11s in my environment, while\r\n```python\r\nds = load_dataset(\r\n 'cifar100',\r\n split='train',\r\n use_auth_token=True,\r\n)\r\n\r\nfor i in tqdm(ds):\r\n pass\r\n```\r\nonly needs ~6s. (So I guess it's still undesirable.)"],"created_at":1684169038000,"updated_at":1684207658000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nI have a problem related to this [issue](https:\/\/github.com\/huggingface\/datasets\/issues\/5841): iteration with a Torch dataloader is much slower if I first apply a ToTensor transform to the input than if I use the vanilla NumPy tensors. In particular, it takes 5 seconds to iterate over the vanilla input and ~30s after the transformation.\n\n### Steps to reproduce the bug\n\nHere is the minimal code to reproduce the problem:\r\n\r\n```python\r\nimport numpy as np\r\nfrom datasets import Dataset, DatasetDict, load_dataset, Array3D, Image, Features\r\nfrom torch.utils.data import DataLoader\r\nfrom tqdm import tqdm\r\nimport torchvision \r\nfrom torchvision.transforms import ToTensor, Normalize\r\n\r\n\r\n#################################\r\n# Without transform\r\n#################################\r\n \r\ntrain_dataset = load_dataset(\r\n 'cifar100',\r\n split='train',\r\n use_auth_token=True,\r\n)\r\n\r\ntrain_dataset.set_format(type=\"numpy\", columns=[\"img\", \"fine_label\"])\r\n\r\ntrain_loader = DataLoader(\r\n train_dataset,\r\n batch_size=100,\r\n pin_memory=False,\r\n shuffle=True,\r\n num_workers=8,\r\n)\r\n\r\nfor batch in tqdm(train_loader, desc=\"Loading data, no transform\"):\r\n pass\r\n\r\n\r\n#################################\r\n# With transform\r\n#################################\r\n\r\ntransform_func = torchvision.transforms.Compose([\r\n ToTensor(),\r\n Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),\r\n])\r\n \r\ntrain_dataset = train_dataset.map(\r\n desc=\"Preprocessing samples\",\r\n function=lambda x: {\"img\": transform_func(x[\"img\"])},\r\n)\r\n\r\ntrain_dataset.set_format(type=\"numpy\", columns=[\"img\", \"fine_label\"])\r\n\r\n\r\ntrain_loader = DataLoader(\r\n train_dataset,\r\n batch_size=100,\r\n pin_memory=False,\r\n shuffle=True,\r\n num_workers=8,\r\n)\r\n\r\n\r\nfor batch in tqdm(train_loader, desc=\"Loading data after transform\"):\r\n pass\r\n```\r\n\r\nI have also tried converting the Image column to an Array3D:\r\n```python\r\nimg_shape = train_dataset[0][\"img\"].shape\r\n\r\nfeatures = train_dataset.features.copy()\r\nfeatures[\"x\"] = Array3D(shape=img_shape, dtype=\"float32\")\r\n\r\ntrain_dataset = train_dataset.map(\r\n desc=\"Preprocessing samples\",\r\n function=lambda x: {\"x\": np.array(x[\"img\"], dtype=np.uint8)},\r\n features=features,\r\n)\r\ntrain_dataset.cast_column(\"x\", Array3D(shape=img_shape, dtype=\"float32\"))\r\ntrain_dataset.set_format(type=\"numpy\", columns=[\"x\", \"fine_label\"])\r\n```\r\nbut to no avail. 
Any clue?\n\n### Expected behavior\n\nThe iteration should take approximately the same time with or without the transformation, as it doesn't change the shape of the input. What may be the issue here?\n\n### Environment info\n\n```\r\n- `datasets` version: 2.12.0\r\n- Platform: Linux-5.4.0-137-generic-x86_64-with-glibc2.31\r\n- Python version: 3.9.16\r\n- Huggingface_hub version: 0.14.1\r\n- PyArrow version: 12.0.0\r\n- Pandas version: 2.0.1\r\n```","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5864\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5864\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5863","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5863\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5863\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5863\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5863","id":1710335905,"node_id":"PR_kwDODunzps5QhtlM","number":5863,"title":"Use a new low-memory approach for tf dataset index shuffling","user":{"login":"Rocketknight1","id":12866554,"node_id":"MDQ6VXNlcjEyODY2NTU0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12866554?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Rocketknight1","html_url":"https:\/\/github.com\/Rocketknight1","followers_url":"https:\/\/api.github.com\/users\/Rocketknight1\/followers","following_url":"https:\/\/api.github.com\/users\/Rocketknight1\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Rocketknight1\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Rocketknight1\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Rocketknight1\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Rocketknight1\/orgs","repos_url":"https:\/\/api.github.com\/users\/Rocketknight1\/repos","events_url":"https:\/\/api.github.com\/users\/Rocketknight1\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Rocketknight1\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_5863). All of your documentation changes will be reflected on that endpoint.","
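For readers wondering what the "low-memory approach" in this PR's title means: instead of materializing a full permutation of the dataset indices in memory for every epoch, the shuffled position of each index can be computed on the fly. A minimal sketch of that idea, assuming TF's `tf.random.experimental.index_shuffle` (available in recent TensorFlow releases; this is an illustration, not the PR's exact implementation):

```python
import tensorflow as tf

num_examples = 10_000_000  # illustrative dataset size
seed = tf.constant([42, 0], dtype=tf.int64)  # stateless shuffle seed

def shuffled_index(i):
    # Maps position i to its place in a pseudorandom permutation of
    # [0, num_examples) without ever building the permutation array.
    return tf.random.experimental.index_shuffle(
        index=i, seed=seed, max_index=num_examples - 1
    )

index_ds = tf.data.Dataset.range(num_examples).map(shuffled_index).batch(32)
```

A fixed `seed` gives a deterministic permutation (vary it per epoch to reshuffle), and memory stays constant in the dataset size, which is the point of the change.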
[CML bot benchmark report omitted]","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006604 \/ 0.011353 (-0.004749) | 0.004508 \/ 0.011008 (-0.006500) | 0.098652 \/ 0.038508 (0.060144) | 0.028172 \/ 0.023109 (0.005063) | 0.366997 \/ 0.275898 (0.091099) | 0.403691 \/ 0.323480 (0.080211) | 0.005127 \/ 0.007986 (-0.002859) | 0.003340 \/ 0.004328 (-0.000989) | 0.075408 \/ 0.004250 (0.071157) | 0.038049 \/ 0.037052 (0.000996) | 0.367914 \/ 0.258489 (0.109425) | 0.410958 \/ 0.293841 (0.117118) | 0.030454 \/ 0.128546 (-0.098093) | 0.011422 \/ 0.075646 (-0.064224) | 0.325048 \/ 0.419271 (-0.094223) | 0.042959 \/ 0.043533 (-0.000574) | 0.374536 \/ 0.255139 (0.119397) | 0.394738 \/ 0.283200 (0.111538) | 0.090481 \/ 0.141683 (-0.051201) | 1.504858 \/ 1.452155 (0.052703) | 1.569072 \/ 1.492716 (0.076356) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.010062 \/ 0.018006 (-0.007945) | 0.408619 \/ 0.000490 (0.408130) | 0.002307 \/ 0.000200 (0.002107) | 0.000070 \/ 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.022898 \/ 0.037411 (-0.014514) | 0.096975 \/ 0.014526 (0.082449) | 0.103032 \/ 0.176557 (-0.073524) | 0.164877 \/ 0.737135 (-0.572259) | 0.107324 \/ 0.296338 (-0.189014) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.446652 \/ 0.215209 (0.231442) | 4.466939 \/ 2.077655 (2.389285) | 2.204590 \/ 1.504120 (0.700471) | 2.004048 \/ 1.541195 (0.462853) | 2.053035 \/ 1.468490 (0.584545) | 0.696617 \/ 4.584777 (-3.888160) | 
3.391173 \/ 3.745712 (-0.354539) | 1.863306 \/ 5.269862 (-3.406556) | 1.160637 \/ 4.565676 (-3.405039) | 0.083115 \/ 0.424275 (-0.341160) | 0.012470 \/ 0.007607 (0.004862) | 0.547207 \/ 0.226044 (0.321163) | 5.500667 \/ 2.268929 (3.231739) | 2.656615 \/ 55.444624 (-52.788009) | 2.313281 \/ 6.876477 (-4.563195) | 2.395632 \/ 2.142072 (0.253559) | 0.815361 \/ 4.805227 (-3.989867) | 0.152112 \/ 6.500664 (-6.348552) | 0.067485 \/ 0.075469 (-0.007984) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.206975 \/ 1.841788 (-0.634813) | 13.684136 \/ 8.074308 (5.609828) | 13.919129 \/ 10.191392 (3.727737) | 0.140767 \/ 0.680424 (-0.539657) | 0.016445 \/ 0.534201 (-0.517756) | 0.379136 \/ 0.579283 (-0.200147) | 0.385395 \/ 0.434364 (-0.048969) | 0.445781 \/ 0.540337 (-0.094556) | 0.522056 \/ 1.386936 (-0.864880) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006370 \/ 0.011353 (-0.004983) | 0.004514 \/ 0.011008 (-0.006495) | 0.075671 \/ 0.038508 (0.037163) | 0.026723 \/ 0.023109 (0.003614) | 0.359819 \/ 0.275898 (0.083921) | 0.387935 \/ 0.323480 (0.064456) | 0.004888 \/ 0.007986 (-0.003098) | 0.004619 \/ 0.004328 (0.000290) | 0.075546 \/ 0.004250 (0.071295) | 0.039024 \/ 0.037052 (0.001971) | 0.361173 \/ 0.258489 (0.102684) | 0.411425 \/ 0.293841 (0.117584) | 0.030842 \/ 0.128546 (-0.097705) | 0.011555 \/ 0.075646 (-0.064091) | 0.084697 \/ 0.419271 (-0.334574) | 0.039281 \/ 0.043533 (-0.004252) | 0.370082 \/ 0.255139 (0.114943) | 0.382113 \/ 0.283200 (0.098913) | 0.091237 \/ 0.141683 (-0.050445) | 1.534185 \/ 1.452155 (0.082030) | 1.576488 \/ 1.492716 (0.083772) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.226568 \/ 0.018006 (0.208562) | 0.401566 \/ 0.000490 (0.401076) | 0.002915 \/ 0.000200 (0.002715) | 0.000076 \/ 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025357 \/ 0.037411 (-0.012054) | 0.099747 \/ 0.014526 (0.085221) | 0.106443 \/ 0.176557 (-0.070113) | 0.157147 \/ 0.737135 (-0.579989) | 0.110759 \/ 0.296338 (-0.185580) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.444648 \/ 0.215209 (0.229439) | 4.437930 \/ 2.077655 (2.360275) | 2.154033 \/ 1.504120 (0.649913) | 1.958351 \/ 1.541195 (0.417157) | 1.991031 \/ 1.468490 (0.522541) | 0.691440 \/ 4.584777 (-3.893337) | 
3.369087 \/ 3.745712 (-0.376625) | 1.847103 \/ 5.269862 (-3.422758) | 1.152509 \/ 4.565676 (-3.413168) | 0.082519 \/ 0.424275 (-0.341756) | 0.012609 \/ 0.007607 (0.005001) | 0.547267 \/ 0.226044 (0.321222) | 5.501335 \/ 2.268929 (3.232407) | 2.621079 \/ 55.444624 (-52.823545) | 2.281332 \/ 6.876477 (-4.595145) | 2.300427 \/ 2.142072 (0.158354) | 0.803611 \/ 4.805227 (-4.001616) | 0.151784 \/ 6.500664 (-6.348880) | 0.067801 \/ 0.075469 (-0.007669) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.343201 \/ 1.841788 (-0.498587) | 13.901033 \/ 8.074308 (5.826725) | 13.114738 \/ 10.191392 (2.923346) | 0.149358 \/ 0.680424 (-0.531066) | 0.016596 \/ 0.534201 (-0.517605) | 0.377310 \/ 0.579283 (-0.201973) | 0.387045 \/ 0.434364 (-0.047319) | 0.441272 \/ 0.540337 (-0.099065) | 0.525783 \/ 1.386936 (-0.861153) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#c127e5575ab4e22648976ad268d76264ef5d04f8 \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008147 \/ 0.011353 (-0.003205) | 0.005531 \/ 0.011008 (-0.005477) | 0.099796 \/ 0.038508 (0.061288) | 0.041574 \/ 0.023109 (0.018465) | 0.315752 \/ 0.275898 (0.039854) | 0.369846 \/ 0.323480 (0.046366) | 0.006489 \/ 0.007986 (-0.001497) | 0.004339 \/ 0.004328 (0.000010) | 0.074769 \/ 0.004250 (0.070519) | 0.051313 \/ 0.037052 (0.014261) | 0.313463 \/ 0.258489 (0.054974) | 0.369918 \/ 0.293841 (0.076077) | 0.035893 \/ 0.128546 (-0.092653) | 0.012487 \/ 0.075646 (-0.063159) | 0.336464 \/ 0.419271 (-0.082807) | 0.052870 \/ 0.043533 (0.009337) | 0.310795 \/ 0.255139 (0.055656) | 0.333146 \/ 0.283200 (0.049946) | 0.112813 \/ 0.141683 (-0.028870) | 1.488192 \/ 1.452155 (0.036038) | 1.563438 \/ 1.492716 (0.070721) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.015015 \/ 0.018006 (-0.002991) | 0.531783 \/ 0.000490 (0.531294) | 0.005039 \/ 0.000200 (0.004839) | 0.000103 \/ 0.000054 (0.000049) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030205 \/ 0.037411 (-0.007207) | 0.115997 \/ 0.014526 (0.101471) | 0.122958 \/ 0.176557 (-0.053599) | 0.186956 \/ 0.737135 (-0.550180) | 0.130268 \/ 0.296338 (-0.166071) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.402648 \/ 0.215209 (0.187439) | 3.996121 \/ 2.077655 (1.918466) | 1.811715 \/ 1.504120 (0.307595) | 1.640805 \/ 1.541195 (0.099610) | 1.810478 \/ 1.468490 (0.341988) | 0.699996 \/ 4.584777 (-3.884781) | 
3.834020 \/ 3.745712 (0.088308) | 3.688364 \/ 5.269862 (-1.581498) | 1.973828 \/ 4.565676 (-2.591849) | 0.087085 \/ 0.424275 (-0.337190) | 0.012501 \/ 0.007607 (0.004894) | 0.498934 \/ 0.226044 (0.272889) | 4.977608 \/ 2.268929 (2.708680) | 2.258678 \/ 55.444624 (-53.185947) | 1.934251 \/ 6.876477 (-4.942226) | 2.177409 \/ 2.142072 (0.035337) | 0.873470 \/ 4.805227 (-3.931757) | 0.173132 \/ 6.500664 (-6.327532) | 0.069144 \/ 0.075469 (-0.006325) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.181554 \/ 1.841788 (-0.660234) | 15.694468 \/ 8.074308 (7.620160) | 15.026954 \/ 10.191392 (4.835562) | 0.167092 \/ 0.680424 (-0.513332) | 0.017921 \/ 0.534201 (-0.516280) | 0.425649 \/ 0.579283 (-0.153634) | 0.423225 \/ 0.434364 (-0.011139) | 0.522132 \/ 0.540337 (-0.018205) | 0.612806 \/ 1.386936 (-0.774130) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007896 \/ 0.011353 (-0.003457) | 0.005581 \/ 0.011008 (-0.005427) | 0.076338 \/ 0.038508 (0.037830) | 0.037064 \/ 0.023109 (0.013954) | 0.399706 \/ 0.275898 (0.123808) | 0.431698 \/ 0.323480 (0.108218) | 0.006846 \/ 0.007986 (-0.001140) | 0.006010 \/ 0.004328 (0.001682) | 0.075771 \/ 0.004250 (0.071520) | 0.058214 \/ 0.037052 (0.021161) | 0.395753 \/ 0.258489 (0.137264) | 0.459925 \/ 0.293841 (0.166084) | 0.036349 \/ 0.128546 (-0.092197) | 0.012720 \/ 0.075646 (-0.062926) | 0.087248 \/ 0.419271 (-0.332024) | 0.049405 \/ 0.043533 (0.005872) | 0.387576 \/ 0.255139 (0.132437) | 0.409861 \/ 0.283200 (0.126661) | 0.111639 \/ 0.141683 (-0.030043) | 1.482840 \/ 1.452155 (0.030685) | 1.574465 \/ 1.492716 (0.081749) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.320628 \/ 0.018006 (0.302622) | 0.556338 \/ 0.000490 (0.555848) | 0.000445 \/ 0.000200 (0.000245) | 0.000060 \/ 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.032905 \/ 0.037411 (-0.004507) | 0.121253 \/ 0.014526 (0.106727) | 0.127241 \/ 0.176557 (-0.049316) | 0.178090 \/ 0.737135 (-0.559045) | 0.143285 \/ 0.296338 (-0.153054) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.437852 \/ 0.215209 (0.222643) | 4.369770 \/ 2.077655 (2.292115) | 2.219932 \/ 1.504120 (0.715812) | 2.032520 \/ 1.541195 (0.491325) | 2.154300 \/ 1.468490 (0.685810) | 0.678942 \/ 4.584777 (-3.905835) | 
3.768148 \/ 3.745712 (0.022436) | 2.152738 \/ 5.269862 (-3.117124) | 1.341480 \/ 4.565676 (-3.224197) | 0.084326 \/ 0.424275 (-0.339949) | 0.012288 \/ 0.007607 (0.004681) | 0.547677 \/ 0.226044 (0.321633) | 5.496777 \/ 2.268929 (3.227848) | 2.702267 \/ 55.444624 (-52.742357) | 2.388580 \/ 6.876477 (-4.487897) | 2.471673 \/ 2.142072 (0.329601) | 0.833645 \/ 4.805227 (-3.971582) | 0.167113 \/ 6.500664 (-6.333551) | 0.067658 \/ 0.075469 (-0.007811) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.282050 \/ 1.841788 (-0.559737) | 16.413677 \/ 8.074308 (8.339369) | 14.080910 \/ 10.191392 (3.889518) | 0.171782 \/ 0.680424 (-0.508642) | 0.018186 \/ 0.534201 (-0.516015) | 0.425244 \/ 0.579283 (-0.154039) | 0.430260 \/ 0.434364 (-0.004104) | 0.500838 \/ 0.540337 (-0.039499) | 0.591900 \/ 1.386936 (-0.795036) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#5fc5c538de84da400118e3712077acc580ce85c4 \"CML watermark\")\n","The approach we take here is to no longer materialize the entire index array or shuffle buffer. Instead, we do the following:\r\n\r\n1) Generate a dataset with `tf.data.Dataset.range`. This dataset is not materialized - it's basically a range iterator.\r\n2) When we begin iterating over a dataset, generate a random seed. This value is constant for each pass over the dataset, and is regenerated if we start a new iteration or epoch over the dataset.\r\n3) Map the range dataset and the random seed with `tf.random.index_shuffle`. This converts indices into the equivalent values in a permuted array. In other words `tf.random.index_shuffle(indices, maxval=50_000_000)` is equivalent to `np.random.permutation(50_000_000)[indices]`, but without ever materializing the `np.random.permutation(50_000_000)` array.\r\n\r\nUsing this approach gives us a complete iteration over the dataset that does not skip any samples, compiles in TF and also never materializes the complete index array, which should avoid the memory usage issues. I'm testing that now!","
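To make step 3 concrete, here is a minimal sketch of the index-shuffle trick described above. It is illustrative rather than the PR's actual code: `num_samples` and the seed handling are assumptions, and the op is exposed as `tf.random.experimental.index_shuffle` in recent TF releases (the thread gates the feature on TF >= 2.9; the exact namespace may vary by version).

```python
import tensorflow as tf

num_samples = 50_000_000  # hypothetical dataset size

# One seed per epoch: reusing it reproduces the same permutation,
# drawing a new one starts a fresh shuffle for the next epoch.
epoch_seed = tf.random.uniform([2], maxval=2**31 - 1, dtype=tf.int64)

# Step 1: a lazy range -- no index array is ever materialized.
indices = tf.data.Dataset.range(num_samples)

# Step 3: index_shuffle(index, seed, max_index) maps an index to its
# counterpart in a random permutation of [0, max_index], i.e. the same
# result as np.random.permutation(num_samples)[index] without ever
# building the permutation array.
shuffled = indices.map(
    lambda i: tf.random.experimental.index_shuffle(
        index=i, seed=epoch_seed, max_index=num_samples - 1
    )
)

# `shuffled` yields every index in [0, num_samples) exactly once,
# in permuted order, and can then be mapped to actual sample reads.
```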
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008395 \/ 0.011353 (-0.002958) | 0.005893 \/ 0.011008 (-0.005115) | 0.117081 \/ 0.038508 (0.078573) | 0.040987 \/ 0.023109 (0.017878) | 0.394234 \/ 0.275898 (0.118336) | 0.447036 \/ 0.323480 (0.123556) | 0.006703 \/ 0.007986 (-0.001283) | 0.006085 \/ 0.004328 (0.001757) | 0.086479 \/ 0.004250 (0.082228) | 0.050192 \/ 0.037052 (0.013140) | 0.400958 \/ 0.258489 (0.142469) | 0.455551 \/ 0.293841 (0.161710) | 0.041481 \/ 0.128546 (-0.087065) | 0.014135 \/ 0.075646 (-0.061511) | 0.399929 \/ 0.419271 (-0.019343) | 0.060824 \/ 0.043533 (0.017291) | 0.395946 \/ 0.255139 (0.140807) | 0.428811 \/ 0.283200 (0.145611) | 0.120057 \/ 0.141683 (-0.021626) | 1.703244 \/ 1.452155 (0.251090) | 1.841153 \/ 1.492716 (0.348436) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.021826 \/ 0.018006 (0.003820) | 0.494279 \/ 0.000490 (0.493789) | 0.011258 \/ 0.000200 (0.011058) | 0.000382 \/ 0.000054 (0.000328) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.031651 \/ 0.037411 (-0.005760) | 0.132871 \/ 0.014526 (0.118345) | 0.137388 \/ 0.176557 (-0.039169) | 0.205808 \/ 0.737135 (-0.531327) | 0.147585 \/ 0.296338 (-0.148753) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.474483 \/ 0.215209 (0.259274) | 4.726568 \/ 2.077655 (2.648914) | 2.136172 \/ 1.504120 (0.632052) | 1.918364 \/ 1.541195 (0.377169) | 2.068794 \/ 1.468490 (0.600304) | 0.836481 \/ 4.584777 (-3.748296) | 
4.550583 \/ 3.745712 (0.804871) | 2.456287 \/ 5.269862 (-2.813574) | 1.563127 \/ 4.565676 (-3.002550) | 0.102541 \/ 0.424275 (-0.321734) | 0.014492 \/ 0.007607 (0.006885) | 0.598572 \/ 0.226044 (0.372528) | 5.953321 \/ 2.268929 (3.684392) | 2.695210 \/ 55.444624 (-52.749414) | 2.294317 \/ 6.876477 (-4.582160) | 2.456585 \/ 2.142072 (0.314513) | 1.019907 \/ 4.805227 (-3.785320) | 0.201225 \/ 6.500664 (-6.299439) | 0.077113 \/ 0.075469 (0.001644) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.497662 \/ 1.841788 (-0.344126) | 18.216941 \/ 8.074308 (10.142633) | 17.016638 \/ 10.191392 (6.825246) | 0.193271 \/ 0.680424 (-0.487153) | 0.020440 \/ 0.534201 (-0.513761) | 0.509361 \/ 0.579283 (-0.069922) | 0.513389 \/ 0.434364 (0.079025) | 0.622266 \/ 0.540337 (0.081928) | 0.741733 \/ 1.386936 (-0.645203) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008641 \/ 0.011353 (-0.002712) | 0.005792 \/ 0.011008 (-0.005216) | 0.086020 \/ 0.038508 (0.047512) | 0.040005 \/ 0.023109 (0.016896) | 0.435120 \/ 0.275898 (0.159222) | 0.480269 \/ 0.323480 (0.156789) | 0.006669 \/ 0.007986 (-0.001317) | 0.006039 \/ 0.004328 (0.001711) | 0.083468 \/ 0.004250 (0.079218) | 0.057700 \/ 0.037052 (0.020648) | 0.416418 \/ 0.258489 (0.157929) | 0.508286 \/ 0.293841 (0.214445) | 0.041198 \/ 0.128546 (-0.087349) | 0.014346 \/ 0.075646 (-0.061301) | 0.100553 \/ 0.419271 (-0.318718) | 0.054201 \/ 0.043533 (0.010668) | 0.438232 \/ 0.255139 (0.183093) | 0.454707 \/ 0.283200 (0.171508) | 0.118332 \/ 0.141683 (-0.023351) | 1.657607 \/ 1.452155 (0.205452) | 1.825510 \/ 1.492716 (0.332794) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.236156 \/ 0.018006 (0.218150) | 0.487612 \/ 0.000490 (0.487123) | 0.005747 \/ 0.000200 (0.005547) | 0.000111 \/ 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.035127 \/ 0.037411 (-0.002284) | 0.132013 \/ 0.014526 (0.117487) | 0.142316 \/ 0.176557 (-0.034241) | 0.198627 \/ 0.737135 (-0.538508) | 0.145454 \/ 0.296338 (-0.150885) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.513041 \/ 0.215209 (0.297832) | 5.066197 \/ 2.077655 (2.988542) | 2.508779 \/ 1.504120 (1.004659) | 2.273901 \/ 1.541195 (0.732706) | 2.364958 \/ 1.468490 (0.896468) | 0.811367 \/ 4.584777 (-3.773410) | 
4.504744 \/ 3.745712 (0.759032) | 2.499811 \/ 5.269862 (-2.770050) | 1.583349 \/ 4.565676 (-2.982328) | 0.101701 \/ 0.424275 (-0.322574) | 0.014379 \/ 0.007607 (0.006772) | 0.669506 \/ 0.226044 (0.443462) | 6.556702 \/ 2.268929 (4.287774) | 3.123457 \/ 55.444624 (-52.321167) | 2.731997 \/ 6.876477 (-4.144480) | 2.862866 \/ 2.142072 (0.720794) | 0.992956 \/ 4.805227 (-3.812271) | 0.200473 \/ 6.500664 (-6.300191) | 0.078780 \/ 0.075469 (0.003311) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.540718 \/ 1.841788 (-0.301070) | 18.749344 \/ 8.074308 (10.675036) | 15.648983 \/ 10.191392 (5.457591) | 0.174089 \/ 0.680424 (-0.506335) | 0.020441 \/ 0.534201 (-0.513760) | 0.503742 \/ 0.579283 (-0.075541) | 0.500648 \/ 0.434364 (0.066284) | 0.598558 \/ 0.540337 (0.058221) | 0.712093 \/ 1.386936 (-0.674843) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#621554280f964b5fe87ece1a46b794406d943b1e \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009940 \/ 0.011353 (-0.001412) | 0.006193 \/ 0.011008 (-0.004815) | 0.125874 \/ 0.038508 (0.087366) | 0.038664 \/ 0.023109 (0.015555) | 0.380013 \/ 0.275898 (0.104115) | 0.430152 \/ 0.323480 (0.106672) | 0.006961 \/ 0.007986 (-0.001025) | 0.004749 \/ 0.004328 (0.000420) | 0.099743 \/ 0.004250 (0.095492) | 0.052349 \/ 0.037052 (0.015297) | 0.433354 \/ 0.258489 (0.174865) | 0.436273 \/ 0.293841 (0.142433) | 0.053929 \/ 0.128546 (-0.074617) | 0.019369 \/ 0.075646 (-0.056278) | 0.421783 \/ 0.419271 (0.002511) | 0.062746 \/ 0.043533 (0.019213) | 0.377225 \/ 0.255139 (0.122086) | 0.413708 \/ 0.283200 (0.130508) | 0.111371 \/ 0.141683 (-0.030312) | 1.819166 \/ 1.452155 (0.367011) | 1.974527 \/ 1.492716 (0.481810) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.090664 \/ 0.018006 (0.072658) | 0.566166 \/ 0.000490 (0.565676) | 0.079305 \/ 0.000200 (0.079105) | 0.000755 \/ 0.000054 (0.000700) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029720 \/ 0.037411 (-0.007691) | 0.126030 \/ 0.014526 (0.111504) | 0.146020 \/ 0.176557 (-0.030537) | 0.210354 \/ 0.737135 (-0.526781) | 0.149428 \/ 0.296338 (-0.146910) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.624371 \/ 0.215209 (0.409162) | 6.332839 \/ 2.077655 (4.255184) | 2.547784 \/ 1.504120 (1.043664) | 2.150508 \/ 1.541195 (0.609313) | 2.240816 \/ 1.468490 (0.772326) | 1.271131 \/ 4.584777 (-3.313646) | 
5.642726 \/ 3.745712 (1.897014) | 3.212988 \/ 5.269862 (-2.056874) | 2.258123 \/ 4.565676 (-2.307553) | 0.149477 \/ 0.424275 (-0.274798) | 0.014603 \/ 0.007607 (0.006996) | 0.782155 \/ 0.226044 (0.556111) | 7.855191 \/ 2.268929 (5.586262) | 3.308638 \/ 55.444624 (-52.135986) | 2.548142 \/ 6.876477 (-4.328335) | 2.627374 \/ 2.142072 (0.485301) | 1.515170 \/ 4.805227 (-3.290058) | 0.262479 \/ 6.500664 (-6.238185) | 0.082181 \/ 0.075469 (0.006712) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.573169 \/ 1.841788 (-0.268618) | 18.105719 \/ 8.074308 (10.031411) | 22.015179 \/ 10.191392 (11.823787) | 0.254678 \/ 0.680424 (-0.425746) | 0.027098 \/ 0.534201 (-0.507103) | 0.578045 \/ 0.579283 (-0.001238) | 0.647130 \/ 0.434364 (0.212766) | 0.650522 \/ 0.540337 (0.110185) | 0.797713 \/ 1.386936 (-0.589223) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.010376 \/ 0.011353 (-0.000977) | 0.005990 \/ 0.011008 (-0.005018) | 0.097144 \/ 0.038508 (0.058635) | 0.038205 \/ 0.023109 (0.015096) | 0.468347 \/ 0.275898 (0.192449) | 0.497646 \/ 0.323480 (0.174166) | 0.006916 \/ 0.007986 (-0.001069) | 0.004760 \/ 0.004328 (0.000431) | 0.109838 \/ 0.004250 (0.105587) | 0.048321 \/ 0.037052 (0.011269) | 0.437458 \/ 0.258489 (0.178969) | 0.534864 \/ 0.293841 (0.241023) | 0.053655 \/ 0.128546 (-0.074892) | 0.021915 \/ 0.075646 (-0.053732) | 0.121047 \/ 0.419271 (-0.298224) | 0.059694 \/ 0.043533 (0.016162) | 0.466937 \/ 0.255139 (0.211798) | 0.482030 \/ 0.283200 (0.198831) | 0.117458 \/ 0.141683 (-0.024225) | 1.835551 \/ 1.452155 (0.383396) | 1.965748 \/ 1.492716 (0.473031) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.234885 \/ 0.018006 (0.216879) | 0.529925 \/ 0.000490 (0.529436) | 0.000484 \/ 0.000200 (0.000284) | 0.000085 \/ 0.000054 (0.000031) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030959 \/ 0.037411 (-0.006453) | 0.128905 \/ 0.014526 (0.114379) | 0.136913 \/ 0.176557 (-0.039643) | 0.195133 \/ 0.737135 (-0.542002) | 0.147929 \/ 0.296338 (-0.148410) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.715661 \/ 0.215209 (0.500451) | 6.994125 \/ 2.077655 (4.916470) | 3.033178 \/ 1.504120 (1.529058) | 2.663709 \/ 1.541195 (1.122515) | 2.707558 \/ 1.468490 (1.239068) | 1.316195 \/ 4.584777 (-3.268582) | 
5.688264 \/ 3.745712 (1.942552) | 3.260897 \/ 5.269862 (-2.008964) | 2.134985 \/ 4.565676 (-2.430691) | 0.153945 \/ 0.424275 (-0.270330) | 0.014727 \/ 0.007607 (0.007119) | 0.911339 \/ 0.226044 (0.685294) | 8.902640 \/ 2.268929 (6.633711) | 3.806606 \/ 55.444624 (-51.638018) | 3.052238 \/ 6.876477 (-3.824238) | 3.046945 \/ 2.142072 (0.904873) | 1.559837 \/ 4.805227 (-3.245390) | 0.272276 \/ 6.500664 (-6.228388) | 0.087728 \/ 0.075469 (0.012259) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.712691 \/ 1.841788 (-0.129097) | 18.127575 \/ 8.074308 (10.053267) | 19.734063 \/ 10.191392 (9.542671) | 0.235006 \/ 0.680424 (-0.445418) | 0.027581 \/ 0.534201 (-0.506620) | 0.551080 \/ 0.579283 (-0.028203) | 0.608564 \/ 0.434364 (0.174200) | 0.636578 \/ 0.540337 (0.096241) | 0.732374 \/ 1.386936 (-0.654562) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#36911ca06d9c4e37ce36da6228cb3af8b40c2add \"CML watermark\")\n","Looks good in testing - this should be ready for review! cc @lhoestq @massquantity","Looks good to me, though i doubt that very few people will upgrade to TF >= 2.9 unless their memory is full:)","Is it more efficient than using numpy to shuffle as in multiprocessing ? Why not use the same strategy ?","Good question, honestly! The NumPy strategy works fine, but requires us to handle multiple processes instead of doing everything in `tf.data`. We could just scrap this entire code path and always use the multiprocessing NumPy approach, but I think single-threaded throughput would be lower if we did that. If you prefer it for code simplicity, though, I can do that.\r\n\r\nIn the longer term, I'm hoping that `tf.data` gets native support for our data structures and we can transition the whole pipeline to pure `tf.data`, but that still hasn't happened \ud83e\udee0","And @massquantity TF 2.13 is going to release in a couple of days, so I hope most users are at least on TF 2.9 by now!","Unless there is a big gap in performance I think code simplicity would be appreciated ^^","
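For contrast with the `tf.data` path above, here is one plausible shape of the multiprocessing NumPy strategy being discussed. It is a sketch under stated assumptions, not `datasets`' actual implementation: the point is that this path materializes the full O(n) permutation up front, which is exactly what `index_shuffle` avoids.

```python
import numpy as np

num_samples = 50_000_000  # hypothetical dataset size
num_workers = 4           # hypothetical worker count

# The full O(n) index array the index_shuffle approach never builds.
perm = np.random.permutation(num_samples)

# Each worker process would then iterate over its own slice of the
# permutation, so the shuffle logic itself stays trivial per worker.
shards = np.array_split(perm, num_workers)
```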
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008638 \/ 0.011353 (-0.002715) | 0.006013 \/ 0.011008 (-0.004995) | 0.116456 \/ 0.038508 (0.077948) | 0.040419 \/ 0.023109 (0.017310) | 0.418374 \/ 0.275898 (0.142476) | 0.447693 \/ 0.323480 (0.124213) | 0.007002 \/ 0.007986 (-0.000984) | 0.006175 \/ 0.004328 (0.001847) | 0.087801 \/ 0.004250 (0.083550) | 0.051980 \/ 0.037052 (0.014928) | 0.393275 \/ 0.258489 (0.134786) | 0.449601 \/ 0.293841 (0.155760) | 0.041670 \/ 0.128546 (-0.086876) | 0.014396 \/ 0.075646 (-0.061251) | 0.399175 \/ 0.419271 (-0.020096) | 0.060635 \/ 0.043533 (0.017102) | 0.391449 \/ 0.255139 (0.136310) | 0.420713 \/ 0.283200 (0.137513) | 0.121369 \/ 0.141683 (-0.020314) | 1.692630 \/ 1.452155 (0.240475) | 1.815526 \/ 1.492716 (0.322810) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.244321 \/ 0.018006 (0.226315) | 0.487947 \/ 0.000490 (0.487458) | 0.004563 \/ 0.000200 (0.004363) | 0.000116 \/ 0.000054 (0.000061) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.033425 \/ 0.037411 (-0.003987) | 0.134458 \/ 0.014526 (0.119932) | 0.138810 \/ 0.176557 (-0.037746) | 0.208871 \/ 0.737135 (-0.528264) | 0.147964 \/ 0.296338 (-0.148374) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.483347 \/ 0.215209 (0.268138) | 4.799550 \/ 2.077655 (2.721895) | 2.174149 \/ 1.504120 (0.670029) | 1.943276 \/ 1.541195 (0.402081) | 2.010884 \/ 1.468490 (0.542394) | 0.832030 \/ 4.584777 (-3.752747) | 
4.716713 \/ 3.745712 (0.971001) | 4.615810 \/ 5.269862 (-0.654052) | 2.379600 \/ 4.565676 (-2.186077) | 0.103560 \/ 0.424275 (-0.320715) | 0.014683 \/ 0.007607 (0.007076) | 0.598558 \/ 0.226044 (0.372514) | 5.999126 \/ 2.268929 (3.730197) | 2.677819 \/ 55.444624 (-52.766805) | 2.320838 \/ 6.876477 (-4.555639) | 2.503684 \/ 2.142072 (0.361611) | 1.016459 \/ 4.805227 (-3.788769) | 0.201672 \/ 6.500664 (-6.298992) | 0.079310 \/ 0.075469 (0.003841) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.446374 \/ 1.841788 (-0.395413) | 19.219310 \/ 8.074308 (11.145002) | 17.294665 \/ 10.191392 (7.103273) | 0.246115 \/ 0.680424 (-0.434309) | 0.021406 \/ 0.534201 (-0.512795) | 0.524084 \/ 0.579283 (-0.055200) | 0.511254 \/ 0.434364 (0.076890) | 0.621304 \/ 0.540337 (0.080966) | 0.727088 \/ 1.386936 (-0.659848) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008907 \/ 0.011353 (-0.002446) | 0.006165 \/ 0.011008 (-0.004843) | 0.090786 \/ 0.038508 (0.052278) | 0.040893 \/ 0.023109 (0.017784) | 0.451252 \/ 0.275898 (0.175354) | 0.477811 \/ 0.323480 (0.154331) | 0.007418 \/ 0.007986 (-0.000568) | 0.005789 \/ 0.004328 (0.001461) | 0.087422 \/ 0.004250 (0.083171) | 0.061800 \/ 0.037052 (0.024748) | 0.459085 \/ 0.258489 (0.200596) | 0.488897 \/ 0.293841 (0.195056) | 0.048157 \/ 0.128546 (-0.080389) | 0.014676 \/ 0.075646 (-0.060970) | 0.104372 \/ 0.419271 (-0.314900) | 0.058066 \/ 0.043533 (0.014534) | 0.446131 \/ 0.255139 (0.190992) | 0.460428 \/ 0.283200 (0.177228) | 0.128492 \/ 0.141683 (-0.013191) | 1.811419 \/ 1.452155 (0.359265) | 1.894781 \/ 1.492716 (0.402064) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.220527 \/ 0.018006 (0.202520) | 0.487663 \/ 0.000490 (0.487173) | 0.003864 \/ 0.000200 (0.003664) | 0.000162 \/ 0.000054 (0.000107) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.036354 \/ 0.037411 (-0.001057) | 0.140469 \/ 0.014526 (0.125944) | 0.149990 \/ 0.176557 (-0.026566) | 0.212369 \/ 0.737135 (-0.524766) | 0.154000 \/ 0.296338 (-0.142338) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.514172 \/ 0.215209 (0.298963) | 5.129247 \/ 2.077655 (3.051593) | 2.536773 \/ 1.504120 (1.032653) | 2.317253 \/ 1.541195 (0.776058) | 2.424066 \/ 1.468490 (0.955576) | 0.836160 \/ 4.584777 (-3.748617) | 
4.906235 \/ 3.745712 (1.160523) | 4.431395 \/ 5.269862 (-0.838467) | 2.332845 \/ 4.565676 (-2.232831) | 0.102867 \/ 0.424275 (-0.321409) | 0.014851 \/ 0.007607 (0.007244) | 0.644104 \/ 0.226044 (0.418060) | 6.415847 \/ 2.268929 (4.146918) | 3.186984 \/ 55.444624 (-52.257641) | 2.774125 \/ 6.876477 (-4.102352) | 2.848045 \/ 2.142072 (0.705972) | 1.018757 \/ 4.805227 (-3.786470) | 0.212333 \/ 6.500664 (-6.288331) | 0.079405 \/ 0.075469 (0.003936) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.748375 \/ 1.841788 (-0.093412) | 19.733829 \/ 8.074308 (11.659521) | 15.766665 \/ 10.191392 (5.575273) | 0.192087 \/ 0.680424 (-0.488337) | 0.027641 \/ 0.534201 (-0.506560) | 0.504101 \/ 0.579283 (-0.075182) | 0.493815 \/ 0.434364 (0.059451) | 0.583247 \/ 0.540337 (0.042910) | 0.697432 \/ 1.386936 (-0.689504) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#95c177e02ca20bf7bb3ed8f185d2d6f05a5e5f30 \"CML watermark\")\n","Hi @lhoestq, I tried moving everything to the NumPy path but ran into issues - the `SharedMemory` constructs it depends on were only added in Python 3.8. As a result, if we move everything to that path then `to_tf_dataset` does not work on older Python versions.\r\n\r\nFor now, how do you feel about reverting and using my original solution, which has fallbacks for all versions of Python and TensorFlow? Once our minimum versions pass Python 3.8 or TF 2.9 we can remove the older code paths.","Gentle ping on this question @lhoestq!","Ah yes indeed. Feel free to revert and add comments to explain why you needed to have a different approach for single process","
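For reference, the Python 3.8 dependency mentioned above is `multiprocessing.shared_memory`, which first shipped in Python 3.8. A minimal sketch of the construct (illustrative, not the library's actual code) shows why the NumPy-only path cannot work on older interpreters:

```python
# multiprocessing.shared_memory was added in Python 3.8, so this import
# raises ModuleNotFoundError on 3.7 and earlier.
from multiprocessing import shared_memory

import numpy as np

# Create a named shared block and view it as a NumPy array; another
# process can attach to the same block via SharedMemory(name=shm.name).
shm = shared_memory.SharedMemory(create=True, size=1024 * 8)
try:
    arr = np.ndarray((1024,), dtype=np.int64, buffer=shm.buf)
    arr[:] = np.arange(1024)  # writes are visible to attached processes
finally:
    shm.close()
    shm.unlink()  # release the block once no process needs it
```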
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008395 \/ 0.011353 (-0.002958) | 0.005773 \/ 0.011008 (-0.005235) | 0.115702 \/ 0.038508 (0.077194) | 0.039897 \/ 0.023109 (0.016788) | 0.483140 \/ 0.275898 (0.207242) | 0.531288 \/ 0.323480 (0.207808) | 0.006739 \/ 0.007986 (-0.001246) | 0.004419 \/ 0.004328 (0.000090) | 0.086374 \/ 0.004250 (0.082124) | 0.056498 \/ 0.037052 (0.019446) | 0.491589 \/ 0.258489 (0.233100) | 0.556366 \/ 0.293841 (0.262525) | 0.041366 \/ 0.128546 (-0.087181) | 0.014373 \/ 0.075646 (-0.061274) | 0.395504 \/ 0.419271 (-0.023767) | 0.094382 \/ 0.043533 (0.050849) | 0.483000 \/ 0.255139 (0.227861) | 0.522693 \/ 0.283200 (0.239494) | 0.138804 \/ 0.141683 (-0.002879) | 1.719563 \/ 1.452155 (0.267409) | 1.853470 \/ 1.492716 (0.360753) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.235616 \/ 0.018006 (0.217610) | 0.483267 \/ 0.000490 (0.482777) | 0.008663 \/ 0.000200 (0.008463) | 0.000401 \/ 0.000054 (0.000347) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.033124 \/ 0.037411 (-0.004287) | 0.128821 \/ 0.014526 (0.114295) | 0.138910 \/ 0.176557 (-0.037647) | 0.213570 \/ 0.737135 (-0.523566) | 0.146646 \/ 0.296338 (-0.149693) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.479998 \/ 0.215209 (0.264789) | 4.772325 \/ 2.077655 (2.694670) | 2.228424 \/ 1.504120 (0.724304) | 2.000915 \/ 1.541195 (0.459721) | 2.105799 \/ 1.468490 (0.637309) | 0.824235 \/ 4.584777 (-3.760542) | 
[remaining auto-generated CML benchmark tables omitted - report for commit 291d7ffa695edb4b4e818c783b16d3466246cd56]","This is probably ready, but likely conflicts with #5883. I'll wait for that PR to be merged and then rebase and merge this one.","
[auto-generated CML benchmark comment for commit 757f19283f22eeb3e9aedefd82abc0aa2235f797: PyArrow==8.0.0 and PyArrow==latest timing tables omitted]","
[auto-generated CML benchmark comment for commit 824f96c11a02b3817d6b1bf4dfed0abab27777f0: PyArrow==8.0.0 and PyArrow==latest timing tables omitted]","
[auto-generated CML benchmark comment for commit b899ea45c0a7e724ceb5f43c3a8b9fdb081fa67a: PyArrow==8.0.0 and PyArrow==latest timing tables omitted]","
[auto-generated CML benchmark comment for commit 81761dbfa738354a9c50309313dfe90bea26d872: PyArrow==8.0.0 and PyArrow==latest timing tables omitted]","
[auto-generated CML benchmark comment for commit 8907bdb23f78545303eb3bb0561e33ec6787f96c: PyArrow==8.0.0 and PyArrow==latest timing tables omitted]","
[auto-generated CML benchmark comment for commit 323747a5ff7d9b204ea3c4989d658af7102f7bbd: PyArrow==8.0.0 and PyArrow==latest timing tables omitted]","
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009360 \/ 0.011353 (-0.001993) | 0.006297 \/ 0.011008 (-0.004712) | 0.133131 \/ 0.038508 (0.094623) | 0.040261 \/ 0.023109 (0.017152) | 0.419101 \/ 0.275898 (0.143203) | 0.453087 \/ 0.323480 (0.129607) | 0.007718 \/ 0.007986 (-0.000268) | 0.005698 \/ 0.004328 (0.001369) | 0.102261 \/ 0.004250 (0.098010) | 0.055147 \/ 0.037052 (0.018095) | 0.428355 \/ 0.258489 (0.169866) | 0.505241 \/ 0.293841 (0.211400) | 0.046745 \/ 0.128546 (-0.081802) | 0.015559 \/ 0.075646 (-0.060088) | 0.441775 \/ 0.419271 (0.022503) | 0.070165 \/ 0.043533 (0.026632) | 0.421957 \/ 0.255139 (0.166818) | 0.445156 \/ 0.283200 (0.161957) | 0.126321 \/ 0.141683 (-0.015362) | 1.900486 \/ 1.452155 (0.448331) | 2.088630 \/ 1.492716 (0.595913) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.260244 \/ 0.018006 (0.242237) | 0.606317 \/ 0.000490 (0.605828) | 0.006827 \/ 0.000200 (0.006627) | 0.000117 \/ 0.000054 (0.000063) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.031958 \/ 0.037411 (-0.005453) | 0.139362 \/ 0.014526 (0.124836) | 0.148748 \/ 0.176557 (-0.027809) | 0.226269 \/ 0.737135 (-0.510866) | 0.161145 \/ 0.296338 (-0.135194) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.666287 \/ 0.215209 (0.451078) | 6.588707 \/ 2.077655 (4.511053) | 2.736155 \/ 1.504120 (1.232035) | 2.329601 \/ 1.541195 (0.788406) | 2.324991 \/ 1.468490 (0.856501) | 0.943608 \/ 4.584777 (-3.641169) | 
6.051653 \/ 3.745712 (2.305941) | 2.929150 \/ 5.269862 (-2.340711) | 1.804461 \/ 4.565676 (-2.761216) | 0.113302 \/ 0.424275 (-0.310973) | 0.015245 \/ 0.007607 (0.007638) | 0.827029 \/ 0.226044 (0.600984) | 8.211536 \/ 2.268929 (5.942608) | 3.445231 \/ 55.444624 (-51.999393) | 2.756728 \/ 6.876477 (-4.119748) | 2.904039 \/ 2.142072 (0.761966) | 1.162339 \/ 4.805227 (-3.642888) | 0.231168 \/ 6.500664 (-6.269496) | 0.089038 \/ 0.075469 (0.013569) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.640619 \/ 1.841788 (-0.201169) | 20.034157 \/ 8.074308 (11.959849) | 22.346006 \/ 10.191392 (12.154614) | 0.255300 \/ 0.680424 (-0.425124) | 0.031452 \/ 0.534201 (-0.502749) | 0.563290 \/ 0.579283 (-0.015993) | 0.653556 \/ 0.434364 (0.219192) | 0.687663 \/ 0.540337 (0.147326) | 0.816432 \/ 1.386936 (-0.570504) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.010340 \/ 0.011353 (-0.001013) | 0.006245 \/ 0.011008 (-0.004764) | 0.128012 \/ 0.038508 (0.089504) | 0.041799 \/ 0.023109 (0.018690) | 0.533340 \/ 0.275898 (0.257442) | 0.592243 \/ 0.323480 (0.268763) | 0.009256 \/ 0.007986 (0.001271) | 0.005310 \/ 0.004328 (0.000982) | 0.110973 \/ 0.004250 (0.106722) | 0.065465 \/ 0.037052 (0.028412) | 0.533845 \/ 0.258489 (0.275356) | 0.602190 \/ 0.293841 (0.308349) | 0.060245 \/ 0.128546 (-0.068301) | 0.016954 \/ 0.075646 (-0.058693) | 0.119727 \/ 0.419271 (-0.299545) | 0.064628 \/ 0.043533 (0.021095) | 0.558229 \/ 0.255139 (0.303090) | 0.563696 \/ 0.283200 (0.280496) | 0.137225 \/ 0.141683 (-0.004458) | 2.038605 \/ 1.452155 (0.586451) | 2.158655 \/ 1.492716 (0.665939) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.327067 \/ 0.018006 (0.309061) | 0.628812 \/ 0.000490 (0.628323) | 0.010259 \/ 0.000200 (0.010059) | 0.000123 \/ 0.000054 (0.000069) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.037023 \/ 0.037411 (-0.000388) | 0.142462 \/ 0.014526 (0.127936) | 0.158165 \/ 0.176557 (-0.018392) | 0.220808 \/ 0.737135 (-0.516328) | 0.163608 \/ 0.296338 (-0.132731) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.776119 \/ 0.215209 (0.560910) | 7.813044 \/ 2.077655 (5.735389) | 3.610901 \/ 1.504120 (2.106781) | 3.195144 \/ 1.541195 (1.653950) | 3.218245 \/ 1.468490 (1.749755) | 1.092732 \/ 4.584777 (-3.492045) | 
5.965526 \/ 3.745712 (2.219813) | 2.914683 \/ 5.269862 (-2.355179) | 1.848397 \/ 4.565676 (-2.717280) | 0.114436 \/ 0.424275 (-0.309839) | 0.014794 \/ 0.007607 (0.007187) | 0.887141 \/ 0.226044 (0.661096) | 9.009743 \/ 2.268929 (6.740815) | 4.180143 \/ 55.444624 (-51.264481) | 3.452194 \/ 6.876477 (-3.424283) | 3.493520 \/ 2.142072 (1.351448) | 1.233327 \/ 4.805227 (-3.571900) | 0.235390 \/ 6.500664 (-6.265274) | 0.099544 \/ 0.075469 (0.024075) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.853482 \/ 1.841788 (0.011694) | 20.071177 \/ 8.074308 (11.996869) | 24.507618 \/ 10.191392 (14.316226) | 0.260164 \/ 0.680424 (-0.420260) | 0.028433 \/ 0.534201 (-0.505768) | 0.549181 \/ 0.579283 (-0.030102) | 0.650069 \/ 0.434364 (0.215705) | 0.629541 \/ 0.540337 (0.089203) | 0.808932 \/ 1.386936 (-0.578004) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#f39ba76af62c8037de3f464e87cbb095f8729062 \"CML watermark\")\n","
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009537 \/ 0.011353 (-0.001816) | 0.006036 \/ 0.011008 (-0.004972) | 0.141210 \/ 0.038508 (0.102701) | 0.037493 \/ 0.023109 (0.014384) | 0.404285 \/ 0.275898 (0.128386) | 0.458906 \/ 0.323480 (0.135427) | 0.007224 \/ 0.007986 (-0.000761) | 0.005148 \/ 0.004328 (0.000819) | 0.103889 \/ 0.004250 (0.099639) | 0.048877 \/ 0.037052 (0.011824) | 0.413220 \/ 0.258489 (0.154731) | 0.458153 \/ 0.293841 (0.164312) | 0.046008 \/ 0.128546 (-0.082538) | 0.015116 \/ 0.075646 (-0.060531) | 0.439836 \/ 0.419271 (0.020565) | 0.067527 \/ 0.043533 (0.023994) | 0.435794 \/ 0.255139 (0.180656) | 0.451687 \/ 0.283200 (0.168487) | 0.121274 \/ 0.141683 (-0.020409) | 1.950199 \/ 1.452155 (0.498044) | 2.035589 \/ 1.492716 (0.542873) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.247056 \/ 0.018006 (0.229050) | 0.550348 \/ 0.000490 (0.549858) | 0.005504 \/ 0.000200 (0.005305) | 0.000116 \/ 0.000054 (0.000061) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.032171 \/ 0.037411 (-0.005240) | 0.135983 \/ 0.014526 (0.121457) | 0.149587 \/ 0.176557 (-0.026970) | 0.233414 \/ 0.737135 (-0.503722) | 0.152598 \/ 0.296338 (-0.143740) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.634813 \/ 0.215209 (0.419604) | 6.453619 \/ 2.077655 (4.375964) | 2.582070 \/ 1.504120 (1.077951) | 2.214292 \/ 1.541195 (0.673097) | 2.220012 \/ 1.468490 (0.751522) | 0.987374 \/ 4.584777 (-3.597403) | 
5.543760 \/ 3.745712 (1.798047) | 2.808865 \/ 5.269862 (-2.460996) | 1.714713 \/ 4.565676 (-2.850963) | 0.111016 \/ 0.424275 (-0.313259) | 0.014688 \/ 0.007607 (0.007081) | 0.842542 \/ 0.226044 (0.616498) | 8.414336 \/ 2.268929 (6.145407) | 3.501021 \/ 55.444624 (-51.943604) | 2.665335 \/ 6.876477 (-4.211142) | 2.843706 \/ 2.142072 (0.701633) | 1.196398 \/ 4.805227 (-3.608829) | 0.245508 \/ 6.500664 (-6.255156) | 0.086970 \/ 0.075469 (0.011501) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.590244 \/ 1.841788 (-0.251544) | 18.694141 \/ 8.074308 (10.619833) | 21.752463 \/ 10.191392 (11.561071) | 0.264511 \/ 0.680424 (-0.415913) | 0.028713 \/ 0.534201 (-0.505488) | 0.531102 \/ 0.579283 (-0.048181) | 0.626302 \/ 0.434364 (0.191938) | 0.624541 \/ 0.540337 (0.084203) | 0.745745 \/ 1.386936 (-0.641191) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.010097 \/ 0.011353 (-0.001256) | 0.005558 \/ 0.011008 (-0.005451) | 0.111326 \/ 0.038508 (0.072818) | 0.036465 \/ 0.023109 (0.013356) | 0.472116 \/ 0.275898 (0.196218) | 0.524479 \/ 0.323480 (0.200999) | 0.007466 \/ 0.007986 (-0.000520) | 0.005440 \/ 0.004328 (0.001112) | 0.103482 \/ 0.004250 (0.099231) | 0.053217 \/ 0.037052 (0.016165) | 0.476685 \/ 0.258489 (0.218196) | 0.554011 \/ 0.293841 (0.260170) | 0.047157 \/ 0.128546 (-0.081390) | 0.015895 \/ 0.075646 (-0.059751) | 0.115997 \/ 0.419271 (-0.303274) | 0.062290 \/ 0.043533 (0.018758) | 0.474166 \/ 0.255139 (0.219027) | 0.498854 \/ 0.283200 (0.215655) | 0.121798 \/ 0.141683 (-0.019885) | 1.956583 \/ 1.452155 (0.504428) | 2.069620 \/ 1.492716 (0.576904) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.278637 \/ 0.018006 (0.260631) | 0.555295 \/ 0.000490 (0.554805) | 0.007401 \/ 0.000200 (0.007201) | 0.000121 \/ 0.000054 (0.000066) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.033576 \/ 0.037411 (-0.003835) | 0.136479 \/ 0.014526 (0.121954) | 0.153960 \/ 0.176557 (-0.022597) | 0.203422 \/ 0.737135 (-0.533713) | 0.154159 \/ 0.296338 (-0.142180) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.672561 \/ 0.215209 (0.457352) | 6.956675 \/ 2.077655 (4.879020) | 3.063636 \/ 1.504120 (1.559516) | 2.668256 \/ 1.541195 (1.127061) | 2.794793 \/ 1.468490 (1.326303) | 0.964242 \/ 4.584777 (-3.620535) | 
5.785992 \/ 3.745712 (2.040279) | 2.850079 \/ 5.269862 (-2.419782) | 1.782491 \/ 4.565676 (-2.783186) | 0.114859 \/ 0.424275 (-0.309416) | 0.015229 \/ 0.007607 (0.007622) | 0.858406 \/ 0.226044 (0.632362) | 8.646296 \/ 2.268929 (6.377367) | 3.842133 \/ 55.444624 (-51.602492) | 3.180017 \/ 6.876477 (-3.696460) | 3.241315 \/ 2.142072 (1.099243) | 1.248988 \/ 4.805227 (-3.556239) | 0.235075 \/ 6.500664 (-6.265589) | 0.087192 \/ 0.075469 (0.011723) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.783877 \/ 1.841788 (-0.057910) | 19.477223 \/ 8.074308 (11.402914) | 22.926734 \/ 10.191392 (12.735342) | 0.246970 \/ 0.680424 (-0.433454) | 0.026386 \/ 0.534201 (-0.507815) | 0.517599 \/ 0.579283 (-0.061684) | 0.626504 \/ 0.434364 (0.192140) | 0.606943 \/ 0.540337 (0.066606) | 0.739115 \/ 1.386936 (-0.647821) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#e8f051a41454f8625091338e6b53119a5eb9b2a0 \"CML watermark\")\n","
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008085 \/ 0.011353 (-0.003268) | 0.005568 \/ 0.011008 (-0.005440) | 0.119674 \/ 0.038508 (0.081166) | 0.040452 \/ 0.023109 (0.017343) | 0.360288 \/ 0.275898 (0.084390) | 0.409448 \/ 0.323480 (0.085968) | 0.007281 \/ 0.007986 (-0.000705) | 0.004931 \/ 0.004328 (0.000602) | 0.089956 \/ 0.004250 (0.085706) | 0.056088 \/ 0.037052 (0.019036) | 0.384708 \/ 0.258489 (0.126219) | 0.423506 \/ 0.293841 (0.129665) | 0.033280 \/ 0.128546 (-0.095266) | 0.010696 \/ 0.075646 (-0.064951) | 0.394851 \/ 0.419271 (-0.024421) | 0.058412 \/ 0.043533 (0.014879) | 0.361514 \/ 0.255139 (0.106375) | 0.399121 \/ 0.283200 (0.115921) | 0.117927 \/ 0.141683 (-0.023756) | 1.791499 \/ 1.452155 (0.339344) | 1.889000 \/ 1.492716 (0.396284) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.253324 \/ 0.018006 (0.235318) | 0.536151 \/ 0.000490 (0.535661) | 0.010450 \/ 0.000200 (0.010250) | 0.000171 \/ 0.000054 (0.000117) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.034646 \/ 0.037411 (-0.002765) | 0.145999 \/ 0.014526 (0.131473) | 0.153793 \/ 0.176557 (-0.022763) | 0.232871 \/ 0.737135 (-0.504265) | 0.161151 \/ 0.296338 (-0.135188) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.471407 \/ 0.215209 (0.256197) | 4.715702 \/ 2.077655 (2.638047) | 2.228939 \/ 1.504120 (0.724819) | 2.008511 \/ 1.541195 (0.467317) | 2.135182 \/ 1.468490 (0.666692) | 0.620720 \/ 4.584777 (-3.964057) | 
4.960731 \/ 3.745712 (1.215019) | 2.222469 \/ 5.269862 (-3.047393) | 1.284467 \/ 4.565676 (-3.281209) | 0.077931 \/ 0.424275 (-0.346344) | 0.013935 \/ 0.007607 (0.006328) | 0.593164 \/ 0.226044 (0.367120) | 5.940829 \/ 2.268929 (3.671900) | 2.664277 \/ 55.444624 (-52.780347) | 2.290655 \/ 6.876477 (-4.585822) | 2.496664 \/ 2.142072 (0.354592) | 0.759166 \/ 4.805227 (-4.046061) | 0.168011 \/ 6.500664 (-6.332653) | 0.077993 \/ 0.075469 (0.002524) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.440663 \/ 1.841788 (-0.401125) | 19.105377 \/ 8.074308 (11.031069) | 16.068118 \/ 10.191392 (5.876726) | 0.193024 \/ 0.680424 (-0.487400) | 0.022348 \/ 0.534201 (-0.511853) | 0.517454 \/ 0.579283 (-0.061829) | 0.528072 \/ 0.434364 (0.093708) | 0.565293 \/ 0.540337 (0.024955) | 0.676578 \/ 1.386936 (-0.710358) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008089 \/ 0.011353 (-0.003264) | 0.005287 \/ 0.011008 (-0.005721) | 0.087964 \/ 0.038508 (0.049456) | 0.041548 \/ 0.023109 (0.018439) | 0.437733 \/ 0.275898 (0.161835) | 0.487878 \/ 0.323480 (0.164398) | 0.006898 \/ 0.007986 (-0.001087) | 0.004649 \/ 0.004328 (0.000320) | 0.086982 \/ 0.004250 (0.082732) | 0.056874 \/ 0.037052 (0.019822) | 0.437397 \/ 0.258489 (0.178908) | 0.490636 \/ 0.293841 (0.196795) | 0.033550 \/ 0.128546 (-0.094997) | 0.010430 \/ 0.075646 (-0.065216) | 0.096076 \/ 0.419271 (-0.323196) | 0.054028 \/ 0.043533 (0.010495) | 0.450262 \/ 0.255139 (0.195123) | 0.465566 \/ 0.283200 (0.182366) | 0.119987 \/ 0.141683 (-0.021696) | 1.764428 \/ 1.452155 (0.312273) | 1.841547 \/ 1.492716 (0.348831) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.271427 \/ 0.018006 (0.253420) | 0.506386 \/ 0.000490 (0.505896) | 0.001213 \/ 0.000200 (0.001013) | 0.000125 \/ 0.000054 (0.000070) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.036159 \/ 0.037411 (-0.001253) | 0.140578 \/ 0.014526 (0.126053) | 0.147517 \/ 0.176557 (-0.029040) | 0.206215 \/ 0.737135 (-0.530921) | 0.152560 \/ 0.296338 (-0.143779) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.522833 \/ 0.215209 (0.307624) | 5.215732 \/ 2.077655 (3.138077) | 2.553406 \/ 1.504120 (1.049286) | 2.344815 \/ 1.541195 (0.803620) | 2.422377 \/ 1.468490 (0.953886) | 0.631197 \/ 4.584777 (-3.953580) | 
4.906216 \/ 3.745712 (1.160504) | 2.212923 \/ 5.269862 (-3.056938) | 1.352937 \/ 4.565676 (-3.212740) | 0.079141 \/ 0.424275 (-0.345135) | 0.013691 \/ 0.007607 (0.006084) | 0.634939 \/ 0.226044 (0.408895) | 6.578770 \/ 2.268929 (4.309842) | 3.080339 \/ 55.444624 (-52.364286) | 2.710243 \/ 6.876477 (-4.166234) | 2.740476 \/ 2.142072 (0.598404) | 0.783610 \/ 4.805227 (-4.021617) | 0.171589 \/ 6.500664 (-6.329075) | 0.077311 \/ 0.075469 (0.001842) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.584847 \/ 1.841788 (-0.256941) | 19.510132 \/ 8.074308 (11.435824) | 18.074572 \/ 10.191392 (7.883180) | 0.173494 \/ 0.680424 (-0.506930) | 0.021149 \/ 0.534201 (-0.513052) | 0.469026 \/ 0.579283 (-0.110258) | 0.518463 \/ 0.434364 (0.084099) | 0.550363 \/ 0.540337 (0.010026) | 0.667087 \/ 1.386936 (-0.719849) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#5dfcd876c25cc0ffbd6b5b518b017419390a8ada \"CML watermark\")\n","
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007144 \/ 0.011353 (-0.004209) | 0.004783 \/ 0.011008 (-0.006225) | 0.103991 \/ 0.038508 (0.065483) | 0.039098 \/ 0.023109 (0.015989) | 0.319851 \/ 0.275898 (0.043952) | 0.356104 \/ 0.323480 (0.032625) | 0.007077 \/ 0.007986 (-0.000909) | 0.004188 \/ 0.004328 (-0.000141) | 0.078360 \/ 0.004250 (0.074109) | 0.050951 \/ 0.037052 (0.013899) | 0.321791 \/ 0.258489 (0.063302) | 0.356123 \/ 0.293841 (0.062283) | 0.028967 \/ 0.128546 (-0.099579) | 0.009091 \/ 0.075646 (-0.066555) | 0.355265 \/ 0.419271 (-0.064007) | 0.052521 \/ 0.043533 (0.008988) | 0.317333 \/ 0.255139 (0.062194) | 0.340747 \/ 0.283200 (0.057547) | 0.104354 \/ 0.141683 (-0.037329) | 1.522791 \/ 1.452155 (0.070636) | 1.579835 \/ 1.492716 (0.087118) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.260539 \/ 0.018006 (0.242532) | 0.454230 \/ 0.000490 (0.453740) | 0.036588 \/ 0.000200 (0.036388) | 0.000289 \/ 0.000054 (0.000235) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.028375 \/ 0.037411 (-0.009036) | 0.118939 \/ 0.014526 (0.104413) | 0.126553 \/ 0.176557 (-0.050004) | 0.184596 \/ 0.737135 (-0.552539) | 0.130583 \/ 0.296338 (-0.165755) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.417353 \/ 0.215209 (0.202144) | 4.171595 \/ 2.077655 (2.093940) | 1.855096 \/ 1.504120 (0.350976) | 1.673941 \/ 1.541195 (0.132747) | 1.761370 \/ 1.468490 (0.292880) | 0.544081 \/ 4.584777 (-4.040696) | 
3.851877 \/ 3.745712 (0.106165) | 1.896661 \/ 5.269862 (-3.373200) | 1.093303 \/ 4.565676 (-3.472373) | 0.067967 \/ 0.424275 (-0.356308) | 0.012313 \/ 0.007607 (0.004706) | 0.532316 \/ 0.226044 (0.306272) | 5.336016 \/ 2.268929 (3.067087) | 2.344780 \/ 55.444624 (-53.099845) | 1.993909 \/ 6.876477 (-4.882568) | 2.167324 \/ 2.142072 (0.025251) | 0.670334 \/ 4.805227 (-4.134893) | 0.147705 \/ 6.500664 (-6.352959) | 0.067634 \/ 0.075469 (-0.007835) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.251005 \/ 1.841788 (-0.590783) | 15.405531 \/ 8.074308 (7.331223) | 14.197019 \/ 10.191392 (4.005627) | 0.144230 \/ 0.680424 (-0.536193) | 0.018352 \/ 0.534201 (-0.515849) | 0.427536 \/ 0.579283 (-0.151748) | 0.433135 \/ 0.434364 (-0.001229) | 0.502624 \/ 0.540337 (-0.037713) | 0.612312 \/ 1.386936 (-0.774624) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007011 \/ 0.011353 (-0.004342) | 0.004857 \/ 0.011008 (-0.006151) | 0.077797 \/ 0.038508 (0.039289) | 0.035411 \/ 0.023109 (0.012302) | 0.368234 \/ 0.275898 (0.092336) | 0.408359 \/ 0.323480 (0.084879) | 0.005883 \/ 0.007986 (-0.002102) | 0.004311 \/ 0.004328 (-0.000017) | 0.077216 \/ 0.004250 (0.072966) | 0.052062 \/ 0.037052 (0.015010) | 0.368502 \/ 0.258489 (0.110013) | 0.428681 \/ 0.293841 (0.134840) | 0.028889 \/ 0.128546 (-0.099657) | 0.009146 \/ 0.075646 (-0.066501) | 0.085515 \/ 0.419271 (-0.333756) | 0.050216 \/ 0.043533 (0.006683) | 0.359562 \/ 0.255139 (0.104423) | 0.378335 \/ 0.283200 (0.095135) | 0.106351 \/ 0.141683 (-0.035332) | 1.538943 \/ 1.452155 (0.086788) | 1.663572 \/ 1.492716 (0.170855) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.216917 \/ 0.018006 (0.198911) | 0.444130 \/ 0.000490 (0.443641) | 0.002640 \/ 0.000200 (0.002440) | 0.000093 \/ 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.032509 \/ 0.037411 (-0.004902) | 0.123955 \/ 0.014526 (0.109430) | 0.133236 \/ 0.176557 (-0.043321) | 0.187408 \/ 0.737135 (-0.549727) | 0.136696 \/ 0.296338 (-0.159643) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.443714 \/ 0.215209 (0.228505) | 4.416973 \/ 2.077655 (2.339318) | 2.145279 \/ 1.504120 (0.641159) | 1.946669 \/ 1.541195 (0.405474) | 2.044105 \/ 1.468490 (0.575614) | 0.534463 \/ 4.584777 (-4.050314) | 
3.824926 \/ 3.745712 (0.079214) | 3.151796 \/ 5.269862 (-2.118066) | 1.497513 \/ 4.565676 (-3.068164) | 0.066799 \/ 0.424275 (-0.357476) | 0.012408 \/ 0.007607 (0.004801) | 0.544182 \/ 0.226044 (0.318138) | 5.419403 \/ 2.268929 (3.150474) | 2.605191 \/ 55.444624 (-52.839433) | 2.285354 \/ 6.876477 (-4.591123) | 2.359520 \/ 2.142072 (0.217448) | 0.655489 \/ 4.805227 (-4.149738) | 0.143496 \/ 6.500664 (-6.357168) | 0.066782 \/ 0.075469 (-0.008687) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.329370 \/ 1.841788 (-0.512418) | 16.058019 \/ 8.074308 (7.983711) | 15.119769 \/ 10.191392 (4.928377) | 0.147967 \/ 0.680424 (-0.532457) | 0.018360 \/ 0.534201 (-0.515841) | 0.436847 \/ 0.579283 (-0.142436) | 0.435136 \/ 0.434364 (0.000773) | 0.507176 \/ 0.540337 (-0.033161) | 0.610627 \/ 1.386936 (-0.776309) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#b4cc3ee6d8945052283076854eb77575d52b7432 \"CML watermark\")\n","
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006425 \/ 0.011353 (-0.004927) | 0.003710 \/ 0.011008 (-0.007298) | 0.102072 \/ 0.038508 (0.063564) | 0.033974 \/ 0.023109 (0.010865) | 0.273146 \/ 0.275898 (-0.002752) | 0.313254 \/ 0.323480 (-0.010226) | 0.004889 \/ 0.007986 (-0.003096) | 0.004803 \/ 0.004328 (0.000475) | 0.067359 \/ 0.004250 (0.063109) | 0.040281 \/ 0.037052 (0.003228) | 0.302106 \/ 0.258489 (0.043617) | 0.318039 \/ 0.293841 (0.024198) | 0.028839 \/ 0.128546 (-0.099707) | 0.008726 \/ 0.075646 (-0.066921) | 0.322532 \/ 0.419271 (-0.096739) | 0.048845 \/ 0.043533 (0.005312) | 0.299836 \/ 0.255139 (0.044697) | 0.300983 \/ 0.283200 (0.017784) | 0.103384 \/ 0.141683 (-0.038299) | 1.417245 \/ 1.452155 (-0.034910) | 1.538819 \/ 1.492716 (0.046102) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.219798 \/ 0.018006 (0.201792) | 0.442297 \/ 0.000490 (0.441807) | 0.013792 \/ 0.000200 (0.013592) | 0.000101 \/ 0.000054 (0.000046) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.024996 \/ 0.037411 (-0.012416) | 0.098558 \/ 0.014526 (0.084032) | 0.116423 \/ 0.176557 (-0.060133) | 0.163481 \/ 0.737135 (-0.573654) | 0.115031 \/ 0.296338 (-0.181308) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.392411 \/ 0.215209 (0.177202) | 4.025992 \/ 2.077655 (1.948337) | 1.850809 \/ 1.504120 (0.346690) | 1.668330 \/ 1.541195 (0.127136) | 1.627041 \/ 1.468490 (0.158551) | 0.510721 \/ 4.584777 (-4.074055) | 
3.841318 \/ 3.745712 (0.095606) | 3.416979 \/ 5.269862 (-1.852883) | 1.640796 \/ 4.565676 (-2.924880) | 0.061968 \/ 0.424275 (-0.362307) | 0.010281 \/ 0.007607 (0.002674) | 0.485592 \/ 0.226044 (0.259548) | 4.872205 \/ 2.268929 (2.603277) | 2.146753 \/ 55.444624 (-53.297871) | 1.832087 \/ 6.876477 (-5.044390) | 1.920928 \/ 2.142072 (-0.221144) | 0.606363 \/ 4.805227 (-4.198864) | 0.134351 \/ 6.500664 (-6.366313) | 0.057583 \/ 0.075469 (-0.017886) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.153048 \/ 1.841788 (-0.688739) | 14.165743 \/ 8.074308 (6.091435) | 12.237798 \/ 10.191392 (2.046406) | 0.159815 \/ 0.680424 (-0.520608) | 0.018226 \/ 0.534201 (-0.515975) | 0.372390 \/ 0.579283 (-0.206893) | 0.396552 \/ 0.434364 (-0.037811) | 0.439445 \/ 0.540337 (-0.100892) | 0.521924 \/ 1.386936 (-0.865012) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006162 \/ 0.011353 (-0.005191) | 0.004006 \/ 0.011008 (-0.007002) | 0.067226 \/ 0.038508 (0.028718) | 0.030285 \/ 0.023109 (0.007176) | 0.361220 \/ 0.275898 (0.085322) | 0.386783 \/ 0.323480 (0.063303) | 0.005202 \/ 0.007986 (-0.002784) | 0.003453 \/ 0.004328 (-0.000876) | 0.068299 \/ 0.004250 (0.064048) | 0.041433 \/ 0.037052 (0.004381) | 0.360222 \/ 0.258489 (0.101733) | 0.399327 \/ 0.293841 (0.105486) | 0.026066 \/ 0.128546 (-0.102480) | 0.008025 \/ 0.075646 (-0.067621) | 0.079588 \/ 0.419271 (-0.339683) | 0.042616 \/ 0.043533 (-0.000917) | 0.347639 \/ 0.255139 (0.092500) | 0.386092 \/ 0.283200 (0.102893) | 0.100869 \/ 0.141683 (-0.040814) | 1.386901 \/ 1.452155 (-0.065254) | 1.471523 \/ 1.492716 (-0.021193) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.217020 \/ 0.018006 (0.199014) | 0.431033 \/ 0.000490 (0.430543) | 0.002902 \/ 0.000200 (0.002702) | 0.000092 \/ 0.000054 (0.000037) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.027396 \/ 0.037411 (-0.010015) | 0.114154 \/ 0.014526 (0.099629) | 0.117918 \/ 0.176557 (-0.058638) | 0.173342 \/ 0.737135 (-0.563794) | 0.125812 \/ 0.296338 (-0.170526) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.424843 \/ 0.215209 (0.209634) | 4.324828 \/ 2.077655 (2.247174) | 2.188263 \/ 1.504120 (0.684143) | 1.912288 \/ 1.541195 (0.371094) | 2.011621 \/ 1.468490 (0.543131) | 0.560944 \/ 4.584777 (-4.023833) 
| 3.975047 \/ 3.745712 (0.229335) | 3.130242 \/ 5.269862 (-2.139619) | 1.667902 \/ 4.565676 (-2.897775) | 0.062245 \/ 0.424275 (-0.362030) | 0.011300 \/ 0.007607 (0.003692) | 0.498571 \/ 0.226044 (0.272527) | 5.024887 \/ 2.268929 (2.755958) | 2.482967 \/ 55.444624 (-52.961657) | 2.216125 \/ 6.876477 (-4.660352) | 2.175856 \/ 2.142072 (0.033783) | 0.615207 \/ 4.805227 (-4.190021) | 0.133808 \/ 6.500664 (-6.366856) | 0.058681 \/ 0.075469 (-0.016788) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.370150 \/ 1.841788 (-0.471637) | 14.580907 \/ 8.074308 (6.506599) | 14.209955 \/ 10.191392 (4.018563) | 0.139738 \/ 0.680424 (-0.540686) | 0.018722 \/ 0.534201 (-0.515479) | 0.375755 \/ 0.579283 (-0.203528) | 0.428335 \/ 0.434364 (-0.006029) | 0.438957 \/ 0.540337 (-0.101380) | 0.541130 \/ 1.386936 (-0.845806) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#c14806a42a20f44a60f3663642bae1de199ab1ec \"CML watermark\")\n"],"created_at":1684164514000,"updated_at":1686242418000,"closed_at":1686241971000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5863","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5863","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5863.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5863.patch","merged_at":1686241970000},"body":"This PR tries out a new approach to generating the index tensor in `to_tf_dataset`, which should reduce memory usage for very large datasets. 
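To make the idea concrete, here is a minimal, hedged sketch of the general approach (illustrative only, not the actual diff in this PR; `num_rows` and `batch_size` are made-up values): rather than materializing one huge index tensor up front, the row indices can be streamed lazily with `tf.data`, so memory stays bounded no matter how large the dataset is.

```python
# Hedged sketch only -- not the change in this PR. It contrasts an eagerly
# materialized index tensor with lazily streamed indices via tf.data.
import tensorflow as tf

num_rows = 500_000_000  # hypothetical very large dataset
batch_size = 32

# Eager approach: a full int64 permutation of 500M rows costs ~4 GB up front.
#   import numpy as np
#   indices = np.random.permutation(num_rows)

# Lazy approach: stream shuffled row indices from a bounded buffer instead,
# so memory use no longer grows with the number of rows.
index_dataset = (
    tf.data.Dataset.range(num_rows)
    .shuffle(buffer_size=100_000)  # approximate shuffle, bounded memory
    .batch(batch_size)
)

for index_batch in index_dataset.take(1):
    print(index_batch.shape)  # (32,) -- one batch of row indices to gather
```

The trade-off in such a scheme is that a bounded shuffle buffer only approximates a global shuffle, which is usually acceptable for training.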
I'll need to do some testing before merging it!\r\n\r\nFixes #5855","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5863\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5863\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5862","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5862\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5862\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5862\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5862","id":1710140646,"node_id":"I_kwDODunzps5l7qzm","number":5862,"title":"IndexError: list index out of range with data hosted on Zenodo","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"open","locked":false,"assignee":{"login":"albertvillanova","id":8515462.0,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["This error is also raised when data is hosted on Google Drive:\r\n- https:\/\/huggingface.co\/datasets\/docred\/discussions\/5\r\n- https:\/\/huggingface.co\/datasets\/linnaeus\/discussions\/3\r\n- https:\/\/huggingface.co\/datasets\/poleval2019_mt\/discussions\/3\r\n- https:\/\/huggingface.co\/datasets\/reddit_tifu\/discussions\/2\r\n- https:\/\/huggingface.co\/datasets\/species_800\/discussions\/3\r\n- https:\/\/huggingface.co\/datasets\/wiki_lingua\/discussions\/1\r\n- https:\/\/huggingface.co\/datasets\/yoruba_text_c3\/discussions\/1"],"created_at":1684158439000,"updated_at":1686927242000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"draft":null,"pull_request":null,"body":"The dataset viewer sometimes raises an `IndexError`:\r\n```\r\nIndexError: list index out of range\r\n```\r\nSee:\r\n- huggingface\/datasets-server#1151\r\n - https:\/\/huggingface.co\/datasets\/reddit\/discussions\/5\r\n- huggingface\/datasets-server#1118\r\n - https:\/\/huggingface.co\/datasets\/krr-oxford\/OntoLAMA\/discussions\/1\r\n- https:\/\/huggingface.co\/datasets\/hyperpartisan_news_detection\/discussions\/3\r\n- https:\/\/huggingface.co\/datasets\/um005\/discussions\/2\r\n- https:\/\/huggingface.co\/datasets\/tapaco\/discussions\/2\r\n- https:\/\/huggingface.co\/datasets\/common_language\/discussions\/3\r\n- 
https:\/\/huggingface.co\/datasets\/pass\/discussions\/1\r\n\r\nAfter investigation:\r\n- This happens with data files hosted on Zenodo\r\n- Indeed, there is an underlying 429 HTTP error: Too Many Requests\r\n\r\nNote that some time ago, it also happened with data files hosted on Google Drive. See:\r\n - #4581\r\n - #4580 \r\n\r\nThe reason then was that there was a 403 HTTP error: Forbidden\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5862\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5862\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5861","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5861\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5861\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5861\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5861","id":1709807340,"node_id":"PR_kwDODunzps5Qf55q","number":5861,"title":"Better error message when combining dataset dicts instead of datasets","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
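Since the root cause identified above is rate limiting (HTTP 429), one generic mitigation is to retry with backoff, honoring the server's `Retry-After` header when present. The sketch below is hypothetical: it is not code from `datasets` or `datasets-server`, and the function name and URL are placeholders.

```python
# Hypothetical sketch: retry a rate-limited (HTTP 429) download with backoff.
import time
import requests

def fetch_with_backoff(url: str, max_retries: int = 5) -> bytes:
    for attempt in range(max_retries):
        response = requests.get(url)
        if response.status_code != 429:
            response.raise_for_status()  # surface 403s etc. instead of IndexError
            return response.content
        # Honor Retry-After (assumed to be in seconds) if the server sends it,
        # otherwise fall back to exponential backoff: 1s, 2s, 4s, ...
        delay = float(response.headers.get("Retry-After", 2 ** attempt))
        time.sleep(delay)
    raise RuntimeError(f"Still rate-limited after {max_retries} attempts: {url}")
```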
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007167 \/ 0.011353 (-0.004185) | 0.004914 \/ 0.011008 (-0.006094) | 0.096858 \/ 0.038508 (0.058350) | 0.033468 \/ 0.023109 (0.010359) | 0.297276 \/ 0.275898 (0.021378) | 0.344289 \/ 0.323480 (0.020809) | 0.005703 \/ 0.007986 (-0.002282) | 0.003972 \/ 0.004328 (-0.000357) | 0.075191 \/ 0.004250 (0.070940) | 0.046247 \/ 0.037052 (0.009194) | 0.317857 \/ 0.258489 (0.059368) | 0.347263 \/ 0.293841 (0.053422) | 0.035017 \/ 0.128546 (-0.093529) | 0.012036 \/ 0.075646 (-0.063611) | 0.332522 \/ 0.419271 (-0.086750) | 0.050188 \/ 0.043533 (0.006655) | 0.296627 \/ 0.255139 (0.041488) | 0.319196 \/ 0.283200 (0.035997) | 0.101100 \/ 0.141683 (-0.040583) | 1.484536 \/ 1.452155 (0.032382) | 1.606364 \/ 1.492716 (0.113648) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.203954 \/ 0.018006 (0.185948) | 0.436505 \/ 0.000490 (0.436015) | 0.003853 \/ 0.000200 (0.003654) | 0.000079 \/ 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025834 \/ 0.037411 (-0.011578) | 0.105759 \/ 0.014526 (0.091233) | 0.114289 \/ 0.176557 (-0.062268) | 0.174388 \/ 0.737135 (-0.562748) | 0.122248 \/ 0.296338 (-0.174090) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.404218 \/ 0.215209 (0.189009) | 4.027900 \/ 2.077655 (1.950245) | 1.854757 \/ 1.504120 (0.350637) | 1.668882 \/ 1.541195 (0.127687) | 1.731451 \/ 1.468490 (0.262961) | 0.707843 \/ 4.584777 (-3.876934) | 
3.756386 \/ 3.745712 (0.010674) | 2.067751 \/ 5.269862 (-3.202110) | 1.313039 \/ 4.565676 (-3.252638) | 0.086442 \/ 0.424275 (-0.337833) | 0.012329 \/ 0.007607 (0.004722) | 0.505964 \/ 0.226044 (0.279919) | 5.050788 \/ 2.268929 (2.781860) | 2.353936 \/ 55.444624 (-53.090688) | 2.055560 \/ 6.876477 (-4.820917) | 2.162948 \/ 2.142072 (0.020876) | 0.850532 \/ 4.805227 (-3.954696) | 0.168560 \/ 6.500664 (-6.332104) | 0.063143 \/ 0.075469 (-0.012326) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.182723 \/ 1.841788 (-0.659065) | 14.779342 \/ 8.074308 (6.705034) | 14.461572 \/ 10.191392 (4.270180) | 0.163120 \/ 0.680424 (-0.517303) | 0.017978 \/ 0.534201 (-0.516223) | 0.419168 \/ 0.579283 (-0.160115) | 0.420955 \/ 0.434364 (-0.013409) | 0.509710 \/ 0.540337 (-0.030628) | 0.619586 \/ 1.386936 (-0.767350) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006804 \/ 0.011353 (-0.004549) | 0.005136 \/ 0.011008 (-0.005872) | 0.074910 \/ 0.038508 (0.036402) | 0.032552 \/ 0.023109 (0.009443) | 0.374998 \/ 0.275898 (0.099100) | 0.399219 \/ 0.323480 (0.075739) | 0.005615 \/ 0.007986 (-0.002371) | 0.004118 \/ 0.004328 (-0.000210) | 0.074219 \/ 0.004250 (0.069969) | 0.045924 \/ 0.037052 (0.008871) | 0.383228 \/ 0.258489 (0.124739) | 0.407195 \/ 0.293841 (0.113354) | 0.035460 \/ 0.128546 (-0.093086) | 0.012460 \/ 0.075646 (-0.063187) | 0.087077 \/ 0.419271 (-0.332195) | 0.050507 \/ 0.043533 (0.006974) | 0.369001 \/ 0.255139 (0.113862) | 0.385761 \/ 0.283200 (0.102561) | 0.106999 \/ 0.141683 (-0.034684) | 1.465456 \/ 1.452155 (0.013302) | 1.556962 \/ 1.492716 (0.064246) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.214926 \/ 0.018006 (0.196920) | 0.436893 \/ 0.000490 (0.436403) | 0.003388 \/ 0.000200 (0.003188) | 0.000093 \/ 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029919 \/ 0.037411 (-0.007492) | 0.110859 \/ 0.014526 (0.096333) | 0.120617 \/ 0.176557 (-0.055939) | 0.171781 \/ 0.737135 (-0.565355) | 0.125627 \/ 0.296338 (-0.170712) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.436024 \/ 0.215209 (0.220815) | 4.359167 \/ 2.077655 (2.281512) | 2.188399 \/ 1.504120 (0.684279) | 2.001196 \/ 1.541195 (0.460001) | 2.023710 \/ 1.468490 (0.555220) | 0.713799 \/ 4.584777 (-3.870978) | 
3.832217 \/ 3.745712 (0.086504) | 3.269351 \/ 5.269862 (-2.000510) | 1.534608 \/ 4.565676 (-3.031068) | 0.088505 \/ 0.424275 (-0.335770) | 0.012345 \/ 0.007607 (0.004738) | 0.542446 \/ 0.226044 (0.316401) | 5.377757 \/ 2.268929 (3.108828) | 2.659837 \/ 55.444624 (-52.784787) | 2.272356 \/ 6.876477 (-4.604120) | 2.297289 \/ 2.142072 (0.155217) | 0.855276 \/ 4.805227 (-3.949952) | 0.170666 \/ 6.500664 (-6.329998) | 0.064549 \/ 0.075469 (-0.010920) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.255938 \/ 1.841788 (-0.585850) | 15.151471 \/ 8.074308 (7.077163) | 12.905762 \/ 10.191392 (2.714370) | 0.162425 \/ 0.680424 (-0.517999) | 0.017504 \/ 0.534201 (-0.516697) | 0.448671 \/ 0.579283 (-0.130612) | 0.422424 \/ 0.434364 (-0.011940) | 0.551772 \/ 0.540337 (0.011434) | 0.649115 \/ 1.386936 (-0.737821) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#be73d9f192149727c5542ff257df81b03024fa39 \"CML watermark\")\n","Having those different checks helps providing an appropriate error message.\r\n\r\nIf the input is a dict, we suggest to select a split. If the input lists is a mix of iterable and non-iterable, we mention that it must be one or the other.","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006559 \/ 0.011353 (-0.004794) | 0.004569 \/ 0.011008 (-0.006439) | 0.104503 \/ 0.038508 (0.065995) | 0.028220 \/ 0.023109 (0.005111) | 0.365507 \/ 0.275898 (0.089609) | 0.400238 \/ 0.323480 (0.076758) | 0.004968 \/ 0.007986 (-0.003017) | 0.003271 \/ 0.004328 (-0.001057) | 0.082804 \/ 0.004250 (0.078554) | 0.036299 \/ 0.037052 (-0.000754) | 0.361201 \/ 0.258489 (0.102712) | 0.410962 \/ 0.293841 (0.117121) | 0.030423 \/ 0.128546 (-0.098123) | 0.011612 \/ 0.075646 (-0.064034) | 0.331820 \/ 0.419271 (-0.087452) | 0.043822 \/ 0.043533 (0.000289) | 0.356242 \/ 0.255139 (0.101103) | 0.393035 \/ 0.283200 (0.109836) | 0.088426 \/ 0.141683 (-0.053257) | 1.484139 \/ 1.452155 (0.031984) | 1.566712 \/ 1.492716 (0.073995) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.195887 \/ 0.018006 (0.177880) | 0.402720 \/ 0.000490 (0.402231) | 0.003516 \/ 0.000200 (0.003316) | 0.000075 \/ 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.023270 \/ 0.037411 (-0.014141) | 0.095834 \/ 0.014526 (0.081308) | 0.102924 \/ 0.176557 (-0.073632) | 0.161397 \/ 0.737135 (-0.575738) | 0.105225 \/ 0.296338 (-0.191114) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.451701 \/ 0.215209 (0.236491) | 4.495171 \/ 2.077655 (2.417517) | 2.223203 \/ 1.504120 (0.719083) | 2.035533 \/ 1.541195 (0.494338) | 2.076182 \/ 1.468490 (0.607692) | 0.697317 \/ 4.584777 (-3.887460) | 
3.406309 \/ 3.745712 (-0.339403) | 1.847179 \/ 5.269862 (-3.422683) | 1.158762 \/ 4.565676 (-3.406914) | 0.083067 \/ 0.424275 (-0.341208) | 0.012453 \/ 0.007607 (0.004846) | 0.546502 \/ 0.226044 (0.320458) | 5.455712 \/ 2.268929 (3.186784) | 2.654142 \/ 55.444624 (-52.790483) | 2.298722 \/ 6.876477 (-4.577755) | 2.383467 \/ 2.142072 (0.241395) | 0.805950 \/ 4.805227 (-3.999278) | 0.152479 \/ 6.500664 (-6.348185) | 0.066784 \/ 0.075469 (-0.008685) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.239129 \/ 1.841788 (-0.602659) | 13.603707 \/ 8.074308 (5.529398) | 14.062004 \/ 10.191392 (3.870612) | 0.130928 \/ 0.680424 (-0.549495) | 0.016907 \/ 0.534201 (-0.517294) | 0.381614 \/ 0.579283 (-0.197670) | 0.386770 \/ 0.434364 (-0.047594) | 0.455792 \/ 0.540337 (-0.084545) | 0.526092 \/ 1.386936 (-0.860844) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006202 \/ 0.011353 (-0.005151) | 0.004478 \/ 0.011008 (-0.006531) | 0.076492 \/ 0.038508 (0.037984) | 0.026703 \/ 0.023109 (0.003594) | 0.355134 \/ 0.275898 (0.079236) | 0.391207 \/ 0.323480 (0.067727) | 0.004852 \/ 0.007986 (-0.003133) | 0.003271 \/ 0.004328 (-0.001057) | 0.075080 \/ 0.004250 (0.070830) | 0.038803 \/ 0.037052 (0.001750) | 0.359530 \/ 0.258489 (0.101041) | 0.409044 \/ 0.293841 (0.115203) | 0.030366 \/ 0.128546 (-0.098180) | 0.011544 \/ 0.075646 (-0.064102) | 0.084849 \/ 0.419271 (-0.334423) | 0.040076 \/ 0.043533 (-0.003457) | 0.357359 \/ 0.255139 (0.102220) | 0.384075 \/ 0.283200 (0.100875) | 0.089130 \/ 0.141683 (-0.052552) | 1.520400 \/ 1.452155 (0.068246) | 1.604403 \/ 1.492716 (0.111687) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.257127 \/ 0.018006 (0.239121) | 0.403691 \/ 0.000490 (0.403202) | 0.006894 \/ 0.000200 (0.006694) | 0.000088 \/ 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.024653 \/ 0.037411 (-0.012758) | 0.098834 \/ 0.014526 (0.084309) | 0.107276 \/ 0.176557 (-0.069281) | 0.158256 \/ 0.737135 (-0.578879) | 0.111339 \/ 0.296338 (-0.184999) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.445006 \/ 0.215209 (0.229797) | 4.452953 \/ 2.077655 (2.375299) | 2.168291 \/ 1.504120 (0.664171) | 1.969457 \/ 1.541195 (0.428262) | 2.003505 \/ 1.468490 (0.535015) | 0.695857 \/ 4.584777 (-3.888920) | 
3.433424 \/ 3.745712 (-0.312288) | 2.466977 \/ 5.269862 (-2.802885) | 1.528167 \/ 4.565676 (-3.037509) | 0.082425 \/ 0.424275 (-0.341850) | 0.012470 \/ 0.007607 (0.004863) | 0.559039 \/ 0.226044 (0.332995) | 5.609496 \/ 2.268929 (3.340568) | 2.602898 \/ 55.444624 (-52.841726) | 2.273971 \/ 6.876477 (-4.602506) | 2.303370 \/ 2.142072 (0.161298) | 0.803875 \/ 4.805227 (-4.001352) | 0.151069 \/ 6.500664 (-6.349595) | 0.067956 \/ 0.075469 (-0.007513) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.334443 \/ 1.841788 (-0.507345) | 13.773252 \/ 8.074308 (5.698944) | 13.007042 \/ 10.191392 (2.815650) | 0.127939 \/ 0.680424 (-0.552485) | 0.016412 \/ 0.534201 (-0.517789) | 0.374744 \/ 0.579283 (-0.204539) | 0.396912 \/ 0.434364 (-0.037452) | 0.443197 \/ 0.540337 (-0.097140) | 0.528338 \/ 1.386936 (-0.858598) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#51d9f2a3064aa89a780e3d02c6cc34000c51c4fb \"CML watermark\")\n","Just modified it to use only one loop. I think I managed to keep it readable as well","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007382 \/ 0.011353 (-0.003971) | 0.005143 \/ 0.011008 (-0.005865) | 0.097635 \/ 0.038508 (0.059127) | 0.034726 \/ 0.023109 (0.011616) | 0.315556 \/ 0.275898 (0.039658) | 0.355951 \/ 0.323480 (0.032472) | 0.006055 \/ 0.007986 (-0.001931) | 0.004264 \/ 0.004328 (-0.000065) | 0.073636 \/ 0.004250 (0.069386) | 0.050480 \/ 0.037052 (0.013428) | 0.316031 \/ 0.258489 (0.057542) | 0.363933 \/ 0.293841 (0.070092) | 0.035138 \/ 0.128546 (-0.093408) | 0.012407 \/ 0.075646 (-0.063239) | 0.333677 \/ 0.419271 (-0.085595) | 0.050586 \/ 0.043533 (0.007053) | 0.309507 \/ 0.255139 (0.054369) | 0.327043 \/ 0.283200 (0.043844) | 0.108975 \/ 0.141683 (-0.032708) | 1.447778 \/ 1.452155 (-0.004377) | 1.519971 \/ 1.492716 (0.027255) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.248770 \/ 0.018006 (0.230764) | 0.603036 \/ 0.000490 (0.602546) | 0.000383 \/ 0.000200 (0.000183) | 0.000058 \/ 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.027094 \/ 0.037411 (-0.010317) | 0.104427 \/ 0.014526 (0.089901) | 0.120627 \/ 0.176557 (-0.055929) | 0.178790 \/ 0.737135 (-0.558346) | 0.124877 \/ 0.296338 (-0.171461) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.414442 \/ 0.215209 (0.199233) | 4.138009 \/ 2.077655 (2.060355) | 1.964642 \/ 1.504120 (0.460523) | 1.775940 \/ 1.541195 (0.234745) | 1.899719 \/ 1.468490 (0.431228) | 0.695406 \/ 4.584777 (-3.889371) | 
3.760470 \/ 3.745712 (0.014758) | 3.906958 \/ 5.269862 (-1.362904) | 2.028164 \/ 4.565676 (-2.537513) | 0.086704 \/ 0.424275 (-0.337571) | 0.012465 \/ 0.007607 (0.004857) | 0.512336 \/ 0.226044 (0.286292) | 5.108587 \/ 2.268929 (2.839659) | 2.435273 \/ 55.444624 (-53.009352) | 2.142387 \/ 6.876477 (-4.734090) | 2.258234 \/ 2.142072 (0.116162) | 0.854035 \/ 4.805227 (-3.951193) | 0.170443 \/ 6.500664 (-6.330222) | 0.065762 \/ 0.075469 (-0.009707) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.187529 \/ 1.841788 (-0.654259) | 15.151164 \/ 8.074308 (7.076856) | 14.577545 \/ 10.191392 (4.386153) | 0.166973 \/ 0.680424 (-0.513450) | 0.017883 \/ 0.534201 (-0.516318) | 0.427607 \/ 0.579283 (-0.151676) | 0.417050 \/ 0.434364 (-0.017314) | 0.508116 \/ 0.540337 (-0.032221) | 0.590173 \/ 1.386936 (-0.796763) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007499 \/ 0.011353 (-0.003854) | 0.005195 \/ 0.011008 (-0.005813) | 0.073600 \/ 0.038508 (0.035091) | 0.033574 \/ 0.023109 (0.010464) | 0.377506 \/ 0.275898 (0.101608) | 0.432752 \/ 0.323480 (0.109272) | 0.006042 \/ 0.007986 (-0.001944) | 0.006427 \/ 0.004328 (0.002098) | 0.071666 \/ 0.004250 (0.067416) | 0.053243 \/ 0.037052 (0.016190) | 0.363972 \/ 0.258489 (0.105483) | 0.454988 \/ 0.293841 (0.161147) | 0.035118 \/ 0.128546 (-0.093428) | 0.012395 \/ 0.075646 (-0.063251) | 0.084308 \/ 0.419271 (-0.334963) | 0.048589 \/ 0.043533 (0.005057) | 0.368036 \/ 0.255139 (0.112897) | 0.399414 \/ 0.283200 (0.116215) | 0.109043 \/ 0.141683 (-0.032640) | 1.462972 \/ 1.452155 (0.010817) | 1.574443 \/ 1.492716 (0.081726) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.215107 \/ 0.018006 (0.197101) | 0.550255 \/ 0.000490 (0.549765) | 0.004630 \/ 0.000200 (0.004430) | 0.000104 \/ 0.000054 (0.000049) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029948 \/ 0.037411 (-0.007463) | 0.111866 \/ 0.014526 (0.097340) | 0.126559 \/ 0.176557 (-0.049997) | 0.181443 \/ 0.737135 (-0.555693) | 0.130559 \/ 0.296338 (-0.165779) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.441410 \/ 0.215209 (0.226201) | 4.403406 \/ 2.077655 (2.325752) | 2.180276 \/ 1.504120 (0.676156) | 2.003729 \/ 1.541195 (0.462534) | 2.079394 \/ 1.468490 (0.610904) | 0.706061 \/ 4.584777 (-3.878716) | 
3.805668 \/ 3.745712 (0.059956) | 3.864941 \/ 5.269862 (-1.404921) | 1.970468 \/ 4.565676 (-2.595208) | 0.086033 \/ 0.424275 (-0.338242) | 0.012261 \/ 0.007607 (0.004654) | 0.550427 \/ 0.226044 (0.324383) | 5.542270 \/ 2.268929 (3.273342) | 2.717047 \/ 55.444624 (-52.727577) | 2.449022 \/ 6.876477 (-4.427455) | 2.549567 \/ 2.142072 (0.407495) | 0.854981 \/ 4.805227 (-3.950247) | 0.169756 \/ 6.500664 (-6.330908) | 0.067082 \/ 0.075469 (-0.008387) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.281369 \/ 1.841788 (-0.560419) | 15.445090 \/ 8.074308 (7.370781) | 13.205652 \/ 10.191392 (3.014260) | 0.170070 \/ 0.680424 (-0.510354) | 0.017815 \/ 0.534201 (-0.516385) | 0.425193 \/ 0.579283 (-0.154090) | 0.425205 \/ 0.434364 (-0.009159) | 0.493561 \/ 0.540337 (-0.046776) | 0.588994 \/ 1.386936 (-0.797942) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#e427105fc68fce04d0f3c74efb942cbf3a65d166 \"CML watermark\")\n","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006345 \/ 0.011353 (-0.005008) | 0.004330 \/ 0.011008 (-0.006678) | 0.096327 \/ 0.038508 (0.057819) | 0.032964 \/ 0.023109 (0.009855) | 0.335600 \/ 0.275898 (0.059702) | 0.365635 \/ 0.323480 (0.042155) | 0.005435 \/ 0.007986 (-0.002551) | 0.005005 \/ 0.004328 (0.000677) | 0.071107 \/ 0.004250 (0.066856) | 0.044363 \/ 0.037052 (0.007311) | 0.339988 \/ 0.258489 (0.081498) | 0.375575 \/ 0.293841 (0.081734) | 0.028343 \/ 0.128546 (-0.100203) | 0.008587 \/ 0.075646 (-0.067059) | 0.324349 \/ 0.419271 (-0.094922) | 0.050105 \/ 0.043533 (0.006573) | 0.327398 \/ 0.255139 (0.072259) | 0.348479 \/ 0.283200 (0.065279) | 0.102357 \/ 0.141683 (-0.039326) | 1.419905 \/ 1.452155 (-0.032250) | 1.534887 \/ 1.492716 (0.042171) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.212418 \/ 0.018006 (0.194412) | 0.433183 \/ 0.000490 (0.432693) | 0.000595 \/ 0.000200 (0.000395) | 0.000062 \/ 0.000054 (0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.027520 \/ 0.037411 (-0.009891) | 0.109503 \/ 0.014526 (0.094977) | 0.118202 \/ 0.176557 (-0.058355) | 0.177236 \/ 0.737135 (-0.559899) | 0.123736 \/ 0.296338 (-0.172602) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.405734 \/ 0.215209 (0.190525) | 4.039566 \/ 2.077655 (1.961911) | 1.838211 \/ 1.504120 (0.334091) | 1.652650 \/ 1.541195 (0.111456) | 1.753488 \/ 1.468490 (0.284998) | 0.525258 \/ 4.584777 (-4.059519) | 
3.704509 \/ 3.745712 (-0.041203) | 1.826794 \/ 5.269862 (-3.443067) | 1.236361 \/ 4.565676 (-3.329315) | 0.065619 \/ 0.424275 (-0.358656) | 0.011606 \/ 0.007607 (0.003999) | 0.505954 \/ 0.226044 (0.279910) | 5.054140 \/ 2.268929 (2.785211) | 2.352587 \/ 55.444624 (-53.092037) | 2.050601 \/ 6.876477 (-4.825875) | 2.097222 \/ 2.142072 (-0.044850) | 0.641044 \/ 4.805227 (-4.164183) | 0.140676 \/ 6.500664 (-6.359988) | 0.063217 \/ 0.075469 (-0.012253) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.177750 \/ 1.841788 (-0.664038) | 14.819346 \/ 8.074308 (6.745038) | 14.085937 \/ 10.191392 (3.894545) | 0.168618 \/ 0.680424 (-0.511806) | 0.017189 \/ 0.534201 (-0.517011) | 0.393415 \/ 0.579283 (-0.185868) | 0.422879 \/ 0.434364 (-0.011485) | 0.477289 \/ 0.540337 (-0.063048) | 0.569078 \/ 1.386936 (-0.817858) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006502 \/ 0.011353 (-0.004850) | 0.004640 \/ 0.011008 (-0.006368) | 0.073272 \/ 0.038508 (0.034764) | 0.033225 \/ 0.023109 (0.010116) | 0.359165 \/ 0.275898 (0.083267) | 0.391659 \/ 0.323480 (0.068179) | 0.005684 \/ 0.007986 (-0.002302) | 0.004045 \/ 0.004328 (-0.000284) | 0.072880 \/ 0.004250 (0.068629) | 0.046260 \/ 0.037052 (0.009208) | 0.361772 \/ 0.258489 (0.103283) | 0.402905 \/ 0.293841 (0.109064) | 0.027732 \/ 0.128546 (-0.100814) | 0.008864 \/ 0.075646 (-0.066783) | 0.081961 \/ 0.419271 (-0.337310) | 0.046170 \/ 0.043533 (0.002637) | 0.364198 \/ 0.255139 (0.109059) | 0.387468 \/ 0.283200 (0.104269) | 0.105456 \/ 0.141683 (-0.036227) | 1.457176 \/ 1.452155 (0.005021) | 1.564899 \/ 1.492716 (0.072183) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.179129 \/ 0.018006 (0.161123) | 0.439699 \/ 0.000490 (0.439209) | 0.002882 \/ 0.000200 (0.002682) | 0.000090 \/ 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029123 \/ 0.037411 (-0.008288) | 0.112046 \/ 0.014526 (0.097520) | 0.122773 \/ 0.176557 (-0.053784) | 0.178404 \/ 0.737135 (-0.558732) | 0.127904 \/ 0.296338 (-0.168434) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.440413 \/ 0.215209 (0.225204) | 4.407334 \/ 2.077655 (2.329680) | 2.112932 \/ 1.504120 (0.608812) | 1.911034 \/ 1.541195 (0.369840) | 2.057168 \/ 1.468490 (0.588677) | 0.525472 \/ 4.584777 (-4.059305) | 
3.738894 \/ 3.745712 (-0.006818) | 1.807592 \/ 5.269862 (-3.462270) | 1.053837 \/ 4.565676 (-3.511839) | 0.066203 \/ 0.424275 (-0.358072) | 0.011965 \/ 0.007607 (0.004358) | 0.541137 \/ 0.226044 (0.315093) | 5.415040 \/ 2.268929 (3.146112) | 2.580476 \/ 55.444624 (-52.864148) | 2.234144 \/ 6.876477 (-4.642333) | 2.306014 \/ 2.142072 (0.163942) | 0.644221 \/ 4.805227 (-4.161006) | 0.142870 \/ 6.500664 (-6.357794) | 0.065015 \/ 0.075469 (-0.010454) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.303465 \/ 1.841788 (-0.538323) | 14.949683 \/ 8.074308 (6.875375) | 14.370871 \/ 10.191392 (4.179478) | 0.142714 \/ 0.680424 (-0.537710) | 0.017372 \/ 0.534201 (-0.516829) | 0.403898 \/ 0.579283 (-0.175385) | 0.424781 \/ 0.434364 (-0.009583) | 0.465984 \/ 0.540337 (-0.074353) | 0.570863 \/ 1.386936 (-0.816074) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#22d1d533e8ab831b1aa1aab3e7d3c72ba42a83e8 \"CML watermark\")\n"],"created_at":1684146984000,"updated_at":1684838413000,"closed_at":1684837978000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5861","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5861","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5861.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5861.patch","merged_at":1684837978000},"body":"close https:\/\/github.com\/huggingface\/datasets\/issues\/5851","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5861\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5861\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5860","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5860\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5860\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5860\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5860","id":1709727460,"node_id":"PR_kwDODunzps5QfojD","number":5860,"title":"Minor tqdm 
optim","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006917 \/ 0.011353 (-0.004436) | 0.004803 \/ 0.011008 (-0.006205) | 0.097082 \/ 0.038508 (0.058574) | 0.035105 \/ 0.023109 (0.011996) | 0.325911 \/ 0.275898 (0.050013) | 0.371858 \/ 0.323480 (0.048378) | 0.006451 \/ 0.007986 (-0.001534) | 0.004421 \/ 0.004328 (0.000093) | 0.075738 \/ 0.004250 (0.071487) | 0.053624 \/ 0.037052 (0.016572) | 0.332661 \/ 0.258489 (0.074172) | 0.372729 \/ 0.293841 (0.078888) | 0.028279 \/ 0.128546 (-0.100267) | 0.009318 \/ 0.075646 (-0.066328) | 0.328505 \/ 0.419271 (-0.090766) | 0.066962 \/ 0.043533 (0.023429) | 0.316863 \/ 0.255139 (0.061724) | 0.344296 \/ 0.283200 (0.061096) | 0.120575 \/ 0.141683 (-0.021108) | 1.457867 \/ 1.452155 (0.005712) | 1.597361 \/ 1.492716 (0.104644) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.296399 \/ 0.018006 (0.278392) | 0.507196 \/ 0.000490 (0.506706) | 0.003036 \/ 0.000200 (0.002836) | 0.000088 \/ 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.028535 \/ 0.037411 (-0.008876) | 0.110566 \/ 0.014526 (0.096040) | 0.122078 \/ 0.176557 (-0.054479) | 0.182926 \/ 0.737135 (-0.554210) | 0.125546 \/ 0.296338 (-0.170792) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.426952 \/ 0.215209 (0.211742) | 4.255608 \/ 2.077655 (2.177953) | 2.063865 \/ 1.504120 (0.559745) | 1.867198 \/ 1.541195 (0.326004) | 2.058236 \/ 1.468490 (0.589746) | 0.525885 \/ 4.584777 (-4.058892) | 
3.723607 \/ 3.745712 (-0.022105) | 1.919144 \/ 5.269862 (-3.350718) | 1.235308 \/ 4.565676 (-3.330368) | 0.066423 \/ 0.424275 (-0.357852) | 0.012045 \/ 0.007607 (0.004438) | 0.528432 \/ 0.226044 (0.302388) | 5.268723 \/ 2.268929 (2.999794) | 2.504071 \/ 55.444624 (-52.940553) | 2.137999 \/ 6.876477 (-4.738477) | 2.229987 \/ 2.142072 (0.087914) | 0.641739 \/ 4.805227 (-4.163488) | 0.142635 \/ 6.500664 (-6.358029) | 0.065649 \/ 0.075469 (-0.009820) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.182710 \/ 1.841788 (-0.659078) | 15.339777 \/ 8.074308 (7.265469) | 14.722308 \/ 10.191392 (4.530916) | 0.145914 \/ 0.680424 (-0.534510) | 0.017861 \/ 0.534201 (-0.516340) | 0.393092 \/ 0.579283 (-0.186191) | 0.431179 \/ 0.434364 (-0.003185) | 0.485712 \/ 0.540337 (-0.054625) | 0.602634 \/ 1.386936 (-0.784302) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006792 \/ 0.011353 (-0.004561) | 0.005118 \/ 0.011008 (-0.005890) | 0.073440 \/ 0.038508 (0.034932) | 0.033751 \/ 0.023109 (0.010642) | 0.389243 \/ 0.275898 (0.113345) | 0.397083 \/ 0.323480 (0.073603) | 0.005989 \/ 0.007986 (-0.001997) | 0.004289 \/ 0.004328 (-0.000040) | 0.073228 \/ 0.004250 (0.068977) | 0.053490 \/ 0.037052 (0.016438) | 0.396070 \/ 0.258489 (0.137581) | 0.415134 \/ 0.293841 (0.121293) | 0.028649 \/ 0.128546 (-0.099897) | 0.009159 \/ 0.075646 (-0.066487) | 0.080813 \/ 0.419271 (-0.338458) | 0.048200 \/ 0.043533 (0.004667) | 0.388009 \/ 0.255139 (0.132870) | 0.382174 \/ 0.283200 (0.098975) | 0.107807 \/ 0.141683 (-0.033876) | 1.467276 \/ 1.452155 (0.015121) | 1.568091 \/ 1.492716 (0.075375) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.328030 \/ 0.018006 (0.310024) | 0.498058 \/ 0.000490 (0.497568) | 0.002513 \/ 0.000200 (0.002313) | 0.000099 \/ 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029835 \/ 0.037411 (-0.007576) | 0.113859 \/ 0.014526 (0.099333) | 0.130813 \/ 0.176557 (-0.045743) | 0.183646 \/ 0.737135 (-0.553490) | 0.136561 \/ 0.296338 (-0.159777) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.438901 \/ 0.215209 (0.223692) | 4.376426 \/ 2.077655 (2.298771) | 2.220932 \/ 1.504120 (0.716812) | 2.043585 \/ 1.541195 (0.502390) | 2.161383 \/ 1.468490 (0.692893) | 0.523224 \/ 4.584777 (-4.061553) | 
3.730589 \/ 3.745712 (-0.015123) | 1.859602 \/ 5.269862 (-3.410260) | 1.073415 \/ 4.565676 (-3.492261) | 0.066363 \/ 0.424275 (-0.357912) | 0.012491 \/ 0.007607 (0.004884) | 0.542052 \/ 0.226044 (0.316008) | 5.426246 \/ 2.268929 (3.157318) | 2.673884 \/ 55.444624 (-52.770740) | 2.372611 \/ 6.876477 (-4.503865) | 2.482216 \/ 2.142072 (0.340143) | 0.705669 \/ 4.805227 (-4.099558) | 0.141075 \/ 6.500664 (-6.359589) | 0.065339 \/ 0.075469 (-0.010130) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.316403 \/ 1.841788 (-0.525385) | 15.832870 \/ 8.074308 (7.758562) | 13.307045 \/ 10.191392 (3.115653) | 0.147258 \/ 0.680424 (-0.533166) | 0.017966 \/ 0.534201 (-0.516235) | 0.414396 \/ 0.579283 (-0.164887) | 0.431801 \/ 0.434364 (-0.002563) | 0.465483 \/ 0.540337 (-0.074855) | 0.577850 \/ 1.386936 (-0.809086) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#c795c7e332a7c850c3e725f2034d4894b5e314f7 \"CML watermark\")\n","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006368 \/ 0.011353 (-0.004985) | 0.004274 \/ 0.011008 (-0.006734) | 0.098799 \/ 0.038508 (0.060291) | 0.029096 \/ 0.023109 (0.005986) | 0.308009 \/ 0.275898 (0.032111) | 0.345701 \/ 0.323480 (0.022221) | 0.005312 \/ 0.007986 (-0.002674) | 0.003435 \/ 0.004328 (-0.000894) | 0.075912 \/ 0.004250 (0.071662) | 0.041993 \/ 0.037052 (0.004941) | 0.320075 \/ 0.258489 (0.061586) | 0.347506 \/ 0.293841 (0.053665) | 0.025456 \/ 0.128546 (-0.103091) | 0.008461 \/ 0.075646 (-0.067185) | 0.322823 \/ 0.419271 (-0.096448) | 0.044650 \/ 0.043533 (0.001117) | 0.314118 \/ 0.255139 (0.058979) | 0.333436 \/ 0.283200 (0.050237) | 0.093811 \/ 0.141683 (-0.047871) | 1.464464 \/ 1.452155 (0.012310) | 1.548098 \/ 1.492716 (0.055382) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.015905 \/ 0.018006 (-0.002101) | 0.427847 \/ 0.000490 (0.427357) | 0.007600 \/ 0.000200 (0.007400) | 0.000421 \/ 0.000054 (0.000366) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.024530 \/ 0.037411 (-0.012882) | 0.099907 \/ 0.014526 (0.085381) | 0.107282 \/ 0.176557 (-0.069275) | 0.168332 \/ 0.737135 (-0.568804) | 0.109875 \/ 0.296338 (-0.186464) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.451064 \/ 0.215209 (0.235855) | 4.491434 \/ 2.077655 (2.413779) | 2.253251 \/ 1.504120 (0.749131) | 2.086740 \/ 1.541195 (0.545545) | 2.133288 \/ 1.468490 (0.664798) | 0.558801 \/ 4.584777 (-4.025976) | 
3.463525 \/ 3.745712 (-0.282187) | 1.747657 \/ 5.269862 (-3.522205) | 1.005465 \/ 4.565676 (-3.560211) | 0.068341 \/ 0.424275 (-0.355934) | 0.012521 \/ 0.007607 (0.004914) | 0.567002 \/ 0.226044 (0.340957) | 5.689529 \/ 2.268929 (3.420601) | 2.700562 \/ 55.444624 (-52.744062) | 2.384888 \/ 6.876477 (-4.491589) | 2.503160 \/ 2.142072 (0.361088) | 0.667107 \/ 4.805227 (-4.138120) | 0.137253 \/ 6.500664 (-6.363412) | 0.068300 \/ 0.075469 (-0.007170) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.202916 \/ 1.841788 (-0.638872) | 14.163393 \/ 8.074308 (6.089085) | 14.402463 \/ 10.191392 (4.211071) | 0.145273 \/ 0.680424 (-0.535151) | 0.016996 \/ 0.534201 (-0.517205) | 0.363520 \/ 0.579283 (-0.215763) | 0.421595 \/ 0.434364 (-0.012769) | 0.438413 \/ 0.540337 (-0.101925) | 0.508615 \/ 1.386936 (-0.878321) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006419 \/ 0.011353 (-0.004934) | 0.004346 \/ 0.011008 (-0.006662) | 0.076356 \/ 0.038508 (0.037848) | 0.029370 \/ 0.023109 (0.006260) | 0.371046 \/ 0.275898 (0.095148) | 0.398279 \/ 0.323480 (0.074799) | 0.005258 \/ 0.007986 (-0.002728) | 0.003528 \/ 0.004328 (-0.000800) | 0.076787 \/ 0.004250 (0.072537) | 0.041575 \/ 0.037052 (0.004522) | 0.362319 \/ 0.258489 (0.103830) | 0.402134 \/ 0.293841 (0.108293) | 0.025633 \/ 0.128546 (-0.102913) | 0.008826 \/ 0.075646 (-0.066820) | 0.082380 \/ 0.419271 (-0.336892) | 0.041655 \/ 0.043533 (-0.001878) | 0.357583 \/ 0.255139 (0.102444) | 0.383486 \/ 0.283200 (0.100287) | 0.093682 \/ 0.141683 (-0.048001) | 1.488522 \/ 1.452155 (0.036367) | 1.576090 \/ 1.492716 (0.083373) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.185556 \/ 0.018006 (0.167550) | 0.431345 \/ 0.000490 (0.430855) | 0.002290 \/ 0.000200 (0.002090) | 0.000082 \/ 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.026030 \/ 0.037411 (-0.011382) | 0.102889 \/ 0.014526 (0.088364) | 0.109541 \/ 0.176557 (-0.067015) | 0.161050 \/ 0.737135 (-0.576085) | 0.113525 \/ 0.296338 (-0.182814) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.445301 \/ 0.215209 (0.230092) | 4.437320 \/ 2.077655 (2.359666) | 2.174181 \/ 1.504120 (0.670061) | 1.977440 \/ 1.541195 (0.436245) | 2.036323 \/ 1.468490 (0.567832) | 0.554227 \/ 4.584777 (-4.030550) | 
3.462746 \/ 3.745712 (-0.282966) | 1.765257 \/ 5.269862 (-3.504604) | 1.014515 \/ 4.565676 (-3.551161) | 0.068391 \/ 0.424275 (-0.355884) | 0.013154 \/ 0.007607 (0.005546) | 0.546696 \/ 0.226044 (0.320652) | 5.490628 \/ 2.268929 (3.221699) | 2.611947 \/ 55.444624 (-52.832677) | 2.282659 \/ 6.876477 (-4.593818) | 2.333972 \/ 2.142072 (0.191899) | 0.663140 \/ 4.805227 (-4.142087) | 0.137996 \/ 6.500664 (-6.362668) | 0.069063 \/ 0.075469 (-0.006407) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.332147 \/ 1.841788 (-0.509641) | 14.781592 \/ 8.074308 (6.707284) | 13.399190 \/ 10.191392 (3.207798) | 0.139370 \/ 0.680424 (-0.541054) | 0.016742 \/ 0.534201 (-0.517459) | 0.364138 \/ 0.579283 (-0.215146) | 0.402479 \/ 0.434364 (-0.031885) | 0.427591 \/ 0.540337 (-0.112746) | 0.520864 \/ 1.386936 (-0.866072) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#a8279677b58b93f77995c7da67aea2a04b6a7395 \"CML watermark\")\n"],"created_at":1684144177000,"updated_at":1684349206000,"closed_at":1684348775000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5860","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5860","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5860.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5860.patch","merged_at":1684348775000},"body":"Don't create a tqdm progress bar when `disable_tqdm` is passed to `map_nested`.\r\n\r\nOn my side it sped up some iterable datasets by ~30% when `map_nested` is used extensively to recursively tensorize python dicts.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5860\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5860\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5859","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5859\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5859\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5859\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5859","id":1709554829,"node_id":"PR_kwDODunzps5QfDLC","number":5859,"title":"Raise TypeError when indexing a dataset with 
bool","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","@lhoestq any idea why this only fails (CI integration fails are unrelated) in \"Build PR Documentation \/ build \/ build_pr_documentation\" (which uses Python 3.8), with message:\r\n```\r\nTypeError: Type subscription requires python >= 3.9\r\n```\r\nwhereas the CI is green for unit tests, which use Python 3.7?","Hmm I don't know sorry :\/","@lhoestq I am afraid I have to remove the generics I created for numpy and pandas (no subscriptable until Python 3.9) and just leave:\r\n```python\r\nListLike = Union[List[T], Tuple[T, ...]]\r\n```","Ok sounds good - no need to spend more time on this","I will merge once the CI is finished. The integration errors are unrelated: `502 Server Error: Bad Gateway`","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006637 \/ 0.011353 (-0.004716) | 0.004578 \/ 0.011008 (-0.006430) | 0.097346 \/ 0.038508 (0.058838) | 0.034171 \/ 0.023109 (0.011062) | 0.315060 \/ 0.275898 (0.039162) | 0.354386 \/ 0.323480 (0.030907) | 0.005778 \/ 0.007986 (-0.002207) | 0.004123 \/ 0.004328 (-0.000206) | 0.073839 \/ 0.004250 (0.069589) | 0.046418 \/ 0.037052 (0.009366) | 0.325910 \/ 0.258489 (0.067421) | 0.368909 \/ 0.293841 (0.075068) | 0.027975 \/ 0.128546 (-0.100571) | 0.008885 \/ 0.075646 (-0.066761) | 0.327956 \/ 0.419271 (-0.091316) | 0.049911 \/ 0.043533 (0.006378) | 0.309424 \/ 0.255139 (0.054285) | 0.346543 \/ 0.283200 (0.063343) | 0.103429 \/ 0.141683 (-0.038253) | 1.517606 \/ 1.452155 (0.065451) | 1.536685 \/ 1.492716 (0.043969) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.211552 \/ 0.018006 (0.193546) | 0.449583 \/ 0.000490 (0.449094) | 0.002949 \/ 0.000200 (0.002750) | 0.000140 \/ 0.000054 (0.000086) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.027603 \/ 0.037411 (-0.009808) | 0.108873 \/ 0.014526 (0.094347) | 0.117990 \/ 0.176557 (-0.058567) | 0.174202 \/ 0.737135 (-0.562933) | 0.123793 \/ 0.296338 (-0.172545) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.418449 \/ 0.215209 (0.203240) | 4.177753 \/ 2.077655 (2.100099) | 1.923446 \/ 1.504120 (0.419326) | 1.720576 \/ 1.541195 (0.179381) | 1.783723 \/ 1.468490 (0.315232) | 0.530068 \/ 4.584777 (-4.054709) | 
3.709410 \/ 3.745712 (-0.036302) | 1.863924 \/ 5.269862 (-3.405938) | 1.149906 \/ 4.565676 (-3.415770) | 0.066595 \/ 0.424275 (-0.357680) | 0.011733 \/ 0.007607 (0.004126) | 0.519249 \/ 0.226044 (0.293205) | 5.179676 \/ 2.268929 (2.910748) | 2.389488 \/ 55.444624 (-53.055137) | 2.060006 \/ 6.876477 (-4.816471) | 2.160668 \/ 2.142072 (0.018596) | 0.641081 \/ 4.805227 (-4.164146) | 0.141962 \/ 6.500664 (-6.358702) | 0.063146 \/ 0.075469 (-0.012323) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.197424 \/ 1.841788 (-0.644364) | 14.915321 \/ 8.074308 (6.841013) | 14.792302 \/ 10.191392 (4.600910) | 0.145436 \/ 0.680424 (-0.534988) | 0.017669 \/ 0.534201 (-0.516532) | 0.399060 \/ 0.579283 (-0.180223) | 0.416282 \/ 0.434364 (-0.018082) | 0.498392 \/ 0.540337 (-0.041946) | 0.600242 \/ 1.386936 (-0.786694) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007246 \/ 0.011353 (-0.004106) | 0.005353 \/ 0.011008 (-0.005656) | 0.076357 \/ 0.038508 (0.037849) | 0.037662 \/ 0.023109 (0.014553) | 0.387862 \/ 0.275898 (0.111964) | 0.421610 \/ 0.323480 (0.098130) | 0.006424 \/ 0.007986 (-0.001561) | 0.004397 \/ 0.004328 (0.000069) | 0.074212 \/ 0.004250 (0.069961) | 0.054147 \/ 0.037052 (0.017095) | 0.393171 \/ 0.258489 (0.134682) | 0.424082 \/ 0.293841 (0.130241) | 0.029001 \/ 0.128546 (-0.099546) | 0.009381 \/ 0.075646 (-0.066265) | 0.082562 \/ 0.419271 (-0.336710) | 0.048004 \/ 0.043533 (0.004472) | 0.386895 \/ 0.255139 (0.131756) | 0.386104 \/ 0.283200 (0.102904) | 0.113714 \/ 0.141683 (-0.027969) | 1.435601 \/ 1.452155 (-0.016553) | 1.554940 \/ 1.492716 (0.062224) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.179288 \/ 0.018006 (0.161282) | 0.455301 \/ 0.000490 (0.454811) | 0.001469 \/ 0.000200 (0.001269) | 0.000091 \/ 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030928 \/ 0.037411 (-0.006484) | 0.117833 \/ 0.014526 (0.103307) | 0.125088 \/ 0.176557 (-0.051468) | 0.178906 \/ 0.737135 (-0.558230) | 0.131264 \/ 0.296338 (-0.165075) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.436900 \/ 0.215209 (0.221691) | 4.366094 \/ 2.077655 (2.288439) | 2.184398 \/ 1.504120 (0.680278) | 1.992779 \/ 1.541195 (0.451584) | 2.055260 \/ 1.468490 (0.586770) | 0.524136 \/ 4.584777 (-4.060641) | 
3.750535 \/ 3.745712 (0.004823) | 2.985095 \/ 5.269862 (-2.284767) | 1.400291 \/ 4.565676 (-3.165385) | 0.065921 \/ 0.424275 (-0.358354) | 0.012110 \/ 0.007607 (0.004502) | 0.538239 \/ 0.226044 (0.312195) | 5.380613 \/ 2.268929 (3.111685) | 2.637509 \/ 55.444624 (-52.807116) | 2.352265 \/ 6.876477 (-4.524212) | 2.409829 \/ 2.142072 (0.267756) | 0.640428 \/ 4.805227 (-4.164799) | 0.142070 \/ 6.500664 (-6.358594) | 0.068171 \/ 0.075469 (-0.007298) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.280080 \/ 1.841788 (-0.561707) | 15.588799 \/ 8.074308 (7.514491) | 14.648596 \/ 10.191392 (4.457204) | 0.147027 \/ 0.680424 (-0.533397) | 0.018981 \/ 0.534201 (-0.515220) | 0.394796 \/ 0.579283 (-0.184487) | 0.423686 \/ 0.434364 (-0.010678) | 0.467376 \/ 0.540337 (-0.072961) | 0.562247 \/ 1.386936 (-0.824689) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#680162303f4c5dae6ad2edef6b3efadded7d37bd \"CML watermark\")\n"],"created_at":1684138122000,"updated_at":1685032284000,"closed_at":1685031797000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5859","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5859","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5859.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5859.patch","merged_at":1685031797000},"body":"Fix #5858.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5859\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5859\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5858","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5858\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5858\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5858\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5858","id":1709332632,"node_id":"I_kwDODunzps5l4liY","number":5858,"title":"Throw an error when dataset improperly 
indexed","user":{"login":"sarahwie","id":8027676,"node_id":"MDQ6VXNlcjgwMjc2NzY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8027676?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sarahwie","html_url":"https:\/\/github.com\/sarahwie","followers_url":"https:\/\/api.github.com\/users\/sarahwie\/followers","following_url":"https:\/\/api.github.com\/users\/sarahwie\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sarahwie\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sarahwie\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sarahwie\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sarahwie\/orgs","repos_url":"https:\/\/api.github.com\/users\/sarahwie\/repos","events_url":"https:\/\/api.github.com\/users\/sarahwie\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sarahwie\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462.0,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting, @sarahwie.\r\n\r\nPlease note that in `datasets` we do not have vectorized operation like `pandas`. 
Therefore, your equality comparisons above are `False`:\r\n- For example: `squad['question']` returns a `list`, and this list is not equal to `\"Who was the Norse leader?\"`\r\n\r\nThe `False` value is equivalent to `0` when indexing a dataset, which is why you get the first element (with index 0): \r\n- For example: `squad[False]` is equivalent to `squad[0]`\r\n\r\nMaybe we should raise an exception instead of assuming that `False` is equivalent to `0` (and `True` is equivalent to `1`) in the context of indexing."],"created_at":1684127753000,"updated_at":1685031799000,"closed_at":1685031799000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nPandas-style subset indexing on a dataset does not throw an error, when maybe it should. Instead, it returns the first instance of the dataset regardless of the index condition.\n\n### Steps to reproduce the bug\n\nSteps to reproduce the behavior:\r\n\r\n1. `squad = datasets.load_dataset(\"squad_v2\", split=\"validation\")`\r\n2. `item = squad[squad['question'] == \"Who was the Norse leader?\"]`\r\nor `it = squad[squad['id'] == '56ddde6b9a695914005b962b']`\r\n3. returns the first item in the dataset, which does not satisfy the above conditions:\r\n\r\n`{'id': '56ddde6b9a695914005b9628', 'title': 'Normans', 'context': 'The Normans (Norman: Nourmands; French: Normands; Latin: Normanni) were the people who in the 10th and 11th centuries gave their name to Normandy, a region in France. They were descended from Norse (\"Norman\" comes from \"Norseman\") raiders and pirates from Denmark, Iceland and Norway who, under their leader Rollo, agreed to swear fealty to King Charles III of West Francia. Through generations of assimilation and mixing with the native Frankish and Roman-Gaulish populations, their descendants would gradually merge with the Carolingian-based cultures of West Francia. 
The distinct cultural and ethnic identity of the Normans emerged initially in the first half of the 10th century, and it continued to evolve over the succeeding centuries.', 'question': 'In what country is Normandy located?', 'answers': {'text': ['France', 'France', 'France', 'France'], 'answer_start': [159, 159, 159, 159]}}`\n\n### Expected behavior\n\nShould either throw an error message, or return the dataset item that satisfies the condition.\n\n### Environment info\n\n- `datasets` version: 2.9.0\r\n- Platform: macOS-13.3.1-arm64-arm-64bit\r\n- Python version: 3.10.8\r\n- PyArrow version: 10.0.1\r\n- Pandas version: 1.5.3","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5858\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5858\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5857","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5857\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5857\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5857\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5857","id":1709326622,"node_id":"I_kwDODunzps5l4kEe","number":5857,"title":"Adding chemistry dataset\/models in huggingface","user":{"login":"knc6","id":16902896,"node_id":"MDQ6VXNlcjE2OTAyODk2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16902896?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/knc6","html_url":"https:\/\/github.com\/knc6","followers_url":"https:\/\/api.github.com\/users\/knc6\/followers","following_url":"https:\/\/api.github.com\/users\/knc6\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/knc6\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/knc6\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/knc6\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/knc6\/orgs","repos_url":"https:\/\/api.github.com\/users\/knc6\/repos","events_url":"https:\/\/api.github.com\/users\/knc6\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/knc6\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! \r\n\r\nThis would be a nice addition to the Hub! 
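The bool-indexing pitfall from issue #5858 above, and the guard direction of its fix (PR #5859, "Raise TypeError when indexing a dataset with bool"), come down to `bool` being a subclass of `int` in Python, so a comparison that evaluates to `False` silently indexes row 0. A toy sketch; `ToyDataset` is illustrative only, not the actual `datasets` implementation:

```python
class ToyDataset:
    """Illustrative only: mimics the failure mode reported in #5858."""

    def __init__(self, rows):
        self.rows = rows

    def __getitem__(self, key):
        # bool must be checked before int: isinstance(True, int) is True,
        # so without this guard False/True silently index rows 0/1.
        if isinstance(key, bool):
            raise TypeError("dataset index must not be a bool")
        if isinstance(key, int):
            return self.rows[key]
        raise TypeError(f"unsupported index type: {type(key).__name__}")


ds = ToyDataset([{"id": "a"}, {"id": "b"}])
print(ds[1])                     # {'id': 'b'}
try:
    ds[["not", "a"] == "match"]  # list == str is simply False, not elementwise
except TypeError as err:
    print(err)                   # dataset index must not be a bool
```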
You can find the existing chemistry datasets\/models on the Hub (using the `chemistry` tag) [here](https:\/\/huggingface.co\/search\/full-text?q=chemistry&type=model&type=dataset).\r\n\r\nFeel free to ping us here on the Hub if you need help adding the datasets.\r\n"],"created_at":1684127389000,"updated_at":1685033439000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Feature request\n\nHuggingface is really amazing platform for open science.\r\n\r\nIn addition to computer vision, video and NLP, would it be of interest to add chemistry\/materials science dataset\/models in Huggingface? Or, if its already done, can you provide some pointers.\r\n\r\nWe have been working on a comprehensive benchmark on this topic: [JARVIS-Leaderboard](https:\/\/pages.nist.gov\/jarvis_leaderboard\/) and I am wondering if we could contribute\/integrate this project as a part of huggingface. \n\n### Motivation\n\nSimilar to the main stream AI field, there is need of large scale benchmarks\/models\/infrastructure for chemistry\/materials data.\n\n### Your contribution\n\nWe can start adding datasets as our [benchmarks](https:\/\/github.com\/usnistgov\/jarvis_leaderboard\/tree\/main\/jarvis_leaderboard\/benchmarks) should be easily convertible to the dataset format.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5857\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5857\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5856","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5856\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5856\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5856\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5856","id":1709218242,"node_id":"I_kwDODunzps5l4JnC","number":5856,"title":"Error loading natural_questions","user":{"login":"Crownor","id":19185508,"node_id":"MDQ6VXNlcjE5MTg1NTA4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19185508?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Crownor","html_url":"https:\/\/github.com\/Crownor","followers_url":"https:\/\/api.github.com\/users\/Crownor\/followers","following_url":"https:\/\/api.github.com\/users\/Crownor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Crownor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Crownor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Crownor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Crownor\/orgs","repos_url":"https:\/\/api.github.com\/users\/Crownor\/repos","events_url":"https:\/\/api.github.com\/users\/Crownor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Crownor\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! 
You can avoid this error by using the preprocessed version:\r\n```python\r\nimport datasets\r\nds = datasets.load_dataset('natural_questions')\r\n```\r\n\r\nPS: Once we finish https:\/\/github.com\/huggingface\/datasets\/pull\/5364, this error will no longer be a problem.","> Hi! You can avoid this error by using the preprocessed version:\r\n> \r\n> ```python\r\n> import datasets\r\n> ds = datasets.load_dataset('natural_questions')\r\n> ```\r\n> \r\n> PS: Once we finish #5364, this error will no longer be a problem.\r\n\r\nThanks, wish #5364 finish early"],"created_at":1684118764000,"updated_at":1685956279000,"closed_at":1685956278000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nWhen try to load natural_questions through datasets == 2.12.0 with python == 3.8.9:\r\n\r\n```python\r\nimport datasets\r\ndatasets.load_dataset('natural_questions',beam_runner='DirectRunner')\r\n```\r\n\r\nIt failed with following info:\r\n\r\n`pyarrow.lib.ArrowNotImplementedError: Nested data conversions not implemented for chunked array outputs`\n\n### Steps to reproduce the bug\n\nIn python console:\r\n\r\n```python\r\nimport datasets\r\ndatasets.load_dataset('natural_questions',beam_runner='DirectRunner')\r\n```\r\n\r\nThen the trace is:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/home\/nlp\/.cache\/pypoetry\/virtualenvs\/drg-W3LF4Ol9-py3.8\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 1797, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"\/home\/nlp\/.cache\/pypoetry\/virtualenvs\/drg-W3LF4Ol9-py3.8\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 890, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/home\/nlp\/.cache\/pypoetry\/virtualenvs\/drg-W3LF4Ol9-py3.8\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 2019, in _download_and_prepare\r\n num_examples, num_bytes = beam_writer.finalize(metrics.query(m_filter))\r\n File \"\/home\/nlp\/.cache\/pypoetry\/virtualenvs\/drg-W3LF4Ol9-py3.8\/lib\/python3.8\/site-packages\/datasets\/arrow_writer.py\", line 694, in finalize\r\n shard_num_bytes, _ = parquet_to_arrow(source, destination)\r\n File \"\/home\/nlp\/.cache\/pypoetry\/virtualenvs\/drg-W3LF4Ol9-py3.8\/lib\/python3.8\/site-packages\/datasets\/arrow_writer.py\", line 737, in parquet_to_arrow\r\n for record_batch in parquet_file.iter_batches():\r\n File \"pyarrow\/_parquet.pyx\", line 1323, in iter_batches\r\n File \"pyarrow\/error.pxi\", line 121, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowNotImplementedError: Nested data conversions not implemented for chunked array outputs\r\n```\n\n### Expected behavior\n\nload natural_question questions\n\n### Environment info\n\n```\r\n- `datasets` version: 2.12.0\r\n- Platform: Linux-3.10.0-1160.42.2.el7.x86_64-x86_64-with-glibc2.2.5\r\n- Python version: 3.8.9\r\n- Huggingface_hub version: 0.14.1\r\n- PyArrow version: 11.0.0\r\n- Pandas version: 2.0.1\r\n```","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5856\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5856\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} 
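For reference, a runnable form of the workaround suggested in the comment on issue #5856 above: loading the preprocessed Hub copy of `natural_questions` skips the local Beam build, which is where pyarrow raises `Nested data conversions not implemented for chunked array outputs`:

```python
from datasets import load_dataset

# Failing path from the report: building the dataset locally with Apache Beam.
# ds = load_dataset("natural_questions", beam_runner="DirectRunner")

# Suggested workaround: load the already-preprocessed version instead,
# skipping the local parquet-to-arrow conversion that raises the error.
ds = load_dataset("natural_questions")
print(ds)
```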
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5855","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5855\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5855\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5855\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5855","id":1708784943,"node_id":"I_kwDODunzps5l2f0v","number":5855,"title":"`to_tf_dataset` consumes too much memory","user":{"login":"massquantity","id":28751760,"node_id":"MDQ6VXNlcjI4NzUxNzYw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/28751760?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/massquantity","html_url":"https:\/\/github.com\/massquantity","followers_url":"https:\/\/api.github.com\/users\/massquantity\/followers","following_url":"https:\/\/api.github.com\/users\/massquantity\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/massquantity\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/massquantity\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/massquantity\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/massquantity\/orgs","repos_url":"https:\/\/api.github.com\/users\/massquantity\/repos","events_url":"https:\/\/api.github.com\/users\/massquantity\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/massquantity\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Cc @amyeroberts @Rocketknight1 \r\n\r\nIndded I think it's because it does something like this under the hood when there's no multiprocessing:\r\n\r\n```python\r\ntf_dataset = tf_dataset.shuffle(len(dataset))\r\n```\r\n\r\nPS: with multiprocessing it appears to be different:\r\n\r\n```python\r\nindices = np.arange(len(dataset))\r\nif shuffle:\r\n np.random.shuffle(indices)\r\n```","Hi @massquantity, the dataset being shuffled there is not the full dataset. If you look at [the line above](https:\/\/github.com\/huggingface\/datasets\/blob\/main\/src\/datasets\/utils\/tf_utils.py#L182), the dataset is actually just a single indices array at that point, and that array is the only thing that gets fully loaded into memory and shuffled. We then load samples from the dataset by applying a transform function to the shuffled dataset, which fetches samples based on the indices it receives.\r\n\r\nIf your dataset is **really** gigantic, then this index tensor might be a memory issue, but since it's just an int64 tensor it will only use 1GB of memory per 125 million samples.\r\n\r\nStill, if you're encountering memory issues, there might be another cause here - can you share some code to reproduce the error, or does it depend on some internal\/proprietary dataset?","Hi @Rocketknight1, you're right and I also noticed that only indices are used in shuffling. My data has shape (50000000, 10), but really the problem doesn't relate to a specific dataset. 
Simply running the following code costs me 10GB of memory.\r\n\r\n```python\r\nfrom datasets import Dataset\r\n\r\ndef gen():\r\n for i in range(50000000):\r\n yield {\"data\": i}\r\n\r\nds = Dataset.from_generator(gen, cache_dir=\".\/huggingface\")\r\n\r\ntf_ds = ds.to_tf_dataset(\r\n batch_size=1,\r\n shuffle=True,\r\n drop_remainder=False,\r\n prefetch=True,\r\n)\r\ntf_ds = iter(tf_ds)\r\nnext(tf_ds)\r\n# {'data': }\r\n```\r\n\r\nI just realized maybe it was an issue with tensorflow (I'm using tf 2.12). So I tried the following code, and it used 10GB of memory too.\r\n```python\r\nimport numpy as np\r\nimport tensorflow as tf\r\n\r\ndata_size = 50000000\r\ntf_dataset = tf.data.Dataset.from_tensor_slices(np.arange(data_size))\r\ntf_dataset = iter(tf_dataset.shuffle(data_size))\r\nnext(tf_dataset)\r\n# \r\n```\r\n\r\nBy the way, as @lhoestq mentioned, multiprocessing uses numpy shuffling, and it uses less than 1 GB of memory:\r\n```python\r\ntf_ds_mp = ds.to_tf_dataset(\r\n batch_size=1,\r\n shuffle=True,\r\n drop_remainder=False,\r\n prefetch=True,\r\n num_workers=2,\r\n)\r\n```","Thanks for that reproduction script - I've confirmed the same issue is occurring for me. Investigating it now!","Update: The memory usage is occurring in creation of the index and shuffle buffer. You can reproduce it very simply with:\r\n\r\n```python\r\nimport tensorflow as tf\r\nindices = tf.range(50_000_000, dtype=tf.int64)\r\ndataset = tf.data.Dataset.from_tensor_slices(indices)\r\ndataset = dataset.shuffle(len(dataset))\r\nprint(next(iter(dataset)))\r\n```\r\nWhen I wrote this code I thought `tf.data` had an optimization for shuffling an entire tensor that wouldn't create the entire shuffle buffer, but evidently it's just creating the enormous buffer in memory. I'll see if I can find a more efficient way to do this - we might end up moving everything to the `numpy` multiprocessing path to avoid it.","I opened a PR to fix this - will continue the discussion there!"],"created_at":1684027349000,"updated_at":1686241972000,"closed_at":1686241972000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\r\n\r\nHi, I'm using `to_tf_dataset` to convert a _large_ dataset to `tf.data.Dataset`. I observed that the data loading *before* training took a lot of time and memory, even with `batch_size=1`.\r\n\r\nAfter some digging, I believe the reason lies in the shuffle behavior. 
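The shuffle-buffer cost traced in the comments above can be sketched directly against plain `tf.data`. A minimal example of the bounded-buffer alternative the reporter asks for (the `buffer_size` value is arbitrary; the base index array itself still costs roughly 400 MB at 50M int64 entries):

```python
import numpy as np
import tensorflow as tf

data_size = 50_000_000
ds = tf.data.Dataset.from_tensor_slices(np.arange(data_size))

# Full-size buffer, as in to_tf_dataset today: fills a 50M-element
# shuffle buffer in memory before yielding the first example.
# ds = ds.shuffle(data_size)

# Bounded buffer: only approximately shuffled, but memory scales with
# buffer_size rather than with len(dataset).
ds = ds.shuffle(buffer_size=10_000)
print(next(iter(ds)).numpy())
```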
The [source code](https:\/\/github.com\/huggingface\/datasets\/blob\/main\/src\/datasets\/utils\/tf_utils.py#L185) uses `len(dataset)` as the `buffer_size`, which may load all the data into the memory, and the [tf.data doc](https:\/\/www.tensorflow.org\/guide\/data#randomly_shuffling_input_data) also states that \"While large buffer_sizes shuffle more thoroughly, they can take a lot of memory, and significant time to fill\".\r\n\r\n### Steps to reproduce the bug\r\n\r\n```python\r\nfrom datasets import Dataset\r\n\r\ndef gen(): # some large data\r\n for i in range(50000000):\r\n yield {\"data\": i}\r\n\r\nds = Dataset.from_generator(gen, cache_dir=\".\/huggingface\")\r\n\r\ntf_ds = ds.to_tf_dataset(\r\n batch_size=64,\r\n shuffle=False, # no shuffle\r\n drop_remainder=False,\r\n prefetch=True,\r\n)\r\n\r\n# fast and memory friendly \ud83e\udd17\r\nfor batch in tf_ds: \r\n ...\r\n\r\ntf_ds_shuffle = ds.to_tf_dataset(\r\n batch_size=64,\r\n shuffle=True,\r\n drop_remainder=False,\r\n prefetch=True,\r\n)\r\n\r\n# slow and memory hungry for simple iteration \ud83d\ude31\r\nfor batch in tf_ds_shuffle: \r\n ...\r\n\r\n```\r\n\r\n### Expected behavior\r\n\r\nShuffling should not load all the data into the memory. Would adding a `buffer_size` parameter in the `to_tf_dataset` API alleviate the problem?\r\n\r\n### Environment info\r\n\r\n- `datasets` version: 2.11.0\r\n- Platform: Linux-5.17.1-051701-generic-x86_64-with-glibc2.17\r\n- Python version: 3.8.13\r\n- Huggingface_hub version: 0.13.4\r\n- PyArrow version: 11.0.0\r\n- Pandas version: 1.4.3\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5855\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5855\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5854","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5854\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5854\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5854\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5854","id":1708779300,"node_id":"I_kwDODunzps5l2eck","number":5854,"title":"Can not load audiofolder dataset on 
kaggle","user":{"login":"ILG2021","id":93691919,"node_id":"U_kgDOBZWgDw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/93691919?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ILG2021","html_url":"https:\/\/github.com\/ILG2021","followers_url":"https:\/\/api.github.com\/users\/ILG2021\/followers","following_url":"https:\/\/api.github.com\/users\/ILG2021\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ILG2021\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ILG2021\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ILG2021\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ILG2021\/orgs","repos_url":"https:\/\/api.github.com\/users\/ILG2021\/repos","events_url":"https:\/\/api.github.com\/users\/ILG2021\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ILG2021\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! `audiofolder` requires `datasets>=2.5.0`, so please update the `datasets`' installation (`pip install -U datasets`) in the environment to resolve the issue.","> Hi! `audiofolder` requires `datasets>=2.5.0`, so please update the `datasets`' installation (`pip install -U datasets`) in the environment to resolve the issue.\r\n\r\nI don't think it is a problem of the version. It runs ok on colab or local machine. Only on kaggle will has this bug."],"created_at":1684025447000,"updated_at":1684072254000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\r\nIt's crash log:\r\nFileNotFoundError: Couldn't find a dataset script at \/kaggle\/working\/audiofolder\/audiofolder.py or any data file in the same directory. Couldn't find 'audiofolder' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/master\/datasets\/audiofolder\/audiofolder.py\r\n\r\n\r\n### Steps to reproduce the bug\r\n\r\n![image](https:\/\/github.com\/huggingface\/datasets\/assets\/93691919\/a2829d27-d15c-4acc-86fb-d1987c760468)\r\ncommon_voice = load_dataset(\"audiofolder\", data_dir=\"\/kaggle\/working\/data\")\r\n\r\n### Expected behavior\r\n\r\nload dataset without error. 
It works ok on colab, but on kaggle it happens.\r\n\r\n### Environment info\r\n\r\n- `datasets` version: 2.1.0\r\n- Platform: Linux-5.15.109+-x86_64-with-glibc2.31\r\n- Python version: 3.10.10\r\n- PyArrow version: 9.0.0\r\n- Pandas version: 1.5.3","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5854\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5854\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5853","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5853\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5853\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5853\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5853","id":1708092786,"node_id":"PR_kwDODunzps5QaZLP","number":5853,"title":"[docs] Redirects, migrated from nginx","user":{"login":"julien-c","id":326577,"node_id":"MDQ6VXNlcjMyNjU3Nw==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/326577?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/julien-c","html_url":"https:\/\/github.com\/julien-c","followers_url":"https:\/\/api.github.com\/users\/julien-c\/followers","following_url":"https:\/\/api.github.com\/users\/julien-c\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/julien-c\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/julien-c\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/julien-c\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/julien-c\/orgs","repos_url":"https:\/\/api.github.com\/users\/julien-c\/repos","events_url":"https:\/\/api.github.com\/users\/julien-c\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/julien-c\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","@mishig25 note that it's not exactly the same behavior as in nginx, as here it interacts a bit with the `version` and the `language`\r\n\r\nShould be close enough, though.","
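A purely hypothetical sketch of why a redirect map in the docs build "interacts a bit with the `version` and the `language`" in a way a static nginx rewrite does not: the target has to be re-rooted under both resolved path segments. Every name and path below is invented for illustration; this is not the actual doc-builder code:

```python
# Hypothetical: a static nginx rewrite maps old-path -> new-path verbatim,
# whereas here the target is re-rooted under the resolved version/language.
REDIRECTS = {"share_dataset": "share"}  # old slug -> new slug (invented)

def resolve(slug: str, version: str = "main", lang: str = "en") -> str:
    target = REDIRECTS.get(slug, slug)
    return f"/docs/datasets/{version}/{lang}/{target}"

print(resolve("share_dataset"))                   # /docs/datasets/main/en/share
print(resolve("share_dataset", "v2.12.0", "fr"))  # /docs/datasets/v2.12.0/fr/share
```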
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007212 \/ 0.011353 (-0.004141) | 0.005125 \/ 0.011008 (-0.005883) | 0.098460 \/ 0.038508 (0.059952) | 0.034040 \/ 0.023109 (0.010931) | 0.320203 \/ 0.275898 (0.044305) | 0.357787 \/ 0.323480 (0.034307) | 0.006000 \/ 0.007986 (-0.001986) | 0.005644 \/ 0.004328 (0.001316) | 0.072654 \/ 0.004250 (0.068403) | 0.049393 \/ 0.037052 (0.012341) | 0.345686 \/ 0.258489 (0.087196) | 0.362345 \/ 0.293841 (0.068504) | 0.036597 \/ 0.128546 (-0.091949) | 0.012303 \/ 0.075646 (-0.063343) | 0.334374 \/ 0.419271 (-0.084897) | 0.062010 \/ 0.043533 (0.018477) | 0.312547 \/ 0.255139 (0.057408) | 0.336021 \/ 0.283200 (0.052821) | 0.112304 \/ 0.141683 (-0.029378) | 1.446706 \/ 1.452155 (-0.005449) | 1.523256 \/ 1.492716 (0.030540) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.217658 \/ 0.018006 (0.199652) | 0.449208 \/ 0.000490 (0.448718) | 0.002878 \/ 0.000200 (0.002679) | 0.000091 \/ 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025735 \/ 0.037411 (-0.011676) | 0.105876 \/ 0.014526 (0.091350) | 0.114887 \/ 0.176557 (-0.061669) | 0.170984 \/ 0.737135 (-0.566152) | 0.121420 \/ 0.296338 (-0.174918) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.419670 \/ 0.215209 (0.204461) | 4.189453 \/ 2.077655 (2.111798) | 1.938236 \/ 1.504120 (0.434116) | 1.769747 \/ 1.541195 (0.228553) | 1.910919 \/ 1.468490 (0.442429) | 0.705046 \/ 4.584777 (-3.879730) | 
3.783774 \/ 3.745712 (0.038062) | 2.096504 \/ 5.269862 (-3.173358) | 1.339265 \/ 4.565676 (-3.226412) | 0.086670 \/ 0.424275 (-0.337605) | 0.012243 \/ 0.007607 (0.004636) | 0.524701 \/ 0.226044 (0.298657) | 5.240689 \/ 2.268929 (2.971760) | 2.473622 \/ 55.444624 (-52.971003) | 2.170568 \/ 6.876477 (-4.705909) | 2.289653 \/ 2.142072 (0.147581) | 0.848913 \/ 4.805227 (-3.956314) | 0.168332 \/ 6.500664 (-6.332332) | 0.064926 \/ 0.075469 (-0.010543) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.193614 \/ 1.841788 (-0.648173) | 14.920403 \/ 8.074308 (6.846095) | 14.475059 \/ 10.191392 (4.283667) | 0.164458 \/ 0.680424 (-0.515966) | 0.017613 \/ 0.534201 (-0.516588) | 0.426311 \/ 0.579283 (-0.152972) | 0.431478 \/ 0.434364 (-0.002886) | 0.520280 \/ 0.540337 (-0.020057) | 0.627738 \/ 1.386936 (-0.759198) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007458 \/ 0.011353 (-0.003895) | 0.005363 \/ 0.011008 (-0.005645) | 0.076713 \/ 0.038508 (0.038205) | 0.034189 \/ 0.023109 (0.011079) | 0.359938 \/ 0.275898 (0.084040) | 0.395532 \/ 0.323480 (0.072052) | 0.005977 \/ 0.007986 (-0.002008) | 0.004263 \/ 0.004328 (-0.000065) | 0.075971 \/ 0.004250 (0.071721) | 0.051924 \/ 0.037052 (0.014871) | 0.362818 \/ 0.258489 (0.104329) | 0.409897 \/ 0.293841 (0.116056) | 0.035494 \/ 0.128546 (-0.093053) | 0.012399 \/ 0.075646 (-0.063247) | 0.088335 \/ 0.419271 (-0.330937) | 0.047968 \/ 0.043533 (0.004435) | 0.355744 \/ 0.255139 (0.100606) | 0.376339 \/ 0.283200 (0.093139) | 0.104542 \/ 0.141683 (-0.037141) | 1.464826 \/ 1.452155 (0.012672) | 1.600665 \/ 1.492716 (0.107948) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.220841 \/ 0.018006 (0.202834) | 0.446444 \/ 0.000490 (0.445954) | 0.000392 \/ 0.000200 (0.000192) | 0.000057 \/ 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029402 \/ 0.037411 (-0.008009) | 0.116511 \/ 0.014526 (0.101986) | 0.122959 \/ 0.176557 (-0.053598) | 0.171674 \/ 0.737135 (-0.565462) | 0.129871 \/ 0.296338 (-0.166468) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.450411 \/ 0.215209 (0.235202) | 4.471859 \/ 2.077655 (2.394205) | 2.229439 \/ 1.504120 (0.725319) | 2.053308 \/ 1.541195 (0.512114) | 2.142476 \/ 1.468490 (0.673986) | 0.708299 \/ 4.584777 (-3.876478) | 
3.797830 \/ 3.745712 (0.052118) | 2.142509 \/ 5.269862 (-3.127352) | 1.333357 \/ 4.565676 (-3.232320) | 0.086837 \/ 0.424275 (-0.337439) | 0.012102 \/ 0.007607 (0.004495) | 0.548428 \/ 0.226044 (0.322384) | 5.490611 \/ 2.268929 (3.221682) | 2.713882 \/ 55.444624 (-52.730742) | 2.399638 \/ 6.876477 (-4.476839) | 2.481549 \/ 2.142072 (0.339477) | 0.839812 \/ 4.805227 (-3.965415) | 0.168890 \/ 6.500664 (-6.331774) | 0.065564 \/ 0.075469 (-0.009906) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.275507 \/ 1.841788 (-0.566281) | 14.896343 \/ 8.074308 (6.822035) | 13.159701 \/ 10.191392 (2.968309) | 0.172065 \/ 0.680424 (-0.508359) | 0.017507 \/ 0.534201 (-0.516694) | 0.420031 \/ 0.579283 (-0.159252) | 0.438835 \/ 0.434364 (0.004471) | 0.490597 \/ 0.540337 (-0.049741) | 0.583952 \/ 1.386936 (-0.802984) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#48c9755d0ae9abe4c4d6cd8c1ce76eff849f0e5c \"CML watermark\")\n"],"created_at":1683919167000,"updated_at":1684147039000,"closed_at":1684146614000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5853","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5853","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5853.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5853.patch","merged_at":1684146614000},"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5853\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5853\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5852","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5852\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5852\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5852\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5852","id":1707927165,"node_id":"PR_kwDODunzps5QZ1lj","number":5852,"title":"Iterable torch 
formatting","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006567 \/ 0.011353 (-0.004786) | 0.004479 \/ 0.011008 (-0.006530) | 0.028286 \/ 0.038508 (-0.010222) | 0.033137 \/ 0.023109 (0.010028) | 0.305249 \/ 0.275898 (0.029351) | 0.330306 \/ 0.323480 (0.006826) | 0.003747 \/ 0.007986 (-0.004238) | 0.004409 \/ 0.004328 (0.000081) | 0.004742 \/ 0.004250 (0.000491) | 0.040780 \/ 0.037052 (0.003728) | 0.302879 \/ 0.258489 (0.044390) | 0.346880 \/ 0.293841 (0.053039) | 0.032908 \/ 0.128546 (-0.095638) | 0.010617 \/ 0.075646 (-0.065029) | 0.257996 \/ 0.419271 (-0.161275) | 0.051044 \/ 0.043533 (0.007511) | 0.306113 \/ 0.255139 (0.050974) | 0.324444 \/ 0.283200 (0.041244) | 0.100820 \/ 0.141683 (-0.040863) | 1.478402 \/ 1.452155 (0.026248) | 1.599398 \/ 1.492716 (0.106682) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.216540 \/ 0.018006 (0.198534) | 0.433480 \/ 0.000490 (0.432991) | 0.004032 \/ 0.000200 (0.003832) | 0.000084 \/ 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.027807 \/ 0.037411 (-0.009604) | 0.107225 \/ 0.014526 (0.092699) | 0.120157 \/ 0.176557 (-0.056400) | 0.174130 \/ 0.737135 (-0.563005) | 0.128902 \/ 0.296338 (-0.167437) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.395996 \/ 0.215209 (0.180787) | 3.936254 \/ 2.077655 (1.858599) | 1.808864 \/ 1.504120 (0.304744) | 1.608935 \/ 1.541195 (0.067741) | 1.646427 \/ 1.468490 (0.177937) | 0.716026 \/ 4.584777 (-3.868751) | 
3.815045 \/ 3.745712 (0.069333) | 2.271534 \/ 5.269862 (-2.998327) | 1.548728 \/ 4.565676 (-3.016948) | 0.076743 \/ 0.424275 (-0.347532) | 0.011575 \/ 0.007607 (0.003968) | 0.499202 \/ 0.226044 (0.273158) | 4.983754 \/ 2.268929 (2.714825) | 2.239319 \/ 55.444624 (-53.205306) | 1.919427 \/ 6.876477 (-4.957050) | 2.019664 \/ 2.142072 (-0.122408) | 0.866318 \/ 4.805227 (-3.938910) | 0.157309 \/ 6.500664 (-6.343355) | 0.063341 \/ 0.075469 (-0.012128) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.180817 \/ 1.841788 (-0.660971) | 14.579869 \/ 8.074308 (6.505561) | 14.277848 \/ 10.191392 (4.086456) | 0.182560 \/ 0.680424 (-0.497863) | 0.017402 \/ 0.534201 (-0.516799) | 0.411549 \/ 0.579283 (-0.167734) | 0.432938 \/ 0.434364 (-0.001426) | 0.545067 \/ 0.540337 (0.004730) | 0.642173 \/ 1.386936 (-0.744763) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006753 \/ 0.011353 (-0.004600) | 0.004590 \/ 0.011008 (-0.006418) | 0.006111 \/ 0.038508 (-0.032397) | 0.032763 \/ 0.023109 (0.009654) | 0.401001 \/ 0.275898 (0.125103) | 0.428063 \/ 0.323480 (0.104583) | 0.003730 \/ 0.007986 (-0.004255) | 0.004617 \/ 0.004328 (0.000289) | 0.004770 \/ 0.004250 (0.000519) | 0.049718 \/ 0.037052 (0.012666) | 0.399724 \/ 0.258489 (0.141235) | 0.440292 \/ 0.293841 (0.146451) | 0.032846 \/ 0.128546 (-0.095700) | 0.010842 \/ 0.075646 (-0.064804) | 0.012642 \/ 0.419271 (-0.406630) | 0.046043 \/ 0.043533 (0.002510) | 0.390862 \/ 0.255139 (0.135723) | 0.407027 \/ 0.283200 (0.123828) | 0.099349 \/ 0.141683 (-0.042334) | 1.455739 \/ 1.452155 (0.003584) | 1.572214 \/ 1.492716 (0.079497) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.227186 \/ 0.018006 (0.209180) | 0.447404 \/ 0.000490 (0.446914) | 0.000400 \/ 0.000200 (0.000200) | 0.000055 \/ 0.000054 (0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029830 \/ 0.037411 (-0.007581) | 0.112365 \/ 0.014526 (0.097839) | 0.125736 \/ 0.176557 (-0.050821) | 0.174781 \/ 0.737135 (-0.562354) | 0.129439 \/ 0.296338 (-0.166900) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.444438 \/ 0.215209 (0.229229) | 4.459381 \/ 2.077655 (2.381726) | 2.264541 \/ 1.504120 (0.760421) | 2.075257 \/ 1.541195 (0.534062) | 2.181289 \/ 1.468490 (0.712799) | 0.725279 \/ 4.584777 (-3.859498) | 
3.863253 \/ 3.745712 (0.117541) | 2.132498 \/ 5.269862 (-3.137364) | 1.402003 \/ 4.565676 (-3.163673) | 0.084268 \/ 0.424275 (-0.340007) | 0.011762 \/ 0.007607 (0.004155) | 0.556239 \/ 0.226044 (0.330194) | 5.617998 \/ 2.268929 (3.349070) | 2.754789 \/ 55.444624 (-52.689835) | 2.418418 \/ 6.876477 (-4.458059) | 2.479696 \/ 2.142072 (0.337624) | 0.870037 \/ 4.805227 (-3.935190) | 0.160480 \/ 6.500664 (-6.340184) | 0.064464 \/ 0.075469 (-0.011005) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.290916 \/ 1.841788 (-0.550872) | 14.783173 \/ 8.074308 (6.708865) | 13.355883 \/ 10.191392 (3.164491) | 0.169963 \/ 0.680424 (-0.510461) | 0.017657 \/ 0.534201 (-0.516544) | 0.409218 \/ 0.579283 (-0.170065) | 0.422942 \/ 0.434364 (-0.011422) | 0.494968 \/ 0.540337 (-0.045369) | 0.587044 \/ 1.386936 (-0.799892) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#2051e912d9525bc38a1caf295df0620619c488eb \"CML watermark\")\n","_The documentation is not available anymore as the PR was closed or merged._","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007183 \/ 0.011353 (-0.004169) | 0.004586 \/ 0.011008 (-0.006423) | 0.032668 \/ 0.038508 (-0.005840) | 0.040896 \/ 0.023109 (0.017787) | 0.358225 \/ 0.275898 (0.082327) | 0.395063 \/ 0.323480 (0.071583) | 0.004540 \/ 0.007986 (-0.003446) | 0.003849 \/ 0.004328 (-0.000480) | 0.005521 \/ 0.004250 (0.001271) | 0.053314 \/ 0.037052 (0.016262) | 0.362417 \/ 0.258489 (0.103928) | 0.414337 \/ 0.293841 (0.120496) | 0.030698 \/ 0.128546 (-0.097849) | 0.008823 \/ 0.075646 (-0.066823) | 0.303583 \/ 0.419271 (-0.115689) | 0.060277 \/ 0.043533 (0.016744) | 0.365938 \/ 0.255139 (0.110799) | 0.379554 \/ 0.283200 (0.096354) | 0.122545 \/ 0.141683 (-0.019138) | 1.712098 \/ 1.452155 (0.259943) | 1.802036 \/ 1.492716 (0.309319) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.239508 \/ 0.018006 (0.221502) | 0.492194 \/ 0.000490 (0.491704) | 0.003280 \/ 0.000200 (0.003081) | 0.000096 \/ 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.033301 \/ 0.037411 (-0.004110) | 0.125851 \/ 0.014526 (0.111325) | 0.137757 \/ 0.176557 (-0.038799) | 0.207603 \/ 0.737135 (-0.529533) | 0.143507 \/ 0.296338 (-0.152831) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.470662 \/ 0.215209 (0.255453) | 4.736017 \/ 2.077655 (2.658363) | 2.154152 \/ 1.504120 (0.650032) | 1.954243 \/ 1.541195 (0.413048) | 2.080186 \/ 1.468490 (0.611696) | 0.622884 \/ 4.584777 (-3.961893) | 
4.385885 \/ 3.745712 (0.640173) | 2.262085 \/ 5.269862 (-3.007776) | 1.454215 \/ 4.565676 (-3.111462) | 0.067342 \/ 0.424275 (-0.356933) | 0.012913 \/ 0.007607 (0.005306) | 0.600676 \/ 0.226044 (0.374631) | 5.915093 \/ 2.268929 (3.646164) | 2.664915 \/ 55.444624 (-52.779709) | 2.286986 \/ 6.876477 (-4.589490) | 2.387776 \/ 2.142072 (0.245704) | 0.757067 \/ 4.805227 (-4.048160) | 0.154625 \/ 6.500664 (-6.346039) | 0.074632 \/ 0.075469 (-0.000838) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.413229 \/ 1.841788 (-0.428558) | 17.433012 \/ 8.074308 (9.358704) | 16.980340 \/ 10.191392 (6.788948) | 0.218943 \/ 0.680424 (-0.461481) | 0.020525 \/ 0.534201 (-0.513676) | 0.451847 \/ 0.579283 (-0.127436) | 0.495587 \/ 0.434364 (0.061223) | 0.548739 \/ 0.540337 (0.008402) | 0.662120 \/ 1.386936 (-0.724816) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006775 \/ 0.011353 (-0.004577) | 0.004556 \/ 0.011008 (-0.006452) | 0.006462 \/ 0.038508 (-0.032046) | 0.039073 \/ 0.023109 (0.015964) | 0.429249 \/ 0.275898 (0.153351) | 0.469946 \/ 0.323480 (0.146467) | 0.004402 \/ 0.007986 (-0.003584) | 0.003798 \/ 0.004328 (-0.000530) | 0.005347 \/ 0.004250 (0.001097) | 0.053743 \/ 0.037052 (0.016691) | 0.434635 \/ 0.258489 (0.176146) | 0.475661 \/ 0.293841 (0.181820) | 0.029891 \/ 0.128546 (-0.098656) | 0.009058 \/ 0.075646 (-0.066588) | 0.010987 \/ 0.419271 (-0.408284) | 0.053877 \/ 0.043533 (0.010344) | 0.434428 \/ 0.255139 (0.179289) | 0.449637 \/ 0.283200 (0.166437) | 0.124331 \/ 0.141683 (-0.017352) | 1.736083 \/ 1.452155 (0.283928) | 1.831632 \/ 1.492716 (0.338916) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.248428 \/ 0.018006 (0.230422) | 0.493113 \/ 0.000490 (0.492623) | 0.000429 \/ 0.000200 (0.000229) | 0.000057 \/ 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.031337 \/ 0.037411 (-0.006074) | 0.132360 \/ 0.014526 (0.117834) | 0.134734 \/ 0.176557 (-0.041822) | 0.193811 \/ 0.737135 (-0.543324) | 0.146883 \/ 0.296338 (-0.149456) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.510876 \/ 0.215209 (0.295666) | 5.170198 \/ 2.077655 (3.092543) | 2.572105 \/ 1.504120 (1.067985) | 2.316918 \/ 1.541195 (0.775723) | 2.449316 \/ 1.468490 (0.980826) | 0.612219 \/ 4.584777 (-3.972558) | 
4.456740 \/ 3.745712 (0.711028) | 2.099757 \/ 5.269862 (-3.170105) | 1.293017 \/ 4.565676 (-3.272660) | 0.067922 \/ 0.424275 (-0.356353) | 0.013467 \/ 0.007607 (0.005860) | 0.634240 \/ 0.226044 (0.408196) | 6.373111 \/ 2.268929 (4.104182) | 3.171567 \/ 55.444624 (-52.273057) | 2.763411 \/ 6.876477 (-4.113066) | 2.845557 \/ 2.142072 (0.703485) | 0.763431 \/ 4.805227 (-4.041797) | 0.155949 \/ 6.500664 (-6.344715) | 0.076264 \/ 0.075469 (0.000795) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.468075 \/ 1.841788 (-0.373713) | 17.582354 \/ 8.074308 (9.508046) | 16.565964 \/ 10.191392 (6.374572) | 0.163779 \/ 0.680424 (-0.516644) | 0.020472 \/ 0.534201 (-0.513728) | 0.444416 \/ 0.579283 (-0.134867) | 0.488471 \/ 0.434364 (0.054107) | 0.550661 \/ 0.540337 (0.010323) | 0.667230 \/ 1.386936 (-0.719706) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#3655cbf1c627c945e393641d35298a166f1e4bf5 \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006160 \/ 0.011353 (-0.005193) | 0.004093 \/ 0.011008 (-0.006915) | 0.056485 \/ 0.038508 (0.017977) | 0.033637 \/ 0.023109 (0.010528) | 0.296448 \/ 0.275898 (0.020550) | 0.332532 \/ 0.323480 (0.009052) | 0.003864 \/ 0.007986 (-0.004122) | 0.003446 \/ 0.004328 (-0.000883) | 0.034808 \/ 0.004250 (0.030558) | 0.048567 \/ 0.037052 (0.011514) | 0.296090 \/ 0.258489 (0.037601) | 0.336067 \/ 0.293841 (0.042226) | 0.026081 \/ 0.128546 (-0.102465) | 0.007875 \/ 0.075646 (-0.067771) | 0.286049 \/ 0.419271 (-0.133222) | 0.050411 \/ 0.043533 (0.006878) | 0.297016 \/ 0.255139 (0.041877) | 0.320030 \/ 0.283200 (0.036830) | 0.110374 \/ 0.141683 (-0.031308) | 1.432470 \/ 1.452155 (-0.019684) | 1.492479 \/ 1.492716 (-0.000238) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.262352 \/ 0.018006 (0.244346) | 0.557956 \/ 0.000490 (0.557467) | 0.010296 \/ 0.000200 (0.010096) | 0.000315 \/ 0.000054 (0.000260) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.028801 \/ 0.037411 (-0.008611) | 0.109844 \/ 0.014526 (0.095318) | 0.122333 \/ 0.176557 (-0.054224) | 0.180571 \/ 0.737135 (-0.556564) | 0.125990 \/ 0.296338 (-0.170348) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.401643 \/ 0.215209 (0.186434) | 4.020993 \/ 2.077655 (1.943338) | 1.815256 \/ 1.504120 (0.311136) | 1.619579 \/ 1.541195 (0.078384) | 1.708889 \/ 1.468490 (0.240398) | 0.537847 \/ 4.584777 (-4.046930) | 
3.743331 \/ 3.745712 (-0.002381) | 1.779891 \/ 5.269862 (-3.489970) | 1.021423 \/ 4.565676 (-3.544253) | 0.058869 \/ 0.424275 (-0.365406) | 0.011826 \/ 0.007607 (0.004218) | 0.499665 \/ 0.226044 (0.273621) | 4.980928 \/ 2.268929 (2.712000) | 2.285664 \/ 55.444624 (-53.158960) | 1.936553 \/ 6.876477 (-4.939923) | 2.090428 \/ 2.142072 (-0.051645) | 0.655218 \/ 4.805227 (-4.150009) | 0.133178 \/ 6.500664 (-6.367486) | 0.062991 \/ 0.075469 (-0.012478) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.168895 \/ 1.841788 (-0.672892) | 14.656773 \/ 8.074308 (6.582465) | 13.737921 \/ 10.191392 (3.546529) | 0.145383 \/ 0.680424 (-0.535041) | 0.017614 \/ 0.534201 (-0.516587) | 0.386499 \/ 0.579283 (-0.192784) | 0.425626 \/ 0.434364 (-0.008738) | 0.389572 \/ 0.540337 (-0.150766) | 0.386753 \/ 1.386936 (-1.000183) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.005998 \/ 0.011353 (-0.005355) | 0.004265 \/ 0.011008 (-0.006743) | 0.034743 \/ 0.038508 (-0.003766) | 0.033929 \/ 0.023109 (0.010820) | 0.405535 \/ 0.275898 (0.129636) | 0.407235 \/ 0.323480 (0.083755) | 0.003972 \/ 0.007986 (-0.004013) | 0.003616 \/ 0.004328 (-0.000712) | 0.035278 \/ 0.004250 (0.031027) | 0.052990 \/ 0.037052 (0.015937) | 0.405228 \/ 0.258489 (0.146739) | 0.415007 \/ 0.293841 (0.121166) | 0.025951 \/ 0.128546 (-0.102595) | 0.007990 \/ 0.075646 (-0.067656) | 0.040492 \/ 0.419271 (-0.378779) | 0.049123 \/ 0.043533 (0.005591) | 0.399282 \/ 0.255139 (0.144143) | 0.384303 \/ 0.283200 (0.101103) | 0.115234 \/ 0.141683 (-0.026448) | 1.476904 \/ 1.452155 (0.024749) | 1.627191 \/ 1.492716 (0.134475) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.209211 \/ 0.018006 (0.191205) | 0.566718 \/ 0.000490 (0.566228) | 0.002094 \/ 0.000200 (0.001894) | 0.000104 \/ 0.000054 (0.000049) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030885 \/ 0.037411 (-0.006526) | 0.110777 \/ 0.014526 (0.096251) | 0.124382 \/ 0.176557 (-0.052174) | 0.175081 \/ 0.737135 (-0.562054) | 0.130263 \/ 0.296338 (-0.166075) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.448091 \/ 0.215209 (0.232882) | 4.484404 \/ 2.077655 (2.406749) | 2.278438 \/ 1.504120 (0.774318) | 2.087933 \/ 1.541195 (0.546738) | 2.186709 \/ 1.468490 (0.718219) | 0.534822 \/ 4.584777 (-4.049955) | 
3.778229 \/ 3.745712 (0.032517) | 3.312334 \/ 5.269862 (-1.957528) | 1.557209 \/ 4.565676 (-3.008467) | 0.058923 \/ 0.424275 (-0.365352) | 0.011350 \/ 0.007607 (0.003743) | 0.550470 \/ 0.226044 (0.324426) | 5.480347 \/ 2.268929 (3.211419) | 2.781709 \/ 55.444624 (-52.662915) | 2.478729 \/ 6.876477 (-4.397748) | 2.492001 \/ 2.142072 (0.349929) | 0.652649 \/ 4.805227 (-4.152578) | 0.131334 \/ 6.500664 (-6.369330) | 0.065619 \/ 0.075469 (-0.009850) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.253998 \/ 1.841788 (-0.587790) | 15.207433 \/ 8.074308 (7.133124) | 14.627842 \/ 10.191392 (4.436450) | 0.146947 \/ 0.680424 (-0.533477) | 0.017533 \/ 0.534201 (-0.516668) | 0.391627 \/ 0.579283 (-0.187656) | 0.431113 \/ 0.434364 (-0.003251) | 0.413886 \/ 0.540337 (-0.126451) | 0.414483 \/ 1.386936 (-0.972453) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#3f4e98701590a4922050051eb0f4d63e6125723d \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007741 \/ 0.011353 (-0.003612) | 0.004584 \/ 0.011008 (-0.006424) | 0.067869 \/ 0.038508 (0.029361) | 0.041612 \/ 0.023109 (0.018503) | 0.377878 \/ 0.275898 (0.101980) | 0.421633 \/ 0.323480 (0.098153) | 0.004614 \/ 0.007986 (-0.003371) | 0.003824 \/ 0.004328 (-0.000504) | 0.041479 \/ 0.004250 (0.037229) | 0.053309 \/ 0.037052 (0.016256) | 0.390147 \/ 0.258489 (0.131658) | 0.437706 \/ 0.293841 (0.143865) | 0.035951 \/ 0.128546 (-0.092595) | 0.009231 \/ 0.075646 (-0.066415) | 0.357572 \/ 0.419271 (-0.061699) | 0.081332 \/ 0.043533 (0.037799) | 0.370076 \/ 0.255139 (0.114937) | 0.423653 \/ 0.283200 (0.140453) | 0.141401 \/ 0.141683 (-0.000282) | 1.722744 \/ 1.452155 (0.270589) | 1.914668 \/ 1.492716 (0.421952) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.256568 \/ 0.018006 (0.238562) | 0.512243 \/ 0.000490 (0.511753) | 0.019913 \/ 0.000200 (0.019713) | 0.000136 \/ 0.000054 (0.000082) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.031742 \/ 0.037411 (-0.005670) | 0.128537 \/ 0.014526 (0.114011) | 0.139962 \/ 0.176557 (-0.036594) | 0.210711 \/ 0.737135 (-0.526424) | 0.147162 \/ 0.296338 (-0.149177) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.509518 \/ 0.215209 (0.294309) | 5.083788 \/ 2.077655 (3.006134) | 2.455381 \/ 1.504120 (0.951262) | 2.208078 \/ 1.541195 (0.666883) | 2.341807 \/ 1.468490 (0.873317) | 0.580014 \/ 4.584777 (-4.004763) | 
4.599492 \/ 3.745712 (0.853780) | 2.403249 \/ 5.269862 (-2.866612) | 1.559177 \/ 4.565676 (-3.006500) | 0.072846 \/ 0.424275 (-0.351429) | 0.017327 \/ 0.007607 (0.009720) | 0.627747 \/ 0.226044 (0.401703) | 6.242586 \/ 2.268929 (3.973657) | 2.982875 \/ 55.444624 (-52.461750) | 2.588645 \/ 6.876477 (-4.287832) | 2.765915 \/ 2.142072 (0.623843) | 0.720455 \/ 4.805227 (-4.084772) | 0.157474 \/ 6.500664 (-6.343190) | 0.074295 \/ 0.075469 (-0.001174) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.540799 \/ 1.841788 (-0.300988) | 18.054632 \/ 8.074308 (9.980324) | 16.544036 \/ 10.191392 (6.352644) | 0.201423 \/ 0.680424 (-0.479001) | 0.020497 \/ 0.534201 (-0.513704) | 0.496275 \/ 0.579283 (-0.083008) | 0.547380 \/ 0.434364 (0.113017) | 0.614605 \/ 0.540337 (0.074267) | 0.749889 \/ 1.386936 (-0.637047) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006963 \/ 0.011353 (-0.004389) | 0.004543 \/ 0.011008 (-0.006465) | 0.039530 \/ 0.038508 (0.001022) | 0.038420 \/ 0.023109 (0.015311) | 0.454885 \/ 0.275898 (0.178987) | 0.491731 \/ 0.323480 (0.168251) | 0.004211 \/ 0.007986 (-0.003775) | 0.003673 \/ 0.004328 (-0.000655) | 0.038735 \/ 0.004250 (0.034484) | 0.052085 \/ 0.037052 (0.015032) | 0.448924 \/ 0.258489 (0.190435) | 0.499254 \/ 0.293841 (0.205413) | 0.030069 \/ 0.128546 (-0.098477) | 0.009082 \/ 0.075646 (-0.066565) | 0.047181 \/ 0.419271 (-0.372090) | 0.054758 \/ 0.043533 (0.011225) | 0.445035 \/ 0.255139 (0.189896) | 0.475090 \/ 0.283200 (0.191891) | 0.122641 \/ 0.141683 (-0.019042) | 1.706514 \/ 1.452155 (0.254360) | 1.855726 \/ 1.492716 (0.363010) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.246028 \/ 0.018006 (0.228022) | 0.486382 \/ 0.000490 (0.485892) | 0.003038 \/ 0.000200 (0.002838) | 0.000107 \/ 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.034298 \/ 0.037411 (-0.003113) | 0.135364 \/ 0.014526 (0.120838) | 0.146102 \/ 0.176557 (-0.030455) | 0.207997 \/ 0.737135 (-0.529139) | 0.153119 \/ 0.296338 (-0.143219) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.528758 \/ 0.215209 (0.313549) | 5.243303 \/ 2.077655 (3.165648) | 2.617194 \/ 1.504120 (1.113074) | 2.400740 \/ 1.541195 (0.859545) | 2.534692 \/ 1.468490 (1.066202) | 0.585825 \/ 4.584777 (-3.998952) | 
4.879766 \/ 3.745712 (1.134054) | 2.377419 \/ 5.269862 (-2.892443) | 1.460711 \/ 4.565676 (-3.104966) | 0.075572 \/ 0.424275 (-0.348703) | 0.013650 \/ 0.007607 (0.006042) | 0.697103 \/ 0.226044 (0.471058) | 6.444984 \/ 2.268929 (4.176055) | 3.227662 \/ 55.444624 (-52.216963) | 2.875163 \/ 6.876477 (-4.001314) | 2.860953 \/ 2.142072 (0.718881) | 0.718908 \/ 4.805227 (-4.086319) | 0.158005 \/ 6.500664 (-6.342659) | 0.077581 \/ 0.075469 (0.002112) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.653027 \/ 1.841788 (-0.188760) | 18.789342 \/ 8.074308 (10.715034) | 16.762678 \/ 10.191392 (6.571286) | 0.238920 \/ 0.680424 (-0.441504) | 0.020698 \/ 0.534201 (-0.513502) | 0.512634 \/ 0.579283 (-0.066649) | 0.542235 \/ 0.434364 (0.107871) | 0.626634 \/ 0.540337 (0.086297) | 0.753324 \/ 1.386936 (-0.633612) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#f978ad8bec6e5e77868c6ffcc6f514354a03901d \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.005737 \/ 0.011353 (-0.005616) | 0.003767 \/ 0.011008 (-0.007241) | 0.097792 \/ 0.038508 (0.059284) | 0.028466 \/ 0.023109 (0.005356) | 0.317703 \/ 0.275898 (0.041805) | 0.359512 \/ 0.323480 (0.036032) | 0.003428 \/ 0.007986 (-0.004558) | 0.002848 \/ 0.004328 (-0.001481) | 0.075668 \/ 0.004250 (0.071418) | 0.037165 \/ 0.037052 (0.000113) | 0.329539 \/ 0.258489 (0.071050) | 0.361365 \/ 0.293841 (0.067524) | 0.024777 \/ 0.128546 (-0.103769) | 0.008324 \/ 0.075646 (-0.067323) | 0.317346 \/ 0.419271 (-0.101926) | 0.043296 \/ 0.043533 (-0.000237) | 0.315318 \/ 0.255139 (0.060179) | 0.347641 \/ 0.283200 (0.064441) | 0.089551 \/ 0.141683 (-0.052132) | 1.506335 \/ 1.452155 (0.054180) | 1.573931 \/ 1.492716 (0.081215) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.208041 \/ 0.018006 (0.190034) | 0.428198 \/ 0.000490 (0.427708) | 0.002568 \/ 0.000200 (0.002369) | 0.000072 \/ 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.023745 \/ 0.037411 (-0.013667) | 0.096256 \/ 0.014526 (0.081730) | 0.104917 \/ 0.176557 (-0.071639) | 0.164341 \/ 0.737135 (-0.572794) | 0.107972 \/ 0.296338 (-0.188367) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.453995 \/ 0.215209 (0.238786) | 4.546892 \/ 2.077655 (2.469238) | 2.185498 \/ 1.504120 (0.681378) | 1.989156 \/ 1.541195 (0.447962) | 2.053443 \/ 1.468490 (0.584953) | 0.559940 \/ 4.584777 (-4.024837) | 
3.420759 \/ 3.745712 (-0.324954) | 1.771528 \/ 5.269862 (-3.498333) | 1.139692 \/ 4.565676 (-3.425984) | 0.067686 \/ 0.424275 (-0.356589) | 0.011729 \/ 0.007607 (0.004122) | 0.558001 \/ 0.226044 (0.331957) | 5.583886 \/ 2.268929 (3.314957) | 2.678726 \/ 55.444624 (-52.765899) | 2.324127 \/ 6.876477 (-4.552350) | 2.472805 \/ 2.142072 (0.330733) | 0.663163 \/ 4.805227 (-4.142065) | 0.134892 \/ 6.500664 (-6.365772) | 0.066722 \/ 0.075469 (-0.008747) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.195200 \/ 1.841788 (-0.646587) | 13.602517 \/ 8.074308 (5.528209) | 14.036344 \/ 10.191392 (3.844952) | 0.143759 \/ 0.680424 (-0.536665) | 0.017215 \/ 0.534201 (-0.516986) | 0.383749 \/ 0.579283 (-0.195534) | 0.388229 \/ 0.434364 (-0.046134) | 0.469366 \/ 0.540337 (-0.070971) | 0.560408 \/ 1.386936 (-0.826528) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.005953 \/ 0.011353 (-0.005400) | 0.003840 \/ 0.011008 (-0.007168) | 0.077481 \/ 0.038508 (0.038973) | 0.028318 \/ 0.023109 (0.005209) | 0.403991 \/ 0.275898 (0.128093) | 0.433374 \/ 0.323480 (0.109894) | 0.003572 \/ 0.007986 (-0.004414) | 0.003033 \/ 0.004328 (-0.001295) | 0.075873 \/ 0.004250 (0.071623) | 0.039321 \/ 0.037052 (0.002269) | 0.416790 \/ 0.258489 (0.158301) | 0.459368 \/ 0.293841 (0.165527) | 0.025270 \/ 0.128546 (-0.103276) | 0.008574 \/ 0.075646 (-0.067072) | 0.083376 \/ 0.419271 (-0.335896) | 0.043206 \/ 0.043533 (-0.000327) | 0.404831 \/ 0.255139 (0.149692) | 0.418559 \/ 0.283200 (0.135360) | 0.099135 \/ 0.141683 (-0.042548) | 1.501315 \/ 1.452155 (0.049160) | 1.583912 \/ 1.492716 (0.091195) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.241510 \/ 0.018006 (0.223504) | 0.410473 \/ 0.000490 (0.409983) | 0.001857 \/ 0.000200 (0.001657) | 0.000081 \/ 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025366 \/ 0.037411 (-0.012045) | 0.103353 \/ 0.014526 (0.088828) | 0.107934 \/ 0.176557 (-0.068622) | 0.162388 \/ 0.737135 (-0.574747) | 0.113550 \/ 0.296338 (-0.182789) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.463529 \/ 0.215209 (0.248320) | 4.657688 \/ 2.077655 (2.580034) | 2.455088 \/ 1.504120 (0.950968) | 2.304833 \/ 1.541195 (0.763638) | 2.317520 \/ 1.468490 (0.849029) | 0.563395 \/ 4.584777 (-4.021382) | 
3.408489 \/ 3.745712 (-0.337223) | 2.636379 \/ 5.269862 (-2.633482) | 1.425355 \/ 4.565676 (-3.140322) | 0.068335 \/ 0.424275 (-0.355940) | 0.011713 \/ 0.007607 (0.004106) | 0.550230 \/ 0.226044 (0.324186) | 5.519843 \/ 2.268929 (3.250915) | 2.864986 \/ 55.444624 (-52.579639) | 2.604821 \/ 6.876477 (-4.271655) | 2.701501 \/ 2.142072 (0.559428) | 0.668193 \/ 4.805227 (-4.137034) | 0.134739 \/ 6.500664 (-6.365925) | 0.067110 \/ 0.075469 (-0.008359) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.326358 \/ 1.841788 (-0.515430) | 14.184172 \/ 8.074308 (6.109864) | 14.139245 \/ 10.191392 (3.947853) | 0.151881 \/ 0.680424 (-0.528542) | 0.016718 \/ 0.534201 (-0.517483) | 0.367035 \/ 0.579283 (-0.212248) | 0.393512 \/ 0.434364 (-0.040852) | 0.441261 \/ 0.540337 (-0.099076) | 0.533907 \/ 1.386936 (-0.853029) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#54098759d023f0b3e8eccd2dd98d46a1c6d19cce \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006275 \/ 0.011353 (-0.005078) | 0.003980 \/ 0.011008 (-0.007028) | 0.097617 \/ 0.038508 (0.059109) | 0.034089 \/ 0.023109 (0.010980) | 0.297381 \/ 0.275898 (0.021483) | 0.330106 \/ 0.323480 (0.006626) | 0.003838 \/ 0.007986 (-0.004148) | 0.004042 \/ 0.004328 (-0.000287) | 0.074305 \/ 0.004250 (0.070055) | 0.048318 \/ 0.037052 (0.011265) | 0.295585 \/ 0.258489 (0.037096) | 0.346924 \/ 0.293841 (0.053083) | 0.027397 \/ 0.128546 (-0.101150) | 0.008452 \/ 0.075646 (-0.067194) | 0.326837 \/ 0.419271 (-0.092435) | 0.049515 \/ 0.043533 (0.005982) | 0.303931 \/ 0.255139 (0.048792) | 0.317647 \/ 0.283200 (0.034447) | 0.098280 \/ 0.141683 (-0.043403) | 1.442603 \/ 1.452155 (-0.009552) | 1.524050 \/ 1.492716 (0.031334) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.215095 \/ 0.018006 (0.197089) | 0.437662 \/ 0.000490 (0.437173) | 0.009771 \/ 0.000200 (0.009571) | 0.000401 \/ 0.000054 (0.000346) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.027169 \/ 0.037411 (-0.010243) | 0.111383 \/ 0.014526 (0.096857) | 0.116163 \/ 0.176557 (-0.060394) | 0.173134 \/ 0.737135 (-0.564001) | 0.122376 \/ 0.296338 (-0.173962) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.398332 \/ 0.215209 (0.183123) | 3.974166 \/ 2.077655 (1.896511) | 1.793847 \/ 1.504120 (0.289727) | 1.615117 \/ 1.541195 (0.073922) | 1.660288 \/ 1.468490 (0.191798) | 0.523833 \/ 4.584777 (-4.060944) | 
3.704273 \/ 3.745712 (-0.041439) | 1.873308 \/ 5.269862 (-3.396554) | 1.203546 \/ 4.565676 (-3.362131) | 0.064949 \/ 0.424275 (-0.359326) | 0.011830 \/ 0.007607 (0.004223) | 0.497294 \/ 0.226044 (0.271250) | 4.948663 \/ 2.268929 (2.679735) | 2.233391 \/ 55.444624 (-53.211234) | 1.903208 \/ 6.876477 (-4.973269) | 2.067908 \/ 2.142072 (-0.074164) | 0.644256 \/ 4.805227 (-4.160971) | 0.142798 \/ 6.500664 (-6.357866) | 0.064734 \/ 0.075469 (-0.010735) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.172313 \/ 1.841788 (-0.669475) | 14.665853 \/ 8.074308 (6.591545) | 13.147051 \/ 10.191392 (2.955659) | 0.139338 \/ 0.680424 (-0.541086) | 0.017452 \/ 0.534201 (-0.516749) | 0.395660 \/ 0.579283 (-0.183623) | 0.410138 \/ 0.434364 (-0.024226) | 0.460357 \/ 0.540337 (-0.079980) | 0.555670 \/ 1.386936 (-0.831266) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006247 \/ 0.011353 (-0.005106) | 0.004098 \/ 0.011008 (-0.006910) | 0.075050 \/ 0.038508 (0.036542) | 0.033232 \/ 0.023109 (0.010122) | 0.384139 \/ 0.275898 (0.108241) | 0.420865 \/ 0.323480 (0.097385) | 0.003889 \/ 0.007986 (-0.004096) | 0.003336 \/ 0.004328 (-0.000993) | 0.073837 \/ 0.004250 (0.069587) | 0.048775 \/ 0.037052 (0.011723) | 0.386373 \/ 0.258489 (0.127884) | 0.421718 \/ 0.293841 (0.127878) | 0.027553 \/ 0.128546 (-0.100993) | 0.008724 \/ 0.075646 (-0.066922) | 0.080970 \/ 0.419271 (-0.338302) | 0.045981 \/ 0.043533 (0.002448) | 0.364381 \/ 0.255139 (0.109242) | 0.391203 \/ 0.283200 (0.108004) | 0.101681 \/ 0.141683 (-0.040002) | 1.469533 \/ 1.452155 (0.017378) | 1.562016 \/ 1.492716 (0.069300) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.222318 \/ 0.018006 (0.204312) | 0.441395 \/ 0.000490 (0.440905) | 0.000408 \/ 0.000200 (0.000208) | 0.000057 \/ 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030291 \/ 0.037411 (-0.007120) | 0.114053 \/ 0.014526 (0.099527) | 0.123124 \/ 0.176557 (-0.053433) | 0.173474 \/ 0.737135 (-0.563661) | 0.129946 \/ 0.296338 (-0.166393) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.430342 \/ 0.215209 (0.215133) | 4.309782 \/ 2.077655 (2.232128) | 2.110668 \/ 1.504120 (0.606548) | 1.922881 \/ 1.541195 (0.381687) | 1.993562 \/ 1.468490 (0.525072) | 0.523682 \/ 4.584777 (-4.061095) | 
3.774152 \/ 3.745712 (0.028440) | 3.354783 \/ 5.269862 (-1.915079) | 1.489793 \/ 4.565676 (-3.075884) | 0.065169 \/ 0.424275 (-0.359107) | 0.011626 \/ 0.007607 (0.004019) | 0.539126 \/ 0.226044 (0.313081) | 5.372593 \/ 2.268929 (3.103664) | 2.570652 \/ 55.444624 (-52.873973) | 2.253353 \/ 6.876477 (-4.623123) | 2.312876 \/ 2.142072 (0.170804) | 0.644241 \/ 4.805227 (-4.160986) | 0.138326 \/ 6.500664 (-6.362338) | 0.064491 \/ 0.075469 (-0.010979) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.344164 \/ 1.841788 (-0.497624) | 15.124679 \/ 8.074308 (7.050371) | 14.799310 \/ 10.191392 (4.607918) | 0.149054 \/ 0.680424 (-0.531370) | 0.017564 \/ 0.534201 (-0.516637) | 0.394593 \/ 0.579283 (-0.184690) | 0.428768 \/ 0.434364 (-0.005596) | 0.468235 \/ 0.540337 (-0.072103) | 0.557384 \/ 1.386936 (-0.829552) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#a8bfac259e2b5047bb8a0cdcefc8357477ebf93c \"CML watermark\")\n","@albertvillanova could you take a look at this one ? It directly follows the arrow formatting PR","I added tests for the `__array__` case which lets you go from any tensor format to any other tensor format.\r\n\r\nI also properly deprecated format_type and added a warning message.","
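The two contributor comments just above describe tests for the `__array__` case (which is what allows going from any tensor format to any other) and the deprecation of `format_type` behind a warning. A minimal, self-contained sketch of both patterns; the `TensorLike` and `FormattedDataset` names are hypothetical stand-ins, not the actual `datasets` implementation:

```python
# Sketch of the two mechanisms described in the comments above; class and
# attribute names here are hypothetical, not the real `datasets` source.
import warnings

import numpy as np


class TensorLike:
    """Anything exposing `__array__` can be consumed by `np.asarray`,
    the hook that lets data held in one tensor format be
    re-materialized as another (e.g. torch -> numpy -> tf)."""

    def __init__(self, data):
        self._data = data

    def __array__(self, dtype=None):
        return np.asarray(self._data, dtype=dtype)


class FormattedDataset:
    def __init__(self, format_name=None):
        self._format_name = format_name  # e.g. "numpy", "torch", "tf"

    @property
    def format_type(self):
        # Deprecated alias kept readable for backward compatibility,
        # but it now emits a warning, as the comment above describes.
        warnings.warn(
            "'format_type' is deprecated; use 'format_name' instead.",
            FutureWarning,
            stacklevel=2,
        )
        return self._format_name


print(np.asarray(TensorLike([1, 2, 3])))  # -> [1 2 3]
```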
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007838 \/ 0.011353 (-0.003515) | 0.005177 \/ 0.011008 (-0.005831) | 0.131058 \/ 0.038508 (0.092550) | 0.035959 \/ 0.023109 (0.012850) | 0.414071 \/ 0.275898 (0.138173) | 0.429628 \/ 0.323480 (0.106148) | 0.005151 \/ 0.007986 (-0.002834) | 0.003979 \/ 0.004328 (-0.000349) | 0.103209 \/ 0.004250 (0.098958) | 0.046200 \/ 0.037052 (0.009148) | 0.414020 \/ 0.258489 (0.155531) | 0.475748 \/ 0.293841 (0.181907) | 0.041031 \/ 0.128546 (-0.087515) | 0.014462 \/ 0.075646 (-0.061185) | 0.423706 \/ 0.419271 (0.004434) | 0.063488 \/ 0.043533 (0.019955) | 0.404937 \/ 0.255139 (0.149798) | 0.404973 \/ 0.283200 (0.121773) | 0.114982 \/ 0.141683 (-0.026701) | 1.911867 \/ 1.452155 (0.459713) | 1.925274 \/ 1.492716 (0.432557) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.284656 \/ 0.018006 (0.266650) | 0.588329 \/ 0.000490 (0.587840) | 0.007092 \/ 0.000200 (0.006892) | 0.000143 \/ 0.000054 (0.000089) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025136 \/ 0.037411 (-0.012275) | 0.109514 \/ 0.014526 (0.094988) | 0.117953 \/ 0.176557 (-0.058603) | 0.195454 \/ 0.737135 (-0.541682) | 0.134243 \/ 0.296338 (-0.162096) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.584045 \/ 0.215209 (0.368836) | 6.456922 \/ 2.077655 (4.379267) | 2.759728 \/ 1.504120 (1.255608) | 2.260913 \/ 1.541195 (0.719718) | 2.292535 \/ 1.468490 (0.824045) | 0.906873 \/ 4.584777 (-3.677904) | 
5.554455 \/ 3.745712 (1.808743) | 4.881557 \/ 5.269862 (-0.388305) | 2.509121 \/ 4.565676 (-2.056555) | 0.107191 \/ 0.424275 (-0.317084) | 0.014684 \/ 0.007607 (0.007077) | 0.761625 \/ 0.226044 (0.535580) | 7.582708 \/ 2.268929 (5.313780) | 3.150160 \/ 55.444624 (-52.294464) | 2.792284 \/ 6.876477 (-4.084193) | 2.881321 \/ 2.142072 (0.739248) | 1.108353 \/ 4.805227 (-3.696874) | 0.220129 \/ 6.500664 (-6.280535) | 0.075877 \/ 0.075469 (0.000408) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.465743 \/ 1.841788 (-0.376045) | 17.679219 \/ 8.074308 (9.604911) | 18.929399 \/ 10.191392 (8.738007) | 0.219488 \/ 0.680424 (-0.460935) | 0.028435 \/ 0.534201 (-0.505766) | 0.512623 \/ 0.579283 (-0.066660) | 0.619983 \/ 0.434364 (0.185619) | 0.603430 \/ 0.540337 (0.063092) | 0.730416 \/ 1.386936 (-0.656520) |\n\n<\/details>\nPyArrow==latest\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008285 \/ 0.011353 (-0.003068) | 0.005771 \/ 0.011008 (-0.005237) | 0.106444 \/ 0.038508 (0.067936) | 0.035078 \/ 0.023109 (0.011969) | 0.441198 \/ 0.275898 (0.165300) | 0.536279 \/ 0.323480 (0.212800) | 0.004561 \/ 0.007986 (-0.003424) | 0.006623 \/ 0.004328 (0.002294) | 0.102392 \/ 0.004250 (0.098142) | 0.051736 \/ 0.037052 (0.014684) | 0.479113 \/ 0.258489 (0.220624) | 0.535088 \/ 0.293841 (0.241247) | 0.041805 \/ 0.128546 (-0.086741) | 0.014031 \/ 0.075646 (-0.061615) | 0.115795 \/ 0.419271 (-0.303477) | 0.057913 \/ 0.043533 (0.014380) | 0.435847 \/ 0.255139 (0.180708) | 0.524831 \/ 0.283200 (0.241632) | 0.119419 \/ 0.141683 (-0.022263) | 1.835577 \/ 1.452155 (0.383423) | 1.936990 \/ 1.492716 (0.444273) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.288422 \/ 0.018006 (0.270416) | 0.569776 \/ 0.000490 (0.569287) | 0.005652 \/ 0.000200 (0.005452) | 0.000139 \/ 0.000054 (0.000085) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.034632 \/ 0.037411 (-0.002779) | 0.136217 \/ 0.014526 (0.121691) | 0.139468 \/ 0.176557 (-0.037089) | 0.206804 \/ 0.737135 (-0.530331) | 0.148733 \/ 0.296338 (-0.147606) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.667728 \/ 0.215209 (0.452518) | 6.548972 \/ 2.077655 (4.471317) | 3.051537 \/ 1.504120 (1.547417) | 2.581173 \/ 1.541195 (1.039978) | 2.653443 \/ 1.468490 (1.184953) | 0.906606 \/ 4.584777 (-3.678171) | 
5.704384 \/ 3.745712 (1.958672) | 2.848618 \/ 5.269862 (-2.421244) | 1.821402 \/ 4.565676 (-2.744274) | 0.118018 \/ 0.424275 (-0.306257) | 0.014821 \/ 0.007607 (0.007214) | 0.821967 \/ 0.226044 (0.595923) | 8.165818 \/ 2.268929 (5.896889) | 3.744509 \/ 55.444624 (-51.700116) | 2.901097 \/ 6.876477 (-3.975380) | 3.018068 \/ 2.142072 (0.875996) | 1.106155 \/ 4.805227 (-3.699072) | 0.263118 \/ 6.500664 (-6.237546) | 0.088508 \/ 0.075469 (0.013039) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.725860 \/ 1.841788 (-0.115928) | 19.411246 \/ 8.074308 (11.336938) | 20.807499 \/ 10.191392 (10.616107) | 0.238417 \/ 0.680424 (-0.442007) | 0.026550 \/ 0.534201 (-0.507651) | 0.500715 \/ 0.579283 (-0.078568) | 0.615547 \/ 0.434364 (0.181183) | 0.614361 \/ 0.540337 (0.074023) | 0.720365 \/ 1.386936 (-0.666571) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#ae2e77f8344cdcc1c4c876f67936bec33087b19a \"CML watermark\")\n","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006640 \/ 0.011353 (-0.004713) | 0.004079 \/ 0.011008 (-0.006930) | 0.100555 \/ 0.038508 (0.062046) | 0.037318 \/ 0.023109 (0.014209) | 0.320050 \/ 0.275898 (0.044152) | 0.358860 \/ 0.323480 (0.035380) | 0.003828 \/ 0.007986 (-0.004158) | 0.003215 \/ 0.004328 (-0.001113) | 0.076577 \/ 0.004250 (0.072326) | 0.048080 \/ 0.037052 (0.011028) | 0.324759 \/ 0.258489 (0.066270) | 0.361862 \/ 0.293841 (0.068021) | 0.030759 \/ 0.128546 (-0.097787) | 0.008998 \/ 0.075646 (-0.066648) | 0.329105 \/ 0.419271 (-0.090167) | 0.051407 \/ 0.043533 (0.007875) | 0.311067 \/ 0.255139 (0.055928) | 0.334401 \/ 0.283200 (0.051201) | 0.098307 \/ 0.141683 (-0.043376) | 1.500931 \/ 1.452155 (0.048776) | 1.574646 \/ 1.492716 (0.081930) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.219080 \/ 0.018006 (0.201073) | 0.447117 \/ 0.000490 (0.446627) | 0.009091 \/ 0.000200 (0.008891) | 0.000396 \/ 0.000054 (0.000341) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.026048 \/ 0.037411 (-0.011363) | 0.112714 \/ 0.014526 (0.098188) | 0.116426 \/ 0.176557 (-0.060131) | 0.172187 \/ 0.737135 (-0.564948) | 0.121707 \/ 0.296338 (-0.174632) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.358898 \/ 0.215209 (0.143689) | 3.589212 \/ 2.077655 (1.511557) | 1.677927 \/ 1.504120 (0.173807) | 1.515861 \/ 1.541195 (-0.025334) | 1.598479 \/ 1.468490 (0.129989) | 0.478265 \/ 4.584777 (-4.106512) | 
3.834982 \/ 3.745712 (0.089270) | 1.933815 \/ 5.269862 (-3.336047) | 1.122769 \/ 4.565676 (-3.442908) | 0.066984 \/ 0.424275 (-0.357291) | 0.011276 \/ 0.007607 (0.003669) | 0.512530 \/ 0.226044 (0.286486) | 5.112667 \/ 2.268929 (2.843739) | 2.266336 \/ 55.444624 (-53.178288) | 1.929671 \/ 6.876477 (-4.946806) | 2.127231 \/ 2.142072 (-0.014842) | 0.671307 \/ 4.805227 (-4.133920) | 0.143919 \/ 6.500664 (-6.356745) | 0.066086 \/ 0.075469 (-0.009383) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.208767 \/ 1.841788 (-0.633021) | 15.008415 \/ 8.074308 (6.934106) | 14.085442 \/ 10.191392 (3.894050) | 0.184164 \/ 0.680424 (-0.496260) | 0.017619 \/ 0.534201 (-0.516582) | 0.394443 \/ 0.579283 (-0.184840) | 0.457653 \/ 0.434364 (0.023289) | 0.473169 \/ 0.540337 (-0.067169) | 0.571332 \/ 1.386936 (-0.815604) |\n\n<\/details>\nPyArrow==latest\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007009 \/ 0.011353 (-0.004344) | 0.004330 \/ 0.011008 (-0.006678) | 0.077462 \/ 0.038508 (0.038954) | 0.034780 \/ 0.023109 (0.011671) | 0.395573 \/ 0.275898 (0.119675) | 0.425444 \/ 0.323480 (0.101964) | 0.004119 \/ 0.007986 (-0.003866) | 0.003597 \/ 0.004328 (-0.000731) | 0.075209 \/ 0.004250 (0.070958) | 0.050871 \/ 0.037052 (0.013819) | 0.402990 \/ 0.258489 (0.144500) | 0.445334 \/ 0.293841 (0.151493) | 0.032492 \/ 0.128546 (-0.096054) | 0.009066 \/ 0.075646 (-0.066581) | 0.083073 \/ 0.419271 (-0.336198) | 0.051661 \/ 0.043533 (0.008128) | 0.395207 \/ 0.255139 (0.140068) | 0.409556 \/ 0.283200 (0.126356) | 0.106035 \/ 0.141683 (-0.035648) | 1.506255 \/ 1.452155 (0.054101) | 1.598724 \/ 1.492716 (0.106008) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.194733 \/ 0.018006 (0.176727) | 0.444920 \/ 0.000490 (0.444431) | 0.002402 \/ 0.000200 (0.002202) | 0.000083 \/ 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030464 \/ 0.037411 (-0.006947) | 0.119153 \/ 0.014526 (0.104627) | 0.126081 \/ 0.176557 (-0.050476) | 0.179692 \/ 0.737135 (-0.557444) | 0.131834 \/ 0.296338 (-0.164504) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.440153 \/ 0.215209 (0.224944) | 4.397504 \/ 2.077655 (2.319850) | 2.138320 \/ 1.504120 (0.634200) | 1.950596 \/ 1.541195 (0.409402) | 2.079792 \/ 1.468490 (0.611302) | 0.537606 \/ 4.584777 (-4.047171) | 
3.689420 \/ 3.745712 (-0.056292) | 2.960732 \/ 5.269862 (-2.309129) | 1.585652 \/ 4.565676 (-2.980024) | 0.066102 \/ 0.424275 (-0.358173) | 0.011429 \/ 0.007607 (0.003821) | 0.537011 \/ 0.226044 (0.310967) | 5.342171 \/ 2.268929 (3.073242) | 2.624446 \/ 55.444624 (-52.820179) | 2.313311 \/ 6.876477 (-4.563166) | 2.389166 \/ 2.142072 (0.247094) | 0.657547 \/ 4.805227 (-4.147681) | 0.141640 \/ 6.500664 (-6.359025) | 0.066102 \/ 0.075469 (-0.009367) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.130471 \/ 1.841788 (-0.711317) | 14.824792 \/ 8.074308 (6.750484) | 13.436463 \/ 10.191392 (3.245071) | 0.155688 \/ 0.680424 (-0.524736) | 0.015811 \/ 0.534201 (-0.518390) | 0.355623 \/ 0.579283 (-0.223660) | 0.450604 \/ 0.434364 (0.016241) | 0.472542 \/ 0.540337 (-0.067796) | 0.563584 \/ 1.386936 (-0.823352) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#963ff6de6eae80a6de4aabf0092eb3dfbe43096e \"CML watermark\")\n"],"created_at":1683910129000,"updated_at":1686672245000,"closed_at":1686671825000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5852","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5852","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5852.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5852.patch","merged_at":1686671825000},"body":"Used the TorchFormatter to get torch tensors in iterable dataset with format set to \"torch\".\r\n\r\nIt uses the data from Arrow if possible, otherwise applies recursive_tensorize.\r\n\r\nWhen set back to format_type=None, cast_to_python_objects is used.\r\n\r\nrequires https:\/\/github.com\/huggingface\/datasets\/pull\/5821\r\n\r\nclose https:\/\/github.com\/huggingface\/datasets\/issues\/5793","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5852\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5852\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5850","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5850\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5850\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5850\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5850","id":1707678911,"node_id":"PR_kwDODunzps5QZALv","number":5850,"title":"Make packaged builders skip non-supported file 
formats","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_5850). All of your documentation changes will be reflected on that endpoint.","Good idea. @mariosasko!!!\r\n\r\nPlease note that before this PR, the files are not evenly distributed for archives: `_generate_examples` gets a list of iterators, one for each archive (uncompressed to a directory).","This change could create silent problems when loading files with extensions that are not listed here. For example\r\n\r\n```python\r\nload_dataset(\"text\", data_files=[\"20230515.log\"])\r\n```\r\n\r\nwouldn't even log anything to say that the file was ignored.\r\n\r\nMaybe it's possible to do this at data files patterns resolution ?\r\n\r\ne.g. in get_data_patterns_in_dataset_repository \/ get_data_patterns_locally we could return patterns that include the most common extension","@lhoestq the issue you evoke (.log files skipped by text builder if .log is not added to .txt as supported extension) persists whether you perform the skip at the pattern resolution or in the builder itself.\r\n\r\nThe solution is to add the .log extension (besides the .txt) as supported by text, independently of where we perform the skip (at pattern resolution or in the builder itself).\r\n\r\nAdditionally, at the time we call for pattern resolution, we do not know the builder class yet, so that we cannot pass specific file extensions. First we call data files pattern resolution, and afterwards we call `infer_module_for_data_files` and then know the builder class.","> @lhoestq the issue you evoke (.log files skipped by text builder if .log is not added to .txt as supported extension) persists whether you perform the skip at the pattern resolution or in the builder itself.\r\n\r\nNo I simply think it's a bad breaking change to not support\r\n\r\n```python\r\nload_dataset(\"\", data_files=[\"path\/to\/file_with_unknown_or_no_extension\"])\r\n# or\r\nload_dataset(\"\", data_files=[\"https:\/\/url.to\/file_with_unknown_or_no_extension\"])\r\n```\r\n\r\nIdk if it's the easiest solution, but maybe it's possible to do the change only when inferring the patterns of dataset repositories. 
This should avoid this breaking change.\r\n\r\nFor example it could do something like that in `get_data_patterns_locally`\r\n\r\n```python\r\n Input:\r\n\r\n my_dataset_repository\/\r\n \u251c\u2500\u2500 README.md\r\n \u251c\u2500\u2500 banner.png\r\n \u251c\u2500\u2500 data0.csv\r\n \u251c\u2500\u2500 data1.csv\r\n \u2514\u2500\u2500 data2.csv\r\n\r\n Output:\r\n\r\n {\"train\": [\"**.csv\"]}\r\n```\r\n\r\ninstead of \r\n\r\n```python\r\n Output:\r\n\r\n {\"train\": [\"**\"]}\r\n```","I agree with @lhoestq - it should still be possible to request parsing a file with a specific builder even if the file's extension is \"invalid\" for the builder, and only ignore non-supported file formats when inferring the patterns.","Therefore, if I understand correctly, what you suggest is:\r\n- if the user passes a packaged builder to `load_dataset` (e.g. `load_dataset(\"csv\",...`), then the *passed* `data_files` should not be filtered to remove unsupported extensions. No breaking change in this case\r\n- if the user passes a no-script repo\/folder to `load_dataset` (e.g. `load_dataset(\"my_dataset_repository\",...`), then the *inferred* data files should be filtered to remove the extensions that are not supported by the inferred module name builder\r\n - if the user passes `data_files` as well, then I guess these should not be filtered, to avoid any breaking change as in the first case above","Yes that would be ideal imo !","I think this now fulfills all the requirements.","I find it a bit confusing to still be able to pass data_files that are going to be silently ignored based on the value of `only_supported_extensions`. My suggestion was to have the right data files pattern, not to filter a posteriori (sorry if my last message was confusing).\r\n\r\nHaving the right data files pattern would also allow users to inspect what's actually being loaded with\r\n```\r\nload_dataset_builder(...).config.data_files\r\n```\r\nand it would list exactly what data files are used."],"created_at":1683899554000,"updated_at":1686140798000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5850","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5850","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5850.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5850.patch","merged_at":null},"body":"This PR makes packaged builders skip non-supported file formats:\r\n- Csv builder skips non-CSV files\r\n- Analogously for the other builders\r\n\r\nFix #5849.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5850\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5850\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} 
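To make the pattern-resolution proposal in #5850 concrete: below is a minimal sketch that narrows the inferred data-files pattern to the most common data extension, in the spirit of lhoestq's `get_data_patterns_locally` example. The helper name `infer_data_patterns` and the `NON_DATA_EXTENSIONS` set are assumptions for illustration, not the actual `datasets` internals.

```python
import os
from collections import Counter

# Assumption: extensions that should never count as data files.
NON_DATA_EXTENSIONS = {".md", ".png", ".jpg", ".jpeg"}

def infer_data_patterns(repo_dir):
    """Sketch: return {"train": ["**.csv"]} instead of {"train": ["**"]}
    by picking the most common data-file extension in the repo."""
    counts = Counter(
        ext
        for ext in (os.path.splitext(name)[1].lower() for name in os.listdir(repo_dir))
        if ext and ext not in NON_DATA_EXTENSIONS
    )
    if not counts:
        return {"train": ["**"]}  # fall back to the old catch-all behavior
    most_common_ext = counts.most_common(1)[0][0]
    return {"train": [f"**{most_common_ext}"]}

# my_dataset_repository/ with README.md, banner.png, data0.csv, data1.csv, data2.csv
# -> {"train": ["**.csv"]}
```

Resolving the narrower pattern up front also means `load_dataset_builder(...).config.data_files` lists exactly the files that will be loaded, which is the inspectability argument made in the last comment above.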
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5849","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5849\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5849\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5849\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5849","id":1707551511,"node_id":"I_kwDODunzps5lxysX","number":5849,"title":"CSV datasets should only read the CSV data files in the repo","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462.0,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1683894593000,"updated_at":1687443387000,"closed_at":1687443387000,"author_association":"MEMBER","active_lock_reason":null,"draft":null,"pull_request":null,"body":"When a no-script dataset has many CSV files and a JPG file, the library infers to use the Csv builder, but tries to read as CSV all files in the repo, also the JPG file.\r\n\r\nI think the Csv builder should filter out non-CSV files when reading.\r\n\r\nAn analogue solution should be implemented for other packaged builders.\r\n\r\nRelated to:\r\n- https:\/\/huggingface.co\/datasets\/abidlabs\/img2text\/discussions\/1\r\n- https:\/\/github.com\/gradio-app\/gradio\/pull\/3973#issuecomment-1545409061\r\n\r\nCC: @abidlabs @severo ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5849\/reactions","total_count":2,"+1":2,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5849\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5848","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5848\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5848\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5848\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5848","id":1707506734,"node_id":"PR_kwDODunzps5QYa1B","number":5848,"title":"Add `accelerate` as metric's test dependency to fix CI error","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007565 \/ 0.011353 (-0.003788) | 0.005361 \/ 0.011008 (-0.005647) | 0.098963 \/ 0.038508 (0.060455) | 0.034271 \/ 0.023109 (0.011162) | 0.323421 \/ 0.275898 (0.047523) | 0.348495 \/ 0.323480 (0.025015) | 0.006244 \/ 0.007986 (-0.001741) | 0.004215 \/ 0.004328 (-0.000113) | 0.073614 \/ 0.004250 (0.069364) | 0.049334 \/ 0.037052 (0.012282) | 0.315277 \/ 0.258489 (0.056788) | 0.354325 \/ 0.293841 (0.060484) | 0.035001 \/ 0.128546 (-0.093545) | 0.012149 \/ 0.075646 (-0.063497) | 0.335614 \/ 0.419271 (-0.083657) | 0.050532 \/ 0.043533 (0.006999) | 0.308500 \/ 0.255139 (0.053361) | 0.324620 \/ 0.283200 (0.041421) | 0.110241 \/ 0.141683 (-0.031442) | 1.443923 \/ 1.452155 (-0.008232) | 1.559289 \/ 1.492716 (0.066573) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.207629 \/ 0.018006 (0.189622) | 0.433251 \/ 0.000490 (0.432762) | 0.003021 \/ 0.000200 (0.002821) | 0.000074 \/ 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.028312 \/ 0.037411 (-0.009100) | 0.111829 \/ 0.014526 (0.097303) | 0.127099 \/ 0.176557 (-0.049458) | 0.184702 \/ 0.737135 (-0.552433) | 0.125062 \/ 0.296338 (-0.171277) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.399451 \/ 0.215209 (0.184242) | 3.966528 \/ 2.077655 (1.888874) | 1.826004 \/ 1.504120 (0.321884) | 1.669547 \/ 1.541195 (0.128353) | 1.751584 \/ 1.468490 (0.283094) | 0.688308 \/ 4.584777 (-3.896469) | 
3.813275 \/ 3.745712 (0.067562) | 3.181554 \/ 5.269862 (-2.088307) | 1.750566 \/ 4.565676 (-2.815111) | 0.085038 \/ 0.424275 (-0.339237) | 0.011992 \/ 0.007607 (0.004385) | 0.502374 \/ 0.226044 (0.276330) | 4.970614 \/ 2.268929 (2.701686) | 2.309617 \/ 55.444624 (-53.135007) | 2.012427 \/ 6.876477 (-4.864050) | 2.156348 \/ 2.142072 (0.014276) | 0.834415 \/ 4.805227 (-3.970812) | 0.167912 \/ 6.500664 (-6.332752) | 0.065711 \/ 0.075469 (-0.009758) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.223132 \/ 1.841788 (-0.618656) | 15.126753 \/ 8.074308 (7.052445) | 14.829184 \/ 10.191392 (4.637792) | 0.142582 \/ 0.680424 (-0.537842) | 0.017483 \/ 0.534201 (-0.516718) | 0.429768 \/ 0.579283 (-0.149516) | 0.422745 \/ 0.434364 (-0.011619) | 0.508813 \/ 0.540337 (-0.031525) | 0.618716 \/ 1.386936 (-0.768220) |\n\n<\/details>\nPyArrow==latest\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007749 \/ 0.011353 (-0.003604) | 0.005433 \/ 0.011008 (-0.005576) | 0.076223 \/ 0.038508 (0.037715) | 0.036334 \/ 0.023109 (0.013225) | 0.375339 \/ 0.275898 (0.099441) | 0.413674 \/ 0.323480 (0.090194) | 0.006207 \/ 0.007986 (-0.001778) | 0.004085 \/ 0.004328 (-0.000244) | 0.076154 \/ 0.004250 (0.071904) | 0.050324 \/ 0.037052 (0.013271) | 0.382919 \/ 0.258489 (0.124429) | 0.442508 \/ 0.293841 (0.148667) | 0.035951 \/ 0.128546 (-0.092595) | 0.012067 \/ 0.075646 (-0.063580) | 0.087649 \/ 0.419271 (-0.331623) | 0.048786 \/ 0.043533 (0.005253) | 0.373541 \/ 0.255139 (0.118402) | 0.400437 \/ 0.283200 (0.117237) | 0.102622 \/ 0.141683 (-0.039061) | 1.472443 \/ 1.452155 (0.020288) | 1.580178 \/ 1.492716 (0.087462) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.222105 \/ 0.018006 (0.204098) | 0.445465 \/ 0.000490 (0.444975) | 0.003671 \/ 0.000200 (0.003471) | 0.000096 \/ 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030808 \/ 0.037411 (-0.006603) | 0.116687 \/ 0.014526 (0.102161) | 0.124972 \/ 0.176557 (-0.051584) | 0.175621 \/ 0.737135 (-0.561514) | 0.129029 \/ 0.296338 (-0.167310) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.434627 \/ 0.215209 (0.219418) | 4.330268 \/ 2.077655 (2.252613) | 2.140266 \/ 1.504120 (0.636146) | 1.960705 \/ 1.541195 (0.419510) | 2.035949 \/ 1.468490 (0.567459) | 0.696830 \/ 4.584777 (-3.887947) | 
3.790468 \/ 3.745712 (0.044756) | 3.194112 \/ 5.269862 (-2.075750) | 1.577728 \/ 4.565676 (-2.987948) | 0.085445 \/ 0.424275 (-0.338830) | 0.012207 \/ 0.007607 (0.004600) | 0.555199 \/ 0.226044 (0.329154) | 5.551539 \/ 2.268929 (3.282610) | 2.630917 \/ 55.444624 (-52.813707) | 2.383362 \/ 6.876477 (-4.493114) | 2.476301 \/ 2.142072 (0.334229) | 0.845773 \/ 4.805227 (-3.959455) | 0.169229 \/ 6.500664 (-6.331435) | 0.066064 \/ 0.075469 (-0.009405) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.277543 \/ 1.841788 (-0.564245) | 15.775637 \/ 8.074308 (7.701329) | 13.528588 \/ 10.191392 (3.337196) | 0.167428 \/ 0.680424 (-0.512996) | 0.017581 \/ 0.534201 (-0.516620) | 0.454472 \/ 0.579283 (-0.124811) | 0.427987 \/ 0.434364 (-0.006377) | 0.551512 \/ 0.540337 (0.011175) | 0.650811 \/ 1.386936 (-0.736125) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#96a6f5f526cc90330df597ae0097274742d5b84f \"CML watermark\")\n","
<details>\n<summary>\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009800 \/ 0.011353 (-0.001552) | 0.006443 \/ 0.011008 (-0.004565) | 0.144137 \/ 0.038508 (0.105629) | 0.037493 \/ 0.023109 (0.014383) | 0.482306 \/ 0.275898 (0.206408) | 0.467625 \/ 0.323480 (0.144145) | 0.006812 \/ 0.007986 (-0.001174) | 0.004810 \/ 0.004328 (0.000481) | 0.109047 \/ 0.004250 (0.104796) | 0.047169 \/ 0.037052 (0.010116) | 0.451253 \/ 0.258489 (0.192764) | 0.511339 \/ 0.293841 (0.217498) | 0.055583 \/ 0.128546 (-0.072963) | 0.021810 \/ 0.075646 (-0.053836) | 0.426522 \/ 0.419271 (0.007250) | 0.070282 \/ 0.043533 (0.026749) | 0.469631 \/ 0.255139 (0.214492) | 0.484951 \/ 0.283200 (0.201751) | 0.117370 \/ 0.141683 (-0.024313) | 1.809917 \/ 1.452155 (0.357763) | 1.882659 \/ 1.492716 (0.389943) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.223843 \/ 0.018006 (0.205837) | 0.549216 \/ 0.000490 (0.548726) | 0.007120 \/ 0.000200 (0.006920) | 0.000128 \/ 0.000054 (0.000074) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.033057 \/ 0.037411 (-0.004354) | 0.128242 \/ 0.014526 (0.113716) | 0.140906 \/ 0.176557 (-0.035650) | 0.213122 \/ 0.737135 (-0.524013) | 0.148115 \/ 0.296338 (-0.148224) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.638712 \/ 0.215209 (0.423503) | 6.383684 \/ 2.077655 (4.306029) | 2.477020 \/ 1.504120 (0.972900) | 2.129190 \/ 1.541195 (0.587996) | 2.230503 \/ 1.468490 (0.762013) | 1.367167 \/ 4.584777 (-3.217610) | 
5.570586 \/ 3.745712 (1.824873) | 5.462857 \/ 5.269862 (0.192996) | 2.990604 \/ 4.565676 (-1.575073) | 0.146543 \/ 0.424275 (-0.277732) | 0.016060 \/ 0.007607 (0.008453) | 0.812691 \/ 0.226044 (0.586646) | 7.928041 \/ 2.268929 (5.659112) | 3.329494 \/ 55.444624 (-52.115130) | 2.523452 \/ 6.876477 (-4.353025) | 2.672374 \/ 2.142072 (0.530302) | 1.598554 \/ 4.805227 (-3.206673) | 0.284727 \/ 6.500664 (-6.215937) | 0.080359 \/ 0.075469 (0.004889) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.501112 \/ 1.841788 (-0.340675) | 17.553644 \/ 8.074308 (9.479335) | 22.704062 \/ 10.191392 (12.512670) | 0.225575 \/ 0.680424 (-0.454849) | 0.026531 \/ 0.534201 (-0.507670) | 0.520129 \/ 0.579283 (-0.059154) | 0.626220 \/ 0.434364 (0.191856) | 0.631740 \/ 0.540337 (0.091403) | 0.750611 \/ 1.386936 (-0.636325) |\n\n<\/details>\nPyArrow==latest\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009866 \/ 0.011353 (-0.001487) | 0.005733 \/ 0.011008 (-0.005275) | 0.111529 \/ 0.038508 (0.073021) | 0.042001 \/ 0.023109 (0.018891) | 0.458578 \/ 0.275898 (0.182680) | 0.507796 \/ 0.323480 (0.184316) | 0.006547 \/ 0.007986 (-0.001438) | 0.005611 \/ 0.004328 (0.001282) | 0.115321 \/ 0.004250 (0.111070) | 0.048741 \/ 0.037052 (0.011689) | 0.447611 \/ 0.258489 (0.189122) | 0.531830 \/ 0.293841 (0.237989) | 0.052176 \/ 0.128546 (-0.076370) | 0.022431 \/ 0.075646 (-0.053216) | 0.120709 \/ 0.419271 (-0.298562) | 0.067301 \/ 0.043533 (0.023769) | 0.460577 \/ 0.255139 (0.205438) | 0.497805 \/ 0.283200 (0.214605) | 0.121830 \/ 0.141683 (-0.019853) | 1.876436 \/ 1.452155 (0.424281) | 1.983491 \/ 1.492716 (0.490775) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.230982 \/ 0.018006 (0.212976) | 0.540643 \/ 0.000490 (0.540153) | 0.004646 \/ 0.000200 (0.004446) | 0.000131 \/ 0.000054 (0.000077) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.034230 \/ 0.037411 (-0.003181) | 0.136454 \/ 0.014526 (0.121928) | 0.143370 \/ 0.176557 (-0.033187) | 0.206752 \/ 0.737135 (-0.530384) | 0.148722 \/ 0.296338 (-0.147617) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.704667 \/ 0.215209 (0.489458) | 7.112079 \/ 2.077655 (5.034424) | 3.083916 \/ 1.504120 (1.579797) | 2.606388 \/ 1.541195 (1.065193) | 2.738505 \/ 1.468490 (1.270015) | 1.314897 \/ 4.584777 (-3.269880) | 
5.764442 \/ 3.745712 (2.018729) | 3.491890 \/ 5.269862 (-1.777972) | 2.299983 \/ 4.565676 (-2.265693) | 0.169655 \/ 0.424275 (-0.254620) | 0.015251 \/ 0.007607 (0.007643) | 0.977230 \/ 0.226044 (0.751186) | 9.697773 \/ 2.268929 (7.428844) | 3.826928 \/ 55.444624 (-51.617697) | 3.108238 \/ 6.876477 (-3.768239) | 3.103242 \/ 2.142072 (0.961169) | 1.586645 \/ 4.805227 (-3.218582) | 0.287181 \/ 6.500664 (-6.213483) | 0.107332 \/ 0.075469 (0.031863) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.712710 \/ 1.841788 (-0.129077) | 19.169403 \/ 8.074308 (11.095095) | 21.777301 \/ 10.191392 (11.585909) | 0.216918 \/ 0.680424 (-0.463506) | 0.026551 \/ 0.534201 (-0.507650) | 0.570383 \/ 0.579283 (-0.008900) | 0.643885 \/ 0.434364 (0.209521) | 0.673906 \/ 0.540337 (0.133568) | 0.824573 \/ 1.386936 (-0.562363) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#4ead18b6921c9576a3078d2fb685c38f1e1a4b8a \"CML watermark\")\n"],"created_at":1683892861000,"updated_at":1683899327000,"closed_at":1683898746000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5848","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5848","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5848.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5848.patch","merged_at":1683898746000},"body":"The `frugalscore` metric uses Transformers' Trainer, which requires `accelerate` (as of recently).\r\n\r\nFixes the following [CI error](https:\/\/github.com\/huggingface\/datasets\/actions\/runs\/4950900048\/jobs\/8855148703?pr=5845).","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5848\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5848\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5847","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5847\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5847\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5847\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5847","id":1706616634,"node_id":"I_kwDODunzps5luOc6","number":5847,"title":"Streaming IterableDataset not working with translation 
pipeline","user":{"login":"jlquinn","id":826841,"node_id":"MDQ6VXNlcjgyNjg0MQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/826841?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jlquinn","html_url":"https:\/\/github.com\/jlquinn","followers_url":"https:\/\/api.github.com\/users\/jlquinn\/followers","following_url":"https:\/\/api.github.com\/users\/jlquinn\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jlquinn\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jlquinn\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jlquinn\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jlquinn\/orgs","repos_url":"https:\/\/api.github.com\/users\/jlquinn\/repos","events_url":"https:\/\/api.github.com\/users\/jlquinn\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jlquinn\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I wasn't sure to file this against transformers or datasets.","[`KeyDataset`](https:\/\/github.com\/huggingface\/transformers\/blob\/7f8b909189547944617741d8d3c6c84504701693\/src\/transformers\/pipelines\/pt_utils.py#L296) doesn't support iterable datasets, so you either need to implement a version that does (and also indexing nested (translation) fields):\r\n\r\n```python\r\nfrom torch.utils.data import Dataset, IterableDataset\r\n\r\ndef build_key_fetcher(key: str):\r\n def _key_fetcher(item):\r\n for sub_key in key.split(\".\"):\r\n item = item[sub_key]\r\n return item\r\n return _key_fetcher\r\n\r\nclass KeyDataset(Dataset):\r\n def __new__(cls, dataset: Dataset, key: str):\r\n cls = _KeyIterableDataset if isinstance(dataset, IterableDataset) else _KeyMapDataset\r\n self = object.__new__(cls)\r\n self.dataset = dataset\r\n self.key = key\r\n self._key_fetcher = build_key_fetcher(key)\r\n return self\r\n\r\nclass _KeyMapDataset(KeyDataset):\r\n def __getitem__(self, i):\r\n return self._key_fetcher(self.dataset[i])\r\n \r\n def __len__(self):\r\n return len(self.dataset)\r\n\r\n\r\nclass _KeyIterableDataset(KeyDataset):\r\n def __iter__(self):\r\n for ex in self.dataset:\r\n yield self._key_fetcher(ex)\r\n\r\nks = KeyDataset(ds, \"translation.en\")\r\n```\r\n\r\nor use `IterableDataset`'s `map`:\r\n```python\r\ndef fetch_en_translation(ex):\r\n return {\"en\": ex[\"translation\"][\"en\"]}\r\nks = ds.map(fetch_en_translation, remove_columns=ds.column_names) \r\n```\r\n\r\ncc @sgugger: Perhaps the `KeyDataset` + PyTorch `IterableDataset` case should be supported by Transformers","@mariosasko The map snippet didn't quite work, but gave me enough of a clue to get it working. 
The following snippet does work:\r\n```\r\ndef en_translation(x):\r\n return {\"en\":x['translation']['en']}\r\nks = ds.map(en_translation, remove_columns=['translation'])\r\ntest=[]\r\nfor x in iter(ks):\r\n test.append(x['en'])\r\nxx= mt(test)\r\nfor x in xx:\r\n print(x)\r\n```\r\n\r\nI tried just returning `x['translation']['en`]` in the helper function instead of the dict, but that didn't give me an iterator over strings that pipeline would work with either.\r\n\r\n\r\nThe snippet as is gives the following error:\r\n```\r\nTraceback (most recent call last):\r\n File \"\/home\/jlquinn\/miniconda3\/envs\/watnlp\/lib\/python3.9\/pdb.py\", line 1704, in main\r\n pdb._runscript(mainpyfile)\r\n File \"\/home\/jlquinn\/miniconda3\/envs\/watnlp\/lib\/python3.9\/pdb.py\", line 1573, in _runscript\r\n self.run(statement)\r\n File \"\/home\/jlquinn\/miniconda3\/envs\/watnlp\/lib\/python3.9\/bdb.py\", line 580, in run\r\n exec(cmd, globals, locals)\r\n File \"\", line 1, in \r\n File \"\/home\/jlquinn\/models\/hf\/ende.t5.pipe.py\", line 1, in \r\n from transformers import pipeline\r\n File \"\/home\/jlquinn\/miniconda3\/envs\/watnlp\/lib\/python3.9\/site-packages\/transformers\/pipelines\/text2text_generation.py\", line 335, in __call__\r\n return super().__call__(*args, **kwargs)\r\n File \"\/home\/jlquinn\/miniconda3\/envs\/watnlp\/lib\/python3.9\/site-packages\/transformers\/pipelines\/text2text_generation.py\", line 138, in __call__\r\n result = super().__call__(*args, **kwargs)\r\n File \"\/home\/jlquinn\/miniconda3\/envs\/watnlp\/lib\/python3.9\/site-packages\/transformers\/pipelines\/base.py\", line 1027, in __call__\r\n return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)\r\n File \"\/home\/jlquinn\/miniconda3\/envs\/watnlp\/lib\/python3.9\/site-packages\/transformers\/pipelines\/base.py\", line 1033, in run_single\r\n model_inputs = self.preprocess(inputs, **preprocess_params)\r\n File \"\/home\/jlquinn\/miniconda3\/envs\/watnlp\/lib\/python3.9\/site-packages\/transformers\/pipelines\/text2text_generation.py\", line 287, in preprocess\r\n return super()._parse_and_tokenize(*args, truncation=truncation)\r\n File \"\/home\/jlquinn\/miniconda3\/envs\/watnlp\/lib\/python3.9\/site-packages\/transformers\/pipelines\/text2text_generation.py\", line 100, in _parse_and_tokenize\r\n raise ValueError(\r\nValueError: `args[0]`: have the wrong format. The should be either of type `str` or type `list`\r\nUncaught exception. Entering post mortem debugging\r\nRunning 'cont' or 'step' will restart the program\r\n```\r\n","So perhaps there's no bug exactly, but I would love to see two things: 1) improve the documentation to better understand what's really getting returned. 
2) an updated example of using a transformers pipeline with a dataset that covers the oddball case that translation appears to be.","cc @Narsil ","Hi,\r\n\r\nfor the original snippet, the issue is that `streaming` datasets are not countable (they have no len) and therefore `KeyDataset` cannot work with them (KeyDataset is a dataset and therefore requires a length).\r\n\r\nI modified the original snippet slightly to make it work:\r\n\r\n```python\r\nfrom transformers import pipeline\r\nfrom transformers.pipelines.pt_utils import KeyDataset\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(path=\"wmt14\", name=\"fr-en\", split=\"test\", streaming=True)\r\nbs = 1\r\nmt = pipeline(\r\n \"translation_en_to_fr\", model=\"hf-internal-testing\/tiny-random-T5ForConditionalGeneration\", batch_size=bs\r\n)\r\n\r\n\r\ndef ks(ds):\r\n for item in ds:\r\n yield item[\"translation\"][\"en\"]\r\n\r\n\r\n# print(f\"{ks}\")\r\nxx = mt(ks(ds))\r\nfor x in xx:\r\n print(x)\r\n```\r\n\r\nThis is what the first example in the docs suggests using (as it's the most flexible): https:\/\/huggingface.co\/docs\/transformers\/v4.29.1\/en\/pipeline_tutorial#using-pipelines-on-a-dataset\r\n\r\n`KeyDataset` really exists only to get a `sized` dataset that works more nicely with `tqdm`, for instance.\r\n\r\n@sgugger should we update the docs to remove `KeyDataset` entirely? (We could add a note about passing the length of the data to tqdm manually so that the progress bar option stays easy to use.)\r\n","Maybe move `KeyDataset` later in the guide and specify that it's mostly for streaming, then? Or is it also necessary for batch_size>1 (which is what the current doc implies)?","Hmm\r\n\r\nIterator (`yield`):\r\n- Not countable\r\n- Super flexible\r\n- Cannot use `num_workers>1` (threading requires indexing at the correct location, while iterators have to be consumed in order, so each worker would iterate over the full thing, which would genuinely be a bad idea)\r\n- Can batch\r\n- tqdm doesn't show a nice progress bar (it has no total)\r\n\r\nKeyDataset (or any PyTorch-like Dataset returning the correct object for the pipeline):\r\n- Countable\r\n- Less flexible (not applicable to streaming datasets), and can only work on single keys. But it should be easy to read and to write your own (like @mariosasko did)\r\n- Works with `num_workers > 1` (every worker can fetch exactly what's needed)\r\n- Can batch\r\n- tqdm shows a nice progress bar\r\n\r\nIn the docs, if we update all the examples to use iterators, and include an example with\r\n\r\n```python\r\nfor item in tqdm.tqdm(pipe(iterator()), total=len(dataset)):\r\n ...\r\n```\r\n\r\nwe can preserve the biggest feature that doesn't work out of the box with iterators, which is the tqdm progress bar.\r\n\r\nWe can mention `num_workers>1`, but it tends to be an issue only on CPU-intensive loads, like images (and maybe audio).\r\n"],"created_at":1683841958000,"updated_at":1684252795000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nI'm trying to use a streaming dataset for translation inference to avoid downloading the training data.\r\n\r\nI'm using a pipeline and a dataset, and following the guidance in the tutorial. 
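For reference, the pattern that tutorial shows is roughly the following minimal sketch (the model and dataset names here are illustrative placeholders, not the ones used in this report):\r\n\r\n```python\r\nfrom transformers import pipeline\r\nfrom transformers.pipelines.pt_utils import KeyDataset\r\nfrom datasets import load_dataset\r\n\r\n# The docs' pipeline-over-a-dataset pattern, on a map-style (non-streaming) dataset.\r\npipe = pipeline(\"text-classification\", model=\"distilbert-base-uncased-finetuned-sst-2-english\")\r\ndataset = load_dataset(\"imdb\", split=\"test\")\r\nfor out in pipe(KeyDataset(dataset, \"text\")):\r\n    print(out)\r\n```\r\n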
\r\n\r\nInstead I get an exception that IterableDataset has no len().\n\n### Steps to reproduce the bug\n\nCODE:\r\n```python\r\nfrom transformers import pipeline\r\nfrom transformers.pipelines.pt_utils import KeyDataset\r\nfrom datasets import load_dataset\r\nds = load_dataset(path=\"wmt14\", name=\"fr-en\", split=\"test\", streaming=True)\r\nbs=1\r\nmt = pipeline(\"translation_en_to_fr\", model=\"t5-base\", batch_size=bs)\r\n#print(mt(\"hello\")) THIS WORKS\r\nks = KeyDataset(ds, \"translation\")\r\nprint(f\"{ks}\")\r\nxx = mt(ks)\r\nfor x in xx:\r\n print(x)\r\n```\r\n\r\nRUN:\r\n```\r\n(watnlp) [jlquinn@bertdev01 hf]$ python ende.t5.pipe.py \r\n2023-05-11 16:48:08.817572: I tensorflow\/core\/util\/util.cc:169] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\r\n2023-05-11 16:48:08.821388: W tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory\r\n2023-05-11 16:48:08.821407: I tensorflow\/stream_executor\/cuda\/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\r\n\r\nTraceback (most recent call last):\r\n File \"\/home\/jlquinn\/models\/hf\/ende.t5.pipe.py\", line 11, in <module>\r\n for x in xx:\r\n File \"\/home\/jlquinn\/miniconda3\/envs\/watnlp\/lib\/python3.9\/site-packages\/transformers\/pipelines\/pt_utils.py\", line 111, in __next__\r\n item = next(self.iterator)\r\n File \"\/home\/jlquinn\/miniconda3\/envs\/watnlp\/lib\/python3.9\/site-packages\/transformers\/pipelines\/pt_utils.py\", line 111, in __next__\r\n item = next(self.iterator)\r\n File \"\/home\/jlquinn\/miniconda3\/envs\/watnlp\/lib\/python3.9\/site-packages\/torch\/utils\/data\/dataloader.py\", line 681, in __next__\r\n data = self._next_data()\r\n File \"\/home\/jlquinn\/miniconda3\/envs\/watnlp\/lib\/python3.9\/site-packages\/torch\/utils\/data\/dataloader.py\", line 720, in _next_data\r\n index = self._next_index() # may raise StopIteration\r\n File \"\/home\/jlquinn\/miniconda3\/envs\/watnlp\/lib\/python3.9\/site-packages\/torch\/utils\/data\/sampler.py\", line 247, in __iter__\r\n for idx in self.sampler:\r\n File \"\/home\/jlquinn\/miniconda3\/envs\/watnlp\/lib\/python3.9\/site-packages\/torch\/utils\/data\/sampler.py\", line 76, in __iter__\r\n return iter(range(len(self.data_source)))\r\n File \"\/home\/jlquinn\/miniconda3\/envs\/watnlp\/lib\/python3.9\/site-packages\/transformers\/pipelines\/pt_utils.py\", line 13, in __len__\r\n return len(self.dataset)\r\n File \"\/home\/jlquinn\/miniconda3\/envs\/watnlp\/lib\/python3.9\/site-packages\/transformers\/pipelines\/pt_utils.py\", line 289, in __len__\r\n return len(self.dataset)\r\nTypeError: object of type 'IterableDataset' has no len()\r\n```\n\n### Expected behavior\n\nI'm expecting French translations of the English test set to be printed.\r\n\n\n### Environment info\n\nRun on CPU with no GPU.\r\n\r\nRHEL 8.7 x86_64\r\npython 3.9.0\r\ntransformers 4.17.0\r\ndatasets 2.0.0\r\ntokenizers 0.12.1\r\n\r\n```\r\n(watnlp) [jlquinn@bertdev01 hf]$ datasets-cli env\r\n\r\nCopy-and-paste the text below in 
your GitHub issue.\r\n\r\n- `datasets` version: 2.0.0\r\n- Platform: Linux-4.18.0-372.19.1.el8_6.x86_64-x86_64-with-glibc2.28\r\n- Python version: 3.9.0\r\n- PyArrow version: 8.0.0\r\n- Pandas version: 1.4.4\r\n```\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5847\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5847\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5851","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5851\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5851\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5851\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5851","id":1707907048,"node_id":"I_kwDODunzps5lzJfo","number":5851,"title":"Error message not clear in interleaving datasets","user":{"login":"surya-narayanan","id":17240858,"node_id":"MDQ6VXNlcjE3MjQwODU4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17240858?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/surya-narayanan","html_url":"https:\/\/github.com\/surya-narayanan","followers_url":"https:\/\/api.github.com\/users\/surya-narayanan\/followers","following_url":"https:\/\/api.github.com\/users\/surya-narayanan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/surya-narayanan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/surya-narayanan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/surya-narayanan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/surya-narayanan\/orgs","repos_url":"https:\/\/api.github.com\/users\/surya-narayanan\/repos","events_url":"https:\/\/api.github.com\/users\/surya-narayanan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/surya-narayanan\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186.0,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","
followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1683838333000,"updated_at":1684837979000,"closed_at":1684837979000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### System Info\n\nstandard env\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [X] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE\/SQuAD, ...)\n- [X] My own task or dataset (give details below)\n\n### Reproduction\n\nI'm trying to interleave the 'sciq', 'wiki', and 'pile-enron' datasets. I think the mistake I made was loading the train split for one but not for the others, but the error is not too helpful:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nValueError                                Traceback (most recent call last)\r\n\/home\/suryahari\/Vornoi\/save_model_ops.py in line 3\r\n     41 # %%\r\n----> 43 dataset = interleave_datasets(datasets, stopping_strategy=\"all_exhausted\")\r\n\r\nFile ~\/miniconda3\/envs\/vornoi\/lib\/python3.10\/site-packages\/datasets\/combine.py:124, in interleave_datasets(datasets, probabilities, seed, info, split, stopping_strategy)\r\n    122 for dataset in datasets[1:]:\r\n    123     if (map_style and not isinstance(dataset, Dataset)) or (iterable and not isinstance(dataset, IterableDataset)):\r\n--> 124         raise ValueError(\r\n    125             f\"Unable to interleave a {type(datasets[0])} with a {type(dataset)}. Expected a list of Dataset objects or a list of IterableDataset objects.\"\r\n    126         )\r\n    127 if stopping_strategy not in [\"first_exhausted\", \"all_exhausted\"]:\r\n    128     raise ValueError(f\"{stopping_strategy} is not supported. Please enter a valid stopping_strategy.\")\r\n\r\nValueError: Unable to interleave a <class 'datasets.arrow_dataset.Dataset'> with a <class 'datasets.iterable_dataset.IterableDataset'>. Expected a list of Dataset objects or a list of IterableDataset objects.\r\n```\n\n### Expected behavior\n\nThe error message should hopefully be clearer.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5851\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5851\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5846","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5846\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5846\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5846\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5846","id":1706289290,"node_id":"I_kwDODunzps5ls-iK","number":5846,"title":"load_dataset('bigcode\/the-stack-dedup', streaming=True) very 
slow!","user":{"login":"tbenthompson","id":4241811,"node_id":"MDQ6VXNlcjQyNDE4MTE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4241811?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/tbenthompson","html_url":"https:\/\/github.com\/tbenthompson","followers_url":"https:\/\/api.github.com\/users\/tbenthompson\/followers","following_url":"https:\/\/api.github.com\/users\/tbenthompson\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/tbenthompson\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/tbenthompson\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/tbenthompson\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/tbenthompson\/orgs","repos_url":"https:\/\/api.github.com\/users\/tbenthompson\/repos","events_url":"https:\/\/api.github.com\/users\/tbenthompson\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/tbenthompson\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":{"login":"mariosasko","id":47462742.0,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"assignees":[{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["This is due to the slow resolution of the data files: https:\/\/github.com\/huggingface\/datasets\/issues\/5537.\r\n\r\nWe plan to switch to `huggingface_hub`'s `HfFileSystem` soon to make the resolution faster (will be up to 20x faster once we merge https:\/\/github.com\/huggingface\/huggingface_hub\/pull\/1443)\r\n\r\n","You're right, when I try to parse more than 50GB of text data, I also get very slow, usually taking hours or even 
tens of hours.","> You're right, when I try to parse more than 50GB of text data, I also get very slow, usually taking hours or even tens of hours.\r\n\r\nThat's unrelated to the problem discussed in this issue. ","> > You're right, when I try to parse more than 50GB of text data, I also get very slow, usually taking hours or even tens of hours.\r\n> \r\n> That's unrelated to the problem discussed in this issue.\r\n\r\nSorry, I misunderstood it."],"created_at":1683827937000,"updated_at":1684207426000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\r\n\r\nRunning\r\n\r\n```\r\nimport datasets\r\nds = datasets.load_dataset('bigcode\/the-stack-dedup', streaming=True)\r\n```\r\n\r\ntakes about 2.5 minutes! \r\n\r\nI would expect this to be near instantaneous. With other datasets, the runtime is one or two seconds.\r\n\r\n\r\n### Environment info\r\n\r\n- `datasets` version: 2.11.0\r\n- Platform: macOS-13.3.1-arm64-arm-64bit\r\n- Python version: 3.10.10\r\n- Huggingface_hub version: 0.13.4\r\n- PyArrow version: 11.0.0\r\n- Pandas version: 2.0.0","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5846\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5846\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5845","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5845\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5845\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5845\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5845","id":1706253251,"node_id":"PR_kwDODunzps5QUMjS","number":5845,"title":"Add `date_format` param to the CSV reader","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
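For context, a minimal sketch of what the new `date_format` parameter could look like in use, assuming it is forwarded to the underlying pandas CSV parser together with `parse_dates` like the reader's other pandas-style options (the file name, column name, and format string below are illustrative):\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n# Hypothetical CSV with a \"day\" column such as 31\/12\/2023; `date_format`\r\n# describes how the columns listed in `parse_dates` should be parsed.\r\nds = load_dataset(\r\n    \"csv\",\r\n    data_files=\"events.csv\",  # illustrative path\r\n    parse_dates=[\"day\"],\r\n    date_format=\"%d\/%m\/%Y\",\r\n)\r\nprint(ds[\"train\"].features)\r\n```\r\n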
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007592 \/ 0.011353 (-0.003761) | 0.005223 \/ 0.011008 (-0.005786) | 0.110218 \/ 0.038508 (0.071710) | 0.027644 \/ 0.023109 (0.004534) | 0.335063 \/ 0.275898 (0.059165) | 0.347102 \/ 0.323480 (0.023623) | 0.005107 \/ 0.007986 (-0.002878) | 0.003932 \/ 0.004328 (-0.000396) | 0.086095 \/ 0.004250 (0.081845) | 0.034735 \/ 0.037052 (-0.002317) | 0.329029 \/ 0.258489 (0.070540) | 0.370282 \/ 0.293841 (0.076441) | 0.043040 \/ 0.128546 (-0.085507) | 0.019626 \/ 0.075646 (-0.056021) | 0.336452 \/ 0.419271 (-0.082819) | 0.070365 \/ 0.043533 (0.026832) | 0.326881 \/ 0.255139 (0.071742) | 0.354984 \/ 0.283200 (0.071785) | 0.102605 \/ 0.141683 (-0.039077) | 1.459161 \/ 1.452155 (0.007007) | 1.453599 \/ 1.492716 (-0.039117) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.201021 \/ 0.018006 (0.183015) | 0.456415 \/ 0.000490 (0.455926) | 0.012349 \/ 0.000200 (0.012149) | 0.000115 \/ 0.000054 (0.000061) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025199 \/ 0.037411 (-0.012213) | 0.098536 \/ 0.014526 (0.084010) | 0.107528 \/ 0.176557 (-0.069028) | 0.160492 \/ 0.737135 (-0.576643) | 0.108660 \/ 0.296338 (-0.187679) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.527020 \/ 0.215209 (0.311811) | 5.357635 \/ 2.077655 (3.279980) | 2.062930 \/ 1.504120 (0.558811) | 1.783009 \/ 1.541195 (0.241815) | 1.840225 \/ 1.468490 (0.371735) | 1.074278 \/ 4.584777 (-3.510499) | 
4.710533 \/ 3.745712 (0.964821) | 2.611202 \/ 5.269862 (-2.658660) | 1.885487 \/ 4.565676 (-2.680189) | 0.123201 \/ 0.424275 (-0.301074) | 0.013880 \/ 0.007607 (0.006273) | 0.636511 \/ 0.226044 (0.410467) | 6.516075 \/ 2.268929 (4.247146) | 2.710138 \/ 55.444624 (-52.734486) | 2.046606 \/ 6.876477 (-4.829871) | 2.085907 \/ 2.142072 (-0.056166) | 1.199489 \/ 4.805227 (-3.605738) | 0.211668 \/ 6.500664 (-6.288996) | 0.075436 \/ 0.075469 (-0.000033) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.219771 \/ 1.841788 (-0.622016) | 14.276215 \/ 8.074308 (6.201907) | 16.611529 \/ 10.191392 (6.420137) | 0.221091 \/ 0.680424 (-0.459333) | 0.024922 \/ 0.534201 (-0.509279) | 0.431906 \/ 0.579283 (-0.147377) | 0.518863 \/ 0.434364 (0.084499) | 0.515366 \/ 0.540337 (-0.024971) | 0.640411 \/ 1.386936 (-0.746525) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007955 \/ 0.011353 (-0.003398) | 0.004813 \/ 0.011008 (-0.006196) | 0.076508 \/ 0.038508 (0.038000) | 0.028137 \/ 0.023109 (0.005028) | 0.349609 \/ 0.275898 (0.073711) | 0.403588 \/ 0.323480 (0.080109) | 0.005456 \/ 0.007986 (-0.002530) | 0.005677 \/ 0.004328 (0.001349) | 0.076882 \/ 0.004250 (0.072632) | 0.039832 \/ 0.037052 (0.002779) | 0.351930 \/ 0.258489 (0.093440) | 0.390492 \/ 0.293841 (0.096651) | 0.045199 \/ 0.128546 (-0.083347) | 0.023945 \/ 0.075646 (-0.051701) | 0.091140 \/ 0.419271 (-0.328132) | 0.057728 \/ 0.043533 (0.014195) | 0.370663 \/ 0.255139 (0.115524) | 0.380649 \/ 0.283200 (0.097449) | 0.097017 \/ 0.141683 (-0.044666) | 1.362248 \/ 1.452155 (-0.089907) | 1.445699 \/ 1.492716 (-0.047018) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.204207 \/ 0.018006 (0.186201) | 0.474471 \/ 0.000490 (0.473981) | 0.012187 \/ 0.000200 (0.011987) | 0.000151 \/ 0.000054 (0.000096) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.023123 \/ 0.037411 (-0.014288) | 0.097547 \/ 0.014526 (0.083021) | 0.113877 \/ 0.176557 (-0.062679) | 0.158307 \/ 0.737135 (-0.578828) | 0.113876 \/ 0.296338 (-0.182462) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.519920 \/ 0.215209 (0.304711) | 5.384371 \/ 2.077655 (3.306716) | 2.263276 \/ 1.504120 (0.759156) | 1.960604 \/ 1.541195 (0.419409) | 2.022864 \/ 1.468490 (0.554374) | 1.015430 \/ 4.584777 (-3.569347) | 
4.774426 \/ 3.745712 (1.028714) | 4.549598 \/ 5.269862 (-0.720264) | 2.412638 \/ 4.565676 (-2.153039) | 0.117983 \/ 0.424275 (-0.306292) | 0.013340 \/ 0.007607 (0.005733) | 0.639826 \/ 0.226044 (0.413782) | 6.491622 \/ 2.268929 (4.222693) | 2.946892 \/ 55.444624 (-52.497732) | 2.376393 \/ 6.876477 (-4.500084) | 2.285592 \/ 2.142072 (0.143519) | 1.185049 \/ 4.805227 (-3.620178) | 0.204127 \/ 6.500664 (-6.296537) | 0.070285 \/ 0.075469 (-0.005184) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.439736 \/ 1.841788 (-0.402052) | 14.852087 \/ 8.074308 (6.777779) | 15.675742 \/ 10.191392 (5.484350) | 0.206577 \/ 0.680424 (-0.473846) | 0.031688 \/ 0.534201 (-0.502513) | 0.471003 \/ 0.579283 (-0.108280) | 0.505449 \/ 0.434364 (0.071085) | 0.506114 \/ 0.540337 (-0.034224) | 0.583752 \/ 1.386936 (-0.803184) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#d6fcff8a031db39cb31079bc1fa62ded6e35218c \"CML watermark\")\n","
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.012965 \/ 0.011353 (0.001612) | 0.006660 \/ 0.011008 (-0.004348) | 0.126060 \/ 0.038508 (0.087551) | 0.041154 \/ 0.023109 (0.018045) | 0.413428 \/ 0.275898 (0.137530) | 0.429035 \/ 0.323480 (0.105555) | 0.006680 \/ 0.007986 (-0.001305) | 0.005063 \/ 0.004328 (0.000734) | 0.092161 \/ 0.004250 (0.087911) | 0.056092 \/ 0.037052 (0.019039) | 0.421460 \/ 0.258489 (0.162971) | 0.450291 \/ 0.293841 (0.156450) | 0.050820 \/ 0.128546 (-0.077726) | 0.021392 \/ 0.075646 (-0.054255) | 0.426915 \/ 0.419271 (0.007643) | 0.064908 \/ 0.043533 (0.021375) | 0.406769 \/ 0.255139 (0.151630) | 0.434344 \/ 0.283200 (0.151144) | 0.127967 \/ 0.141683 (-0.013716) | 1.922414 \/ 1.452155 (0.470260) | 1.940717 \/ 1.492716 (0.448000) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.288024 \/ 0.018006 (0.270017) | 0.615859 \/ 0.000490 (0.615369) | 0.007095 \/ 0.000200 (0.006895) | 0.000160 \/ 0.000054 (0.000106) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.028182 \/ 0.037411 (-0.009230) | 0.126277 \/ 0.014526 (0.111752) | 0.131687 \/ 0.176557 (-0.044870) | 0.206191 \/ 0.737135 (-0.530944) | 0.141799 \/ 0.296338 (-0.154539) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.631580 \/ 0.215209 (0.416371) | 6.141942 \/ 2.077655 (4.064287) | 2.476721 \/ 1.504120 (0.972602) | 2.128850 \/ 1.541195 (0.587655) | 2.236468 \/ 1.468490 (0.767978) | 1.188665 \/ 4.584777 (-3.396112) | 
5.481179 \/ 3.745712 (1.735467) | 3.120333 \/ 5.269862 (-2.149529) | 2.365889 \/ 4.565676 (-2.199787) | 0.145081 \/ 0.424275 (-0.279194) | 0.015866 \/ 0.007607 (0.008259) | 0.795650 \/ 0.226044 (0.569605) | 7.595289 \/ 2.268929 (5.326361) | 3.174418 \/ 55.444624 (-52.270207) | 2.905207 \/ 6.876477 (-3.971270) | 2.428263 \/ 2.142072 (0.286191) | 1.408900 \/ 4.805227 (-3.396328) | 0.265485 \/ 6.500664 (-6.235179) | 0.083882 \/ 0.075469 (0.008413) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.517025 \/ 1.841788 (-0.324762) | 18.110288 \/ 8.074308 (10.035980) | 20.810003 \/ 10.191392 (10.618611) | 0.210380 \/ 0.680424 (-0.470044) | 0.030180 \/ 0.534201 (-0.504021) | 0.523453 \/ 0.579283 (-0.055830) | 0.603896 \/ 0.434364 (0.169532) | 0.622554 \/ 0.540337 (0.082216) | 0.737973 \/ 1.386936 (-0.648963) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009795 \/ 0.011353 (-0.001558) | 0.006269 \/ 0.011008 (-0.004739) | 0.099938 \/ 0.038508 (0.061430) | 0.035162 \/ 0.023109 (0.012052) | 0.506353 \/ 0.275898 (0.230455) | 0.527804 \/ 0.323480 (0.204324) | 0.007211 \/ 0.007986 (-0.000775) | 0.005498 \/ 0.004328 (0.001169) | 0.098325 \/ 0.004250 (0.094075) | 0.054513 \/ 0.037052 (0.017461) | 0.525764 \/ 0.258489 (0.267274) | 0.576699 \/ 0.293841 (0.282858) | 0.052800 \/ 0.128546 (-0.075747) | 0.021192 \/ 0.075646 (-0.054454) | 0.117676 \/ 0.419271 (-0.301596) | 0.055415 \/ 0.043533 (0.011882) | 0.516746 \/ 0.255139 (0.261607) | 0.528417 \/ 0.283200 (0.245217) | 0.116947 \/ 0.141683 (-0.024735) | 1.757864 \/ 1.452155 (0.305709) | 2.043632 \/ 1.492716 (0.550916) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.284018 \/ 0.018006 (0.266011) | 0.595086 \/ 0.000490 (0.594596) | 0.001945 \/ 0.000200 (0.001745) | 0.000127 \/ 0.000054 (0.000073) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.032255 \/ 0.037411 (-0.005157) | 0.128201 \/ 0.014526 (0.113676) | 0.139189 \/ 0.176557 (-0.037367) | 0.199750 \/ 0.737135 (-0.537385) | 0.149406 \/ 0.296338 (-0.146933) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.652184 \/ 0.215209 (0.436975) | 6.453319 \/ 2.077655 (4.375664) | 2.831566 \/ 1.504120 (1.327446) | 2.453064 \/ 1.541195 (0.911869) | 2.622056 \/ 1.468490 (1.153566) | 1.191279 \/ 4.584777 (-3.393498) | 
5.504720 \/ 3.745712 (1.759007) | 5.916900 \/ 5.269862 (0.647038) | 2.974400 \/ 4.565676 (-1.591277) | 0.142851 \/ 0.424275 (-0.281424) | 0.015241 \/ 0.007607 (0.007634) | 0.917537 \/ 0.226044 (0.691493) | 8.277645 \/ 2.268929 (6.008717) | 3.700495 \/ 55.444624 (-51.744130) | 3.047127 \/ 6.876477 (-3.829350) | 3.093216 \/ 2.142072 (0.951143) | 1.413529 \/ 4.805227 (-3.391698) | 0.259395 \/ 6.500664 (-6.241270) | 0.083144 \/ 0.075469 (0.007675) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.632240 \/ 1.841788 (-0.209548) | 18.687403 \/ 8.074308 (10.613095) | 20.134091 \/ 10.191392 (9.942699) | 0.238792 \/ 0.680424 (-0.441632) | 0.027645 \/ 0.534201 (-0.506556) | 0.518200 \/ 0.579283 (-0.061083) | 0.613535 \/ 0.434364 (0.179171) | 0.631414 \/ 0.540337 (0.091076) | 0.724658 \/ 1.386936 (-0.662278) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#ac7caa5e195ad76c7e8ef98914813383f4f668cf \"CML watermark\")\n","
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006228 \/ 0.011353 (-0.005125) | 0.004517 \/ 0.011008 (-0.006492) | 0.097998 \/ 0.038508 (0.059490) | 0.027903 \/ 0.023109 (0.004793) | 0.309789 \/ 0.275898 (0.033891) | 0.332784 \/ 0.323480 (0.009304) | 0.004757 \/ 0.007986 (-0.003228) | 0.003348 \/ 0.004328 (-0.000981) | 0.075193 \/ 0.004250 (0.070942) | 0.037382 \/ 0.037052 (0.000330) | 0.306929 \/ 0.258489 (0.048440) | 0.347304 \/ 0.293841 (0.053463) | 0.030235 \/ 0.128546 (-0.098312) | 0.011516 \/ 0.075646 (-0.064131) | 0.322249 \/ 0.419271 (-0.097023) | 0.044125 \/ 0.043533 (0.000592) | 0.303874 \/ 0.255139 (0.048735) | 0.326808 \/ 0.283200 (0.043608) | 0.088137 \/ 0.141683 (-0.053546) | 1.521426 \/ 1.452155 (0.069272) | 1.573823 \/ 1.492716 (0.081107) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.203204 \/ 0.018006 (0.185197) | 0.402247 \/ 0.000490 (0.401757) | 0.003146 \/ 0.000200 (0.002946) | 0.000088 \/ 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.022955 \/ 0.037411 (-0.014456) | 0.096059 \/ 0.014526 (0.081533) | 0.105552 \/ 0.176557 (-0.071004) | 0.167459 \/ 0.737135 (-0.569676) | 0.106723 \/ 0.296338 (-0.189615) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.454626 \/ 0.215209 (0.239417) | 4.556346 \/ 2.077655 (2.478691) | 2.220349 \/ 1.504120 (0.716229) | 2.011820 \/ 1.541195 (0.470625) | 2.048149 \/ 1.468490 (0.579659) | 0.697583 \/ 4.584777 (-3.887194) | 
3.428394 \/ 3.745712 (-0.317318) | 1.863872 \/ 5.269862 (-3.405989) | 1.159691 \/ 4.565676 (-3.405985) | 0.082598 \/ 0.424275 (-0.341677) | 0.012202 \/ 0.007607 (0.004594) | 0.555617 \/ 0.226044 (0.329572) | 5.545481 \/ 2.268929 (3.276553) | 2.650850 \/ 55.444624 (-52.793775) | 2.305864 \/ 6.876477 (-4.570613) | 2.392252 \/ 2.142072 (0.250179) | 0.808512 \/ 4.805227 (-3.996716) | 0.152086 \/ 6.500664 (-6.348578) | 0.066440 \/ 0.075469 (-0.009029) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.211789 \/ 1.841788 (-0.629999) | 13.515546 \/ 8.074308 (5.441238) | 13.859870 \/ 10.191392 (3.668478) | 0.150335 \/ 0.680424 (-0.530088) | 0.016578 \/ 0.534201 (-0.517623) | 0.379145 \/ 0.579283 (-0.200138) | 0.393735 \/ 0.434364 (-0.040628) | 0.460219 \/ 0.540337 (-0.080118) | 0.555896 \/ 1.386936 (-0.831040) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006402 \/ 0.011353 (-0.004950) | 0.004558 \/ 0.011008 (-0.006450) | 0.077332 \/ 0.038508 (0.038824) | 0.027955 \/ 0.023109 (0.004846) | 0.407877 \/ 0.275898 (0.131979) | 0.432552 \/ 0.323480 (0.109072) | 0.004850 \/ 0.007986 (-0.003135) | 0.003329 \/ 0.004328 (-0.000999) | 0.075767 \/ 0.004250 (0.071517) | 0.035940 \/ 0.037052 (-0.001112) | 0.419544 \/ 0.258489 (0.161055) | 0.454672 \/ 0.293841 (0.160831) | 0.030461 \/ 0.128546 (-0.098085) | 0.011536 \/ 0.075646 (-0.064111) | 0.085774 \/ 0.419271 (-0.333498) | 0.039408 \/ 0.043533 (-0.004125) | 0.389909 \/ 0.255139 (0.134770) | 0.403287 \/ 0.283200 (0.120088) | 0.088385 \/ 0.141683 (-0.053298) | 1.596840 \/ 1.452155 (0.144686) | 1.659296 \/ 1.492716 (0.166580) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.216349 \/ 0.018006 (0.198342) | 0.394969 \/ 0.000490 (0.394479) | 0.000408 \/ 0.000200 (0.000208) | 0.000059 \/ 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.024346 \/ 0.037411 (-0.013066) | 0.099609 \/ 0.014526 (0.085084) | 0.106779 \/ 0.176557 (-0.069778) | 0.156889 \/ 0.737135 (-0.580247) | 0.110625 \/ 0.296338 (-0.185714) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.443809 \/ 0.215209 (0.228600) | 4.450524 \/ 2.077655 (2.372870) | 2.151694 \/ 1.504120 (0.647574) | 1.952521 \/ 1.541195 (0.411326) | 1.963320 \/ 1.468490 (0.494830) | 0.709291 \/ 4.584777 (-3.875486) | 
3.415708 \/ 3.745712 (-0.330005) | 1.850498 \/ 5.269862 (-3.419363) | 1.164355 \/ 4.565676 (-3.401321) | 0.084977 \/ 0.424275 (-0.339298) | 0.013284 \/ 0.007607 (0.005677) | 0.555103 \/ 0.226044 (0.329059) | 5.583587 \/ 2.268929 (3.314658) | 2.608754 \/ 55.444624 (-52.835870) | 2.264079 \/ 6.876477 (-4.612398) | 2.272455 \/ 2.142072 (0.130382) | 0.820849 \/ 4.805227 (-3.984379) | 0.155063 \/ 6.500664 (-6.345601) | 0.069709 \/ 0.075469 (-0.005760) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.293285 \/ 1.841788 (-0.548503) | 14.181867 \/ 8.074308 (6.107559) | 13.021280 \/ 10.191392 (2.829888) | 0.130101 \/ 0.680424 (-0.550323) | 0.016461 \/ 0.534201 (-0.517740) | 0.383651 \/ 0.579283 (-0.195632) | 0.387353 \/ 0.434364 (-0.047011) | 0.443351 \/ 0.540337 (-0.096986) | 0.529448 \/ 1.386936 (-0.857488) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#05145d50b5bb1b7b42b76516cd6492d4868c46ba \"CML watermark\")\n","
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007513 \/ 0.011353 (-0.003840) | 0.005328 \/ 0.011008 (-0.005680) | 0.096937 \/ 0.038508 (0.058429) | 0.036230 \/ 0.023109 (0.013121) | 0.325808 \/ 0.275898 (0.049910) | 0.363601 \/ 0.323480 (0.040121) | 0.006130 \/ 0.007986 (-0.001855) | 0.004352 \/ 0.004328 (0.000023) | 0.073543 \/ 0.004250 (0.069293) | 0.054114 \/ 0.037052 (0.017062) | 0.328952 \/ 0.258489 (0.070463) | 0.366943 \/ 0.293841 (0.073102) | 0.035768 \/ 0.128546 (-0.092778) | 0.012505 \/ 0.075646 (-0.063142) | 0.332260 \/ 0.419271 (-0.087012) | 0.066673 \/ 0.043533 (0.023140) | 0.323866 \/ 0.255139 (0.068727) | 0.341311 \/ 0.283200 (0.058112) | 0.129898 \/ 0.141683 (-0.011785) | 1.456890 \/ 1.452155 (0.004735) | 1.546933 \/ 1.492716 (0.054217) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.299236 \/ 0.018006 (0.281229) | 0.496134 \/ 0.000490 (0.495645) | 0.004233 \/ 0.000200 (0.004033) | 0.000081 \/ 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.028089 \/ 0.037411 (-0.009322) | 0.104723 \/ 0.014526 (0.090197) | 0.121032 \/ 0.176557 (-0.055525) | 0.179916 \/ 0.737135 (-0.557220) | 0.126628 \/ 0.296338 (-0.169711) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.403497 \/ 0.215209 (0.188288) | 4.052481 \/ 2.077655 (1.974827) | 1.804419 \/ 1.504120 (0.300299) | 1.619833 \/ 1.541195 (0.078638) | 1.732438 \/ 1.468490 (0.263948) | 0.702474 \/ 4.584777 (-3.882303) | 
3.808973 \/ 3.745712 (0.063261) | 3.682764 \/ 5.269862 (-1.587098) | 1.919184 \/ 4.565676 (-2.646493) | 0.086638 \/ 0.424275 (-0.337637) | 0.012265 \/ 0.007607 (0.004658) | 0.501273 \/ 0.226044 (0.275229) | 5.010918 \/ 2.268929 (2.741989) | 2.278114 \/ 55.444624 (-53.166510) | 1.942266 \/ 6.876477 (-4.934211) | 2.101982 \/ 2.142072 (-0.040091) | 0.847622 \/ 4.805227 (-3.957606) | 0.172973 \/ 6.500664 (-6.327691) | 0.066884 \/ 0.075469 (-0.008586) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.187609 \/ 1.841788 (-0.654179) | 15.089485 \/ 8.074308 (7.015177) | 14.787398 \/ 10.191392 (4.596006) | 0.168254 \/ 0.680424 (-0.512170) | 0.018266 \/ 0.534201 (-0.515935) | 0.423204 \/ 0.579283 (-0.156079) | 0.435238 \/ 0.434364 (0.000874) | 0.512473 \/ 0.540337 (-0.027864) | 0.618091 \/ 1.386936 (-0.768845) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007249 \/ 0.011353 (-0.004104) | 0.005297 \/ 0.011008 (-0.005711) | 0.076428 \/ 0.038508 (0.037920) | 0.033565 \/ 0.023109 (0.010456) | 0.373756 \/ 0.275898 (0.097858) | 0.407405 \/ 0.323480 (0.083925) | 0.006100 \/ 0.007986 (-0.001886) | 0.006482 \/ 0.004328 (0.002153) | 0.075884 \/ 0.004250 (0.071633) | 0.055338 \/ 0.037052 (0.018286) | 0.378721 \/ 0.258489 (0.120232) | 0.427065 \/ 0.293841 (0.133224) | 0.036285 \/ 0.128546 (-0.092261) | 0.012460 \/ 0.075646 (-0.063186) | 0.087641 \/ 0.419271 (-0.331630) | 0.048199 \/ 0.043533 (0.004666) | 0.386785 \/ 0.255139 (0.131646) | 0.386702 \/ 0.283200 (0.103503) | 0.110087 \/ 0.141683 (-0.031596) | 1.511204 \/ 1.452155 (0.059050) | 1.585671 \/ 1.492716 (0.092954) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.313558 \/ 0.018006 (0.295552) | 0.496991 \/ 0.000490 (0.496501) | 0.001492 \/ 0.000200 (0.001292) | 0.000093 \/ 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.031814 \/ 0.037411 (-0.005597) | 0.113486 \/ 0.014526 (0.098960) | 0.125208 \/ 0.176557 (-0.051348) | 0.174469 \/ 0.737135 (-0.562666) | 0.131095 \/ 0.296338 (-0.165244) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.439282 \/ 0.215209 (0.224073) | 4.362286 \/ 2.077655 (2.284631) | 2.153271 \/ 1.504120 (0.649151) | 1.990482 \/ 1.541195 (0.449288) | 2.103322 \/ 1.468490 (0.634831) | 0.692522 \/ 4.584777 (-3.892254) | 
3.861931 \/ 3.745712 (0.116219) | 3.686294 \/ 5.269862 (-1.583567) | 1.734525 \/ 4.565676 (-2.831152) | 0.085057 \/ 0.424275 (-0.339218) | 0.012116 \/ 0.007607 (0.004509) | 0.547996 \/ 0.226044 (0.321952) | 5.513835 \/ 2.268929 (3.244906) | 2.723829 \/ 55.444624 (-52.720795) | 2.404715 \/ 6.876477 (-4.471761) | 2.514768 \/ 2.142072 (0.372696) | 0.834972 \/ 4.805227 (-3.970255) | 0.168261 \/ 6.500664 (-6.332403) | 0.066464 \/ 0.075469 (-0.009005) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.259923 \/ 1.841788 (-0.581865) | 15.646277 \/ 8.074308 (7.571969) | 13.097598 \/ 10.191392 (2.906206) | 0.187991 \/ 0.680424 (-0.492433) | 0.017358 \/ 0.534201 (-0.516843) | 0.427979 \/ 0.579283 (-0.151304) | 0.425747 \/ 0.434364 (-0.008617) | 0.501907 \/ 0.540337 (-0.038431) | 0.595106 \/ 1.386936 (-0.791830) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#db56f7f0d2f0b99af4da17d388c205152504c7d9 \"CML watermark\")\n","
"<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.009378 \/ 0.011353 (-0.001975) | 0.006434 \/ 0.011008 (-0.004574) | 0.120603 \/ 0.038508 (0.082095) | 0.042929 \/ 0.023109 (0.019820) | 0.366853 \/ 0.275898 (0.090955) | 0.436795 \/ 0.323480 (0.113315) | 0.007730 \/ 0.007986 (-0.000256) | 0.004842 \/ 0.004328 (0.000513) | 0.091058 \/ 0.004250 (0.086808) | 0.058256 \/ 0.037052 (0.021203) | 0.378692 \/ 0.258489 (0.120203) | 0.467384 \/ 0.293841 (0.173543) | 0.042948 \/ 0.128546 (-0.085598) | 0.015172 \/ 0.075646 (-0.060475) | 0.409225 \/ 0.419271 (-0.010046) | 0.083672 \/ 0.043533 (0.040140) | 0.390088 \/ 0.255139 (0.134949) | 0.406965 \/ 0.283200 (0.123765) | 0.142132 \/ 0.141683 (0.000449) | 1.765737 \/ 1.452155 (0.313582) | 1.895419 \/ 1.492716 (0.402703) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.244052 \/ 0.018006 (0.226046) | 0.553383 \/ 0.000490 (0.552893) | 0.006798 \/ 0.000200 (0.006598) | 0.000227 \/ 0.000054 (0.000173) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.032032 \/ 0.037411 (-0.005380) | 0.129990 \/ 0.014526 (0.115464) | 0.140338 \/ 0.176557 (-0.036219) | 0.212155 \/ 0.737135 (-0.524980) | 0.147395 \/ 0.296338 (-0.148943) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.478760 \/ 0.215209 (0.263551) | 4.751335 \/ 2.077655 (2.673680) | 2.164755 \/ 1.504120 (0.660635) | 1.944288 \/ 1.541195 (0.403094) | 2.077657 \/ 1.468490 (0.609167) | 0.818519 \/ 4.584777 (-3.766258) | 
4.689013 \/ 3.745712 (0.943301) | 2.484079 \/ 5.269862 (-2.785782) | 1.788632 \/ 4.565676 (-2.777044) | 0.100484 \/ 0.424275 (-0.323791) | 0.013838 \/ 0.007607 (0.006231) | 0.589650 \/ 0.226044 (0.363605) | 5.859461 \/ 2.268929 (3.590533) | 2.670025 \/ 55.444624 (-52.774599) | 2.688709 \/ 6.876477 (-4.187768) | 2.408060 \/ 2.142072 (0.265988) | 0.972107 \/ 4.805227 (-3.833120) | 0.194425 \/ 6.500664 (-6.306239) | 0.076077 \/ 0.075469 (0.000608) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.430150 \/ 1.841788 (-0.411638) | 17.710507 \/ 8.074308 (9.636199) | 16.210789 \/ 10.191392 (6.019397) | 0.163940 \/ 0.680424 (-0.516484) | 0.020295 \/ 0.534201 (-0.513906) | 0.472596 \/ 0.579283 (-0.106687) | 0.483107 \/ 0.434364 (0.048743) | 0.585269 \/ 0.540337 (0.044931) | 0.705526 \/ 1.386936 (-0.681410) |\n\n<\/details>\nPyArrow==latest\n\n
<details>\n<summary>
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.008864 \/ 0.011353 (-0.002489) | 0.006095 \/ 0.011008 (-0.004913) | 0.088702 \/ 0.038508 (0.050194) | 0.041596 \/ 0.023109 (0.018486) | 0.453515 \/ 0.275898 (0.177617) | 0.476217 \/ 0.323480 (0.152737) | 0.007574 \/ 0.007986 (-0.000412) | 0.004727 \/ 0.004328 (0.000398) | 0.087271 \/ 0.004250 (0.083021) | 0.059631 \/ 0.037052 (0.022578) | 0.449379 \/ 0.258489 (0.190890) | 0.494436 \/ 0.293841 (0.200595) | 0.043448 \/ 0.128546 (-0.085098) | 0.014580 \/ 0.075646 (-0.061067) | 0.103836 \/ 0.419271 (-0.315435) | 0.057537 \/ 0.043533 (0.014004) | 0.449359 \/ 0.255139 (0.194220) | 0.447577 \/ 0.283200 (0.164377) | 0.123600 \/ 0.141683 (-0.018083) | 1.748448 \/ 1.452155 (0.296294) | 1.902116 \/ 1.492716 (0.409399) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.237214 \/ 0.018006 (0.219207) | 0.497648 \/ 0.000490 (0.497158) | 0.003519 \/ 0.000200 (0.003319) | 0.000112 \/ 0.000054 (0.000058) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.034477 \/ 0.037411 (-0.002934) | 0.132627 \/ 0.014526 (0.118101) | 0.139721 \/ 0.176557 (-0.036836) | 0.195705 \/ 0.737135 (-0.541430) | 0.150762 \/ 0.296338 (-0.145577) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.521306 \/ 0.215209 (0.306097) | 5.184982 \/ 2.077655 (3.107328) | 2.503979 \/ 1.504120 (0.999859) | 2.301054 \/ 1.541195 (0.759860) | 2.352713 \/ 1.468490 (0.884222) | 0.819804 \/ 4.584777 (-3.764973) | 
4.584011 \/ 3.745712 (0.838299) | 2.497311 \/ 5.269862 (-2.772550) | 1.561262 \/ 4.565676 (-3.004414) | 0.101814 \/ 0.424275 (-0.322461) | 0.014078 \/ 0.007607 (0.006471) | 0.666564 \/ 0.226044 (0.440520) | 6.616379 \/ 2.268929 (4.347450) | 3.263892 \/ 55.444624 (-52.180732) | 2.891774 \/ 6.876477 (-3.984703) | 2.945260 \/ 2.142072 (0.803188) | 1.014379 \/ 4.805227 (-3.790848) | 0.201762 \/ 6.500664 (-6.298902) | 0.078012 \/ 0.075469 (0.002543) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.567808 \/ 1.841788 (-0.273980) | 19.096552 \/ 8.074308 (11.022244) | 15.522285 \/ 10.191392 (5.330893) | 0.226568 \/ 0.680424 (-0.453856) | 0.021078 \/ 0.534201 (-0.513123) | 0.501686 \/ 0.579283 (-0.077597) | 0.517575 \/ 0.434364 (0.083211) | 0.589685 \/ 0.540337 (0.049348) | 0.705053 \/ 1.386936 (-0.681883) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#db56f7f0d2f0b99af4da17d388c205152504c7d9 \"CML watermark\")\n"],"created_at":1683826197000,"updated_at":1684136353000,"closed_at":1683904488000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5845","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5845","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5845.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5845.patch","merged_at":1683904488000},"body":"Adds the `date_format` param introduced in Pandas 2.0 to the CSV reader and improves its type hints.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5845\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5845\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5844","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5844\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5844\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5844\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5844","id":1705907812,"node_id":"I_kwDODunzps5lrhZk","number":5844,"title":"TypeError: Couldn't cast array of type struct, evidenceAnnotate: list, highlighted_evidence: list>> to 
...","user":{"login":"chen-coding","id":54010030,"node_id":"MDQ6VXNlcjU0MDEwMDMw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/54010030?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/chen-coding","html_url":"https:\/\/github.com\/chen-coding","followers_url":"https:\/\/api.github.com\/users\/chen-coding\/followers","following_url":"https:\/\/api.github.com\/users\/chen-coding\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/chen-coding\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/chen-coding\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/chen-coding\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/chen-coding\/orgs","repos_url":"https:\/\/api.github.com\/users\/chen-coding\/repos","events_url":"https:\/\/api.github.com\/users\/chen-coding\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/chen-coding\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1683814501000,"updated_at":1683814501000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nTypeError: Couldn't cast array of type struct, evidenceAnnotate: list, highlighted_evidence: list>> to {'answer': {'unanswerable': Value(dtype='bool', id=None), 'answerType': Value(dtype='string', id=None), 'free_form_answer': Value(dtype='string', id=None), 'evidence': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'evidenceAnnotate': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'highlighted_evidence': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}, 'unanswerable': Value(dtype='bool', id=None), 'answerType': Value(dtype='string', id=None), 'free_form_answer': Value(dtype='string', id=None), 'evidence': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'evidenceAnnotate': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'highlighted_evidence': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}\r\n\r\nWhen I use _load_dataset()_ I get the error\r\n\r\n`from datasets import load_dataset\r\ndatafiles = {'train': '.\/data\/train.json', 'validation': '.\/data\/validation.json', 'test': '.\/data\/test.json'}\r\nraw_data = load_dataset(\"json\", data_files=datafiles, cache_dir=\".\/cache\")\r\n`\r\nDetailed error information is as follows\uff1a\r\n\r\nTraceback (most recent call last):\r\n File \"C:\/Users\/CHENJIALEI\/Desktop\/NLPCC2023\/NLPCC23_SciMRC-main\/test2.py\", line 9, in \r\n raw_data = load_dataset(\"json\", data_files=datafiles, cache_dir=\".\/cache\")\r\n File \"D:\\Environment\\anaconda3\\envs\\test\\lib\\site-packages\\datasets\\load.py\", line 1747, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"D:\\Environment\\anaconda3\\envs\\test\\lib\\site-packages\\datasets\\builder.py\", line 814, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"D:\\Environment\\anaconda3\\envs\\test\\lib\\site-packages\\datasets\\builder.py\", line 905, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"D:\\Environment\\anaconda3\\envs\\test\\lib\\site-packages\\datasets\\builder.py\", line 1521, in _prepare_split\r\n writer.write_table(table)\r\n File 
\"D:\\Environment\\anaconda3\\envs\\test\\lib\\site-packages\\datasets\\arrow_writer.py\", line 540, in write_table\r\n pa_table = table_cast(pa_table, self._schema)\r\n File \"D:\\Environment\\anaconda3\\envs\\test\\lib\\site-packages\\datasets\\table.py\", line 2069, in table_cast\r\n return cast_table_to_schema(table, schema)\r\n File \"D:\\Environment\\anaconda3\\envs\\test\\lib\\site-packages\\datasets\\table.py\", line 2031, in cast_table_to_schema\r\n arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]\r\n File \"D:\\Environment\\anaconda3\\envs\\test\\lib\\site-packages\\datasets\\table.py\", line 2031, in \r\n arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]\r\n File \"D:\\Environment\\anaconda3\\envs\\test\\lib\\site-packages\\datasets\\table.py\", line 1740, in wrapper\r\n return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])\r\n File \"D:\\Environment\\anaconda3\\envs\\test\\lib\\site-packages\\datasets\\table.py\", line 1740, in \r\n return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])\r\n File \"D:\\Environment\\anaconda3\\envs\\test\\lib\\site-packages\\datasets\\table.py\", line 1867, in cast_array_to_feature\r\n casted_values = _c(array.values, feature[0])\r\n File \"D:\\Environment\\anaconda3\\envs\\test\\lib\\site-packages\\datasets\\table.py\", line 1742, in wrapper\r\n return func(array, *args, **kwargs)\r\n File \"D:\\Environment\\anaconda3\\envs\\test\\lib\\site-packages\\datasets\\table.py\", line 1862, in cast_array_to_feature\r\n arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]\r\n File \"D:\\Environment\\anaconda3\\envs\\test\\lib\\site-packages\\datasets\\table.py\", line 1862, in \r\n arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]\r\n File \"D:\\Environment\\anaconda3\\envs\\test\\lib\\site-packages\\datasets\\table.py\", line 1742, in wrapper\r\n return func(array, *args, **kwargs)\r\n File \"D:\\Environment\\anaconda3\\envs\\test\\lib\\site-packages\\datasets\\table.py\", line 1867, in cast_array_to_feature\r\n casted_values = _c(array.values, feature[0])\r\n File \"D:\\Environment\\anaconda3\\envs\\test\\lib\\site-packages\\datasets\\table.py\", line 1742, in wrapper\r\n return func(array, *args, **kwargs)\r\n File \"D:\\Environment\\anaconda3\\envs\\test\\lib\\site-packages\\datasets\\table.py\", line 1913, in cast_array_to_feature\r\n raise TypeError(f\"Couldn't cast array of type\\n{array.type}\\nto\\n{feature}\")\r\n\r\nIt is successful when I load the data separately\r\n\r\n`raw_data = load_dataset(\"json\", data_files=\".\/data\/train.json\", cache_dir=\".\/cache\")`\r\n\r\n\n\n### Steps to reproduce the bug\n\n1.from datasets import load_dataset\r\n2.datafiles = {'train': '.\/data\/train.json', 'validation': '.\/data\/validation.json', 'test': '.\/data\/test.json'}\r\n3.raw_data = load_dataset(\"json\", data_files=datafiles, cache_dir=\".\/cache\")\n\n### Expected behavior\n\n Successfully load dataset\n\n### Environment info\n\ndatasets == 2.6.1\r\npyarrow == 8.0.0\r\npython == 
3.8\r\n\r\nplatform:windows11","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5844\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5844\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5841","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5841\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5841\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5841\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5841","id":1705286639,"node_id":"I_kwDODunzps5lpJvv","number":5841,"title":"Abusurdly slow on iteration","user":{"login":"fecet","id":41792945,"node_id":"MDQ6VXNlcjQxNzkyOTQ1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/41792945?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/fecet","html_url":"https:\/\/github.com\/fecet","followers_url":"https:\/\/api.github.com\/users\/fecet\/followers","following_url":"https:\/\/api.github.com\/users\/fecet\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/fecet\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/fecet\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/fecet\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/fecet\/orgs","repos_url":"https:\/\/api.github.com\/users\/fecet\/repos","events_url":"https:\/\/api.github.com\/users\/fecet\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/fecet\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! 
You can try to use the [Image](https:\/\/huggingface.co\/docs\/datasets\/v2.12.0\/en\/package_reference\/main_classes#datasets.Image) type which [decodes images on-the-fly](https:\/\/huggingface.co\/docs\/datasets\/v2.12.0\/en\/about_dataset_features#image-feature) into pytorch tensors :)\r\n\r\n```python\r\nds = Dataset.from_dict({\"tensor\":a}).with_format(\"torch\")\r\n%time sum(1 for _ in ds)\r\n# CPU times: user 5.04 s, sys: 96.5 ms, total: 5.14 s\r\n# Wall time: 5.14 s\r\n# 10000\r\n```\r\n\r\n```python\r\nfeatures = Features({\"tensor\": Image()})\r\nds = Dataset.from_dict({\"tensor\":a}, features=features).with_format(\"torch\")\r\n%time sum(1 for _ in ds)\r\n# CPU times: user 1.86 s, sys: 49 ms, total: 1.91 s\r\n# Wall time: 1.9 s\r\n# 10000\r\n```\r\n\r\n-> Speed x2.7\r\n\r\nAnd if you want to keep using arrays of integers, consider using the [Array2D](https:\/\/huggingface.co\/docs\/datasets\/v2.12.0\/en\/package_reference\/main_classes#datasets.Array2D) or [Array3D](https:\/\/huggingface.co\/docs\/datasets\/v2.12.0\/en\/package_reference\/main_classes#datasets.Array3D) types which are even faster (since it doesn't decode images):\r\n\r\n```python\r\nfeatures = Features({\"tensor\": Array2D(shape=(100, 224), dtype=\"float32\")})\r\nds = Dataset.from_dict({\"tensor\":a}, features=features).with_format(\"torch\")\r\n%time sum(1 for _ in ds)\r\n# CPU times: user 828 ms, sys: 68.4 ms, total: 896 ms\r\n# Wall time: 897 ms\r\n# 10000\r\n```\r\n\r\n-> Speed x5.7\r\n\r\nBatching also speeds up a lot\r\n\r\n```python\r\nfrom torch.utils.data import DataLoader\r\ndl = DataLoader(ds, batch_size=100)\r\n%time sum(1 for _ in dl)\r\n# CPU times: user 564 ms, sys: 83.5 ms, total: 648 ms\r\n# Wall time: 579 ms\r\n# 100\r\n```\r\n\r\n-> Speed x8.9\r\n\r\n```python\r\n%time sum(1 for _ in ds.iter(batch_size=100))\r\n# CPU times: user 119 ms, sys: 96.8 ms, total: 215 ms\r\n# Wall time: 117 ms\r\n# 100\r\n```\r\n\r\n-> Speed x46","Anyway, regarding the speed difference between numpy and pytorch, I think the issue is that we first convert numpy sub-arrays to pytorch and then consolidate into one tensor, while we should to the opposite. Indeed converting a numpy array to pytorch has a fix cost that seems to cause a slow down. 
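A quick way to see this fixed per-conversion cost is the micro-benchmark below (a hypothetical sketch, not the `datasets` internals; the `(100, 224)` float32 shape mirrors the reproduction in this thread, and the array count is reduced to keep memory modest):

```python
import time

import numpy as np
import torch

# 2,000 small float32 sub-arrays, shaped like the (100, 224) tensors in the repro.
arrs = [np.random.randn(100, 224).astype(np.float32) for _ in range(2_000)]

# Per-sub-array conversion: pays torch.from_numpy's call overhead once per array.
t0 = time.perf_counter()
many = torch.stack([torch.from_numpy(a) for a in arrs])
print(f"convert each, then stack: {time.perf_counter() - t0:.3f}s")

# Consolidate in numpy first, then convert to torch exactly once.
t0 = time.perf_counter()
once = torch.from_numpy(np.stack(arrs))
print(f"stack first, convert once: {time.perf_counter() - t0:.3f}s")

assert torch.equal(many, once)
```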
The current pipeline is\r\n\r\n```\r\narrow -> nested numpy arrays -> lists of torch tensors -> one torch tensor\r\n```\r\n\r\nand we should do\r\n\r\n```\r\narrow -> nested numpy arrays -> one numpy array -> one torch tensor\r\n```","I have a similar issue: iterating over a dataset takes 5s without applying any transform, but takes ~30s after applying a transform.\r\nHere is the minimum code to reproduce the problem\r\n\r\n```python\r\nimport numpy as np\r\nfrom datasets import Dataset, DatasetDict, load_dataset, Array3D, Image, Features\r\nfrom torch.utils.data import DataLoader\r\nfrom tqdm import tqdm\r\nimport torchvision \r\nfrom torchvision.transforms import ToTensor, Normalize\r\n\r\n\r\n#################################\r\n# Without transform\r\n#################################\r\n \r\ntrain_dataset = load_dataset(\r\n 'cifar100',\r\n split='train',\r\n use_auth_token=True,\r\n)\r\n\r\ntrain_dataset.set_format(type=\"numpy\", columns=[\"img\", \"fine_label\"])\r\n\r\ntrain_loader= DataLoader(\r\n train_dataset,\r\n batch_size=100,\r\n pin_memory=False,\r\n shuffle=True,\r\n num_workers=8,\r\n)\r\n\r\nfor batch in tqdm(train_loader, desc=\"Loading data, no transform\"):\r\n pass\r\n\r\n\r\n#################################\r\n# With transform\r\n#################################\r\n\r\ntransform_func = torchvision.transforms.Compose([\r\n ToTensor(), \r\n Normalize(mean=[0.485, 0.456, 0.406], std= [0.229, 0.224, 0.225]),] \r\n)\r\n \r\ntrain_dataset = train_dataset.map(\r\n desc=f\"Preprocessing samples\",\r\n function=lambda x: {\"img\": transform_func(x[\"img\"])},\r\n)\r\n\r\ntrain_dataset.set_format(type=\"numpy\", columns=[\"img\", \"fine_label\"])\r\n\r\n\r\ntrain_loader= DataLoader(\r\n train_dataset,\r\n batch_size=100,\r\n pin_memory=False,\r\n shuffle=True,\r\n num_workers=8,\r\n)\r\n\r\n\r\nfor batch in tqdm(train_loader, desc=\"Loading data after transform\"):\r\n pass \r\n```\r\n\r\nI have also tried converting the Image column to an Array3D\r\n```python\r\nimg_shape = train_dataset[0][\"img\"].shape\r\n\r\nfeatures = train_dataset.features.copy()\r\nfeatures[\"x\"] = Array3D(shape=img_shape, dtype=\"float32\")\r\n\r\ntrain_dataset = train_dataset.map(\r\n desc=f\"Preprocessing samples\",\r\n function=lambda x: {\"x\": np.array(x[\"img\"], dtype=np.uint8)},\r\n features=features,\r\n)\r\ntrain_dataset.cast_column(\"x\", Array3D(shape=img_shape, dtype=\"float32\"))\r\ntrain_dataset.set_format(type=\"numpy\", columns=[\"x\", \"fine_label\"])\r\n```\r\nbut to no avail. Any clue?","Thanks! I convert my dataset feature to Array3D and this speed became awesome!"],"created_at":1683792249000,"updated_at":1684165093000,"closed_at":1684165093000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nI am attempting to iterate through an image dataset, but I am encountering a significant slowdown in the iteration speed. In order to investigate this issue, I conducted the following experiment:\r\n\r\n\r\n```python\r\na=torch.randn(100,224)\r\na=torch.stack([a] * 10000)\r\na.shape\r\n\r\n# %%\r\nds=Dataset.from_dict({\"tensor\":a})\r\nfor i in tqdm(ds.with_format(\"numpy\")):\r\n pass\r\n\r\nfor i in tqdm(ds.with_format(\"torch\")):\r\n pass\r\n```\r\nI noticed that the dataset in numpy format performs significantly faster than the one in torch format. My hypothesis is that the dataset undergoes a transformation process of torch->python->numpy(torch) in the background, which might be causing the slowdown. 
Is there any way to expedite the process by bypassing such transformations?\r\n\r\nFurthermore, if I increase the size of a to an image shape, like:\r\n```python\r\na=torch.randn(3,224,224)\r\n```\r\nthe iteration speed becomes absurdly slow, around 100 iterations per second, whereas the speed with numpy format is approximately 250 iterations per second. This level of speed would be unacceptable for large image datasets, as it could take several hours just to iterate through a single epoch.\n\n### Steps to reproduce the bug\n\n ```python\r\na=torch.randn(100,224)\r\na=torch.stack([a] * 10000)\r\na.shape\r\n\r\n# %%\r\nds=Dataset.from_dict({\"tensor\":a})\r\nfor i in tqdm(ds.with_format(\"numpy\")):\r\n pass\r\n\r\nfor i in tqdm(ds.with_format(\"torch\")):\r\n pass\r\n```\n\n### Expected behavior\n\niteration faster\n\n### Environment info\n\n - `datasets` version: 2.11.0\r\n- Platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.10\r\n- Python version: 3.8.16\r\n- Huggingface_hub version: 0.13.4\r\n- PyArrow version: 11.0.0\r\n- Pandas version: 2.0.0","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5841\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5841\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5840","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5840\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5840\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5840\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5840","id":1705212085,"node_id":"I_kwDODunzps5lo3i1","number":5840,"title":"load model error.","user":{"login":"LanShanPi","id":58167546,"node_id":"MDQ6VXNlcjU4MTY3NTQ2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/58167546?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/LanShanPi","html_url":"https:\/\/github.com\/LanShanPi","followers_url":"https:\/\/api.github.com\/users\/LanShanPi\/followers","following_url":"https:\/\/api.github.com\/users\/LanShanPi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/LanShanPi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/LanShanPi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/LanShanPi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/LanShanPi\/orgs","repos_url":"https:\/\/api.github.com\/users\/LanShanPi\/repos","events_url":"https:\/\/api.github.com\/users\/LanShanPi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/LanShanPi\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Please report this in the `transformers` repo, as it's not related to `datasets`"],"created_at":1683789158000,"updated_at":1683899047000,"closed_at":1683899046000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nI had trained one model use deepspeed, when I load the final load I get the follow error:\r\nOSError: Can't load tokenizer for 
'\/XXX\/DeepSpeedExamples\/applications\/DeepSpeed-Chat\/output\/step3-models\/1.3b\/actor'. If you were trying to load it from 'https:\/\/huggingface.co\/models', make sure you don't have a local directory with the same name. Otherwise, make sure '\/home\/fm001\/hzl\/Project\/DeepSpeedExamples\/applications\/DeepSpeed-Chat\/output\/step3-models\/1.3b\/actor' is the correct path to a directory containing all relevant files for a BloomTokenizerFast tokenizer.\r\n\r\n\r\nmy load code is : python chat.py --path \/XXX\/DeepSpeedExamples\/applications\/DeepSpeed-Chat\/output\/step3-models\/1.3b\/actor\/\r\n\r\n\n\n### Steps to reproduce the bug\n\n\u3002\u3002\u3002\n\n### Expected behavior\n\n\u3002\u3002\u3002\n\n### Environment info\n\n\u3002\u3002\u3002","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5840\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5840\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5842","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5842\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5842\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5842\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5842","id":1705510602,"node_id":"I_kwDODunzps5lqAbK","number":5842,"title":"Remove columns in interable dataset","user":{"login":"surya-narayanan","id":17240858,"node_id":"MDQ6VXNlcjE3MjQwODU4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17240858?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/surya-narayanan","html_url":"https:\/\/github.com\/surya-narayanan","followers_url":"https:\/\/api.github.com\/users\/surya-narayanan\/followers","following_url":"https:\/\/api.github.com\/users\/surya-narayanan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/surya-narayanan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/surya-narayanan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/surya-narayanan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/surya-narayanan\/orgs","repos_url":"https:\/\/api.github.com\/users\/surya-narayanan\/repos","events_url":"https:\/\/api.github.com\/users\/surya-narayanan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/surya-narayanan\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Transferring this issue as it's related to the \ud83e\udd17 Datasets library ","Hi @surya-narayanan! 
Could you provide some code snippet?","This method has been recently added to the `IterableDataset`, so you need to update the `datasets`' installation (`pip install -U datasets`) to use it."],"created_at":1683776926000,"updated_at":1687365402000,"closed_at":1687365401000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Feature request\n\nRight now, remove_columns() produces a NotImplementedError for iterable style datasets\n\n### Motivation\n\nIt would be great to have the same functionality irrespective of whether one is using an iterable or a map-style dataset\n\n### Your contribution\n\nhope and courage.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5842\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5842\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5843","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5843\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5843\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5843\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5843","id":1705514551,"node_id":"I_kwDODunzps5lqBY3","number":5843,"title":"Can't add iterable datasets to a Dataset Dict.","user":{"login":"surya-narayanan","id":17240858,"node_id":"MDQ6VXNlcjE3MjQwODU4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17240858?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/surya-narayanan","html_url":"https:\/\/github.com\/surya-narayanan","followers_url":"https:\/\/api.github.com\/users\/surya-narayanan\/followers","following_url":"https:\/\/api.github.com\/users\/surya-narayanan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/surya-narayanan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/surya-narayanan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/surya-narayanan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/surya-narayanan\/orgs","repos_url":"https:\/\/api.github.com\/users\/surya-narayanan\/repos","events_url":"https:\/\/api.github.com\/users\/surya-narayanan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/surya-narayanan\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Transferring as this is relating to the \ud83e\udd17 Datasets library","You need to use `IterableDatasetDict` instead of `DatasetDict` for iterable datasets."],"created_at":1683770969000,"updated_at":1684990319000,"closed_at":1684990319000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### System Info\n\nstandard env\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [X] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE\/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nGet the following error:\r\n\r\nTypeError: 
Values in `DatasetDict` should be of type `Dataset` but got type ''\r\n\n\n### Expected behavior\n\nshould be able to add iterable datasets to a dataset dict","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5843\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5843\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5839","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5839\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5839\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5839\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5839","id":1704554718,"node_id":"I_kwDODunzps5lmXDe","number":5839,"title":"Make models\/functions optimized with `torch.compile` hashable","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or 
request"}],"state":"open","locked":false,"assignee":{"login":"mariosasko","id":47462742.0,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"assignees":[{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1683748928000,"updated_at":1683748928000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":null,"pull_request":null,"body":"As reported in https:\/\/github.com\/huggingface\/datasets\/issues\/5819, hashing functions\/transforms that reference a model, or a function, optimized with `torch.compile` currently fails due to them not being picklable (the concrete error can be found in the linked issue).\r\n\r\nThe solutions to consider:\r\n1. hashing\/pickling the original, uncompiled version of a compiled model\/function (attributes `_orig_mod`\/`_torchdynamo_orig_callable`) (less precise than the 2nd option as it ignores the other params of `torch.compute`)\r\n2. 
wait for https:\/\/github.com\/pytorch\/pytorch\/issues\/101107 to be resolved\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5839\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5839\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5838","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5838\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5838\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5838\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5838","id":1703210848,"node_id":"I_kwDODunzps5lhO9g","number":5838,"title":"Streaming support for `load_from_disk`","user":{"login":"Nilabhra","id":5437792,"node_id":"MDQ6VXNlcjU0Mzc3OTI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5437792?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Nilabhra","html_url":"https:\/\/github.com\/Nilabhra","followers_url":"https:\/\/api.github.com\/users\/Nilabhra\/followers","following_url":"https:\/\/api.github.com\/users\/Nilabhra\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Nilabhra\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Nilabhra\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Nilabhra\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Nilabhra\/orgs","repos_url":"https:\/\/api.github.com\/users\/Nilabhra\/repos","events_url":"https:\/\/api.github.com\/users\/Nilabhra\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Nilabhra\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["As the name says, `load_from_disk` load the data from your disk. If the data is hosted on S3, it is first downloaded locally and then loaded from your disk.\r\n\r\nThere is a discussion on streaming data from S3 here though: #5281 ","@lhoestq \r\nThanks for your comment. I have checked out the discussion before and attempted at replicating the mentioned changes in the main branch (#5580). What I found was that if a dataset is saved using `save_to_disk`, it cannot be read by `load_dataset`. The error message asks me to to use `load_from_disk` instead. What would be the correct way of saving the data in this scenario?","Using `push_to_hub` you can save the dataset on the HF Hub as parquet files, and reload it \/ stream it using `load_dataset` :)\r\n\r\nIf you want to save your dataset somewhere else you can use `.to_parquet` to get a parquet file. If your dataset is big it's usually recommended to shard it into multi parquet files (around 1GB each).","@lhoestq \r\nThanks for the explanation. Appreciate it. I'll try this out.","@lhoestq\r\nI tried the method you mentioned. 
This the current scenario I'm facing:\r\n\r\n- The parquet file can be read from disk and streaming can be enabled.\r\n- The parquet file can be read from `s3` (local MinIO).\r\n- When `streaming=True` is enabled for `s3`, I get the error mentioned below:\r\n\r\n```\r\nFile ~\/...\/lib\/python3.8\/site-packages\/s3fs\/core.py:502, in S3FileSystem.set_session(self, refresh, kwargs)\r\n 500 conf = AioConfig(**config_kwargs)\r\n 501 if self.session is None:\r\n--> 502 self.session = aiobotocore.session.AioSession(**self.kwargs)\r\n 504 for parameters in (config_kwargs, self.kwargs, init_kwargs, client_kwargs):\r\n 505 for option in (\"region_name\", \"endpoint_url\"):\r\n\r\nTypeError: __init__() got an unexpected keyword argument 'headers'\r\n```\r\n\r\nDoes this mean there is a bug in the main branch?","Streaming from S3 is still experimental, there might be a few bugs unfortunately.\r\n\r\nCan you share the full stack trace ?","@lhoestq \r\nSure, here you go:\r\n\r\n```python\r\nTypeError Traceback (most recent call last)\r\nCell In[8], line 1\r\n----> 1 dataset = load_dataset(\"parquet\", data_files=[\"s3:\/\/\/\/data-parquet\"], storage_options=fs.storage_options, streaming=True)\r\n\r\nFile ~\/...\/datasets\/src\/datasets\/load.py:1790, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)\r\n 1788 # Return iterable dataset in case of streaming\r\n 1789 if streaming:\r\n-> 1790 return builder_instance.as_streaming_dataset(split=split)\r\n 1792 # Some datasets are already processed on the HF google storage\r\n 1793 # Don't try downloading from Google storage for the packaged datasets as text, json, csv or pandas\r\n 1794 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES\r\n\r\nFile ~\/...\/datasets\/src\/datasets\/builder.py:1264, in DatasetBuilder.as_streaming_dataset(self, split, base_path)\r\n 1257 dl_manager = StreamingDownloadManager(\r\n 1258 base_path=base_path or self.base_path,\r\n 1259 download_config=DownloadConfig(use_auth_token=self.use_auth_token, storage_options=self.storage_options),\r\n 1260 dataset_name=self.name,\r\n 1261 data_dir=self.config.data_dir,\r\n 1262 )\r\n 1263 self._check_manual_download(dl_manager)\r\n-> 1264 splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}\r\n 1265 # By default, return all splits\r\n 1266 if split is None:\r\n\r\nFile ~\/...\/datasets\/src\/datasets\/packaged_modules\/parquet\/parquet.py:34, in Parquet._split_generators(self, dl_manager)\r\n 32 if not self.config.data_files:\r\n 33 raise ValueError(f\"At least one data file must be specified, but got data_files={self.config.data_files}\")\r\n---> 34 data_files = dl_manager.download_and_extract(self.config.data_files)\r\n 35 if isinstance(data_files, (str, list, tuple)):\r\n 36 files = data_files\r\n\r\nFile ~\/...\/datasets\/src\/datasets\/download\/streaming_download_manager.py:1087, in StreamingDownloadManager.download_and_extract(self, url_or_urls)\r\n 1069 def download_and_extract(self, url_or_urls):\r\n 1070 \"\"\"Prepare given `url_or_urls` for streaming (add extraction protocol).\r\n 1071 \r\n 1072 This is the lazy version of `DownloadManager.download_and_extract` for streaming.\r\n (...)\r\n 1085 url(s): (`str` or `list` or `dict`), URL(s) to stream data from matching the given input `url_or_urls`.\r\n 1086 \"\"\"\r\n-> 1087 return 
self.extract(self.download(url_or_urls))\r\n\r\nFile ~\/...\/datasets\/src\/datasets\/download\/streaming_download_manager.py:1039, in StreamingDownloadManager.extract(self, url_or_urls)\r\n 1020 def extract(self, url_or_urls):\r\n 1021 \"\"\"Add extraction protocol for given url(s) for streaming.\r\n 1022 \r\n 1023 This is the lazy version of `DownloadManager.extract` for streaming.\r\n (...)\r\n 1037 ```\r\n 1038 \"\"\"\r\n-> 1039 urlpaths = map_nested(self._extract, url_or_urls, map_tuple=True)\r\n 1040 return urlpaths\r\n\r\nFile ~\/...\/datasets\/src\/datasets\/utils\/py_utils.py:443, in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, parallel_min_length, types, disable_tqdm, desc)\r\n 441 num_proc = 1\r\n 442 if num_proc <= 1 or len(iterable) < parallel_min_length:\r\n--> 443 mapped = [\r\n 444 _single_map_nested((function, obj, types, None, True, None))\r\n 445 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)\r\n 446 ]\r\n 447 else:\r\n 448 num_proc = num_proc if num_proc <= len(iterable) else len(iterable)\r\n\r\nFile ~\/...\/datasets\/src\/datasets\/utils\/py_utils.py:444, in (.0)\r\n 441 num_proc = 1\r\n 442 if num_proc <= 1 or len(iterable) < parallel_min_length:\r\n 443 mapped = [\r\n--> 444 _single_map_nested((function, obj, types, None, True, None))\r\n 445 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)\r\n 446 ]\r\n 447 else:\r\n 448 num_proc = num_proc if num_proc <= len(iterable) else len(iterable)\r\n\r\nFile ~\/...\/datasets\/src\/datasets\/utils\/py_utils.py:363, in _single_map_nested(args)\r\n 361 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}\r\n 362 else:\r\n--> 363 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]\r\n 364 if isinstance(data_struct, list):\r\n 365 return mapped\r\n\r\nFile ~\/...\/datasets\/src\/datasets\/utils\/py_utils.py:363, in (.0)\r\n 361 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}\r\n 362 else:\r\n--> 363 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]\r\n 364 if isinstance(data_struct, list):\r\n 365 return mapped\r\n\r\nFile ~\/...\/datasets\/src\/datasets\/utils\/py_utils.py:346, in _single_map_nested(args)\r\n 344 # Singleton first to spare some computation\r\n 345 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):\r\n--> 346 return function(data_struct)\r\n 348 # Reduce logging to keep things readable in multiprocessing with tqdm\r\n 349 if rank is not None and logging.get_verbosity() < logging.WARNING:\r\n\r\nFile ~\/...\/datasets\/src\/datasets\/download\/streaming_download_manager.py:1044, in StreamingDownloadManager._extract(self, urlpath)\r\n 1042 def _extract(self, urlpath: str) -> str:\r\n 1043 urlpath = str(urlpath)\r\n-> 1044 protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)\r\n 1045 # get inner file: zip:\/\/train-00000.json.gz::https:\/\/foo.bar\/data.zip -> zip:\/\/train-00000.json.gz\r\n 1046 path = urlpath.split(\"::\")[0]\r\n\r\nFile ~\/...\/datasets\/src\/datasets\/download\/streaming_download_manager.py:433, in _get_extraction_protocol(urlpath, use_auth_token)\r\n 431 else:\r\n 432 urlpath, kwargs = urlpath, {}\r\n--> 433 with fsspec.open(urlpath, **kwargs) as f:\r\n 434 return _get_extraction_protocol_with_magic_number(f)\r\n\r\nFile ~\/...\/lib\/python3.8\/site-packages\/fsspec\/core.py:102, in 
OpenFile.__enter__(self)\r\n 99 def __enter__(self):\r\n 100 mode = self.mode.replace(\"t\", \"\").replace(\"b\", \"\") + \"b\"\r\n--> 102 f = self.fs.open(self.path, mode=mode)\r\n 104 self.fobjects = [f]\r\n 106 if self.compression is not None:\r\n\r\nFile ~\/...\/lib\/python3.8\/site-packages\/fsspec\/spec.py:1199, in AbstractFileSystem.open(self, path, mode, block_size, cache_options, compression, **kwargs)\r\n 1197 else:\r\n 1198 ac = kwargs.pop(\"autocommit\", not self._intrans)\r\n-> 1199 f = self._open(\r\n 1200 path,\r\n 1201 mode=mode,\r\n 1202 block_size=block_size,\r\n 1203 autocommit=ac,\r\n 1204 cache_options=cache_options,\r\n 1205 **kwargs,\r\n 1206 )\r\n 1207 if compression is not None:\r\n 1208 from fsspec.compression import compr\r\n\r\nFile ~\/...\/lib\/python3.8\/site-packages\/s3fs\/core.py:659, in S3FileSystem._open(self, path, mode, block_size, acl, version_id, fill_cache, cache_type, autocommit, requester_pays, cache_options, **kwargs)\r\n 656 if cache_type is None:\r\n 657 cache_type = self.default_cache_type\r\n--> 659 return S3File(\r\n 660 self,\r\n 661 path,\r\n 662 mode,\r\n 663 block_size=block_size,\r\n 664 acl=acl,\r\n 665 version_id=version_id,\r\n 666 fill_cache=fill_cache,\r\n 667 s3_additional_kwargs=kw,\r\n 668 cache_type=cache_type,\r\n 669 autocommit=autocommit,\r\n 670 requester_pays=requester_pays,\r\n 671 cache_options=cache_options,\r\n 672 )\r\n\r\nFile ~\/...\/lib\/python3.8\/site-packages\/s3fs\/core.py:2043, in S3File.__init__(self, s3, path, mode, block_size, acl, version_id, fill_cache, s3_additional_kwargs, autocommit, cache_type, requester_pays, cache_options)\r\n 2041 self.details = s3.info(path)\r\n 2042 self.version_id = self.details.get(\"VersionId\")\r\n-> 2043 super().__init__(\r\n 2044 s3,\r\n 2045 path,\r\n 2046 mode,\r\n 2047 block_size,\r\n 2048 autocommit=autocommit,\r\n 2049 cache_type=cache_type,\r\n 2050 cache_options=cache_options,\r\n 2051 )\r\n 2052 self.s3 = self.fs # compatibility\r\n 2054 # when not using autocommit we want to have transactional state to manage\r\n\r\nFile ~\/...\/lib\/python3.8\/site-packages\/fsspec\/spec.py:1555, in AbstractBufferedFile.__init__(self, fs, path, mode, block_size, autocommit, cache_type, cache_options, size, **kwargs)\r\n 1553 self.size = size\r\n 1554 else:\r\n-> 1555 self.size = self.details[\"size\"]\r\n 1556 self.cache = caches[cache_type](\r\n 1557 self.blocksize, self._fetch_range, self.size, **cache_options\r\n 1558 )\r\n 1559 else:\r\n\r\nFile ~\/...\/lib\/python3.8\/site-packages\/fsspec\/spec.py:1568, in AbstractBufferedFile.details(self)\r\n 1565 @property\r\n 1566 def details(self):\r\n 1567 if self._details is None:\r\n-> 1568 self._details = self.fs.info(self.path)\r\n 1569 return self._details\r\n\r\nFile ~\/...\/lib\/python3.8\/site-packages\/fsspec\/asyn.py:115, in sync_wrapper..wrapper(*args, **kwargs)\r\n 112 @functools.wraps(func)\r\n 113 def wrapper(*args, **kwargs):\r\n 114 self = obj or args[0]\r\n--> 115 return sync(self.loop, func, *args, **kwargs)\r\n\r\nFile ~\/...\/lib\/python3.8\/site-packages\/fsspec\/asyn.py:100, in sync(loop, func, timeout, *args, **kwargs)\r\n 98 raise FSTimeoutError from return_result\r\n 99 elif isinstance(return_result, BaseException):\r\n--> 100 raise return_result\r\n 101 else:\r\n 102 return return_result\r\n\r\nFile ~\/...\/lib\/python3.8\/site-packages\/fsspec\/asyn.py:55, in _runner(event, coro, result, timeout)\r\n 53 coro = asyncio.wait_for(coro, timeout=timeout)\r\n 54 try:\r\n---> 55 result[0] = await coro\r\n 56 except 
Exception as ex:\r\n 57 result[0] = ex\r\n\r\nFile ~\/...\/lib\/python3.8\/site-packages\/s3fs\/core.py:1248, in S3FileSystem._info(self, path, bucket, key, refresh, version_id)\r\n 1246 if key:\r\n 1247 try:\r\n-> 1248 out = await self._call_s3(\r\n 1249 \"head_object\",\r\n 1250 self.kwargs,\r\n 1251 Bucket=bucket,\r\n 1252 Key=key,\r\n 1253 **version_id_kw(version_id),\r\n 1254 **self.req_kw,\r\n 1255 )\r\n 1256 return {\r\n 1257 \"ETag\": out.get(\"ETag\", \"\"),\r\n 1258 \"LastModified\": out[\"LastModified\"],\r\n (...)\r\n 1264 \"ContentType\": out.get(\"ContentType\"),\r\n 1265 }\r\n 1266 except FileNotFoundError:\r\n\r\nFile ~\/...\/lib\/python3.8\/site-packages\/s3fs\/core.py:341, in S3FileSystem._call_s3(self, method, *akwarglist, **kwargs)\r\n 340 async def _call_s3(self, method, *akwarglist, **kwargs):\r\n--> 341 await self.set_session()\r\n 342 s3 = await self.get_s3(kwargs.get(\"Bucket\"))\r\n 343 method = getattr(s3, method)\r\n\r\nFile ~\/...\/lib\/python3.8\/site-packages\/s3fs\/core.py:502, in S3FileSystem.set_session(self, refresh, kwargs)\r\n 500 conf = AioConfig(**config_kwargs)\r\n 501 if self.session is None:\r\n--> 502 self.session = aiobotocore.session.AioSession(**self.kwargs)\r\n 504 for parameters in (config_kwargs, self.kwargs, init_kwargs, client_kwargs):\r\n 505 for option in (\"region_name\", \"endpoint_url\"):\r\n\r\nTypeError: __init__() got an unexpected keyword argument 'headers'\r\n```","Is `\"data-parquet\"` a file ? In `data_files` you should pass the paths to the parquet files (not to a directory). Glob patterns are not supported yet for S3 URLs.\r\n\r\nThe bug seems to happen because your provided data file has no extension. Because of that it tries to infer it from the file content, but fails because `_get_extraction_protocol` doesn't support S3 URLs yet.\r\n\r\n","@lhoestq \r\nThank you for your answer. Saving the file with `.parquet` extension solved the issue! This is really great! Really appreciate all the help! \r\n\r\nLet me know if I should close the issue or feel free to close it if you want.","Cool ! I'm glad it worked out :)\r\n\r\nSure feel free to close the issue, since the original question about streaming with load_from_disk has been answered anyway"],"created_at":1683699922000,"updated_at":1683884265000,"closed_at":1683884265000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Feature request\n\nSupport for streaming datasets stored in object stores in `load_from_disk`. \n\n### Motivation\n\nThe `load_from_disk` function supports fetching datasets stored in object stores such as `s3`. 
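For reference, here is a minimal sketch of the pattern that resolved the thread above — streaming a parquet file from a MinIO/S3 bucket. The bucket name, credentials, and endpoint are placeholders, `s3fs` must be installed, and the object key needs a `.parquet` extension so the extraction protocol can be inferred:

```python
import s3fs
from datasets import load_dataset

# Placeholder credentials and endpoint for a local MinIO deployment.
fs = s3fs.S3FileSystem(
    key="minio-access-key",
    secret="minio-secret-key",
    client_kwargs={"endpoint_url": "http://localhost:9000"},
)

# The key ends in .parquet so the format can be inferred while streaming.
ds = load_dataset(
    "parquet",
    data_files=["s3://my-bucket/data/train.parquet"],
    storage_options=fs.storage_options,
    streaming=True,
)

for example in ds["train"].take(5):  # ds is an IterableDatasetDict
    print(example)
```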
In many cases, the datasets that are stored in object stores are very large and being able to stream the data from the buckets becomes essential.\n\n### Your contribution\n\nI'd be happy to contribute this feature if I could get the guidance on how to do so.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5838\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5838\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5837","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5837\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5837\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5837\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5837","id":1703019816,"node_id":"I_kwDODunzps5lggUo","number":5837,"title":"Use DeepSpeed load myself \" .csv \" dataset.","user":{"login":"LanShanPi","id":58167546,"node_id":"MDQ6VXNlcjU4MTY3NTQ2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/58167546?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/LanShanPi","html_url":"https:\/\/github.com\/LanShanPi","followers_url":"https:\/\/api.github.com\/users\/LanShanPi\/followers","following_url":"https:\/\/api.github.com\/users\/LanShanPi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/LanShanPi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/LanShanPi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/LanShanPi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/LanShanPi\/orgs","repos_url":"https:\/\/api.github.com\/users\/LanShanPi\/repos","events_url":"https:\/\/api.github.com\/users\/LanShanPi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/LanShanPi\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! 
Doing `load_dataset(\"path\/to\/data.csv\")` is not supported yet, but you can do\r\n\r\n```python\r\nds = load_dataset(\"csv\", data_files=[\"path\/to\/data.csv\"])\r\n```","@lhoestq thank you.","The other question: \r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/home\/fm001\/.conda\/envs\/hzl\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 1767, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"\/home\/fm001\/.conda\/envs\/hzl\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 1498, in load_dataset_builder\r\n dataset_module = dataset_module_factory(\r\n File \"\/home\/fm001\/.conda\/envs\/hzl\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 1127, in dataset_module_factory\r\n return PackagedDatasetModuleFactory(\r\n File \"\/home\/fm001\/.conda\/envs\/hzl\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 708, in get_module\r\n data_files = DataFilesDict.from_local_or_remote(\r\n File \"\/home\/fm001\/.conda\/envs\/hzl\/lib\/python3.8\/site-packages\/datasets\/data_files.py\", line 796, in from_local_or_remote\r\n DataFilesList.from_local_or_remote(\r\n File \"\/home\/fm001\/.conda\/envs\/hzl\/lib\/python3.8\/site-packages\/datasets\/data_files.py\", line 764, in from_local_or_remote\r\n data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)\r\n File \"\/home\/fm001\/.conda\/envs\/hzl\/lib\/python3.8\/site-packages\/datasets\/data_files.py\", line 362, in resolve_patterns_locally_or_by_urls\r\n for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):\r\n File \"\/home\/fm001\/.conda\/envs\/hzl\/lib\/python3.8\/site-packages\/datasets\/data_files.py\", line 306, in _resolve_single_pattern_locally\r\n raise FileNotFoundError(error_msg)\r\nFileNotFoundError: Unable to find '\/home\/fm001\/hzl\/Data\/qa\/' at \/\r\n>>> mydata = load_dataset(\"\/home\/fm001\/hzl\/Data\/qa\/\")\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/home\/fm001\/.conda\/envs\/hzl\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 1767, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"\/home\/fm001\/.conda\/envs\/hzl\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 1508, in load_dataset_builder\r\n builder_cls = import_main_class(dataset_module.module_path)\r\n File \"\/home\/fm001\/.conda\/envs\/hzl\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 115, in import_main_class\r\n module = importlib.import_module(module_path)\r\n File \"\/home\/fm001\/.conda\/envs\/hzl\/lib\/python3.8\/importlib\/__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"\", line 1014, in _gcd_import\r\n File \"\", line 991, in _find_and_load\r\n File \"\", line 975, in _find_and_load_unlocked\r\n File \"\", line 671, in _load_unlocked\r\n File \"\", line 783, in exec_module\r\n File \"\", line 219, in _call_with_frames_removed\r\n File \"\/home\/fm001\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/qa\/b8b9f481eff9d17b769b4b50f30a51da32b47c94d1af4d2bdffb9fc2c589513a\/qa.py\", line 2, in \r\n mydata = load_dataset(\"\/home\/fm001\/hzl\/Data\/qa\/\")\r\n File \"\/home\/fm001\/.conda\/envs\/hzl\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 1767, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"\/home\/fm001\/.conda\/envs\/hzl\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 1524, in 
load_dataset_builder\r\n builder_instance: DatasetBuilder = builder_cls(\r\nTypeError: 'NoneType' object is not callable\r\n\r\nAnd I follow the setting with https:\/\/huggingface.co\/docs\/datasets\/dataset_script"],"created_at":1683686368000,"updated_at":1684122696000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nWhen I use DeepSpeed train a model with my own \" XXX.csv\" dataset I got the follow question:\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/home\/fm001\/.conda\/envs\/hzl\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 1767, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"\/home\/fm001\/.conda\/envs\/hzl\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 1498, in load_dataset_builder\r\n dataset_module = dataset_module_factory(\r\n File \"\/home\/fm001\/.conda\/envs\/hzl\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 1217, in dataset_module_factory\r\n raise FileNotFoundError(\r\nFileNotFoundError: Couldn't find a dataset script at \/home\/fm001\/hzl\/Data\/qa.csv\/qa.csv.py or any data file in the same directory.\r\n\r\n\r\n\n\n### Steps to reproduce the bug\n\nmy code is :\r\nfrom datasets import load_dataset\r\nmydata = load_dataset(\"\/home\/fm001\/hzl\/Data\/qa.csv\")\n\n### Expected behavior\n\n\u3002\u3002\u3002\n\n### Environment info\n\n\u3002\u3002\u3002","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5837\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5837\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5836","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5836\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5836\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5836\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5836","id":1702773316,"node_id":"PR_kwDODunzps5QIgzu","number":5836,"title":"[docs] Custom decoding transforms","user":{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this 
PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_5836). All of your documentation changes will be reflected on that endpoint.","The error seems unrelated to the changes, so feel free to merge.","
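As a rough sketch of the pattern this PR documents (the dataset name, column, and the downsampling step are illustrative assumptions, not the exact wording added to the docs): disable automatic decoding with `Image(decode=False)`, then decode manually inside a format transform.

```python
from io import BytesIO

from PIL import Image as PILImage
from datasets import Image, load_dataset

ds = load_dataset("beans", split="train")          # assumed example image dataset
ds = ds.cast_column("image", Image(decode=False))  # column now yields {"bytes", "path"} dicts

def decode_and_downsample(batch):
    # Custom decoding: open the raw bytes (or the file path) and halve the resolution.
    images = []
    for item in batch["image"]:
        pil = PILImage.open(BytesIO(item["bytes"])) if item["bytes"] else PILImage.open(item["path"])
        images.append(pil.reduce(2))
    batch["image"] = images
    return batch

ds.set_transform(decode_and_downsample)
print(ds[0]["image"])  # decoded on access, at half resolution
```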
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006562 \/ 0.011353 (-0.004791) | 0.004568 \/ 0.011008 (-0.006440) | 0.098151 \/ 0.038508 (0.059643) | 0.028117 \/ 0.023109 (0.005008) | 0.305442 \/ 0.275898 (0.029544) | 0.338288 \/ 0.323480 (0.014808) | 0.005012 \/ 0.007986 (-0.002973) | 0.003415 \/ 0.004328 (-0.000913) | 0.075022 \/ 0.004250 (0.070771) | 0.036869 \/ 0.037052 (-0.000183) | 0.301427 \/ 0.258489 (0.042937) | 0.348485 \/ 0.293841 (0.054644) | 0.030761 \/ 0.128546 (-0.097785) | 0.011461 \/ 0.075646 (-0.064185) | 0.321987 \/ 0.419271 (-0.097285) | 0.042885 \/ 0.043533 (-0.000648) | 0.300691 \/ 0.255139 (0.045552) | 0.333208 \/ 0.283200 (0.050008) | 0.090203 \/ 0.141683 (-0.051480) | 1.459744 \/ 1.452155 (0.007590) | 1.522960 \/ 1.492716 (0.030243) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.213219 \/ 0.018006 (0.195213) | 0.408118 \/ 0.000490 (0.407629) | 0.003716 \/ 0.000200 (0.003516) | 0.000077 \/ 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.023060 \/ 0.037411 (-0.014351) | 0.097423 \/ 0.014526 (0.082897) | 0.103988 \/ 0.176557 (-0.072568) | 0.162793 \/ 0.737135 (-0.574343) | 0.108282 \/ 0.296338 (-0.188056) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.431628 \/ 0.215209 (0.216419) | 4.300881 \/ 2.077655 (2.223226) | 2.058853 \/ 1.504120 (0.554733) | 1.897910 \/ 1.541195 (0.356715) | 1.991723 \/ 1.468490 (0.523233) | 0.699686 \/ 4.584777 (-3.885091) | 
3.395004 \/ 3.745712 (-0.350708) | 1.841613 \/ 5.269862 (-3.428248) | 1.152347 \/ 4.565676 (-3.413330) | 0.082517 \/ 0.424275 (-0.341758) | 0.012323 \/ 0.007607 (0.004715) | 0.535812 \/ 0.226044 (0.309767) | 5.374103 \/ 2.268929 (3.105174) | 2.429662 \/ 55.444624 (-53.014962) | 2.097199 \/ 6.876477 (-4.779277) | 2.172625 \/ 2.142072 (0.030552) | 0.810156 \/ 4.805227 (-3.995071) | 0.151629 \/ 6.500664 (-6.349035) | 0.066528 \/ 0.075469 (-0.008941) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.220667 \/ 1.841788 (-0.621121) | 13.696976 \/ 8.074308 (5.622668) | 14.042916 \/ 10.191392 (3.851524) | 0.129626 \/ 0.680424 (-0.550798) | 0.016593 \/ 0.534201 (-0.517607) | 0.383747 \/ 0.579283 (-0.195536) | 0.386872 \/ 0.434364 (-0.047492) | 0.456524 \/ 0.540337 (-0.083813) | 0.545033 \/ 1.386936 (-0.841903) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006361 \/ 0.011353 (-0.004992) | 0.004516 \/ 0.011008 (-0.006493) | 0.077155 \/ 0.038508 (0.038647) | 0.027239 \/ 0.023109 (0.004130) | 0.359892 \/ 0.275898 (0.083994) | 0.391994 \/ 0.323480 (0.068514) | 0.004950 \/ 0.007986 (-0.003036) | 0.003379 \/ 0.004328 (-0.000949) | 0.077057 \/ 0.004250 (0.072806) | 0.039562 \/ 0.037052 (0.002509) | 0.364244 \/ 0.258489 (0.105755) | 0.416033 \/ 0.293841 (0.122192) | 0.031049 \/ 0.128546 (-0.097497) | 0.011479 \/ 0.075646 (-0.064167) | 0.086479 \/ 0.419271 (-0.332793) | 0.039381 \/ 0.043533 (-0.004151) | 0.372143 \/ 0.255139 (0.117004) | 0.388569 \/ 0.283200 (0.105369) | 0.090954 \/ 0.141683 (-0.050728) | 1.540957 \/ 1.452155 (0.088802) | 1.596841 \/ 1.492716 (0.104125) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.221130 \/ 0.018006 (0.203123) | 0.403728 \/ 0.000490 (0.403238) | 0.003172 \/ 0.000200 (0.002972) | 0.000078 \/ 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.024963 \/ 0.037411 (-0.012449) | 0.101065 \/ 0.014526 (0.086539) | 0.110846 \/ 0.176557 (-0.065710) | 0.158578 \/ 0.737135 (-0.578557) | 0.112235 \/ 0.296338 (-0.184104) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.457320 \/ 0.215209 (0.242111) | 4.548094 \/ 2.077655 (2.470439) | 2.175376 \/ 1.504120 (0.671256) | 1.964755 \/ 1.541195 (0.423561) | 2.008128 \/ 1.468490 (0.539638) | 0.702448 \/ 4.584777 (-3.882329) | 
3.437595 \/ 3.745712 (-0.308117) | 3.009871 \/ 5.269862 (-2.259990) | 1.558181 \/ 4.565676 (-3.007496) | 0.082568 \/ 0.424275 (-0.341707) | 0.012371 \/ 0.007607 (0.004764) | 0.550688 \/ 0.226044 (0.324644) | 5.534210 \/ 2.268929 (3.265282) | 2.649605 \/ 55.444624 (-52.795020) | 2.317293 \/ 6.876477 (-4.559184) | 2.351525 \/ 2.142072 (0.209453) | 0.808971 \/ 4.805227 (-3.996256) | 0.152737 \/ 6.500664 (-6.347927) | 0.068416 \/ 0.075469 (-0.007053) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.340219 \/ 1.841788 (-0.501569) | 13.903388 \/ 8.074308 (5.829080) | 13.063477 \/ 10.191392 (2.872085) | 0.130216 \/ 0.680424 (-0.550208) | 0.016522 \/ 0.534201 (-0.517679) | 0.398946 \/ 0.579283 (-0.180337) | 0.382450 \/ 0.434364 (-0.051914) | 0.491007 \/ 0.540337 (-0.049330) | 0.577747 \/ 1.386936 (-0.809189) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#15c37ed142e4fbcb8c00ae62d4c71c84ce41959a \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007812 \/ 0.011353 (-0.003541) | 0.005563 \/ 0.011008 (-0.005446) | 0.099372 \/ 0.038508 (0.060864) | 0.035629 \/ 0.023109 (0.012520) | 0.301457 \/ 0.275898 (0.025559) | 0.339136 \/ 0.323480 (0.015656) | 0.006152 \/ 0.007986 (-0.001834) | 0.005843 \/ 0.004328 (0.001515) | 0.075280 \/ 0.004250 (0.071030) | 0.052789 \/ 0.037052 (0.015736) | 0.301805 \/ 0.258489 (0.043316) | 0.347918 \/ 0.293841 (0.054078) | 0.036182 \/ 0.128546 (-0.092364) | 0.012655 \/ 0.075646 (-0.062991) | 0.334428 \/ 0.419271 (-0.084844) | 0.062746 \/ 0.043533 (0.019213) | 0.296932 \/ 0.255139 (0.041793) | 0.314115 \/ 0.283200 (0.030916) | 0.121291 \/ 0.141683 (-0.020392) | 1.453252 \/ 1.452155 (0.001097) | 1.564714 \/ 1.492716 (0.071997) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.243810 \/ 0.018006 (0.225804) | 0.547129 \/ 0.000490 (0.546640) | 0.004666 \/ 0.000200 (0.004466) | 0.000089 \/ 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.028214 \/ 0.037411 (-0.009197) | 0.108878 \/ 0.014526 (0.094352) | 0.122313 \/ 0.176557 (-0.054243) | 0.182412 \/ 0.737135 (-0.554723) | 0.127014 \/ 0.296338 (-0.169324) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.423946 \/ 0.215209 (0.208737) | 4.207112 \/ 2.077655 (2.129457) | 2.048658 \/ 1.504120 (0.544538) | 1.843593 \/ 1.541195 (0.302398) | 1.952426 \/ 1.468490 (0.483936) | 0.712098 \/ 4.584777 (-3.872679) | 
3.824971 \/ 3.745712 (0.079258) | 3.507141 \/ 5.269862 (-1.762721) | 1.868866 \/ 4.565676 (-2.696810) | 0.087895 \/ 0.424275 (-0.336380) | 0.012783 \/ 0.007607 (0.005176) | 0.524087 \/ 0.226044 (0.298042) | 5.246498 \/ 2.268929 (2.977570) | 2.495944 \/ 55.444624 (-52.948680) | 2.126779 \/ 6.876477 (-4.749698) | 2.315545 \/ 2.142072 (0.173472) | 0.859546 \/ 4.805227 (-3.945681) | 0.173457 \/ 6.500664 (-6.327208) | 0.067483 \/ 0.075469 (-0.007986) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.173851 \/ 1.841788 (-0.667937) | 15.091913 \/ 8.074308 (7.017605) | 14.640035 \/ 10.191392 (4.448643) | 0.168498 \/ 0.680424 (-0.511926) | 0.017513 \/ 0.534201 (-0.516688) | 0.425770 \/ 0.579283 (-0.153513) | 0.434248 \/ 0.434364 (-0.000116) | 0.504204 \/ 0.540337 (-0.036134) | 0.616885 \/ 1.386936 (-0.770051) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007775 \/ 0.011353 (-0.003578) | 0.005153 \/ 0.011008 (-0.005855) | 0.075461 \/ 0.038508 (0.036953) | 0.034994 \/ 0.023109 (0.011885) | 0.372389 \/ 0.275898 (0.096491) | 0.397911 \/ 0.323480 (0.074431) | 0.006572 \/ 0.007986 (-0.001413) | 0.005549 \/ 0.004328 (0.001220) | 0.075101 \/ 0.004250 (0.070851) | 0.054014 \/ 0.037052 (0.016962) | 0.368964 \/ 0.258489 (0.110475) | 0.425353 \/ 0.293841 (0.131512) | 0.035546 \/ 0.128546 (-0.093001) | 0.012707 \/ 0.075646 (-0.062939) | 0.087418 \/ 0.419271 (-0.331853) | 0.046425 \/ 0.043533 (0.002893) | 0.363982 \/ 0.255139 (0.108843) | 0.376421 \/ 0.283200 (0.093221) | 0.105369 \/ 0.141683 (-0.036314) | 1.494408 \/ 1.452155 (0.042253) | 1.596783 \/ 1.492716 (0.104067) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.258780 \/ 0.018006 (0.240773) | 0.533373 \/ 0.000490 (0.532883) | 0.000432 \/ 0.000200 (0.000232) | 0.000058 \/ 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030687 \/ 0.037411 (-0.006725) | 0.110231 \/ 0.014526 (0.095705) | 0.123738 \/ 0.176557 (-0.052819) | 0.171999 \/ 0.737135 (-0.565137) | 0.127673 \/ 0.296338 (-0.168665) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.448058 \/ 0.215209 (0.232849) | 4.459381 \/ 2.077655 (2.381726) | 2.234020 \/ 1.504120 (0.729900) | 2.038616 \/ 1.541195 (0.497421) | 2.123795 \/ 1.468490 (0.655305) | 0.702664 \/ 4.584777 (-3.882113) | 
3.837133 \/ 3.745712 (0.091420) | 2.138574 \/ 5.269862 (-3.131287) | 1.375955 \/ 4.565676 (-3.189722) | 0.086996 \/ 0.424275 (-0.337280) | 0.012461 \/ 0.007607 (0.004854) | 0.557978 \/ 0.226044 (0.331934) | 5.648613 \/ 2.268929 (3.379685) | 2.777829 \/ 55.444624 (-52.666796) | 2.392424 \/ 6.876477 (-4.484052) | 2.482823 \/ 2.142072 (0.340750) | 0.851891 \/ 4.805227 (-3.953336) | 0.171335 \/ 6.500664 (-6.329329) | 0.065041 \/ 0.075469 (-0.010428) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.319697 \/ 1.841788 (-0.522091) | 15.748688 \/ 8.074308 (7.674380) | 13.397042 \/ 10.191392 (3.205650) | 0.166424 \/ 0.680424 (-0.514000) | 0.017755 \/ 0.534201 (-0.516446) | 0.424989 \/ 0.579283 (-0.154294) | 0.424705 \/ 0.434364 (-0.009659) | 0.494190 \/ 0.540337 (-0.046147) | 0.588315 \/ 1.386936 (-0.798622) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#15c37ed142e4fbcb8c00ae62d4c71c84ce41959a \"CML watermark\")\n"],"created_at":1683667301000,"updated_at":1684136172000,"closed_at":1683750183000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5836","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5836","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5836.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5836.patch","merged_at":1683750183000},"body":"Adds custom decoding transform solution to the docs to fix #5782.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5836\/reactions","total_count":1,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":1,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5836\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5835","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5835\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5835\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5835\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5835","id":1702522620,"node_id":"PR_kwDODunzps5QHquR","number":5835,"title":"Always set nullable fields in the 
writer","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006640 \/ 0.011353 (-0.004713) | 0.004606 \/ 0.011008 (-0.006402) | 0.098870 \/ 0.038508 (0.060362) | 0.028201 \/ 0.023109 (0.005092) | 0.304396 \/ 0.275898 (0.028498) | 0.339804 \/ 0.323480 (0.016324) | 0.005011 \/ 0.007986 (-0.002974) | 0.003530 \/ 0.004328 (-0.000799) | 0.075223 \/ 0.004250 (0.070973) | 0.037922 \/ 0.037052 (0.000870) | 0.310273 \/ 0.258489 (0.051784) | 0.348324 \/ 0.293841 (0.054483) | 0.030181 \/ 0.128546 (-0.098365) | 0.011584 \/ 0.075646 (-0.064062) | 0.322637 \/ 0.419271 (-0.096635) | 0.043119 \/ 0.043533 (-0.000414) | 0.314514 \/ 0.255139 (0.059375) | 0.334384 \/ 0.283200 (0.051185) | 0.092551 \/ 0.141683 (-0.049132) | 1.496694 \/ 1.452155 (0.044539) | 1.555426 \/ 1.492716 (0.062710) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.205078 \/ 0.018006 (0.187072) | 0.399200 \/ 0.000490 (0.398710) | 0.004881 \/ 0.000200 (0.004681) | 0.000200 \/ 0.000054 (0.000146) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.025042 \/ 0.037411 (-0.012369) | 0.101501 \/ 0.014526 (0.086975) | 0.107430 \/ 0.176557 (-0.069127) | 0.170107 \/ 0.737135 (-0.567028) | 0.111253 \/ 0.296338 (-0.185086) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.460358 \/ 0.215209 (0.245149) | 4.592037 \/ 2.077655 (2.514383) | 2.222612 \/ 1.504120 (0.718493) | 2.022804 \/ 1.541195 (0.481610) | 2.040824 \/ 1.468490 (0.572334) | 0.700485 \/ 4.584777 (-3.884292) | 
3.427847 \/ 3.745712 (-0.317866) | 2.836916 \/ 5.269862 (-2.432946) | 1.505055 \/ 4.565676 (-3.060621) | 0.083206 \/ 0.424275 (-0.341069) | 0.046492 \/ 0.007607 (0.038885) | 0.555562 \/ 0.226044 (0.329518) | 5.563574 \/ 2.268929 (3.294645) | 2.635273 \/ 55.444624 (-52.809351) | 2.299377 \/ 6.876477 (-4.577100) | 2.394512 \/ 2.142072 (0.252440) | 0.809541 \/ 4.805227 (-3.995686) | 0.151814 \/ 6.500664 (-6.348850) | 0.067241 \/ 0.075469 (-0.008228) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.188396 \/ 1.841788 (-0.653392) | 13.714596 \/ 8.074308 (5.640288) | 14.076906 \/ 10.191392 (3.885514) | 0.143447 \/ 0.680424 (-0.536977) | 0.016514 \/ 0.534201 (-0.517687) | 0.383075 \/ 0.579283 (-0.196209) | 0.386997 \/ 0.434364 (-0.047367) | 0.441941 \/ 0.540337 (-0.098396) | 0.522145 \/ 1.386936 (-0.864791) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.006266 \/ 0.011353 (-0.005086) | 0.004562 \/ 0.011008 (-0.006446) | 0.077472 \/ 0.038508 (0.038964) | 0.027596 \/ 0.023109 (0.004486) | 0.400498 \/ 0.275898 (0.124600) | 0.406728 \/ 0.323480 (0.083248) | 0.004745 \/ 0.007986 (-0.003241) | 0.003375 \/ 0.004328 (-0.000954) | 0.076645 \/ 0.004250 (0.072394) | 0.037756 \/ 0.037052 (0.000703) | 0.415183 \/ 0.258489 (0.156694) | 0.413758 \/ 0.293841 (0.119917) | 0.030624 \/ 0.128546 (-0.097922) | 0.011525 \/ 0.075646 (-0.064121) | 0.086033 \/ 0.419271 (-0.333238) | 0.039307 \/ 0.043533 (-0.004226) | 0.418192 \/ 0.255139 (0.163053) | 0.403152 \/ 0.283200 (0.119952) | 0.094141 \/ 0.141683 (-0.047542) | 1.459012 \/ 1.452155 (0.006857) | 1.546493 \/ 1.492716 (0.053777) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.239494 \/ 0.018006 (0.221488) | 0.420918 \/ 0.000490 (0.420428) | 0.000411 \/ 0.000200 (0.000211) | 0.000057 \/ 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.024525 \/ 0.037411 (-0.012886) | 0.099793 \/ 0.014526 (0.085267) | 0.105888 \/ 0.176557 (-0.070669) | 0.155912 \/ 0.737135 (-0.581223) | 0.109937 \/ 0.296338 (-0.186401) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.470108 \/ 0.215209 (0.254899) | 4.696390 \/ 2.077655 (2.618735) | 2.467841 \/ 1.504120 (0.963721) | 2.275012 \/ 1.541195 (0.733818) | 2.430736 \/ 1.468490 (0.962245) | 0.700442 \/ 4.584777 (-3.884335) | 
3.458451 \/ 3.745712 (-0.287261) | 1.921120 \/ 5.269862 (-3.348742) | 1.183292 \/ 4.565676 (-3.382384) | 0.083985 \/ 0.424275 (-0.340290) | 0.012510 \/ 0.007607 (0.004903) | 0.589066 \/ 0.226044 (0.363022) | 5.896070 \/ 2.268929 (3.627141) | 2.935379 \/ 55.444624 (-52.509245) | 2.599524 \/ 6.876477 (-4.276953) | 2.663426 \/ 2.142072 (0.521354) | 0.812096 \/ 4.805227 (-3.993131) | 0.152559 \/ 6.500664 (-6.348105) | 0.066906 \/ 0.075469 (-0.008563) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.333341 \/ 1.841788 (-0.508446) | 14.441667 \/ 8.074308 (6.367359) | 14.754069 \/ 10.191392 (4.562677) | 0.155707 \/ 0.680424 (-0.524716) | 0.016983 \/ 0.534201 (-0.517218) | 0.389386 \/ 0.579283 (-0.189897) | 0.394106 \/ 0.434364 (-0.040258) | 0.447355 \/ 0.540337 (-0.092982) | 0.533142 \/ 1.386936 (-0.853794) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#99ee4467ce77f8f718159a535e237dd8790b5bed \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007801 \/ 0.011353 (-0.003552) | 0.004884 \/ 0.011008 (-0.006124) | 0.114754 \/ 0.038508 (0.076245) | 0.040427 \/ 0.023109 (0.017318) | 0.402064 \/ 0.275898 (0.126166) | 0.428830 \/ 0.323480 (0.105350) | 0.006429 \/ 0.007986 (-0.001556) | 0.004394 \/ 0.004328 (0.000066) | 0.087681 \/ 0.004250 (0.083431) | 0.053684 \/ 0.037052 (0.016632) | 0.399967 \/ 0.258489 (0.141478) | 0.445298 \/ 0.293841 (0.151457) | 0.033194 \/ 0.128546 (-0.095352) | 0.010288 \/ 0.075646 (-0.065359) | 0.390719 \/ 0.419271 (-0.028552) | 0.059311 \/ 0.043533 (0.015778) | 0.393651 \/ 0.255139 (0.138512) | 0.418395 \/ 0.283200 (0.135196) | 0.121494 \/ 0.141683 (-0.020189) | 1.735470 \/ 1.452155 (0.283315) | 1.820485 \/ 1.492716 (0.327769) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.012887 \/ 0.018006 (-0.005119) | 0.491652 \/ 0.000490 (0.491162) | 0.005481 \/ 0.000200 (0.005281) | 0.000127 \/ 0.000054 (0.000073) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030931 \/ 0.037411 (-0.006480) | 0.125212 \/ 0.014526 (0.110686) | 0.136004 \/ 0.176557 (-0.040552) | 0.201686 \/ 0.737135 (-0.535449) | 0.140181 \/ 0.296338 (-0.156157) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.475003 \/ 0.215209 (0.259794) | 4.743918 \/ 2.077655 (2.666263) | 2.149422 \/ 1.504120 (0.645302) | 1.925016 \/ 1.541195 (0.383821) | 2.061441 \/ 1.468490 (0.592951) | 0.619845 \/ 4.584777 (-3.964932) | 
4.534691 \/ 3.745712 (0.788979) | 2.248198 \/ 5.269862 (-3.021664) | 1.409868 \/ 4.565676 (-3.155808) | 0.080265 \/ 0.424275 (-0.344010) | 0.014455 \/ 0.007607 (0.006848) | 0.597810 \/ 0.226044 (0.371765) | 5.845492 \/ 2.268929 (3.576564) | 2.729139 \/ 55.444624 (-52.715486) | 2.313879 \/ 6.876477 (-4.562598) | 2.418763 \/ 2.142072 (0.276690) | 0.748687 \/ 4.805227 (-4.056540) | 0.165278 \/ 6.500664 (-6.335387) | 0.076848 \/ 0.075469 (0.001379) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.416349 \/ 1.841788 (-0.425439) | 17.440903 \/ 8.074308 (9.366595) | 17.025733 \/ 10.191392 (6.834341) | 0.167428 \/ 0.680424 (-0.512995) | 0.020484 \/ 0.534201 (-0.513717) | 0.470273 \/ 0.579283 (-0.109010) | 0.494380 \/ 0.434364 (0.060016) | 0.566131 \/ 0.540337 (0.025794) | 0.690444 \/ 1.386936 (-0.696492) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007695 \/ 0.011353 (-0.003657) | 0.005551 \/ 0.011008 (-0.005457) | 0.087812 \/ 0.038508 (0.049304) | 0.039107 \/ 0.023109 (0.015998) | 0.436461 \/ 0.275898 (0.160563) | 0.465116 \/ 0.323480 (0.141636) | 0.006590 \/ 0.007986 (-0.001396) | 0.004672 \/ 0.004328 (0.000343) | 0.087109 \/ 0.004250 (0.082858) | 0.054227 \/ 0.037052 (0.017175) | 0.442660 \/ 0.258489 (0.184171) | 0.484296 \/ 0.293841 (0.190455) | 0.033308 \/ 0.128546 (-0.095238) | 0.010780 \/ 0.075646 (-0.064866) | 0.095255 \/ 0.419271 (-0.324016) | 0.054399 \/ 0.043533 (0.010866) | 0.431734 \/ 0.255139 (0.176595) | 0.453583 \/ 0.283200 (0.170383) | 0.116067 \/ 0.141683 (-0.025616) | 1.780701 \/ 1.452155 (0.328546) | 1.851077 \/ 1.492716 (0.358360) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.228000 \/ 0.018006 (0.209994) | 0.485733 \/ 0.000490 (0.485243) | 0.003955 \/ 0.000200 (0.003755) | 0.000109 \/ 0.000054 (0.000054) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.033974 \/ 0.037411 (-0.003437) | 0.134504 \/ 0.014526 (0.119978) | 0.144421 \/ 0.176557 (-0.032135) | 0.202171 \/ 0.737135 (-0.534964) | 0.152015 \/ 0.296338 (-0.144323) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.520462 \/ 0.215209 (0.305253) | 5.233339 \/ 2.077655 (3.155684) | 2.575013 \/ 1.504120 (1.070893) | 2.384119 \/ 1.541195 (0.842924) | 2.403856 \/ 1.468490 (0.935366) | 0.618656 \/ 4.584777 (-3.966121) | 
4.663582 \/ 3.745712 (0.917870) | 3.738594 \/ 5.269862 (-1.531268) | 1.794903 \/ 4.565676 (-2.770773) | 0.077903 \/ 0.424275 (-0.346372) | 0.014681 \/ 0.007607 (0.007074) | 0.648615 \/ 0.226044 (0.422570) | 6.503721 \/ 2.268929 (4.234792) | 3.326239 \/ 55.444624 (-52.118386) | 2.989791 \/ 6.876477 (-3.886685) | 2.995479 \/ 2.142072 (0.853407) | 0.765483 \/ 4.805227 (-4.039744) | 0.169783 \/ 6.500664 (-6.330882) | 0.077533 \/ 0.075469 (0.002064) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.518736 \/ 1.841788 (-0.323051) | 17.989119 \/ 8.074308 (9.914811) | 15.484365 \/ 10.191392 (5.292973) | 0.168507 \/ 0.680424 (-0.511917) | 0.020289 \/ 0.534201 (-0.513912) | 0.467491 \/ 0.579283 (-0.111793) | 0.501714 \/ 0.434364 (0.067350) | 0.553418 \/ 0.540337 (0.013081) | 0.662199 \/ 1.386936 (-0.724737) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#5ebda17e4362bd5f6123543a14fa526a3b54481a \"CML watermark\")\n","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007044 \/ 0.011353 (-0.004309) | 0.004750 \/ 0.011008 (-0.006258) | 0.096694 \/ 0.038508 (0.058186) | 0.035682 \/ 0.023109 (0.012573) | 0.300613 \/ 0.275898 (0.024715) | 0.334831 \/ 0.323480 (0.011351) | 0.006428 \/ 0.007986 (-0.001558) | 0.004456 \/ 0.004328 (0.000128) | 0.075060 \/ 0.004250 (0.070810) | 0.053166 \/ 0.037052 (0.016114) | 0.299601 \/ 0.258489 (0.041112) | 0.359521 \/ 0.293841 (0.065680) | 0.028072 \/ 0.128546 (-0.100474) | 0.009216 \/ 0.075646 (-0.066430) | 0.328895 \/ 0.419271 (-0.090377) | 0.050881 \/ 0.043533 (0.007349) | 0.298265 \/ 0.255139 (0.043126) | 0.318095 \/ 0.283200 (0.034896) | 0.116046 \/ 0.141683 (-0.025637) | 1.491312 \/ 1.452155 (0.039157) | 1.556053 \/ 1.492716 (0.063337) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.014248 \/ 0.018006 (-0.003758) | 0.551455 \/ 0.000490 (0.550965) | 0.006096 \/ 0.000200 (0.005897) | 0.000145 \/ 0.000054 (0.000091) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030598 \/ 0.037411 (-0.006813) | 0.109549 \/ 0.014526 (0.095023) | 0.123207 \/ 0.176557 (-0.053350) | 0.181940 \/ 0.737135 (-0.555195) | 0.128965 \/ 0.296338 (-0.167374) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.404552 \/ 0.215209 (0.189343) | 4.030674 \/ 2.077655 (1.953020) | 1.841819 \/ 1.504120 (0.337699) | 1.650055 \/ 1.541195 (0.108860) | 1.763208 \/ 1.468490 (0.294718) | 0.532715 \/ 4.584777 (-4.052062) | 
3.774810 \/ 3.745712 (0.029098) | 3.221927 \/ 5.269862 (-2.047934) | 1.607974 \/ 4.565676 (-2.957702) | 0.067160 \/ 0.424275 (-0.357116) | 0.012479 \/ 0.007607 (0.004872) | 0.498801 \/ 0.226044 (0.272757) | 4.980567 \/ 2.268929 (2.711638) | 2.356017 \/ 55.444624 (-53.088608) | 2.018975 \/ 6.876477 (-4.857502) | 2.218343 \/ 2.142072 (0.076270) | 0.645714 \/ 4.805227 (-4.159514) | 0.145470 \/ 6.500664 (-6.355195) | 0.065666 \/ 0.075469 (-0.009803) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.205756 \/ 1.841788 (-0.636031) | 15.682779 \/ 8.074308 (7.608470) | 14.748987 \/ 10.191392 (4.557595) | 0.167105 \/ 0.680424 (-0.513319) | 0.017554 \/ 0.534201 (-0.516647) | 0.393924 \/ 0.579283 (-0.185359) | 0.432659 \/ 0.434364 (-0.001705) | 0.502033 \/ 0.540337 (-0.038304) | 0.602244 \/ 1.386936 (-0.784692) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007077 \/ 0.011353 (-0.004276) | 0.004911 \/ 0.011008 (-0.006097) | 0.075120 \/ 0.038508 (0.036612) | 0.035460 \/ 0.023109 (0.012351) | 0.362569 \/ 0.275898 (0.086671) | 0.398995 \/ 0.323480 (0.075515) | 0.006587 \/ 0.007986 (-0.001398) | 0.004571 \/ 0.004328 (0.000242) | 0.074647 \/ 0.004250 (0.070397) | 0.057331 \/ 0.037052 (0.020279) | 0.365123 \/ 0.258489 (0.106634) | 0.408617 \/ 0.293841 (0.114776) | 0.028911 \/ 0.128546 (-0.099635) | 0.009533 \/ 0.075646 (-0.066113) | 0.081566 \/ 0.419271 (-0.337705) | 0.048841 \/ 0.043533 (0.005308) | 0.367245 \/ 0.255139 (0.112106) | 0.375975 \/ 0.283200 (0.092776) | 0.123211 \/ 0.141683 (-0.018472) | 1.471588 \/ 1.452155 (0.019433) | 1.569342 \/ 1.492716 (0.076625) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.328443 \/ 0.018006 (0.310436) | 0.541402 \/ 0.000490 (0.540912) | 0.000440 \/ 0.000200 (0.000240) | 0.000058 \/ 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.030772 \/ 0.037411 (-0.006639) | 0.115833 \/ 0.014526 (0.101307) | 0.127837 \/ 0.176557 (-0.048719) | 0.180897 \/ 0.737135 (-0.556238) | 0.132458 \/ 0.296338 (-0.163881) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.445979 \/ 0.215209 (0.230770) | 4.453101 \/ 2.077655 (2.375447) | 2.276625 \/ 1.504120 (0.772505) | 2.102167 \/ 1.541195 (0.560972) | 2.181583 \/ 1.468490 (0.713093) | 0.525069 \/ 4.584777 (-4.059708) | 
3.803446 \/ 3.745712 (0.057734) | 1.954173 \/ 5.269862 (-3.315688) | 1.088734 \/ 4.565676 (-3.476942) | 0.066020 \/ 0.424275 (-0.358255) | 0.012158 \/ 0.007607 (0.004551) | 0.546828 \/ 0.226044 (0.320783) | 5.454060 \/ 2.268929 (3.185132) | 2.756154 \/ 55.444624 (-52.688470) | 2.476501 \/ 6.876477 (-4.399976) | 2.525875 \/ 2.142072 (0.383803) | 0.647515 \/ 4.805227 (-4.157712) | 0.144511 \/ 6.500664 (-6.356153) | 0.067060 \/ 0.075469 (-0.008409) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.306456 \/ 1.841788 (-0.535332) | 15.822623 \/ 8.074308 (7.748315) | 14.929114 \/ 10.191392 (4.737721) | 0.168650 \/ 0.680424 (-0.511773) | 0.018043 \/ 0.534201 (-0.516158) | 0.396712 \/ 0.579283 (-0.182572) | 0.425800 \/ 0.434364 (-0.008564) | 0.466452 \/ 0.540337 (-0.073885) | 0.564370 \/ 1.386936 (-0.822566) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#5ebda17e4362bd5f6123543a14fa526a3b54481a \"CML watermark\")\n"],"created_at":1683656219000,"updated_at":1684858229000,"closed_at":1684501470000,"author_association":"MEMBER","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5835","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5835","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5835.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5835.patch","merged_at":1684501470000},"body":"This fixes loading of e.g. parquet data with non-nullable fields.\r\n\r\nIndeed `datasets.Features` doesn't support non-nullable fields, which can lead to data not concatenable due to arrow schema mismatch.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5835\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5835\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5834","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5834\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5834\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5834\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5834","id":1702448892,"node_id":"I_kwDODunzps5leU78","number":5834,"title":"Is uint8 
supported?","user":{"login":"ryokan0123","id":17979572,"node_id":"MDQ6VXNlcjE3OTc5NTcy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17979572?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ryokan0123","html_url":"https:\/\/github.com\/ryokan0123","followers_url":"https:\/\/api.github.com\/users\/ryokan0123\/followers","following_url":"https:\/\/api.github.com\/users\/ryokan0123\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ryokan0123\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ryokan0123\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ryokan0123\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ryokan0123\/orgs","repos_url":"https:\/\/api.github.com\/users\/ryokan0123\/repos","events_url":"https:\/\/api.github.com\/users\/ryokan0123\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ryokan0123\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! The numpy formatting detaults to int64 and float32 - but you can use uint8 using\r\n```python\r\nds = ds.with_format(\"numpy\", dtype=np.uint8)\r\n```","Related to https:\/\/github.com\/huggingface\/datasets\/issues\/5517.","Thank you!\r\nBy setting `ds.with_format(\"numpy\", dtype=np.uint8)`, the dataset returns the data in `uint8`.\r\n\r\nHowever, `with_format` and `set_format` seem to cast the data on-the-fly.\r\nI want to reduce the dataset size by using `uint8` instead of `int64` and I observe no difference between using `int64` and `uint8` for the vector.\r\nIs there any way to actually store the data in `uint8` and save the disk space and the downloading time when loaded from the hub?\r\n","If the feature type is `Value(\"uint8\")` then it's written an uint8 on disk using the uint8 Arrow dtype.\r\n\r\ne.g.\r\n```python\r\nds = Dataset.from_dict({\"a\": range(10)}, features=Features({\"a\": Value(\"uint8\")}))\r\nds.data.nbytes\r\n# 10\r\n```","Oh, I understand now.\r\nThe data was stored in `uint8` from the beginning (when the dataset returns `int64`).\r\n\r\nThank you for your time!\r\nMy question is fully resolved."],"created_at":1683653473000,"updated_at":1683954261000,"closed_at":1683954261000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nI expect the dataset to store the data in the `uint8` data type, but it's returning `int64` instead.\r\nWhile I've found that `datasets` doesn't yet support float16 (https:\/\/github.com\/huggingface\/datasets\/issues\/4981), I'm wondering if this is the case for other data types as well.\r\nIs there a way to store vector data as `uint8` and then upload it to the hub?\n\n### Steps to reproduce the bug\n\n```python\r\nfrom datasets import Features, Dataset, Sequence, Value\r\nimport numpy as np\r\n\r\ndataset = Dataset.from_dict(\r\n {\"vector\": [np.array([0, 1, 2], dtype=np.uint8)]}, features=Features({\"vector\": Sequence(Value(\"uint8\"))})\r\n).with_format(\"numpy\")\r\n\r\nprint(dataset[0][\"vector\"].dtype)\r\n```\n\n### Expected behavior\n\nExpected: `uint8`\r\nActual: `int64`\n\n### Environment info\n\n- `datasets` version: 2.12.0\r\n- Platform: macOS-12.1-x86_64-i386-64bit\r\n- Python version: 3.8.12\r\n- Huggingface_hub version: 0.12.1\r\n- PyArrow version: 11.0.0\r\n- Pandas version: 
1.5.3","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5834\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5834\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5833","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5833\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5833\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5833\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5833","id":1702280682,"node_id":"I_kwDODunzps5ldr3q","number":5833,"title":"Unable to push dataset - `create_pr` problem","user":{"login":"agombert","id":17645711,"node_id":"MDQ6VXNlcjE3NjQ1NzEx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17645711?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/agombert","html_url":"https:\/\/github.com\/agombert","followers_url":"https:\/\/api.github.com\/users\/agombert\/followers","following_url":"https:\/\/api.github.com\/users\/agombert\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/agombert\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/agombert\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/agombert\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/agombert\/orgs","repos_url":"https:\/\/api.github.com\/users\/agombert\/repos","events_url":"https:\/\/api.github.com\/users\/agombert\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/agombert\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":{"login":"albertvillanova","id":8515462.0,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url"
:"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting, @agombert.\r\n\r\nIn this case, I think the root issue is authentication: before pushing to Hub, you should authenticate. See our docs: https:\/\/huggingface.co\/docs\/datasets\/upload_dataset#upload-with-python\r\n> 2. To upload a dataset on the Hub in Python, you need to log in to your Hugging Face account:\r\n ```\r\n huggingface-cli login\r\n ```","Hey @albertvillanova well I actually did :D \r\n\r\n\"Capture\r\n","That is weird that you get a Forbidden error if you are properly authenticated...\r\n\r\nToday we had a big outage issue affecting the Hugging Face Hub. Could you please retry to push_to_hub your dataset? Maybe that was the cause...","Yes I've just tried again and same error 403 :\/","Login successful but also got this error \"Forbidden: pass `create_pr=1` as a query parameter to create a Pull Request\"","Make sure your API token has a `write` role. I had the same issue as you with the `read` token. Creating a `write` token and using that solved the issue.","> Make sure your API token has a `write` role. I had the same issue as you with the `read` token. Creating a `write` token and using that solved the issue.\r\n\r\nI generate a token with write role. It works! thank you so much."],"created_at":1683646375000,"updated_at":1687860569000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nI can't upload to the hub the dataset I manually created locally (Image dataset). 
I have a problem when using the method `.push_to_hub` which asks for a `create_pr` attribute which is not compatible.\n\n### Steps to reproduce the bug\n\nhere what I have:\r\n\r\n```python\r\ndataset.push_to_hub(\"agomberto\/FrenchCensus-handwritten-texts\")\r\n```\r\nOutput:\r\n```python\r\nPushing split train to the Hub.\r\nPushing dataset shards to the dataset hub: 0%| | 0\/2 [00:00 259 response.raise_for_status()\r\n 260 except HTTPError as e:\r\n\r\nFile ~\/miniconda3\/envs\/hwocr\/lib\/python3.8\/site-packages\/requests\/models.py:1021, in Response.raise_for_status(self)\r\n 1020 if http_error_msg:\r\n-> 1021 raise HTTPError(http_error_msg, response=self)\r\n\r\nHTTPError: 403 Client Error: Forbidden for url: https:\/\/huggingface.co\/api\/datasets\/agomberto\/FrenchCensus-handwritten-texts\/commit\/main\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nHfHubHTTPError Traceback (most recent call last)\r\nCell In[7], line 1\r\n----> 1 dataset.push_to_hub(\"agomberto\/FrenchCensus-handwritten-texts\")\r\n\r\nFile ~\/miniconda3\/envs\/hwocr\/lib\/python3.8\/site-packages\/datasets\/dataset_dict.py:1583, in DatasetDict.push_to_hub(self, repo_id, private, token, branch, max_shard_size, num_shards, embed_external_files)\r\n 1581 logger.warning(f\"Pushing split {split} to the Hub.\")\r\n 1582 # The split=key needs to be removed before merging\r\n-> 1583 repo_id, split, uploaded_size, dataset_nbytes, _, _ = self[split]._push_parquet_shards_to_hub(\r\n 1584 repo_id,\r\n 1585 split=split,\r\n 1586 private=private,\r\n 1587 token=token,\r\n 1588 branch=branch,\r\n 1589 max_shard_size=max_shard_size,\r\n 1590 num_shards=num_shards.get(split),\r\n 1591 embed_external_files=embed_external_files,\r\n 1592 )\r\n 1593 total_uploaded_size += uploaded_size\r\n 1594 total_dataset_nbytes += dataset_nbytes\r\n\r\nFile ~\/miniconda3\/envs\/hwocr\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py:5275, in Dataset._push_parquet_shards_to_hub(self, repo_id, split, private, token, branch, max_shard_size, num_shards, embed_external_files)\r\n 5273 shard.to_parquet(buffer)\r\n 5274 uploaded_size += buffer.tell()\r\n-> 5275 _retry(\r\n 5276 api.upload_file,\r\n 5277 func_kwargs={\r\n 5278 \"path_or_fileobj\": buffer.getvalue(),\r\n 5279 \"path_in_repo\": shard_path_in_repo,\r\n 5280 \"repo_id\": repo_id,\r\n 5281 \"token\": token,\r\n 5282 \"repo_type\": \"dataset\",\r\n 5283 \"revision\": branch,\r\n 5284 },\r\n 5285 exceptions=HTTPError,\r\n 5286 status_codes=[504],\r\n 5287 base_wait_time=2.0,\r\n 5288 max_retries=5,\r\n 5289 max_wait_time=20.0,\r\n 5290 )\r\n 5291 shards_path_in_repo.append(shard_path_in_repo)\r\n 5293 # Cleanup to remove unused files\r\n\r\nFile ~\/miniconda3\/envs\/hwocr\/lib\/python3.8\/site-packages\/datasets\/utils\/file_utils.py:285, in _retry(func, func_args, func_kwargs, exceptions, status_codes, max_retries, base_wait_time, max_wait_time)\r\n 283 except exceptions as err:\r\n 284 if retry >= max_retries or (status_codes and err.response.status_code not in status_codes):\r\n--> 285 raise err\r\n 286 else:\r\n 287 sleep_time = min(max_wait_time, base_wait_time * 2**retry) # Exponential backoff\r\n\r\nFile ~\/miniconda3\/envs\/hwocr\/lib\/python3.8\/site-packages\/datasets\/utils\/file_utils.py:282, in _retry(func, func_args, func_kwargs, exceptions, status_codes, max_retries, base_wait_time, max_wait_time)\r\n 280 while True:\r\n 281 try:\r\n--> 282 return func(*func_args, **func_kwargs)\r\n 283 except exceptions as err:\r\n 284 if retry >= 
max_retries or (status_codes and err.response.status_code not in status_codes):\r\n\r\nFile ~\/miniconda3\/envs\/hwocr\/lib\/python3.8\/site-packages\/huggingface_hub\/utils\/_validators.py:120, in validate_hf_hub_args.._inner_fn(*args, **kwargs)\r\n 117 if check_use_auth_token:\r\n 118 kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs)\r\n--> 120 return fn(*args, **kwargs)\r\n\r\nFile ~\/miniconda3\/envs\/hwocr\/lib\/python3.8\/site-packages\/huggingface_hub\/hf_api.py:2998, in HfApi.upload_file(self, path_or_fileobj, path_in_repo, repo_id, token, repo_type, revision, commit_message, commit_description, create_pr, parent_commit)\r\n 2990 commit_message = (\r\n 2991 commit_message if commit_message is not None else f\"Upload {path_in_repo} with huggingface_hub\"\r\n 2992 )\r\n 2993 operation = CommitOperationAdd(\r\n 2994 path_or_fileobj=path_or_fileobj,\r\n 2995 path_in_repo=path_in_repo,\r\n 2996 )\r\n-> 2998 commit_info = self.create_commit(\r\n 2999 repo_id=repo_id,\r\n 3000 repo_type=repo_type,\r\n 3001 operations=[operation],\r\n 3002 commit_message=commit_message,\r\n 3003 commit_description=commit_description,\r\n 3004 token=token,\r\n 3005 revision=revision,\r\n 3006 create_pr=create_pr,\r\n 3007 parent_commit=parent_commit,\r\n 3008 )\r\n 3010 if commit_info.pr_url is not None:\r\n 3011 revision = quote(_parse_revision_from_pr_url(commit_info.pr_url), safe=\"\")\r\n\r\nFile ~\/miniconda3\/envs\/hwocr\/lib\/python3.8\/site-packages\/huggingface_hub\/utils\/_validators.py:120, in validate_hf_hub_args.._inner_fn(*args, **kwargs)\r\n 117 if check_use_auth_token:\r\n 118 kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs)\r\n--> 120 return fn(*args, **kwargs)\r\n\r\nFile ~\/miniconda3\/envs\/hwocr\/lib\/python3.8\/site-packages\/huggingface_hub\/hf_api.py:2548, in HfApi.create_commit(self, repo_id, operations, commit_message, commit_description, token, repo_type, revision, create_pr, num_threads, parent_commit)\r\n 2546 try:\r\n 2547 commit_resp = get_session().post(url=commit_url, headers=headers, data=data, params=params)\r\n-> 2548 hf_raise_for_status(commit_resp, endpoint_name=\"commit\")\r\n 2549 except RepositoryNotFoundError as e:\r\n 2550 e.append_to_message(_CREATE_COMMIT_NO_REPO_ERROR_MESSAGE)\r\n\r\nFile ~\/miniconda3\/envs\/hwocr\/lib\/python3.8\/site-packages\/huggingface_hub\/utils\/_errors.py:301, in hf_raise_for_status(response, endpoint_name)\r\n 297 raise BadRequestError(message, response=response) from e\r\n 299 # Convert `HTTPError` into a `HfHubHTTPError` to display request information\r\n 300 # as well (request id and\/or server error message)\r\n--> 301 raise HfHubHTTPError(str(e), response=response) from e\r\n\r\nHfHubHTTPError: 403 Client Error: Forbidden for url: https:\/\/huggingface.co\/api\/datasets\/agomberto\/FrenchCensus-handwritten-texts\/commit\/main (Request ID: Root=1-645a66bf-255ad91602a6404e6cb70fba)\r\n\r\nForbidden: pass `create_pr=1` as a query parameter to create a Pull Request\r\n```\r\n\r\nAnd then when I do\r\n\r\n```python\r\ndataset.push_to_hub(\"agomberto\/FrenchCensus-handwritten-texts\", create_pr=1)\r\n```\r\n\r\nI get \r\n\r\n```python\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\nCell In[8], line 1\r\n----> 1 dataset.push_to_hub(\"agomberto\/FrenchCensus-handwritten-texts\", create_pr=1)\r\n\r\nTypeError: push_to_hub() got an unexpected keyword argument 
'create_pr'\r\n```\n\n### Expected behavior\n\nI would like to have the dataset updloaded [here](https:\/\/huggingface.co\/datasets\/agomberto\/FrenchCensus-handwritten-texts).\n\n### Environment info\n\n```bash\r\n- `datasets` version: 2.12.0\r\n- Platform: macOS-13.3.1-arm64-arm-64bit\r\n- Python version: 3.8.16\r\n- Huggingface_hub version: 0.14.1\r\n- PyArrow version: 12.0.0\r\n- Pandas version: 1.5.3\r\n```","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5833\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5833\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5832","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5832\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5832\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5832\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5832","id":1702135336,"node_id":"I_kwDODunzps5ldIYo","number":5832,"title":"404 Client Error: Not Found for url: https:\/\/huggingface.co\/api\/models\/bert-large-cased","user":{"login":"varungupta31","id":51288316,"node_id":"MDQ6VXNlcjUxMjg4MzE2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/51288316?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/varungupta31","html_url":"https:\/\/github.com\/varungupta31","followers_url":"https:\/\/api.github.com\/users\/varungupta31\/followers","following_url":"https:\/\/api.github.com\/users\/varungupta31\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/varungupta31\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/varungupta31\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/varungupta31\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/varungupta31\/orgs","repos_url":"https:\/\/api.github.com\/users\/varungupta31\/repos","events_url":"https:\/\/api.github.com\/users\/varungupta31\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/varungupta31\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["moved to https:\/\/github.com\/huggingface\/transformers\/issues\/23233"],"created_at":1683641699000,"updated_at":1683642359000,"closed_at":1683642359000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nRunning [Bert-Large-Cased](https:\/\/huggingface.co\/bert-large-cased) model causes `HTTPError`, with the following traceback-\r\n\r\n```\r\nHTTPError Traceback (most recent call last)\r\n in \r\n----> 1 tokenizer = BertTokenizer.from_pretrained('bert-large-cased')\r\n\r\n~\/miniconda3\/envs\/cmd-chall\/lib\/python3.7\/site-packages\/transformers\/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)\r\n 1646 # At this point pretrained_model_name_or_path is either a directory or a model identifier name\r\n 1647 fast_tokenizer_file = get_fast_tokenizer_file(\r\n-> 1648 pretrained_model_name_or_path, revision=revision, 
use_auth_token=use_auth_token\r\n 1649 )\r\n 1650 additional_files_names = {\r\n\r\n~\/miniconda3\/envs\/cmd-chall\/lib\/python3.7\/site-packages\/transformers\/tokenization_utils_base.py in get_fast_tokenizer_file(path_or_repo, revision, use_auth_token)\r\n 3406 \"\"\"\r\n 3407 # Inspect all files from the repo\/folder.\r\n-> 3408 all_files = get_list_of_files(path_or_repo, revision=revision, use_auth_token=use_auth_token)\r\n 3409 tokenizer_files_map = {}\r\n 3410 for file_name in all_files:\r\n\r\n~\/miniconda3\/envs\/cmd-chall\/lib\/python3.7\/site-packages\/transformers\/file_utils.py in get_list_of_files(path_or_repo, revision, use_auth_token)\r\n 1685 token = None\r\n 1686 model_info = HfApi(endpoint=HUGGINGFACE_CO_RESOLVE_ENDPOINT).model_info(\r\n-> 1687 path_or_repo, revision=revision, token=token\r\n 1688 )\r\n 1689 return [f.rfilename for f in model_info.siblings]\r\n\r\n~\/miniconda3\/envs\/cmd-chall\/lib\/python3.7\/site-packages\/huggingface_hub\/hf_api.py in model_info(self, repo_id, revision, token)\r\n 246 )\r\n 247 r = requests.get(path, headers=headers)\r\n--> 248 r.raise_for_status()\r\n 249 d = r.json()\r\n 250 return ModelInfo(**d)\r\n\r\n~\/miniconda3\/envs\/cmd-chall\/lib\/python3.7\/site-packages\/requests\/models.py in raise_for_status(self)\r\n 951 \r\n 952 if http_error_msg:\r\n--> 953 raise HTTPError(http_error_msg, response=self)\r\n 954 \r\n 955 def close(self):\r\n\r\nHTTPError: 404 Client Error: Not Found for url: https:\/\/huggingface.co\/api\/models\/bert-large-cased\r\n```\r\n\r\nI have also tried running in offline mode, as [discussed here](https:\/\/huggingface.co\/docs\/transformers\/installation#offline-mode)\r\n```\r\nHF_DATASETS_OFFLINE=1 \r\nTRANSFORMERS_OFFLINE=1\r\n```\n\n### Steps to reproduce the bug\n\n1. `from transformers import BertTokenizer, BertModel`\r\n2. 
`tokenizer = BertTokenizer.from_pretrained('bert-large-cased')`\n\n### Expected behavior\n\nRun without the HTTP error.\n\n### Environment info\n\n| # Name | Version | Build | Channel | |\r\n|--------------------|------------|-----------------------------|---------|---|\r\n| _libgcc_mutex | 0.1 | main | | |\r\n| _openmp_mutex | 4.5 | 1_gnu | | |\r\n| _pytorch_select | 0.1 | cpu_0 | | |\r\n| appdirs | 1.4.4 | pypi_0 | pypi | |\r\n| backcall | 0.2.0 | pypi_0 | pypi | |\r\n| blas | 1.0 | mkl | | |\r\n| bzip2 | 1.0.8 | h7b6447c_0 | | |\r\n| ca-certificates | 2021.7.5 | h06a4308_1 | | |\r\n| certifi | 2021.5.30 | py37h06a4308_0 | | |\r\n| cffi | 1.14.6 | py37h400218f_0 | | |\r\n| charset-normalizer | 2.0.3 | pypi_0 | pypi | |\r\n| click | 8.0.1 | pypi_0 | pypi | |\r\n| colorama | 0.4.4 | pypi_0 | pypi | |\r\n| cudatoolkit | 11.1.74 | h6bb024c_0 | nvidia | |\r\n| cycler | 0.11.0 | pypi_0 | pypi | |\r\n| decorator | 5.0.9 | pypi_0 | pypi | |\r\n| docker-pycreds | 0.4.0 | pypi_0 | pypi | |\r\n| docopt | 0.6.2 | pypi_0 | pypi | |\r\n| dominate | 2.6.0 | pypi_0 | pypi | |\r\n| ffmpeg | 4.3 | hf484d3e_0 | pytorch | |\r\n| filelock | 3.0.12 | pypi_0 | pypi | |\r\n| fonttools | 4.38.0 | pypi_0 | pypi | |\r\n| freetype | 2.10.4 | h5ab3b9f_0 | | |\r\n| gitdb | 4.0.7 | pypi_0 | pypi | |\r\n| gitpython | 3.1.18 | pypi_0 | pypi | |\r\n| gmp | 6.2.1 | h2531618_2 | | |\r\n| gnutls | 3.6.15 | he1e5248_0 | | |\r\n| huggingface-hub | 0.0.12 | pypi_0 | pypi | |\r\n| humanize | 3.10.0 | pypi_0 | pypi | |\r\n| idna | 3.2 | pypi_0 | pypi | |\r\n| importlib-metadata | 4.6.1 | pypi_0 | pypi | |\r\n| intel-openmp | 2019.4 | 243 | | |\r\n| ipdb | 0.13.9 | pypi_0 | pypi | |\r\n| ipython | 7.25.0 | pypi_0 | pypi | |\r\n| ipython-genutils | 0.2.0 | pypi_0 | pypi | |\r\n| jedi | 0.18.0 | pypi_0 | pypi | |\r\n| joblib | 1.0.1 | pypi_0 | pypi | |\r\n| jpeg | 9b | h024ee3a_2 | | |\r\n| jsonpickle | 1.5.2 | pypi_0 | pypi | |\r\n| kiwisolver | 1.4.4 | pypi_0 | pypi | |\r\n| lame | 3.100 | h7b6447c_0 | | |\r\n| lcms2 | 2.12 | h3be6417_0 | | |\r\n| ld_impl_linux-64 | 2.35.1 | h7274673_9 | | |\r\n| libffi | 3.3 | he6710b0_2 | | |\r\n| libgcc-ng | 9.3.0 | h5101ec6_17 | | |\r\n| libgomp | 9.3.0 | h5101ec6_17 | | |\r\n| libiconv | 1.15 | h63c8f33_5 | | |\r\n| libidn2 | 2.3.2 | h7f8727e_0 | | |\r\n| libmklml | 2019.0.5 | 0 | | |\r\n| libpng | 1.6.37 | hbc83047_0 | | |\r\n| libstdcxx-ng | 9.3.0 | hd4cf53a_17 | | |\r\n| libtasn1 | 4.16.0 | h27cfd23_0 | | |\r\n| libtiff | 4.2.0 | h85742a9_0 | | |\r\n| libunistring | 0.9.10 | h27cfd23_0 | | |\r\n| libuv | 1.40.0 | h7b6447c_0 | | |\r\n| libwebp-base | 1.2.0 | h27cfd23_0 | | |\r\n| lz4-c | 1.9.3 | h2531618_0 | | |\r\n| matplotlib | 3.5.3 | pypi_0 | pypi | |\r\n| matplotlib-inline | 0.1.2 | pypi_0 | pypi | |\r\n| mergedeep | 1.3.4 | pypi_0 | pypi | |\r\n| mkl | 2020.2 | 256 | | |\r\n| mkl-service | 2.3.0 | py37he8ac12f_0 | | |\r\n| mkl_fft | 1.3.0 | py37h54f3939_0 | | |\r\n| mkl_random | 1.1.1 | py37h0573a6f_0 | | |\r\n| msgpack | 1.0.2 | pypi_0 | pypi | |\r\n| munch | 2.5.0 | pypi_0 | pypi | |\r\n| ncurses | 6.2 | he6710b0_1 | | |\r\n| nettle | 3.7.3 | hbbd107a_1 | | |\r\n| ninja | 1.10.2 | hff7bd54_1 | | |\r\n| nltk | 3.8.1 | pypi_0 | pypi | |\r\n| numpy | 1.19.2 | py37h54aff64_0 | | |\r\n| numpy-base | 1.19.2 | py37hfa32c7d_0 | | |\r\n| olefile | 0.46 | py37_0 | | |\r\n| openh264 | 2.1.0 | hd408876_0 | | |\r\n| openjpeg | 2.3.0 | h05c96fa_1 | | |\r\n| openssl | 1.1.1k | h27cfd23_0 | | |\r\n| packaging | 21.0 | pypi_0 | pypi | |\r\n| pandas | 1.3.1 | pypi_0 | pypi | |\r\n| parso | 0.8.2 | 
pypi_0 | pypi | |\r\n| pathtools | 0.1.2 | pypi_0 | pypi | |\r\n| pexpect | 4.8.0 | pypi_0 | pypi | |\r\n| pickleshare | 0.7.5 | pypi_0 | pypi | |\r\n| pillow | 8.3.1 | py37h2c7a002_0 | | |\r\n| pip | 21.1.3 | py37h06a4308_0 | | |\r\n| prompt-toolkit | 3.0.19 | pypi_0 | pypi | |\r\n| protobuf | 4.21.12 | pypi_0 | pypi | |\r\n| psutil | 5.8.0 | pypi_0 | pypi | |\r\n| ptyprocess | 0.7.0 | pypi_0 | pypi | |\r\n| py-cpuinfo | 8.0.0 | pypi_0 | pypi | |\r\n| pycparser | 2.20 | py_2 | | |\r\n| pygments | 2.9.0 | pypi_0 | pypi | |\r\n| pyparsing | 2.4.7 | pypi_0 | pypi | |\r\n| python | 3.7.10 | h12debd9_4 | | |\r\n| python-dateutil | 2.8.2 | pypi_0 | pypi | |\r\n| pytorch | 1.9.0 | py3.7_cuda11.1_cudnn8.0.5_0 | pytorch | |\r\n| pytz | 2021.1 | pypi_0 | pypi | |\r\n| pyyaml | 5.4.1 | pypi_0 | pypi | |\r\n| readline | 8.1 | h27cfd23_0 | | |\r\n| regex | 2022.10.31 | pypi_0 | pypi | |\r\n| requests | 2.26.0 | pypi_0 | pypi | |\r\n| sacred | 0.8.2 | pypi_0 | pypi | |\r\n| sacremoses | 0.0.45 | pypi_0 | pypi | |\r\n| scikit-learn | 0.24.2 | pypi_0 | pypi | |\r\n| scipy | 1.7.0 | pypi_0 | pypi | |\r\n| sentry-sdk | 1.15.0 | pypi_0 | pypi | |\r\n| setproctitle | 1.3.2 | pypi_0 | pypi | |\r\n| setuptools | 52.0.0 | py37h06a4308_0 | | |\r\n| six | 1.16.0 | pyhd3eb1b0_0 | | |\r\n| smmap | 4.0.0 | pypi_0 | pypi | |\r\n| sqlite | 3.36.0 | hc218d9a_0 | | |\r\n| threadpoolctl | 2.2.0 | pypi_0 | pypi | |\r\n| tk | 8.6.10 | hbc83047_0 | | |\r\n| tokenizers | 0.10.3 | pypi_0 | pypi | |\r\n| toml | 0.10.2 | pypi_0 | pypi | |\r\n| torchaudio | 0.9.0 | py37 | pytorch | |\r\n| torchvision | 0.10.0 | py37_cu111 | pytorch | |\r\n| tqdm | 4.61.2 | pypi_0 | pypi | |\r\n| traitlets | 5.0.5 | pypi_0 | pypi | |\r\n| transformers | 4.9.1 | pypi_0 | pypi | |\r\n| typing-extensions | 3.10.0.0 | hd3eb1b0_0 | | |\r\n| typing_extensions | 3.10.0.0 | pyh06a4308_0 | | |\r\n| urllib3 | 1.26.14 | pypi_0 | pypi | |\r\n| wandb | 0.13.10 | pypi_0 | pypi | |\r\n| wcwidth | 0.2.5 | pypi_0 | pypi | |\r\n| wheel | 0.36.2 | pyhd3eb1b0_0 | | |\r\n| wrapt | 1.12.1 | pypi_0 | pypi | |\r\n| xz | 5.2.5 | h7b6447c_0 | | |\r\n| zipp | 3.5.0 | pypi_0 | pypi | |\r\n| zlib | 1.2.11 | h7b6447c_3 | | |\r\n| zstd | 1.4.9 | haebb681_0 | | |","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5832\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5832\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5831","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5831\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5831\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5831\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5831","id":1701813835,"node_id":"I_kwDODunzps5lb55L","number":5831,"title":"[Bug]504 Server Error when loading dataset which was already 
cached","user":{"login":"SingL3","id":20473466,"node_id":"MDQ6VXNlcjIwNDczNDY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/20473466?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/SingL3","html_url":"https:\/\/github.com\/SingL3","followers_url":"https:\/\/api.github.com\/users\/SingL3\/followers","following_url":"https:\/\/api.github.com\/users\/SingL3\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/SingL3\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/SingL3\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/SingL3\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/SingL3\/orgs","repos_url":"https:\/\/api.github.com\/users\/SingL3\/repos","events_url":"https:\/\/api.github.com\/users\/SingL3\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/SingL3\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I am experiencing the same problem with the following environment:\r\n\r\n* `datasets` version: 2.11.0\r\n* Platform: `Linux 5.19.0-41-generic x86_64 GNU\/Linux`\r\n* Python version: `3.8.5`\r\n* Huggingface_hub version: 0.13.3\r\n* PyArrow version: `11.0.0`\r\n* Pandas version: `1.5.3`\r\n\r\nTrying to get some diagnostics, I got the following: \r\n\r\n```python\r\n>>> from huggingface_hub import scan_cache_dir\r\n>>> sd = scan_cache_dir()\r\n>>> sd\r\nHFCacheInfo(size_on_disk=0, repos=frozenset(), warnings=[CorruptedCacheException('Repo path is not a directory: \/home\/myname\/.cache\/huggingface\/hub\/version_diffusers_cache.txt')])\r\n\r\n```\r\nHowever, that might also be because I had tried to manually specify the `cache_dir` and that resulted in trying to download the dataset again ... but into a folder one level higher up than it should have.\r\n\r\nNote that my issue is with the `huggan\/wikiart` dataset, so it is not a dataset-specific issue.","same problem with a private dataset repo, seems the huggingface hub server got some connection problem?","Yes, dataset server seems down for now","@SingL3 You can avoid this error by setting the [`HF_DATASETS_OFFLINE`](https:\/\/huggingface.co\/docs\/datasets\/v2.12.0\/en\/loading#offline) env variable to 1. By default, if an internet connection is available, we check whether the cache of a cached dataset is up-to-date.\r\n\r\n@lucidBrot `datasets`' cache is still not aligned with `huggigface_hub`'s. 
We plan to align it eventually.","Today we had a big issue affecting the Hugging Face Hub, thus all the `504 Server Error: Gateway Time-out` errors.\r\n\r\nIt is fixed now and loading your datasets should work as expected.","Hi, @albertvillanova.\r\nIf there is a locally cached version of a dataset, or something cached using huggingface_hub, when a network problem (either client- or server-side) occurs, wouldn't it be better to fall back to the current cached version rather than raise an exception and exit?"],"created_at":1683628267000,"updated_at":1683683300000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nI have already cached the dataset using:\r\n```\r\ndataset = load_dataset(\"databricks\/databricks-dolly-15k\",\r\n cache_dir=\"\/mnt\/data\/llm\/datasets\/databricks-dolly-15k\")\r\n```\r\nAfter that, when I tried to load it again on the same machine, I got this error:\r\n```\r\nTraceback (most recent call last):\r\n File \"\/mnt\/home\/llm\/pythia\/train.py\", line 16, in \r\n dataset = load_dataset(\"databricks\/databricks-dolly-15k\",\r\n File \"\/mnt\/data\/conda\/envs\/pythia_ft\/lib\/python3.9\/site-packages\/datasets\/load.py\", line 1773, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"\/mnt\/data\/conda\/envs\/pythia_ft\/lib\/python3.9\/site-packages\/datasets\/load.py\", line 1502, in load_dataset_builder\r\n dataset_module = dataset_module_factory(\r\n File \"\/mnt\/data\/conda\/envs\/pythia_ft\/lib\/python3.9\/site-packages\/datasets\/load.py\", line 1219, in dataset_module_factory\r\n raise e1 from None\r\n File \"\/mnt\/data\/conda\/envs\/pythia_ft\/lib\/python3.9\/site-packages\/datasets\/load.py\", line 1186, in dataset_module_factory\r\n raise e\r\n File \"\/mnt\/data\/conda\/envs\/pythia_ft\/lib\/python3.9\/site-packages\/datasets\/load.py\", line 1160, in dataset_module_factory\r\n dataset_info = hf_api.dataset_info(\r\n File \"\/mnt\/data\/conda\/envs\/pythia_ft\/lib\/python3.9\/site-packages\/huggingface_hub\/utils\/_validators.py\", line 120, in _inner_fn\r\n return fn(*args, **kwargs)\r\n File \"\/mnt\/data\/conda\/envs\/pythia_ft\/lib\/python3.9\/site-packages\/huggingface_hub\/hf_api.py\", line 1667, in dataset_info\r\n hf_raise_for_status(r)\r\n File \"\/mnt\/data\/conda\/envs\/pythia_ft\/lib\/python3.9\/site-packages\/huggingface_hub\/utils\/_errors.py\", line 301, in hf_raise_for_status\r\n raise HfHubHTTPError(str(e), response=response) from e\r\nhuggingface_hub.utils._errors.HfHubHTTPError: 504 Server Error: Gateway Time-out for url: https:\/\/huggingface.co\/api\/datasets\/databricks\/databricks-dolly-15k\r\n```\n\n### Steps to reproduce the bug\n\n1. cache the databricks-dolly-15k dataset using load_dataset, setting a cache_dir\r\n2. 
use load_dataset again, setting the same cache_dir\n\n### Expected behavior\n\nDataset loaded successfully.\n\n### Environment info\n\n- `datasets` version: 2.12.0\r\n- Platform: Linux-4.18.0-372.16.1.el8_6.x86_64-x86_64-with-glibc2.27\r\n- Python version: 3.9.16\r\n- Huggingface_hub version: 0.14.1\r\n- PyArrow version: 11.0.0\r\n- Pandas version: 1.5.3","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5831\/reactions","total_count":3,"+1":3,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5831\/timeline","performed_via_github_app":null,"state_reason":"reopened","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5830","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5830\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5830\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5830\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5830","id":1701451399,"node_id":"PR_kwDODunzps5QEFEi","number":5830,"title":"Debug windows #2","user":{"login":"HyukjinKwon","id":6477701,"node_id":"MDQ6VXNlcjY0Nzc3MDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6477701?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/HyukjinKwon","html_url":"https:\/\/github.com\/HyukjinKwon","followers_url":"https:\/\/api.github.com\/users\/HyukjinKwon\/followers","following_url":"https:\/\/api.github.com\/users\/HyukjinKwon\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/HyukjinKwon\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/HyukjinKwon\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/HyukjinKwon\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/HyukjinKwon\/orgs","repos_url":"https:\/\/api.github.com\/users\/HyukjinKwon\/repos","events_url":"https:\/\/api.github.com\/users\/HyukjinKwon\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/HyukjinKwon\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1683614434000,"updated_at":1683614447000,"closed_at":1683614447000,"author_association":"NONE","active_lock_reason":null,"draft":true,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5830","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5830","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5830.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5830.patch","merged_at":null},"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5830\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5830\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5829","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5829\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5829\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5829\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5829","id":1699958189,"node_id":"I_kwDODunzps5lU02t","number":5829,"title":"(mach-o file, but is an incompatible architecture (have 'arm64', need 'x86_64'))","user":{"login":"elcolie","id":18206728,"node_id":"MDQ6VXNlcjE4MjA2NzI4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/18206728?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/elcolie","html_url":"https:\/\/github.com\/elcolie","followers_url":"https:\/\/api.github.com\/users\/elcolie\/followers","following_url":"https:\/\/api.github.com\/users\/elcolie\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/elcolie\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/elcolie\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/elcolie\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/elcolie\/orgs","repos_url":"https:\/\/api.github.com\/users\/elcolie\/repos","events_url":"https:\/\/api.github.com\/users\/elcolie\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/elcolie\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Can you paste the error stack trace?","That is weird. 
I can't reproduce it again after reboot.\r\n```python\r\nIn [2]: import platform\r\n\r\nIn [3]: platform.platform()\r\nOut[3]: 'macOS-13.2-arm64-arm-64bit'\r\n\r\nIn [4]: from datasets import load_dataset\r\n ...:\r\n ...: jazzy = load_dataset(\"nomic-ai\/gpt4all-j-prompt-generations\", revision='v1.2-jazzy')\r\nFound cached dataset parquet (\/Users\/sarit\/.cache\/huggingface\/datasets\/nomic-ai___parquet\/nomic-ai--gpt4all-j-prompt-generations-a3b62015e2e52043\/0.0.0\/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec)\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 63.25it\/s]\r\n```"],"created_at":1683540434000,"updated_at":1688125154000,"closed_at":1683593202000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nM2 MBP can't run\r\n```python\r\nfrom datasets import load_dataset\r\n\r\njazzy = load_dataset(\"nomic-ai\/gpt4all-j-prompt-generations\", revision='v1.2-jazzy')\r\n```\n\n### Steps to reproduce the bug\n\n1. Use M2 MBP\r\n2. Python 3.10.10 from pyenv\r\n3. 
Run \r\n```\r\nfrom datasets import load_dataset\r\n\r\njazzy = load_dataset(\"nomic-ai\/gpt4all-j-prompt-generations\", revision='v1.2-jazzy')\r\n```\n\n### Expected behavior\n\nBe able to run normally\n\n### Environment info\n\n```\r\nfrom datasets import load_dataset\r\n\r\njazzy = load_dataset(\"nomic-ai\/gpt4all-j-prompt-generations\", revision='v1.2-jazzy')\r\n```\r\nOSX: 13.2\r\nCPU: M2\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5829\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5829\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5828","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5828\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5828\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5828\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5828","id":1699235739,"node_id":"I_kwDODunzps5lSEeb","number":5828,"title":"Stream data concatenation issue","user":{"login":"krishnapriya-18","id":48817796,"node_id":"MDQ6VXNlcjQ4ODE3Nzk2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/48817796?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/krishnapriya-18","html_url":"https:\/\/github.com\/krishnapriya-18","followers_url":"https:\/\/api.github.com\/users\/krishnapriya-18\/followers","following_url":"https:\/\/api.github.com\/users\/krishnapriya-18\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/krishnapriya-18\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/krishnapriya-18\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/krishnapriya-18\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/krishnapriya-18\/orgs","repos_url":"https:\/\/api.github.com\/users\/krishnapriya-18\/repos","events_url":"https:\/\/api.github.com\/users\/krishnapriya-18\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/krishnapriya-18\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! \r\n\r\nYou can call `map` as follows to avoid the error:\r\n```python\r\naugmented_dataset_cln = dataset_cln['train'].map(augment_dataset, features=dataset_cln['train'].features)\r\n```","Thanks, it is solved.","Hi! \r\nI have run into the same problem as you. Could you please let me know how you solved it? Thanks!"],"created_at":1683493374000,"updated_at":1688069276000,"closed_at":1683695147000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\r\n\r\nI am not able to concatenate the augmentation of the stream data. 
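A minimal sketch of the fix from the comment above (untested here): passing the original `features` to `map` keeps the schemas aligned so the two datasets can be interleaved.\r\n\r\n```python\r\n# keep the Audio() feature type instead of letting map() re-infer a plain dict\r\naugmented_dataset_cln = dataset_cln['train'].map(\r\n    augment_dataset, features=dataset_cln['train'].features\r\n)\r\n```\r\n\r\n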
I am using the latest version of datasets.\r\n\r\nValueError: The features can't be aligned because the key audio of features {'audio_id': Value(dtype='string', \r\nid=None), 'audio': {'array': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), 'path': \r\nValue(dtype='null', id=None), 'sampling_rate': Value(dtype='int64', id=None)}, 'transcript': Value(dtype='string', \r\nid=None)} has unexpected type - {'array': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), \r\n'path': Value(dtype='null', id=None), 'sampling_rate': Value(dtype='int64', id=None)} (expected either \r\nAudio(sampling_rate=16000, mono=True, decode=True, id=None) or Value(\"null\").\r\n\r\n### Steps to reproduce the bug\r\n\r\nfrom datasets import load_dataset, Audio, interleave_datasets\r\n\r\ndataset = load_dataset(\"tobiolatunji\/afrispeech-200\", \"all\", streaming=True).shuffle(seed=42)\r\ndataset_cln = dataset.remove_columns(['speaker_id', 'path', 'age_group', 'gender', 'accent', 'domain', 'country', 'duration'])\r\ndataset_cln = dataset_cln.cast_column(\"audio\", Audio(sampling_rate=16000))\r\nfrom audiomentations import AddGaussianNoise,Compose,Gain,OneOf,PitchShift,PolarityInversion,TimeStretch\r\n\r\naugmentation = Compose([\r\n AddGaussianNoise(min_amplitude=0.005, max_amplitude=0.015, p=0.2)\r\n])\r\n\r\ndef augment_dataset(batch):\r\n audio = batch[\"audio\"]\r\n audio[\"array\"] = augmentation(audio[\"array\"], sample_rate=audio[\"sampling_rate\"])    \r\n return batch\r\n\r\naugmented_dataset_cln = dataset_cln['train'].map(augment_dataset)\r\ndataset_cln['train'] = interleave_datasets([dataset_cln['train'], augmented_dataset_cln])\r\ndataset_cln['train'] = dataset_cln['train'].shuffle(seed=42)\r\n\r\n### Expected behavior\r\n\r\nI should be able to merge them, as the sampling rate is the same.\r\n\r\n### Environment info\r\n\r\nimport datasets\r\nimport transformers\r\nimport accelerate\r\nprint(datasets.__version__)\r\nprint(transformers.__version__)\r\nprint(torch.__version__)\r\nprint(evaluate.__version__)\r\nprint(accelerate.__version__)\r\n\r\n2.12.0\r\n4.28.1\r\n2.0.0\r\n0.4.0\r\n0.18.0","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5828\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5828\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5827","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5827\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5827\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5827\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5827","id":1698891246,"node_id":"I_kwDODunzps5lQwXu","number":5827,"title":"load json dataset interrupts when dtype cast problem 
occured","user":{"login":"1014661165","id":46060451,"node_id":"MDQ6VXNlcjQ2MDYwNDUx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/46060451?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/1014661165","html_url":"https:\/\/github.com\/1014661165","followers_url":"https:\/\/api.github.com\/users\/1014661165\/followers","following_url":"https:\/\/api.github.com\/users\/1014661165\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/1014661165\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/1014661165\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/1014661165\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/1014661165\/orgs","repos_url":"https:\/\/api.github.com\/users\/1014661165\/repos","events_url":"https:\/\/api.github.com\/users\/1014661165\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/1014661165\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Indeed the JSON dataset builder raises an error when it encounters an unexpected type.\r\n\r\nThere's an old PR open to add away to ignore such elements though, if it can help: https:\/\/github.com\/huggingface\/datasets\/pull\/2838"],"created_at":1683435129000,"updated_at":1683721948000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\ni have a json like this:\r\n[\r\n {\"id\": 1, \"name\": 1},\r\n {\"id\": 2, \"name\": \"Nan\"},\r\n {\"id\": 3, \"name\": 3}, \r\n ....\r\n]\r\n\uff0cwhich have several problematic rows data like row 2, then i load it with datasets.load_dataset('json', data_files=['xx.json'], split='train'), it will report like this:\r\nGenerating train split: 0 examples [00:00, ? examples\/s]Failed to read file 'C:\\Users\\gawinjunwu\\Downloads\\test\\data\\a.json' with error : Could not convert '2' with type str: tried to convert to int64\r\nTraceback (most recent call last):\r\n File \"D:\\Python3.9\\lib\\site-packages\\datasets\\builder.py\", line 1858, in _prepare_split_single\r\n for _, table in generator:\r\n File \"D:\\Python3.9\\lib\\site-packages\\datasets\\packaged_modules\\json\\json.py\", line 146, in _generate_tables\r\n raise ValueError(f\"Not able to read records in the JSON file at {file}.\") from None\r\nValueError: Not able to read records in the JSON file at C:\\Users\\gawinjunwu\\Downloads\\test\\data\\a.json. 
\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"c:\\Users\\gawinjunwu\\Downloads\\test\\scripts\\a.py\", line 4, in \r\n ds = load_dataset('json', data_dir='data', split='train')\r\n File \"D:\\Python3.9\\lib\\site-packages\\datasets\\load.py\", line 1797, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"D:\\Python3.9\\lib\\site-packages\\datasets\\builder.py\", line 890, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"D:\\Python3.9\\lib\\site-packages\\datasets\\builder.py\", line 985, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"D:\\Python3.9\\lib\\site-packages\\datasets\\builder.py\", line 1746, in _prepare_split\r\n for job_id, done, content in self._prepare_split_single(\r\n File \"D:\\Python3.9\\lib\\site-packages\\datasets\\builder.py\", line 1891, in _prepare_split_single\r\n raise DatasetGenerationError(\"An error occurred while generating the dataset\") from e\r\ndatasets.builder.DatasetGenerationError: An error occurred while generating the dataset.\r\nCould datasets skip those problematic data row? \n\n### Steps to reproduce the bug\n\nprepare a json file like this:\r\n[\r\n {\"id\": 1, \"name\": 1},\r\n {\"id\": 2, \"name\": \"Nan\"},\r\n {\"id\": 3, \"name\": 3}\r\n]\r\nthen use datasets.load_dataset('json', dir_files=['xxx.json']) to load the json file\n\n### Expected behavior\n\nskip the problematic data row and load row1 and row3\n\n### Environment info\n\npython3.9","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5827\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5827\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5826","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5826\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5826\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5826\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5826","id":1698155751,"node_id":"PR_kwDODunzps5P5FYZ","number":5826,"title":"Support working_dir in 
from_spark","user":{"login":"maddiedawson","id":106995444,"node_id":"U_kgDOBmCe9A","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/106995444?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/maddiedawson","html_url":"https:\/\/github.com\/maddiedawson","followers_url":"https:\/\/api.github.com\/users\/maddiedawson\/followers","following_url":"https:\/\/api.github.com\/users\/maddiedawson\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/maddiedawson\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/maddiedawson\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/maddiedawson\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/maddiedawson\/orgs","repos_url":"https:\/\/api.github.com\/users\/maddiedawson\/repos","events_url":"https:\/\/api.github.com\/users\/maddiedawson\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/maddiedawson\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Added env var","@lhoestq would you or another maintainer be able to review please? :)","I removed the env var","
\nShow benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.005771 \/ 0.011353 (-0.005582) | 0.004086 \/ 0.011008 (-0.006922) | 0.097170 \/ 0.038508 (0.058661) | 0.027464 \/ 0.023109 (0.004355) | 0.305425 \/ 0.275898 (0.029527) | 0.343869 \/ 0.323480 (0.020389) | 0.004899 \/ 0.007986 (-0.003087) | 0.003294 \/ 0.004328 (-0.001034) | 0.074710 \/ 0.004250 (0.070459) | 0.034982 \/ 0.037052 (-0.002070) | 0.306063 \/ 0.258489 (0.047574) | 0.343115 \/ 0.293841 (0.049274) | 0.025155 \/ 0.128546 (-0.103392) | 0.008429 \/ 0.075646 (-0.067217) | 0.318680 \/ 0.419271 (-0.100591) | 0.043304 \/ 0.043533 (-0.000229) | 0.306703 \/ 0.255139 (0.051564) | 0.335535 \/ 0.283200 (0.052335) | 0.087428 \/ 0.141683 (-0.054255) | 1.483769 \/ 1.452155 (0.031614) | 1.538753 \/ 1.492716 (0.046037) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.203313 \/ 0.018006 (0.185307) | 0.413864 \/ 0.000490 (0.413375) | 0.003186 \/ 0.000200 (0.002986) | 0.000068 \/ 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.022862 \/ 0.037411 (-0.014550) | 0.097306 \/ 0.014526 (0.082780) | 0.102823 \/ 0.176557 (-0.073733) | 0.162803 \/ 0.737135 (-0.574333) | 0.106311 \/ 0.296338 (-0.190028) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.451710 \/ 0.215209 (0.236501) | 4.508520 \/ 2.077655 (2.430865) | 2.181118 \/ 1.504120 (0.676998) | 1.977607 \/ 1.541195 (0.436412) | 2.008366 \/ 1.468490 (0.539876) | 0.565388 \/ 4.584777 (-4.019389) | 
3.439318 \/ 3.745712 (-0.306394) | 1.747512 \/ 5.269862 (-3.522349) | 1.102124 \/ 4.565676 (-3.463553) | 0.069212 \/ 0.424275 (-0.355063) | 0.011926 \/ 0.007607 (0.004318) | 0.553414 \/ 0.226044 (0.327370) | 5.548959 \/ 2.268929 (3.280031) | 2.628769 \/ 55.444624 (-52.815856) | 2.301003 \/ 6.876477 (-4.575473) | 2.341744 \/ 2.142072 (0.199672) | 0.673092 \/ 4.805227 (-4.132135) | 0.137722 \/ 6.500664 (-6.362942) | 0.066909 \/ 0.075469 (-0.008560) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.196854 \/ 1.841788 (-0.644934) | 13.421776 \/ 8.074308 (5.347468) | 13.839760 \/ 10.191392 (3.648368) | 0.140557 \/ 0.680424 (-0.539867) | 0.016619 \/ 0.534201 (-0.517582) | 0.357985 \/ 0.579283 (-0.221298) | 0.387018 \/ 0.434364 (-0.047346) | 0.452798 \/ 0.540337 (-0.087540) | 0.542085 \/ 1.386936 (-0.844851) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.005868 \/ 0.011353 (-0.005484) | 0.004103 \/ 0.011008 (-0.006905) | 0.076126 \/ 0.038508 (0.037618) | 0.027744 \/ 0.023109 (0.004635) | 0.357257 \/ 0.275898 (0.081359) | 0.387981 \/ 0.323480 (0.064501) | 0.004807 \/ 0.007986 (-0.003178) | 0.003337 \/ 0.004328 (-0.000991) | 0.075486 \/ 0.004250 (0.071236) | 0.035121 \/ 0.037052 (-0.001931) | 0.361385 \/ 0.258489 (0.102896) | 0.399346 \/ 0.293841 (0.105505) | 0.025263 \/ 0.128546 (-0.103284) | 0.008571 \/ 0.075646 (-0.067075) | 0.081815 \/ 0.419271 (-0.337457) | 0.041114 \/ 0.043533 (-0.002418) | 0.362840 \/ 0.255139 (0.107701) | 0.380926 \/ 0.283200 (0.097727) | 0.092728 \/ 0.141683 (-0.048955) | 1.517647 \/ 1.452155 (0.065492) | 1.534914 \/ 1.492716 (0.042198) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.199669 \/ 0.018006 (0.181663) | 0.399070 \/ 0.000490 (0.398580) | 0.002014 \/ 0.000200 (0.001814) | 0.000079 \/ 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.024541 \/ 0.037411 (-0.012870) | 0.099676 \/ 0.014526 (0.085151) | 0.106503 \/ 0.176557 (-0.070054) | 0.153755 \/ 0.737135 (-0.583380) | 0.108564 \/ 0.296338 (-0.187775) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.443842 \/ 0.215209 (0.228633) | 4.441158 \/ 2.077655 (2.363503) | 2.159496 \/ 1.504120 (0.655376) | 1.955358 \/ 1.541195 (0.414163) | 1.973864 \/ 1.468490 (0.505374) | 0.550467 \/ 4.584777 (-4.034310) | 
3.381831 \/ 3.745712 (-0.363881) | 2.561192 \/ 5.269862 (-2.708670) | 1.361684 \/ 4.565676 (-3.203992) | 0.068140 \/ 0.424275 (-0.356135) | 0.012005 \/ 0.007607 (0.004398) | 0.551921 \/ 0.226044 (0.325877) | 5.503591 \/ 2.268929 (3.234662) | 2.591609 \/ 55.444624 (-52.853015) | 2.246681 \/ 6.876477 (-4.629796) | 2.290941 \/ 2.142072 (0.148868) | 0.655212 \/ 4.805227 (-4.150015) | 0.136013 \/ 6.500664 (-6.364651) | 0.066995 \/ 0.075469 (-0.008474) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.300438 \/ 1.841788 (-0.541350) | 13.866224 \/ 8.074308 (5.791916) | 13.932624 \/ 10.191392 (3.741232) | 0.144345 \/ 0.680424 (-0.536079) | 0.016623 \/ 0.534201 (-0.517578) | 0.357629 \/ 0.579283 (-0.221654) | 0.389759 \/ 0.434364 (-0.044605) | 0.417704 \/ 0.540337 (-0.122633) | 0.501358 \/ 1.386936 (-0.885578) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#89f775226321ba94e5bf4670a323c0fb44f5f65c \"CML watermark\")\n","Thank you!"],"created_at":1683318160000,"updated_at":1685036754000,"closed_at":1685004375000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5826","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5826","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5826.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5826.patch","merged_at":1685004375000},"body":"Accept `working_dir` as an argument to `Dataset.from_spark`. 
Setting a non-NFS working directory for Spark workers to materialize to will improve write performance.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5826\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5826\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5825","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5825\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5825\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5825\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5825","id":1697327483,"node_id":"I_kwDODunzps5lKyl7","number":5825,"title":"FileNotFound even though exists","user":{"login":"Muennighoff","id":62820084,"node_id":"MDQ6VXNlcjYyODIwMDg0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/62820084?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Muennighoff","html_url":"https:\/\/github.com\/Muennighoff","followers_url":"https:\/\/api.github.com\/users\/Muennighoff\/followers","following_url":"https:\/\/api.github.com\/users\/Muennighoff\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Muennighoff\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Muennighoff\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Muennighoff\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Muennighoff\/orgs","repos_url":"https:\/\/api.github.com\/users\/Muennighoff\/repos","events_url":"https:\/\/api.github.com\/users\/Muennighoff\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Muennighoff\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! \r\n\r\nThis would only work if `bigscience\/xP3` was a no-code dataset, but it isn't (it has a Python builder script).\r\n\r\nBut this should work: \r\n```python\r\nload_dataset(\"json\", data_files=\"https:\/\/huggingface.co\/datasets\/bigscience\/xP3\/resolve\/main\/ur\/xp3_facebook_flores_spa_Latn-urd_Arab_devtest_ab-spa_Latn-urd_Arab.jsonl\")\r\n```\r\n\r\n","I see, it's not compatible w\/ regex right?\r\ne.g.\r\n`load_dataset(\"json\", data_files=\"https:\/\/huggingface.co\/datasets\/bigscience\/xP3\/resolve\/main\/ur\/*\")`","> I see, it's not compatible w\/ regex right? e.g. 
`load_dataset(\"json\", data_files=\"https:\/\/huggingface.co\/datasets\/bigscience\/xP3\/resolve\/main\/ur\/*\")`\r\n\r\nIt should work for patterns that \"reference\" the local filesystem, but to make this work with the Hub, we must implement https:\/\/github.com\/huggingface\/datasets\/issues\/5281 first.\r\n\r\nIn the meantime, you can fetch these glob files with `HfFileSystem` and pass them as a list to `load_dataset`:\r\n```python\r\nfrom datasets import load_dataset\r\nfrom huggingface_hub import HfFileSystem, hf_hub_url # `HfFileSystem` requires the latest version of `huggingface_hub`\r\n\r\nfs = HfFileSystem()\r\nglob_files = fs.glob(\"datasets\/bigscience\/xP3\/ur\/*\")\r\n# convert fsspec URLs to HTTP URLs\r\nresolved_paths = [fs.resolve_path(file) for file in glob_files]\r\ndata_files = [hf_hub_url(resolved_path.repo_id, resolved_path.path_in_repo, repo_type=resolved_path.repo_type) for resolved_path in resolved_paths]\r\n\r\nds = load_dataset(\"json\", data_files=data_files)\r\n```"],"created_at":1683280195000,"updated_at":1683481426000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nI'm trying to download https:\/\/huggingface.co\/datasets\/bigscience\/xP3\/resolve\/main\/ur\/xp3_facebook_flores_spa_Latn-urd_Arab_devtest_ab-spa_Latn-urd_Arab.jsonl which works fine in my webbrowser, but somehow not with datasets. Am I doing sth wrong?\r\n\r\n```\r\nDownloading builder script: 100%\r\n2.82k\/2.82k [00:00<00:00, 64.2kB\/s]\r\nDownloading readme: 100%\r\n12.6k\/12.6k [00:00<00:00, 585kB\/s]\r\n---------------------------------------------------------------------------\r\nFileNotFoundError Traceback (most recent call last)\r\n[](https:\/\/localhost:8080\/#) in ()\r\n 2 lang = \"ur\"\r\n 3 fname = \"xp3_facebook_flores_spa_Latn-urd_Arab_devtest_ab-spa_Latn-urd_Arab.jsonl\"\r\n----> 4 dataset = load_dataset(\"bigscience\/xP3\", data_files=f\"{lang}\/{fname}\")\r\n\r\n6 frames\r\n[\/usr\/local\/lib\/python3.10\/dist-packages\/datasets\/data_files.py](https:\/\/localhost:8080\/#) in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions)\r\n 291 if allowed_extensions is not None:\r\n 292 error_msg += f\" with any supported extension {list(allowed_extensions)}\"\r\n--> 293 raise FileNotFoundError(error_msg)\r\n 294 return sorted(out)\r\n 295 \r\n\r\nFileNotFoundError: Unable to find 'https:\/\/huggingface.co\/datasets\/bigscience\/xP3\/resolve\/main\/ur\/xp3_facebook_flores_spa_Latn-urd_Arab_devtest_ab-spa_Latn-urd_Arab.jsonl' at \/content\/https:\/huggingface.co\/datasets\/bigscience\/xP3\/resolve\/main\r\n```\n\n### Steps to reproduce the bug\n\n```\r\n!pip install -q datasets\r\nfrom datasets import load_dataset\r\nlang = \"ur\"\r\nfname = \"xp3_facebook_flores_spa_Latn-urd_Arab_devtest_ab-spa_Latn-urd_Arab.jsonl\"\r\ndataset = load_dataset(\"bigscience\/xP3\", data_files=f\"{lang}\/{fname}\")\r\n```\n\n### Expected behavior\n\nCorrectly downloads\n\n### Environment info\n\nlatest versions","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5825\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5825\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5824","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5824\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5824\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5824\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5824","id":1697152148,"node_id":"PR_kwDODunzps5P1rIZ","number":5824,"title":"Fix incomplete docstring for `BuilderConfig`","user":{"login":"Laurent2916","id":21087104,"node_id":"MDQ6VXNlcjIxMDg3MTA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/21087104?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Laurent2916","html_url":"https:\/\/github.com\/Laurent2916","followers_url":"https:\/\/api.github.com\/users\/Laurent2916\/followers","following_url":"https:\/\/api.github.com\/users\/Laurent2916\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Laurent2916\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Laurent2916\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Laurent2916\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Laurent2916\/orgs","repos_url":"https:\/\/api.github.com\/users\/Laurent2916\/repos","events_url":"https:\/\/api.github.com\/users\/Laurent2916\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Laurent2916\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","
<details>\n<summary>Show benchmarks<\/summary>\n\nPyArrow==8.0.0\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007658 \/ 0.011353 (-0.003695) | 0.005497 \/ 0.011008 (-0.005511) | 0.097142 \/ 0.038508 (0.058633) | 0.034602 \/ 0.023109 (0.011493) | 0.304191 \/ 0.275898 (0.028293) | 0.329103 \/ 0.323480 (0.005624) | 0.005936 \/ 0.007986 (-0.002049) | 0.004324 \/ 0.004328 (-0.000004) | 0.073387 \/ 0.004250 (0.069137) | 0.049657 \/ 0.037052 (0.012604) | 0.301352 \/ 0.258489 (0.042863) | 0.343095 \/ 0.293841 (0.049254) | 0.036767 \/ 0.128546 (-0.091779) | 0.012438 \/ 0.075646 (-0.063208) | 0.333804 \/ 0.419271 (-0.085468) | 0.064557 \/ 0.043533 (0.021024) | 0.302397 \/ 0.255139 (0.047258) | 0.319739 \/ 0.283200 (0.036540) | 0.119264 \/ 0.141683 (-0.022418) | 1.465309 \/ 1.452155 (0.013155) | 1.578194 \/ 1.492716 (0.085478) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.256552 \/ 0.018006 (0.238545) | 0.555344 \/ 0.000490 (0.554854) | 0.004845 \/ 0.000200 (0.004645) | 0.000082 \/ 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.027215 \/ 0.037411 (-0.010197) | 0.107071 \/ 0.014526 (0.092545) | 0.116343 \/ 0.176557 (-0.060213) | 0.172646 \/ 0.737135 (-0.564490) | 0.123366 \/ 0.296338 (-0.172973) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.411421 \/ 0.215209 (0.196212) | 4.126028 \/ 2.077655 (2.048373) | 1.975826 \/ 1.504120 (0.471706) | 1.784404 \/ 1.541195 (0.243210) | 1.848697 \/ 1.468490 (0.380207) | 0.686400 \/ 4.584777 (-3.898377) | 
3.677649 \/ 3.745712 (-0.068063) | 2.077787 \/ 5.269862 (-3.192075) | 1.310912 \/ 4.565676 (-3.254764) | 0.083980 \/ 0.424275 (-0.340295) | 0.012183 \/ 0.007607 (0.004575) | 0.506969 \/ 0.226044 (0.280924) | 5.094730 \/ 2.268929 (2.825802) | 2.419790 \/ 55.444624 (-53.024834) | 2.106592 \/ 6.876477 (-4.769884) | 2.244309 \/ 2.142072 (0.102237) | 0.814312 \/ 4.805227 (-3.990915) | 0.167872 \/ 6.500664 (-6.332792) | 0.065339 \/ 0.075469 (-0.010130) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.193314 \/ 1.841788 (-0.648474) | 14.980621 \/ 8.074308 (6.906313) | 14.352452 \/ 10.191392 (4.161060) | 0.164531 \/ 0.680424 (-0.515893) | 0.017432 \/ 0.534201 (-0.516769) | 0.422193 \/ 0.579283 (-0.157090) | 0.410047 \/ 0.434364 (-0.024317) | 0.497011 \/ 0.540337 (-0.043326) | 0.581395 \/ 1.386936 (-0.805541) |\n\n<\/details>\nPyArrow==latest\n\n
\nShow updated benchmarks!<\/summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.007214 \/ 0.011353 (-0.004139) | 0.005449 \/ 0.011008 (-0.005559) | 0.074320 \/ 0.038508 (0.035812) | 0.034261 \/ 0.023109 (0.011152) | 0.378265 \/ 0.275898 (0.102367) | 0.414419 \/ 0.323480 (0.090939) | 0.005804 \/ 0.007986 (-0.002182) | 0.004205 \/ 0.004328 (-0.000124) | 0.073266 \/ 0.004250 (0.069015) | 0.050444 \/ 0.037052 (0.013392) | 0.372999 \/ 0.258489 (0.114510) | 0.436032 \/ 0.293841 (0.142191) | 0.035432 \/ 0.128546 (-0.093114) | 0.012581 \/ 0.075646 (-0.063065) | 0.085777 \/ 0.419271 (-0.333495) | 0.046902 \/ 0.043533 (0.003369) | 0.378732 \/ 0.255139 (0.123593) | 0.401746 \/ 0.283200 (0.118547) | 0.113398 \/ 0.141683 (-0.028285) | 1.463851 \/ 1.452155 (0.011696) | 1.566387 \/ 1.492716 (0.073670) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new \/ old (diff) | 0.261246 \/ 0.018006 (0.243240) | 0.546730 \/ 0.000490 (0.546241) | 0.005245 \/ 0.000200 (0.005045) | 0.000103 \/ 0.000054 (0.000048) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new \/ old (diff) | 0.029441 \/ 0.037411 (-0.007970) | 0.111834 \/ 0.014526 (0.097308) | 0.122411 \/ 0.176557 (-0.054145) | 0.171288 \/ 0.737135 (-0.565847) | 0.130338 \/ 0.296338 (-0.166001) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 0.433405 \/ 0.215209 (0.218196) | 4.315790 \/ 2.077655 (2.238135) | 2.121934 \/ 1.504120 (0.617814) | 1.924123 \/ 1.541195 (0.382928) | 2.029077 \/ 1.468490 (0.560587) | 0.710245 \/ 4.584777 (-3.874532) | 
3.844393 \/ 3.745712 (0.098681) | 3.576580 \/ 5.269862 (-1.693281) | 1.930985 \/ 4.565676 (-2.634691) | 0.092186 \/ 0.424275 (-0.332090) | 0.012307 \/ 0.007607 (0.004700) | 0.533722 \/ 0.226044 (0.307677) | 5.324447 \/ 2.268929 (3.055519) | 2.615451 \/ 55.444624 (-52.829174) | 2.282310 \/ 6.876477 (-4.594167) | 2.319847 \/ 2.142072 (0.177774) | 0.849364 \/ 4.805227 (-3.955864) | 0.172722 \/ 6.500664 (-6.327942) | 0.064721 \/ 0.075469 (-0.010748) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new \/ old (diff) | 1.289942 \/ 1.841788 (-0.551846) | 15.875062 \/ 8.074308 (7.800754) | 14.784682 \/ 10.191392 (4.593290) | 0.144432 \/ 0.680424 (-0.535991) | 0.017703 \/ 0.534201 (-0.516498) | 0.424357 \/ 0.579283 (-0.154926) | 0.419078 \/ 0.434364 (-0.015286) | 0.489331 \/ 0.540337 (-0.051006) | 0.585284 \/ 1.386936 (-0.801652) |\n\n<\/details>\n<\/details>\n\n![](https:\/\/cml.dev\/watermark.png#e3f4f124a1b118a5bfff5bae76b25a68aedbebbc \"CML watermark\")\n"],"created_at":1683272068000,"updated_at":1683290354000,"closed_at":1683289914000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/5824","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5824","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5824.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/5824.patch","merged_at":1683289914000},"body":"Fixes #5820\r\nAlso fixed a couple of typos I spotted","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5824\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5824\/timeline","performed_via_github_app":null,"state_reason":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5823","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5823\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5823\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5823\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5823","id":1697024789,"node_id":"I_kwDODunzps5lJosV","number":5823,"title":"[2.12.0] DatasetDict.save_to_disk not saving to 
S3","user":{"login":"thejamesmarq","id":5233185,"node_id":"MDQ6VXNlcjUyMzMxODU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5233185?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thejamesmarq","html_url":"https:\/\/github.com\/thejamesmarq","followers_url":"https:\/\/api.github.com\/users\/thejamesmarq\/followers","following_url":"https:\/\/api.github.com\/users\/thejamesmarq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thejamesmarq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thejamesmarq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thejamesmarq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thejamesmarq\/orgs","repos_url":"https:\/\/api.github.com\/users\/thejamesmarq\/repos","events_url":"https:\/\/api.github.com\/users\/thejamesmarq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thejamesmarq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Can you try adding the `s3:\/\/` prefix ?\r\n```python\r\nf\"s3:\/\/{s3_bucket}\/{s3_dir}\/{dataset_name}\"\r\n```","Ugh, yeah that was it. Thank you!"],"created_at":1683264179000,"updated_at":1683298878000,"closed_at":1683298877000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nWhen trying to save a `DatasetDict` to a private S3 bucket using `save_to_disk`, the artifacts are instead saved locally, and not in the S3 bucket.\r\n\r\nI have tried using the deprecated `fs` as well as the `storage_options` arguments and I get the same results.\n\n### Steps to reproduce the bug\n\n1. Create a DatsetDict `dataset`\r\n2. Create a S3FileSystem object\r\n`s3 = datasets.filesystems.S3FileSystem(key=aws_access_key_id, secret=aws_secret_access_key)`\r\n3. Save using `dataset_dict.save_to_disk(f\"{s3_bucket}\/{s3_dir}\/{dataset_name}\", storage_options=s3.storage_options)` or `dataset_dict.save_to_disk(f\"{s3_bucket}\/{s3_dir}\/{dataset_name}\", fs=s3)`\r\n4. Check the corresponding S3 bucket and verify nothing has been uploaded\r\n5. 
Check the path at f\"{s3_bucket}\/{s3_dir}\/{dataset_name}\" and verify that files have been saved there\n\n### Expected behavior\n\nArtifacts are uploaded at the f\"{s3_bucket}\/{s3_dir}\/{dataset_name}\" S3 location.\n\n### Environment info\n\n- `datasets` version: 2.12.0\r\n- Platform: macOS-13.3.1-x86_64-i386-64bit\r\n- Python version: 3.11.2\r\n- Huggingface_hub version: 0.14.1\r\n- PyArrow version: 12.0.0\r\n- Pandas version: 2.0.1","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5823\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5823\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5822","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5822\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5822\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5822\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5822","id":1696627308,"node_id":"I_kwDODunzps5lIHps","number":5822,"title":"Audio Dataset with_format torch problem","user":{"login":"paulbauriegel","id":20282916,"node_id":"MDQ6VXNlcjIwMjgyOTE2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/20282916?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/paulbauriegel","html_url":"https:\/\/github.com\/paulbauriegel","followers_url":"https:\/\/api.github.com\/users\/paulbauriegel\/followers","following_url":"https:\/\/api.github.com\/users\/paulbauriegel\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/paulbauriegel\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/paulbauriegel\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/paulbauriegel\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/paulbauriegel\/orgs","repos_url":"https:\/\/api.github.com\/users\/paulbauriegel\/repos","events_url":"https:\/\/api.github.com\/users\/paulbauriegel\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/paulbauriegel\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Can you try with a more recent version of `datasets` ?","Ok, yes it worked with the most recent version. 
Thanks"],"created_at":1683230871000,"updated_at":1683837953000,"closed_at":1683837953000,"author_association":"NONE","active_lock_reason":null,"draft":null,"pull_request":null,"body":"### Describe the bug\n\nCommon Voice v10 Delta (German) Dataset from here https:\/\/commonvoice.mozilla.org\/de\/datasets\r\n\r\n```\r\naudio_dataset = \\\r\n (Dataset\r\n .from_dict({\"audio\": ('\/tmp\/cv-corpus-10.0-delta-2022-07-04\/de\/clips\/' + df.path).to_list()})\r\n .cast_column(\"audio\", Audio(sampling_rate=16_000))\r\n .with_format('numpy'))\r\naudio_dataset[0][\"audio\"]\r\n```\r\n\r\nworks, but\r\n\r\n```\r\naudio_dataset = \\\r\n (Dataset\r\n .from_dict({\"audio\": ('\/tmp\/cv-corpus-10.0-delta-2022-07-04\/de\/clips\/' + df.path).to_list()})\r\n .cast_column(\"audio\", Audio(sampling_rate=16_000))\r\n .with_format('torch'))\r\naudio_dataset[0][\"audio\"]\r\n```\r\n\r\ndoes not instead I get\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\nCell In[54], line 1\r\n----> 1 audio_dataset[0][\"audio\"]\r\n\r\nFile \/anaconda\/envs\/azureml_py38\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py:2154, in Dataset.__getitem__(self, key)\r\n 2152 def __getitem__(self, key): # noqa: F811\r\n 2153 \"\"\"Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools).\"\"\"\r\n-> 2154 return self._getitem(\r\n 2155 key,\r\n 2156 )\r\n\r\nFile \/anaconda\/envs\/azureml_py38\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py:2139, in Dataset._getitem(self, key, decoded, **kwargs)\r\n 2137 formatter = get_formatter(format_type, features=self.features, decoded=decoded, **format_kwargs)\r\n 2138 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)\r\n-> 2139 formatted_output = format_table(\r\n 2140 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns\r\n 2141 )\r\n 2142 return formatted_output\r\n\r\nFile \/anaconda\/envs\/azureml_py38\/lib\/python3.8\/site-packages\/datasets\/formatting\/formatting.py:532, in format_table(table, key, formatter, format_columns, output_all_columns)\r\n 530 python_formatter = PythonFormatter(features=None)\r\n 531 if format_columns is None:\r\n--> 532 return formatter(pa_table, query_type=query_type)\r\n 533 elif query_type == \"column\":\r\n 534 if key in format_columns:\r\n\r\nFile \/anaconda\/envs\/azureml_py38\/lib\/python3.8\/site-packages\/datasets\/formatting\/formatting.py:281, in Formatter.__call__(self, pa_table, query_type)\r\n 279 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]:\r\n 280 if query_type == \"row\":\r\n--> 281 return self.format_row(pa_table)\r\n 282 elif query_type == \"column\":\r\n 283 return self.format_column(pa_table)\r\n\r\nFile \/anaconda\/envs\/azureml_py38\/lib\/python3.8\/site-packages\/datasets\/formatting\/torch_formatter.py:58, in TorchFormatter.format_row(self, pa_table)\r\n 56 def format_row(self, pa_table: pa.Table) -> dict:\r\n 57 row = self.numpy_arrow_extractor().extract_row(pa_table)\r\n---> 58 return self.recursive_tensorize(row)\r\n\r\nFile \/anaconda\/envs\/azureml_py38\/lib\/python3.8\/site-packages\/datasets\/formatting\/torch_formatter.py:54, in TorchFormatter.recursive_tensorize(self, data_struct)\r\n 53 def recursive_tensorize(self, data_struct: dict):\r\n---> 54 return map_nested(self._recursive_tensorize, 
data_struct, map_list=False)\r\n\r\nFile \/anaconda\/envs\/azureml_py38\/lib\/python3.8\/site-packages\/datasets\/utils\/py_utils.py:356, in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types, disable_tqdm, desc)\r\n 354 num_proc = 1\r\n 355 if num_proc <= 1 or len(iterable) <= num_proc:\r\n--> 356 mapped = [\r\n 357 _single_map_nested((function, obj, types, None, True, None))\r\n 358 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)\r\n 359 ]\r\n 360 else:\r\n 361 split_kwds = [] # We organize the splits ourselve (contiguous splits)\r\n\r\nFile \/anaconda\/envs\/azureml_py38\/lib\/python3.8\/site-packages\/datasets\/utils\/py_utils.py:357, in (.0)\r\n 354 num_proc = 1\r\n 355 if num_proc <= 1 or len(iterable) <= num_proc:\r\n 356 mapped = [\r\n--> 357 _single_map_nested((function, obj, types, None, True, None))\r\n 358 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)\r\n 359 ]\r\n 360 else:\r\n 361 split_kwds = [] # We organize the splits ourselve (contiguous splits)\r\n\r\nFile \/anaconda\/envs\/azureml_py38\/lib\/python3.8\/site-packages\/datasets\/utils\/py_utils.py:309, in _single_map_nested(args)\r\n 306 pbar = logging.tqdm(pbar_iterable, disable=disable_tqdm, position=rank, unit=\"obj\", desc=pbar_desc)\r\n 308 if isinstance(data_struct, dict):\r\n--> 309 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}\r\n 310 else:\r\n 311 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]\r\n\r\nFile \/anaconda\/envs\/azureml_py38\/lib\/python3.8\/site-packages\/datasets\/utils\/py_utils.py:309, in (.0)\r\n 306 pbar = logging.tqdm(pbar_iterable, disable=disable_tqdm, position=rank, unit=\"obj\", desc=pbar_desc)\r\n 308 if isinstance(data_struct, dict):\r\n--> 309 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}\r\n 310 else:\r\n 311 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]\r\n\r\nFile \/anaconda\/envs\/azureml_py38\/lib\/python3.8\/site-packages\/datasets\/utils\/py_utils.py:293, in _single_map_nested(args)\r\n 291 # Singleton first to spare some computation\r\n 292 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):\r\n--> 293 return function(data_struct)\r\n 295 # Reduce logging to keep things readable in multiprocessing with tqdm\r\n 296 if rank is not None and logging.get_verbosity() < logging.WARNING:\r\n\r\nFile \/anaconda\/envs\/azureml_py38\/lib\/python3.8\/site-packages\/datasets\/formatting\/torch_formatter.py:51, in TorchFormatter._recursive_tensorize(self, data_struct)\r\n 49 if data_struct.dtype == np.object: # pytorch tensors cannot be instantied from an array of objects\r\n 50 return [self.recursive_tensorize(substruct) for substruct in data_struct]\r\n---> 51 return self._tensorize(data_struct)\r\n\r\nFile \/anaconda\/envs\/azureml_py38\/lib\/python3.8\/site-packages\/datasets\/formatting\/torch_formatter.py:38, in TorchFormatter._tensorize(self, value)\r\n 35 import torch\r\n 37 default_dtype = {}\r\n---> 38 if np.issubdtype(value.dtype, np.integer):\r\n 39 default_dtype = {\"dtype\": torch.int64}\r\n 40 elif np.issubdtype(value.dtype, np.floating):\r\n\r\nAttributeError: 'NoneType' object has no attribute 'dtype'\r\n```\n\n### Steps to reproduce the bug\n\n1. Download some audio dataset in this case I used Common Voice v10 Delta (German) Dataset from here https:\/\/commonvoice.mozilla.org\/de\/datasets\r\n2. 
Try the code from above\n\n### Expected behavior\n\nIt should work for torch\n\n### Environment info\n\npytorch: 2.0.0\r\ndatasets: 2.3.2\r\nnumpy: 1.21.6\r\n\r\nPython: 3.8\r\nLinux","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5822\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5822\/timeline","performed_via_github_app":null,"state_reason":"completed","is_pull_request":false}
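As a closing note on issue #5822, a sketch of the same pipeline on a recent `datasets` release, which is what resolved the issue (the clip paths are placeholders):

```python
# Sketch, assuming a recent `datasets` release where torch formatting of
# decoded audio columns is supported; the mp3 paths are placeholders.
from datasets import Audio, Dataset

audio_dataset = (
    Dataset.from_dict({"audio": ["clips/a.mp3", "clips/b.mp3"]})
    .cast_column("audio", Audio(sampling_rate=16_000))
    .with_format("torch")
)
# Returns a dict with "path", "array" (a torch.Tensor), and "sampling_rate".
audio_dataset[0]["audio"]
```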