trojblue committed
Commit 617bafb · verified · 1 parent: f974dff

Update README.md

Files changed (1): README.md (+49, −495)
README.md CHANGED
@@ -165,27 +165,70 @@ A refreshed, Parquet-formatted metadata dump of Danbooru, current as of January

 ## Uses

 This dataset can be used to:
 - Retrieve the full Danbooru image set via the metadata’s URLs
 - Train or fine-tune an image tagger
 - Compare against previous metadata versions to track changes, tag evolution, and historical trends

- This dataset can be loaded or filtered with the Huggingface `datasets` library:

 ```python
- from datasets import Dataset, load_dataset
-
- danbooru_dataset = load_dataset("trojblue/danbooru2025-metadata", split="train")
- df = danbooru_dataset.to_pandas()
- ```

 ## Dataset Structure

 Below is a partial overview of the DataFrame columns, derived directly from the Danbooru JSONs:

- **column values preview:**

 ```python
 import unibox as ub
@@ -333,495 +376,6 @@ Index(['approver_id', 'bit_flags', 'created_at', 'down_score', 'fav_count',

 </div>

- **memory size preview**:
-
- <div>
- <style scoped>
-     .dataframe tbody tr th:only-of-type { vertical-align: middle; }
-     .dataframe tbody tr th { vertical-align: top; }
-     .dataframe thead th { text-align: right; }
- </style>
- <table border="1" class="dataframe">
- <thead>
- <tr style="text-align: right;"><th></th><th>Column</th><th>Memory Usage</th><th>Readable Memory Usage</th><th>Dtype</th></tr>
- </thead>
- <tbody>
- <tr><th>37</th><td>media_asset_variants</td><td>6504773310</td><td>6.06 GB</td><td>object</td></tr>
- <tr><th>50</th><td>tag_string</td><td>4010312632</td><td>3.73 GB</td><td>object</td></tr>
- <tr><th>54</th><td>tag_string_general</td><td>3219096004</td><td>3.00 GB</td><td>object</td></tr>
- <tr><th>19</th><td>large_file_url</td><td>1120901064</td><td>1.04 GB</td><td>object</td></tr>
- <tr><th>7</th><td>file_url</td><td>1091768146</td><td>1.02 GB</td><td>object</td></tr>
- <tr><th>40</th><td>preview_file_url</td><td>1083377159</td><td>1.01 GB</td><td>object</td></tr>
- <tr><th>43</th><td>source</td><td>1028480196</td><td>980.84 MB</td><td>object</td></tr>
- <tr><th>34</th><td>media_asset_pixel_hash</td><td>766839397</td><td>731.32 MB</td><td>object</td></tr>
- <tr><th>33</th><td>media_asset_md5</td><td>749456967</td><td>714.74 MB</td><td>object</td></tr>
- <tr><th>23</th><td>md5</td><td>749456967</td><td>714.74 MB</td><td>object</td></tr>
- <tr><th>2</th><td>created_at</td><td>740990878</td><td>706.66 MB</td><td>object</td></tr>
- <tr><th>36</th><td>media_asset_updated_at</td><td>740990878</td><td>706.66 MB</td><td>object</td></tr>
- <tr><th>24</th><td>media_asset_created_at</td><td>740990878</td><td>706.66 MB</td><td>object</td></tr>
- <tr><th>57</th><td>updated_at</td><td>740990878</td><td>706.66 MB</td><td>object</td></tr>
- <tr><th>52</th><td>tag_string_character</td><td>735974700</td><td>701.88 MB</td><td>object</td></tr>
- <tr><th>55</th><td>tag_string_meta</td><td>734235943</td><td>700.22 MB</td><td>object</td></tr>
- <tr><th>53</th><td>tag_string_copyright</td><td>664510146</td><td>633.73 MB</td><td>object</td></tr>
- <tr><th>51</th><td>tag_string_artist</td><td>588454689</td><td>561.19 MB</td><td>object</td></tr>
- <tr><th>27</th><td>media_asset_file_key</td><td>557435694</td><td>531.61 MB</td><td>object</td></tr>
- <tr><th>35</th><td>media_asset_status</td><td>542818899</td><td>517.67 MB</td><td>object</td></tr>
- <tr><th>26</th><td>media_asset_file_ext</td><td>516984380</td><td>493.03 MB</td><td>object</td></tr>
- <tr><th>5</th><td>file_ext</td><td>516984380</td><td>493.03 MB</td><td>object</td></tr>
- <tr><th>41</th><td>rating</td><td>499738034</td><td>476.59 MB</td><td>object</td></tr>
- <tr><th>21</th><td>last_commented_at</td><td>259587166</td><td>247.56 MB</td><td>object</td></tr>
- <tr><th>20</th><td>last_comment_bumped_at</td><td>252905426</td><td>241.19 MB</td><td>object</td></tr>
- <tr><th>22</th><td>last_noted_at</td><td>247393688</td><td>235.93 MB</td><td>object</td></tr>
- <tr><th>1</th><td>bit_flags</td><td>68929384</td><td>65.74 MB</td><td>int64</td></tr>
- <tr><th>49</th><td>tag_count_meta</td><td>68929384</td><td>65.74 MB</td><td>int64</td></tr>
- <tr><th>42</th><td>score</td><td>68929384</td><td>65.74 MB</td><td>int64</td></tr>
- <tr><th>38</th><td>parent_id</td><td>68929384</td><td>65.74 MB</td><td>float64</td></tr>
- <tr><th>39</th><td>pixiv_id</td><td>68929384</td><td>65.74 MB</td><td>float64</td></tr>
- <tr><th>30</th><td>media_asset_image_height</td><td>68929384</td><td>65.74 MB</td><td>int64</td></tr>
- <tr><th>29</th><td>media_asset_id</td><td>68929384</td><td>65.74 MB</td><td>int64</td></tr>
- <tr><th>31</th><td>media_asset_image_width</td><td>68929384</td><td>65.74 MB</td><td>int64</td></tr>
- <tr><th>28</th><td>media_asset_file_size</td><td>68929384</td><td>65.74 MB</td><td>int64</td></tr>
- <tr><th>12</th><td>id</td><td>68929384</td><td>65.74 MB</td><td>int64</td></tr>
- <tr><th>0</th><td>approver_id</td><td>68929384</td><td>65.74 MB</td><td>float64</td></tr>
- <tr><th>4</th><td>fav_count</td><td>68929384</td><td>65.74 MB</td><td>int64</td></tr>
- <tr><th>3</th><td>down_score</td><td>68929384</td><td>65.74 MB</td><td>int64</td></tr>
- <tr><th>6</th><td>file_size</td><td>68929384</td><td>65.74 MB</td><td>int64</td></tr>
- <tr><th>25</th><td>media_asset_duration</td><td>68929384</td><td>65.74 MB</td><td>float64</td></tr>
- <tr><th>14</th><td>image_width</td><td>68929384</td><td>65.74 MB</td><td>int64</td></tr>
- <tr><th>13</th><td>image_height</td><td>68929384</td><td>65.74 MB</td><td>int64</td></tr>
- <tr><th>44</th><td>tag_count</td><td>68929384</td><td>65.74 MB</td><td>int64</td></tr>
- <tr><th>48</th><td>tag_count_general</td><td>68929384</td><td>65.74 MB</td><td>int64</td></tr>
- <tr><th>47</th><td>tag_count_copyright</td><td>68929384</td><td>65.74 MB</td><td>int64</td></tr>
- <tr><th>46</th><td>tag_count_character</td><td>68929384</td><td>65.74 MB</td><td>int64</td></tr>
- <tr><th>56</th><td>up_score</td><td>68929384</td><td>65.74 MB</td><td>int64</td></tr>
- <tr><th>45</th><td>tag_count_artist</td><td>68929384</td><td>65.74 MB</td><td>int64</td></tr>
- <tr><th>58</th><td>uploader_id</td><td>68929384</td><td>65.74 MB</td><td>int64</td></tr>
- <tr><th>10</th><td>has_large</td><td>8616173</td><td>8.22 MB</td><td>bool</td></tr>
- <tr><th>15</th><td>is_banned</td><td>8616173</td><td>8.22 MB</td><td>bool</td></tr>
- <tr><th>16</th><td>is_deleted</td><td>8616173</td><td>8.22 MB</td><td>bool</td></tr>
- <tr><th>17</th><td>is_flagged</td><td>8616173</td><td>8.22 MB</td><td>bool</td></tr>
- <tr><th>11</th><td>has_visible_children</td><td>8616173</td><td>8.22 MB</td><td>bool</td></tr>
- <tr><th>18</th><td>is_pending</td><td>8616173</td><td>8.22 MB</td><td>bool</td></tr>
- <tr><th>8</th><td>has_active_children</td><td>8616173</td><td>8.22 MB</td><td>bool</td></tr>
- <tr><th>9</th><td>has_children</td><td>8616173</td><td>8.22 MB</td><td>bool</td></tr>
- <tr><th>32</th><td>media_asset_is_public</td><td>8616173</td><td>8.22 MB</td><td>bool</td></tr>
- </tbody>
- </table>
- </div>
-
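As a side note on the preview deleted above: pandas can regenerate such a per-column memory table at any time with `DataFrame.memory_usage(deep=True)`. A minimal, hypothetical sketch (the toy DataFrame stands in for the real dump; only a couple of the README's columns are mimicked):

```python
import pandas as pd

# Toy frame standing in for the real metadata dump (columns assumed from the schema preview).
df = pd.DataFrame({
    "id": [1, 2, 3],
    "tag_string": ["1girl solo", "1boy", "scenery sky cloud"],
})

# deep=True counts the actual string payloads of object columns, not just pointers.
mem = df.memory_usage(deep=True, index=False)  # bytes per column
preview = pd.DataFrame({
    "Column": mem.index,
    "Memory Usage": mem.values,
    "Dtype": [str(df[c].dtype) for c in mem.index],
}).sort_values("Memory Usage", ascending=False)
print(preview.to_string(index=False))
```

On the real dump this is how object columns like `media_asset_variants` and `tag_string` end up dominating the totals: with `deep=True` every stored string is measured, while fixed-width `int64`/`bool` columns cost a constant number of bytes per row.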
- ## Dataset Creation
-
- We scraped all post IDs on Danbooru from 1 up to the latest. Some restricted tags (e.g. `loli`) were hidden by the site and require a gold account to access, so they are not present.
- For a more complete (but older) metadata reference, you may wish to combine this with Danbooru2021 or similar previous scrapes.
-
- The scraping process used a pool of roughly 400 IPs over six hours, ensuring consistent tag definitions. Below is a simplified example of the process used to convert the metadata into Parquet:
-
- ```python
- import pandas as pd
- from pandarallel import pandarallel
-
- # Initialize pandarallel
- pandarallel.initialize(nb_workers=4, progress_bar=True)
-
- def flatten_dict(d, parent_key='', sep='_'):
-     """
-     Flattens a nested dictionary.
-     """
-     items = []
-     for k, v in d.items():
-         new_key = f"{parent_key}{sep}{k}" if parent_key else k
-         if isinstance(v, dict):
-             items.extend(flatten_dict(v, new_key, sep=sep).items())
-         elif isinstance(v, list):
-             items.append((new_key, ', '.join(map(str, v))))
-         else:
-             items.append((new_key, v))
-     return dict(items)
-
- def extract_all_illust_info(json_content):
-     """
-     Parses and flattens Danbooru JSON into a pandas Series.
-     """
-     flattened_data = flatten_dict(json_content)
-     return pd.Series(flattened_data)
-
- def dicts_to_dataframe_parallel(dicts):
-     """
-     Converts a list of dicts to a flattened DataFrame using pandarallel.
-     """
-     df = pd.DataFrame(dicts)
-     flattened_df = df.parallel_apply(lambda row: extract_all_illust_info(row.to_dict()), axis=1)
-     return flattened_df
- ```

 ### Recommendations

 Users should be aware of potential biases and limitations, including the presence of adult content in some tags. More details and mitigations may be needed.
 
 ## Uses

+ The dataset can be loaded or filtered with the Huggingface `datasets` library:
+
+ ```python
+ from datasets import Dataset, load_dataset
+
+ danbooru_dataset = load_dataset("trojblue/danbooru2025-metadata", split="train")
+ df = danbooru_dataset.to_pandas()
+ ```

 This dataset can be used to:
 - Retrieve the full Danbooru image set via the metadata’s URLs
 - Train or fine-tune an image tagger
 - Compare against previous metadata versions to track changes, tag evolution, and historical trends

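To make the "loaded or filtered" wording concrete: once the metadata is in pandas, subsets can be selected by tag or score before downloading anything. A hedged sketch (the tiny DataFrame below is a stand-in for the real download; `tag_string`, `score`, and `rating` are columns shown in the schema preview):

```python
import pandas as pd

# Stand-in for df = load_dataset(...).to_pandas(); values are invented.
df = pd.DataFrame({
    "id": [1, 2, 3],
    "tag_string": ["1girl solo", "1boy sword", "1girl 1boy"],
    "score": [120, 5, 48],
    "rating": ["g", "s", "q"],
})

# Keep well-scored posts carrying a given tag; tag_string is space-separated.
subset = df[(df["score"] >= 40) & df["tag_string"].str.split().apply(lambda t: "1girl" in t)]
print(subset["id"].tolist())  # [1, 3]
```

Splitting `tag_string` before membership-testing avoids false hits from substring matches (e.g. `1girl` inside another tag name).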
+
+ ## Dataset Creation
+
+ We scraped all post IDs on Danbooru from 1 up to the latest. Some restricted tags (e.g. `loli`) were hidden by the site and require a gold account to access, so they are not present.
+ For a more complete (but older) metadata reference, you may wish to combine this with Danbooru2021 or similar previous scrapes.
+
+ The scraping process used a pool of roughly 400 IPs over six hours, ensuring consistent tag definitions. Below is a simplified example of the process used to convert the metadata into Parquet:

 ```python
+ import pandas as pd
+ from pandarallel import pandarallel
+
+ # Initialize pandarallel
+ pandarallel.initialize(nb_workers=4, progress_bar=True)
+
+ def flatten_dict(d, parent_key='', sep='_'):
+     """
+     Flattens a nested dictionary.
+     """
+     items = []
+     for k, v in d.items():
+         new_key = f"{parent_key}{sep}{k}" if parent_key else k
+         if isinstance(v, dict):
+             items.extend(flatten_dict(v, new_key, sep=sep).items())
+         elif isinstance(v, list):
+             items.append((new_key, ', '.join(map(str, v))))
+         else:
+             items.append((new_key, v))
+     return dict(items)
+
+ def extract_all_illust_info(json_content):
+     """
+     Parses and flattens Danbooru JSON into a pandas Series.
+     """
+     flattened_data = flatten_dict(json_content)
+     return pd.Series(flattened_data)
+
+ def dicts_to_dataframe_parallel(dicts):
+     """
+     Converts a list of dicts to a flattened DataFrame using pandarallel.
+     """
+     df = pd.DataFrame(dicts)
+     flattened_df = df.parallel_apply(lambda row: extract_all_illust_info(row.to_dict()), axis=1)
+     return flattened_df
+ ```
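For illustration, the flattening step above is what turns nested JSON keys like `media_asset.md5` into underscore-joined columns such as `media_asset_md5`. A self-contained demonstration, without the pandarallel machinery (the sample post fragment is invented, shaped loosely like a Danbooru record):

```python
def flatten_dict(d, parent_key='', sep='_'):
    """Flatten nested dicts into one level; lists become comma-joined strings."""
    items = []
    for k, v in d.items():
        new_key = f"{parent_key}{sep}{k}" if parent_key else k
        if isinstance(v, dict):
            items.extend(flatten_dict(v, new_key, sep=sep).items())
        elif isinstance(v, list):
            items.append((new_key, ', '.join(map(str, v))))
        else:
            items.append((new_key, v))
    return dict(items)

# Hypothetical post fragment shaped like a Danbooru JSON record.
post = {
    "id": 42,
    "media_asset": {"md5": "abc123", "variants": ["180x180", "360x360"]},
}
print(flatten_dict(post))
# {'id': 42, 'media_asset_md5': 'abc123', 'media_asset_variants': '180x180, 360x360'}
```

Joining lists into strings is what makes the result Parquet-friendly with a flat schema; it also explains why columns like `media_asset_variants` appear as plain strings in the dump.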
 
 ## Dataset Structure

 Below is a partial overview of the DataFrame columns, derived directly from the Danbooru JSONs:

 ```python
 import unibox as ub

 </div>

 ### Recommendations

 Users should be aware of potential biases and limitations, including the presence of adult content in some tags. More details and mitigations may be needed.