Sakugabooru2025: Curated Animation Clips from Enthusiasts

Sakugabooru.com is a booru-style imageboard dedicated to collecting and sharing noteworthy animation clips, emphasizing Japanese anime but open to creators worldwide. Over the years, it has amassed more than 240,000 animation clips, alongside informative blog posts for anime fans everywhere.

With growing interest in generative video models and AI animation, the scarcity of suitable animation video datasets has become a real obstacle. This dataset was created as a resource for animation enthusiasts and industry researchers, to help advance the frontier of animation-video research.

Potential uses include:

  • text-to-video training
  • multimodal animation research
  • video quality/aesthetics analysis

Dataset Details

This dataset includes 155,238 video clips and 8,680 images of unique or noteworthy animation:

image files: 8680, video files: 155238, json files: 240242
total unique files: 404160
last post id: 273264

Note that Sakugabooru’s archives have been reduced by DMCA strikes over the years. Popular titles like Mob Psycho 100 may not appear because they were removed from the site.

  • Curated by: trojblue
  • Language(s) (NLP): English, Japanese
  • License: MIT

Dataset Structure

Files are split into tar archives in groups of 1,000 post IDs, i.e. shard index = post_id // 1000 (see the sketch after this list):

  • Example: post ID 42 goes to 0000.tar, post ID 5306 goes to 0005.tar.
  • Each tar is about 1GB, suitable for large-batch data processing.
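
A minimal sketch of the ID-to-shard mapping (the four-digit shard names match the examples above):

def shard_name(post_id: int) -> str:
    # 1,000 consecutive post IDs per shard: 42 -> "0000.tar", 5306 -> "0005.tar"
    return f"{post_id // 1000:04d}.tar"

assert shard_name(42) == "0000.tar"
assert shard_name(5306) == "0005.tar"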

The dataset follows WebDataset conventions: each tar bundles the media files together with their matching JSON metadata:

./media/0005.tar
# ./media/0005.tar/{sakuga_5306.webm}
# ./media/0005.tar/{sakuga_5306.json}
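
As a reading example, a minimal sketch using the webdataset library (illustrative only; keys follow the layout above, and image posts carry an image extension instead of webm):

import json
import webdataset as wds

# Each sample is a dict keyed by file extension, grouped by shared basename.
for sample in wds.WebDataset("media/0005.tar"):
    meta = json.loads(sample["json"])   # per-post metadata
    video = sample.get("webm")          # raw video bytes, if this post is a video
    print(sample["__key__"], meta["post_id"])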

The index file ./sakugabooru-index.json is generated using widsindex (see WebDataset FAQ):

widsindex *.tar > sakugabooru-index.json

Dataset Creation

This dataset was sourced directly from Sakugabooru by querying post IDs from 0 up to the latest (273,264 as of Dec 28, 2024):

https://www.sakugabooru.com/post/show/{id}
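
A minimal sketch of that crawl loop (illustrative only; the real scraper, including parsing, rate limiting, and retries, lives in arot-devs/sakuga-scraper):

import requests

URL = "https://www.sakugabooru.com/post/show/{id}"
LAST_POST_ID = 273264  # latest post ID as of Dec 28, 2024

for post_id in range(LAST_POST_ID + 1):
    resp = requests.get(URL.format(id=post_id), timeout=30)
    if resp.status_code != 200:
        continue  # missing or deleted post
    # extract metadata and media links from resp.text (see arot-devs/sakuga-scraper)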

Data Collection and Processing

Since Sakugabooru lacks a direct JSON API, metadata was gathered by fetching each post page and extracting the relevant fields. The scraper code is available here: arot-devs/sakuga-scraper.

Each post’s JSON metadata follows a format like:

{
    "post_id": 100112,
    "post_url": "https://www.sakugabooru.com/post/show/100112",
    "image_url": null,
    "tags": {},
    ...
    "status_notice": [
        "This post was deleted.\n\n      Reason: ..."
    ]
}

No additional processing was performed beyond metadata collection.
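
For example, a small sketch that skips deleted posts using the fields shown above (the is_deleted helper is hypothetical, not part of the dataset tooling):

import json

def is_deleted(meta: dict) -> bool:
    # status_notice carries site banners such as "This post was deleted. ..."
    return any("deleted" in notice.lower() for notice in meta.get("status_notice", []))

with open("sakuga_100112.json") as f:
    meta = json.load(f)

if not is_deleted(meta):
    print(meta["post_id"], meta["post_url"])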

Bias, Risks, and Limitations

Owing to DMCA takedowns, duplicates, and other factors, only about 60% of post IDs (163,918 of 273,264) have attached media. The selection also skews toward older Japanese animation.

Recommendations

For more interesting reads about animation, visit: Sakuga Blog – The Art of Japanese Animation

You can support Sakugabooru through: Sakugabooru | Patreon
