|
--- |
|
license: cc-by-4.0 |
|
tags: |
|
- super-resolution |
|
pretty_name: BHI SISR Dataset |
|
size_categories: |
|
- 100K<n<1M |
|
--- |
|
|
|
# BHI SISR Dataset |
|
|
|
## Content |
|
- [HR Dataset](https://huggingface.co/datasets/Phips/BHI#hr-dataset) |
|
- [Used Datasets](https://huggingface.co/datasets/Phips/BHI#used-datasets) |
|
- [Tiling](https://huggingface.co/datasets/Phips/BHI#tiling) |
|
- [BHI Filtering](https://huggingface.co/datasets/Phips/BHI#bhi-filtering) |
|
- [Files](https://huggingface.co/datasets/Phips/BHI#files) |
|
- [Upload](https://huggingface.co/datasets/Phips/BHI#upload) |
|
- [Corresponding LR Sets](https://huggingface.co/datasets/Phips/BHI#corresponding-lr-sets) |
|
- [Trained Models](https://huggingface.co/datasets/Phips/BHI#trained-models)
|
|
|
## HR Dataset |
|
|
|
The BHI SISR Dataset is meant for training single image super-resolution (SISR) models. It is the result of tests on my BHI filtering method, about which I wrote [a huggingface community blogpost](https://huggingface.co/blog/Phips/bhi-filtering). In extremely summarized form: removing (by filtering) only the worst-quality tiles from a training set has a much bigger positive effect on training metrics than keeping only the best-quality training tiles.
|
|
|
It consists of 390'035 images, all 512x512px and in the WebP format.
|
|
|
<figure> |
|
<img src="https://cdn-uploads.huggingface.co/production/uploads/634e9aa407e669188d3912f9/bV0oaFKJzdsEqRme_lqU8.png" alt="48 first training tiles">
|
<figcaption>Visual example - the first 48 training tiles</figcaption> |
|
</figure> |
|
|
|
The advantage of such a big dataset: when applying degradations in a randomized manner to create a corresponding LR for paired SISR training, the distribution of degradations and strengths should be sufficient because of the sheer quantity of training tiles. I will create some corresponding x4 LR datasets for this one and publish them as well.
|
|
|
Size on disk:
|
``` |
|
du BHI_HR |
|
131148100 BHI_HR/ |
|
``` |
|
|
|
I am releasing the full dataset here, also with the future in mind: there can of course be (community?) attempts to create distilled versions of this dataset that perform better, since additional metrics or filtering methods might be found that reduce dataset size while achieving better training validation metric performance.
|
|
|
In summary:

- The main advantage of this dataset is its large quantity of normalized (512x512px) training tiles.

- When applying degradations to create a corresponding LR, the distribution of degradation strengths should be sufficient, even when using multiple degradations (see the sketch after this list).

- Big archs in general can profit from the amount of learning content in this dataset (big transformers like [DRCT-L](https://github.com/ming053l/DRCT), [HMA](https://github.com/korouuuuu/HMA), [HAT-L](https://github.com/XPixelGroup/HAT), [HATFIR](https://github.com/Zdafeng/SwinFIR), [ATD](https://github.com/LabShuHangGU/Adaptive-Token-Dictionary), [CFAT](https://github.com/rayabhisek123/CFAT), [RGT](https://github.com/zhengchen1999/RGT), [DAT2](https://github.com/zhengchen1999/dat); probably also diffusion-based upscalers like [osediff](https://github.com/cswry/osediff), [s3diff](https://github.com/arctichare105/s3diff), [SRDiff](https://github.com/LeiaLi/SRDiff), [resshift](https://github.com/zsyoaoa/resshift), [sinsr](https://github.com/wyf0912/sinsr), [cdformer](https://github.com/i2-multimedia-lab/cdformer)). Since it takes a while to reach a new epoch, a higher number of training iterations is advised for the big archs to profit from the full content. The filtering method used here made sure that metrics should not worsen during training (for example through blockiness filtering).

- This dataset could still be distilled further to reach higher quality, for example if another promising filtering method is applied to it in the future.
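To make the point about degradation distribution concrete, here is a minimal illustrative sketch in Python with OpenCV (this is not the exact pipeline used for the released LR sets; the degradation choices and ranges are just examples):

```python
# Illustrative only: pick a random downscaling algorithm and a random
# jpg compression strength per tile, so that degradation types and
# strengths spread well across ~390k tiles.
import random
import cv2

INTERPS = [cv2.INTER_NEAREST, cv2.INTER_LINEAR, cv2.INTER_CUBIC,
           cv2.INTER_AREA, cv2.INTER_LANCZOS4]

def random_lr(hr, scale=4):
    h, w = hr.shape[:2]
    lr = cv2.resize(hr, (w // scale, h // scale),
                    interpolation=random.choice(INTERPS))
    if random.random() < 0.5:  # apply jpg compression half the time
        quality = random.randint(40, 95)
        _, buf = cv2.imencode(".jpg", lr, [cv2.IMWRITE_JPEG_QUALITY, quality])
        lr = cv2.imdecode(buf, cv2.IMREAD_COLOR)
    return lr
```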
|
|
|
### Used Datasets |
|
|
|
The BHI SISR Dataset was built from the following datasets:
|
|
|
- [HQ50K](https://github.com/littleYaang/HQ-50K)
- [ImageNet](https://www.image-net.org/)
- [FFHQ](https://github.com/NVlabs/ffhq-dataset)
- [LSDIR](https://github.com/ofsoundof/LSDIR)
- [DF2K](https://www.kaggle.com/datasets/thaihoa1476050/df2k-ost)
- [OST](https://www.kaggle.com/datasets/thaihoa1476050/df2k-ost)
- [iNaturalist 2019](https://github.com/visipedia/inat_comp/tree/master/2019)
- [COCO 2017 Train](https://cocodataset.org/#download)
- [COCO 2017 Unlabeled](https://cocodataset.org/#download)
- [Nomosv2](https://github.com/neosr-project/neosr?tab=readme-ov-file#-datasets)
- [HFA2K](https://github.com/neosr-project/neosr?tab=readme-ov-file#-datasets)
- [Nomos_Uni](https://github.com/neosr-project/neosr?tab=readme-ov-file#-datasets)
- [ModernAnimation1080_v3](https://huggingface.co/datasets/Zarxrax/ModernAnimation1080_v3)
- [Digital_Art_v2](https://huggingface.co/datasets/umzi/digital_art_v2)
|
|
|
|
|
### Tiling |
|
|
|
These datasets were then tiled to 512x512px for improved I/O speed during training; normalized image dimensions are also nice because processing then takes consistent resources.
|
|
|
In some cases tiling led to fewer images in the dataset, because images with dimensions < 512px were filtered out. Some examples:

- COCO 2017 Unlabeled: 123'403 images -> 8'814 tiles
- COCO 2017 Train: 118'287 images -> 8'442 tiles
|
|
|
And in some cases it led to more images, because the original images were high-resolution and therefore yielded multiple 512x512 tiles per image. For example, HQ50K -> 213'396 tiles.
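A minimal sketch of such a tiling step (hypothetical, not the exact script used; using PIL): images smaller than 512px in either dimension are skipped, larger ones yield multiple non-overlapping tiles.

```python
# Cut an image into non-overlapping 512x512 tiles; images smaller than
# 512px in either dimension are skipped entirely.
from pathlib import Path
from PIL import Image

TILE = 512

def tile_image(path: Path, out_dir: Path, prefix: str) -> int:
    img = Image.open(path).convert("RGB")
    w, h = img.size
    if w < TILE or h < TILE:
        return 0  # too small -> filtered out
    count = 0
    for top in range(0, h - TILE + 1, TILE):
        for left in range(0, w - TILE + 1, TILE):
            tile = img.crop((left, top, left + TILE, top + TILE))
            tile.save(out_dir / f"{prefix}_{count}.webp", lossless=True)
            count += 1
    return count
```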
|
|
|
### BHI Filtering |
|
|
|
I then filtered these sets with the BHI filtering method using the following thresholds: |
|
|
|
- Blockiness < 30
- HyperIQA >= 0.2
- IC9600 >= 0.4
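As a minimal sketch, the filtering reduces to a simple threshold check per tile (assuming the Blockiness, HyperIQA and IC9600 scores are already computed; the scores below are illustrative placeholders):

```python
# BHI threshold check per tile; a tile is kept only if it passes all three.
def keep_tile(blockiness: float, hyperiqa: float, ic9600: float) -> bool:
    return blockiness < 30 and hyperiqa >= 0.2 and ic9600 >= 0.4

# Placeholder scores: (blockiness, hyperiqa, ic9600)
scores = {
    "LSDIR_000001.webp": (12.3, 0.35, 0.61),  # kept
    "LSDIR_000002.webp": (44.0, 0.50, 0.70),  # dropped: blockiness too high
}
kept = [name for name, s in scores.items() if keep_tile(*s)]
```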
|
|
|
Applying these thresholds led to the following per-dataset tile quantities, which made it into the BHI SISR Dataset:
|
|
|
- DF2K -> 12'462 tiles
- FFHQ -> 35'111 tiles
- HQ50K -> 61'647 tiles
- ImageNet -> 4'479 tiles
- LSDIR -> 116'141 tiles
- OST -> 1'048 tiles
- COCO2017_train -> 5'619 tiles
- COCO2017_unlabeled -> 5'887 tiles
- Digital_Art_v2 -> 1'620 tiles
- HFA2K -> 2'280 tiles
- ModernAnimation1080_v3 -> 4'109 tiles
- Nomos_Uni -> 2'466 tiles
- Nomosv2 -> 5'226 tiles
- inaturalist_2019 -> 131'940 tiles
|
|
|
The main point here: even though this dataset still consists of around 390k tiles, it is already a strongly reduced version of the original datasets combined.
|
|
|
|
|
### Files |
|
|
|
Files are named '{dataset_name}_{index}.webp', so that if one of the used datasets ever turned out to be problematic concerning public access, it could still be removed from this dataset in the future.
|
Some tiles were filtered out in a later step, so don't worry if some index numbers are missing; all files are listed in the [file list](https://huggingface.co/datasets/Phips/BHI/resolve/main/files.txt?download=true).
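Since the naming scheme encodes the source dataset, the file list can be used, for example, to count tiles per source (a small sketch, assuming files.txt contains one filename per line):

```python
# Count tiles per source dataset from files.txt; the dataset name is
# everything before the last underscore in '{dataset_name}_{index}.webp'.
from collections import Counter

with open("files.txt") as f:
    names = [line.strip() for line in f if line.strip()]

counts = Counter(name.rsplit("_", 1)[0] for name in names)
for dataset, n in counts.most_common():
    print(f"{dataset}: {n} tiles")
```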
|
|
|
All scores can also be found in the [scores folder](https://huggingface.co/datasets/Phips/BHI/tree/main/scores).
|
|
|
I converted to WebP for file size reduction: the dataset was originally around 200GB as PNG, even after optimizing with oxipng (`oxipng --strip safe --alpha *.png`). Lossless WebP is simply the best option currently available for lossless file size reduction.

(JPEG XL is not yet supported by cv2 for training. WebP2 is experimental. FLIF was discontinued in favor of JPEG XL.)
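For reference, a minimal sketch of such a conversion with OpenCV (in cv2, an IMWRITE_WEBP_QUALITY value above 100 selects lossless compression):

```python
# Convert a PNG tile to lossless WebP; quality values above 100 trigger
# cv2's lossless WebP mode.
import cv2

img = cv2.imread("tile.png", cv2.IMREAD_UNCHANGED)
cv2.imwrite("tile.webp", img, [cv2.IMWRITE_WEBP_QUALITY, 101])
```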
|
|
|
<figure> |
|
<img src="https://cdn-uploads.huggingface.co/production/uploads/634e9aa407e669188d3912f9/BgkkzkhZQBrXY0qTxR_rm.png" alt="Lossless image formats">
|
<figcaption>Table 1, page 3, from the paper "Comparison of Lossless Image Formats"</figcaption>
|
</figure> |
|
|
|
|
|
### Upload |
|
|
|
I uploaded the dataset as multi-part zip archives with a maximum of 25GB per file, resulting in 6 archive files.

This stays within the LFS file size limit, and I chose zip because it is such a common format.
|
|
|
## Corresponding LR Sets |
|
|
|
In most cases, only the HR part (the part published here) is needed, since LR sets, like a bicubic-only downsampled counterpart for training 2x or 4x models, can very simply be generated by the user.
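For example, a minimal sketch for a 4x bicubic LR counterpart (the output folder name `BHI_LR_bicubic` is just a placeholder):

```python
# Generate a 4x bicubic-downsampled LR tile for every HR tile.
import os
import cv2

os.makedirs("BHI_LR_bicubic", exist_ok=True)
for name in os.listdir("BHI_HR"):
    hr = cv2.imread(os.path.join("BHI_HR", name), cv2.IMREAD_UNCHANGED)
    lr = cv2.resize(hr, (hr.shape[1] // 4, hr.shape[0] // 4),
                    interpolation=cv2.INTER_CUBIC)
    cv2.imwrite(os.path.join("BHI_LR_bicubic", name),
                lr, [cv2.IMWRITE_WEBP_QUALITY, 101])
```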
|
Also, if a degradation pipeline like the real-esrgan otf pipeline is used, only this HR set is needed, since it degrades the images on the fly during training.

However, I thought I would provide some prebuilt LR sets for paired training; these are the ones I used to train models myself. The resulting models can of course be downloaded and tried out.

All these LR sets are downscaled x4 for training 4x SISR models, which is the standard scale I train at, for multiple reasons.

See the links for degradation details and downloads (separate dataset pages):
|
|
|
[BHI_LR_multi](https://huggingface.co/datasets/Phips/BHI_LR_multi) was made using multiple different downsampling/scaling algorithms.

[BHI_LR_multiblur](https://huggingface.co/datasets/Phips/BHI_LR_multiblur) as above, but with added blur for deblurring/sharper results, plus both jpg and webp compression for compression handling.

[BHI_LR_real](https://huggingface.co/datasets/Phips/BHI_LR_real) is my attempt at a realistically degraded dataset, so the trained upscaling model can handle images downloaded from the web.
|
|
|
## Trained Models |
|
|
|
I also provide SISR models I trained on this dataset, using either the real-esrgan otf pipeline or the prebuilt LR sets for paired training (the exact sets released above).

These models are based on the realplksr arch (middle-sized arch) and the dat arch (big arch, slower but better quality). There are of course other options I could have gone with, and I might still release other models trained on this dataset in the future.
|
|
|
Multiscale: [RealPLKSR](https://github.com/Phhofm/models/releases/tag/4xbhi_realplksr) // only non-degraded input |
|
Multiblur: [RealPLKSR](https://github.com/Phhofm/models/releases/tag/4xbhi_realplksr) // a bit sharper output |
|
Multiblurjpg: [DAT2](https://github.com/Phhofm/models/releases/tag/4xBHI_dat2_multiblurjpg) // handles jpg compression additionally |
|
OTF_nn: [RealPLKSR](https://github.com/Phhofm/models/releases/tag/4xbhi_realplksr) |
|
OTF (real-esrgan pipeline): [RealPLKSR](https://github.com/Phhofm/models/releases/tag/4xbhi_realplksr) | [DAT2](https://github.com/Phhofm/models/releases/tag/4xBHI_dat2_otf) // handles blur, noise, and compression
|
Real: [RealPLKSR](https://github.com/Phhofm/models/releases/tag/4xbhi_realplksr) | [DAT2](https://github.com/Phhofm/models/releases/tag/4xBHI_dat2_real) // handles blur, noise, and jpg/webp compression |