Phips committed
Commit 44a584b · verified · 1 Parent(s): d34991a

Update README.md

Files changed (1):
  1. README.md +10 -9
README.md CHANGED
@@ -83,21 +83,22 @@ Nomos_Uni -> 2'466 Tiles
  Nomosv2 -> 5'226 Tiles
  inaturalist_2019 -> 131'943 Tiles
 
 
 
- ## Files
-
- Files have been named following the pattern '{dataset_name}_{index}.png' so that, if one of the used datasets ever turned out to be problematic concerning public access, its tiles could still be removed from this dataset in the future.
 
- (Note to myself: Tiles 'inaturalist_2019_65228.png','inaturalist_2019_54615.png','inaturalist_2019_22816.png' removed because of PNG error when checking with [pngcheck](http://www.libpng.org/pub/png/apps/pngcheck.html))
 
- ## Optimization
 
- Then I used [oxipng](https://github.com/shssoichiro/oxipng) ("oxipng --strip safe --alpha *.png") for optimization.
 
- ## WebP conversion
 
- The files were then converted to lossless WebP simply to save storage space locally and for faster uploading/downloading here on Hugging Face. This reduced the size of the dataset by around 50 GB.
 
  ## Upload
 
- I uploaded the dataset as multi-part zip archive files with a max of 25GB per file, resulting in X archive files.
 
 
 
  Nomosv2 -> 5'226 Tiles
  inaturalist_2019 -> 131'943 Tiles
 
+ The main point here is that this dataset, even though it still consists of around 390k tiles, is already a strongly reduced version of these original datasets combined.
 
 
 
 
 
+ ## Files
 
+ Files have been named following the pattern '{dataset_name}_{index}.webp' so that, if one of the used datasets ever turns out to be problematic concerning public access, its tiles can still be removed from this dataset in the future.
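To illustrate why this naming scheme matters, here is a minimal sketch (not part of the dataset tooling; the `tiles` directory name is a placeholder) of how all tiles from one source dataset could be dropped later:

```python
from pathlib import Path

# Hypothetical cleanup: remove every tile that originated from one source dataset.
# "tiles" is a placeholder for wherever the extracted dataset lives.
tiles_dir = Path("tiles")
dataset_to_remove = "inaturalist_2019"

for tile in tiles_dir.glob(f"{dataset_to_remove}_*.webp"):
    tile.unlink()
    print(f"removed {tile.name}")
```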
 
+ I converted to WebP because of file size: the dataset was originally at around 200GB, at which point I used [oxipng](https://github.com/shssoichiro/oxipng) ("oxipng --strip safe --alpha *.png") for optimization. But lossless WebP is simply the best option currently available for lossless file size reduction.
+ Well, JPEG XL would be the absolute best modern option for lossless compression, but it's relatively new and not everything supports it yet (especially cv2 currently, which we use for training, so having the tiles in JPEG XL format would be useless at the moment). I will also rant here about the decision of one company to drop browser support for JPEG XL in favor of AVIF, which is far worse concerning lossless compression/file size; WebP is way older and still beats it.
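The README does not spell out which tool performed the PNG-to-WebP conversion, so purely as an illustration, here is one way the lossless conversion could be done with Pillow (directory names are placeholders):

```python
from pathlib import Path

from PIL import Image

# Convert every optimized PNG tile to lossless WebP.
# Pillow and the directory names are assumptions for illustration only.
src_dir = Path("tiles_png")
dst_dir = Path("tiles_webp")
dst_dir.mkdir(exist_ok=True)

for png in src_dir.glob("*.png"):
    out = dst_dir / png.with_suffix(".webp").name
    with Image.open(png) as img:
        # lossless=True keeps the pixel data identical to the PNG source
        img.save(out, "WEBP", lossless=True)
```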
 
+ TODO: add papers/references here about WebP / JPEG XL being superior concerning lossless compression.
 
+ (Note to myself: Tiles 'inaturalist_2019_65228.png','inaturalist_2019_54615.png','inaturalist_2019_22816.png' removed because of PNG error when checking with [pngcheck](http://www.libpng.org/pub/png/apps/pngcheck.html))
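For reference, a sketch of how such a check could be scripted, assuming pngcheck exits with a non-zero status for files that fail verification (paths are placeholders):

```python
import subprocess
from pathlib import Path

# List tiles that pngcheck flags as broken. Assumption: a non-zero exit
# status means the file failed verification; nothing is deleted here.
broken = []
for png in Path("tiles_png").glob("*.png"):
    result = subprocess.run(["pngcheck", str(png)], capture_output=True, text=True)
    if result.returncode != 0:
        broken.append(png.name)

print("broken tiles:", broken)
```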
 
  ## Upload
 
+ I uploaded the dataset as multi-part zip archive files with a max of 25GB per file, resulting in X archive files.
+ This should work with the LFS file size limit, and I chose zip because it's such a common format. I could of course have used another format like 7z or zpaq.
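As a sketch of how such a split archive can be produced (not necessarily the exact command used; it assumes the Info-ZIP `zip` CLI with its split-size option is available, and the directory name is a placeholder):

```python
import subprocess

# Create a multi-part zip archive with parts of at most 25 GB
# (dataset.z01, dataset.z02, ..., dataset.zip).
subprocess.run(
    ["zip", "-r", "-s", "25g", "dataset.zip", "tiles_webp/"],
    check=True,
)
```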
+ I actually once worked on an archiver called [ShareArchiver](https://github.com/Phhofm/ShareArchiver). The main idea was that data shared online (like this dataset) generally gets archived once (by the uploader) but downloaded and extracted maybe a thousand times, so the resulting file size (faster download time for those thousand downloads) and extraction speed (those thousand extractions) matter far more than compression speed. In other words, we trade the archiving time of that one person (which can be very long) for faster downloads and extraction for everyone. The design was to use only highly asymmetric compression algorithms, where compression may be very slow as long as decompression is fast, and to brute force during compression: each file is compressed with all of the available algorithms, the resulting sizes are compared, and only the smallest one is added to the .share archive. Just something from the past I wanted to mention. (One could also use the max flag to include all algorithms, including the symmetric ones (paq8o etc.), just to brute force the smallest possible archive file, but then compression time gets very long; that flag was meant more for archiving than for online sharing, for cases where storage space matters far more than either compression or decompression speed.)
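This is not ShareArchiver's actual code, just a toy sketch of the core "compress with everything, keep the smallest" idea, using stdlib codecs as stand-ins for the highly asymmetric algorithms it actually uses:

```python
import bz2
import lzma
import zlib
from pathlib import Path

# Toy illustration of brute-force codec selection: compress a file with
# several codecs and keep only the smallest result. ShareArchiver itself
# uses different, highly asymmetric codecs and its own .share container.
CODECS = {
    "zlib": lambda data: zlib.compress(data, 9),
    "bz2": lambda data: bz2.compress(data, 9),
    "lzma": lambda data: lzma.compress(data, preset=9),
}

def smallest_compression(path: Path) -> tuple[str, bytes]:
    data = path.read_bytes()
    results = {name: compress(data) for name, compress in CODECS.items()}
    winner = min(results, key=lambda name: len(results[name]))
    return winner, results[winner]

# Hypothetical usage on one tile of the dataset:
name, blob = smallest_compression(Path("tiles_webp/nomosv2_0.webp"))
print(name, len(blob))
```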