Update README.md
README.md

This dataset can be used to:
- Compare against previous metadata versions to track changes, tag evolution, and historical trends (a sketch of such a comparison follows below)
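A hedged sketch of such a snapshot comparison, assuming this dataset and an older scrape both expose Danbooru's `id` and `tag_string` columns (the file names are placeholders):

```python
import pandas as pd

# Placeholder file names; point these at the actual Parquet files.
old = pd.read_parquet("danbooru2021_metadata.parquet", columns=["id", "tag_string"])
new = pd.read_parquet("danbooru_metadata_latest.parquet", columns=["id", "tag_string"])

# Posts present in both snapshots whose tag lists were edited in between
merged = old.merge(new, on="id", suffixes=("_old", "_new"))
changed = merged[merged["tag_string_old"] != merged["tag_string_new"]]
print(f"{len(changed)} of {len(merged)} shared posts had tag edits")
```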

## Dataset Structure

Below is a partial overview of the DataFrame columns, derived directly from the Danbooru JSONs:

    Index(['approver_id', 'bit_flags', 'created_at', 'down_score', 'fav_count',

</div>
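To inspect the full schema yourself, load the Parquet file and list its columns (the file name here is a placeholder):

```python
import pandas as pd

# Placeholder file name; substitute an actual Parquet shard from this dataset.
df = pd.read_parquet("danbooru_metadata.parquet")
print(df.columns.tolist())  # full flattened column list
print(df.head())
```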

## Dataset Creation

We scraped every post ID on Danbooru, from ID 1 up to the most recent at the time of the scrape. Posts under certain restricted tags (e.g. `loli`) are hidden by the site and require a gold account to access, so they are not present here. For a more complete (but older) metadata reference, you may wish to combine this dataset with Danbooru2021 or a similar earlier scrape.
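For reference, a minimal sketch of fetching a single post's metadata, assuming Danbooru's public per-post JSON endpoint; this is illustrative, not the exact scraper used:

```python
import requests

def fetch_post(post_id: int) -> dict | None:
    """Fetch one post's JSON metadata; returns None for missing or hidden posts."""
    resp = requests.get(f"https://danbooru.donmai.us/posts/{post_id}.json", timeout=10)
    return resp.json() if resp.ok else None
```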

The scrape ran through a pool of roughly 400 IPs and completed in about six hours, so tag definitions are consistent across the snapshot. Below is a simplified example of how the nested JSON metadata was flattened into a DataFrame ahead of the Parquet conversion:

```python
import pandas as pd
from pandarallel import pandarallel

# Initialize pandarallel so .parallel_apply runs across worker processes
pandarallel.initialize(nb_workers=4, progress_bar=True)


def flatten_dict(d, parent_key='', sep='_'):
    """Flattens a nested dictionary; nested keys are joined with `sep`,
    and list values are joined into a single comma-separated string."""
    items = []
    for k, v in d.items():
        new_key = f"{parent_key}{sep}{k}" if parent_key else k
        if isinstance(v, dict):
            items.extend(flatten_dict(v, new_key, sep=sep).items())
        elif isinstance(v, list):
            items.append((new_key, ', '.join(map(str, v))))
        else:
            items.append((new_key, v))
    return dict(items)


def extract_all_illust_info(json_content):
    """Parses and flattens a single Danbooru post JSON into a pandas Series."""
    flattened_data = flatten_dict(json_content)
    return pd.Series(flattened_data)


def dicts_to_dataframe_parallel(dicts):
    """Converts a list of post dicts to a flattened DataFrame using pandarallel."""
    df = pd.DataFrame(dicts)
    flattened_df = df.parallel_apply(lambda row: extract_all_illust_info(row.to_dict()), axis=1)
    return flattened_df
```
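The snippet stops at the flattened DataFrame; writing Parquet is one more call. A minimal continuation with toy records (the record contents and output file name are illustrative):

```python
# Two toy post records standing in for the real scraped JSON.
records = [
    {"id": 1, "score": 12, "media_asset": {"width": 1024, "height": 768}, "tags": ["1girl", "solo"]},
    {"id": 2, "score": 4, "media_asset": {"width": 800, "height": 600}, "tags": ["landscape"]},
]

df = dicts_to_dataframe_parallel(records)  # nested keys flatten to e.g. `media_asset_width`
df.to_parquet("danbooru_metadata.parquet", index=False)
```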

### Recommendations

Users should be aware of potential biases and limitations, including the presence of adult content under some tags; additional filtering and curation may be needed before downstream use.
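As one concrete mitigation, and assuming the flattened metadata retains Danbooru's `rating` field (values `g`, `s`, `q`, `e`, from general through explicit), a sketch of restricting to general-rated posts:

```python
import pandas as pd

df = pd.read_parquet("danbooru_metadata.parquet")  # placeholder file name

# Keep only general-rated posts; broaden or tighten the filter to taste.
sfw = df[df["rating"] == "g"]
```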