---
license: apache-2.0
dataset_info:
  features:
  - name: width
    dtype: int64
  - name: height
    dtype: int64
  - name: image
    dtype: image
  - name: objects
    struct:
    - name: bbox
      sequence:
        sequence: float64
    - name: category
      sequence: string
  splits:
  - name: train
    num_bytes: 1258281789.658
    num_examples: 7997
  download_size: 1178990085
  dataset_size: 1258281789.658
task_categories:
- object-detection
tags:
- ui
- design
- detection
size_categories:
- 1K<n<10K
---
# Dataset: Mobile UI Design Detection
## Introduction
This dataset is designed for object detection tasks, with a focus on detecting elements in mobile UI designs such as text, images, and groups. Each example pairs an image with its object detection boxes, including class labels and location information.
## Dataset Content
Load the dataset and take a look at an example:
```python
>>> from datasets import load_dataset
>>> ds = load_dataset("mrtoy/mobile-ui-design")
>>> example = ds["train"][0]
>>> example
{'width': 375,
'height': 667,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=375x667>,
'objects': {'bbox': [[0.0, 0.0, 375.0, 667.0],
[0.0, 0.0, 375.0, 667.0],
[0.0, 0.0, 375.0, 20.0],
...
],
'category': ['artboard',
'rectangle',
'rectangle',
...]}}
```
The dataset has the following fields:
- image: a PIL.Image.Image object containing the image.
- height: the image height.
- width: the image width.
- objects: a dictionary containing bounding box metadata for the objects in the image:
  - bbox: the object's bounding box in (x_min, y_min, width, height) format, in absolute pixel coordinates.
  - category: the object's category, with possible values including artboard, rectangle, text, group, ...
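
As a quick sanity check of the objects field, the sketch below counts how often each category label appears in a small sample of the training split. This is only an illustration: the sample size of 100 is arbitrary, and iterating full examples also decodes the images, which is fine at this scale.

```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("mrtoy/mobile-ui-design")

# Count category labels over an arbitrary sample of 100 training examples.
counts = Counter()
for example in ds["train"].select(range(100)):
    counts.update(example["objects"]["category"])

print(counts.most_common(10))
```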
You can visualize the bounding boxes on the image using torchvision utilities:
```python
import torch
from torchvision.ops import box_convert
from torchvision.utils import draw_bounding_boxes
from torchvision.transforms.functional import pil_to_tensor, to_pil_image
# Take one training example and convert its boxes from (x, y, w, h) to (x1, y1, x2, y2).
item = ds["train"][0]
boxes_xywh = torch.tensor(item['objects']['bbox'])
boxes_xyxy = box_convert(boxes_xywh, 'xywh', 'xyxy')

# Draw the labelled boxes on the image and convert back to a PIL image for display.
to_pil_image(
    draw_bounding_boxes(
        pil_to_tensor(item['image']),
        boxes_xyxy,
        labels=item['objects']['category'],
    )
)
```
## Applications
This dataset can be used for various applications, such as:
- Training and evaluating object detection models for mobile UI designs (see the sketch after this list).
- Identifying design patterns and trends to aid UI designers and developers in creating high-quality mobile app UIs.
- Enhancing the automation process in generating UI design templates.
- Improving image recognition and analysis in the field of mobile UI design.
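
As a starting point for training a detector, the sketch below converts one example's annotations into COCO-style targets and runs them through a DETR image processor from the transformers library. This is a minimal sketch under stated assumptions: the facebook/detr-resnet-50 checkpoint, the label2id mapping, and the to_coco_targets helper are illustrative choices, not part of this dataset; adapt them to whichever detection model you actually train.

```python
from datasets import load_dataset
from transformers import AutoImageProcessor

ds = load_dataset("mrtoy/mobile-ui-design")

# Illustrative assumption: a DETR-style checkpoint; swap in your own model/processor.
processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50")

# Build a label mapping from the categories seen in the training split.
categories = sorted({c for objs in ds["train"]["objects"] for c in objs["category"]})
label2id = {name: i for i, name in enumerate(categories)}

def to_coco_targets(example, image_id):
    # The dataset's (x_min, y_min, width, height) boxes already match COCO's bbox format.
    annotations = [
        {
            "category_id": label2id[cat],
            "bbox": box,
            "area": box[2] * box[3],
            "iscrowd": 0,
        }
        for box, cat in zip(example["objects"]["bbox"], example["objects"]["category"])
    ]
    return {"image_id": image_id, "annotations": annotations}

example = ds["train"][0]
encoding = processor(
    images=example["image"],
    annotations=to_coco_targets(example, 0),
    return_tensors="pt",
)
print(encoding.keys())  # pixel_values, pixel_mask, labels
```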