---
license: apache-2.0
size_categories:
- 1K<n<10K
---

SPARK can reduce the fundamental multi-vision sensor information gap between images and multi-vision sensors. We automatically generated 6,248 vision-language test samples to investigate multi-vision sensory perception and multi-vision sensory reasoning about physical sensor knowledge proficiency, covering different question formats and different types of sensor-related questions.

## Dataset Details

## Uses

### Direct Use

### Source Data

#### Data Collection and Processing

The samples are built from five public datasets: [MS-COCO](https://arxiv.org/abs/1405.0312), [M3FD](https://arxiv.org/abs/2203.16220v1), [Dog&People](https://public.roboflow.com/object-detection/thermal-dogs-and-people), [RGB-D scene dataset](https://arxiv.org/abs/2110.11590), and the [UNIFESP X-ray Body Part Classifier Competition dataset](https://www.kaggle.com/competitions/unifesp-x-ray-body-part-classifier).

## Citation

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Contact

[SangYun Chung](https://sites.google.com/view/sang-yun-chung/profile): jelarum@kaist.ac.kr
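
Below is a minimal loading sketch for the test samples, assuming they are published on the Hugging Face Hub and accessible through the `datasets` library; the repository id is a placeholder, and the exact field names depend on the released files.

```python
from datasets import load_dataset

# NOTE: "<user>/SPARK" is a placeholder repository id; replace it with the
# actual Hub path of this dataset.
spark = load_dataset("<user>/SPARK")

print(spark)  # lists the available splits and the number of samples in each

# Inspect the fields of a single sample from the first split (e.g. the image,
# the sensor-related question, and the answer/options, depending on the release).
first_split = next(iter(spark.values()))
print(first_split[0])
```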