Update README.md
README.md
CHANGED
@@ -30,3 +30,46 @@ configs:
- split: train
  path: data/train-*
---

# SPARK (multi-vision Sensor Perception And Reasoning benchmarK)

<!-- Provide a quick summary of the dataset. -->

SPARK is a benchmark that can reduce the fundamental multi-vision sensor information gap between images and multi-vision sensors. We generated 6,248 vision-language test samples automatically to investigate multi-vision sensory perception and multi-vision sensory reasoning on physical sensor knowledge proficiency across different formats, covering different types of sensor-related questions.

## Dataset Details

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->
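A minimal loading sketch for direct use, assuming the standard Hugging Face `datasets` workflow; the repository id below is a placeholder, not the confirmed Hub id for SPARK:

```python
from datasets import load_dataset

# Placeholder repository id -- substitute the actual SPARK dataset id on the Hub.
ds = load_dataset("SPARK-benchmark/SPARK", split="train")

# Each record is one vision-language test sample; inspect the first entry.
print(ds[0])
```
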
### Source Data

#### Data Collection and Processing

These instructions are built from five public datasets: MS-COCO, M3FD, Dog&People, the RGB-D scene dataset, and the UNIFESP X-ray Body Part Classifier Competition dataset.

## Citation

<!-- If there is a paper or blog post introducing the dataset, the APA and BibTeX information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Contact

[SangYun Chung](https://sites.google.com/view/sang-yun-chung/profile): [email protected]