adymaharana committed on
Commit b7020c8 · 1 Parent(s): e8c0c3f

Update Readme

Files changed (1)
  1. README.md +81 -1
README.md CHANGED
@@ -1,3 +1,83 @@
  ---
- license: cc
+ annotations_creators:
+ - crowdsourced
+ language:
+ - en
+ language_creators:
+ - found
+ license:
+ - cc-by-4.0
+ multilinguality:
+ - monolingual
+ pretty_name: CoCoCON
+ size_categories:
+ - 1K<n<10K
+ tags:
+ - consistency
+ - visual-reasoning
+ task_ids: []
  ---
+
+ # Dataset Card for CoCoCON
+
+ - [Dataset Description](#dataset-description)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ CoCoCON is a challenging dataset for evaluating cross-task consistency in vision-and-language models. We build contrast sets by modifying COCO test instances for multiple tasks in small but semantically meaningful ways that change the gold label, and outline metrics for measuring whether a model is consistent by ranking the original and perturbed instances across tasks. We find that state-of-the-art systems suffer from a surprisingly high degree of inconsistent behavior across tasks, especially for more heterogeneous tasks.
+
+ - **Homepage:** https://adymaharana.github.io/cococon/
+ - **Repository:** https://github.com/adymaharana/cococon
+ - **Paper:** https://arxiv.org/abs/2303.16133
+ - **Point of Contact:**
+
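+ The ranking-based consistency check described above can be read, in simplified form, as follows. This is a minimal sketch, not the paper's official metric: the scores, task names, and the all-tasks-agree criterion below are illustrative assumptions.
+
+ ```python
+ # Illustrative (made-up) per-task scores that a unified model might assign to one
+ # CoCoCON sample: each task scores the original annotation and its perturbed
+ # ("mutex_") counterpart from the contrast set. Higher score = preferred.
+ scores = {
+     "captioning":   {"original": -12.3, "contrast": -14.1},
+     "vqa":          {"original": -1.2,  "contrast": -0.8},
+     "localization": {"original": -3.4,  "contrast": -5.0},
+ }
+
+ def prefers_original(task_scores):
+     """True if the model ranks the original annotation above the perturbed one."""
+     return task_scores["original"] > task_scores["contrast"]
+
+ # Under this simplified reading, a sample is cross-task consistent when every
+ # task agrees on the ranking of original vs. perturbed annotations.
+ rankings = [prefers_original(s) for s in scores.values()]
+ consistent = all(rankings) or not any(rankings)
+ print(f"per-task rankings: {rankings}, consistent: {consistent}")
+ ```
+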
+ ### Languages
+
+ English.
+
+ ## Dataset Structure
+
+ Each sample in this dataset corresponds to a COCO image, a set of ground truth annotations for the image captioning, visual question answering (VQA), and (optional) localization tasks, and their respective contrast sets.
+
+ ### Data Fields
+
+ - `caption` (string): ground truth caption.
+ - `query` (string): VQA question.
+ - `answer` (string): ground truth VQA answer.
+ - `question_id` (int64): unordered unique identifier for the sample.
+ - `image_id` (int64): COCO image id.
+ - `detection` (string): (optional) localization query.
+ - `boxes` (list): (optional) list of ground truth bounding boxes for the localization query.
+ - `contrast_sets`: each entry is a set of perturbed annotations corresponding to the ground truth annotations above; perturbed annotations are prefixed with `mutex_` (see the loading sketch after this list).
+ - `file_name` (string): COCO filename for the image.
+ - `coco_url` (string): URL for downloading the image from the COCO server.
+ - `flickr_url` (string): URL for downloading the image from Flickr.
+ - `height` (int64): height of the image.
+ - `width` (int64): width of the image.
+ - `id` (int64): ordered unique identifier for the sample.
+
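+ A minimal sketch of loading the data and pairing ground truth fields with their perturbed counterparts. The repository id, split name, and the list-of-dicts layout of `contrast_sets` are assumptions, not guaranteed by this card; adjust them to how the files are actually hosted.
+
+ ```python
+ from datasets import load_dataset
+
+ # Hypothetical repository id and split name; change these to wherever the data lives.
+ dataset = load_dataset("adymaharana/cococon", split="test")
+
+ sample = dataset[0]
+ print(sample["image_id"], sample["caption"], sample["query"], sample["answer"])
+
+ # Perturbed annotations mirror the original fields under a "mutex_" prefix,
+ # e.g. "mutex_caption" is the perturbed counterpart of "caption".
+ for contrast in sample["contrast_sets"]:
+     for key, value in contrast.items():
+         if key.startswith("mutex_"):
+             original_key = key[len("mutex_"):]
+             print(original_key, "->", sample.get(original_key), "| perturbed ->", value)
+ ```
+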
+ ## Dataset Creation
+
+ The CoCoCON dataset was created by a combination of machine and expert annotators who perturbed ground truth COCO annotations to create contrast sets.
+
+ ### Licensing Information
+
+ CC BY 4.0
+
+ ### Citation Information
+
+ @article{maharana2023cococon,
+   author  = {Maharana, Adyasha and Kamath, Amita and Clark, Christopher and Bansal, Mohit and Kembhavi, Aniruddha},
+   title   = {Exposing and Addressing Cross-Task Inconsistency in Unified Vision-Language Models},
+   journal = {arXiv preprint arXiv:2303.16133},
+   year    = {2023},
+ }