UBCNLP committed
Commit 2777a5f · verified · 1 Parent(s): 1a6e359

Update README.md

Files changed (1):
  1. README.md (+55, -4)
README.md CHANGED
@@ -9,11 +9,62 @@ dataset_info:
      dtype: string
    splits:
    - name: test
-     num_bytes: 134588653.0
+     num_bytes: 134588653
      num_examples: 592
    download_size: 134008473
- dataset_size: 134588653.0
+ dataset_size: 134588653
+ task_categories:
+ - object-detection
+ tags:
+ - cultural
+ - visual
+ - grounding
+ size_categories:
+ - n<1K
  ---
- # Dataset Card for "GlobalRG-Grounding"

- [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ ### GlobalRG - Cultural Visual Grounding Task
+ Despite recent advancements in vision-language models, their performance remains suboptimal on images from non-western cultures due to underrepresentation in training datasets. Various benchmarks have been proposed to test models' cultural inclusivity, but they cover a limited number of cultures and do not adequately assess cultural diversity across universal as well as culture-specific local concepts. We introduce the GlobalRG-Grounding benchmark, which targets grounding culture-specific concepts within images from 15 countries.
+
+ > **Note:** The answers for the GlobalRG-Grounding benchmark are not publicly available. We are working on creating a competition where participants can upload their predictions and evaluate their models. Stay tuned for more updates!
+ > If you urgently need to evaluate, please contact [email protected]
+
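+ In the meantime, here is a purely hypothetical sketch of what a predictions file might look like. The official submission format has not been announced, so the JSON layout and field names below are illustrative assumptions only:
+ ```python
+ import json
+
+ # Hypothetical submission layout (not the official format): one predicted
+ # bounding box per u_id, as [x_min, y_min, x_max, y_max] in pixel coordinates.
+ predictions = {
+     "some-u-id": [10, 20, 110, 220],
+ }
+
+ with open("globalrg_grounding_predictions.json", "w") as f:
+     json.dump(predictions, f, indent=2)
+ ```
+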
+ ### Loading the dataset
+ To load and use the GlobalRG-Grounding benchmark, use the following commands:
+ ```python
+ from datasets import load_dataset
+
+ globalrg_grounding_dataset = load_dataset('UBCNLP/GlobalRG-Grounding')
+ ```
+ Once the dataset is loaded, each instance contains the following fields (see the usage sketch after this list):
+
+ - `u_id`: A unique identifier for each image-region-concept tuple.
+ - `image`: The image data in binary format.
+ - `region`: The cultural region pertaining to the image.
+ - `concept`: The cultural concept to be grounded in the image.
+
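+ As a minimal usage sketch (an assumption, not official code: it presumes the `image` field decodes through the `datasets` Image feature; if it arrives as raw bytes, open it with `PIL.Image.open(io.BytesIO(...))` instead):
+ ```python
+ from collections import defaultdict
+ from datasets import load_dataset
+
+ # GlobalRG-Grounding ships a single test split.
+ ds = load_dataset('UBCNLP/GlobalRG-Grounding', split='test')
+
+ # Inspect one instance: its identifier, region, and the concept to ground.
+ example = ds[0]
+ print(example['u_id'], example['region'], example['concept'])
+
+ # Count distinct concepts per cultural region without decoding any images.
+ concepts_by_region = defaultdict(set)
+ for region, concept in zip(ds['region'], ds['concept']):
+     concepts_by_region[region].add(concept)
+ print({region: len(c) for region, c in sorted(concepts_by_region.items())})
+ ```
+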
+ ### Usage and License
+ GlobalRG is a test-only benchmark and can be used to evaluate models. The images are scraped from the internet and are not owned by the authors. All annotations are released under the CC BY-SA 4.0 license.
+
+ ### Citation Information
+ If you use this dataset, please cite:
+ ```bibtex
+ @inproceedings{bhatia-etal-2024-local,
+     title = "From Local Concepts to Universals: Evaluating the Multicultural Understanding of Vision-Language Models",
+     author = "Bhatia, Mehar and
+       Ravi, Sahithya and
+       Chinchure, Aditya and
+       Hwang, EunJeong and
+       Shwartz, Vered",
+     editor = "Al-Onaizan, Yaser and
+       Bansal, Mohit and
+       Chen, Yun-Nung",
+     booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
+     month = nov,
+     year = "2024",
+     address = "Miami, Florida, USA",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2024.emnlp-main.385",
+     doi = "10.18653/v1/2024.emnlp-main.385",
+     pages = "6763--6782"
+ }
+ ```