---
dataset_info:
  features:
  - name: pid
    dtype: int64
  - name: question
    dtype: string
  - name: decoded_image
    dtype: image
  - name: image
    dtype: string
  - name: answer
    dtype: string
  - name: task
    dtype: string
  - name: category
    dtype: string
  - name: complexity
    dtype: int64
  splits:
  - name: GRAB
    num_bytes: 466596459.9
    num_examples: 2170
  download_size: 406793109
  dataset_size: 466596459.9
configs:
- config_name: default
  data_files:
  - split: GRAB
    path: data/GRAB-*
license: mit
---
|
|
|
# GRAB: A Challenging GRaph Analysis Benchmark for Large Multimodal Models |
|
|
|
## Dataset Description |
|
|
|
- **Homepage:** [https://grab-benchmark.github.io](https://grab-benchmark.github.io) |
|
- **Paper:** [GRAB: A Challenging GRaph Analysis Benchmark for Large Multimodal Models](https://arxiv.org/abs/2408.11817) |
|
- **Repository:** [GRAB](https://github.com/jonathan-roberts1/GRAB)

- **Leaderboard:** [https://grab-benchmark.github.io](https://grab-benchmark.github.io)
|
|
|
### Dataset Summary |
|
Large multimodal models (LMMs) have exhibited proficiency across many visual tasks. Although numerous benchmarks exist to evaluate model performance, they increasingly have insufficient headroom and are **unfit to evaluate the next generation of frontier LMMs**.
|
|
|
To overcome this, we present **GRAB**, a challenging benchmark focused on the tasks **human analysts** might typically perform when interpreting figures. Such tasks include estimating the mean, intercepts, or correlations of functions and data series, and performing transforms.
|
|
|
We evaluate a suite of **20 LMMs** on GRAB, finding it to be a challenging benchmark, with the current best model scoring just **21.7%**. |
|
|
|
### Example usage |
|
```python
from datasets import load_dataset

# load dataset
grab_dataset = load_dataset("jonathan-roberts1/GRAB", split='GRAB')
"""
Dataset({
    features: ['pid', 'question', 'decoded_image', 'image', 'answer', 'task', 'category', 'complexity'],
    num_rows: 2170
})
"""

# query individual questions
grab_dataset[40] # e.g., the 41st element
"""
{'pid': 40, 'question': 'What is the value of the y-intercept of the function? Give your answer as an integer.',
 'decoded_image': <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=5836x4842 at 0x12288EA60>,
 'image': 'images/40.png', 'answer': '1', 'task': 'properties', 'category': 'Intercepts and Gradients',
 'complexity': 0}
"""

question_40 = grab_dataset[40]['question'] # question
answer_40 = grab_dataset[40]['answer'] # ground truth answer
pil_image_40 = grab_dataset[40]['decoded_image'] # decoded PIL image
```
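Since `task`, `category`, and `complexity` are ordinary columns, the benchmark can also be sliced with the standard `datasets` filtering API. A minimal sketch (the filter values are taken from the example row above):

```python
# keep only questions from the 'properties' task
properties_subset = grab_dataset.filter(lambda row: row['task'] == 'properties')

# keep only the lowest-complexity questions
simple_subset = grab_dataset.filter(lambda row: row['complexity'] == 0)
```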
|
Note: the `image` feature contains filepaths into the `images` directory, which is distributed as [`images.zip`](https://huggingface.co/datasets/jonathan-roberts1/GRAB/resolve/main/images.zip) in this repository.
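If you would rather work with the image files on disk, the archive can be fetched and unpacked with `huggingface_hub`. A minimal sketch, assuming `images.zip` unpacks to an `images/` folder matching the `image` paths (the `grab_images` directory name is an arbitrary choice):

```python
import zipfile

from huggingface_hub import hf_hub_download
from PIL import Image

# download images.zip from this dataset repository
zip_path = hf_hub_download(repo_id="jonathan-roberts1/GRAB",
                           filename="images.zip", repo_type="dataset")

# extract, then open one image via its 'image' filepath
with zipfile.ZipFile(zip_path) as zf:
    zf.extractall("grab_images")
image_40 = Image.open(f"grab_images/{grab_dataset[40]['image']}")
```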
|
|
|
Please visit our [GitHub repository](https://github.com/jonathan-roberts1/GRAB) for example inference code. |
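For orientation, a minimal evaluation loop might look like the sketch below. `query_model` is a hypothetical placeholder for your own LMM call, and plain exact-match scoring is a simplification of the evaluation described in the paper:

```python
def query_model(question: str, image) -> str:
    # hypothetical placeholder: send the question and PIL image to your LMM
    # and return its answer as a string
    raise NotImplementedError

correct = 0
for row in grab_dataset:
    prediction = query_model(row['question'], row['decoded_image'])
    correct += prediction.strip() == row['answer']
print(f"Accuracy: {correct / len(grab_dataset):.1%}")
```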
|
|
|
### Dataset Curators |
|
|
|
This dataset was curated by Jonathan Roberts, Kai Han, and Samuel Albanie.
|
|
|
### Citation Information |
|
```
@article{roberts2024grab,
  title={GRAB: A Challenging GRaph Analysis Benchmark for Large Multimodal Models},
  author={Roberts, Jonathan and Han, Kai and Albanie, Samuel},
  journal={arXiv preprint arXiv:2408.11817},
  year={2024}
}
```