---
task_categories:
- multiple-choice
- question-answering
- visual-question-answering
language:
- en
- zh
tags:
- multimodal
- intelligence
size_categories:
- 1K<n<10K
---
## Paper Information
- Paper: Coming soon.
- Code: https://github.com/AceCHQ/MMIQ/tree/main
- Project: https://acechq.github.io/MMIQ-benchmark/
- Leaderboard: https://acechq.github.io/MMIQ-benchmark/#leaderboard
## Dataset Examples
Examples from MM-IQ:
1. Logical Operation Reasoning
Prompt: Choose the most appropriate option from the given four choices to fill in the question mark, so that it presents a certain regularity:
2. Mathematical Reasoning
Prompt: Choose the most appropriate option from the given four options to present a certain regularity:
Option A: 4; Option B: 5; Option C: 6; Option D: 7.
3. 2D-geometry Reasoning
Prompt: The option that best fits the given pattern of figures is ( ).
4. 3D-geometry Reasoning
Prompt: The one that matches the top view is:
5. Visual Instruction Reasoning
Prompt: Choose the most appropriate option from the given four options to present a certain regularity:
6. Spatial Relationship Reasoning
Prompt: Choose the most appropriate option from the given four options to present a certain regularity:
7. Concrete Object Reasoning
Prompt: Choose the most appropriate option from the given four choices to fill in the question mark, so that it presents a certain regularity:
8. Temporal Movement Reasoning
Prompt: Choose the most appropriate option from the given four choices to fill in the question mark, so that it presents a certain regularity:
## Leaderboard
🏆 The leaderboard for the *MM-IQ* (2,710 problems) is available [here](https://acechq.github.io/MMIQ-benchmark/#leaderboard).
## Dataset Usage
### Data Downloading
You can download this dataset with the following code (make sure you have installed [Huggingface Datasets](https://huggingface.co/docs/datasets/quickstart)):
```python
from IPython.display import display, Image
from datasets import load_dataset
dataset = load_dataset("huanqia/MM-IQ")
```
Here are some examples of how to access the downloaded dataset:
```python
# print the first example on the MM-IQ dataset
print(dataset["test"][0])
print(dataset["test"][0]['data_id']) # print the problem id
print(dataset["test"][0]['question']) # print the question text
print(dataset["test"][0]['answer']) # print the answer
# Display the image
print("Image context:")
display(dataset["test"][0]['image'])
```
We have uploaded a demo to illustrate how to access the MM-IQ dataset on Hugging Face, available at [hugging_face_dataset_demo.ipynb](https://github.com/AceCHQ/MMIQ/blob/main/mmiq/jupyter_notebook_demos/hugging_face_dataset_demo.ipynb).
### Data Format
The dataset is provided in Parquet format and contains the following attributes:
```json
{
"question": [string] The question text,
"answer": [string] The correct answer for the problem,
"data_id": [int] The problem id,
"category": [string] The category of reasoning pattern,
    "image": [image] The image (raw bytes and image path) corresponding to the image in data.zip,
}
```
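As a quick illustration of working with this schema, the snippet below groups examples by reasoning category. The records here are hypothetical stand-ins mirroring the fields above (the real `image` field holds decoded image data, not a string); with the actual dataset you would iterate over `dataset["test"]` instead.

```python
from collections import Counter

# Hypothetical records mirroring the MM-IQ schema; the real "image" field
# holds decoded image data, so a placeholder string is used here.
records = [
    {"question": "Choose the most appropriate option ...", "answer": "B",
     "data_id": 1, "category": "logical operation", "image": "<image>"},
    {"question": "The one that matches the top view is:", "answer": "D",
     "data_id": 2, "category": "3D-geometry", "image": "<image>"},
    {"question": "Choose the most appropriate option ...", "answer": "A",
     "data_id": 3, "category": "logical operation", "image": "<image>"},
]

# Count how many problems fall into each reasoning category
counts = Counter(r["category"] for r in records)
print(counts["logical operation"])  # 2
print(counts["3D-geometry"])        # 1
```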
### Automatic Evaluation
🔔 To automatically evaluate a model on the dataset, please refer to our GitHub repository [here](https://github.com/AceCHQ/MMIQ/tree/main/mmiq).
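The official evaluation scripts live in that repository. Purely as an illustration of the overall flow, a minimal accuracy computation over letter-choice answers might look like the sketch below; the `predictions` dict, the `extract_choice` helper, and the parsing rule are assumptions for this example, not the repository's actual logic.

```python
import re

def extract_choice(model_output: str):
    """Pull the first standalone option letter (A-D) out of a model response.

    Hypothetical helper for this sketch; the official scripts may parse
    responses differently.
    """
    m = re.search(r"\b([A-D])\b", model_output)
    return m.group(1) if m else None

# Hypothetical ground truth and raw model outputs, keyed by data_id
ground_truth = {1: "B", 2: "D", 3: "A"}
predictions = {1: "The answer is B.", 2: "Option D", 3: "I think it is C."}

correct = sum(
    extract_choice(predictions[i]) == ans for i, ans in ground_truth.items()
)
accuracy = correct / len(ground_truth)
print(f"Accuracy: {accuracy:.2%}")  # Accuracy: 66.67%
```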
## Citation
If you use the **MM-IQ** dataset in your work, please cite the paper using the following BibTeX:
```
@misc{cai2025mm-iq,
title = {MM-IQ: Benchmarking Human-Like Abstraction and Reasoning in Multimodal Models},
author = {Huanqia Cai and Yijun Yang and Winston Hu},
month = {January},
year = {2025}
}
```
## Contact
[Huanqia Cai](mailto:caihuanqia19@mails.ucas.ac.cn): caihuanqia19@mails.ucas.ac.cn