---
license: apache-2.0
task_categories:
- question-answering
language:
- en
pretty_name: Deaftest
size_categories:
- n<1K
dataset_info:
  features:
  - name: question_id
    dtype: string
  - name: question_type_id
    dtype: string
  - name: data_type
    dtype: string
  - name: subfield
    dtype: string
  - name: question
    dtype: string
  - name: options
    sequence: string
  - name: answer
    dtype: string
  - name: image_1
    dtype: image
  - name: image_2
    dtype: image
  - name: image_3
    dtype: image
  - name: image_4
    dtype: image
  - name: video_1
    dtype: string
  - name: audio_1
    dtype: audio
  - name: audio_2
    dtype: audio
  - name: audio_3
    dtype: audio
  - name: audio_4
    dtype: audio
  splits:
  - name: test
    num_bytes: 2722106.18
    num_examples: 400
  download_size: 2715938
  dataset_size: 2722106.18
configs:
- config_name: default
  data_files:
  - split: test
    path: deaftest.parquet
---

Official Deaftest dataset for the paper "[AV-Odyssey Bench: Can Your Multimodal LLMs Really Understand Audio-Visual Information?](https://arxiv.org/abs/2412.02611)".

🌟 For more details, please refer to the project page with data examples: [https://av-odyssey.github.io/](https://av-odyssey.github.io/).

[[🌐 Webpage](https://av-odyssey.github.io/)] [[📖 Paper](https://arxiv.org/abs/2412.02611)] [[🤗 Huggingface AV-Odyssey Dataset](https://huggingface.co/datasets/AV-Odyssey/AV_Odyssey_Bench)] [[🤗 Huggingface Deaftest Dataset](https://huggingface.co/datasets/AV-Odyssey/Deaftest_dataset)] [[🏆 Leaderboard](https://huggingface.co/spaces/AV-Odyssey/AV_Odyssey_Bench_Leaderboard)]
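
The dataset can be loaded directly with the 🤗 Datasets library. Below is a minimal sketch, assuming the `AV-Odyssey/Deaftest_dataset` repo id from the links above and the `test` split declared in the card metadata; field names follow the schema there:

```python
from datasets import load_dataset

# Load the single "test" split (400 examples backed by deaftest.parquet).
ds = load_dataset("AV-Odyssey/Deaftest_dataset", split="test")

sample = ds[0]
print(sample["question_id"], sample["question_type_id"], sample["subfield"])
print(sample["question"])
print(sample["options"])  # list of answer choices
print(sample["answer"])   # ground-truth choice
# Audio features decode to dicts with raw samples and a sampling rate.
audio = sample["audio_1"]
print(audio["sampling_rate"], len(audio["array"]))
```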


---

## 🔥 News
* **`2024.11.24`** 🌟 We release AV-Odyssey, the first-ever comprehensive evaluation benchmark to explore whether MLLMs really understand audio-visual information.



## 👀 About AV-Odyssey

Recently, multimodal large language models (MLLMs), such as GPT-4o, Gemini 1.5 Pro, and Reka Core, have expanded their capabilities to include vision and audio modalities. While these models demonstrate impressive performance across a wide range of audio-visual applications, our proposed **DeafTest** reveals that MLLMs often struggle with simple tasks that humans find trivial: 1) determining which of two sounds is louder, and 2) determining which of two sounds has a higher pitch. Motivated by these observations, we introduce **AV-Odyssey Bench**, a benchmark that encompasses **26** different tasks and **4,555** carefully crafted problems, each incorporating text, visual, and audio components. All data are **newly collected and annotated by humans**, not drawn from any existing audio-visual dataset. AV-Odyssey Bench has three major features: 1. **Comprehensive** Audio Attributes; 2. **Extensive** Domains; 3. **Interleaved** Text, Audio, and Visual components.

<img src="assets/intro.png" style="zoom:50%;" />
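To make the two DeafTest probes concrete, here is a minimal, hypothetical sketch of how such comparisons could be checked programmatically: relative loudness via RMS energy and relative pitch via the dominant FFT frequency. The clips and helper names are illustrative placeholders, not part of the dataset or the paper's evaluation code:

```python
import numpy as np

def rms_loudness(samples: np.ndarray) -> float:
    """Root-mean-square energy, a crude proxy for loudness."""
    return float(np.sqrt(np.mean(samples.astype(np.float64) ** 2)))

def dominant_pitch_hz(samples: np.ndarray, sampling_rate: int) -> float:
    """Frequency of the strongest FFT bin, a crude proxy for pitch."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sampling_rate)
    return float(freqs[np.argmax(spectrum)])

# Hypothetical clips: a quiet 220 Hz tone vs. a louder 440 Hz tone.
sr = 16_000
t = np.linspace(0.0, 1.0, sr, endpoint=False)
clip_a = 0.2 * np.sin(2 * np.pi * 220.0 * t)
clip_b = 0.8 * np.sin(2 * np.pi * 440.0 * t)

louder = "clip_b" if rms_loudness(clip_b) > rms_loudness(clip_a) else "clip_a"
higher = "clip_b" if dominant_pitch_hz(clip_b, sr) > dominant_pitch_hz(clip_a, sr) else "clip_a"
print(f"louder: {louder}, higher pitch: {higher}")
```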

## πŸ“ Data Examples

Please refer to our project page, https://av-odyssey.github.io/, to explore more examples.


### πŸ“AV-Odyssey Bench
<div align="center">
  <img src="assets/demo-1.svg" width="100%" />
</div>


## πŸ” Dataset

**License**:
```
AV-Odyssey is only used for academic research. Commercial use in any form is prohibited.
The copyright of all videos belongs to the video owners.
If there is any infringement in AV-Odyssey, please email [email protected] and we will remove it immediately.
Without prior approval, you cannot distribute, publish, copy, disseminate, or modify AV-Odyssey in whole or in part. 
You must strictly comply with the above restrictions.
```

For questions about the dataset or its license, please send an email to **[[email protected]](mailto:[email protected])**. 🌟



## πŸ† Leaderboard

### Contributing to the AV-Odyssey Leaderboard

🚨 The [Leaderboard](https://huggingface.co/spaces/AV-Odyssey/AV_Odyssey_Bench_Leaderboard) for AV-Odyssey is continuously updated, and we welcome submissions from your excellent MLLMs!
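
If you want to score a model on Deaftest before submitting, a minimal sketch of multiple-choice accuracy against the `answer` field is below; `predict` is a placeholder for your own MLLM call, and the assumption that answers are option letters such as "A" is ours, not a documented guarantee:

```python
from datasets import load_dataset

def predict(sample) -> str:
    # Placeholder: feed the interleaved question, options, images,
    # and audio clips to your MLLM and return its chosen option.
    return "A"  # assumed answer format; verify against the data

ds = load_dataset("AV-Odyssey/Deaftest_dataset", split="test")
correct = sum(predict(s) == s["answer"] for s in ds)
print(f"accuracy: {correct / len(ds):.2%} over {len(ds)} questions")
```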


## Citation

If you find our work helpful for your research, please consider citing it.


```bibtex
@misc{gong2024avodysseybenchmultimodalllms,
      title={AV-Odyssey Bench: Can Your Multimodal LLMs Really Understand Audio-Visual Information?}, 
      author={Kaixiong Gong and Kaituo Feng and Bohao Li and Yibing Wang and Mofan Cheng and Shijia Yang and Jiaming Han and Benyou Wang and Yutong Bai and Zhuoran Yang and Xiangyu Yue},
      year={2024},
      eprint={2412.02611},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2412.02611}, 
}
```