  path: deaftest.parquet
---

Official Deaftest dataset for the paper "[AV-Odyssey: Can Your Multimodal LLMs Really Understand Audio-Visual Information?]()".

🌟 For more details, please refer to the project page with data examples: [https://av-odyssey.github.io/](https://av-odyssey.github.io/).

[[🌐 Webpage](https://av-odyssey.github.io/)] [[📖 Paper]()] [[🤗 Huggingface AV-Odyssey Dataset](https://huggingface.co/datasets/AV-Odyssey/AV_Odyssey_Bench)] [[🤗 Huggingface Deaftest Dataset](https://huggingface.co/datasets/AV-Odyssey/Deaftest_dataset)] [[🏆 Leaderboard](https://huggingface.co/spaces/AV-Odyssey/AV_Odyssey_Bench_Leaderboard)]


---

## 🔥 News
* **`2024.11.24`** 🌟 We release AV-Odyssey, the first-ever comprehensive evaluation benchmark to explore whether MLLMs really understand audio-visual information.


## 👀 About AV-Odyssey

Recently, multimodal large language models (MLLMs), such as GPT-4o, Gemini 1.5 Pro, and Reka Core, have expanded their capabilities to include vision and audio modalities. While these models demonstrate impressive performance across a wide range of audio-visual applications, our proposed **DeafTest** reveals that MLLMs often struggle with simple tasks humans find trivial: 1) determining which of two sounds is louder, and 2) determining which of two sounds has a higher pitch. Motivated by these observations, we introduce **AV-Odyssey Bench**. This benchmark encompasses **26** different tasks and **4,555** carefully crafted problems, each incorporating text, visual, and audio components. All data are **newly collected and annotated by humans**, not drawn from any existing audio-visual dataset. AV-Odyssey Bench has three major features: 1. **comprehensive** audio attributes; 2. **extensive** domains; and 3. **interleaved** text, audio, and visual components.

<img src="assets/intro.png" style="zoom:50%;" />
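
The two DeafTest checks above (which of two sounds is louder, and which has a higher pitch) reduce to simple signal comparisons. As an illustrative sketch only — not the benchmark's actual evaluation code — loudness can be proxied by RMS energy and pitch by the dominant FFT frequency; the function names, pure-tone inputs, and 16 kHz sample rate below are all our own illustrative choices:

```python
import numpy as np

SR = 16_000  # sample rate in Hz; an illustrative choice, not from the dataset

def sine(freq_hz, amplitude, seconds=1.0, sr=SR):
    """Generate a pure tone at the given frequency and amplitude."""
    t = np.arange(int(seconds * sr)) / sr
    return amplitude * np.sin(2 * np.pi * freq_hz * t)

def louder(a, b):
    """Return 0 if clip `a` has higher RMS energy than clip `b`, else 1."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    return 0 if rms(a) > rms(b) else 1

def higher_pitch(a, b, sr=SR):
    """Return 0 if clip `a` has a higher dominant frequency than `b`, else 1."""
    def peak_freq(x):
        spectrum = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(len(x), d=1 / sr)
        return freqs[np.argmax(spectrum)]
    return 0 if peak_freq(a) > peak_freq(b) else 1

quiet_low = sine(220, amplitude=0.2)   # soft, low tone
loud_high = sine(880, amplitude=0.8)   # loud, high tone

print(louder(quiet_low, loud_high))        # -> 1 (second clip is louder)
print(higher_pitch(quiet_low, loud_high))  # -> 1 (second clip is higher-pitched)
```

For clean tones these two comparisons are trivial; the benchmark's finding is that MLLMs nonetheless often fail them on real audio.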

## 📝 Data Examples

Please refer to our project page [https://av-odyssey.github.io/](https://av-odyssey.github.io/) for more examples.


### 📍 AV-Odyssey Bench
<div align="center">
<img src="assets/demo-1.svg" width="100%" />
</div>


## 🔍 Dataset

**License**:
```
AV-Odyssey is only used for academic research. Commercial use in any form is prohibited.
The copyright of all videos belongs to the video owners.
If there is any infringement in AV-Odyssey, please email [email protected] and we will remove it immediately.
Without prior approval, you cannot distribute, publish, copy, disseminate, or modify AV-Odyssey in whole or in part.
You must strictly comply with the above restrictions.
```

If you have any questions, please send an email to **[[email protected]](mailto:[email protected])**. 🌟
## 🔮 Evaluation Pipeline


## 🏆 Leaderboard

### Contributing to the AV-Odyssey Leaderboard

🚨 The [Leaderboard](https://huggingface.co/spaces/AV-Odyssey/AV_Odyssey_Bench_Leaderboard) for AV-Odyssey is continuously updated, and we welcome contributions from your excellent MLLMs!