bibtex_url | proceedings | bibtext | abstract | title | authors | id | type | arxiv_id | GitHub | paper_page | n_linked_authors | upvotes | num_comments | n_authors | Models | Datasets | Spaces | old_Models | old_Datasets | old_Spaces | paper_page_exists_pre_conf
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | https://openreview.net/forum?id=NTkUXqDvlg | @inproceedings{
brennan2024using,
title={Using Unity to Help Solve Reinforcement Learning},
author={Connor Brennan and Andrew Robert Williams and Omar G. Younis and Vedant Vyas and Daria Yasafova and Irina Rish},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=NTkUXqDvlg}
} | Leveraging the depth and flexibility of XLand as well as the rapid prototyping features of the Unity engine, we present the United Unity Universe — an open-source toolkit designed to accelerate the creation of innovative reinforcement learning environments. This toolkit includes a robust implementation of XLand 2.0 complemented by a user-friendly interface which allows users to modify the details of procedurally generated terrains and task rules with ease. Additionally, we provide a curated selection of terrains and rule sets, accompanied by implementations of reinforcement learning baselines to facilitate quick experimentation with novel architectural designs for adaptive agents. Furthermore, we illustrate how the United Unity Universe serves as a high-level language that enables researchers to develop diverse and endlessly variable 3D environments within a unified framework. This functionality establishes the United Unity Universe (U3) as an essential tool for advancing the field of reinforcement learning, especially in the development of adaptive and generalizable learning systems. | Using Unity to Help Solve Reinforcement Learning | [
"Connor Brennan",
"Andrew Robert Williams",
"Omar G. Younis",
"Vedant Vyas",
"Daria Yasafova",
"Irina Rish"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=NQLZoMHm6u | @inproceedings{
deng2024newterm,
title={NewTerm: Benchmarking Real-Time New Terms for Large Language Models with Annual Updates},
author={Hexuan Deng and Wenxiang Jiao and Xuebo Liu and Min Zhang and Zhaopeng Tu},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=NQLZoMHm6u}
} | Despite their remarkable abilities in various tasks, large language models (LLMs) still struggle with real-time information (e.g., new facts and terms) due to the knowledge cutoff in their development process. However, existing benchmarks focus on outdated content and limited fields, facing difficulties in real-time updating and leaving new terms unexplored. To address this problem, we propose an adaptive benchmark, NewTerm, for real-time evaluation of new terms. We design a highly automated construction method to ensure high-quality benchmark construction with minimal human effort, allowing flexible updates for real-time information. Empirical results on various LLMs demonstrate over 20% performance reduction caused by new terms. Additionally, while updates to the knowledge cutoff of LLMs can cover some of the new terms, they are unable to generalize to more distant new terms. We also analyze which types of terms are more challenging and why LLMs struggle with new terms, paving the way for future research. Finally, we construct NewTerm 2022 and 2023 to evaluate the new terms updated each year and will continue updating annually. The benchmark and codes can be found at https://anonymous.4open.science/r/NewTerms. | NewTerm: Benchmarking Real-Time New Terms for Large Language Models with Annual Updates | [
"Hexuan Deng",
"Wenxiang Jiao",
"Xuebo Liu",
"Min Zhang",
"Zhaopeng Tu"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2410.20814 | [
"https://github.com/hexuandeng/newterm"
] | https://huggingface.co/papers/2410.20814 | 1 | 0 | 0 | 5 | [] | [
"hexuandeng/NewTerm"
] | [] | [] | [
"hexuandeng/NewTerm"
] | [] | 1 |
null | https://openreview.net/forum?id=NHob4eMg7R | @inproceedings{
jung2024scrream,
title={{SCRREAM} : {SC}an, Register, {RE}nder And Map: A Framework for Annotating Accurate and Dense 3D Indoor Scenes with a Benchmark},
author={HyunJun Jung and Weihang Li and Shun-Cheng Wu and William Bittner and Nikolas Brasch and Jifei Song and Eduardo P{\'e}rez-Pellitero and Zhensong Zhang and Arthur Moreau and Nassir Navab and Benjamin Busam},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=NHob4eMg7R}
} | Traditionally, 3d indoor datasets have generally prioritized scale over ground-truth accuracy in order to obtain improved generalization. However, using these datasets to evaluate dense geometry tasks, such as depth rendering, can be problematic as the meshes of the dataset are often incomplete and may produce wrong ground truth to evaluate the details. In this paper, we propose SCRREAM, a dataset annotation framework that allows annotation of fully dense meshes of objects in the scene and registers camera poses on the real image sequence, which can produce accurate ground truth for both sparse 3D as well as dense 3D tasks. We show the details of the dataset annotation pipeline and showcase four possible variants of datasets that can be obtained from our framework with example scenes, such as indoor reconstruction and SLAM, scene editing \& object removal, human reconstruction and 6d pose estimation. Recent pipelines for indoor reconstruction and SLAM serve as new benchmarks. In contrast to previous indoor dataset, our design allows to evaluate dense geometry tasks on eleven sample scenes against accurately rendered ground truth depth maps. | SCRREAM : SCan, Register, REnder And Map: A Framework for Annotating Accurate and Dense 3D Indoor Scenes with a Benchmark | [
"HyunJun Jung",
"Weihang Li",
"Shun-Cheng Wu",
"William Bittner",
"Nikolas Brasch",
"Jifei Song",
"Eduardo Pérez-Pellitero",
"Zhensong Zhang",
"Arthur Moreau",
"Nassir Navab",
"Benjamin Busam"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=NCaGHtbkKo | @inproceedings{
khirodkar2024harmonyd,
title={Harmony4D: A Video Dataset for In-The-Wild Close Human Interactions},
author={Rawal Khirodkar and Jyun-Ting Song and Jinkun Cao and Zhengyi Luo and Kris M. Kitani},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=NCaGHtbkKo}
} | Understanding how humans interact with each other is key to building realistic multi-human virtual reality systems. This area remains relatively unexplored due to the lack of large-scale datasets. Recent datasets focusing on this issue mainly consist of activities captured entirely in controlled indoor environments with choreographed actions, significantly affecting their diversity. To address this, we introduce Harmony4D, a multi-view video dataset for human-human interaction featuring in-the-wild activities such as wrestling, dancing, MMA, and more. We use a flexible multi-view capture system to record these dynamic activities and provide annotations for human detection, tracking, 2D/3D pose estimation, and mesh recovery for closely interacting subjects. We propose a novel markerless algorithm to track 3D human poses in severe occlusion and close interaction to obtain our annotations with minimal manual intervention. Harmony4D consists of 1.66 million images and 3.32 million human instances from more than 20 synchronized cameras with 208 video sequences spanning diverse environments and 24 unique subjects. We rigorously evaluate existing state-of-the-art methods for mesh recovery and highlight their significant limitations in modeling close interaction scenarios. Additionally, we fine-tune a pre-trained HMR2.0 model on Harmony4D and demonstrate an improved performance of 54.8% PVE in scenes with severe occlusion and contact. “Harmony—a cohesive alignment of human behaviors.” Code and data are available at https://jyuntins.github.io/harmony4d/. | Harmony4D: A Video Dataset for In-The-Wild Close Human Interactions | [
"Rawal Khirodkar",
"Jyun-Ting Song",
"Jinkun Cao",
"Zhengyi Luo",
"Kris M. Kitani"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2410.20294 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=Myc4q2g9xZ | @inproceedings{
linghu2024multimodal,
title={Multi-modal Situated Reasoning in 3D Scenes},
author={Xiongkun Linghu and Jiangyong Huang and Xuesong Niu and Xiaojian Ma and Baoxiong Jia and Siyuan Huang},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=Myc4q2g9xZ}
} | Situation awareness is essential for understanding and reasoning about 3D scenes in embodied AI agents. However, existing datasets and benchmarks for situated understanding suffer from severe limitations in data modality, scope, diversity, and scale. To address these limitations, we propose Multi-modal Situated Question Answering (MSQA), a large-scale multi-modal situated reasoning dataset, scalably collected leveraging 3D scene graphs and vision-language models (VLMs) across a diverse range of real-world 3D scenes. MSQA includes 251K situated question answering pairs across 9 distinct question categories, covering complex scenarios and object modalities within 3D scenes. We introduce a novel interleaved multi-modal input setting in our benchmark to provide texts, images, and point clouds for situation and question description, aiming to resolve ambiguity in describing situations with single-modality inputs (e.g., texts). Additionally, we devise the Multi-modal Next-step Navigation (MSNN) benchmark to evaluate models’ grounding of actions and transitions between situations. Comprehensive evaluations on reasoning and navigation tasks highlight the limitations of existing vision-language models and underscore the importance of handling multi-modal interleaved inputs and situation modeling. Experiments on data scaling and cross-domain transfer further demonstrate the effectiveness of leveraging MSQA as a pre-training dataset for developing more powerful situated reasoning models, contributing to advancements in 3D scene understanding for embodied AI. | Multi-modal Situated Reasoning in 3D Scenes | [
"Xiongkun Linghu",
"Jiangyong Huang",
"Xuesong Niu",
"Xiaojian Ma",
"Baoxiong Jia",
"Siyuan Huang"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2409.02389 | [
""
] | https://huggingface.co/papers/2409.02389 | 1 | 0 | 0 | 6 | [] | [] | [] | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=MsCSn0rlpP | @inproceedings{
bhardwaj2024the,
title={The State of Data Curation at Neur{IPS}: An Assessment of Dataset Development Practices in the Datasets and Benchmarks Track},
author={Eshta Bhardwaj and Harshit Gujral and Siyi Wu and Ciara Zogheib and Tegan Maharaj and Christoph Becker},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=MsCSn0rlpP}
} | Data curation is a field with origins in librarianship and archives, whose scholarship and thinking on data issues go back centuries, if not millennia. The field of machine learning is increasingly observing the importance of data curation to the advancement of both applications and fundamental understanding of machine learning models -- evidenced not least by the creation of the Datasets and Benchmarks track itself. This work provides an analysis of recent dataset development practices at NeurIPS through the lens of data curation. We present an evaluation framework for dataset documentation, consisting of a rubric and toolkit developed through a thorough literature review of data curation principles. We use the framework to systematically assess the strengths and weaknesses in current dataset development practices of 60 datasets published in the NeurIPS Datasets and Benchmarks track from 2021-2023. We summarize key findings and trends. Results indicate greater need for documentation about environmental footprint, ethical considerations, and data management. We suggest targeted strategies and resources to improve documentation in these areas and provide recommendations for the NeurIPS peer-review process that prioritize rigorous data curation in ML. We also provide guidelines for dataset developers on the use of our rubric as a standalone tool. Finally, we provide results in the format of a dataset that showcases aspects of recommended data curation practices. Our rubric and results are of interest for improving data curation practices broadly in the field of ML as well as to data curation and science and technology studies scholars studying practices in ML. Our aim is to support continued improvement in interdisciplinary research on dataset practices, ultimately improving the reusability and reproducibility of new datasets and benchmarks, enabling standardized and informed human oversight, and strengthening the foundation of rigorous and responsible ML research. | The State of Data Curation at NeurIPS: An Assessment of Dataset Development Practices in the Datasets and Benchmarks Track | [
"Eshta Bhardwaj",
"Harshit Gujral",
"Siyi Wu",
"Ciara Zogheib",
"Tegan Maharaj",
"Christoph Becker"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | oral | 2410.22473 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=MojU63gze2 | @inproceedings{
meier2024wildppg,
title={Wild{PPG}: A Real-World {PPG} Dataset of Long Continuous Recordings},
author={Manuel Meier and Berken Utku Demirel and Christian Holz},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=MojU63gze2}
} | Reflective photoplethysmography (PPG) has become the default sensing technique in wearable devices to monitor cardiac activity via a person’s heart rate (HR). However, PPG-based HR estimates can be substantially impacted by factors such as the wearer’s activities, sensor placement and resulting motion artifacts, as well as environmental characteristics such as temperature and ambient light. These and other factors can significantly impact and decrease HR prediction reliability. In this paper, we show that state-of-the-art HR estimation methods struggle when processing representative data from everyday activities in outdoor environments, likely because they rely on existing datasets that captured controlled conditions. We introduce a novel multimodal dataset and benchmark results for continuous PPG recordings during outdoor activities from 16 participants over 13.5 hours, captured from four wearable sensors, each worn at a different location on the body, totaling 216 hours. Our recordings include accelerometer, temperature, and altitude data, as well as a synchronized Lead I-based electrocardiogram for ground-truth HR references. Participants completed a round trip from Zurich to Jungfraujoch, a tall mountain in Switzerland over the course of one day. The trip included outdoor and indoor activities such as walking, hiking, stair climbing, eating, drinking, and resting at various temperatures and altitudes (up to 3,571 m above sea level) as well as using cars, trains, cable cars, and lifts for transport—all of which impacted participants’ physiological dynamics. We also present a novel method that estimates HR values more robustly in such real-world scenarios than existing baselines.
Dataset & code for HR estimation: https://siplab.org/projects/WildPPG | WildPPG: A Real-World PPG Dataset of Long Continuous Recordings | [
"Manuel Meier",
"Berken Utku Demirel",
"Christian Holz"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=Md1mEoPEaQ | @inproceedings{
dronen2024setlexsem,
title={{SETLEXSEM} {CHALLENGE}: Using Set Operations to Evaluate the Lexical and Semantic Robustness of Language Models},
author={Nicholas Andrew Dronen and Bardiya Akhbari and Manish Gawali},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=Md1mEoPEaQ}
} | Set theory is foundational to mathematics and, when sets are finite, to reasoning about the world. An intelligent system should perform set operations consistently, regardless of superficial variations in the operands. Initially designed for semantically-oriented NLP tasks, large language models (LLMs) are now being evaluated on algorithmic tasks. Because sets are comprised of arbitrary symbols (e.g. numbers, words), they provide an opportunity to test, systematically, the invariance of LLMs’ algorithmic abilities under simple lexical or semantic variations. To this end, we present the SETLEXSEM CHALLENGE, a synthetic benchmark that evaluates the performance of LLMs on set operations. SETLEXSEM assesses the robustness of LLMs’ instruction-following abilities under various conditions, focusing on the set operations and the nature and construction of the set members. Evaluating seven LLMs with SETLEXSEM, we find that they exhibit poor robustness to variation in both operation and operands. We show – via the framework’s systematic sampling of set members along lexical and semantic dimensions – that LLMs are not only not robust to variation along these dimensions but demonstrate unique failure modes in particular, easy-to-create semantic groupings of "deceptive" sets. We find that rigorously measuring language model robustness to variation in frequency and length is challenging and present an analysis that measures them independently. The code for reproducing the results of this paper, and for generating the SETLEXSEM CHALLENGE dataset, is available at https://github.com/amazon-science/SetLexSem-Challenge. | SETLEXSEM CHALLENGE: Using Set Operations to Evaluate the Lexical and Semantic Robustness of Language Models | [
"Nicholas Andrew Dronen",
"Bardiya Akhbari",
"Manish Gawali"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2411.07336 | [
"https://github.com/amazon-science/setlexsem-challenge"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=Mbd3QxXjq5 | @inproceedings{
toshniwal2024openmathinstruct,
title={OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset},
author={Shubham Toshniwal and Ivan Moshkov and Sean Narenthiran and Daria Gitman and Fei Jia and Igor Gitman},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=Mbd3QxXjq5}
} | Recent work has shown the immense potential of synthetically generated datasets for training large language models (LLMs), especially for acquiring targeted skills. Current large-scale math instruction tuning datasets such as MetaMathQA (Yu et al., 2024) and MAmmoTH (Yue et al., 2024) are constructed using outputs from closed-source LLMs with commercially restrictive licenses. A key reason limiting the use of open-source LLMs in these data generation pipelines has been the wide gap between the mathematical skills of the best closed-source LLMs, such as GPT-4, and the best open-source LLMs. Building on the recent progress in open-source LLMs, our proposed prompting novelty, and some brute-force scaling, we construct OpenMathInstruct-1, a math instruction tuning dataset with 1.8M problem-solution pairs. The dataset is constructed by synthesizing code-interpreter solutions for GSM8K and MATH, two popular math reasoning benchmarks, using the recently released and permissively licensed Mixtral model. Our best model, OpenMath-CodeLlama-70B, trained on a subset of OpenMathInstruct-1, achieves a score of 84.6% on GSM8K and 50.7% on MATH, which is competitive with the best gpt-distilled models. We will release our code, models, and the OpenMathInstruct-1 dataset under a commercially permissive license. | OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset | [
"Shubham Toshniwal",
"Ivan Moshkov",
"Sean Narenthiran",
"Daria Gitman",
"Fei Jia",
"Igor Gitman"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | oral | 2402.10176 | [
"https://github.com/kipok/nemo-skills"
] | https://huggingface.co/papers/2402.10176 | 4 | 35 | 2 | 6 | [
"nvidia/OpenMath-Mistral-7B-v0.1-hf",
"nvidia/OpenMath-Mistral-7B-v0.1",
"nvidia/OpenMath-CodeLlama-70b-Python-hf",
"nvidia/OpenMath-CodeLlama-7b-Python-hf",
"nvidia/OpenMath-CodeLlama-70b-Python",
"nvidia/OpenMath-Llama-2-70b",
"nvidia/OpenMath-CodeLlama-7b-Python",
"nvidia/OpenMath-CodeLlama-34b-Python",
"nvidia/OpenMath-Llama-2-70b-hf",
"nvidia/OpenMath-CodeLlama-13b-Python",
"nvidia/OpenMath-CodeLlama-13b-Python-hf",
"nvidia/OpenMath-CodeLlama-34b-Python-hf",
"RichardErkhov/nvidia_-_OpenMath-Mistral-7B-v0.1-hf-gguf",
"nold/OpenMath-Mistral-7B-v0.1-hf-GGUF",
"RichardErkhov/nvidia_-_OpenMath-CodeLlama-7b-Python-hf-4bits",
"RichardErkhov/nvidia_-_OpenMath-CodeLlama-7b-Python-hf-8bits"
] | [
"nvidia/OpenMathInstruct-1",
"kunishou/OpenMathInstruct-1-1.8m-ja",
"nvidia/OpenMath-MATH-masked",
"nvidia/OpenMath-GSM8K-masked"
] | [
"featherless-ai/try-this-model",
"Granther/try-this-model",
"Darok/Featherless-Feud",
"emekaboris/try-this-model",
"SC999/NV_Nemotron"
] | [
"nvidia/OpenMath-Mistral-7B-v0.1-hf",
"nvidia/OpenMath-Mistral-7B-v0.1",
"nvidia/OpenMath-CodeLlama-70b-Python-hf",
"nvidia/OpenMath-CodeLlama-7b-Python-hf",
"nvidia/OpenMath-CodeLlama-70b-Python",
"nvidia/OpenMath-Llama-2-70b",
"nvidia/OpenMath-CodeLlama-7b-Python",
"nvidia/OpenMath-CodeLlama-34b-Python",
"nvidia/OpenMath-Llama-2-70b-hf",
"nvidia/OpenMath-CodeLlama-13b-Python",
"nvidia/OpenMath-CodeLlama-13b-Python-hf",
"nvidia/OpenMath-CodeLlama-34b-Python-hf",
"RichardErkhov/nvidia_-_OpenMath-Mistral-7B-v0.1-hf-gguf",
"nold/OpenMath-Mistral-7B-v0.1-hf-GGUF",
"RichardErkhov/nvidia_-_OpenMath-CodeLlama-7b-Python-hf-4bits",
"RichardErkhov/nvidia_-_OpenMath-CodeLlama-7b-Python-hf-8bits"
] | [
"nvidia/OpenMathInstruct-1",
"kunishou/OpenMathInstruct-1-1.8m-ja",
"nvidia/OpenMath-MATH-masked",
"nvidia/OpenMath-GSM8K-masked"
] | [
"featherless-ai/try-this-model",
"Granther/try-this-model",
"Darok/Featherless-Feud",
"emekaboris/try-this-model",
"SC999/NV_Nemotron"
] | 1 |
null | https://openreview.net/forum?id=MYyGhe9MBg | @inproceedings{
miao2024tvsafetybench,
title={T2{VS}afetyBench: Evaluating the Safety of Text-to-Video Generative Models},
author={Yibo Miao and Yifan Zhu and Lijia Yu and Jun Zhu and Xiao-Shan Gao and Yinpeng Dong},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=MYyGhe9MBg}
} | The recent development of Sora leads to a new era in text-to-video (T2V) generation. Along with this comes the rising concern about its safety risks. The generated videos may contain illegal or unethical content, and there is a lack of comprehensive quantitative understanding of their safety, posing a challenge to their reliability and practical deployment. Previous evaluations primarily focus on the quality of video generation. While some evaluations of text-to-image models have considered safety, they cover limited aspects and do not address the unique temporal risk inherent in video generation. To bridge this research gap, we introduce T2VSafetyBench, the first comprehensive benchmark for conducting safety-critical assessments of text-to-video models. We define 4 primary categories with 14 critical aspects of video generation safety and construct a malicious prompt dataset including real-world prompts, LLM-generated prompts, and jailbreak attack-based prompts. We then conduct a thorough safety evaluation on 9 recently released T2V models. Based on our evaluation results, we draw several important findings, including: 1) no single model excels in all aspects, with different models showing various strengths; 2) the correlation between GPT-4 assessments and manual reviews is generally high; 3) there is a trade-off between the usability and safety of text-to-video generative models. This indicates that as the field of video generation rapidly advances, safety risks are set to surge, highlighting the urgency of prioritizing video safety. We hope that T2VSafetyBench can provide insights for better understanding the safety of video generation in the era of generative AIs. Our code is publicly available at \url{https://github.com/yibo-miao/T2VSafetyBench}. | T2VSafetyBench: Evaluating the Safety of Text-to-Video Generative Models | [
"Yibo Miao",
"Yifan Zhu",
"Lijia Yu",
"Jun Zhu",
"Xiao-Shan Gao",
"Yinpeng Dong"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2407.05965 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=MUnPBKBaCY | @inproceedings{
hu2024noisy,
title={Noisy Ostracods: A Fine-Grained, Imbalanced Real-World Dataset for Benchmarking Robust Machine Learning and Label Correction Methods},
author={Jiamian Hu and Hong Yuanyuan and Yihua Chen and He Wang and Moriaki Yasuhara},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=MUnPBKBaCY}
} | We present the Noisy Ostracods, a noisy dataset for genus and species classification of crustacean ostracods with specialists’ annotations. Of the 71,466 specimens collected, 5.58% are estimated to be noisy (possibly problematic) at the genus level. The dataset was created to address a real-world challenge: creating a clean fine-grained taxonomy dataset. The Noisy Ostracods dataset has diverse noise from multiple sources. Firstly, the noise is open-set, including new classes discovered during curation that were not part of the original annotation. The dataset has pseudo-classes, where annotators misclassified samples that should belong to an existing class into a new pseudo-class. The Noisy Ostracods dataset is highly imbalanced, with an imbalance factor ρ = 22429. This presents a unique challenge for robust machine learning methods, as existing approaches have not been extensively evaluated on fine-grained classification tasks with such diverse real-world noise. Initial experiments using current robust learning techniques have not yielded significant performance improvements on the Noisy Ostracods dataset compared to cross-entropy training on the raw, noisy data. On the other hand, noise detection methods have underperformed in error hit rate compared to naive cross-validation ensembling for identifying problematic labels. These findings suggest that the fine-grained, imbalanced nature, and complex noise characteristics of the dataset present considerable challenges for existing noise-robust algorithms. By openly releasing the Noisy Ostracods dataset, our goal is to encourage further research into the development of noise-resilient machine learning methods capable of effectively handling diverse, real-world noise in fine-grained classification tasks. The dataset, along with its evaluation protocols, can be accessed at https://github.com/H-Jamieu/Noisy_ostracods. | Noisy Ostracods: A Fine-Grained, Imbalanced Real-World Dataset for Benchmarking Robust Machine Learning and Label Correction Methods | [
"Jiamian Hu",
"Hong Yuanyuan",
"Yihua Chen",
"He Wang",
"Moriaki Yasuhara"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=MU2s9wwWLo | @inproceedings{
wu2024conceptmix,
title={ConceptMix: A Compositional Image Generation Benchmark with Controllable Difficulty},
author={Xindi Wu and Dingli Yu and Yangsibo Huang and Olga Russakovsky and Sanjeev Arora},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=MU2s9wwWLo}
} | Compositionality is a critical capability in Text-to-Image (T2I) models, as it reflects their ability to understand and combine multiple concepts from text descriptions. Existing evaluations of compositional capability rely heavily on human-designed text prompts or fixed templates, limiting their diversity and complexity, and yielding low discriminative power. We propose ConceptMix, a scalable, controllable, and customizable benchmark which automatically evaluates compositional generation ability of T2I models. This is done in two stages. First, ConceptMix generates the text prompts: concretely, using categories of visual concepts (e.g., objects, colors, shapes, spatial relationships), it randomly samples an object and k-tuples of visual concepts, then uses GPT-4o to generate text prompts for image generation based on these sampled concepts. Second, ConceptMix evaluates the images generated in response to these prompts: concretely, it checks how many of the k concepts actually appeared in the image by generating one question per visual concept and using a strong VLM to answer them. Through administering ConceptMix to a diverse set of T2I models (proprietary as well as open ones) using increasing values of k, we show that our ConceptMix has higher discrimination power than earlier benchmarks. Specifically, ConceptMix reveals that the performance of several models, especially open models, drops dramatically with increased k. Importantly, it also provides insight into the lack of prompt diversity in widely-used training datasets. Additionally, we conduct extensive human studies to validate the design of ConceptMix and compare our automatic grading with human judgement. We hope it will guide future T2I model development. | ConceptMix: A Compositional Image Generation Benchmark with Controllable Difficulty | [
"Xindi Wu",
"Dingli Yu",
"Yangsibo Huang",
"Olga Russakovsky",
"Sanjeev Arora"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2408.14339 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=MS4oxVfBHn | @inproceedings{
hui2024uda,
title={{UDA}: A Benchmark Suite for Retrieval Augmented Generation in Real-World Document Analysis},
author={Yulong Hui and Yao Lu and Huanchen Zhang},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=MS4oxVfBHn}
} | The use of Retrieval-Augmented Generation (RAG) has improved Large Language Models (LLMs) in collaborating with external data, yet significant challenges exist in real-world scenarios. In areas such as academic literature and finance question answering, data are often found in raw text and tables in HTML or PDF formats, which can be lengthy and highly unstructured. In this paper, we introduce a benchmark suite, namely Unstructured Document Analysis (UDA), that involves 2,965 real-world documents and 29,590 expert-annotated Q&A pairs. We revisit popular LLM- and RAG-based solutions for document analysis and evaluate the design choices and answer qualities across multiple document domains and diverse query types. Our evaluation yields interesting findings and highlights the importance of data parsing and retrieval. We hope our benchmark can shed light and better serve real-world document analysis applications. The benchmark suite and code can be found at https://github.com/qinchuanhui/UDA-Benchmark | UDA: A Benchmark Suite for Retrieval Augmented Generation in Real-World Document Analysis | [
"Yulong Hui",
"Yao Lu",
"Huanchen Zhang"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2406.15187 | [
"https://github.com/qinchuanhui/uda-benchmark"
] | https://huggingface.co/papers/2406.15187 | 0 | 0 | 0 | 3 | [] | [] | [] | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=M91nJrBrqG | @inproceedings{
hao2024is,
title={Is Your {HD} Map Constructor Reliable under Sensor Corruptions?},
author={Xiaoshuai Hao and Mengchuan Wei and Yifan Yang and Haimei Zhao and Hui Zhang and Yi ZHOU and Qiang Wang and Weiming Li and Lingdong Kong and Jing Zhang},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=M91nJrBrqG}
} | Driving systems often rely on high-definition (HD) maps for precise environmental information, which is crucial for planning and navigation. While current HD map constructors perform well under ideal conditions, their resilience to real-world challenges, \eg, adverse weather and sensor failures, is not well understood, raising safety concerns. This work introduces MapBench, the first comprehensive benchmark designed to evaluate the robustness of HD map construction methods against various sensor corruptions. Our benchmark encompasses a total of 29 types of corruptions that occur from cameras and LiDAR sensors. Extensive evaluations across 31 HD map constructors reveal significant performance degradation of existing methods under adverse weather conditions and sensor failures, underscoring critical safety concerns. We identify effective strategies for enhancing robustness, including innovative approaches that leverage multi-modal fusion, advanced data augmentation, and architectural techniques. These insights provide a pathway for developing more reliable HD map construction methods, which are essential for the advancement of autonomous driving technology. The benchmark toolkit and affiliated code and model checkpoints have been made publicly accessible. | Is Your HD Map Constructor Reliable under Sensor Corruptions? | [
"Xiaoshuai Hao",
"Mengchuan Wei",
"Yifan Yang",
"Haimei Zhao",
"Hui Zhang",
"Yi ZHOU",
"Qiang Wang",
"Weiming Li",
"Lingdong Kong",
"Jing Zhang"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2406.12214 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=M5JW7O9vc7 | @inproceedings{
zhao2024textttmodelglue,
title={Model-{GLUE}: Democratized {LLM} Scaling for A Large Model Zoo in the Wild},
author={Xinyu Zhao and Guoheng Sun and Ruisi Cai and Yukun Zhou and Pingzhi Li and Peihao Wang and Bowen Tan and Yexiao He and Li Chen and Yi Liang and Beidi Chen and Binhang Yuan and Hongyi Wang and Ang Li and Zhangyang Wang and Tianlong Chen},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=M5JW7O9vc7}
} | As Large Language Models (LLMs) excel across tasks and specialized domains, scaling LLMs based on existing models has gained significant attention, which is challenged by potential performance drop when combining disparate models.
Various techniques have been proposed to aggregate pre-trained LLMs, including model merging, Mixture-of-Experts, and stacking. Despite their merits, a comprehensive comparison and synergistic application of them to a diverse model zoo is yet to be adequately addressed.
In light of this research gap, this paper introduces $\texttt{Model-GLUE}$, a holistic LLM scaling guideline.
First, our work starts with a benchmarking of existing LLM scaling techniques, especially selective merging, and variants of mixture.
Utilizing the insights from the benchmark results, we formulate a strategy for the selection and aggregation of a heterogeneous model zoo characterizing different architectures and initialization.
Our methodology involves clustering mergeable models, selecting a merging strategy, and integrating model clusters through model-level mixture. Finally, evidenced by our experiments on a diverse Llama-2-based model zoo, $\texttt{Model-GLUE}$ shows an average performance enhancement of 5.61\%, achieved without additional training.
Codes are available at https://github.com/Model-GLUE/Model-GLUE. | Model-GLUE: Democratized LLM Scaling for A Large Model Zoo in the Wild | [
"Xinyu Zhao",
"Guoheng Sun",
"Ruisi Cai",
"Yukun Zhou",
"Pingzhi Li",
"Peihao Wang",
"Bowen Tan",
"Yexiao He",
"Li Chen",
"Yi Liang",
"Beidi Chen",
"Binhang Yuan",
"Hongyi Wang",
"Ang Li",
"Zhangyang Wang",
"Tianlong Chen"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
"https://github.com/model-glue/model-glue"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=M32Ldpp4Oy | @inproceedings{
li2024logicity,
title={LogiCity: Advancing Neuro-Symbolic {AI} with Abstract Urban Simulation},
author={Bowen Li and Zhaoyu Li and Qiwei Du and Jinqi Luo and Wenshan Wang and Yaqi Xie and Simon Stepputtis and Chen Wang and Katia P. Sycara and Pradeep Kumar Ravikumar and Alexander G. Gray and Xujie Si and Sebastian Scherer},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=M32Ldpp4Oy}
} | Recent years have witnessed the rapid development of Neuro-Symbolic (NeSy) AI systems, which integrate symbolic reasoning into deep neural networks.
However, most of the existing benchmarks for NeSy AI fail to provide long-horizon reasoning tasks with complex multi-agent interactions.
Furthermore, they are usually constrained by fixed and simplistic logical rules over limited entities, making them far from real-world complexities.
To address these crucial gaps, we introduce LogiCity, the first simulator based on customizable first-order logic (FOL) for an urban-like environment with multiple dynamic agents.
LogiCity models diverse urban elements using semantic and spatial concepts, such as $\texttt{IsAmbulance}(\texttt{X})$ and $\texttt{IsClose}(\texttt{X}, \texttt{Y})$.
These concepts are used to define FOL rules that govern the behavior of various agents.
Since the concepts and rules are abstractions, they can be universally applied to cities with any agent compositions, facilitating the instantiation of diverse scenarios.
Besides, a key feature of LogiCity is its support for user-configurable abstractions, enabling customizable simulation complexities for logical reasoning.
To explore various aspects of NeSy AI, LogiCity introduces two tasks, one features long-horizon sequential decision-making, and the other focuses on one-step visual reasoning, varying in difficulty and agent behaviors.
Our extensive evaluation reveals the advantage of NeSy frameworks in abstract reasoning.
Moreover, we highlight the significant challenges of handling more complex abstractions in long-horizon multi-agent scenarios or under high-dimensional, imbalanced data.
With its flexible design, various features, and newly raised challenges, we believe LogiCity represents a pivotal step forward in advancing the next generation of NeSy AI.
All the code and data are open-sourced at our website. | LogiCity: Advancing Neuro-Symbolic AI with Abstract Urban Simulation | [
"Bowen Li",
"Zhaoyu Li",
"Qiwei Du",
"Jinqi Luo",
"Wenshan Wang",
"Yaqi Xie",
"Simon Stepputtis",
"Chen Wang",
"Katia P. Sycara",
"Pradeep Kumar Ravikumar",
"Alexander G. Gray",
"Xujie Si",
"Sebastian Scherer"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2411.00773 | [
"https://github.com/Jaraxxus-Me/LogiCity"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=LdxNWDNvC3 | @inproceedings{
liu2024afbench,
title={{AFB}ench: A Large-scale Benchmark for Airfoil Design},
author={Jian Liu and Jianyu Wu and Hairun Xie and Guoqing zhang and Jing Wang and Liu Wei and Wanli Ouyang and Junjun Jiang and Xianming Liu and SHIXIANG TANG and Miao Zhang},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=LdxNWDNvC3}
} | Data-driven generative models have emerged as promising approaches towards achieving efficient mechanical inverse design. However, due to prohibitively high cost in time and money, there is still lack of open-source and large-scale benchmarks in this field. It is mainly the case for airfoil inverse design, which requires to generate and edit diverse geometric-qualified and aerodynamic-qualified airfoils following the multimodal instructions, \emph{i.e.,} dragging points and physical parameters. This paper presents the open-source endeavors in airfoil inverse design, \emph{AFBench}, including a large-scale dataset with 200 thousand airfoils and high-quality aerodynamic and geometric labels, two novel and practical airfoil inverse design tasks, \emph{i.e.,} conditional generation on multimodal physical parameters, controllable editing, and comprehensive metrics to evaluate various existing airfoil inverse design methods. Our aim is to establish \emph{AFBench} as an ecosystem for training and evaluating airfoil inverse design methods, with a specific focus on data-driven controllable inverse design models by multimodal instructions capable of bridging the gap between ideas and execution, the academic research and industrial applications. We have provided baseline models, comprehensive experimental observations, and analysis to accelerate future research. Our baseline model is trained on an RTX 3090 GPU within 16 hours. The codebase, datasets and benchmarks will be available at \url{https://hitcslj.github.io/afbench/}. | AFBench: A Large-scale Benchmark for Airfoil Design | [
"Jian Liu",
"Jianyu Wu",
"Hairun Xie",
"Guoqing zhang",
"Jing Wang",
"Liu Wei",
"Wanli Ouyang",
"Junjun Jiang",
"Xianming Liu",
"SHIXIANG TANG",
"Miao Zhang"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2406.18846 | [
"https://github.com/hitcslj/afbench"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=LdRZ9SFBku | @inproceedings{
gong2024uknow,
title={{UK}now: A Unified Knowledge Protocol with Multimodal Knowledge Graph Datasets for Reasoning and Vision-Language Pre-Training},
author={Biao Gong and Shuai Tan and Yutong Feng and Xiaoying Xie and Yuyuan Li and Chaochao Chen and Kecheng Zheng and Yujun Shen and Deli Zhao},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=LdRZ9SFBku}
} | This work presents a unified knowledge protocol, called UKnow, which facilitates knowledge-based studies from the perspective of data. Particularly focusing on visual and linguistic modalities, we categorize data knowledge into five unit types, namely, in-image, in-text, cross-image, cross-text, and image-text, and set up an efficient pipeline to help construct the multimodal knowledge graph from any data collection. Thanks to the logical information naturally contained in knowledge graph, organizing datasets under UKnow format opens up more possibilities of data usage compared to the commonly used image-text pairs. Following UKnow protocol, we collect, from public international news, a large-scale multimodal knowledge graph dataset that consists of 1,388,568 nodes (with 571,791 vision-related ones) and 3,673,817 triplets. The dataset is also annotated with rich event tags, including 11 coarse labels and 9,185 fine labels. Experiments on four benchmarks demonstrate the potential of UKnow in supporting common-sense reasoning and boosting vision-language pre-training with a single dataset, benefiting from its unified form of knowledge organization. Code, dataset, and models will be made publicly available. See Appendix to download the dataset. | UKnow: A Unified Knowledge Protocol with Multimodal Knowledge Graph Datasets for Reasoning and Vision-Language Pre-Training | [
"Biao Gong",
"Shuai Tan",
"Yutong Feng",
"Xiaoying Xie",
"Yuyuan Li",
"Chaochao Chen",
"Kecheng Zheng",
"Yujun Shen",
"Deli Zhao"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2302.06891 | [
""
] | https://huggingface.co/papers/2302.06891 | 2 | 0 | 0 | 6 | [] | [] | [] | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=LXgbgMOygH | @inproceedings{
xue2024neuralplane,
title={NeuralPlane: An Efficiently Parallelizable Platform for Fixed-wing Aircraft Control with Reinforcement Learning},
author={Chuanyi Xue and Qihan Liu and Xiaoteng Ma and Xinyao Qin and Ning Gui and Yang Qi and Jinsheng Ren and Bin Liang and Jun Yang},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=LXgbgMOygH}
} | Reinforcement learning (RL) demonstrates superior potential over traditional flight control methods for fixed-wing aircraft, particularly under extreme operational conditions. However, the high demand for training samples and the lack of efficient computation in existing simulators hinder its further application. In this paper, we introduce NeuralPlane, the first benchmark platform for large-scale parallel simulations of fixed-wing aircraft. NeuralPlane significantly boosts high-fidelity simulation via GPU-accelerated Flight Dynamics Model (FDM) computation, achieving a single-step simulation time of just 0.2 seconds at a parallel scale of $10^{6}$, far exceeding current platforms. We also provide clear code templates, comprehensive evaluation/visualization tools and hierarchical frameworks for integrating RL and traditional control methods. We believe that NeuralPlane can accelerate the development of RL-based fixed-wing flight control and serve as a new challenging benchmark for the RL community. Our NeuralPlane is open-source and accessible at https://github.com/xuecy22/NeuralPlane. | NeuralPlane: An Efficiently Parallelizable Platform for Fixed-wing Aircraft Control with Reinforcement Learning | [
"Chuanyi Xue",
"Qihan Liu",
"Xiaoteng Ma",
"Xinyao Qin",
"Ning Gui",
"Yang Qi",
"Jinsheng Ren",
"Bin Liang",
"Jun Yang"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=LOcLhezm1C | @inproceedings{
saul2024is,
title={Is Function Similarity Over-Engineered? Building a Benchmark},
author={Rebecca Saul and Chang Liu and Noah Fleischmann and Richard J Zak and Kristopher Micinski and Edward Raff and James Holt},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=LOcLhezm1C}
} | Binary analysis is a core component of many critical security tasks, including reverse engineering, malware analysis, and vulnerability detection. Manual analysis is often time-consuming, but identifying commonly-used or previously-seen functions can reduce the time it takes to understand a new file. However, given the complexity of assembly, and the NP-hard nature of determining function equivalence, this task is extremely difficult. Common approaches often use sophisticated disassembly and decompilation tools, graph analysis, and other expensive pre-processing steps to perform function similarity searches over some corpus. In this work, we identify a number of discrepancies between the current research environment and the underlying application need. To remedy this, we build a new benchmark, REFuSe-Bench, for binary function similarity detection consisting of high-quality datasets and tests that better reflect real-world use cases. In doing so, we address issues like data duplication and accurate labeling, experiment with real malware, and perform the first serious evaluation of ML binary function similarity models on Windows data. Our benchmark reveals that a new, simple baseline — one which looks at only the raw bytes of a function, and requires no disassembly or other pre-processing --- is able to achieve state-of-the-art performance in multiple settings. Our findings challenge conventional assumptions that complex models with highly-engineered features are being used to their full potential, and demonstrate that simpler approaches can provide significant value. | Is Function Similarity Over-Engineered? Building a Benchmark | [
"Rebecca Saul",
"Chang Liu",
"Noah Fleischmann",
"Richard J Zak",
"Kristopher Micinski",
"Edward Raff",
"James Holt"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2410.22677 | [
"https://github.com/FutureComputing4AI/Reverse-Engineering-Function-Search"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=LFCWIE5iS2 | @inproceedings{
sivakumar2024emgqwerty,
title={emg2qwerty: A Large Dataset with Baselines for Touch Typing using Surface Electromyography},
author={Viswanath Sivakumar and Jeffrey Seely and Alan Du and Sean R Bittner and Adam Berenzweig and Anuoluwapo Bolarinwa and Alexandre Gramfort and Michael I Mandel},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=LFCWIE5iS2}
} | Surface electromyography (sEMG) non-invasively measures signals generated by muscle activity with sufficient sensitivity to detect individual spinal neurons and richness to identify dozens of gestures and their nuances. Wearable wrist-based sEMG sensors have the potential to offer low friction, subtle, information rich, always available human-computer inputs. To this end, we introduce emg2qwerty, a large-scale dataset of non-invasive electromyographic signals recorded at the wrists while touch typing on a QWERTY keyboard, together with ground-truth annotations and reproducible baselines. With 1,135 sessions spanning 108 users and 346 hours of recording, this is the largest such public dataset to date. These data demonstrate non-trivial, but well defined hierarchical relationships both in terms of the generative process, from neurons to muscles and muscle combinations, as well as in terms of domain shift across users and user sessions. Applying standard modeling techniques from the closely related field of Automatic Speech Recognition (ASR), we show strong baseline performance on predicting key-presses using sEMG signals alone. We believe the richness of this task and dataset will facilitate progress in several problems of interest to both the machine learning and neuroscientific communities. Dataset and code can be accessed at https://github.com/facebookresearch/emg2qwerty. | emg2qwerty: A Large Dataset with Baselines for Touch Typing using Surface Electromyography | [
"Viswanath Sivakumar",
"Jeffrey Seely",
"Alan Du",
"Sean R Bittner",
"Adam Berenzweig",
"Anuoluwapo Bolarinwa",
"Alexandre Gramfort",
"Michael I Mandel"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2410.20081 | [
"https://github.com/facebookresearch/emg2qwerty"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=LC1QAqhePv | @inproceedings{
zhang2024sciinstruct,
title={SciInstruct: a Self-Reflective Instruction Annotated Dataset for Training Scientific Language Models},
author={Dan Zhang and Ziniu Hu and Sining Zhoubian and Zhengxiao Du and Kaiyu Yang and Zihan Wang and Yisong Yue and Yuxiao Dong and Jie Tang},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=LC1QAqhePv}
} | Large Language Models (LLMs) have shown promise in assisting scientific discovery. However, such applications are currently limited by LLMs' deficiencies in understanding intricate scientific concepts, deriving symbolic equations, and solving advanced numerical calculations. To bridge these gaps, we introduce SciInstruct, a suite of scientific instructions for training scientific language models capable of college-level scientific reasoning. Central to our approach is a novel self-reflective instruction annotation framework to address the data scarcity challenge in the science domain. This framework leverages existing LLMs to generate step-by-step reasoning for unlabelled scientific questions, followed by a process of self-reflective critic-and-revise. Applying this framework, we curated a diverse and high-quality dataset encompassing physics, chemistry, math, and formal proofs. We analyze the curated SciInstruct from multiple interesting perspectives (e.g., domain, scale, source, question type, answer length, etc.). To verify the effectiveness of SciInstruct, we fine-tuned different language models with SciInstruct, i.e., ChatGLM3 (6B and 32B), Llama3-8B-Instruct, and Mistral-7B: MetaMath, enhancing their scientific and mathematical reasoning capabilities, without sacrificing the language understanding capabilities of the base model. We release all codes and SciInstruct at https://github.com/THUDM/SciGLM. | SciInstruct: a Self-Reflective Instruction Annotated Dataset for Training Scientific Language Models | [
"Dan Zhang",
"Ziniu Hu",
"Sining Zhoubian",
"Zhengxiao Du",
"Kaiyu Yang",
"Zihan Wang",
"Yisong Yue",
"Yuxiao Dong",
"Jie Tang"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2401.07950 | [
"https://github.com/thudm/sciglm"
] | https://huggingface.co/papers/2401.07950 | 0 | 4 | 0 | 9 | [
"zd21/SciGLM-6B"
] | [] | [] | [
"zd21/SciGLM-6B"
] | [] | [] | 1 |
null | https://openreview.net/forum?id=L4yLhMjCOR | @inproceedings{
ahmed2024dcompat,
title={3{DC}o{MP}aT200: Language Grounded Large-Scale 3D Vision Dataset for Compositional Recognition},
author={Mahmoud Ahmed and Xiang Li and Arpit Prajapati and Mohamed Elhoseiny},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=L4yLhMjCOR}
} | Understanding objects in 3D at the part level is essential for humans and robots to navigate and interact with the environment. Current datasets for part-level 3D object understanding encompass a limited range of categories. For instance, the ShapeNet-Part and PartNet datasets only include 16, and 24 object categories respectively. The 3DCoMPaT dataset, specifically designed for compositional understanding of parts and materials, contains only 42 object categories. To foster richer and fine-grained part-level 3D understanding, we introduce 3DCoMPaT200, a large-scale dataset tailored for compositional understanding of object parts and materials, with 200 object categories with approximately 5 times larger object vocabulary compared to 3DCoMPaT and almost 4 times larger part categories. Concretely, 3DCoMPaT200 significantly expands upon 3DCoMPaT, featuring 1,031 fine-grained part categories and 293 distinct material classes for compositional application to 3D object parts. Additionally, to address the complexities of compositional 3D modeling, we propose a novel task of Compositional Part Shape Retrieval using ULIP to provide a strong 3D foundational model for 3D Compositional Understanding. This method evaluates the model shape retrieval performance given one, three, or six parts described in text format. These results show that the model's performance improves with an increasing number of style compositions, highlighting the critical role of the compositional dataset. Such results underscore the dataset's effectiveness in enhancing models' capability to understand complex 3D shapes from a compositional perspective. Code and Data can be found here: https://github.com/3DCoMPaT200/3DCoMPaT200/ | 3DCoMPaT200: Language Grounded Large-Scale 3D Vision Dataset for Compositional Recognition | [
"Mahmoud Ahmed",
"Xiang Li",
"Arpit Prajapati",
"Mohamed Elhoseiny"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=L0oSfTroNE | @inproceedings{
ye2024benchmarking,
title={Benchmarking {LLM}s via Uncertainty Quantification},
author={Fanghua Ye and Mingming Yang and Jianhui Pang and Longyue Wang and Derek F. Wong and Emine Yilmaz and Shuming Shi and Zhaopeng Tu},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=L0oSfTroNE}
} | The proliferation of open-source Large Language Models (LLMs) from various institutions has highlighted the urgent need for comprehensive evaluation methods. However, current evaluation platforms, such as the widely recognized HuggingFace open LLM leaderboard, neglect a crucial aspect -- uncertainty, which is vital for thoroughly assessing LLMs. To bridge this gap, we introduce a new benchmarking approach for LLMs that integrates uncertainty quantification. Our examination involves nine LLMs (LLM series) spanning five representative natural language processing tasks. Our findings reveal that: I) LLMs with higher accuracy may exhibit lower certainty; II) Larger-scale LLMs may display greater uncertainty compared to their smaller counterparts; and III) Instruction-finetuning tends to increase the uncertainty of LLMs. These results underscore the significance of incorporating uncertainty in the evaluation of LLMs. Our implementation is available at https://github.com/smartyfh/LLM-Uncertainty-Bench. | Benchmarking LLMs via Uncertainty Quantification | [
"Fanghua Ye",
"Mingming Yang",
"Jianhui Pang",
"Longyue Wang",
"Derek F. Wong",
"Emine Yilmaz",
"Shuming Shi",
"Zhaopeng Tu"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2401.12794 | [
"https://github.com/smartyfh/llm-uncertainty-bench"
] | https://huggingface.co/papers/2401.12794 | 0 | 1 | 0 | 8 | [] | [] | [] | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=Kxta8IInyN | @inproceedings{
yao2024clave,
title={{CLAVE}: An Adaptive Framework for Evaluating Values of {LLM} Generated Responses},
author={Jing Yao and Xiaoyuan Yi and Xing Xie},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=Kxta8IInyN}
} | The rapid progress in Large Language Models (LLMs) poses potential risks such as generating unethical content. Assessing the values embedded in LLMs' generated responses can help expose their misalignment, but this relies on reference-free value evaluators, e.g. fine-tuned LLMs or closed-source models like GPT-4. Nevertheless, two key challenges emerge in open-ended value evaluation: the evaluator should adapt to changing human value definitions with minimal annotation, against their own bias (adaptability); and remain robust across varying value expressions and scenarios (generalizability). To handle these challenges, we introduce CLAVE, a novel framework that integrates two complementary LLMs: a large model to extract high-level value concepts from diverse responses, leveraging its extensive knowledge and generalizability, and a small model fine-tuned on these concepts to adapt to human value annotations. This dual-model framework enables adaptation to any value system using <100 human-labeled samples per value type. We also present ValEval, a comprehensive dataset comprising 13k+ (text,value,label) tuples across diverse domains, covering three major value systems. We benchmark the performance of 15+ popular LLM evaluators and fully analyze their strengths and weaknesses. Our findings reveal that CLAVE combining a large prompt-based model and a small fine-tuned one serves as an optimal balance in value evaluation. | CLAVE: An Adaptive Framework for Evaluating Values of LLM Generated Responses | [
"Jing Yao",
"Xiaoyuan Yi",
"Xing Xie"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2407.10725 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=KoSSEp6Du5 | @inproceedings{
liu2024et,
title={E.T. Bench: Towards Open-Ended Event-Level Video-Language Understanding},
author={Ye Liu and Zongyang Ma and Zhongang Qi and Yang Wu and Ying Shan and Chang Wen Chen},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=KoSSEp6Du5}
} | Recent advances in Video Large Language Models (Video-LLMs) have demonstrated their great potential in general-purpose video understanding. To verify the significance of these models, a number of benchmarks have been proposed to diagnose their capabilities in different scenarios. However, existing benchmarks merely evaluate models through video-level question-answering, lacking fine-grained event-level assessment and task diversity. To fill this gap, we introduce E.T. Bench (Event-Level & Time-Sensitive Video Understanding Benchmark), a large-scale and high-quality benchmark for open-ended event-level video understanding. Categorized within a 3-level task taxonomy, E.T. Bench encompasses 7.3K samples under 12 tasks with 7K videos (251.4h total length) under 8 domains, providing comprehensive evaluations. We extensively evaluated 8 Image-LLMs and 12 Video-LLMs on our benchmark, and the results reveal that state-of-the-art models for coarse-level (video-level) understanding struggle to solve our fine-grained tasks, e.g., grounding event-of-interests within videos, largely due to the short video context length, improper time representations, and lack of multi-event training data. Focusing on these issues, we further propose a strong baseline model, E.T. Chat, together with an instruction-tuning dataset E.T. Instruct 164K tailored for fine-grained event-level understanding. Our simple but effective solution demonstrates superior performance in multiple scenarios. | E.T. Bench: Towards Open-Ended Event-Level Video-Language Understanding | [
"Ye Liu",
"Zongyang Ma",
"Zhongang Qi",
"Yang Wu",
"Ying Shan",
"Chang Wen Chen"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2409.18111 | [
"https://github.com/PolyU-ChenLab/ETBench"
] | https://huggingface.co/papers/2409.18111 | 3 | 5 | 2 | 6 | [
"PolyU-ChenLab/ETChat-Phi3-Mini-Stage-1",
"PolyU-ChenLab/ETChat-Phi3-Mini-Stage-2",
"PolyU-ChenLab/ETChat-Phi3-Mini-Stage-3"
] | [
"PolyU-ChenLab/ET-Instruct-164K",
"PolyU-ChenLab/ETBench"
] | [] | [
"PolyU-ChenLab/ETChat-Phi3-Mini-Stage-1",
"PolyU-ChenLab/ETChat-Phi3-Mini-Stage-2",
"PolyU-ChenLab/ETChat-Phi3-Mini-Stage-3"
] | [
"PolyU-ChenLab/ET-Instruct-164K",
"PolyU-ChenLab/ETBench"
] | [] | 1 |
null | https://openreview.net/forum?id=Km2XEjH0I5 | @inproceedings{
caciularu2024tact,
title={{TACT}: Advancing Complex Aggregative Reasoning with Information Extraction Tools},
author={Avi Caciularu and Alon Jacovi and Eyal Ben-David and Sasha Goldshtein and Tal Schuster and Jonathan Herzig and Gal Elidan and Amir Globerson},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=Km2XEjH0I5}
} | Large Language Models (LLMs) often do not perform well on queries that require the aggregation of information across texts. To better evaluate this setting and facilitate modeling efforts, we introduce TACT - Text And Calculations through Tables, a dataset crafted to evaluate LLMs' reasoning and computational abilities using complex instructions. TACT contains challenging instructions that demand stitching information scattered across one or more texts, and performing complex integration on this information to generate the answer. We construct this dataset by leveraging an existing dataset of texts and their associated tables. For each such table, we formulate new queries and gather their respective answers. We demonstrate that all contemporary LLMs perform poorly on this dataset, achieving an accuracy below 38%. To pinpoint the difficulties and thoroughly dissect the problem, we analyze model performance across three components: table-generation, Pandas command-generation, and execution. Unexpectedly, we discover that each component presents substantial challenges for current LLMs. These insights lead us to propose a focused modeling framework, which we refer to as _IE as a tool_. Specifically, we propose to add "tools" for each of the above steps, and implement each such tool with few-shot prompting. This approach shows an improvement over existing prompting techniques, offering a promising direction for enhancing model capabilities in these tasks. | TACT: Advancing Complex Aggregative Reasoning with Information Extraction Tools | [
"Avi Caciularu",
"Alon Jacovi",
"Eyal Ben-David",
"Sasha Goldshtein",
"Tal Schuster",
"Jonathan Herzig",
"Gal Elidan",
"Amir Globerson"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2406.03618 | [
""
] | https://huggingface.co/papers/2406.03618 | 2 | 2 | 0 | 8 | [] | [
"google/TACT"
] | [] | [] | [
"google/TACT"
] | [] | 1 |
null | https://openreview.net/forum?id=KgeQqLI7OD | @inproceedings{
yejinchoi2024towards,
title={Towards Visual Text Design Transfer Across Languages},
author={Yejinchoi and Jiwan Chung and Sumin Shim and Giyeong Oh and Youngjae Yu},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=KgeQqLI7OD}
} | Visual text design plays a critical role in conveying themes, emotions, and atmospheres in multimodal formats such as film posters and album covers. Translating these visual and textual elements across languages extends the concept of translation beyond mere text, requiring the adaptation of aesthetic and stylistic features. To address this, we introduce a novel task of Multimodal Style Translation (MuST-Bench), a benchmark designed to evaluate the ability of visual text generation models to perform translation across different writing systems while preserving design intent.
Our initial experiments on MuST-Bench reveal that existing visual text generation models struggle with the proposed task due to the inadequacy of textual descriptions in conveying visual design.
In response, we introduce SIGIL, a framework for multimodal style translation that eliminates the need for style descriptions.
SIGIL enhances image generation models through three innovations: glyph latent for multilingual settings, pre-trained VAEs for stable style guidance, and an OCR model with reinforcement learning feedback for optimizing readable character generation. SIGIL outperforms existing baselines by achieving superior style consistency and legibility while maintaining visual fidelity, setting itself apart from traditional description-based approaches. We release MuST-Bench publicly for broader use and exploration https://huggingface.co/datasets/yejinc/MuST-Bench. | Towards Visual Text Design Transfer Across Languages | [
"Yejinchoi",
"Jiwan Chung",
"Sumin Shim",
"Giyeong Oh",
"Youngjae Yu"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2410.18823 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=KZlJF8kguO | @inproceedings{
wang2024brain,
title={Brain Treebank: Large-scale intracranial recordings from naturalistic language stimuli},
author={Christopher Wang and Adam Uri Yaari and Aaditya K Singh and Vighnesh Subramaniam and Dana Rosenfarb and Jan DeWitt and Pranav Misra and Joseph R. Madsen and Scellig Stone and Gabriel Kreiman and Boris Katz and Ignacio Cases and Andrei Barbu},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=KZlJF8kguO}
} | We present the Brain Treebank, a large-scale dataset of electrophysiological neural responses, recorded from intracranial probes while 10 subjects watched one or more Hollywood movies. Subjects watched on average 2.6 Hollywood movies, for an average viewing time of 4.3 hours, and a total of 43 hours. The audio track for each movie was transcribed with manual corrections. Word onsets were manually annotated on spectrograms of the audio track for each movie. Each transcript was automatically parsed and manually corrected into the universal dependencies (UD) formalism, assigning a part of speech to every word and a dependency parse to every sentence. In total, subjects heard over 38,000 sentences (223,000 words), while they had on average 168 electrodes implanted. This is the largest dataset of intracranial recordings featuring grounded naturalistic language, one of the largest English UD treebanks in general, and one of only a few UD treebanks aligned to multimodal features. We hope that this dataset serves as a bridge between linguistic concepts, perception, and their neural representations. To that end, we present an analysis of which electrodes are sensitive to language features while also mapping out a rough time course of language processing across these electrodes. The Brain Treebank is available at https://BrainTreebank.dev/ | Brain Treebank: Large-scale intracranial recordings from naturalistic language stimuli | [
"Christopher Wang",
"Adam Uri Yaari",
"Aaditya K Singh",
"Vighnesh Subramaniam",
"Dana Rosenfarb",
"Jan DeWitt",
"Pranav Misra",
"Joseph R. Madsen",
"Scellig Stone",
"Gabriel Kreiman",
"Boris Katz",
"Ignacio Cases",
"Andrei Barbu"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | oral | 2411.08343 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=KZLE5BaaOH | @inproceedings{
souly2024a,
title={A Strong{REJECT} for Empty Jailbreaks},
author={Alexandra Souly and Qingyuan Lu and Dillon Bowen and Tu Trinh and Elvis Hsieh and Sana Pandey and Pieter Abbeel and Justin Svegliato and Scott Emmons and Olivia Watkins and Sam Toyer},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=KZLE5BaaOH}
} | Most jailbreak papers claim the jailbreaks they propose are highly effective, often boasting near-100% attack success rates. However, it is perhaps more common than not for jailbreak developers to substantially exaggerate the effectiveness of their jailbreaks. We suggest this problem arises because jailbreak researchers lack a standard, high-quality benchmark for evaluating jailbreak performance, leaving researchers to create their own. To create a benchmark, researchers must choose a dataset of forbidden prompts to which a victim model will respond, along with an evaluation method that scores the harmfulness of the victim model’s responses. We show that existing benchmarks suffer from significant shortcomings and introduce the StrongREJECT benchmark to address these issues. StrongREJECT's dataset contains prompts that victim models must answer with specific, harmful information, while its automated evaluator measures the extent to which a response gives useful information to forbidden prompts. In doing so, the StrongREJECT evaluator achieves state-of-the-art agreement with human judgments of jailbreak effectiveness. Notably, we find that existing evaluation methods significantly overstate jailbreak effectiveness compared to human judgments and the StrongREJECT evaluator. We describe a surprising and novel phenomenon that explains this discrepancy: jailbreaks bypassing a victim model’s safety fine-tuning tend to reduce its capabilities. Together, our findings underscore the need for researchers to use a high-quality benchmark, such as StrongREJECT, when developing new jailbreak attacks. We release the StrongREJECT code and data at https://strong-reject.readthedocs.io/. | A StrongREJECT for Empty Jailbreaks | [
"Alexandra Souly",
"Qingyuan Lu",
"Dillon Bowen",
"Tu Trinh",
"Elvis Hsieh",
"Sana Pandey",
"Pieter Abbeel",
"Justin Svegliato",
"Scott Emmons",
"Olivia Watkins",
"Sam Toyer"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2402.10260 | [
"https://github.com/alexandrasouly/strongreject"
] | https://huggingface.co/papers/2402.10260 | 3 | 0 | 0 | 11 | [] | [
"walledai/StrongREJECT"
] | [] | [] | [
"walledai/StrongREJECT"
] | [] | 1 |
null | https://openreview.net/forum?id=KYxzmRLF6i | @inproceedings{
ma2024spreadsheetbench,
title={SpreadsheetBench: Towards Challenging Real World Spreadsheet Manipulation},
author={Zeyao Ma and Bohan Zhang and Jing Zhang and Jifan Yu and Xiaokang Zhang and Xiaohan Zhang and Sijia Luo and Xi Wang and Jie Tang},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=KYxzmRLF6i}
} | We introduce SpreadsheetBench, a challenging spreadsheet manipulation benchmark exclusively derived from real-world scenarios, designed to immerse current large language models (LLMs) in the actual workflow of spreadsheet users.
Unlike existing benchmarks that rely on synthesized queries and simplified spreadsheet files, SpreadsheetBench is built from 912 real questions gathered from online Excel forums, which reflect the intricate needs of users. The associated spreadsheets from the forums contain a variety of tabular data such as multiple tables, non-standard relational tables, and abundant non-textual elements. Furthermore, we propose a more reliable evaluation metric akin to online judge platforms, where multiple spreadsheet files are created as test cases for each instruction, ensuring the evaluation of robust solutions capable of handling spreadsheets with varying values.
Our comprehensive evaluation of various LLMs under both single-round and multi-round inference settings reveals a substantial gap between the state-of-the-art (SOTA) models and human performance, highlighting the benchmark's difficulty. | SpreadsheetBench: Towards Challenging Real World Spreadsheet Manipulation | [
"Zeyao Ma",
"Bohan Zhang",
"Jing Zhang",
"Jifan Yu",
"Xiaokang Zhang",
"Xiaohan Zhang",
"Sijia Luo",
"Xi Wang",
"Jie Tang"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | oral | 2406.14991 | [
""
] | https://huggingface.co/papers/2406.14991 | 0 | 2 | 1 | 9 | [] | [] | [] | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=K6b8LCXBeQ | @inproceedings{
chen2024gmaimmbench,
title={{GMAI}-{MMB}ench: A Comprehensive Multimodal Evaluation Benchmark Towards General Medical {AI}},
author={pengcheng chen and Jin Ye and Guoan Wang and Yanjun Li and Zhongying Deng and Wei Li and Tianbin Li and Haodong Duan and Ziyan Huang and Yanzhou Su and Benyou Wang and Shaoting Zhang and Bin Fu and Jianfei Cai and Bohan Zhuang and Eric J Seibel and Junjun He and Yu Qiao},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=K6b8LCXBeQ}
} | Large Vision-Language Models (LVLMs) are capable of handling diverse data types such as imaging, text, and physiological signals, and can be applied in various fields. In the medical field, LVLMs have a high potential to offer substantial assistance for diagnosis and treatment. Before that, it is crucial to develop benchmarks to evaluate LVLMs' effectiveness in various medical applications. Current benchmarks are often built upon specific academic literature, mainly focusing on a single domain, and lacking varying perceptual granularities. Thus, they face specific challenges, including limited clinical relevance, incomplete evaluations, and insufficient guidance for interactive LVLMs. To address these limitations, we developed the GMAI-MMBench, the most comprehensive general medical AI benchmark with well-categorized data structure and multi-perceptual granularity to date. It is constructed from 284 datasets across 38 medical image modalities, 18 clinical-related tasks, 18 departments, and 4 perceptual granularities in a Visual Question Answering (VQA) format. Additionally, we implemented a lexical tree structure that allows users to customize evaluation tasks, accommodating various assessment needs and substantially supporting medical AI research and applications. We evaluated 50 LVLMs, and the results show that even the advanced GPT-4o only achieves an accuracy of 53.96\%, indicating significant room for improvement. Moreover, we identified five key insufficiencies in current cutting-edge LVLMs that need to be addressed to advance the development of better medical applications. We believe that GMAI-MMBench will stimulate the community to build the next generation of LVLMs toward GMAI. | GMAI-MMBench: A Comprehensive Multimodal Evaluation Benchmark Towards General Medical AI | [
"pengcheng chen",
"Jin Ye",
"Guoan Wang",
"Yanjun Li",
"Zhongying Deng",
"Wei Li",
"Tianbin Li",
"Haodong Duan",
"Ziyan Huang",
"Yanzhou Su",
"Benyou Wang",
"Shaoting Zhang",
"Bin Fu",
"Jianfei Cai",
"Bohan Zhuang",
"Eric J Seibel",
"Junjun He",
"Yu Qiao"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2408.03361 | [
"https://github.com/uni-medical/GMAI-MMBench"
] | https://huggingface.co/papers/2408.03361 | 8 | 85 | 2 | 18 | [] | [
"OpenGVLab/GMAI-MMBench"
] | [] | [] | [
"OpenGVLab/GMAI-MMBench"
] | [] | 1 |
null | https://openreview.net/forum?id=JrJW21IP9p | @inproceedings{
wang2024enhancing,
title={Enhancing vision-language models for medical imaging: bridging the 3D gap with innovative slice selection},
author={Yuli Wang and Peng jian and Yuwei Dai and Craig Jones and Haris I. Sair and Jinglai Shen and Nicolas Loizou and jing wu and Wen-Chi Hsu and Maliha Rubaiyat Imami and Zhicheng Jiao and Paul J Zhang and Harrison Bai},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=JrJW21IP9p}
} | Recent approaches to vision-language tasks are built on the remarkable capabilities of large vision-language models (VLMs). These models excel in zero-shot and few-shot learning, enabling them to learn new tasks without parameter updates. However, their primary challenge lies in their design, which primarily accommodates 2D input, thus limiting their effectiveness for medical images, particularly radiological images like MRI and CT, which are typically 3D. To bridge the gap between state-of-the-art 2D VLMs and 3D medical image data, we developed an innovative, one-pass, unsupervised representative slice selection method called Vote-MI, which selects representative 2D slices from 3D medical imaging. To evaluate the effectiveness of vote-MI when implemented with VLMs, we introduce BrainMD, a robust, multimodal dataset comprising 2,453 annotated 3D MRI brain scans with corresponding textual radiology reports and electronic health records. Based on BrainMD, we further develop two benchmarks, BrainMD-select (including the most representative 2D slice of 3D image) and BrainBench (including various vision-language downstream tasks). Extensive experiments on the BrainMD dataset and its two corresponding benchmarks demonstrate that our representative selection method significantly improves performance in zero-shot and few-shot learning tasks. On average, Vote-MI achieves a 14.6\% and 16.6\% absolute gain for zero-shot and few-shot learning, respectively, compared to randomly selecting examples. Our studies represent a significant step toward integrating AI in medical imaging to enhance patient care and facilitate medical research. We hope this work will serve as a foundation for data selection as vision-language models are increasingly applied to new tasks. | Enhancing vision-language models for medical imaging: bridging the 3D gap with innovative slice selection | [
"Yuli Wang",
"Peng jian",
"Yuwei Dai",
"Craig Jones",
"Haris I. Sair",
"Jinglai Shen",
"Nicolas Loizou",
"jing wu",
"Wen-Chi Hsu",
"Maliha Rubaiyat Imami",
"Zhicheng Jiao",
"Paul J Zhang",
"Harrison Bai"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=Jfg3vw2bjx | @inproceedings{
liu2024apigen,
title={{APIG}en: Automated {PI}peline for Generating Verifiable and Diverse Function-Calling Datasets},
author={Zuxin Liu and Thai Quoc Hoang and Jianguo Zhang and Ming Zhu and Tian Lan and Shirley Kokane and Juntao Tan and Weiran Yao and Zhiwei Liu and Yihao Feng and Rithesh R N and Liangwei Yang and Silvio Savarese and Juan Carlos Niebles and Huan Wang and Shelby Heinecke and Caiming Xiong},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=Jfg3vw2bjx}
} | The advancement of function-calling agent models requires diverse, reliable, and high-quality datasets. This paper presents APIGen, an automated data generation pipeline designed to synthesize high-quality datasets for function-calling applications. We leverage APIGen and collect 3,673 executable APIs across 21 different categories to generate diverse function-calling datasets in a scalable and structured manner. Each entry in our dataset is verified through three hierarchical stages: format checking, actual function executions, and semantic verification, improving its reliability and correctness. We demonstrate that models trained with our curated datasets, even with only 7B parameters, can achieve state-of-the-art performance on the Berkeley Function-Calling Benchmark, outperforming multiple GPT-4 models. Moreover, our 1B model achieves exceptional performance, surpassing GPT-3.5-Turbo and Claude-3 Haiku. We release a dataset containing 60,000 high-quality entries, aiming to advance the field of function-calling agents. The dataset and models are available on the project homepage \url{https://apigen-pipeline.github.io/}. | APIGen: Automated PIpeline for Generating Verifiable and Diverse Function-Calling Datasets | [
"Zuxin Liu",
"Thai Quoc Hoang",
"Jianguo Zhang",
"Ming Zhu",
"Tian Lan",
"Shirley Kokane",
"Juntao Tan",
"Weiran Yao",
"Zhiwei Liu",
"Yihao Feng",
"Rithesh R N",
"Liangwei Yang",
"Silvio Savarese",
"Juan Carlos Niebles",
"Huan Wang",
"Shelby Heinecke",
"Caiming Xiong"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2406.18518 | [
""
] | https://huggingface.co/papers/2406.18518 | 6 | 23 | 1 | 17 | [
"Salesforce/xLAM-7b-fc-r",
"Salesforce/xLAM-1b-fc-r",
"Salesforce/xLAM-8x22b-r",
"Salesforce/xLAM-7b-r",
"Salesforce/xLAM-1b-fc-r-gguf",
"Salesforce/xLAM-7b-fc-r-gguf",
"Salesforce/xLAM-8x7b-r",
"QuantFactory/xLAM-7b-fc-r-GGUF",
"QuantFactory/xLAM-1b-fc-r-GGUF",
"RichardErkhov/Salesforce_-_xLAM-8x7b-r-gguf",
"jncraton/xLAM-1b-fc-r-ct2-int8",
"RichardErkhov/Salesforce_-_xLAM-1b-fc-r-gguf",
"RichardErkhov/Salesforce_-_xLAM-7b-r-gguf",
"RichardErkhov/Salesforce_-_xLAM-7b-fc-r-gguf",
"RichardErkhov/Salesforce_-_xLAM-8x22b-r-gguf",
"RichardErkhov/Salesforce_-_xLAM-7b-r-4bits",
"RichardErkhov/Salesforce_-_xLAM-7b-r-8bits"
] | [
"Salesforce/xlam-function-calling-60k",
"argilla/Synth-APIGen-v0.1",
"argilla/apigen-function-calling",
"argilla-warehouse/synth-apigen-qwen",
"argilla-warehouse/synth-apigen-llama",
"plaguss/pipe_with_citation",
"plaguss/synth-apigen-llama-exec",
"plaguss/synth-apigen-qwen-exec"
] | [
"Tonic/Salesforce-Xlam-7b-r",
"Tonic/On-Device-Function-Calling",
"nerozhao/Model_Test",
"fangchagnjun/LLAMA3.2"
] | [
"Salesforce/xLAM-7b-fc-r",
"Salesforce/xLAM-1b-fc-r",
"Salesforce/xLAM-8x22b-r",
"Salesforce/xLAM-7b-r",
"Salesforce/xLAM-1b-fc-r-gguf",
"Salesforce/xLAM-7b-fc-r-gguf",
"Salesforce/xLAM-8x7b-r",
"QuantFactory/xLAM-7b-fc-r-GGUF",
"QuantFactory/xLAM-1b-fc-r-GGUF",
"RichardErkhov/Salesforce_-_xLAM-8x7b-r-gguf",
"jncraton/xLAM-1b-fc-r-ct2-int8",
"RichardErkhov/Salesforce_-_xLAM-1b-fc-r-gguf",
"RichardErkhov/Salesforce_-_xLAM-7b-r-gguf",
"RichardErkhov/Salesforce_-_xLAM-7b-fc-r-gguf",
"RichardErkhov/Salesforce_-_xLAM-8x22b-r-gguf",
"RichardErkhov/Salesforce_-_xLAM-7b-r-4bits",
"RichardErkhov/Salesforce_-_xLAM-7b-r-8bits"
] | [
"Salesforce/xlam-function-calling-60k",
"argilla/Synth-APIGen-v0.1",
"argilla/apigen-function-calling",
"argilla-warehouse/synth-apigen-qwen",
"argilla-warehouse/synth-apigen-llama",
"plaguss/pipe_with_citation",
"plaguss/synth-apigen-llama-exec",
"plaguss/synth-apigen-qwen-exec"
] | [
"Tonic/Salesforce-Xlam-7b-r",
"Tonic/On-Device-Function-Calling",
"nerozhao/Model_Test",
"fangchagnjun/LLAMA3.2"
] | 1 |
null | https://openreview.net/forum?id=Jaye8aWpmZ | @inproceedings{
li2024when,
title={When {LLM}s Meet Cunning Texts: A Fallacy Understanding Benchmark for Large Language Models},
author={Yinghui Li and Qingyu Zhou and Yuanzhen Luo and Shirong Ma and Yangning Li and Hai-Tao Zheng and Xuming Hu and Philip S. Yu},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=Jaye8aWpmZ}
} | Recently, Large Language Models (LLMs) have made remarkable advances in language understanding and generation. Following this, various benchmarks for measuring all kinds of capabilities of LLMs have sprung up. In this paper, we challenge the reasoning and understanding abilities of LLMs by proposing a FaLlacy Understanding Benchmark (FLUB) containing cunning texts that are easy for humans to understand but difficult for models to grasp. Specifically, the cunning texts that FLUB focuses on mainly consist of tricky, humorous, and misleading texts collected from the real internet environment. We design three tasks with increasing difficulty in the FLUB benchmark to evaluate the fallacy understanding ability of LLMs. Based on FLUB, we investigate the performance of multiple representative and advanced LLMs, showing that FLUB is challenging and worthy of further study. Our extensive experiments and detailed analyses yield interesting discoveries and valuable insights. We hope that our benchmark can encourage the community to improve LLMs' ability to understand fallacies. Our data and code are available at https://github.com/THUKElab/FLUB. | When LLMs Meet Cunning Texts: A Fallacy Understanding Benchmark for Large Language Models | [
"Yinghui Li",
"Qingyu Zhou",
"Yuanzhen Luo",
"Shirong Ma",
"Yangning Li",
"Hai-Tao Zheng",
"Xuming Hu",
"Philip S. Yu"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2402.11100 | [
"https://github.com/thukelab/flub"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=JU0QvhhfVp | @inproceedings{
arevalo2024motive,
title={{MOTIVE}: A Drug-Target Interaction Graph For Inductive Link Prediction},
author={John Arevalo and Ellen Su and Anne E Carpenter and Shantanu Singh},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=JU0QvhhfVp}
} | Drug-target interaction (DTI) prediction is crucial for identifying new therapeutics and detecting mechanisms of action. While structure-based methods accurately model physical interactions between a drug and its protein target, cell-based assays such as Cell Painting can better capture complex DTI interactions. This paper introduces MOTIVE, a Morphological cOmpound Target Interaction Graph dataset comprising Cell Painting features for 11,000 genes and 3,600 compounds, along with their relationships extracted from seven publicly available databases. We provide random, cold-source (new drugs), and cold-target (new genes) data splits to enable rigorous evaluation under realistic use cases. Our benchmark results show that graph neural networks that use Cell Painting features consistently outperform those that learn from graph structure alone, feature-based models, and topological heuristics. MOTIVE accelerates both graph ML research and drug discovery by promoting the development of more reliable DTI prediction models. MOTIVE resources are available at https://github.com/carpenter-singh-lab/motive. | MOTIVE: A Drug-Target Interaction Graph For Inductive Link Prediction | [
"John Arevalo",
"Ellen Su",
"Anne E Carpenter",
"Shantanu Singh"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | oral | 2406.08649 | [
"https://github.com/carpenter-singh-lab/motive"
] | https://huggingface.co/papers/2406.08649 | 1 | 0 | 0 | 4 | [] | [] | [] | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=JRMSC08gSF | @inproceedings{
kotalwar2024hintsinbrowser,
title={Hints-In-Browser: Benchmarking Language Models for Programming Feedback Generation},
author={Nachiket Kotalwar and Alkis Gotovos and Adish Singla},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=JRMSC08gSF}
} | Generative AI and large language models hold great promise in enhancing programming education by generating individualized feedback and hints for learners. Recent works have primarily focused on improving the quality of generated feedback to achieve human tutors' quality. While quality is an important performance criterion, it is not the only criterion to optimize for real-world educational deployments. In this paper, we benchmark language models for programming feedback generation across several performance criteria, including quality, cost, time, and data privacy. The key idea is to leverage recent advances in the new paradigm of in-browser inference that allow running these models directly in the browser, thereby providing direct benefits across cost and data privacy. To boost the feedback quality of small models compatible with in-browser inference engines, we develop a fine-tuning pipeline based on GPT-4 generated synthetic data. We showcase the efficacy of fine-tuned Llama3-8B and Phi3-3.8B 4-bit quantized models using WebLLM's in-browser inference engine on three different Python programming datasets. We will release the full implementation along with a web app and datasets to facilitate further research on in-browser language models. | Hints-In-Browser: Benchmarking Language Models for Programming Feedback Generation | [
"Nachiket Kotalwar",
"Alkis Gotovos",
"Adish Singla"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2406.05053 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=JLvtwGlezU | @inproceedings{
udandarao2024a,
title={A Practitioner's Guide to Real-World Continual Multimodal Pretraining},
author={Vishaal Udandarao and Karsten Roth and Sebastian Dziadzio and Ameya Prabhu and Mehdi Cherti and Oriol Vinyals and Olivier J Henaff and Samuel Albanie and Zeynep Akata and Matthias Bethge},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=JLvtwGlezU}
} | Multimodal foundation models serve numerous applications at the intersection of vision and language. Still, despite being pretrained on extensive data, they become outdated over time.
To keep models updated, research into continual pretraining mainly explores scenarios with either (1) infrequent, indiscriminate updates on large-scale new data, or (2) frequent, sample-level updates.
However, practical model deployment often operates in the gap between these two limit cases, as real-world applications demand adaptation to specific subdomains, tasks or concepts --- spread over the entire, varying life cycle of a model.
In this work, we complement current perspectives on continual pretraining through a research test bed and offer comprehensive guidance for effective continual model updates in such scenarios.
We first introduce FoMo-in-Flux, a continual multimodal pretraining benchmark with realistic compute constraints and practical deployment requirements, constructed over 63 datasets with diverse visual and semantic coverage.
Using FoMo-in-Flux, we explore the complex landscape of practical continual pretraining through multiple perspectives: (1) data mixtures and stream orderings that emulate real-world deployment settings, (2) methods ranging from simple fine-tuning and traditional continual learning strategies to parameter-efficient updates and model merging, (3) meta-learning-rate schedules and mechanistic design choices, and (4) model and compute scaling. Together, our insights provide a practitioner's guide to continual multimodal pretraining for real-world deployment. Benchmark and code are provided here: https://github.com/ExplainableML/fomo_in_flux. | A Practitioner's Guide to Real-World Continual Multimodal Pretraining | [
"Vishaal Udandarao",
"Karsten Roth",
"Sebastian Dziadzio",
"Ameya Prabhu",
"Mehdi Cherti",
"Oriol Vinyals",
"Olivier J Henaff",
"Samuel Albanie",
"Zeynep Akata",
"Matthias Bethge"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=J9oefdGUuM | @inproceedings{
ru2024ragchecker,
title={{RAGC}hecker: A Fine-grained Framework for Diagnosing Retrieval-Augmented Generation},
author={Dongyu Ru and Lin Qiu and Xiangkun Hu and Tianhang Zhang and Peng Shi and Shuaichen Chang and Cheng Jiayang and Cunxiang Wang and Shichao Sun and Huanyu Li and Zizhao Zhang and Binjie Wang and Jiarong Jiang and Tong He and Zhiguo Wang and Pengfei Liu and Yue Zhang and Zheng Zhang},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=J9oefdGUuM}
} | Although Retrieval-Augmented Generation (RAG) has shown promising capability in leveraging external knowledge, comprehensive evaluation of RAG systems remains challenging due to the modular nature of RAG, the evaluation of long-form responses, and the reliability of measurements. In this paper, we propose a fine-grained evaluation framework, RAGChecker, that incorporates a suite of diagnostic metrics for both the retrieval and generation modules. Meta-evaluation verifies that RAGChecker has significantly better correlations with human judgments than other evaluation metrics. Using RAGChecker, we evaluate 8 RAG systems and conduct an in-depth analysis of their performance, revealing insightful patterns and trade-offs in the design choices of RAG architectures. The metrics of RAGChecker can guide researchers and practitioners in developing more effective RAG systems. | RAGChecker: A Fine-grained Framework for Diagnosing Retrieval-Augmented Generation | [
"Dongyu Ru",
"Lin Qiu",
"Xiangkun Hu",
"Tianhang Zhang",
"Peng Shi",
"Shuaichen Chang",
"Cheng Jiayang",
"Cunxiang Wang",
"Shichao Sun",
"Huanyu Li",
"Zizhao Zhang",
"Binjie Wang",
"Jiarong Jiang",
"Tong He",
"Zhiguo Wang",
"Pengfei Liu",
"Yue Zhang",
"Zheng Zhang"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2408.08067 | [
"https://github.com/amazon-science/ragchecker"
] | https://huggingface.co/papers/2408.08067 | 0 | 0 | 0 | 18 | [] | [] | [] | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=IlFk5U9cEg | @inproceedings{
saparina2024ambrosia,
title={{AMBROSIA}: A Benchmark for Parsing Ambiguous Questions into Database Queries},
author={Irina Saparina and Mirella Lapata},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=IlFk5U9cEg}
} | Practical semantic parsers are expected to understand user utterances and map them to executable programs, even when these are ambiguous. We introduce a new benchmark, AMBROSIA, which we hope will inform and inspire the development of text-to-SQL parsers capable of recognizing and interpreting ambiguous requests. Our dataset contains questions showcasing three different types of ambiguity (scope ambiguity, attachment ambiguity, and vagueness), their interpretations, and corresponding SQL queries. In each case, the ambiguity persists even when the database context is provided. This is achieved through a novel approach that involves controlled generation of databases from scratch. We benchmark various LLMs on AMBROSIA, revealing that even the most advanced models struggle to identify and interpret ambiguity in questions. | AMBROSIA: A Benchmark for Parsing Ambiguous Questions into Database Queries | [
"Irina Saparina",
"Mirella Lapata"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | oral | 2406.19073 | [
"https://github.com/saparina/ambrosia"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=IkA54A6KKe | @inproceedings{
deng2024textttdattri,
title={\${\textbackslash}texttt\{dattri\}\$: A Library for Efficient Data Attribution},
author={Junwei Deng and Ting Wei Li and Shiyuan Zhang and Shixuan Liu and Yijun Pan and Hao Huang and Xinhe Wang and Pingbang Hu and Xingjian Zhang and Jiaqi Ma},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=IkA54A6KKe}
} | Data attribution methods aim to quantify the influence of individual training samples on the prediction of artificial intelligence (AI) models. As training data plays an increasingly crucial role in the modern development of large-scale AI models, data attribution has found broad applications in improving AI performance and safety. However, despite a surge of new data attribution methods being developed recently, there is no comprehensive library that facilitates the development, benchmarking, and deployment of different data attribution methods. In this work, we introduce $\texttt{dattri}$, an open-source data attribution library that addresses the above needs. Specifically, $\texttt{dattri}$ highlights three novel design features. Firstly, $\texttt{dattri}$ proposes a unified and easy-to-use API, allowing users to integrate different data attribution methods into their PyTorch-based machine learning pipeline with a few lines of code changed. Secondly, $\texttt{dattri}$ modularizes low-level utility functions that are commonly used in data attribution methods, such as Hessian-vector product, inverse-Hessian-vector product, or random projection, making it easier for researchers to develop new data attribution methods. Thirdly, $\texttt{dattri}$ provides a comprehensive benchmark framework with pre-trained models and ground truth annotations for a variety of benchmark settings, including generative AI settings. We have implemented a variety of state-of-the-art efficient data attribution methods that can be applied to large-scale neural network models, and will continuously update the library in the future. Using the developed $\texttt{dattri}$ library, we are able to perform a comprehensive and fair benchmark analysis across a wide range of data attribution methods. The source code of $\texttt{dattri}$ is available at https://github.com/TRAIS-Lab/dattri. | dattri: A Library for Efficient Data Attribution | [
"Junwei Deng",
"Ting Wei Li",
"Shiyuan Zhang",
"Shixuan Liu",
"Yijun Pan",
"Hao Huang",
"Xinhe Wang",
"Pingbang Hu",
"Xingjian Zhang",
"Jiaqi Ma"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | oral | [
"https://github.com/trais-lab/dattri"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=Ich4tv4202 | @inproceedings{
han2024wildguard,
title={WildGuard: Open One-stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of {LLM}s},
author={Seungju Han and Kavel Rao and Allyson Ettinger and Liwei Jiang and Bill Yuchen Lin and Nathan Lambert and Yejin Choi and Nouha Dziri},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=Ich4tv4202}
} | We introduce WildGuard---an open, light-weight moderation tool for LLM safety that achieves three goals: (1) identifying malicious intent in user prompts, (2) detecting safety risks of model responses, and (3) determining model refusal rate. Together, WildGuard serves the increasing needs for automatic safety moderation and evaluation of LLM interactions, providing a one-stop tool with enhanced accuracy and broad coverage across 13 risk categories. While existing open moderation tools such as Llama-Guard2 score reasonably well in classifying straightforward model interactions, they lag far behind a prompted GPT-4, especially in identifying adversarial jailbreaks and in evaluating models' refusals, a key measure for evaluating safety behaviors in model responses.
To address these challenges, we construct WildGuardMix, a large-scale and carefully balanced multi-task safety moderation dataset with 92K labeled examples that cover vanilla (direct) prompts and adversarial jailbreaks, paired with various refusal and compliance responses. WildGuardMix is a combination of WildGuardTrain, the training data of WildGuard, and WildGuardTest, a high-quality human-annotated moderation test set with 5K labeled items covering broad risk scenarios.
Through extensive evaluations on WildGuardTest and ten existing public benchmarks, we show that WildGuard establishes state-of-the-art performance in open-source safety moderation across all the three tasks compared to ten strong existing open-source moderation models (e.g., up to 25.3% improvement on refusal detection). Importantly, WildGuard matches and sometimes exceeds GPT-4 performance (e.g., up to 4.8% improvement on prompt harmfulness identification). WildGuard serves as a highly effective safety moderator in an LLM interface, reducing the success rate of jailbreak attacks from 79.8% to 2.4%. We will make all our data, models and training/evaluation code publicly available under CC BY 4.0 license. | WildGuard: Open One-stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs | [
"Seungju Han",
"Kavel Rao",
"Allyson Ettinger",
"Liwei Jiang",
"Bill Yuchen Lin",
"Nathan Lambert",
"Yejin Choi",
"Nouha Dziri"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2406.18495 | [
"https://github.com/allenai/wildguard"
] | https://huggingface.co/papers/2406.18495 | 4 | 12 | 1 | 8 | [
"allenai/wildguard",
"iknow-lab/llama-3.2-3B-wildguard-ko-2410",
"RichardErkhov/iknow-lab_-_llama-3.2-3B-wildguard-ko-2410-gguf"
] | [
"allenai/wildguardmix",
"allenai/xstest-response",
"walledai/WildGuardTest",
"iknow-lab/wildguardmix-test-ko",
"iknow-lab/wildguardmix-train-ko-11k",
"iknow-lab/wildguardmix-train-ko"
] | [] | [
"allenai/wildguard",
"iknow-lab/llama-3.2-3B-wildguard-ko-2410",
"RichardErkhov/iknow-lab_-_llama-3.2-3B-wildguard-ko-2410-gguf"
] | [
"allenai/wildguardmix",
"allenai/xstest-response",
"walledai/WildGuardTest",
"iknow-lab/wildguardmix-test-ko",
"iknow-lab/wildguardmix-train-ko-11k",
"iknow-lab/wildguardmix-train-ko"
] | [] | 1 |
null | https://openreview.net/forum?id=I79q7wIRkS | @inproceedings{
granqvist2024textttpflresearch,
title={\${\textbackslash}texttt\{pfl-research\}\$: simulation framework for accelerating research in Private Federated Learning},
author={Filip Granqvist and Congzheng Song and {\'A}ine Cahill and Rogier van Dalen and Martin Pelikan and YI SHENG CHAN and Xiaojun Feng and Natarajan Krishnaswami and Vojta J and Mona Chitnis},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=I79q7wIRkS}
} | Federated learning (FL) is an emerging machine learning (ML) training paradigm where clients own their data and collaborate to train a global model, without revealing any data to the server and other participants. Researchers commonly perform experiments in a simulation environment to quickly iterate on ideas. However, existing open-source tools do not offer the efficiency required to simulate FL on larger and more realistic FL datasets. We introduce $\texttt{pfl-research}$, a fast, modular, and easy-to-use Python framework for simulating FL. It supports TensorFlow, PyTorch, and non-neural network models, and is tightly integrated with state-of-the-art privacy algorithms. We study the speed of open-source FL frameworks and show that $\texttt{pfl-research}$ is 7-72$\times$ faster than alternative open-source frameworks on common cross-device setups. Such speedup will significantly boost the productivity of the FL research community and enable testing hypotheses on realistic FL datasets that were previously too resource intensive. We release a suite of benchmarks that evaluates an algorithm's overall performance on a diverse set of realistic scenarios. | pfl-research: simulation framework for accelerating research in Private Federated Learning | [
"Filip Granqvist",
"Congzheng Song",
"Áine Cahill",
"Rogier van Dalen",
"Martin Pelikan",
"YI SHENG CHAN",
"Xiaojun Feng",
"Natarajan Krishnaswami",
"Vojta J",
"Mona Chitnis"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
"https://github.com/apple/pfl-research"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=I2VOdtAc3H | @inproceedings{
victor2024off,
title={Off to new Shores: A Dataset \& Benchmark for (near-)coastal Flood Inundation Forecasting},
author={Brandon Victor and Mathilde Letard and Peter Jack Naylor and Karim Douch and Nicolas Long{\'e}p{\'e} and Zhen He and Patrick Ebel},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=I2VOdtAc3H}
} | Floods are among the most common and devastating natural hazards, imposing immense costs on our society and economy due to their disastrous consequences. Recent progress in weather prediction and spaceborne flood mapping demonstrated the feasibility of anticipating extreme events and reliably detecting their catastrophic effects afterwards. However, these efforts are rarely linked to one another and there is a critical lack of datasets and benchmarks to enable the direct forecasting of flood extent. To resolve this issue, we curate a novel dataset enabling a timely prediction of flood extent. Furthermore, we provide a representative evaluation of state-of-the-art methods, structured into two benchmark tracks for forecasting flood inundation maps i) in general and ii) focused on coastal regions. Altogether, our dataset and benchmark provide a comprehensive platform for evaluating flood forecasts, enabling future solutions for this critical challenge. Data, code \& models are shared at https://github.com/Multihuntr/GFF under a CC0 license. | Off to new Shores: A Dataset & Benchmark for (near-)coastal Flood Inundation Forecasting | [
"Brandon Victor",
"Mathilde Letard",
"Peter Jack Naylor",
"Karim Douch",
"Nicolas Longépé",
"Zhen He",
"Patrick Ebel"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
"https://github.com/multihuntr/gff"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=I2Q3XwO2cz | @inproceedings{
veitch-michaelis2024oamtcd,
title={{OAM}-{TCD}: A globally diverse dataset of high-resolution tree cover maps},
author={Joshua Veitch-Michaelis and Andrew Cottam and Daniella Schweizer and Eben Broadbent and David Dao and Ce Zhang and Angelica Almeyda Zambrano and Simeon Max},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=I2Q3XwO2cz}
} | Accurately quantifying tree cover is an important metric for ecosystem monitoring and for assessing progress in restored sites. Recent works have shown that deep learning-based segmentation algorithms are capable of accurately mapping trees at country and continental scales using high-resolution aerial and satellite imagery. Mapping at high (ideally sub-meter) resolution is necessary to identify individual trees, however there are few open-access datasets containing instance level annotations and those that exist are small or not geographically diverse. We present a novel open-access dataset for individual tree crown delineation (TCD) in high-resolution aerial imagery sourced from OpenAerialMap (OAM). Our dataset, OAM-TCD, comprises 5072 2048x2048 px images at 10 cm/px resolution with associated human-labeled instance masks for over 280k individual and 56k groups of trees. By sampling imagery from around the world, we are able to better capture the diversity and morphology of trees in different terrestrial biomes and in both urban and natural environments. Using our dataset, we train reference instance and semantic segmentation models that compare favorably to existing state-of-the-art models. We assess performance through k-fold cross-validation and comparison with existing datasets; additionally we demonstrate compelling results on independent aerial imagery captured over Switzerland and compare to municipal tree inventories and LIDAR-derived canopy maps in the city of Zurich. Our dataset, models and training/benchmark code are publicly released under permissive open-source licenses: Creative Commons (majority CC BY 4.0), and Apache 2.0 respectively. | OAM-TCD: A globally diverse dataset of high-resolution tree cover maps | [
"Joshua Veitch-Michaelis",
"Andrew Cottam",
"Daniella Schweizer",
"Eben Broadbent",
"David Dao",
"Ce Zhang",
"Angelica Almeyda Zambrano",
"Simeon Max"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2407.11743 | [
"https://github.com/restor-foundation/tcd"
] | https://huggingface.co/papers/2407.11743 | 0 | 0 | 0 | 8 | [] | [
"restor/tcd"
] | [] | [] | [
"restor/tcd"
] | [] | 1 |
null | https://openreview.net/forum?id=I0zpivK0A0 | @inproceedings{
chen2024terra,
title={Terra: A Multimodal Spatio-Temporal Dataset Spanning the Earth},
author={Wei Chen and Xixuan Hao and wu yuankai and Yuxuan Liang},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=I0zpivK0A0}
} | Since the inception of our planet, the meteorological environment, as reflected through spatio-temporal data, has always been a fundamental factor influencing human life, socio-economic progress, and ecological conservation. A comprehensive exploration of this data is thus imperative to gain a deeper understanding and more accurate forecasting of these environmental shifts. Despite the success of deep learning techniques within the realm of spatio-temporal data and earth science, existing public datasets are beset with limitations in terms of spatial scale, temporal coverage, and reliance on limited time series data. These constraints hinder their optimal utilization in practical applications. To address these issues, we introduce **Terra**, a multimodal spatio-temporal dataset spanning the earth. This dataset encompasses hourly time series data from 6,480,000 grid areas worldwide over the past 45 years, while also incorporating multimodal spatial supplementary information including geo-images and explanatory text. Through a detailed data analysis and evaluation of existing deep learning models within earth sciences, utilizing our constructed dataset, we aim to provide valuable opportunities for enhancing future research in spatio-temporal data mining, thereby advancing towards more spatio-temporal general intelligence. Our source code and data can be accessed at https://github.com/CityMind-Lab/NeurIPS24-Terra. | Terra: A Multimodal Spatio-Temporal Dataset Spanning the Earth | [
"Wei Chen",
"Xixuan Hao",
"wu yuankai",
"Yuxuan Liang"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=HdIiSPLgzC | @inproceedings{
awadalla2024mintt,
title={{MINT}-1T: Scaling Open-Source Multimodal Data by 10x: A Multimodal Dataset with One Trillion Tokens},
author={Anas Awadalla and Le Xue and Oscar Lo and Manli Shu and Hannah Lee and Etash Kumar Guha and Sheng Shen and Mohamed Awadalla and Silvio Savarese and Caiming Xiong and Ran Xu and Yejin Choi and Ludwig Schmidt},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=HdIiSPLgzC}
} | Multimodal interleaved datasets featuring free-form interleaved sequences of images and text are crucial for training frontier large multimodal models (LMMs). Despite the rapid progression of open-source LMMs, there remains a pronounced scarcity of large-scale, open-source multimodal interleaved datasets.
In response, we introduce MINT-1T, the most extensive and diverse open-source Multimodal INTerleaved dataset to date. MINT-1T comprises one trillion text tokens and 3.4 billion images, a 10x scale-up from existing open-source datasets. Additionally, we include previously untapped sources such as PDFs and ArXiv papers. As scaling multimodal interleaved datasets requires substantial engineering effort, sharing the data curation process and releasing the dataset greatly benefits the community. Our experiments show that LMMs trained on MINT-1T rival the performance of models trained on the previous leading dataset, OBELICS. We release our data at https://github.com/mlfoundations/MINT-1T. | MINT-1T: Scaling Open-Source Multimodal Data by 10x: A Multimodal Dataset with One Trillion Tokens | [
"Anas Awadalla",
"Le Xue",
"Oscar Lo",
"Manli Shu",
"Hannah Lee",
"Etash Kumar Guha",
"Sheng Shen",
"Mohamed Awadalla",
"Silvio Savarese",
"Caiming Xiong",
"Ran Xu",
"Yejin Choi",
"Ludwig Schmidt"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2406.11271 | [
"https://github.com/mlfoundations/mint-1t"
] | https://huggingface.co/papers/2406.11271 | 3 | 19 | 1 | 14 | [] | [
"mlfoundations/MINT-1T-HTML",
"mlfoundations/MINT-1T-ArXiv",
"Salesforce/blip3-kale",
"mlfoundations/MINT-1T-PDF-CC-2024-18",
"mlfoundations/MINT-1T-PDF-CC-2023-50",
"mlfoundations/MINT-1T-PDF-CC-2024-10",
"mlfoundations/MINT-1T-PDF-CC-2023-06",
"mlfoundations/MINT-1T-PDF-CC-2023-40",
"mlfoundations/MINT-1T-PDF-CC-2023-23",
"mlfoundations/MINT-1T-PDF-CC-2023-14"
] | [] | [] | [
"mlfoundations/MINT-1T-HTML",
"mlfoundations/MINT-1T-ArXiv",
"Salesforce/blip3-kale",
"mlfoundations/MINT-1T-PDF-CC-2024-18",
"mlfoundations/MINT-1T-PDF-CC-2023-50",
"mlfoundations/MINT-1T-PDF-CC-2024-10",
"mlfoundations/MINT-1T-PDF-CC-2023-06",
"mlfoundations/MINT-1T-PDF-CC-2023-40",
"mlfoundations/MINT-1T-PDF-CC-2023-23",
"mlfoundations/MINT-1T-PDF-CC-2023-14"
] | [] | 1 |
null | https://openreview.net/forum?id=HcLFNuQwy5 | @inproceedings{
roberts2024scifibench,
title={Sci{FIB}ench: Benchmarking Large Multimodal Models for Scientific Figure Interpretation},
author={Jonathan Roberts and Kai Han and Neil Houlsby and Samuel Albanie},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=HcLFNuQwy5}
} | Large multimodal models (LMMs) have proven flexible and generalisable across many tasks and fields. Although they have strong potential to aid scientific research, their capabilities in this domain are not well characterised. A key aspect of scientific research is the ability to understand and interpret figures, which serve as a rich, compressed source of complex information. In this work, we present SciFIBench, a scientific figure interpretation benchmark consisting of 2000 questions split between two tasks across 8 categories. The questions are curated from arXiv paper figures and captions, using adversarial filtering to find hard negatives and human verification for quality control. We evaluate 28 LMMs on SciFIBench, finding it to be a challenging benchmark. Finally, we investigate the alignment and reasoning faithfulness of the LMMs on augmented question sets from our benchmark. We release SciFIBench to encourage progress in this domain. | SciFIBench: Benchmarking Large Multimodal Models for Scientific Figure Interpretation | [
"Jonathan Roberts",
"Kai Han",
"Neil Houlsby",
"Samuel Albanie"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2405.08807 | [
"https://github.com/jonathan-roberts1/SciFIBench"
] | https://huggingface.co/papers/2405.08807 | 1 | 0 | 0 | 4 | [] | [
"jonathan-roberts1/SciFIBench"
] | [] | [] | [
"jonathan-roberts1/SciFIBench"
] | [] | 1 |
null | https://openreview.net/forum?id=HV5JhUZGpP | @inproceedings{
wang2024beancounter,
title={BeanCounter: A low-toxicity, large-scale, and open dataset of business-oriented text},
author={Siyan Wang and Bradford Levy},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=HV5JhUZGpP}
} | Many of the recent breakthroughs in language modeling have resulted from scaling effectively the same model architecture to larger datasets. In this vein, recent work has highlighted performance gains from increasing training dataset size and quality, suggesting a need for novel sources of large-scale datasets. In this work, we introduce BeanCounter, a public dataset consisting of more than 159B tokens extracted from businesses' disclosures. We show that this data is indeed novel: less than 0.1% of BeanCounter appears in Common Crawl-based datasets and it is an order of magnitude larger than datasets relying on similar sources. Given the data's provenance, we hypothesize that BeanCounter is comparatively more factual and less toxic than web-based datasets. Exploring this hypothesis, we find that many demographic identities occur with similar prevalence in BeanCounter but with significantly less toxic context relative to other datasets. To demonstrate the utility of BeanCounter, we evaluate and compare two LLMs continually pre-trained on BeanCounter with their base models. We find an 18-33% reduction in toxic generation and improved performance within the finance domain for the continually pretrained models. Collectively, our work suggests that BeanCounter is a novel source of low-toxicity and high-quality domain-specific data with sufficient scale to train multi-billion parameter LLMs. | BeanCounter: A low-toxicity, large-scale, and open dataset of business-oriented text | [
"Siyan Wang",
"Bradford Levy"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2409.17827 | [
""
] | https://huggingface.co/papers/2409.17827 | 0 | 0 | 0 | 2 | [] | [] | [] | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=HRkwnZewLC | @inproceedings{
klein2024navigating,
title={Navigating the Maze of Explainable {AI}: A Systematic Approach to Evaluating Methods and Metrics},
author={Lukas Klein and Carsten T. L{\"u}th and Udo Schlegel and Till J. Bungert and Mennatallah El-Assady and Paul F Jaeger},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=HRkwnZewLC}
} | Explainable AI (XAI) is a rapidly growing domain with a myriad of proposed methods as well as metrics aiming to evaluate their efficacy. However, current studies are often of limited scope, examining only a handful of XAI methods and ignoring underlying design parameters for performance, such as the model architecture or the nature of input data. Moreover, they often rely on one or a few metrics and neglect thorough validation, increasing the risk of selection bias and ignoring discrepancies among metrics. These shortcomings leave practitioners confused about which method to choose for their problem. In response, we introduce LATEC, a large-scale benchmark that critically evaluates 17 prominent XAI methods using 20 distinct metrics. We systematically incorporate vital design parameters like varied architectures and diverse input modalities, resulting in 7,560 examined combinations. Through LATEC, we showcase the high risk of conflicting metrics leading to unreliable rankings and consequently propose a more robust evaluation scheme. Further, we comprehensively evaluate various XAI methods to assist practitioners in selecting appropriate methods aligning with their needs. Curiously, the emerging top-performing method, Expected Gradients, is not examined in any relevant related study. LATEC reinforces its role in future XAI research by publicly releasing all 326k saliency maps and 378k metric scores as a (meta-)evaluation dataset. The benchmark is hosted at: https://github.com/IML-DKFZ/latec. | Navigating the Maze of Explainable AI: A Systematic Approach to Evaluating Methods and Metrics | [
"Lukas Klein",
"Carsten T. Lüth",
"Udo Schlegel",
"Till J. Bungert",
"Mennatallah El-Assady",
"Paul F Jaeger"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2409.16756 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=HB5q6pC5eb | @inproceedings{
li2024perteval,
title={PertEval: Unveiling Real Knowledge Capacity of {LLM}s with Knowledge-Invariant Perturbations},
author={Jiatong Li and Renjun Hu and Kunzhe Huang and Yan Zhuang and Qi Liu and Mengxiao Zhu and Xing Shi and Wei Lin},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=HB5q6pC5eb}
} | Expert-designed close-ended benchmarks are indispensable in assessing the knowledge capacity of large language models (LLMs). Despite their widespread use, concerns have mounted regarding their reliability due to limited test scenarios and an unavoidable risk of data contamination. To rectify this, we present PertEval, a toolkit devised for in-depth probing of LLMs' knowledge capacity through **knowledge-invariant perturbations**. These perturbations employ human-like restatement techniques to generate on-the-fly test samples from static benchmarks, meticulously retaining knowledge-critical content while altering irrelevant details. Our toolkit further includes a suite of **response consistency analyses** that compare performance on raw vs. perturbed test sets to precisely assess LLMs' genuine knowledge capacity. Six representative LLMs are re-evaluated using PertEval. Results reveal significantly inflated performance of the LLMs on raw benchmarks, including an absolute 25.8% overestimation for GPT-4. Additionally, through a nuanced response pattern analysis, we discover that PertEval retains LLMs' uncertainty to specious knowledge, and reveals their potential rote memorization to correct options which leads to overestimated performance. We also find that the detailed response consistency analyses by PertEval could illuminate various weaknesses in existing LLMs' knowledge mastery and guide the development of refinement. Our findings provide insights for advancing more robust and genuinely knowledgeable LLMs. Our code is available at https://github.com/aigc-apps/PertEval. | PertEval: Unveiling Real Knowledge Capacity of LLMs with Knowledge-Invariant Perturbations | [
"Jiatong Li",
"Renjun Hu",
"Kunzhe Huang",
"Yan Zhuang",
"Qi Liu",
"Mengxiao Zhu",
"Xing Shi",
"Wei Lin"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | oral | 2405.19740 | [
"https://github.com/aigc-apps/perteval"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=H5bUdfM55S | @inproceedings{
xiong2024lvdm,
title={{LVD}-2M: A Long-take Video Dataset with Temporally Dense Captions},
author={Tianwei Xiong and Yuqing Wang and Daquan Zhou and Zhijie Lin and Jiashi Feng and Xihui Liu},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=H5bUdfM55S}
} | The efficacy of video generation models heavily depends on the quality of their training datasets. Most previous video generation models are trained on short video clips, while recently there has been increasing interest in training long video generation models directly on longer videos. However, the lack of such high-quality long videos impedes the advancement of long video generation. To promote research in long video generation, we desire a new dataset with four key features essential for training long video generation models: (1) long videos covering at least 10 seconds, (2) long-take videos without cuts, (3) large motion and diverse contents, and (4) temporally dense captions. To achieve this, we introduce a new pipeline for filtering high-quality long-take videos and generating temporally dense captions. Specifically, we define a set of metrics to quantitatively assess video quality including scene cuts, dynamic degrees, and semantic-level scores, enabling us to filter high-quality long-take videos from a large amount of source videos. Subsequently, we develop a hierarchical video captioning pipeline to annotate long videos with temporally-dense captions. With this pipeline, we curate the first long-take video dataset, LVD-2M, comprising 2 million long-take videos, each covering more than 10 seconds and annotated with temporally dense captions. We further validate the effectiveness of LVD-2M by fine-tuning video generation models to generate long videos with dynamic motions. We believe it will significantly contribute to future research in long video generation. | LVD-2M: A Long-take Video Dataset with Temporally Dense Captions | [
"Tianwei Xiong",
"Yuqing Wang",
"Daquan Zhou",
"Zhijie Lin",
"Jiashi Feng",
"Xihui Liu"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2410.10816 | [
"https://github.com/silentview/lvd-2m"
] | https://huggingface.co/papers/2410.10816 | 3 | 19 | 3 | 6 | [] | [] | [] | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=GtYd9PCaaB | @inproceedings{
belharbi2024srcaco,
title={{SR}-{CACO}-2: A Dataset for Confocal Fluorescence Microscopy Image Super-Resolution},
author={Soufiane Belharbi and Mara KM Whitford and Phuong Hoang and Shakeeb Murtaza and Luke McCaffrey and Eric Granger},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=GtYd9PCaaB}
} | Confocal fluorescence microscopy is one of the most accessible and widely used imaging techniques for the study of biological processes at the cellular and subcellular levels. Scanning confocal microscopy allows the capture of high-quality images from thick three-dimensional (3D) samples, yet suffers from well-known limitations such as photobleaching and phototoxicity of specimens caused by intense light exposure, which limits its use in some applications, especially for living cells. Cellular damage can be alleviated by changing imaging parameters to reduce light exposure, often at the expense of image quality.
Machine/deep learning methods for single-image super-resolution (SISR) can be applied to restore image quality by upscaling lower-resolution (LR) images to produce high-resolution images (HR). These SISR methods have been successfully applied to photo-realistic images due partly to the abundance of publicly available datasets. In contrast, the lack of publicly available data partly limits their application and success in scanning confocal microscopy.
In this paper, we introduce a large scanning confocal microscopy dataset named SR-CACO-2 that comprises low- and high-resolution image pairs marked for three different fluorescent markers. It allows evaluating the performance of SISR methods on three different upscaling levels (X2, X4, X8). SR-CACO-2 contains the human epithelial cell line Caco-2 (ATCC HTB-37), and it is composed of 2,200 unique images, captured with four resolutions and three markers, that have been translated in the form of 9,937 patches for experiments with SISR methods. Given the new SR-CACO-2 dataset, we also provide benchmarking results for 16 state-of-the-art methods that are representative of the main SISR families. Results show that these methods have limited success in producing high-resolution textures, indicating that SR-CACO-2 represents a challenging problem. The dataset is released under a Creative Commons license (CC BY-NC-SA 4.0), and it can be accessed freely. Our dataset, code and pretrained weights for SISR methods are publicly available: https://github.com/sbelharbi/sr-caco-2. | SR-CACO-2: A Dataset for Confocal Fluorescence Microscopy Image Super-Resolution | [
"Soufiane Belharbi",
"Mara KM Whitford",
"Phuong Hoang",
"Shakeeb Murtaza",
"Luke McCaffrey",
"Eric Granger"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2406.09168 | [
"https://github.com/sbelharbi/sr-caco-2"
] | https://huggingface.co/papers/2406.09168 | 0 | 0 | 0 | 6 | [
"sbelharbi/sr-caco-2"
] | [] | [] | [
"sbelharbi/sr-caco-2"
] | [] | [] | 1 |
null | https://openreview.net/forum?id=GNhwwbZEZ7 | @inproceedings{
johansson2024incomescm,
title={Income{SCM}: From tabular data set to time-series simulator and causal estimation benchmark},
author={Fredrik D. Johansson},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=GNhwwbZEZ7}
} | Evaluating observational estimators of causal effects demands information that is rarely available: unconfounded interventions and outcomes from the population of interest, created either by randomization or adjustment. As a result, it is customary to fall back on simulators when creating benchmark tasks. Simulators offer great control but are often too simplistic to make challenging tasks, either because they are hand-designed and lack the nuances of real-world data, or because they are fit to observational data without structural constraints. In this work, we propose a general, repeatable strategy for turning observational data into sequential structural causal models and challenging estimation tasks by following two simple principles: 1) fitting real-world data where possible, and 2) creating complexity by composing simple, hand-designed mechanisms. We implement these ideas in a highly configurable software package and apply it to the well-known Adult income data set to construct the IncomeSCM simulator. From this, we devise multiple estimation tasks and sample data sets to compare established estimators of causal effects. The tasks present a suitable challenge, with effect estimates varying greatly in quality between methods, despite similar performance in the modeling of factual outcomes, highlighting the need for dedicated causal estimators and model selection criteria. | IncomeSCM: From tabular data set to time-series simulator and causal estimation benchmark | [
"Fredrik D. Johansson"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2405.16069 | [
"https://github.com/Healthy-AI/IncomeSCM"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=GHlJM45fWY | @inproceedings{
picek2024geoplant,
title={GeoPlant: Spatial Plant Species Prediction Dataset},
author={Lukas Picek and Christophe Botella and Maximilien Servajean and C{\'e}sar Leblanc and R{\'e}mi Palard and Theo Larcher and Benjamin Deneu and Diego Marcos and Pierre Bonnet and Alexis Joly},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=GHlJM45fWY}
} | The difficulty of monitoring biodiversity at fine scales and over large areas limits ecological knowledge and conservation efforts. To fill this gap, Species Distribution Models (SDMs) predict species across space from spatially explicit features. Yet, they face the challenge of integrating the rich but heterogeneous data made available over the past decade, notably millions of opportunistic species observations and standardized surveys, as well as multi-modal remote sensing data.
In light of that, we have designed and developed a new European-scale dataset for SDMs at high spatial resolution (10--50m), including more than 10k species (i.e., most of the European flora). The dataset comprises 5M heterogeneous Presence-Only records and 90k exhaustive Presence-Absence survey records, all accompanied by diverse environmental rasters (e.g., elevation, human footprint, and soil) traditionally used in SDMs. In addition, it provides Sentinel-2 RGB and NIR satellite images with 10 m resolution, a 20-year time series of climatic variables, and satellite time series from the Landsat program.
In addition to the data, we provide an openly accessible SDM benchmark (hosted on Kaggle), which has already attracted an active community and a set of strong baselines for single predictor/modality and multimodal approaches.
All resources, e.g., the dataset, pre-trained models, and baseline methods (in the form of notebooks), are available on Kaggle, allowing one to start with our dataset literally with two mouse clicks. | GeoPlant: Spatial Plant Species Prediction Dataset | [
"Lukas Picek",
"Christophe Botella",
"Maximilien Servajean",
"César Leblanc",
"Rémi Palard",
"Theo Larcher",
"Benjamin Deneu",
"Diego Marcos",
"Pierre Bonnet",
"Alexis Joly"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | oral | 2408.13928 | [
""
] | https://huggingface.co/papers/2408.13928 | 0 | 0 | 0 | 10 | [] | [] | [] | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=FbDgxp7LAa | @inproceedings{
eyzaguirre2024streaming,
title={Streaming Detection of Queried Event Start},
author={Cristobal Eyzaguirre and Eric Tang and Shyamal Buch and Adrien Gaidon and Jiajun Wu and Juan Carlos Niebles},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=FbDgxp7LAa}
} | Robotics, autonomous driving, augmented reality, and many embodied computer vision applications must quickly react to user-defined events unfolding in real time. We address this setting by proposing a novel task for multimodal video understanding---Streaming Detection of Queried Event Start (SDQES).
The goal of SDQES is to identify the beginning of a complex event as described by a natural language query, with high accuracy and low latency.
We introduce a new benchmark based on the Ego4D dataset, as well as new task-specific metrics to study streaming multimodal detection of diverse events in an egocentric video setting.
Inspired by parameter-efficient fine-tuning methods in NLP and for video tasks, we propose adapter-based baselines that enable image-to-video transfer learning, allowing for efficient online video modeling.
We evaluate three vision-language backbones and three adapter architectures on both short-clip and untrimmed video settings. | Streaming Detection of Queried Event Start | [
"Cristobal Eyzaguirre",
"Eric Tang",
"Shyamal Buch",
"Adrien Gaidon",
"Jiajun Wu",
"Juan Carlos Niebles"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=FXTeJvHE0k | @inproceedings{
dauner2024navsim,
title={{NAVSIM}: Data-Driven Non-Reactive Autonomous Vehicle Simulation and Benchmarking},
author={Daniel Dauner and Marcel Hallgarten and Tianyu Li and Xinshuo Weng and Zhiyu Huang and Zetong Yang and Hongyang Li and Igor Gilitschenski and Boris Ivanovic and Marco Pavone and Andreas Geiger and Kashyap Chitta},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=FXTeJvHE0k}
} | Benchmarking vision-based driving policies is challenging. On one hand, open-loop evaluation with real data is easy, but these results do not reflect closed-loop performance. On the other, closed-loop evaluation is possible in simulation, but is hard to scale due to its significant computational demands. Further, the simulators available today exhibit a large domain gap to real data. This has resulted in an inability to draw clear conclusions from the rapidly growing body of research on end-to-end autonomous driving. In this paper, we present NAVSIM, a middle ground between these evaluation paradigms, where we use large datasets in combination with a non-reactive simulator to enable large-scale real-world benchmarking. Specifically, we gather simulation-based metrics, such as progress and time to collision, by unrolling bird's eye view abstractions of the test scenes for a short simulation horizon. Our simulation is non-reactive, i.e., the evaluated policy and environment do not influence each other. As we demonstrate empirically, this decoupling allows open-loop metric computation while being better aligned with closed-loop evaluations than traditional displacement errors. NAVSIM enabled a new competition held at CVPR 2024, where 143 teams submitted 463 entries, resulting in several new insights. On a large set of challenging scenarios, we observe that simple methods with moderate compute requirements such as TransFuser can match recent large-scale end-to-end driving architectures such as UniAD. Our modular framework can potentially be extended with new datasets, data curation strategies, and metrics, and will be continually maintained to host future challenges. Our code is available at https://github.com/autonomousvision/navsim. | NAVSIM: Data-Driven Non-Reactive Autonomous Vehicle Simulation and Benchmarking | [
"Daniel Dauner",
"Marcel Hallgarten",
"Tianyu Li",
"Xinshuo Weng",
"Zhiyu Huang",
"Zetong Yang",
"Hongyang Li",
"Igor Gilitschenski",
"Boris Ivanovic",
"Marco Pavone",
"Andreas Geiger",
"Kashyap Chitta"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2406.15349 | [
"https://github.com/autonomousvision/navsim"
] | https://huggingface.co/papers/2406.15349 | 6 | 5 | 1 | 12 | [
"autonomousvision/navsim_baselines"
] | [] | [] | [
"autonomousvision/navsim_baselines"
] | [] | [] | 1 |
null | https://openreview.net/forum?id=FN02v4nD8y | @inproceedings{
karpowicz2024fewshot,
title={Few-shot Algorithms for Consistent Neural Decoding ({FALCON}) Benchmark},
author={Brianna M. Karpowicz and Joel Ye and Chaofei Fan and Pablo Tostado-Marcos and Fabio Rizzoglio and Clayton B Washington and Thiago Scodeler and Diogo S de Lucena and Samuel R. Nason-Tomaszewski and Matthew Mender and Xuan Ma and Ezequiel Matias Arneodo and Leigh Hochberg and Cynthia Chestek and Jaimie M. Henderson and Timothy Q Gentner and Vikash Gilja and Lee E. Miller and Adam G. Rouse and Robert Gaunt and Jennifer L Collinger and Chethan Pandarinath},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=FN02v4nD8y}
} | Intracortical brain-computer interfaces (iBCIs) can restore movement and communication abilities to individuals with paralysis by decoding their intended behavior from neural activity recorded with an implanted device. While this activity yields high-performance decoding over short timescales, neural data is often nonstationary, which can lead to decoder failure if not accounted for. To maintain performance, users must frequently recalibrate decoders, which requires the arduous collection of new neural and behavioral data. Aiming to reduce this burden, several approaches have been developed that either limit recalibration data requirements (few-shot approaches) or eliminate explicit recalibration entirely (zero-shot approaches). However, progress is limited by a lack of standardized datasets and comparison metrics, causing methods to be compared in an ad hoc manner. Here we introduce the FALCON benchmark suite (Few-shot Algorithms for COnsistent Neural decoding) to standardize evaluation of iBCI robustness. FALCON curates five datasets of neural and behavioral data that span movement and communication tasks to focus on behaviors of interest to modern-day iBCIs. Each dataset includes calibration data, optional few-shot recalibration data, and private evaluation data. We implement a flexible evaluation platform which only requires user-submitted code to return behavioral predictions on unseen data. We also seed the benchmark by applying baseline methods spanning several classes of possible approaches. FALCON aims to provide rigorous selection criteria for robust iBCI decoders, easing their translation to real-world devices. https://snel-repo.github.io/falcon/ | Few-shot Algorithms for Consistent Neural Decoding (FALCON) Benchmark | [
"Brianna M. Karpowicz",
"Joel Ye",
"Chaofei Fan",
"Pablo Tostado-Marcos",
"Fabio Rizzoglio",
"Clayton B Washington",
"Thiago Scodeler",
"Diogo S de Lucena",
"Samuel R. Nason-Tomaszewski",
"Matthew Mender",
"Xuan Ma",
"Ezequiel Matias Arneodo",
"Leigh Hochberg",
"Cynthia Chestek",
"Jaimie M. Henderson",
"Timothy Q Gentner",
"Vikash Gilja",
"Lee E. Miller",
"Adam G. Rouse",
"Robert Gaunt",
"Jennifer L Collinger",
"Chethan Pandarinath"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=FI89ORf7YH | @inproceedings{
shangguan2024scalable,
title={Scalable Early Childhood Reading Performance Prediction},
author={Zhongkai Shangguan and Zanming Huang and Eshed Ohn-Bar and Ola Ozernov-Palchik and Derek Kosty and Michael Stoolmiller and Hank Fien},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=FI89ORf7YH}
} | Models for student reading performance can empower educators and institutions to proactively identify at-risk students, thereby enabling early and tailored instructional interventions. However, there are no suitable publicly available educational datasets for modeling and predicting future reading performance. In this work, we introduce the Enhanced Core Reading Instruction (ECRI) dataset, a novel large-scale longitudinal tabular dataset collected across 44 schools with 6,916 students and 172 teachers. We leverage the dataset to empirically evaluate the ability of state-of-the-art machine learning models to recognize early childhood educational patterns in multivariate and partial measurements. Specifically, we demonstrate a simple self-supervised strategy in which a Multi-Layer Perceptron (MLP) network is pre-trained over masked inputs to outperform several strong baselines while generalizing over diverse educational settings. To facilitate future developments in precise modeling and responsible use of models for individualized and early intervention strategies, our data and code are available at https://ecri-data.github.io/. | Scalable Early Childhood Reading Performance Prediction | [
"Zhongkai Shangguan",
"Zanming Huang",
"Eshed Ohn-Bar",
"Ola Ozernov-Palchik",
"Derek Kosty",
"Michael Stoolmiller",
"Hank Fien"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=F7rAX6yiS2 | @inproceedings{
wang2024direct,
title={DiRe{CT}: Diagnostic Reasoning for Clinical Notes via Large Language Models},
author={Bowen Wang and Jiuyang Chang and Yiming Qian and Guoxin Chen and Junhao Chen and Zhouqiang Jiang and Jiahao Zhang and Yuta Nakashima and Hajime Nagahara},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=F7rAX6yiS2}
} | Large language models (LLMs) have recently showcased remarkable capabilities, spanning a wide range of tasks and applications, including those in the medical domain. Models like GPT-4 excel in medical question answering but may lack interpretability when handling complex tasks in real clinical settings. We thus introduce the diagnostic reasoning dataset for clinical notes (DiReCT), aiming at evaluating the reasoning ability and interpretability of LLMs compared to human doctors. It contains 511 clinical notes, each meticulously annotated by physicians, detailing the diagnostic reasoning process from observations in a clinical note to the final diagnosis. Additionally, a diagnostic knowledge graph is provided to offer essential knowledge for reasoning, which may not be covered in the training data of existing LLMs. Evaluations of leading LLMs on DiReCT bring out a significant gap between their reasoning ability and that of human doctors, highlighting the critical need for models that can reason effectively in real-world clinical scenarios. | DiReCT: Diagnostic Reasoning for Clinical Notes via Large Language Models | [
"Bowen Wang",
"Jiuyang Chang",
"Yiming Qian",
"Guoxin Chen",
"Junhao Chen",
"Zhouqiang Jiang",
"Jiahao Zhang",
"Yuta Nakashima",
"Hajime Nagahara"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2408.01933 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=EvgyfFsv0w | @inproceedings{
zheng2024stylebreeder,
title={Stylebreeder: Exploring and Democratizing Artistic Styles through Text-to-Image Models},
author={Matthew Zheng and Enis Simsar and Hidir Yesiltepe and Federico Tombari and Joel Simon and Pinar Yanardag},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=EvgyfFsv0w}
} | Text-to-image models are becoming increasingly popular, revolutionizing the landscape of digital art creation by enabling highly detailed and creative visual content generation. These models have been widely employed across various domains, particularly in art generation, where they facilitate a broad spectrum of creative expression and democratize access to artistic creation. In this paper, we introduce STYLEBREEDER, a comprehensive dataset of 6.8M images and 1.8M prompts generated by 95K users on Artbreeder, a platform that has emerged as a significant hub for creative exploration with over 13M users. We introduce a series of tasks with this dataset aimed at identifying diverse artistic styles, generating personalized content, and recommending styles based on user interests. By documenting unique, user-generated styles that transcend conventional categories like 'cyberpunk' or 'Picasso,' we explore the potential for unique, crowd-sourced styles that could provide deep insights into the collective creative psyche of users worldwide. We also evaluate different personalization methods to enhance artistic expression and introduce a style atlas, making these models available in LoRA format for public use. Our research demonstrates the potential of text-to-image diffusion models to uncover and promote unique artistic expressions, further democratizing AI in art and fostering a more diverse and inclusive artistic community. The dataset, code, and models are available at https://stylebreeder.github.io under a Public Domain (CC0) license. | Stylebreeder: Exploring and Democratizing Artistic Styles through Text-to-Image Models | [
"Matthew Zheng",
"Enis Simsar",
"Hidir Yesiltepe",
"Federico Tombari",
"Joel Simon",
"Pinar Yanardag"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2406.14599 | [
""
] | https://huggingface.co/papers/2406.14599 | 3 | 16 | 2 | 6 | [] | [
"stylebreeder/stylebreeder"
] | [] | [] | [
"stylebreeder/stylebreeder"
] | [] | 1 |
null | https://openreview.net/forum?id=EvEqYlQv8T | @inproceedings{
ying2024automating,
title={Automating Dataset Updates Towards Reliable and Timely Evaluation of Large Language Models},
author={Jiahao Ying and Yixin Cao and Yushi Bai and Qianru Sun and Bo Wang and Wei Tang and Zhaojun Ding and Yizhe Yang and Xuanjing Huang and Shuicheng YAN},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=EvEqYlQv8T}
} | Large language models (LLMs) have achieved impressive performance across various natural language benchmarks, prompting a continual need to curate more difficult datasets for larger LLMs, which is costly and time-consuming. In this paper, we propose to automate dataset updating and provide a systematic analysis regarding its effectiveness in dealing with the benchmark leakage issue, difficulty control, and stability. Thus, once the current benchmark has been mastered or leaked, we can update it for timely and reliable evaluation. There are two updating strategies: 1) a mimicking strategy to generate similar samples based on original data, preserving stylistic and contextual essence, and 2) an extending strategy that further expands existing samples at varying cognitive levels by adapting Bloom’s taxonomy of educational objectives. Extensive experiments on updated MMLU and BIG-Bench demonstrate the stability of the proposed strategies and find that the mimicking strategy can effectively alleviate issues of overestimation from benchmark leakage. In cases where the efficient mimicking strategy fails, our extending strategy still shows promising results. Additionally, by controlling the difficulty, we can better discern the models’ performance and enable fine-grained analysis — neither too difficult nor too easy an exam can fairly judge students’ learning status. To the best of our knowledge, we are the first to automate updating benchmarks for reliable and timely evaluation. Our demo leaderboard can be found at https://yingjiahao14.github.io/Automating-DatasetUpdates/. | Automating Dataset Updates Towards Reliable and Timely Evaluation of Large Language Models | [
"Jiahao Ying",
"Yixin Cao",
"Yushi Bai",
"Qianru Sun",
"Bo Wang",
"Wei Tang",
"Zhaojun Ding",
"Yizhe Yang",
"Xuanjing Huang",
"Shuicheng YAN"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2402.11894 | [
""
] | https://huggingface.co/papers/2402.11894 | 1 | 0 | 0 | 6 | [] | [] | [] | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=EqaSEbU4LP | @inproceedings{
tankala2024wikido,
title={Wiki{DO}: A New Benchmark Evaluating Cross-Modal Retrieval for Vision-Language Models},
author={Pavan Kalyan Tankala and Piyush Singh Pasi and Sahil Dharod and Azeem Motiwala and Preethi Jyothi and Aditi Chaudhary and Krishna Srinivasan},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=EqaSEbU4LP}
} | Cross-modal (image-to-text and text-to-image) retrieval is an established task used in evaluation benchmarks to test the performance of vision-language models (VLMs). Several state-of-the-art VLMs (e.g. CLIP, BLIP-2) have achieved near-perfect performance on widely-used image-text retrieval benchmarks such as MSCOCO-Test-5K and Flickr30K-Test-1K. As a measure of out-of-distribution (OOD) generalization, prior works rely on zero-shot performance evaluated on one dataset (Flickr) using a VLM finetuned on another one (MSCOCO). We argue that such comparisons are insufficient to assess the OOD generalization capability of models due to high visual and linguistic similarity between the evaluation and finetuning datasets. To address this gap, we introduce WikiDO (drawn from Wikipedia Diversity Observatory), a novel cross-modal retrieval benchmark to assess the OOD generalization capabilities of pretrained VLMs. This consists of newly scraped 380K image-text pairs from Wikipedia with domain labels, a carefully curated, human-verified a) in-distribution (ID) test set (3K) and b) OOD test set (3K). The image-text pairs are very diverse in topics and geographical locations. We evaluate different VLMs of varying capacity on the WikiDO benchmark; BLIP-2 achieves zero-shot performance of $R@1\approx66\%$ on the OOD test set, compared to $\approx$ $81\%$ on COCO and $\approx95\%$ on Flickr. When fine-tuned on WikiDO, the $R@1$ improvement is at most $\approx5\%$ on OOD instances compared to $\approx12\%$ on ID instances. We probe the VLMs with varying finetuning objectives and datasets of varying sizes to identify what aids OOD generalization the most. Our results confirm that WikiDO offers a strong cross-modal benchmark for current VLMs in specifically evaluating for OOD generalization. Our benchmark is hosted as a competition at https://kaggle.com/competitions/wikido24 with public access to dataset and code. | WikiDO: A New Benchmark Evaluating Cross-Modal Retrieval for Vision-Language Models | [
"Pavan Kalyan Tankala",
"Piyush Singh Pasi",
"Sahil Dharod",
"Azeem Motiwala",
"Preethi Jyothi",
"Aditi Chaudhary",
"Krishna Srinivasan"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=EpnsUQavJA | @inproceedings{
chen2024coin,
title={Co{IN}: A Benchmark of Continual Instruction Tuning for Multimodel Large Language Models},
author={Cheng Chen and Junchen Zhu and Xu Luo and Heng Tao Shen and Jingkuan Song and Lianli Gao},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=EpnsUQavJA}
} | Instruction tuning demonstrates impressive performance in adapting Multimodal Large Language Models (MLLMs) to follow task instructions and improve generalization ability. By extending tuning across diverse tasks, MLLMs can further enhance their understanding of world knowledge and instruction intent. However, continual instruction tuning has been largely overlooked and there are no public benchmarks available. In this paper, we present CoIN, a comprehensive benchmark tailored for assessing the behavior of existing MLLMs under continual instruction tuning. CoIN comprises 10 meticulously crafted datasets spanning 8 tasks, ensuring diversity and serving as a robust evaluation framework to assess crucial aspects of continual instruction tuning, such as task order, instruction diversity and volume. Additionally, apart from traditional evaluation, we design another LLM-based metric to assess the knowledge preserved within MLLMs for reasoning. Following an in-depth evaluation of several MLLMs, we demonstrate that they still suffer catastrophic forgetting, and the failure in instruction alignment assumes the main responsibility, instead of reasoning knowledge forgetting. To this end, we introduce MoELoRA which is effective in retaining the previous instruction alignment. | CoIN: A Benchmark of Continual Instruction Tuning for Multimodel Large Language Models | [
"Cheng Chen",
"Junchen Zhu",
"Xu Luo",
"Heng Tao Shen",
"Jingkuan Song",
"Lianli Gao"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=Eogs84mv7N | @inproceedings{
cui2024biomedical,
title={Biomedical Visual Instruction Tuning with Clinician Preference Alignment},
author={Hejie Cui and Lingjun Mao and Xin LIANG and Jieyu Zhang and Hui Ren and Quanzheng Li and Xiang Li and Carl Yang},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=Eogs84mv7N}
} | Recent advancements in multimodal foundation models have showcased impressive capabilities in understanding and reasoning with visual and textual information. Adapting these foundation models trained for general usage to specialized domains like biomedicine requires large-scale domain-specific instruction datasets. While existing works have explored curating such datasets automatically, the resultant datasets are not explicitly aligned with domain expertise. In this work, we propose a data-centric framework, Biomedical Visual Instruction Tuning with Clinician Preference Alignment (BioMed-VITAL), that incorporates clinician preferences into both stages of generating and selecting instruction data for tuning biomedical multimodal foundation models. First, during the generation stage, we prompt the GPT-4V generator with a diverse set of clinician-selected demonstrations for preference-aligned data candidate generation. Then, during the selection phase, we train a separate selection model, which explicitly distills clinician and policy-guided model preferences into a rating function to select high-quality data for medical instruction tuning. Results show that the model tuned with the instruction-following data from our method demonstrates a significant improvement in open visual chat (18.5% relatively) and medical VQA (win rate up to 81.73%). Our instruction-following data and models are available at https://BioMed-VITAL.github.io. | Biomedical Visual Instruction Tuning with Clinician Preference Alignment | [
"Hejie Cui",
"Lingjun Mao",
"Xin LIANG",
"Jieyu Zhang",
"Hui Ren",
"Quanzheng Li",
"Xiang Li",
"Carl Yang"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2406.13173 | [
"https://github.com/mao1207/BioMed-VITAL"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=ElUrNM9U8c | @inproceedings{
khrabrov2024nabladft,
title={$\nabla^2$DFT: A Universal Quantum Chemistry Dataset of Drug-Like Molecules and a Benchmark for Neural Network Potentials},
author={Kuzma Khrabrov and Anton Ber and Artem Tsypin and Konstantin Ushenin and Egor Rumiantsev and Alexander Telepov and Dmitry Protasov and Ilya Shenbin and Anton M. Alekseev and Mikhail Shirokikh and Sergey Nikolenko and Elena Tutubalina and Artur Kadurin},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=ElUrNM9U8c}
} | Methods of computational quantum chemistry provide accurate approximations of molecular properties crucial for computer-aided drug discovery and other areas of chemical science.
However, high computational complexity limits the scalability of their applications.
Neural network potentials (NNPs) are a promising alternative to quantum chemistry methods, but they require large and diverse datasets for training.
This work presents a new dataset and benchmark called $\nabla^2$DFT that is based on the nablaDFT.
It contains twice as many molecular structures, three times more conformations, new data types and tasks, and state-of-the-art models.
The dataset includes energies, forces, 17 molecular properties, Hamiltonian and overlap matrices, and a wavefunction object.
All calculations were performed at the DFT level ($\omega$B97X-D/def2-SVP) for each conformation.
Moreover, $\nabla^2$DFT is the first dataset that contains relaxation trajectories for a substantial number of drug-like molecules.
We also introduce a novel benchmark for evaluating NNPs in molecular property prediction, Hamiltonian prediction, and conformational optimization tasks.
Finally, we propose an extendable framework for training NNPs and implement 10 models within it. | ∇^2DFT: A Universal Quantum Chemistry Dataset of Drug-Like Molecules and a Benchmark for Neural Network Potentials | [
"Kuzma Khrabrov",
"Anton Ber",
"Artem Tsypin",
"Konstantin Ushenin",
"Egor Rumiantsev",
"Alexander Telepov",
"Dmitry Protasov",
"Ilya Shenbin",
"Anton M. Alekseev",
"Mikhail Shirokikh",
"Sergey Nikolenko",
"Elena Tutubalina",
"Artur Kadurin"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=EiH6WWLzlu | @inproceedings{
chen2024sharegptvideo,
title={Share{GPT}4Video: Improving Video Understanding and Generation with Better Captions},
author={Lin Chen and Xilin Wei and Jinsong Li and Xiaoyi Dong and Pan Zhang and Yuhang Zang and Zehui Chen and Haodong Duan and Bin Lin and Zhenyu Tang and Li Yuan and Yu Qiao and Dahua Lin and Feng Zhao and Jiaqi Wang},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=EiH6WWLzlu}
} | We present the ShareGPT4Video series, aiming to facilitate the video understanding of large video-language models (LVLMs) and the video generation of text-to-video models (T2VMs) via dense and precise captions. The series comprises: 1) ShareGPT4Video, 40K GPT4V annotated dense captions of videos with various lengths and sources, developed through carefully designed data filtering and annotating strategy. 2) ShareCaptioner-Video, an efficient and capable captioning model for arbitrary videos, with 4.8M high-quality aesthetic videos annotated by it. 3) ShareGPT4Video-8B, a simple yet superb LVLM that reached SOTA performance on three advancing video benchmarks. To achieve this, taking aside the non-scalable costly human annotators, we find using GPT4V to caption video with a naive multi-frame or frame-concatenation input strategy leads to less detailed and sometimes temporal-confused results. We argue the challenge of designing a high-quality video captioning strategy lies in three aspects: 1) Inter-frame precise temporal change understanding. 2) Intra-frame detailed content description. 3) Frame-number scalability for arbitrary-length videos. To this end, we meticulously designed a differential video captioning strategy, which is stable, scalable, and efficient for generating captions for videos with arbitrary resolution, aspect ratios, and length. Based on it, we construct ShareGPT4Video, which contains 40K high-quality videos spanning a wide range of categories, and the resulting captions encompass rich world knowledge, object attributes, camera movements, and crucially, detailed and precise temporal descriptions of events. Based on ShareGPT4Video, we further develop ShareCaptioner-Video, a superior captioner capable of efficiently generating high-quality captions for arbitrary videos. We annotated 4.8M aesthetically appealing videos by it and verified their effectiveness on a 10-second text2video generation task. For video understanding, we verified the effectiveness of ShareGPT4Video on several current LVLM architectures and presented our superb new LVLM ShareGPT4Video-8B. All the models, strategies, and annotations will be open-sourced and we hope this project can serve as a pivotal resource for advancing both the LVLMs and T2VMs community. | ShareGPT4Video: Improving Video Understanding and Generation with Better Captions | [
"Lin Chen",
"Xilin Wei",
"Jinsong Li",
"Xiaoyi Dong",
"Pan Zhang",
"Yuhang Zang",
"Zehui Chen",
"Haodong Duan",
"Bin Lin",
"Zhenyu Tang",
"Li Yuan",
"Yu Qiao",
"Dahua Lin",
"Feng Zhao",
"Jiaqi Wang"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2406.04325 | [
""
] | https://huggingface.co/papers/2406.04325 | 10 | 72 | 4 | 15 | [
"Lin-Chen/sharegpt4video-8b",
"Lin-Chen/ShareCaptioner-Video"
] | [
"ShareGPT4Video/ShareGPT4Video",
"lodestones/ShareGPT4Video",
"lodestone-horizon/ShareGPT4Video"
] | [
"Lin-Chen/ShareGPT4Video-8B",
"Lin-Chen/ShareCaptioner-Video",
"KwabsHug/GameConfigIdea",
"cocktailpeanut/ShareCaptioner-Video",
"NotYuSheng/Playground"
] | [
"Lin-Chen/sharegpt4video-8b",
"Lin-Chen/ShareCaptioner-Video"
] | [
"ShareGPT4Video/ShareGPT4Video",
"lodestones/ShareGPT4Video",
"lodestone-horizon/ShareGPT4Video"
] | [
"Lin-Chen/ShareGPT4Video-8B",
"Lin-Chen/ShareCaptioner-Video",
"KwabsHug/GameConfigIdea",
"cocktailpeanut/ShareCaptioner-Video",
"NotYuSheng/Playground"
] | 1 |
null | https://openreview.net/forum?id=EXwf5iE98P | @inproceedings{
liu2024ikea,
title={{IKEA} Manuals at Work: 4D Grounding of Assembly Instructions on Internet Videos},
author={Yunong Liu and Cristobal Eyzaguirre and Manling Li and Shubh Khanna and Juan Carlos Niebles and Vineeth Ravi and Saumitra Mishra and Weiyu Liu and Jiajun Wu},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=EXwf5iE98P}
} | Shape assembly is a ubiquitous task in daily life, integral for constructing complex 3D structures like IKEA furniture. While significant progress has been made in developing autonomous agents for shape assembly, existing datasets have not yet tackled the 4D grounding of assembly instructions in videos, essential for a holistic understanding of assembly in 3D space over time. We introduce IKEA Video Manuals, a dataset that features 3D models of furniture parts, instructional manuals, assembly videos from the Internet, and most importantly, annotations of dense spatio-temporal alignments between these data modalities. To demonstrate the utility of IKEA Video Manuals, we present five applications essential for shape assembly: assembly plan generation, part-conditioned segmentation, part-conditioned pose estimation, video object segmentation, and furniture assembly based on instructional video manuals. For each application, we provide evaluation metrics and baseline methods. Through experiments on our annotated data, we highlight many challenges in grounding assembly instructions in videos to improve shape assembly, including handling occlusions, varying viewpoints, and extended assembly sequences. | IKEA Manuals at Work: 4D Grounding of Assembly Instructions on Internet Videos | [
"Yunong Liu",
"Cristobal Eyzaguirre",
"Manling Li",
"Shubh Khanna",
"Juan Carlos Niebles",
"Vineeth Ravi",
"Saumitra Mishra",
"Weiyu Liu",
"Jiajun Wu"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2411.11409 | [
""
] | https://huggingface.co/papers/2411.11409 | 0 | 0 | 0 | 9 | [] | [] | [] | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=EWm9zR5Qy1 | @inproceedings{
angeloudi2024the,
title={The Multimodal Universe: Enabling Large-Scale Machine Learning with 100{TB} of Astronomical Scientific Data},
author={Eirini Angeloudi and Jeroen Audenaert and Micah Bowles and Benjamin M. Boyd and David Chemaly and Brian Cherinka and Ioana Ciuca and Miles Cranmer and Aaron Do and Matthew Grayling and Erin Elizabeth Hayes and Tom Hehir and Shirley Ho and Marc Huertas-Company and Kartheik G. Iyer and Maja Jablonska and Francois Lanusse and Henry W. Leung and Kaisey Mandel and Juan Rafael Mart{\'\i}nez-Galarza and Peter Melchior and Lucas Thibaut Meyer and Liam Holden Parker and Helen Qu and Jeff Shen and Michael J. Smith and Connor Stone and Mike Walmsley and John F Wu},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=EWm9zR5Qy1}
} | We present the `Multimodal Universe`, a large-scale multimodal dataset of scientific astronomical data, compiled specifically to facilitate machine learning research. Overall, our dataset contains hundreds of millions of astronomical observations, constituting 100TB of multi-channel and hyper-spectral images, spectra, multivariate time series, as well as a wide variety of associated scientific measurements and metadata. In addition, we include a range of benchmark tasks representative of standard practices for machine learning methods in astrophysics. This massive dataset will enable the development of large multi-modal models specifically targeted towards scientific applications. All code used to compile the dataset, as well as a description of how to access the data, is available at https://github.com/MultimodalUniverse/MultimodalUniverse | The Multimodal Universe: Enabling Large-Scale Machine Learning with 100TB of Astronomical Scientific Data | [
"Eirini Angeloudi",
"Jeroen Audenaert",
"Micah Bowles",
"Benjamin M. Boyd",
"David Chemaly",
"Brian Cherinka",
"Ioana Ciuca",
"Miles Cranmer",
"Aaron Do",
"Matthew Grayling",
"Erin Elizabeth Hayes",
"Tom Hehir",
"Shirley Ho",
"Marc Huertas-Company",
"Kartheik G. Iyer",
"Maja Jablonska",
"Francois Lanusse",
"Henry W. Leung",
"Kaisey Mandel",
"Juan Rafael Martínez-Galarza",
"Peter Melchior",
"Lucas Thibaut Meyer",
"Liam Holden Parker",
"Helen Qu",
"Jeff Shen",
"Michael J. Smith",
"Connor Stone",
"Mike Walmsley",
"John F Wu"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=ETZk7lqyaF | @inproceedings{
zhang2024personalsum,
title={PersonalSum: A User-Subjective Guided Personalized Summarization Dataset for Large Language Models},
author={Lemei Zhang and Peng Liu and Marcus Tiedemann Oekland Henriksboe and Even W. Lauvrak and Jon Atle Gulla and Heri Ramampiaro},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=ETZk7lqyaF}
} | With the rapid advancement of Natural Language Processing in recent years, numerous studies have shown that generic summaries generated by Large Language Models (LLMs) can sometimes surpass those annotated by experts, such as journalists, according to human evaluations. However, there is limited research on whether these generic summaries meet the individual needs of ordinary people. The biggest obstacle is the lack of human-annotated datasets from the general public. Existing work on personalized summarization often relies on pseudo datasets created from generic summarization datasets or controllable tasks that focus on specific named entities or other aspects, such as the length and specificity of generated summaries, collected from hypothetical tasks without the annotators' initiative. To bridge this gap, we propose a high-quality, personalized, manually annotated summarization dataset called PersonalSum. This dataset is the first to investigate whether the focus of public readers differs from the generic summaries generated by LLMs. It includes user profiles, personalized summaries accompanied by source sentences from given articles, and machine-generated generic summaries along with their sources. We investigate several personal signals — entities/topics, plot, and structure of articles—that may affect the generation of personalized summaries using LLMs in a few-shot in-context learning scenario. Our preliminary results and analysis indicate that entities/topics are merely one of the key factors that impact the diverse preferences of users, and personalized summarization remains a significant challenge for existing LLMs. | PersonalSum: A User-Subjective Guided Personalized Summarization Dataset for Large Language Models | [
"Lemei Zhang",
"Peng Liu",
"Marcus Tiedemann Oekland Henriksboe",
"Even W. Lauvrak",
"Jon Atle Gulla",
"Heri Ramampiaro"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2410.03905 | [
"https://github.com/smartmediaai/personalsum"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=EQhLbuitns | @inproceedings{
chandrasegaran2024hourvideo,
title={HourVideo: 1-Hour Video-Language Understanding},
author={Keshigeyan Chandrasegaran and Agrim Gupta and Lea M. Hadzic and Taran Kota and Jimming He and Cristobal Eyzaguirre and Zane Durante and Manling Li and Jiajun Wu and Li Fei-Fei},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=EQhLbuitns}
} | We present **HourVideo**, a benchmark dataset for one hour video-language understanding. Our dataset consists of a novel task suite comprising summarization, perception (*recall*, *tracking*), visual reasoning (*spatial*, *temporal*, *predictive*, *causal*, *counterfactual*), and navigation (*room-to-room*, *object retrieval*) tasks. HourVideo includes 500 manually curated egocentric videos from the Ego4D dataset, spanning durations of 20 to 120 minutes, and features **12,976 high-quality, five-way multiple-choice questions**. Benchmarking results reveal that multimodal models, including GPT-4V and LLaVA-NeXT, achieve marginal improvements over random chance. In stark contrast, human experts significantly outperform the state-of-the-art long-context multimodal model, Gemini Pro 1.5 (85.0\% vs. 37.3\%), highlighting a substantial gap in multimodal capabilities. Our benchmark, evaluation toolkit, prompts, and documentation are available at https://hourvideo.stanford.edu. | HourVideo: 1-Hour Video-Language Understanding | [
"Keshigeyan Chandrasegaran",
"Agrim Gupta",
"Lea M. Hadzic",
"Taran Kota",
"Jimming He",
"Cristobal Eyzaguirre",
"Zane Durante",
"Manling Li",
"Jiajun Wu",
"Li Fei-Fei"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2411.04998 | [
""
] | https://huggingface.co/papers/2411.04998 | 1 | 1 | 0 | 10 | [] | [] | [] | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=EFV7fLZRWO | @inproceedings{
schneider2024muscles,
title={Muscles in Time: Learning to Understand Human Motion In-Depth by Simulating Muscle Activations},
author={David Schneider and Simon Rei{\ss} and Marco Kugler and Alexander Jaus and Kunyu Peng and Susanne Sutschet and M. Saquib Sarfraz and Sven Matthiesen and Rainer Stiefelhagen},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=EFV7fLZRWO}
} | Exploring the intricate dynamics between muscular and skeletal structures is pivotal for understanding human motion. This domain presents substantial challenges, primarily attributed to the intensive resources required for acquiring ground truth muscle activation data, resulting in a scarcity of datasets.
In this work, we address this issue by establishing Muscles in Time (MinT), a large-scale synthetic muscle activation dataset.
For the creation of MinT, we enriched existing motion capture datasets by incorporating muscle activation simulations derived from biomechanical human body models using the OpenSim platform, a common framework used in biomechanics and human motion research.
Starting from simple pose sequences, our pipeline enables us to extract detailed information about the timing of muscle activations within the human musculoskeletal system.
Muscles in Time contains over nine hours of simulation data covering 227 subjects and 402 simulated muscle strands.
We demonstrate the utility of this dataset by presenting results on neural network-based muscle activation estimation from human pose sequences with two different sequence-to-sequence architectures. | Muscles in Time: Learning to Understand Human Motion In-Depth by Simulating Muscle Activations | [
"David Schneider",
"Simon Reiß",
"Marco Kugler",
"Alexander Jaus",
"Kunyu Peng",
"Susanne Sutschet",
"M. Saquib Sarfraz",
"Sven Matthiesen",
"Rainer Stiefelhagen"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=EEwb201bnO | @inproceedings{
jia2024infer,
title={Infer Induced Sentiment of Comment Response to Video: A New Task, Dataset and Baseline},
author={Qi Jia and Baoyu Fan and Cong Xu and Lu Liu and Liang Jin and Guoguang Du and Zhenhua Guo and Yaqian Zhao and Xuanjing Huang and Rengang Li},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=EEwb201bnO}
} | Existing video multi-modal sentiment analysis mainly focuses on the sentiment expressed by people within the video, yet often neglects the sentiment induced in viewers who watch the videos. The induced sentiment of viewers is essential for inferring the public response to videos and has broad applications in analyzing public societal sentiment, advertising effectiveness, and other areas. Micro videos and their related comments provide a rich application scenario for viewers’ induced sentiment analysis. In light of this, we introduce a novel research task, Multimodal Sentiment Analysis for Comment Response of Video Induced (MSA-CRVI), which aims to infer opinions and emotions from comments made in response to micro videos. In addition, we manually annotate a dataset named Comment Sentiment toward Micro Video (CSMV) to support this research. To our knowledge, it is the largest video multi-modal sentiment dataset in terms of scale and video duration, containing 107,267 comments and 8,210 micro videos with a total video duration of 68.83 hours. Since inferring the induced sentiment of a comment requires leveraging the video content, we propose the Video Content-aware Comment Sentiment Analysis (VC-CSA) method as a baseline to address the challenges inherent in this new task. Extensive experiments demonstrate that our method achieves significant improvements over other established baselines. We make the dataset and source code publicly available at https://github.com/IEIT-AGI/MSA-CRVI. | Infer Induced Sentiment of Comment Response to Video: A New Task, Dataset and Baseline | [
"Qi Jia",
"Baoyu Fan",
"Cong Xu",
"Lu Liu",
"Liang Jin",
"Guoguang Du",
"Zhenhua Guo",
"Yaqian Zhao",
"Xuanjing Huang",
"Rengang Li"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=EADRzNJFn1 | @inproceedings{
gastinger2024tgb,
title={{TGB} 2.0: A Benchmark for Learning on Temporal Knowledge Graphs and Heterogeneous Graphs},
author={Julia Gastinger and Shenyang Huang and Mikhail Galkin and Erfan Loghmani and Ali Parviz and Farimah Poursafaei and Jacob Danovitch and Emanuele Rossi and Ioannis Koutis and Heiner Stuckenschmidt and Reihaneh Rabbany and Guillaume Rabusseau},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=EADRzNJFn1}
} | Multi-relational temporal graphs are powerful tools for modeling real-world data, capturing the evolving and interconnected nature of entities over time. Recently, many novel models have been proposed for ML on such graphs, intensifying the need for robust evaluation and standardized benchmark datasets. However, the availability of such resources remains scarce, and evaluation faces added complexity due to reproducibility issues in experimental protocols. To address these challenges, we introduce Temporal Graph Benchmark 2.0 (TGB 2.0), a novel benchmarking framework tailored for evaluating methods for predicting future links on Temporal Knowledge Graphs and Temporal Heterogeneous Graphs with a focus on large-scale datasets, extending the Temporal Graph Benchmark. TGB 2.0 facilitates comprehensive evaluations by presenting eight novel datasets spanning five domains with up to 53 million edges. TGB 2.0 datasets are significantly larger
than existing datasets in terms of number of nodes, edges, or timestamps. In addition, TGB 2.0 provides a reproducible and realistic evaluation pipeline for multi-relational temporal graphs. Through extensive experimentation, we observe that 1) leveraging edge-type information is crucial to obtain high performance, 2) simple heuristic baselines are often competitive with more complex methods, 3) most methods fail to run on our largest datasets, highlighting the need for research on more scalable methods. | TGB 2.0: A Benchmark for Learning on Temporal Knowledge Graphs and Heterogeneous Graphs | [
"Julia Gastinger",
"Shenyang Huang",
"Mikhail Galkin",
"Erfan Loghmani",
"Ali Parviz",
"Farimah Poursafaei",
"Jacob Danovitch",
"Emanuele Rossi",
"Ioannis Koutis",
"Heiner Stuckenschmidt",
"Reihaneh Rabbany",
"Guillaume Rabusseau"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2406.09639 | [
"https://github.com/erfanloghmani/myket-android-application-market-dataset"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=E8EAeyTxOy | @inproceedings{
li2024infibench,
title={InfiBench: Evaluating the Question-Answering Capabilities of Code Large Language Models},
author={Linyi Li and Shijie Geng and Zhenwen Li and Yibo He and Hao Yu and Ziyue Hua and Guanghan Ning and Siwei Wang and Tao Xie and Hongxia Yang},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=E8EAeyTxOy}
} | Large Language Models for code (code LLMs) have witnessed tremendous progress in recent years. With the rapid development of code LLMs, many popular evaluation benchmarks, such as HumanEval, DS-1000, and MBPP, have emerged to measure the performance of code LLMs with a particular focus on code generation tasks. However, they are insufficient to cover the full range of expected capabilities of code LLMs, which span beyond code generation to answering diverse coding-related questions. To fill this gap, we propose InfiBench, the first large-scale freeform question-answering (QA) benchmark for code to our knowledge, comprising 234 carefully selected high-quality Stack Overflow questions that span 15 programming languages. InfiBench uses four types of model-free automatic metrics to evaluate response correctness, where domain experts carefully concretize the criterion for each question. We conduct a systematic evaluation of over 100 of the latest code LLMs on InfiBench, leading to a series of novel and insightful findings. Our detailed analyses showcase potential directions for further advancement of code LLMs. InfiBench is fully open source at https://infi-coder.github.io/infibench and continuously expanding to foster more scientific and systematic practices for code LLM evaluation. | InfiBench: Evaluating the Question-Answering Capabilities of Code Large Language Models | [
"Linyi Li",
"Shijie Geng",
"Zhenwen Li",
"Yibo He",
"Hao Yu",
"Ziyue Hua",
"Guanghan Ning",
"Siwei Wang",
"Tao Xie",
"Hongxia Yang"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2404.07940 | [
"https://github.com/infi-coder/infibench-evaluator"
] | https://huggingface.co/papers/2404.07940 | 1 | 0 | 0 | 10 | [] | [] | [] | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=E18kRXTGmV | @inproceedings{
mogrovejo2024cvqa,
title={{CVQA}: Culturally-diverse Multilingual Visual Question Answering Benchmark},
author={David Orlando Romero Mogrovejo and Chenyang Lyu and Haryo Akbarianto Wibowo and Santiago G{\'o}ngora and Aishik Mandal and Sukannya Purkayastha and Jesus-German Ortiz-Barajas and Emilio Villa Cueva and Jinheon Baek and Soyeong Jeong and Injy Hamed and Zheng Xin Yong and Zheng Wei Lim and Paula M{\'o}nica Silva and Jocelyn Dunstan and M{\'e}lanie Jouitteau and David LE MEUR and Joan Nwatu and Ganzorig Batnasan and Munkh-Erdene Otgonbold and Munkhjargal Gochoo and Guido Ivetta and Luciana Benotti and Laura Alonso Alemany and Hern{\'a}n Maina and Jiahui Geng and Tiago Timponi Torrent and Frederico Belcavello and Marcelo Viridiano and Jan Christian Blaise Cruz and Dan John Velasco and Oana Ignat and Zara Burzo and Chenxi Whitehouse and Artem Abzaliev and Teresa Clifford and Gr{\'a}inne Caulfield and Teresa Lynn and Christian Salamea-Palacios and Vladimir Araujo and Yova Kementchedjhieva and Mihail Minkov Mihaylov and Israel Abebe Azime and Henok Biadglign Ademtew and Bontu Fufa Balcha and Naome A Etori and David Ifeoluwa Adelani and Rada Mihalcea and Atnafu Lambebo Tonja and Maria Camila Buitrago Cabrera and Gisela Vallejo and Holy Lovenia and Ruochen Zhang and Marcos Estecha-Garitagoitia and Mario Rodr{\'\i}guez-Cantelar and Toqeer Ehsan and Rendi Chevi and Muhammad Farid Adilazuarda and Ryandito Diandaru and Samuel Cahyawijaya and Fajri Koto and Tatsuki Kuribayashi and Haiyue Song and Aditya Nanda Kishore Khandavally and Thanmay Jayakumar and Raj Dabre and Mohamed Fazli Mohamed Imam and Kumaranage Ravindu Yasas Nagasinghe and Alina Dragonetti and Luis Fernando D'Haro and Olivier NIYOMUGISHA and Jay Gala and Pranjal A Chitale and Fauzan Farooqui and Thamar Solorio and Alham Fikri Aji},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=E18kRXTGmV}
} | Visual Question Answering (VQA) is an important task in multimodal AI, which requires models to understand and reason on knowledge present in visual and textual data. However, most of the current VQA datasets and models are primarily focused on English and a few major world languages, with images that are Western-centric. While recent efforts have tried to increase the number of languages covered in VQA datasets, they still lack diversity in low-resource languages. More importantly, some datasets extend the text to other languages, either via translation or some other approaches, but usually keep the same images, resulting in narrow cultural representation. To address these limitations, we create CVQA, a new Culturally-diverse Multilingual Visual Question Answering benchmark dataset, designed to cover a rich set of languages and regions, where we engage native speakers and cultural experts in the data collection process. CVQA includes culturally-driven images and questions from across 28 countries in four continents, covering 26 languages with 11 scripts, providing a total of 9k questions. We benchmark several Multimodal Large Language Models (MLLMs) on CVQA, and we show that the dataset is challenging for the current state-of-the-art models. This benchmark will serve as a probing evaluation suite for assessing the cultural bias of multimodal models and hopefully encourage more research efforts towards increasing cultural awareness and linguistic diversity in this field. | CVQA: Culturally-diverse Multilingual Visual Question Answering Benchmark | [
"David Orlando Romero Mogrovejo",
"Chenyang Lyu",
"Haryo Akbarianto Wibowo",
"Santiago Góngora",
"Aishik Mandal",
"Sukannya Purkayastha",
"Jesus-German Ortiz-Barajas",
"Emilio Villa Cueva",
"Jinheon Baek",
"Soyeong Jeong",
"Injy Hamed",
"Zheng Xin Yong",
"Zheng Wei Lim",
"Paula Mónica Silva",
"Jocelyn Dunstan",
"Mélanie Jouitteau",
"David LE MEUR",
"Joan Nwatu",
"Ganzorig Batnasan",
"Munkh-Erdene Otgonbold",
"Munkhjargal Gochoo",
"Guido Ivetta",
"Luciana Benotti",
"Laura Alonso Alemany",
"Hernán Maina",
"Jiahui Geng",
"Tiago Timponi Torrent",
"Frederico Belcavello",
"Marcelo Viridiano",
"Jan Christian Blaise Cruz",
"Dan John Velasco",
"Oana Ignat",
"Zara Burzo",
"Chenxi Whitehouse",
"Artem Abzaliev",
"Teresa Clifford",
"Gráinne Caulfield",
"Teresa Lynn",
"Christian Salamea-Palacios",
"Vladimir Araujo",
"Yova Kementchedjhieva",
"Mihail Minkov Mihaylov",
"Israel Abebe Azime",
"Henok Biadglign Ademtew",
"Bontu Fufa Balcha",
"Naome A Etori",
"David Ifeoluwa Adelani",
"Rada Mihalcea",
"Atnafu Lambebo Tonja",
"Maria Camila Buitrago Cabrera",
"Gisela Vallejo",
"Holy Lovenia",
"Ruochen Zhang",
"Marcos Estecha-Garitagoitia",
"Mario Rodríguez-Cantelar",
"Toqeer Ehsan",
"Rendi Chevi",
"Muhammad Farid Adilazuarda",
"Ryandito Diandaru",
"Samuel Cahyawijaya",
"Fajri Koto",
"Tatsuki Kuribayashi",
"Haiyue Song",
"Aditya Nanda Kishore Khandavally",
"Thanmay Jayakumar",
"Raj Dabre",
"Mohamed Fazli Mohamed Imam",
"Kumaranage Ravindu Yasas Nagasinghe",
"Alina Dragonetti",
"Luis Fernando D'Haro",
"Olivier NIYOMUGISHA",
"Jay Gala",
"Pranjal A Chitale",
"Fauzan Farooqui",
"Thamar Solorio",
"Alham Fikri Aji"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | oral | 2406.05967 | [
""
] | https://huggingface.co/papers/2406.05967 | 5 | 5 | 1 | 75 | [] | [
"afaji/cvqa",
"Bretagne/cvqa_br_fr_en"
] | [] | [] | [
"afaji/cvqa",
"Bretagne/cvqa_br_fr_en"
] | [] | 1 |
null | https://openreview.net/forum?id=Dx88A9Zgnv | @inproceedings{
li2024naturalbench,
title={NaturalBench: Evaluating Vision-Language Models on Natural Adversarial Samples},
author={Baiqi Li and Zhiqiu Lin and Wenxuan Peng and Jean de Dieu Nyandwi and Daniel Jiang and Zixian Ma and Simran Khanuja and Ranjay Krishna and Graham Neubig and Deva Ramanan},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=Dx88A9Zgnv}
} | Vision-language models (VLMs) have made significant progress in recent visual-question-answering (VQA) benchmarks that evaluate complex visio-linguistic reasoning. However, are these models truly effective? In this work, we show that VLMs still struggle with natural images and questions that humans can easily answer, which we term $\textbf{natural adversarial samples}$. We also find it surprisingly easy to generate these VQA samples from natural image-text corpora using off-the-shelf models like CLIP and ChatGPT. We propose a semi-automated approach to collect a new benchmark, ${\bf NaturalBench}$, for reliably evaluating VLMs with 10,000 human-verified VQA samples. Crucially, we adopt a $\textbf{vision-centric}$ design by pairing each question with two images that yield different answers, preventing ``blind'' solutions from answering without using the images. This makes NaturalBench more challenging than previous benchmarks that can largely be solved with language priors like commonsense knowledge. We evaluate ${\bf 53}$ state-of-the-art VLMs on NaturalBench, showing that models like BLIP-3, LLaVA-OneVision, Cambrian-1, InternLM-XC2, Llama3.2-Vision, Molmo, Qwen2-VL, and even the (closed-source) GPT-4o lag 50%-70% behind human performance (which is above 90%). We analyze why NaturalBench is hard from two angles: (1) ${\bf Compositionality:}$ Solving NaturalBench requires diverse visio-linguistic skills, including understanding attribute bindings, object relationships, and advanced reasoning like logic and counting. To this end, unlike prior work that uses a single tag per sample, we tag each NaturalBench sample with 1 to 8 skill tags for fine-grained evaluation. (2) ${\bf Biases: }$ NaturalBench exposes severe biases in VLMs, as models often choose the same answer regardless of the image. We show that debiasing can be crucial for VLM performance. Lastly, we apply our benchmark curation method to diverse data sources, including long captions (over 100 words) and non-English languages like Chinese and Hindi, highlighting its potential for dynamic evaluations of VLMs. | NaturalBench: Evaluating Vision-Language Models on Natural Adversarial Samples | [
"Baiqi Li",
"Zhiqiu Lin",
"Wenxuan Peng",
"Jean de Dieu Nyandwi",
"Daniel Jiang",
"Zixian Ma",
"Simran Khanuja",
"Ranjay Krishna",
"Graham Neubig",
"Deva Ramanan"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2410.14669 | [
""
] | https://huggingface.co/papers/2410.14669 | 10 | 35 | 4 | 10 | [] | [
"BaiqiL/NaturalBench",
"BaiqiL/NaturalBench-lmms-eval"
] | [] | [] | [
"BaiqiL/NaturalBench",
"BaiqiL/NaturalBench-lmms-eval"
] | [] | 1 |
null | https://openreview.net/forum?id=DjCSjizgsH | @inproceedings{
li2024simrealfire,
title={Sim2Real-Fire: A Multi-modal Simulation Dataset for Forecast and Backtracking of Real-world Forest Fire},
author={Yanzhi Li and Keqiu Li and LI GUOHUI and zumin wang and Chanqing Ji and Lubo Wang and Die Zuo and Qing Guo and Feng Zhang and Manyu Wang and Di Lin},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=DjCSjizgsH}
} | The latest research on wildfire forecast and backtracking has adopted AI models, which require a large amount of data from wildfire scenarios to capture fire spread patterns. This paper explores using cost-effective simulated wildfire scenarios to train AI models and apply them to the analysis of real-world wildfire. This solution requires AI models to minimize the Sim2Real gap, a brand-new topic in the fire spread analysis research community. To investigate the possibility of minimizing the Sim2Real gap, we collect the Sim2Real-Fire dataset that contains 1M simulated scenarios with multi-modal environmental information for training AI models. We prepare 1K real-world wildfire scenarios for testing the AI models. We also propose a deep transformer, S2R-FireTr, which excels in considering the multi-modal environmental information for forecasting and backtracking the wildfire. S2R-FireTr surpasses state-of-the-art methods in real-world wildfire scenarios. | Sim2Real-Fire: A Multi-modal Simulation Dataset for Forecast and Backtracking of Real-world Forest Fire | [
"Yanzhi Li",
"Keqiu Li",
"LI GUOHUI",
"zumin wang",
"Chanqing Ji",
"Lubo Wang",
"Die Zuo",
"Qing Guo",
"Feng Zhang",
"Manyu Wang",
"Di Lin"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=Dgy5WVgPd2 | @inproceedings{
wu2024instruction,
title={Instruction Tuning Large Language Models to Understand Electronic Health Records},
author={Zhenbang Wu and Anant Dadu and Michael Nalls and Faraz Faghri and Jimeng Sun},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=Dgy5WVgPd2}
} | Large language models (LLMs) have shown impressive capabilities in solving a wide range of tasks based on human instructions. However, developing a conversational AI assistant for electronic health record (EHR) data remains challenging due to (1) the lack of large-scale instruction-following datasets and (2) the limitations of existing model architectures in handling complex and heterogeneous EHR data.
In this paper, we introduce MIMIC-Instr, a dataset comprising over 400K open-ended instruction-following examples derived from the MIMIC-IV EHR database. This dataset covers various topics and is suitable for instruction-tuning general-purpose LLMs for diverse clinical use cases. Additionally, we propose Llemr, a general framework that enables LLMs to process and interpret EHRs with complex data structures. Llemr demonstrates competitive performance in answering a wide range of patient-related questions based on EHR data.
Furthermore, our evaluations on clinical predictive modeling benchmarks reveal that the fine-tuned Llemr achieves performance comparable to state-of-the-art (SOTA) baselines using curated features. The dataset and code are available at https://github.com/zzachw/llemr. | Instruction Tuning Large Language Models to Understand Electronic Health Records | [
"Zhenbang Wu",
"Anant Dadu",
"Michael Nalls",
"Faraz Faghri",
"Jimeng Sun"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=DfhcOelEnP | @inproceedings{
sundar2024cpapers,
title={c{PAPERS}: A Dataset of Situated and Multimodal Interactive Conversations in Scientific Papers},
author={Anirudh Sundar and Jin Xu and William Gay and Christopher Gordon Richardson and Larry Heck},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=DfhcOelEnP}
} | An emerging area of research in situated and multimodal interactive conversations (SIMMC) includes interactions in scientific papers. Since scientific papers are primarily composed of text, equations, figures, and tables, SIMMC methods must be developed specifically for each component to support the depth of inquiry and interactions required by research scientists. This work introduces $Conversational Papers$ (cPAPERS), a dataset of conversational question-answer pairs from reviews of academic papers grounded in these paper components and their associated references from scientific documents available on arXiv. We present a data collection strategy to collect these question-answer pairs from OpenReview and associate them with contextual information from $LaTeX$ source files. Additionally, we present a series of baseline approaches utilizing Large Language Models (LLMs) in both zero-shot and fine-tuned configurations to address the cPAPERS dataset. | cPAPERS: A Dataset of Situated and Multimodal Interactive Conversations in Scientific Papers | [
"Anirudh Sundar",
"Jin Xu",
"William Gay",
"Christopher Gordon Richardson",
"Larry Heck"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2406.08398 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=DJVyRhT8nP | @inproceedings{
li2024humanaware,
title={Human-Aware Vision-and-Language Navigation: Bridging Simulation to Reality with Dynamic Human Interactions},
author={Heng Li and Minghan Li and Zhi-Qi Cheng and Yifei Dong and Yuxuan Zhou and Jun-Yan He and Qi Dai and Teruko Mitamura and Alexander G Hauptmann},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=DJVyRhT8nP}
} | Vision-and-Language Navigation (VLN) aims to develop embodied agents that navigate based on human instructions. However, current VLN frameworks often rely on static environments and optimal expert supervision, limiting their real-world applicability. To address this, we introduce Human-Aware Vision-and-Language Navigation (HA-VLN), extending traditional VLN by incorporating dynamic human activities and relaxing key assumptions. We propose the Human-Aware 3D (HA3D) simulator, which combines dynamic human activities with the Matterport3D dataset, and the Human-Aware Room-to-Room (HA-R2R) dataset, extending R2R with human activity descriptions. To tackle HA-VLN challenges, we present the Expert-Supervised Cross-Modal (VLN-CM) and Non-Expert-Supervised Decision Transformer (VLN-DT) agents, utilizing cross-modal fusion and diverse training strategies for effective navigation in dynamic human environments. A comprehensive evaluation, including metrics considering human activities, and systematic analysis of HA-VLN's unique challenges, underscores the need for further research to enhance HA-VLN agents' real-world robustness and adaptability. Ultimately, this work provides benchmarks and insights for future research on embodied AI and Sim2Real transfer, paving the way for more realistic and applicable VLN systems in human-populated environments. | Human-Aware Vision-and-Language Navigation: Bridging Simulation to Reality with Dynamic Human Interactions | [
"Heng Li",
"Minghan Li",
"Zhi-Qi Cheng",
"Yifei Dong",
"Yuxuan Zhou",
"Jun-Yan He",
"Qi Dai",
"Teruko Mitamura",
"Alexander G Hauptmann"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | oral | 2406.19236 | [
"https://github.com/lpercc/ha3d_simulator"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=DFr5hteojx | @inproceedings{
kirk2024the,
title={The {PRISM} Alignment Dataset: What Participatory, Representative and Individualised Human Feedback Reveals About the Subjective and Multicultural Alignment of Large Language Models},
author={Hannah Rose Kirk and Alexander Whitefield and Paul R{\"o}ttger and Andrew Michael Bean and Katerina Margatina and Rafael Mosquera and Juan Manuel Ciro and Max Bartolo and Adina Williams and He He and Bertie Vidgen and Scott A. Hale},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=DFr5hteojx}
} | Human feedback is central to the alignment of Large Language Models (LLMs). However, open questions remain about the methods (how), domains (where), people (who) and objectives (to what end) of feedback processes. To navigate these questions, we introduce PRISM, a new dataset which maps the sociodemographics and stated preferences of 1,500 diverse participants from 75 countries, to their contextual preferences and fine-grained feedback in 8,011 live conversations with 21 LLMs. With PRISM, we contribute (i) wider geographic and demographic participation in feedback; (ii) census-representative samples for two countries (UK, US); and (iii) individualised ratings that link to detailed participant profiles, permitting personalisation and attribution of sample artefacts. We target subjective and multicultural perspectives on value-laden and controversial issues, where we expect interpersonal and cross-cultural disagreement. We use PRISM in three case studies to demonstrate the need for careful consideration of which humans provide alignment data. | The PRISM Alignment Dataset: What Participatory, Representative and Individualised Human Feedback Reveals About the Subjective and Multicultural Alignment of Large Language Models | [
"Hannah Rose Kirk",
"Alexander Whitefield",
"Paul Röttger",
"Andrew Michael Bean",
"Katerina Margatina",
"Rafael Mosquera",
"Juan Manuel Ciro",
"Max Bartolo",
"Adina Williams",
"He He",
"Bertie Vidgen",
"Scott A. Hale"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=DFb1gwnhQS | @inproceedings{
li2024fire,
title={{FIRE}: A Dataset for Feedback Integration and Refinement Evaluation of Multimodal Models},
author={Pengxiang Li and Zhi Gao and Bofei Zhang and Tao Yuan and Yuwei Wu and Mehrtash Harandi and Yunde Jia and Song-Chun Zhu and Qing Li},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=DFb1gwnhQS}
} | Vision language models (VLMs) have achieved impressive progress in diverse applications, becoming a prevalent research direction. In this paper, we build FIRE, a feedback-refinement dataset, consisting of 1.1M multi-turn conversations that are derived from 27 source datasets, empowering VLMs to spontaneously refine their responses based on user feedback across diverse tasks. To scale up the data collection, FIRE is collected in two components: FIRE-100K and FIRE-1M, where FIRE-100K is generated by GPT-4V, and FIRE-1M is freely generated via models trained on FIRE-100K. Then, we build FIRE-Bench, a benchmark to comprehensively evaluate the feedback-refining capability of VLMs, which contains 11K feedback-refinement conversations as the test data, two evaluation settings, and a model to provide feedback for VLMs. We develop the FIRE-LLaVA model by fine-tuning LLaVA on FIRE-100K and FIRE-1M, which shows remarkable feedback-refining capability on FIRE-Bench and outperforms untrained VLMs by 50%, making more efficient user-agent interactions and underscoring the significance of the FIRE dataset. | FIRE: A Dataset for Feedback Integration and Refinement Evaluation of Multimodal Models | [
"Pengxiang Li",
"Zhi Gao",
"Bofei Zhang",
"Tao Yuan",
"Yuwei Wu",
"Mehrtash Harandi",
"Yunde Jia",
"Song-Chun Zhu",
"Qing Li"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2407.11522 | [
""
] | https://huggingface.co/papers/2407.11522 | 2 | 8 | 2 | 9 | [] | [
"PengxiangLi/FIRE"
] | [] | [] | [
"PengxiangLi/FIRE"
] | [] | 1 |
null | https://openreview.net/forum?id=DFDCtGQs7S | @inproceedings{
yang2024biotrove,
title={BioTrove: A Large Curated Image Dataset Enabling {AI} for Biodiversity},
author={Chih-Hsuan Yang and Benjamin Feuer and Talukder Zaki Jubery and Zi K. Deng and Andre Nakkab and Md Zahid Hasan and Shivani Chiranjeevi and Kelly O. Marshall and Nirmal Baishnab and Asheesh K Singh and ARTI SINGH and Soumik Sarkar and Nirav Merchant and Chinmay Hegde and Baskar Ganapathysubramanian},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=DFDCtGQs7S}
} | We introduce BioTrove, the largest publicly accessible dataset designed to advance AI applications in biodiversity. Curated from the iNaturalist platform and vetted to include only research-grade data, BioTrove contains 161.9 million images, offering unprecedented scale and diversity from three primary kingdoms: Animalia ("animals"), Fungi ("fungi"), and Plantae ("plants"), spanning approximately 366.6K species. Each image is annotated with scientific names, taxonomic hierarchies, and common names, providing rich metadata to support accurate AI model development across diverse species and ecosystems.
We demonstrate the value of BioTrove by releasing a suite of CLIP models trained using a subset of 40 million captioned images, known as BioTrove-Train. This subset focuses on seven categories within the dataset that are underrepresented in standard image recognition models, selected for their critical role in biodiversity and agriculture: Aves ("birds"), Arachnida ("spiders/ticks/mites"), Insecta ("insects"), Plantae ("plants"), Fungi ("fungi"), Mollusca ("snails"), and Reptilia ("snakes/lizards"). To support rigorous assessment, we introduce several new benchmarks and report model accuracy for zero-shot learning across life stages, rare species, confounding species, and multiple taxonomic levels.
We anticipate that BioTrove will spur the development of AI models capable of supporting digital tools for pest control, crop monitoring, biodiversity assessment, and environmental conservation. These advancements are crucial for ensuring food security, preserving ecosystems, and mitigating the impacts of climate change. BioTrove is publicly available, easily accessible, and ready for immediate use. | BioTrove: A Large Curated Image Dataset Enabling AI for Biodiversity | [
"Chih-Hsuan Yang",
"Benjamin Feuer",
"Talukder Zaki Jubery",
"Zi K. Deng",
"Andre Nakkab",
"Md Zahid Hasan",
"Shivani Chiranjeevi",
"Kelly O. Marshall",
"Nirmal Baishnab",
"Asheesh K Singh",
"ARTI SINGH",
"Soumik Sarkar",
"Nirav Merchant",
"Chinmay Hegde",
"Baskar Ganapathysubramanian"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=DERtzUdhkk | @inproceedings{
wu2024torchspatial,
title={TorchSpatial: A Location Encoding Framework and Benchmark for Spatial Representation Learning},
author={Nemin Wu and Qian Cao and Zhangyu Wang and Zeping Liu and Yanlin Qi and Jielu Zhang and Joshua Ni and X. Angela Yao and Hongxu Ma and Lan Mu and Stefano Ermon and Tanuja Ganu and Akshay Nambi and Ni Lao and Gengchen Mai},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=DERtzUdhkk}
} | Spatial representation learning (SRL) aims at learning general-purpose neural network representations from various types of spatial data (e.g., points, polylines, polygons, networks, images, etc.) in their native formats. Learning good spatial representations is a fundamental problem for various downstream applications such as species distribution modeling, weather forecasting, trajectory generation, geographic question answering, etc. Even though SRL has become the foundation of almost all geospatial artificial intelligence (GeoAI) research, we have not yet seen significant efforts to develop an extensive deep learning framework and benchmark to support SRL model development and evaluation. To fill this gap, we propose TorchSpatial, a learning framework and benchmark for location (point) encoding, which is one of the most fundamental data types of spatial representation learning. TorchSpatial contains three key components: 1) a unified location encoding framework that consolidates 15 commonly recognized location encoders, ensuring scalability and reproducibility of the implementations; 2) the LocBench benchmark tasks encompassing 7 geo-aware image classification and 10 geo-aware image regression datasets; 3) a comprehensive suite of evaluation metrics to quantify geo-aware models’ overall performance as well as their geographic bias, with a novel Geo-Bias Score metric. Finally, we provide a detailed analysis and insights into the model performance and geographic bias of different location encoders. We believe TorchSpatial will foster future advancement of spatial representation learning and spatial fairness in GeoAI research. The TorchSpatial model framework, LocBench, and Geo-Bias Score evaluation framework are available at https://github.com/seai-lab/TorchSpatial. | TorchSpatial: A Location Encoding Framework and Benchmark for Spatial Representation Learning | [
"Nemin Wu",
"Qian Cao",
"Zhangyu Wang",
"Zeping Liu",
"Yanlin Qi",
"Jielu Zhang",
"Joshua Ni",
"X. Angela Yao",
"Hongxu Ma",
"Lan Mu",
"Stefano Ermon",
"Tanuja Ganu",
"Akshay Nambi",
"Ni Lao",
"Gengchen Mai"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2406.15658 | [
"https://github.com/seai-lab/torchspatial"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=D3jyWDBZTk | @inproceedings{
jin2024shopping,
title={Shopping {MMLU}: A Massive Multi-Task Online Shopping Benchmark for Large Language Models},
author={Yilun Jin and Zheng Li and Chenwei Zhang and Tianyu Cao and Yifan Gao and Pratik Sridatt Jayarao and Mao Li and Xin Liu and Ritesh Sarkhel and Xianfeng Tang and Haodong Wang and Zhengyang Wang and Wenju Xu and Jingfeng Yang and Qingyu Yin and Xian Li and Priyanka Nigam and Yi Xu and Kai Chen and Qiang Yang and Meng Jiang and Bing Yin},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=D3jyWDBZTk}
} | Online shopping is a complex multi-task, few-shot learning problem with a wide and evolving range of entities, relations, and tasks. However, existing models and benchmarks are commonly tailored to specific tasks, falling short of capturing the full complexity of online shopping. Large Language Models (LLMs), with their multi-task and few-shot learning abilities, have the potential to profoundly transform online shopping by alleviating task-specific engineering efforts and by providing users with interactive conversations. Despite the potential, LLMs face unique challenges in online shopping, such as domain-specific concepts, implicit knowledge, and heterogeneous user behaviors. Motivated by the potential and challenges, we propose Shopping MMLU, a diverse multi-task online shopping benchmark derived from real-world Amazon data. Shopping MMLU consists of 57 tasks covering 4 major shopping skills: concept understanding, knowledge reasoning, user behavior alignment, and multi-linguality, and can thus comprehensively evaluate the abilities of LLMs as general shop assistants. With Shopping MMLU, we benchmark over 20 existing LLMs and uncover valuable insights about practices and prospects of building versatile LLM-based shop assistants. Shopping MMLU can be publicly accessed at https://github.com/KL4805/ShoppingMMLU. In addition, with Shopping MMLU, we are hosting a competition in KDD Cup 2024 with over 500 participating teams. The winning solutions and the associated workshop can be accessed at our website https://amazon-kddcup24.github.io/. | Shopping MMLU: A Massive Multi-Task Online Shopping Benchmark for Large Language Models | [
"Yilun Jin",
"Zheng Li",
"Chenwei Zhang",
"Tianyu Cao",
"Yifan Gao",
"Pratik Sridatt Jayarao",
"Mao Li",
"Xin Liu",
"Ritesh Sarkhel",
"Xianfeng Tang",
"Haodong Wang",
"Zhengyang Wang",
"Wenju Xu",
"Jingfeng Yang",
"Qingyu Yin",
"Xian Li",
"Priyanka Nigam",
"Yi Xu",
"Kai Chen",
"Qiang Yang",
"Meng Jiang",
"Bing Yin"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2410.20745 | [
"https://github.com/kl4805/shoppingmmlu"
] | https://huggingface.co/papers/2410.20745 | 2 | 0 | 0 | 22 | [] | [] | [
"KL4805/shopping_mmlu_leaderboard"
] | [] | [] | [
"KL4805/shopping_mmlu_leaderboard"
] | 1 |
null | https://openreview.net/forum?id=CyrKKKN3fs | @inproceedings{
jin2024fairmedfm,
title={FairMed{FM}: Fairness Benchmarking for Medical Imaging Foundation Models},
author={Ruinan Jin and Zikang Xu and Yuan Zhong and Qingsong Yao and Qi Dou and S Kevin Zhou and Xiaoxiao Li},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=CyrKKKN3fs}
} | The advent of foundation models (FMs) in healthcare offers unprecedented opportunities to enhance medical diagnostics through automated classification and segmentation tasks. However, these models also raise significant concerns about their fairness, especially when applied to diverse and underrepresented populations in healthcare applications. Currently, there is a lack of comprehensive benchmarks, standardized pipelines, and easily adaptable libraries to evaluate and understand the fairness performance of FMs in medical imaging, leading to considerable challenges in formulating and implementing solutions that ensure equitable outcomes across diverse patient populations. To fill this gap, we introduce FairMedFM, a fairness benchmark for FM research in medical imaging. FairMedFM integrates with 17 popular medical imaging datasets, encompassing different modalities, dimensionalities, and sensitive attributes. It explores 20 widely used FMs, with various usages such as zero-shot learning, linear probing, parameter-efficient fine-tuning, and prompting in various downstream tasks -- classification and segmentation. Our exhaustive analysis evaluates the fairness performance over different evaluation metrics from multiple perspectives, revealing the existence of bias, varied utility-fairness trade-offs on different FMs, consistent disparities on the same datasets regardless FMs, and limited effectiveness of existing unfairness mitigation methods. Furthermore, FairMedFM provides an open-sourced codebase at https://github.com/FairMedFM/FairMedFM, supporting extendible functionalities and applications and inclusive for studies on FMs in medical imaging over the long term. | FairMedFM: Fairness Benchmarking for Medical Imaging Foundation Models | [
"Ruinan Jin",
"Zikang Xu",
"Yuan Zhong",
"Qingsong Yao",
"Qi Dou",
"S Kevin Zhou",
"Xiaoxiao Li"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2407.00983 | [
"https://github.com/FairMedFM/FairMedFM"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=CxNXoMnCKc | @inproceedings{
shao2024privacylens,
title={PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action},
author={Yijia Shao and Tianshi Li and Weiyan Shi and Yanchen Liu and Diyi Yang},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=CxNXoMnCKc}
} | As language models (LMs) are widely utilized in personalized communication scenarios (e.g., sending emails, writing social media posts) and endowed with a certain level of agency, ensuring they act in accordance with the contextual privacy norms becomes increasingly critical. However, quantifying the privacy norm awareness of LMs and the emerging privacy risk in LM-mediated communication is challenging due to (1) the contextual and long-tailed nature of privacy-sensitive cases, and (2) the lack of evaluation approaches that capture realistic application scenarios. To address these challenges, we propose PrivacyLens, a novel framework designed to extend privacy-sensitive seeds into expressive vignettes and further into agent trajectories, enabling multi-level evaluation of privacy leakage in LM agents' actions. We instantiate PrivacyLens with a collection of privacy norms grounded in privacy literature and crowdsourced seeds. Using this dataset, we reveal a discrepancy between LM performance in answering probing questions and their actual behavior when executing user instructions in an agent setup. State-of-the-art LMs, like GPT-4 and Llama-3-70B, leak sensitive information in 25.68% and 38.69% of cases, even when prompted with privacy-enhancing instructions. We also demonstrate the dynamic nature of PrivacyLens by extending each seed into multiple trajectories to red-team LM privacy leakage risk. Dataset and code are available at https://github.com/SALT-NLP/PrivacyLens. | PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action | [
"Yijia Shao",
"Tianshi Li",
"Weiyan Shi",
"Yanchen Liu",
"Diyi Yang"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2409.00138 | [
"https://github.com/salt-nlp/privacylens"
] | https://huggingface.co/papers/2409.00138 | 1 | 1 | 2 | 5 | [] | [
"SALT-NLP/PrivacyLens"
] | [] | [] | [
"SALT-NLP/PrivacyLens"
] | [] | 1 |
null | https://openreview.net/forum?id=ChKCF75Ocd | @inproceedings{
tsoukalas2024putnambench,
title={PutnamBench: Evaluating Neural Theorem-Provers on the Putnam Mathematical Competition},
author={George Tsoukalas and Jasper Lee and John Jennings and Jimmy Xin and Michelle Ding and Michael Jennings and Amitayush Thakur and Swarat Chaudhuri},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=ChKCF75Ocd}
} | We present PutnamBench, a new multi-language benchmark for evaluating the ability of neural theorem-provers to solve competition mathematics problems. PutnamBench consists of 1692 hand-constructed formalizations of 640 theorems sourced from the William Lowell Putnam Mathematical Competition, the premier undergraduate-level mathematics competition in North America.
All the problems have formalizations in Lean 4 and Isabelle; a substantial subset also has Coq formalizations. PutnamBench requires significant problem-solving ability and proficiency in a broad range of topics taught in undergraduate mathematics courses. We use PutnamBench to evaluate several established neural and symbolic theorem-provers.
These approaches can only solve a handful of the PutnamBench problems, establishing the benchmark as a difficult open challenge for research on neural theorem-proving. PutnamBench is available at https://github.com/trishullab/PutnamBench. | PutnamBench: Evaluating Neural Theorem-Provers on the Putnam Mathematical Competition | [
"George Tsoukalas",
"Jasper Lee",
"John Jennings",
"Jimmy Xin",
"Michelle Ding",
"Michael Jennings",
"Amitayush Thakur",
"Swarat Chaudhuri"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2407.11214 | [
"https://github.com/trishullab/putnambench"
] | https://huggingface.co/papers/2407.11214 | 0 | 0 | 0 | 8 | [] | [
"brando/putnam_bench_informal"
] | [] | [] | [
"brando/putnam_bench_informal"
] | [] | 1 |
null | https://openreview.net/forum?id=CaAJeNkceP | @inproceedings{
formanek2024dispelling,
title={Dispelling the Mirage of Progress in Offline {MARL} through Standardised Baselines and Evaluation},
author={Juan Claude Formanek and Callum Rhys Tilbury and Louise Beyers and Jonathan Phillip Shock and Arnu Pretorius},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=CaAJeNkceP}
} | Offline multi-agent reinforcement learning (MARL) is an emerging field with great promise for real-world applications. Unfortunately, the current state of research in offline MARL is plagued by inconsistencies in baselines and evaluation protocols, which ultimately makes it difficult to accurately assess progress, trust newly proposed innovations, and allow researchers to easily build upon prior work. In this paper, we firstly identify significant shortcomings in existing methodologies for measuring the performance of novel algorithms through a representative study of published offline MARL work. Secondly, by directly comparing to this prior work, we demonstrate that simple, well-implemented baselines can achieve state-of-the-art (SOTA) results across a wide range of tasks. Specifically, we show that on 35 out of 47 datasets used in prior work (almost 75\% of cases), we match or surpass the performance of the current purported SOTA. Strikingly, our baselines often substantially outperform these more sophisticated algorithms. Finally, we correct for the shortcomings highlighted from this prior work by introducing a straightforward standardised methodology for evaluation and by providing our baseline implementations with statistically robust results across several scenarios, useful for comparisons in future work. Our proposal includes simple and sensible steps that are easy to adopt, which in combination with solid baselines and comparative results, could substantially improve the overall rigour of empirical science in offline MARL moving forward. | Dispelling the Mirage of Progress in Offline MARL through Standardised Baselines and Evaluation | [
"Juan Claude Formanek",
"Callum Rhys Tilbury",
"Louise Beyers",
"Jonathan Phillip Shock",
"Arnu Pretorius"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2406.09068 | [
"https://github.com/instadeepai/og-marl"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=CW9SJyhpVt | @inproceedings{
li2024gvrep,
title={{GV}-Rep: A Large-Scale Dataset for Genetic Variant Representation Learning},
author={Zehui Li and Vallijah Subasri and Guy-Bart Stan and Yiren Zhao and BO WANG},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=CW9SJyhpVt}
} | Genetic variants (GVs) are defined as differences in the DNA sequences among individuals and play a crucial role in diagnosing and treating genetic diseases. The rapid decrease in next generation sequencing cost, analogous to Moore’s Law, has led to an exponential increase in the availability of patient-level GV data. This growth poses a challenge for clinicians who must efficiently prioritize patient-specific GVs and integrate them with existing genomic databases to inform patient management. To address the interpretation of GVs, genomic foundation models (GFMs) have emerged. However, these models lack standardized performance assessments, leading to considerable variability in model evaluations. This poses the question: *How effectively do deep learning methods classify unknown GVs and align them with clinically-verified GVs?* We argue that representation learning, which transforms raw data into meaningful feature spaces, is an effective approach for addressing both indexing and classification challenges. We introduce a large-scale Genetic Variant dataset, named $\textsf{GV-Rep}$, featuring variable-length contexts and detailed annotations, designed for deep learning models to learn GV representations across various traits, diseases, tissue types, and experimental contexts. Our contributions are three-fold: (i) $\textbf{Construction}$ of a comprehensive dataset with 7 million records, each labeled with characteristics of the corresponding variants, alongside additional data from 17,548 gene knockout tests across 1,107 cell types, 1,808 variant combinations, and 156 unique clinically-verified GVs from real-world patients. (ii) $\textbf{Analysis}$ of the structure and properties of the dataset. (iii) $\textbf{Experimentation}$ of the dataset with pre-trained genomic foundation models (GFMs). The results highlight a significant disparity between the current capabilities of GFMs and the accurate representation of GVs. We hope this dataset will advance genomic deep learning to bridge this gap. | GV-Rep: A Large-Scale Dataset for Genetic Variant Representation Learning | [
"Zehui Li",
"Vallijah Subasri",
"Guy-Bart Stan",
"Yiren Zhao",
"BO WANG"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2407.16940 | [
"https://github.com/bowang-lab/genomic-fm"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=CNWdWn47IE | @inproceedings{
li2024datacomplm,
title={DataComp-{LM}: In search of the next generation of training sets for language models},
author={Jeffrey Li and Alex Fang and Georgios Smyrnis and Maor Ivgi and Matt Jordan and Samir Yitzhak Gadre and Hritik Bansal and Etash Kumar Guha and Sedrick Keh and Kushal Arora and Saurabh Garg and Rui Xin and Niklas Muennighoff and Reinhard Heckel and Jean Mercat and Mayee F Chen and Suchin Gururangan and Mitchell Wortsman and Alon Albalak and Yonatan Bitton and Marianna Nezhurina and Amro Kamal Mohamed Abbas and Cheng-Yu Hsieh and Dhruba Ghosh and Joshua P Gardner and Maciej Kilian and Hanlin Zhang and Rulin Shao and Sarah M Pratt and Sunny Sanyal and Gabriel Ilharco and Giannis Daras and Kalyani Marathe and Aaron Gokaslan and Jieyu Zhang and Khyathi Chandu and Thao Nguyen and Igor Vasiljevic and Sham M. Kakade and Shuran Song and Sujay Sanghavi and Fartash Faghri and Sewoong Oh and Luke Zettlemoyer and Kyle Lo and Alaaeldin El-Nouby and Hadi Pouransari and Alexander T Toshev and Stephanie Wang and Dirk Groeneveld and Luca Soldaini and Pang Wei Koh and Jenia Jitsev and Thomas Kollar and Alex Dimakis and Yair Carmon and Achal Dave and Ludwig Schmidt and Vaishaal Shankar},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=CNWdWn47IE}
} | We introduce DataComp for Language Models (DCLM), a testbed for controlled dataset experiments with the goal of improving language models.
As part of DCLM, we provide a standardized corpus of 240T tokens extracted from Common Crawl, effective pretraining recipes based on the OpenLM framework, and a broad suite of 53 downstream evaluations.
Participants in the DCLM benchmark can experiment with data curation strategies such as deduplication, filtering, and data mixing at
model scales ranging from 412M to 7B parameters.
As a baseline for DCLM, we conduct extensive experiments and find that model-based filtering is key to assembling a high-quality training set.
The resulting dataset, DCLM-Baseline, enables training a 7B parameter language model from scratch to 63% 5-shot accuracy on MMLU with 2T training tokens.
Compared to MAP-Neo, the previous state-of-the-art in open-data language models, DCLM-Baseline represents a 6 percentage point improvement on MMLU while being trained with half the compute.
Our results highlight the importance of dataset design for training language models and offer a starting point for further research on data curation. We release the DCLM benchmark, framework, models, and datasets at https://www.datacomp.ai/dclm/ | DataComp-LM: In search of the next generation of training sets for language models | [
"Jeffrey Li",
"Alex Fang",
"Georgios Smyrnis",
"Maor Ivgi",
"Matt Jordan",
"Samir Yitzhak Gadre",
"Hritik Bansal",
"Etash Kumar Guha",
"Sedrick Keh",
"Kushal Arora",
"Saurabh Garg",
"Rui Xin",
"Niklas Muennighoff",
"Reinhard Heckel",
"Jean Mercat",
"Mayee F Chen",
"Suchin Gururangan",
"Mitchell Wortsman",
"Alon Albalak",
"Yonatan Bitton",
"Marianna Nezhurina",
"Amro Kamal Mohamed Abbas",
"Cheng-Yu Hsieh",
"Dhruba Ghosh",
"Joshua P Gardner",
"Maciej Kilian",
"Hanlin Zhang",
"Rulin Shao",
"Sarah M Pratt",
"Sunny Sanyal",
"Gabriel Ilharco",
"Giannis Daras",
"Kalyani Marathe",
"Aaron Gokaslan",
"Jieyu Zhang",
"Khyathi Chandu",
"Thao Nguyen",
"Igor Vasiljevic",
"Sham M. Kakade",
"Shuran Song",
"Sujay Sanghavi",
"Fartash Faghri",
"Sewoong Oh",
"Luke Zettlemoyer",
"Kyle Lo",
"Alaaeldin El-Nouby",
"Hadi Pouransari",
"Alexander T Toshev",
"Stephanie Wang",
"Dirk Groeneveld",
"Luca Soldaini",
"Pang Wei Koh",
"Jenia Jitsev",
"Thomas Kollar",
"Alex Dimakis",
"Yair Carmon",
"Achal Dave",
"Ludwig Schmidt",
"Vaishaal Shankar"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2406.11794 | [
""
] | https://huggingface.co/papers/2406.11794 | 24 | 48 | 3 | 59 | [
"apple/DCLM-7B",
"apple/DCLM-7B-8k",
"mlfoundations/fasttext-oh-eli5",
"TRI-ML/DCLM-1B",
"TRI-ML/DCLM-1B-v0",
"mlfoundations/dclm-7b-it",
"TRI-ML/DCLM-1B-IT",
"mllmTeam/PhoneLM-0.5B",
"mllmTeam/PhoneLM-1.5B"
] | [
"mlfoundations/dclm-baseline-1.0",
"mlfoundations/dclm-baseline-1.0-parquet",
"Zyphra/dclm-dedup"
] | [
"jmercat/DCLM-demo",
"Ireneo/apple_dclm",
"Tonic/DCLM-1B",
"ZMaxAIru/apple_dclm"
] | [
"apple/DCLM-7B",
"apple/DCLM-7B-8k",
"mlfoundations/fasttext-oh-eli5",
"TRI-ML/DCLM-1B",
"TRI-ML/DCLM-1B-v0",
"mlfoundations/dclm-7b-it",
"TRI-ML/DCLM-1B-IT",
"mllmTeam/PhoneLM-0.5B",
"mllmTeam/PhoneLM-1.5B"
] | [
"mlfoundations/dclm-baseline-1.0",
"mlfoundations/dclm-baseline-1.0-parquet",
"Zyphra/dclm-dedup"
] | [
"jmercat/DCLM-demo",
"Ireneo/apple_dclm",
"Tonic/DCLM-1B",
"ZMaxAIru/apple_dclm"
] | 1 |
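For readers who want to inspect the artifacts listed in this record, the sketch below shows one way to stream a few documents from the mlfoundations/dclm-baseline-1.0 corpus with the Hugging Face `datasets` library. This is an illustrative sketch, not part of the record; the presence of a `text` field is an assumption based on common pretraining-corpus layouts.

```python
# Minimal sketch: stream a handful of documents from the DCLM-Baseline corpus
# listed in the record above, without downloading the full dataset.
from datasets import load_dataset

# streaming=True iterates over shards lazily instead of materializing them.
ds = load_dataset("mlfoundations/dclm-baseline-1.0", split="train", streaming=True)

for i, example in enumerate(ds):
    # Assumes a "text" field, as is typical for pretraining corpora.
    print(example["text"][:200])
    if i == 2:
        break
```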
null | https://openreview.net/forum?id=ByknnPI5Km | @inproceedings{
defrance2024abcfair,
title={{ABCF}air: an Adaptable Benchmark approach for Comparing Fairness Methods},
author={MaryBeth Defrance and Maarten Buyl and Tijl De Bie},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=ByknnPI5Km}
} | Numerous methods have been implemented that pursue fairness with respect to sensitive features by mitigating biases in machine learning. Yet, the problem settings that each method tackles vary significantly, including the stage of intervention, the composition of sensitive features, the fairness notion, and the distribution of the output. Even in binary classification, the greatest common denominator of problem settings is small, significantly complicating benchmarking.
Hence, we introduce ABCFair, a benchmark approach which allows adapting to the desiderata of the real-world problem setting, enabling proper comparability between methods for any use case. We apply this benchmark to a range of pre-, in-, and postprocessing methods on both large-scale, traditional datasets and on a dual label (biased and unbiased) dataset to sidestep the fairness-accuracy trade-off. | ABCFair: an Adaptable Benchmark approach for Comparing Fairness Methods | [
"MaryBeth Defrance",
"Maarten Buyl",
"Tijl De Bie"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2409.16965 | [
"https://github.com/aida-ugent/abcfair"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=BZxtiElo0c | @inproceedings{
zou2024gess,
title={Ge{SS}: Benchmarking Geometric Deep Learning under Scientific Applications with Distribution Shifts},
author={Deyu Zou and Shikun Liu and Siqi Miao and Victor Fung and Shiyu Chang and Pan Li},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=BZxtiElo0c}
} | Geometric deep learning (GDL) has gained significant attention in scientific fields, for its proficiency in modeling data with intricate geometric structures.
Yet, very few works have delved into its capability of tackling the distribution shift problem, a prevalent challenge in many applications.
To bridge this gap, we propose GeSS, a comprehensive benchmark designed for evaluating the performance of GDL models in scientific scenarios with distribution shifts.
Our evaluation datasets cover diverse scientific domains from particle physics, materials science to biochemistry, and encapsulate a broad spectrum of distribution shifts including conditional, covariate, and concept shifts.
Furthermore, we study three levels of information access from the out-of-distribution (OOD) test data, including no OOD information, only unlabeled OOD data, and OOD data with a few labels.
Overall, our benchmark comprises 30 different experiment settings and evaluates 3 GDL backbones and 11 learning algorithms in each setting. A thorough analysis of the evaluation results is provided, offering insights for GDL researchers and domain practitioners who intend to use GDL in their applications. | GeSS: Benchmarking Geometric Deep Learning under Scientific Applications with Distribution Shifts | [
"Deyu Zou",
"Shikun Liu",
"Siqi Miao",
"Victor Fung",
"Shiyu Chang",
"Pan Li"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=BZe6dmDk5K | @inproceedings{
chen2024gaia,
title={{GAIA}: Rethinking Action Quality Assessment for {AI}-Generated Videos},
author={Zijian Chen and Wei Sun and Yuan Tian and Jun Jia and Zicheng Zhang and Wang Jiarui and Ru Huang and Xiongkuo Min and Guangtao Zhai and Wenjun Zhang},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=BZe6dmDk5K}
} | Assessing action quality is both imperative and challenging due to its significant impact on the quality of AI-generated videos, further complicated by the inherently ambiguous nature of actions within AI-generated video (AIGV). Current action quality assessment (AQA) algorithms predominantly focus on actions from real specific scenarios and are pre-trained with normative action features, thus rendering them inapplicable in AIGVs. To address these problems, we construct GAIA, a Generic AI-generated Action dataset, by conducting a large-scale subjective evaluation from a novel causal reasoning-based perspective, resulting in 971,244 ratings among 9,180 video-action pairs. Based on GAIA, we evaluate a suite of popular text-to-video (T2V) models on their ability to generate visually rational actions, revealing their pros and cons on different categories of actions. We also extend GAIA as a testbed to benchmark the AQA capacity of existing automatic evaluation methods. Results show that traditional AQA methods, action-related metrics in recent T2V benchmarks, and mainstream video quality methods perform poorly with an average SRCC of 0.454, 0.191, and 0.519, respectively, indicating a sizable gap between current models and human action perception patterns in AIGVs. Our findings underscore the significance of action quality as a unique perspective for studying AIGVs and can catalyze progress towards methods with enhanced capacities for AQA in AIGVs. | GAIA: Rethinking Action Quality Assessment for AI-Generated Videos | [
"Zijian Chen",
"Wei Sun",
"Yuan Tian",
"Jun Jia",
"Zicheng Zhang",
"Wang Jiarui",
"Ru Huang",
"Xiongkuo Min",
"Guangtao Zhai",
"Wenjun Zhang"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | oral | 2406.06087 | [
"https://github.com/zijianchen98/gaia"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
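The SRCC figures quoted in the GAIA abstract are Spearman rank correlation coefficients between automatic scores and human ratings. As a quick illustration of the metric (the score values below are placeholders, not numbers from GAIA), it can be computed as follows.

```python
# Illustrative only: SRCC (Spearman rank correlation) between automatic
# quality scores and human ratings, the agreement metric reported above.
from scipy.stats import spearmanr

predicted = [0.61, 0.42, 0.88, 0.15, 0.73]   # hypothetical model scores
human_mos = [3.2, 2.9, 4.5, 1.8, 3.9]        # hypothetical mean opinion scores

srcc, p_value = spearmanr(predicted, human_mos)
print(f"SRCC = {srcc:.3f} (p = {p_value:.3f})")
```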
null | https://openreview.net/forum?id=BKu8JPQdQD | @inproceedings{
estermann2024puzzles,
title={{PUZZLES}: A Benchmark for Neural Algorithmic Reasoning},
author={Benjamin Estermann and Luca A Lanzend{\"o}rfer and Yannick Niedermayr and Roger Wattenhofer},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=BKu8JPQdQD}
} | Algorithmic reasoning is a fundamental cognitive ability that plays a pivotal role in problem-solving and decision-making processes. Reinforcement Learning (RL) has demonstrated remarkable proficiency in tasks such as motor control, handling perceptual input, and managing stochastic environments. These advancements have been enabled in part by the availability of benchmarks. In this work we introduce PUZZLES, a benchmark based on Simon Tatham's Portable Puzzle Collection, aimed at fostering progress in algorithmic and logical reasoning in RL. PUZZLES contains 40 diverse logic puzzles of adjustable sizes and varying levels of complexity, providing detailed information on the strengths and generalization capabilities of RL agents. Furthermore, we evaluate various RL algorithms on PUZZLES, providing baseline comparisons and demonstrating the potential for future research. All the software, including the environment, is available at this https url. | PUZZLES: A Benchmark for Neural Algorithmic Reasoning | [
"Benjamin Estermann",
"Luca A Lanzendörfer",
"Yannick Niedermayr",
"Roger Wattenhofer"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2407.00401 | [
"https://github.com/eth-disco/rlp"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
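PUZZLES exposes its logic puzzles as RL environments. The loop below is a generic Gymnasium-style interaction sketch with a random policy; the environment id and any registration performed by the rlp package are assumptions, not details taken from the abstract, so consult the repository for the actual names.

```python
# Generic Gymnasium interaction loop with a random baseline policy.
# The environment id "rlp/Sudoku-v0" is hypothetical.
import gymnasium as gym

env = gym.make("rlp/Sudoku-v0")  # hypothetical id; see the rlp repo for real ones
obs, info = env.reset(seed=0)

terminated = truncated = False
while not (terminated or truncated):
    action = env.action_space.sample()  # random action as a trivial baseline
    obs, reward, terminated, truncated, info = env.step(action)

env.close()
```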
null | https://openreview.net/forum?id=AxToUp4FMU | @inproceedings{
chen2024crosscare,
title={Cross-Care: Assessing the Healthcare Implications of Pre-training Data on Language Model Bias},
author={Shan Chen and Jack Gallifant and Mingye Gao and Pedro Jos{\'e} Ferreira Moreira and Nikolaj Munch and Ajay Muthukkumar and Arvind Rajan and Jaya Kolluri and Amelia Fiske and Janna Hastings and Hugo Aerts and Brian W. Anthony and Leo Anthony Celi and William La Cava and Danielle Bitterman},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=AxToUp4FMU}
} | Large language models (LLMs) are increasingly essential in processing natural languages, yet their application is frequently compromised by biases and inaccuracies originating in their training data.
In this study, we introduce Cross-Care, the first benchmark framework dedicated to assessing biases and real-world knowledge in LLMs, specifically focusing on the representation of disease prevalence across diverse demographic groups.
We systematically evaluate how demographic biases embedded in pre-training corpora like ThePile influence the outputs of LLMs.
We expose and quantify discrepancies by juxtaposing these biases against actual disease prevalences in various U.S. demographic groups.
Our results highlight substantial misalignment between LLM representation of disease prevalence and real disease prevalence rates across demographic subgroups, indicating a pronounced risk of bias propagation and a lack of real-world grounding for medical applications of LLMs.
Furthermore, we observe that various alignment methods minimally resolve inconsistencies in the models' representation of disease prevalence across different languages.
For further exploration and analysis, we make all data and a data visualization tool available at: www.crosscare.net. | Cross-Care: Assessing the Healthcare Implications of Pre-training Data on Language Model Bias | [
"Shan Chen",
"Jack Gallifant",
"Mingye Gao",
"Pedro José Ferreira Moreira",
"Nikolaj Munch",
"Ajay Muthukkumar",
"Arvind Rajan",
"Jaya Kolluri",
"Amelia Fiske",
"Janna Hastings",
"Hugo Aerts",
"Brian W. Anthony",
"Leo Anthony Celi",
"William La Cava",
"Danielle Bitterman"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2405.05506 | [
"https://github.com/shan23chen/cross-care"
] | https://huggingface.co/papers/2405.05506 | 2 | 1 | 0 | 15 | [] | [] | [] | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=Awu8YlEofZ | @inproceedings{
lee2024gsblur,
title={{GS}-Blur: A 3D Scene-Based Dataset for Realistic Image Deblurring},
author={Dongwoo Lee and JoonKyu Park and Kyoung Mu Lee},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=Awu8YlEofZ}
} | To train a deblurring network, an appropriate dataset with paired blurry and sharp images is essential.
Existing datasets collect blurry images either synthetically by aggregating consecutive sharp frames or using sophisticated camera systems to capture real blur.
However, these methods offer limited diversity in blur types (blur trajectories) or require extensive human effort to reconstruct large-scale datasets, failing to fully reflect real-world blur scenarios.
To address this, we propose GS-Blur, a dataset of synthesized realistic blurry images created using a novel approach.
To this end, we first reconstruct 3D scenes from multi-view images using 3D Gaussian Splatting (3DGS), then render blurry images by moving the camera view along randomly generated motion trajectories.
By adopting various camera trajectories when constructing GS-Blur, our dataset contains realistic and diverse types of blur, offering a large-scale resource that generalizes well to real-world blur.
Using GS-Blur with various deblurring methods, we demonstrate its ability to generalize effectively compared to previous synthetic or real blur datasets, showing significant improvements in deblurring performance.
We will publicly release our dataset. | GS-Blur: A 3D Scene-Based Dataset for Realistic Image Deblurring | [
"Dongwoo Lee",
"JoonKyu Park",
"Kyoung Mu Lee"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2410.23658 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
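The GS-Blur abstract notes that synthetic blur is commonly produced by aggregating consecutive sharp frames. The sketch below shows that basic idea (temporal averaging of rendered frames) in NumPy; the frames here are random placeholders, and GS-Blur's actual pipeline (3DGS reconstruction and rendering along sampled trajectories) is not reproduced.

```python
# Basic idea behind synthetic motion blur: average sharp frames sampled
# along a camera trajectory. Frames are placeholders standing in for renders.
import numpy as np

def average_blur(frames: np.ndarray) -> np.ndarray:
    """frames: (T, H, W, 3) float array in [0, 1]; returns one blurry image."""
    return frames.mean(axis=0)

rng = np.random.default_rng(0)
frames = rng.random((16, 64, 64, 3))  # 16 placeholder "renders" along a trajectory
blurry = average_blur(frames)
print(blurry.shape)  # (64, 64, 3)
```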
null | https://openreview.net/forum?id=AdpSHMOujG | @inproceedings{
eppel2024infusing,
title={Infusing Synthetic Data with Real-World Patterns for Zero-Shot Material State Segmentation},
author={Sagi Eppel and Jolina Yining Li and Manuel S. Drehwald and Alan Aspuru-Guzik},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=AdpSHMOujG}
} | Visual recognition of materials and their states is essential for understanding the physical world, from identifying wet regions on surfaces or stains on fabrics to detecting infected areas or minerals in rocks. Collecting data that captures this vast variability is complex due to the scattered and gradual nature of material states. Manually annotating real-world images is constrained by cost and precision, while synthetic data, although accurate and inexpensive, lacks real-world diversity. This work aims to bridge this gap by infusing patterns automatically extracted from real-world images into synthetic data. Hence, patterns collected from natural images are used to generate and map materials into synthetic scenes. This unsupervised approach captures the complexity of the real world while maintaining the precision and scalability of synthetic data. We also present the first comprehensive benchmark for zero-shot material state segmentation, utilizing real-world images across a diverse range of domains, including food, soils, construction, plants, liquids, and more, each appearing in various states such as wet, dry, infected, cooked, burned, and many others. The annotation includes partial similarity between regions with similar but not identical materials and hard segmentation of only identical material states. This benchmark eluded top foundation models, exposing the limitations of existing data collection methods. Meanwhile, nets trained on the infused data performed significantly better on this and related tasks. The dataset, code, and trained model are publicly available. We also share 300,000 extracted textures and SVBRDF/PBR materials to facilitate future dataset generation. | Infusing Synthetic Data with Real-World Patterns for Zero-Shot Material State Segmentation | [
"Sagi Eppel",
"Jolina Yining Li",
"Manuel S. Drehwald",
"Alan Aspuru-Guzik"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=ADLaALtdoG | @inproceedings{
tian2024scicode,
title={SciCode: A Research Coding Benchmark Curated by Scientists},
author={Minyang Tian and Luyu Gao and Dylan Zhang and Xinan Chen and Cunwei Fan and Xuefei Guo and Roland Haas and Pan Ji and Kittithat Krongchon and Yao Li and Shengyan Liu and Di Luo and Yutao Ma and HAO TONG and Kha Trinh and Chenyu Tian and Zihan Wang and Bohao Wu and Shengzhu Yin and Minhui Zhu and Kilian Lieret and Yanxin Lu and Genglin Liu and Yufeng Du and Tianhua Tao and Ofir Press and Jamie Callan and Eliu A Huerta and Hao Peng},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=ADLaALtdoG}
} | Since language models (LMs) now outperform average humans on many challenging tasks, it is becoming increasingly difficult to develop challenging, high-quality, and realistic evaluations. We address this by examining LM capabilities to generate code for solving real scientific research problems. Incorporating input from scientists and AI researchers in 16 diverse natural science sub-fields, including mathematics, physics, chemistry, biology, and materials science, we create a scientist-curated coding benchmark, SciCode. The problems naturally factorize into multiple subproblems, each involving knowledge recall, reasoning, and code synthesis. In total, SciCode contains 338 subproblems decomposed from 80 challenging main problems, and it offers optional descriptions specifying useful scientific background information and scientist-annotated gold-standard solutions and test cases for evaluation. OpenAI o1-preview, the best-performing model among those tested, can solve only 7.7% of the problems in the most realistic setting. We believe that SciCode both demonstrates contemporary LMs' progress towards realizing helpful scientific assistants and sheds light on the building and evaluation of scientific AI in the future. | SciCode: A Research Coding Benchmark Curated by Scientists | [
"Minyang Tian",
"Luyu Gao",
"Dylan Zhang",
"Xinan Chen",
"Cunwei Fan",
"Xuefei Guo",
"Roland Haas",
"Pan Ji",
"Kittithat Krongchon",
"Yao Li",
"Shengyan Liu",
"Di Luo",
"Yutao Ma",
"HAO TONG",
"Kha Trinh",
"Chenyu Tian",
"Zihan Wang",
"Bohao Wu",
"Shengzhu Yin",
"Minhui Zhu",
"Kilian Lieret",
"Yanxin Lu",
"Genglin Liu",
"Yufeng Du",
"Tianhua Tao",
"Ofir Press",
"Jamie Callan",
"Eliu A Huerta",
"Hao Peng"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2407.13168 | [
""
] | https://huggingface.co/papers/2407.13168 | 15 | 13 | 2 | 30 | [] | [] | [] | [] | [] | [] | 1 |