| Column | Dtype | Lengths / values |
| --- | --- | --- |
| bibtex_url | null | |
| proceedings | stringlengths | 42 to 42 |
| bibtext | stringlengths | 215 to 445 |
| abstract | stringlengths | 820 to 2.37k |
| title | stringlengths | 24 to 147 |
| authors | sequencelengths | 1 to 13 |
| id | stringclasses | 1 value |
| type | stringclasses | 2 values |
| arxiv_id | stringlengths | 0 to 10 |
| GitHub | sequencelengths | 1 to 1 |
| paper_page | stringclasses | 33 values |
| n_linked_authors | int64 | -1 to 4 |
| upvotes | int64 | -1 to 21 |
| num_comments | int64 | -1 to 4 |
| n_authors | int64 | -1 to 11 |
| Models | sequencelengths | 0 to 1 |
| Datasets | sequencelengths | 0 to 1 |
| Spaces | sequencelengths | 0 to 4 |
| old_Models | sequencelengths | 0 to 1 |
| old_Datasets | sequencelengths | 0 to 1 |
| old_Spaces | sequencelengths | 0 to 4 |
| paper_page_exists_pre_conf | int64 | 0 to 1 |
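The rows below follow this schema, one field per line in column order. As a minimal sketch of how such a dump can be consumed, the snippet below loads the dataset with the Hugging Face `datasets` library and inspects a few fields; the repository ID is a hypothetical placeholder, since the actual dataset path is not given here.

```python
# Minimal sketch, not an official loading script: the repository ID below is a
# hypothetical placeholder; substitute the real dataset path on the Hub.
from datasets import load_dataset

ds = load_dataset("your-org/acmmm-2024-accepted-papers", split="train")

# Columns follow the schema above (bibtex_url, proceedings, bibtext, abstract,
# title, authors, id, type, arxiv_id, GitHub, paper_page, ...,
# paper_page_exists_pre_conf).
print(ds.column_names)

# Example: titles of oral papers that also report an arXiv ID.
orals = ds.filter(lambda row: row["type"] == "oral" and row["arxiv_id"])
for row in orals:
    print(row["title"], "->", row["arxiv_id"])
```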
null
https://openreview.net/forum?id=peyB8AbCdY
@inproceedings{ hu2024reliable, title={Reliable Attribute-missing Multi-view Clustering with Instance-level and feature-level Cooperative Imputation}, author={Dayu Hu and Suyuan Liu and Jun Wang and Junpu Zhang and Siwei Wang and Xingchen Hu and Xinzhong Zhu and Chang Tang and Xinwang Liu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=peyB8AbCdY} }
Multi-view clustering (MVC) constitutes a distinct approach to data mining within the field of machine learning. Due to limitations in the data collection process, missing attributes are frequently encountered. However, existing MVC methods primarily focus on missing instances, showing limited attention to missing attributes. A small number of studies employ the reconstruction of missing instances to address missing attributes, potentially overlooking the synergistic effects between the instance and feature spaces, which could lead to distorted imputation outcomes. Furthermore, current methods uniformly treat all missing attributes as zero values, thus failing to differentiate between real and technical zeroes, potentially resulting in data over-imputation. To mitigate these challenges, we introduce a novel Reliable Attribute-Missing Multi-View Clustering method (RAM-MVC). Specifically, feature reconstruction is utilized to address missing attributes, while similarity graphs are simultaneously constructed within the instance and feature spaces. By leveraging structural information from both spaces, RAM-MVC learns a high-quality feature reconstruction matrix during the joint optimization process. Additionally, we introduce a reliable imputation guidance module that distinguishes between real and technical attribute-missing events, enabling discriminative imputation. The proposed RAM-MVC method outperforms nine baseline methods, as evidenced by real-world experiments using single-cell multi-view data.
Reliable Attribute-missing Multi-view Clustering with Instance-level and feature-level Cooperative Imputation
[ "Dayu Hu", "Suyuan Liu", "Jun Wang", "Junpu Zhang", "Siwei Wang", "Xingchen Hu", "Xinzhong Zhu", "Chang Tang", "Xinwang Liu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
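For instance, the `bibtext` field of each record (such as the entry above) can be concatenated into a single BibTeX file. A minimal sketch, assuming `ds` has been loaded as in the previous snippet; the output filename is arbitrary:

```python
# Minimal sketch: export every record's `bibtext` field to one .bib file.
# Assumes `ds` was loaded as in the previous example.
with open("acmmm2024.bib", "w", encoding="utf-8") as f:
    for row in ds:
        entry = (row["bibtext"] or "").strip()
        if entry:
            f.write(entry + "\n\n")
```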
null
https://openreview.net/forum?id=pXY8tluwLV
@inproceedings{ duc2024multiscale, title={Multi-scale Twin-attention for 3D Instance Segmentation}, author={Tran Dang Trung Duc and Byeongkeun Kang and Yeejin Lee}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=pXY8tluwLV} }
Recently, transformer-based techniques incorporating superpoints have become prevalent in 3D instance segmentation. However, they often encounter an over-segmentation problem, especially noticeable with large objects. Additionally, unreliable mask predictions stemming from superpoint mask prediction further compound this issue. To address these challenges, we propose a novel framework called MSTA3D. It leverages multi-scale feature representation and introduces twin-attention mechanisms to effectively capture them. Furthermore, MSTA3D integrates a box query with a box regularizer, offering a complementary spatial constraint alongside semantic queries. Experimental evaluations on ScanNetV2, ScanNet200, and S3DIS datasets demonstrate that our approach surpasses state-of-the-art 3D instance segmentation methods.
Multi-scale Twin-attention for 3D Instance Segmentation
[ "Tran Dang Trung Duc", "Byeongkeun Kang", "Yeejin Lee" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=pUstcaVrRL
@inproceedings{ chen2024finecliper, title={Fine{CLIPER}: Multi-modal Fine-grained {CLIP} for Dynamic Facial Expression Recognition with Adapt{ER}s}, author={Haodong Chen and Haojian Huang and Junhao Dong and Mingzhe Zheng and Dian Shao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=pUstcaVrRL} }
Dynamic Facial Expression Recognition (DFER) is crucial for understanding human behavior. However, current methods exhibit limited performance mainly due to the scarcity of high-quality data, the insufficient utilization of facial dynamics, and the ambiguity of expression semantics, etc. To this end, we propose a novel framework, named Multi-modal Fine-grained CLIP for Dynamic Facial Expression Recognition with AdaptERs (FineCLIPER), incorporating the following novel designs: 1) To better distinguish between similar facial expressions, we extend the class labels to textual descriptions from both positive and negative aspects, and obtain supervision by calculating the cross-modal similarity based on the CLIP model; 2) Our FineCLIPER adopts a hierarchical manner to effectively mine useful cues from DFE videos. Specifically, besides directly embedding video frames as input (low semantic level), we propose to extract the face segmentation masks and landmarks based on each frame (middle semantic level) and utilize the Multi-modal Large Language Model (MLLM) to further generate detailed descriptions of facial changes across frames with designed prompts (high semantic level). Additionally, we also adopt Parameter-Efficient Fine-Tuning (PEFT) to enable efficient adaptation of large pre-trained models (i.e., CLIP) for this task. Our FineCLIPER achieves SOTA performance on the DFEW, FERV39k, and MAFW datasets in both supervised and zero-shot settings with few tunable parameters. Analysis and ablation studies further validate its effectiveness.
FineCLIPER: Multi-modal Fine-grained CLIP for Dynamic Facial Expression Recognition with AdaptERs
[ "Haodong Chen", "Haojian Huang", "Junhao Dong", "Mingzhe Zheng", "Dian Shao" ]
Conference
poster
2407.02157
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=pQPlJRBLVK
@inproceedings{ wu2024edgeassisted, title={Edge-assisted Real-time Dynamic 3D Point Cloud Rendering for Multi-party Mobile Virtual Reality}, author={Ximing Wu and Kongyange Zhao and Teng Liang and Xu Chen}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=pQPlJRBLVK} }
Multi-party Mobile Virtual Reality (MMVR) enables multiple mobile users to share virtual scenes for immersive multimedia experiences in scenarios such as gaming, social interaction, and industrial mission collaboration. Dynamic 3D Point Cloud (DPCL) is an emerging representation form of MMVR that can be consumed as a free-viewpoint video with 6 degrees of freedom. Given that it is challenging to render DPCL at a satisfying frame rate with limited on-device resources, offloading rendering tasks to edge servers is recognized as a practical solution. However, repeated loading of DPCL scenes with a substantial amount of metadata introduces a significant redundancy overhead that cannot be overlooked when enabling multiple edge servers to support the rendering requirements of user groups. In this paper, we design PoClVR, an edge-assisted DPCL rendering system for MMVR applications, which breaks down the rendering process of the complete dynamic scene into multiple rendering tasks of individual dynamic objects. PoClVR significantly reduces the repetitive loading overhead of DPCL scenes on edge servers and periodically adjusts the rendering task allocation for edge servers while the application is running to accommodate rendering requirements. We deploy PoClVR based on a real-world implementation, and the experimental evaluation results show that PoClVR can reduce GPU utilization by up to 15.1\% and increase the rendering frame rate by up to 34.6\% compared to other baselines while ensuring that the image quality viewed by the user is virtually unchanged.
Edge-assisted Real-time Dynamic 3D Point Cloud Rendering for Multi-party Mobile Virtual Reality
[ "Ximing Wu", "Kongyange Zhao", "Teng Liang", "Xu Chen" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=pJHu4hDlLX
@inproceedings{ li2024simcen, title={Sim{CEN}: Simple Contrast-enhanced Network for {CTR} Prediction}, author={Honghao Li and Lei Sang and Yi Zhang and Yiwen Zhang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=pJHu4hDlLX} }
Click-through rate (CTR) prediction is an essential component of industrial multimedia recommendation, and the key to enhancing the accuracy of CTR prediction lies in the effective modeling of feature interactions using rich user profiles, item attributes, and contextual information. Most current deep CTR models resort to parallel or stacked structures to break through the performance bottleneck of the Multi-Layer Perceptron (MLP). However, we identify two limitations in these models: (1) parallel or stacked structures often treat explicit and implicit components as isolated entities, leading to a loss of mutual information; (2) traditional CTR models, whether in terms of supervision signals or interaction methods, lack the ability to filter out noise information, thereby limiting the effectiveness of the models. In response to this gap, this paper introduces a novel model, the Simple Contrast-enhanced Network (SimCEN), which integrates an alternate structure and contrastive learning into a single simple MLP, discarding the design of multiple MLPs responsible for different semantic spaces. SimCEN employs a contrastive product to build second-order feature interactions that share the same semantics but different representation spaces. Additionally, it employs an external-gated mechanism between linear layers to facilitate explicit learning of feature interactions and to filter out noise. At the final representation layer of the MLP, a contrastive loss is incorporated to help the MLP obtain self-supervised signals for higher-quality representations. Experiments conducted on six real-world datasets demonstrate the effectiveness and compatibility of this simple framework, which can serve as a substitute for MLP to enhance various representative baselines. Our source code and detailed running logs will be made available at https://anonymous.4open.science/r/SimCEN-8E21.
SimCEN: Simple Contrast-enhanced Network for CTR Prediction
[ "Honghao Li", "Lei Sang", "Yi Zhang", "Yiwen Zhang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=pIHHAUa500
@inproceedings{ hu2024maskable, title={Maskable Retentive Network for Video Moment Retrieval}, author={Jingjing Hu and Dan Guo and Kun Li and Zhan Si and Xun Yang and Meng Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=pIHHAUa500} }
Video Moment Retrieval (MR) tasks involve predicting the moment described by a given natural language or spoken language query in an untrimmed video. In this paper, we propose a novel Maskable Retentive Network (MRNet) to address two key challenges in MR tasks: cross-modal guidance and video sequence modeling. Our approach introduces a new retention mechanism into the multimodal Transformer architecture, incorporating modality-specific attention modes. Specifically, we employ Unlimited Attention for language-related attention regions to maximize cross-modal mutual guidance, and we introduce Maskable Retention for the video-only attention region to enhance video sequence modeling by recognizing two crucial characteristics of video sequences: 1) bidirectional, decaying, and non-linear temporal associations between video clips, and 2) sparse associations of key information semantically related to the query. We propose a bidirectional decay retention mask to explicitly model temporal-distant context dependencies of video sequences, along with a learnable sparse retention mask to adaptively capture strong associations relevant to the target event. Extensive experiments conducted on five popular MR benchmarks, ActivityNet Captions, TACoS, Charades-STA, ActivityNet Speech, and QVHighlights, demonstrate the significant improvements achieved by our method over existing approaches. Code is available at https://github.com/xian-sh/MRNet.
Maskable Retentive Network for Video Moment Retrieval
[ "Jingjing Hu", "Dan Guo", "Kun Li", "Zhan Si", "Xun Yang", "Meng Wang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=pCxZTmGr4O
@inproceedings{ hou2024linearlyevolved, title={Linearly-evolved Transformer for Pan-sharpening}, author={Junming Hou and Zihan Cao and Naishan Zheng and Xuan Li and Xiaoyu Chen and Xinyang Liu and Xiaofeng Cong and Danfeng Hong and Man Zhou}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=pCxZTmGr4O} }
The vision transformer family has dominated the satellite pan-sharpening field, driven by the global-wise spatial information modeling mechanism of the core self-attention ingredient. The standard modeling rule within these promising pan-sharpening methods is to roughly stack the transformer variants in a cascaded manner. Despite the remarkable advancement, their success may come at the huge cost of model parameters and FLOPs, thus preventing their application on low-resource satellites. To address this trade-off between favorable performance and expensive computation, we tailor an efficient linearly-evolved transformer variant and employ it to construct a lightweight pan-sharpening framework. In detail, we delve into the popular cascaded transformer modeling of cutting-edge methods and develop an alternative 1-order linearly-evolved transformer variant with a 1-dimensional linear convolution chain to achieve the same function. In this way, our proposed method is capable of benefiting from the cascaded modeling rule while achieving favorable performance in an efficient manner. Extensive experiments over multiple satellite datasets suggest that our proposed method achieves competitive performance against other state-of-the-art methods with fewer computational resources. Furthermore, the consistently favorable performance has been verified over the hyper-spectral image fusion task. Our main focus is to provide an alternative global modeling framework with an efficient structure. The code is publicly available at \url{https://github.com/coder-JMHou/LFormer}.
Linearly-evolved Transformer for Pan-sharpening
[ "Junming Hou", "Zihan Cao", "Naishan Zheng", "Xuan Li", "Xiaoyu Chen", "Xinyang Liu", "Xiaofeng Cong", "Danfeng Hong", "Man Zhou" ]
Conference
poster
2404.12804
[ "https://github.com/294coder/efficient-mif" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=p4MdxsQVXu
@inproceedings{ yunannan2024adaptive, title={Adaptive Vision Transformer for Event-Based Human Pose Estimation}, author={yunannan and Tao Ma and Jiqing Zhang and Yuji Zhang and Qirui Bao and Xiaopeng Wei and Xin Yang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=p4MdxsQVXu} }
Human pose estimation has made progress based on deep learning. However, it still faces challenges when encountering exposure, low-light, and high-speed scenarios, such as motion blur and missing human contours in low-light scenes. Moreover, due to the extensive operations required for large-scale convolutional neural network (CNN) inference, marker-free human pose estimation based on standard frame-based cameras is still slow and power-consuming for real-time feedback interaction. Event-based cameras quickly output asynchronous sparse moving-edge information, offering low latency and low power consumption for real-time interaction with human pose estimators. To facilitate further study, this paper proposes a high-frame-rate labeled event-based human pose estimation dataset named Event Multi Movement HPE (EventMM HPE). It consists of records from a synchronized event camera, a high-frame-rate camera, and a Vicon motion capture system, with each sequence recording multiple action combinations and high-frame-rate (240 Hz) annotations. This paper also proposes an event-based human pose estimation model, which utilizes adaptive patches to efficiently achieve good performance on the sparse and reduced input data from the DVS. The source code, dataset, and pre-trained models will be released upon acceptance.
Adaptive Vision Transformer for Event-Based Human Pose Estimation
[ "yunannan", "Tao Ma", "Jiqing Zhang", "Yuji Zhang", "Qirui Bao", "Xiaopeng Wei", "Xin Yang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=owccAmgQKL
@inproceedings{ yang2024towards, title={Towards Open-vocabulary {HOI} Detection with Calibrated Vision-language Models and Locality-aware Queries}, author={Zhenhao Yang and Xin Liu and Deqiang Ouyang and Guiduo Duan and Dongyang Zhang and Tao He and Yuan-Fang Li}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=owccAmgQKL} }
Open-vocabulary human-object interaction (Ov-HOI) detection aims to identify both base and novel categories of human-object interactions while only base categories are available during training. Existing Ov-HOI methods commonly leverage knowledge distilled from CLIP to extend their ability to detect previously unseen interaction categories. However, our empirical observations indicate that the inherent noise present in CLIP has a detrimental effect on HOI prediction. Moreover, the absence of novel human-object position distributions often leads to overfitting on the base categories within their learned queries. To address these issues, we propose a two-step framework named CaM-LQ, which Calibrates vision-language Models (e.g., CLIP) for open-vocabulary HOI detection with Locality-aware Queries. By injecting fine-grained HOI supervision from the calibrated CLIP into the HOI decoder, our model can achieve the goal of predicting novel interactions. Extensive experimental results demonstrate that our approach performs well in open-vocabulary human-object interaction detection, surpassing state-of-the-art methods across multiple metrics on mainstream datasets and showing superior open-vocabulary HOI detection performance, e.g., with a 4.54-point improvement on the HICO-DET dataset over the SoTA CLIP4HOI on the UV task with the same ResNet-50 backbone.
Towards Open-vocabulary HOI Detection with Calibrated Vision-language Models and Locality-aware Queries
[ "Zhenhao Yang", "Xin Liu", "Deqiang Ouyang", "Guiduo Duan", "Dongyang Zhang", "Tao He", "Yuan-Fang Li" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=opVCEvRsTM
@inproceedings{ jia2024adaptive, title={Adaptive Hierarchical Aggregation for Federated Object Detection}, author={Ruofan Jia and Weiying Xie and Jie Lei and Yunsong Li}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=opVCEvRsTM} }
In practical object detection scenarios, distributed data and stringent privacy protections significantly limit the feasibility of traditional centralized training methods. Federated learning (FL) emerges as a promising solution to this dilemma. Nonetheless, the issue of data heterogeneity introduces distinct challenges to federated object detection, evident in diminished object perception, classification, and localization abilities. In response, we introduce a task-driven federated learning methodology, dubbed Adaptive Hierarchical Aggregation (FedAHA), tailored to overcome these obstacles. Our algorithm unfolds in two strategic phases from shallow to deep layers: (1) Structure-aware Aggregation (SAA) aligns feature extractors during the aggregation phase, thus bolstering the global model's object perception capabilities; (2) Convex Semantic Calibration (CSC) leverages convex function theory to average semantic features instead of model parameters, enhancing the global model's classification and localization precision. We demonstrate both experimentally and theoretically the effectiveness of the two proposed modules. Our method consistently outperforms state-of-the-art methods across multiple valuable application scenarios. Moreover, we build a real FL system using Raspberry Pis to demonstrate that our approach achieves a good trade-off between performance and efficiency.
Adaptive Hierarchical Aggregation for Federated Object Detection
[ "Ruofan Jia", "Weiying Xie", "Jie Lei", "Yunsong Li" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=oo7PyBieWB
@inproceedings{ zeng2024mambamos, title={Mamba{MOS}: Li{DAR}-based 3D Moving Object Segmentation with Motion-aware State Space Model}, author={Kang Zeng and Hao Shi and Jiacheng Lin and Siyu Li and Jintao Cheng and Kaiwei Wang and Zhiyong Li and Kailun Yang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=oo7PyBieWB} }
LiDAR-based Moving Object Segmentation (MOS) aims to locate and segment moving objects in point clouds of the current scan using motion information from previous scans. Despite the promising results achieved by previous MOS methods, several key issues, such as the weak coupling of temporal and spatial information, still need further study. In this paper, we propose a novel LiDAR-based 3D Moving Object Segmentation with Motion-aware State Space Model, termed MambaMOS. Firstly, we develop a novel embedding module, the Time Clue Bootstrapping Embedding (TCBE), to enhance the coupling of temporal and spatial information in point clouds and alleviate the issue of overlooked temporal clues. Secondly, we introduce the Motion-aware State Space Model (MSSM) to endow the model with the capacity to understand the temporal correlations of the same object across different time steps. Specifically, MSSM emphasizes the motion states of the same object at different time steps through two distinct temporal modeling and correlation steps. We utilize an improved state space model to represent these motion differences, thereby effectively modeling the motion states. Finally, extensive experiments on the SemanticKITTI-MOS and KITTI-Road benchmarks demonstrate that the proposed MambaMOS achieves state-of-the-art performance. The source code of this work will be made publicly available.
MambaMOS: LiDAR-based 3D Moving Object Segmentation with Motion-aware State Space Model
[ "Kang Zeng", "Hao Shi", "Jiacheng Lin", "Siyu Li", "Jintao Cheng", "Kaiwei Wang", "Zhiyong Li", "Kailun Yang" ]
Conference
poster
2404.12794
[ "https://github.com/terminal-k/mambamos" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=olYqZN50xO
@inproceedings{ zhang2024mitigating, title={Mitigating Social Hazards: Early Detection of Fake News via Diffusion-Guided Propagation Path Generation}, author={Litian Zhang and Xiaoming Zhang and Chaozhuo Li and Ziyi Zhou and Jiacheng Liu and Feiran Huang and Xi Zhang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=olYqZN50xO} }
The detection of fake news has emerged as a pressing issue in the era of online social media. To detect meticulously fabricated fake news, propagation paths are introduced to provide nuanced social context to complement the pure semantics within news content. However, existing propagation-enhanced models face a dilemma between detection efficacy and social hazard. In this paper, we investigate the novel problem of early fake news detection via propagation path generation, capable of enjoying the merits of rich social context within propagation paths while alleviating potential social hazards. In contrast to previous discriminative detection models, we further propose a novel generative model, DGA-Fake, by simulating realistic propagation paths based on news content before actual spreading. A guided diffusion module is integrated into DGA-Fake to generate simulated user interaction sequences, guided by historical interactions and news content. Evaluation across three datasets demonstrates the superiority of our proposal. Our code is publicly available at https://anonymous.4open.science/r/DGA-Fake-1D5F/.
Mitigating Social Hazards: Early Detection of Fake News via Diffusion-Guided Propagation Path Generation
[ "Litian Zhang", "Xiaoming Zhang", "Chaozhuo Li", "Ziyi Zhou", "Jiacheng Liu", "Feiran Huang", "Xi Zhang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=oiEyxnMrF0
@inproceedings{ li2024selm, title={SelM: Selective Mechanism based Audio-Visual Segmentation}, author={Jiaxu Li and Songsong Yu and Yifan Wang and Lijun Wang and Huchuan Lu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=oiEyxnMrF0} }
Audio-Visual Segmentation (AVS) aims to segment sound-producing objects in videos according to associated audio cues, where both modalities are affected by noise to different extents, such as the blending of background noises in audio or the presence of distracting objects in video. Most existing methods focus on learning interactions between modalities at high semantic levels but are incapable of filtering low-level noise or achieving fine-grained representational interactions during the early feature extraction phase. Consequently, they struggle with illusion issues, where nonexistent audio cues are erroneously linked to visual objects. In this paper, we present SelM, a novel architecture that leverages selective mechanisms to counteract these illusions. SelM employs a State Space model for noise reduction and robust feature selection. By imposing additional bidirectional constraints on audio and visual embeddings, it is able to precisely identify crucial features corresponding to sound-emitting targets. To fill the existing gap in early fusion within AVS, SelM introduces a dual alignment mechanism specifically engineered to facilitate intricate spatio-temporal interactions between audio and visual streams, achieving more fine-grained representations. Moreover, we develop a cross-level decoder for layered reasoning, significantly enhancing segmentation precision by exploring the complex relationships between audio and visual information. SelM achieves state-of-the-art performance in AVS tasks, especially in the challenging Audio-Visual Semantic Segmentation setting. Source code will be made publicly available.
SelM: Selective Mechanism based Audio-Visual Segmentation
[ "Jiaxu Li", "Songsong Yu", "Yifan Wang", "Lijun Wang", "Huchuan Lu" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=ogfSPe0ff1
@inproceedings{ du2024ldbfr, title={{LD}-{BFR}: Vector-Quantization-Based Face Restoration Model with Latent Diffusion Enhancement}, author={Yuzhen Du and Teng Hu and Ran Yi and Lizhuang Ma}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=ogfSPe0ff1} }
Blind Face Restoration (BFR) aims to restore high-quality face images from low-quality images with unknown degradation. Previous GAN-based or ViT-based methods have shown promising results but suffer from the loss of identity details once degradation is severe, while recent diffusion-based methods work at the image level and take a lot of time to infer. To restore images under any type of degradation with high quality and less inference time, we propose LD-BFR, a novel BFR framework that integrates the strengths of both vector quantization and latent diffusion. First, we employ a Dual Cross-Attention vector quantization to restore the degraded image in a global manner. Then we utilize the restored high-quality quantized feature as the guidance in our latent diffusion model to generate high-quality restored images with rich details. With the help of the proposed high-quality feature injection module, our LD-BFR effectively injects the high-quality feature as a condition to guide the generation of our latent diffusion model. Extensive experiments demonstrate the superior performance of our model over state-of-the-art BFR methods.
LD-BFR: Vector-Quantization-Based Face Restoration Model with Latent Diffusion Enhancement
[ "Yuzhen Du", "Teng Hu", "Ran Yi", "Lizhuang Ma" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=oenYLjTVdK
@inproceedings{ tang2024arts, title={{ARTS}: Semi-Analytical Regressor using Disentangled Skeletal Representations for Human Mesh Recovery from Videos}, author={Tao Tang and Hong Liu and Yingxuan You and Ti Wang and Wenhao Li}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=oenYLjTVdK} }
Although existing video-based 3D human mesh recovery methods have made significant progress, simultaneously estimating human pose and shape from low-resolution image features limits their performance. These image features lack sufficient spatial information about the human body and contain various noises (e.g., background, lighting, and clothing), which often results in inaccurate pose and inconsistent motion. Inspired by the rapid advance in human pose estimation, we discover that compared to image features, skeletons inherently contain accurate human pose and motion. Therefore, we propose a novel semi-Analytical Regressor using disenTangled Skeletal representations for human mesh recovery from videos, called ARTS, which effectively leverages disentangled information in skeletons. Specifically, a skeleton estimation and disentanglement module is proposed to estimate the 3D skeletons from a video and decouple them into disentangled skeletal representations (i.e., joint position, bone length, and human motion). Then, to fully utilize these representations, we introduce a semi-analytical regressor to estimate the parameters of the human mesh model. The regressor consists of three modules: Temporal Inverse Kinematics (TIK), Bone-guided Shape Fitting (BSF), and Motion-Centric Refinement (MCR). TIK utilizes joint position to estimate initial pose parameters and BSF leverages bone length to regress bone-aligned shape parameters. Finally, MCR combines human motion representation with image features to refine the initial parameters of the human model and enhance temporal consistency. Extensive experiments demonstrate that our ARTS surpasses existing state-of-the-art video-based methods in both per-frame accuracy and temporal consistency on popular benchmarks: 3DPW, MPI-INF-3DHP, and Human3.6M. Code is available at https://github.com/TangTao-PKU/ARTS.
ARTS: Semi-Analytical Regressor using Disentangled Skeletal Representations for Human Mesh Recovery from Videos
[ "Tao Tang", "Hong Liu", "Yingxuan You", "Ti Wang", "Wenhao Li" ]
Conference
poster
2410.15582
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=obknKk80Am
@inproceedings{ xie2024roiguided, title={{ROI}-Guided Point Cloud Geometry Compression Towards Human and Machine Vision}, author={Liang Xie and Wei Gao and Huiming Zheng and Ge Li}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=obknKk80Am} }
Point cloud data is pivotal in applications like autonomous driving, virtual reality, and robotics. However, its substantial volume poses significant challenges in storage and transmission. To obtain a high compression ratio, crucial semantic details are usually severely damaged, making it difficult to guarantee the accuracy of downstream tasks. To tackle this problem, we are the first to introduce a novel Region of Interest (ROI)-guided Point Cloud Geometry Compression (RPCGC) method for human and machine vision. Our framework employs a dual-branch parallel structure, where the base layer encodes and decodes a simplified version of the point cloud, and the enhancement layer refines this by focusing on geometry details. Furthermore, the residual information of the enhancement layer undergoes refinement through an ROI prediction network. This network generates mask information, which is then incorporated into the residuals, serving as a strong supervision signal. Additionally, we intricately apply these mask details in the Rate-Distortion (RD) optimization process, with each point weighted in the distortion calculation. Our loss function includes an RD loss and a detection loss to better guide point cloud encoding for the machine. Experimental results demonstrate that RPCGC achieves exceptional compression performance and better detection accuracy (a 10\% gain) than some learning-based compression methods at high bitrates on the ScanNet and SUN RGB-D datasets.
ROI-Guided Point Cloud Geometry Compression Towards Human and Machine Vision
[ "Liang Xie", "Wei Gao", "Huiming Zheng", "Ge Li" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=obaazx0Hbz
@inproceedings{ gao2024aesmamba, title={AesMamba: Universal Image Aesthetic Assessment with State Space Models}, author={Fei Gao and Yuhao Lin and Jiaqi Shi and Maoying Qiao and Nannan Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=obaazx0Hbz} }
Image Aesthetic Assessment (IAA) aims to objectively predict generic or personalized evaluations of aesthetic or fine-grained multi-attributes based on visual or multimodal inputs. Previously, researchers have designed diverse and specialized methods for specific IAA tasks based on different input-output situations. Is it possible to design a universal IAA framework applicable to the whole IAA task taxonomy? In this paper, we explore this issue and propose a modular IAA framework, dubbed AesMamba. Specifically, we use the Visual State Space Model (VMamba), instead of CNNs or ViTs, to learn comprehensive representations of aesthetic-related attributes, because VMamba can efficiently achieve both global and local effective receptive fields. Afterward, a modal-adaptive module is used to automatically produce the integrated representations, conditioned on the type of input. In the prediction module, we propose a Multitask Balanced Adaptation (MBA) module to boost task-specific features, with emphasis on the tail instances. Finally, we formulate the personalized IAA task as a multimodal learning problem by converting a user's anonymous subject characters to a text prompt. This prompting strategy effectively employs the semantics of flexibly selected characters for inferring individual preferences. AesMamba can be applied to diverse IAA tasks through flexible combination of these modules. Extensive experiments on numerous datasets demonstrate that AesMamba consistently achieves superior or competitive performance on all IAA tasks, in comparison with previous SOTA methods. The code has been released at https://github.com/AiArt-Gao/AesMamba.
AesMamba: Universal Image Aesthetic Assessment with State Space Models
[ "Fei Gao", "Yuhao Lin", "Jiaqi Shi", "Maoying Qiao", "Nannan Wang" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=oTtuEMLgM8
@inproceedings{ lu2024dcafuse, title={{DCAF}use: Dual-Branch Diffusion-{CNN} Complementary Feature Aggregation Network for Multi-Modality Image Fusion}, author={Xudong Lu and Yuqi Jiang and Haiwen Hong and Qi Sun and Cheng Zhuo}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=oTtuEMLgM8} }
Multi-modality image fusion (MMIF) aims to integrate the complementary features of source images into the fused image, including target saliency and texture specifics. Recently, image fusion methods leveraging diffusion models have demonstrated commendable results. Despite their strengths, diffusion models have a reduced capability to perceive local features. Additionally, their inherent working mechanism of introducing noise to the inputs leads to a loss of original information. To overcome this problem, we propose a novel Diffusion-CNN feature Aggregation Fusion (DCAFuse) network that can extract complementary features from the dual branches and aggregate them effectively. Specifically, we utilize the denoising diffusion probabilistic model (DDPM) in the diffusion-based branch to construct global information, and multi-scale convolutional kernels in the CNN-based branch to extract local detailed features. Afterward, we design a novel complementary feature aggregation module (CFAM). By constructing coordinate attention maps for the concatenated features, CFAM captures long-range dependencies in both horizontal and vertical directions, thereby dynamically guiding the aggregation weights of the branches. In addition, to further improve the complementarity of dual-branch features, we introduce a novel loss function based on cosine similarity and a unique denoising timestep selection strategy. Extensive experimental results show that our proposed DCAFuse outperforms other state-of-the-art methods in multiple image fusion tasks, including infrared and visible image fusion (IVF) and medical image fusion (MIF). The source code will be publicly available at https://xxx/xxx/xxx.
DCAFuse: Dual-Branch Diffusion-CNN Complementary Feature Aggregation Network for Multi-Modality Image Fusion
[ "Xudong Lu", "Yuqi Jiang", "Haiwen Hong", "Qi Sun", "Cheng Zhuo" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=oRK5jDcpwr
@inproceedings{ huang2024label, title={Label Decoupling and Reconstruction: A Two-Stage Training Framework for Long-tailed Multi-label Medical Image Recognition}, author={Jie Huang and Zhao-Min Chen and Xiaoqin Zhang and YisuGe and Lusi Ye and Guodao Zhang and Huiling Chen}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=oRK5jDcpwr} }
Deep learning has made significant advancements and breakthroughs in medical image recognition. However, the clinical reality is complex and multifaceted, with patients often suffering from multiple intertwined diseases, not all of which are equally common, leading to medical datasets that are frequently characterized by multi-labels and a long-tailed distribution. In this paper, we propose a method involving label decoupling and reconstruction (LDRNet) to address these two specific challenges. The label decoupling utilizes the fusion of semantic information from both categories and images to capture the class-aware features across different labels. This process not only integrates semantic information from labels and images to improve the model's ability to recognize diseases, but also captures comprehensive features across various labels to facilitate a deeper understanding of disease characteristics within the dataset. Following this, our label reconstruction method uses the class-aware features to reconstruct the label distribution. This step generates a diverse array of virtual features for tail categories, promoting unbiased learning for the classifier and significantly enhancing the model’s generalization ability and robustness. Extensive experiments conducted on three multi-label long-tailed medical image datasets, including the Axial Spondyloarthritis Dataset, NIH Chest X-ray 14 Dataset, and ODIR-5K Dataset, have demonstrated that our approach achieves state-of-the-art performance, showcasing its effectiveness in handling the complexities associated with multi-label and long-tailed distributions in medical image recognition.
Label Decoupling and Reconstruction: A Two-Stage Training Framework for Long-tailed Multi-label Medical Image Recognition
[ "Jie Huang", "Zhao-Min Chen", "Xiaoqin Zhang", "YisuGe", "Lusi Ye", "Guodao Zhang", "Huiling Chen" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=oQahsz6vWe
@inproceedings{ zou2024wavemamba, title={Wave-Mamba: Wavelet State Space Model for Ultra-High-Definition Low-Light Image Enhancement}, author={Wenbin Zou and Hongxia Gao and Weipeng Yang and Tongtong Liu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=oQahsz6vWe} }
Ultra-high-definition (UHD) technology has attracted widespread attention due to its exceptional visual quality, but it also poses new challenges for low-light image enhancement (LLIE) techniques. UHD images inherently possess high computational complexity, leading existing UHD LLIE methods to employ high-magnification downsampling to reduce computational costs, which in turn results in information loss. The wavelet transform not only allows downsampling without loss of information, but also separates the image content from the noise. It enables state space models (SSMs) to avoid being affected by noise when modeling long sequences, thus making full use of the long-sequence modeling capability of SSMs. On this basis, we propose Wave-Mamba, a novel approach based on two pivotal insights derived from the wavelet domain: 1) most of the content information of an image exists in the low-frequency component, with less in the high-frequency component; 2) the high-frequency component exerts a minimal influence on the outcomes of low-light enhancement. Specifically, to efficiently model global content information on UHD images, we propose a low-frequency state space block (LFSSBlock) by improving SSMs to focus on restoring the information of low-frequency sub-bands. Moreover, we propose a high-frequency enhance block (HFEBlock) for high-frequency sub-band information, which uses the enhanced low-frequency information to correct the high-frequency information and effectively restore the correct high-frequency details. Through comprehensive evaluation, our method has demonstrated superior performance, significantly outshining current leading techniques while maintaining a more streamlined architecture. The code is available at https://github.com/AlexZou14/Wave-Mamba.
Wave-Mamba: Wavelet State Space Model for Ultra-High-Definition Low-Light Image Enhancement
[ "Wenbin Zou", "Hongxia Gao", "Weipeng Yang", "Tongtong Liu" ]
Conference
poster
2408.01276
[ "https://github.com/alexzou14/wave-mamba" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=oMNzlPKiTh
@inproceedings{ he2024hgoe, title={{HGOE}: Hybrid External and Internal Graph Outlier Exposure for Graph Out-of-Distribution Detection}, author={Junwei He and Qianqian Xu and Yangbangyan Jiang and Zitai Wang and Yuchen Sun and Qingming Huang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=oMNzlPKiTh} }
With the progressive advancements in deep graph learning, out-of-distribution (OOD) detection for graph data has emerged as a critical challenge. While the efficacy of auxiliary datasets in enhancing OOD detection has been extensively studied for image and text data, such approaches have not yet been explored for graph data. Unlike Euclidean data, graph data exhibits greater diversity but lower robustness to perturbations, complicating the integration of outliers. To tackle these challenges, we propose the introduction of \textbf{H}ybrid External and Internal \textbf{G}raph \textbf{O}utlier \textbf{E}xposure (HGOE) to improve graph OOD detection performance. Our framework involves using realistic external graph data from various domains and synthesizing internal outliers within ID subgroups to address the poor robustness and presence of OOD samples within the ID class. Furthermore, we develop a boundary-aware OE loss that adaptively assigns weights to outliers, maximizing the use of high-quality OOD samples while minimizing the impact of low-quality ones. Our proposed HGOE framework is model-agnostic and designed to enhance the effectiveness of existing graph OOD detection models. Experimental results demonstrate that our HGOE framework can significantly improve the performance of existing OOD detection models across all 8 real datasets.
HGOE: Hybrid External and Internal Graph Outlier Exposure for Graph Out-of-Distribution Detection
[ "Junwei He", "Qianqian Xu", "Yangbangyan Jiang", "Zitai Wang", "Yuchen Sun", "Qingming Huang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=oHF1aMP5p2
@inproceedings{ hu2024comd, title={{COMD}: Training-free Video Motion Transfer With Camera-Object Motion Disentanglement}, author={Teng Hu and Jiangning Zhang and Ran Yi and Yating Wang and Jieyu Weng and Hongrui Huang and Yabiao Wang and Lizhuang Ma}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=oHF1aMP5p2} }
The emergence of diffusion models has greatly propelled the progress in image and video generation. Recently, some efforts have been made in controllable video generation, including text-to-video, image-to-video generation, video editing, and video motion control, among which camera motion control is an important topic. However, existing camera motion control methods rely on training a temporal camera module, and necessitate substantial computation resources due to the large amount of parameters in video generation models. Moreover, existing methods pre-define camera motion types during training, which limits their flexibility in camera control, preventing the realization of some specific camera controls, such as various camera movements in films. Therefore, to reduce training costs and achieve flexible camera control, we propose COMD, a novel training-free video motion transfer model, which disentangles camera motions and object motions in source videos and transfers the extracted camera motions to new videos. We first propose a one-shot camera motion disentanglement method to extract camera motion from a single source video, which separates the moving objects from the background and estimates the camera motion in the moving-object region based on the motion in the background by solving a Poisson equation. Furthermore, we propose a few-shot camera motion disentanglement method to extract the common camera motion from multiple videos with similar camera motions, which employs a window-based clustering technique to extract the common features in the temporal attention maps of multiple videos. Finally, we propose a motion combination method to combine different types of camera motions, enabling more controllable and flexible camera control in our model. Extensive experiments demonstrate that our training-free approach can effectively decouple camera-object motion and apply the decoupled camera motion to a wide range of controllable video generation tasks, achieving flexible and diverse camera motion control.
COMD: Training-free Video Motion Transfer With Camera-Object Motion Disentanglement
[ "Teng Hu", "Jiangning Zhang", "Ran Yi", "Yating Wang", "Jieyu Weng", "Hongrui Huang", "Yabiao Wang", "Lizhuang Ma" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=oFsIK2JefP
@inproceedings{ liang2024simple, title={Simple Yet Effective: Structure Guided Pre-trained Transformer for Multi-modal Knowledge Graph Reasoning}, author={KE LIANG and Lingyuan Meng and Yue Liu and Meng Liu and Wei Wei and Siwei Wang and Suyuan Liu and Wenxuan Tu and sihang zhou and Xinwang Liu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=oFsIK2JefP} }
Multi-modal knowledge graphs (MKGs) represent various information from different modalities in an intuitive way and are utilized in different downstream tasks, such as recommendation. However, most MKGs are still far from complete, which motivates the flourishing of MKG reasoning models. Recently, with the development of general artificial intelligence, pre-trained transformers have drawn increasing attention, especially in multi-modal scenarios. However, the research on multi-modal pre-trained transformers (MPT) for knowledge graph reasoning (KGR) is still at an early stage. As the biggest difference between MKGs and other multi-modal data, the rich structural information underlying the MKG is still not fully utilized in previous MPTs. Most of them only use the graph structure as a retrieval map for matching images and texts connected with the same entity, which hinders their reasoning performance. To this end, the graph Structure Guided Multi-modal Pre-trained Transformer (SGMPT) is proposed for knowledge graph reasoning. Specifically, a graph structure encoder is adopted for structural feature encoding. Then, a structure-guided fusion module with two simple yet effective strategies, i.e., weighted summation and alignment constraint, is designed to inject the structural information into both the textual and visual features. To the best of our knowledge, SGMPT is the first MPT for multi-modal KGR that mines the structural information underlying MKGs. Extensive experiments on FB15k-237-IMG and WN18-IMG demonstrate that our SGMPT outperforms existing state-of-the-art models and prove the effectiveness of the designed strategies.
Simple Yet Effective: Structure Guided Pre-trained Transformer for Multi-modal Knowledge Graph Reasoning
[ "KE LIANG", "Lingyuan Meng", "Yue Liu", "Meng Liu", "Wei Wei", "Siwei Wang", "Suyuan Liu", "Wenxuan Tu", "sihang zhou", "Xinwang Liu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=oEhi4pd0e1
@inproceedings{ ding2024maf, title={2M-{AF}: A Strong Multi-Modality Framework For Human Action Quality Assessment with Self-supervised Representation Learning}, author={Yuning Ding and Sifan Zhang and Shenglan Liu and Jinrong Zhang and Wenyue Chen and Duan Haifei and bingcheng dong and Tao Sun}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=oEhi4pd0e1} }
Human Action Quality Assessment (AQA) is a prominent area of research in human action analysis. Current mainstream methods only consider the RGB modality, which results in limited feature representation and insufficient performance due to the complexity of the AQA task. In this paper, we propose a simple and modular framework called the Two-Modality Assessment Framework (2M-AF), which comprises a skeleton stream, an RGB stream, and a regression module. For the skeleton stream, we develop the Self-supervised Mask Encoder Graph Convolution Network (SME-GCN) to achieve representation learning, and further implement score assessment. Additionally, we propose a Preference Fusion Module (PFM) to fuse features, which can effectively avoid the disadvantages of different modalities. Our experimental results demonstrate the superiority of the proposed 2M-AF over current state-of-the-art methods on three publicly available datasets: AQA-7, UNLV-Diving, and MMFS-63.
2M-AF: A Strong Multi-Modality Framework For Human Action Quality Assessment with Self-supervised Representation Learning
[ "Yuning Ding", "Sifan Zhang", "Shenglan Liu", "Jinrong Zhang", "Wenyue Chen", "Duan Haifei", "bingcheng dong", "Tao Sun" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=oDEqOhKYoO
@inproceedings{ chen2024simplifying, title={Simplifying Cross-modal Interaction via Modality-Shared Features for {RGBT} Tracking}, author={LiQiu Chen and Yuqing Huang and Hengyu li and Zikun Zhou and Zhenyu He}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=oDEqOhKYoO} }
Thermal infrared (TIR) data exhibits higher tolerance to extreme environments, making it a valuable complement to RGB data in tracking tasks. RGB-T tracking aims to leverage information from both RGB and TIR images for stable and robust tracking. However, existing RGB-T tracking methods often face challenges due to significant modality differences and selective emphasis on interactive information, leading to inefficiencies in the cross-modal interaction process. To address these issues, we propose a novel Integrating Interaction into Modality-shared Features with ViT (IIMF) framework, which is a simplified cross-modal interaction network including modality-shared, RGB modality-specific, and TIR modality-specific branches. The modality-shared branch aggregates modality-shared information and implements inter-modal interaction with the Vision Transformer (ViT). Specifically, our approach first extracts modality-shared features from RGB and TIR features using a cross-attention mechanism. Furthermore, we design a Cross-Attention-based Modality-shared Information Aggregation (CAMIA) module to further aggregate modality-shared information with modality-shared tokens.
Simplifying Cross-modal Interaction via Modality-Shared Features for RGBT Tracking
[ "LiQiu Chen", "Yuqing Huang", "Hengyu li", "Zikun Zhou", "Zhenyu He" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=o82T2PzswK
@inproceedings{ yang2024onceforall, title={Once-for-all: Efficient Visual Face Privacy Protection via Person-specific Veils}, author={Zixuan Yang and Yushu Zhang and Tao Wang and Zhongyun Hua and Zhihua Xia and Jian Weng}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=o82T2PzswK} }
As billions of face images stored on cloud platforms contain sensitive information to human vision, the public confronts substantial threats to visual face privacy. In response, the community has proposed some perturbation-based schemes to mitigate visual privacy leakage. However, these schemes need to generate a new protective perturbation for each image, failing to satisfy the real-time requirement of cloud platforms. To address this issue, we present an efficient visual face privacy protection scheme by utilizing person-specific veils, which can be conveniently applied to all images of the same user without regeneration. The protected images exhibit significant visual differences from the originals but remain identifiable to face recognition models. Furthermore, the protected images can be recovered to originals under certain circumstances. In the process of generating the veils, we propose a feature alignment loss to promote consistency between the recognition outputs of protected and original images with approximate construction of feature subspace. Meanwhile, the block variance loss is designed to enhance the concealment of visual identity information. Extensive experimental results demonstrate that our scheme can significantly eliminate the visual appearance of original images and almost has no impact on face recognition models.
Once-for-all: Efficient Visual Face Privacy Protection via Person-specific Veils
[ "Zixuan Yang", "Yushu Zhang", "Tao Wang", "Zhongyun Hua", "Zhihua Xia", "Jian Weng" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=o2axlPlXYY
@inproceedings{ cui2024profd, title={Pro{FD}: Prompt-Guided Feature Disentangling for Occluded Person Re-Identification}, author={Can Cui and Siteng Huang and Wenxuan Song and Pengxiang Ding and Zhang Min and Donglin Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=o2axlPlXYY} }
To address the occlusion issues in person Re-Identification (ReID) tasks, many methods have been proposed to extract part features by introducing external spatial information. However, due to missing part appearance information caused by occlusion and noisy spatial information from the external model, these purely vision-based approaches fail to correctly learn the features of human body parts from limited training data and struggle to accurately locate body parts, ultimately leading to misaligned part features. To tackle these challenges, we propose a Prompt-guided Feature Disentangling method (ProFD), which leverages the rich pre-trained knowledge in the textual modality to facilitate the model in generating well-aligned part features. ProFD first designs part-specific prompts and utilizes noisy segmentation masks to preliminarily align visual and textual embeddings, enabling the textual prompts to have spatial awareness. Furthermore, to alleviate the noise from external masks, ProFD adopts a hybrid-attention decoder, ensuring spatial and semantic consistency during the decoding process to minimize noise impact. Additionally, to avoid catastrophic forgetting, we employ a self-distillation strategy, retaining the pre-trained knowledge of CLIP to mitigate over-fitting. Evaluation results on the Market1501, DukeMTMC-ReID, Occluded-Duke, Occluded-ReID, and P-DukeMTMC datasets demonstrate that ProFD achieves state-of-the-art results.
ProFD: Prompt-Guided Feature Disentangling for Occluded Person Re-Identification
[ "Can Cui", "Siteng Huang", "Wenxuan Song", "Pengxiang Ding", "Zhang Min", "Donglin Wang" ]
Conference
poster
2409.20081
[ "https://github.com/cuixxx/profd" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=nycUC9g6IO
@inproceedings{ wei2024benchmarking, title={Benchmarking In-the-wild Multimodal Disease Recognition and A Versatile Baseline}, author={Tianqi Wei and Zhi Chen and Zi Huang and Xin Yu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=nycUC9g6IO} }
Existing plant disease classification models have achieved remarkable performance in recognizing in-laboratory diseased images. However, their performance often significantly degrades in classifying in-the-wild images. Furthermore, we observed that in-the-wild plant images may exhibit similar appearances across various diseases (i.e., small inter-class discrepancy) while the same diseases may look quite different (i.e., large intra-class variance). Motivated by this observation, we propose an in-the-wild multimodal plant disease recognition dataset that not only contains the largest number of disease classes but also provides text-based descriptions for each disease. Particularly, the newly provided text descriptions are introduced to provide rich information in the textual modality and facilitate in-the-wild disease classification under small inter-class discrepancy and large intra-class variance. Therefore, our proposed dataset can be regarded as an ideal testbed for evaluating disease recognition methods in the real world. In addition, we further present a strong yet versatile baseline that models text descriptions and visual data through multiple prototypes for a given class. By fusing the contributions of multimodal prototypes in classification, our baseline can effectively address the small inter-class discrepancy and large intra-class variance issues. Remarkably, our baseline model can not only classify diseases but also recognize diseases in few-shot or training-free scenarios. Extensive benchmarking results demonstrate that our proposed in-the-wild multimodal dataset poses many new challenges to the plant disease recognition task and leaves substantial room for improvement in future work.
Benchmarking In-the-wild Multimodal Disease Recognition and A Versatile Baseline
[ "Tianqi Wei", "Zhi Chen", "Zi Huang", "Xin Yu" ]
Conference
poster
2408.03120
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=nuXhhXkqGL
@inproceedings{ lei2024seeing, title={Seeing Beyond Classes: Zero-Shot Grounded Situation Recognition via Language Explainer}, author={Jiaming Lei and Lin Li and Chunping Wang and Jun Xiao and Long Chen}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=nuXhhXkqGL} }
Benefiting from strong generalization ability, pre-trained vision-language models (VLMs), e.g., CLIP, have been widely utilized in zero-shot scene understanding. Unlike simple recognition tasks, grounded situation recognition (GSR) requires the model not only to classify salient activity (verb) in the image, but also to detect all semantic roles that participate in the action. This complex task usually involves three steps: verb recognition, semantic role grounding, and noun recognition. Directly employing class-based prompts with VLMs and grounding models for this task suffers from several limitations, e.g., it struggles to distinguish ambiguous verb concepts, accurately localize roles with fixed verb-centric template input, and achieve context-aware noun predictions. In this paper, we argue that these limitations stem from the model’s poor understanding of verb/noun classes. To this end, we introduce a new approach for zero-shot GSR via Language EXplainer(LEX), which significantly boosts the model’s comprehensive capabilities through three explainers: 1) verb explainer, which generates general verb-centric descriptions to enhance the discriminability of different verb classes; 2) grounding explainer, which rephrases verb-centric templates for clearer understanding, thereby enhancing precise semantic role localization; and 3) noun explainer, which creates scene-specific noun descriptions to ensure context-aware noun recognition. By equipping each step of the GSR process with an auxiliary explainer, LEX facilitates complex scene understanding in real-world scenarios. Our extensive validations on the SWiG dataset demonstrate LEX’s effectiveness and interoperability in zero-shot GSR.
Seeing Beyond Classes: Zero-Shot Grounded Situation Recognition via Language Explainer
[ "Jiaming Lei", "Lin Li", "Chunping Wang", "Jun Xiao", "Long Chen" ]
Conference
poster
2404.15785
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=ntmfqeiDcN
@inproceedings{ xu2024seeing, title={Seeing Text in the Dark: Algorithm and Benchmark}, author={Chengpei Xu and Hao Fu and Long Ma and Wenjing Jia and Chengqi Zhang and Feng Xia and Xiaoyu Ai and Binghao Li and Wenjie Zhang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=ntmfqeiDcN} }
Localizing text in low-light environments is challenging due to visual degradations. Although a straightforward solution involves a two-stage pipeline with low-light image enhancement (LLE) as the initial step followed by detection, LLE is primarily designed for human vision rather than machine vision and can accumulate errors. In this work, we propose an efficient and effective single-stage approach for localizing text in the dark that circumvents the need for LLE. We introduce a constrained learning module as an auxiliary mechanism during the training stage of the text detector. This module is designed to guide the text detector in preserving textual spatial features amidst feature map resizing, thus minimizing the loss of spatial information in texts under low-light visual degradations. Specifically, we incorporate spatial reconstruction and spatial semantic constraints within this module to ensure the text detector acquires essential positional and contextual range knowledge. Our approach enhances the original text detector's ability to identify text's local topological features using a dynamic snake feature pyramid network and adopts a bottom-up contour shaping strategy with a novel rectangular accumulation technique for accurate delineation of streamlined text features. In addition, we present a comprehensive low-light dataset for arbitrary-shaped text, encompassing diverse scenes and languages. Notably, our method achieves state-of-the-art results on this low-light dataset and exhibits comparable performance on standard normal light datasets. The code and dataset will be released.
Seeing Text in the Dark: Algorithm and Benchmark
[ "Chengpei Xu", "Hao Fu", "Long Ma", "Wenjing Jia", "Chengqi Zhang", "Feng Xia", "Xiaoyu Ai", "Binghao Li", "Wenjie Zhang" ]
Conference
poster
2404.08965
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=nsvjKWy22R
@inproceedings{ zheng2024viewpcgc, title={View{PCGC}: View-Guided Learned Point Cloud Geometry Compression}, author={Huiming Zheng and Wei Gao and Zhuozhen Yu and Tiesong Zhao and Ge Li}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=nsvjKWy22R} }
With the rise of immersive media applications such as digital museums, virtual reality, and interactive exhibitions, point clouds, as a three-dimensional data storage format, have gained increasingly widespread attention. The massive data volume of point clouds imposes extremely high requirements on transmission bandwidth in the above applications, gradually becoming a bottleneck for immersive media applications. Although existing learning-based point cloud compression methods have achieved some success in compression efficiency by mining the spatial redundancy of local structural features, these methods often overlook the intrinsic connections between point cloud data and data from other modalities (such as images), thereby limiting further improvements in compression efficiency. To address this limitation, we propose a view-guided learned point cloud geometry compression scheme, namely ViewPCGC. We adopt a novel self-attention mechanism and a cross-modality attention mechanism based on sparse convolution to align the modality features of the point cloud and the view image, removing view redundancy through a Modality Redundancy Removal Module (MRRM). Simultaneously, side information from the view image is introduced into the Conditional Checkerboard Entropy Model (CCEM), significantly enhancing the accuracy of the probability density function estimation for point cloud geometry. In addition, we design a View-Guided Quality Enhancement Module (VG-QEM) in the decoder, utilizing the contour information of the point cloud in the view image to supplement reconstruction details. The superior experimental performance demonstrates the effectiveness of our method. Compared to state-of-the-art point cloud geometry compression methods, ViewPCGC exhibits an average performance gain exceeding 10% on the D1-PSNR metric.
ViewPCGC: View-Guided Learned Point Cloud Geometry Compression
[ "Huiming Zheng", "Wei Gao", "Zhuozhen Yu", "Tiesong Zhao", "Ge Li" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=nsHBUGeZmH
@inproceedings{ liu2024smdepth, title={{SM}4Depth: Seamless Monocular Metric Depth Estimation across Multiple Cameras and Scenes by One Mode}, author={Yihao Liu and Feng Xue and Anlong Ming and Mingshuai Zhao and Huadong Ma and Nicu Sebe}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=nsHBUGeZmH} }
In the last year, universal monocular metric depth estimation (universal MMDE) has gained considerable attention, serving as the foundation model for various multimedia tasks, such as video and image editing. Nonetheless, current approaches face challenges in maintaining consistent accuracy across diverse scenes without scene-specific parameters and pre-training, hindering the practicality of MMDE. Furthermore, these methods rely on extensive datasets comprising millions, if not tens of millions, of data samples for training, leading to significant time and hardware expenses. This paper presents SM4Depth, a model that seamlessly works for both indoor and outdoor scenes, without needing extensive training data and GPU clusters. Firstly, to obtain consistent depth across diverse scenes, we propose a novel metric scale modeling, i.e., variation-based unnormalized depth bins. It reduces the ambiguity of the conventional metric bins and enables better adaptation to large depth gaps across scenes during training. Secondly, we propose a "divide and conquer" solution to reduce reliance on massive training data. Instead of estimating directly from the vast solution space, the metric bins are estimated from multiple solution sub-spaces to reduce complexity. Additionally, we introduce an uncut depth dataset, Campus Depth, to evaluate depth accuracy and consistency across various indoor and outdoor scenes. Trained on a consumer-grade GPU using just 150K RGB-D pairs, SM4Depth achieves outstanding performance on most never-before-seen datasets, especially maintaining consistent accuracy across indoor and outdoor scenes. The code can be found in the supplementary material.
SM4Depth: Seamless Monocular Metric Depth Estimation across Multiple Cameras and Scenes by One Mode
[ "Yihao Liu", "Feng Xue", "Anlong Ming", "Mingshuai Zhao", "Huadong Ma", "Nicu Sebe" ]
Conference
poster
[ "https://github.com/mrobotit/sm4depth" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=nrt0w3gJ3f
@inproceedings{ tian2024timefrequency, title={Time-Frequency Domain Fusion Enhancement for Audio Super-Resolution}, author={Ye Tian and Zhe Wang and Jianguo Sun and Liguo Zhang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=nrt0w3gJ3f} }
Audio super-resolution aims to improve the quality of acoustic signals and is able to reconstruct corresponding high-resolution acoustic signals from low-resolution acoustic signals. However, since acoustic signals can be divided into two forms, time-domain acoustic waves or frequency-domain spectrograms, most existing research focuses on data enhancement in a single domain, which can only obtain partial or local features of the audio signal, resulting in limitations of data analysis. Therefore, this paper proposes a time-frequency domain fusion enhanced audio super-resolution method to mine the complementarity of the two representations of acoustic signals. Specifically, we propose an end-to-end audio super-resolution network, including a variational autoencoder-based Sound Wave Super-Resolution Module (SWSRM), a U-Net-based Spectrogram Super-Resolution Module (SSRM), and an attention-based Time-Frequency Domain Fusion Module (TFDFM). SWSRM and SSRM can generate more high-frequency and low-frequency components for the audio, respectively. As a critical component of our method, TFDFM performs weighted fusion on the above two outputs to obtain a super-resolution audio signal. Compared with other methods, experimental results on the VCTK and Piano datasets in natural scenes show that the time-frequency domain fusion audio super-resolution model achieves a state-of-the-art bandwidth expansion effect. Furthermore, we perform super-resolution on the ShipsEar dataset containing underwater acoustic signals. The super-resolution results are used to test ship target recognition, and the accuracy is improved by 12.66%. Therefore, the proposed super-resolution method has an excellent signal enhancement effect and generalization ability.
Time-Frequency Domain Fusion Enhancement for Audio Super-Resolution
[ "Ye Tian", "Zhe Wang", "Jianguo Sun", "Liguo Zhang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=nqC7VakWxw
@inproceedings{ li2024translinkguard, title={TransLinkGuard: Safeguarding Transformer Models Against Model Stealing in Edge Deployment}, author={Qinfeng Li and Zhiqiang Shen and Zhenghan Qin and Yangfan Xie and Xuhong Zhang and Tianyu Du and Sheng Cheng and Xun Wang and Jianwei Yin}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=nqC7VakWxw} }
Proprietary large language models (LLMs) have been widely applied in various scenarios. Additionally, deploying LLMs on edge devices is trending for efficiency and privacy reasons. However, edge deployment of proprietary LLMs introduces new security challenges: edge-deployed models are exposed as white-box accessible to users, enabling adversaries to conduct effective model stealing (MS) attacks. Unfortunately, existing defense mechanisms fail to provide effective protection. Specifically, we identify four critical protection properties that existing methods fail to simultaneously satisfy: (1) maintaining protection after a model is physically copied; (2) authorizing model access at request level; (3) safeguarding runtime reverse engineering; (4) achieving high security with negligible runtime overhead. To address the above issues, we propose TransLinkGuard, a plug-and-play model protection approach against model stealing on edge devices. The core part of TransLinkGuard is a lightweight authorization module residing in a secure environment, e.g., TEE. The authorization module can freshly authorize each request based on its input. Extensive experiments show that TransLinkGuard achieves the same security protection as the black-box security guarantees with negligible overhead.
TransLinkGuard: Safeguarding Transformer Models Against Model Stealing in Edge Deployment
[ "Qinfeng Li", "Zhiqiang Shen", "Zhenghan Qin", "Yangfan Xie", "Xuhong Zhang", "Tianyu Du", "Sheng Cheng", "Xun Wang", "Jianwei Yin" ]
Conference
poster
2404.11121
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=njL8SXiIbC
@inproceedings{ liu2024priorfree, title={Prior-free Balanced Replay: Uncertainty-guided Reservoir Sampling for Long-Tailed Continual Learning}, author={Lei Liu and Li Liu and Yawen Cui}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=njL8SXiIbC} }
Even in the era of large models, one of the well-known issues in continual learning (CL) is catastrophic forgetting, which is significantly challenging when the continual data stream exhibits a long-tailed distribution, termed as Long-Tailed Continual Learning (LTCL). Existing LTCL solutions generally require the label distribution of the data stream to achieve re-balance training. However, obtaining such prior information is often infeasible in real scenarios since the model should learn without pre-identifying the majority and minority classes. To this end, we propose a novel Prior-free Balanced Replay (PBR) framework to learn from long-tailed data stream with less forgetting. Concretely, motivated by our experimental finding that the minority classes are more likely to be forgotten due to the higher uncertainty, we newly design an uncertainty-guided reservoir sampling strategy to prioritize rehearsing minority data without using any prior information, which is based on the mutual dependence between the model and samples. Additionally, we incorporate two prior-free components to further reduce the forgetting issue: (1) Boundary constraint is to preserve uncertain boundary supporting samples for continually re-estimating task boundaries. (2) Prototype constraint is to maintain the consistency of learned class prototypes along with training. Our approach is evaluated on three standard long-tailed benchmarks, demonstrating superior performance to existing CL methods and previous SOTA LTCL approach in both task- and class-incremental learning settings, as well as ordered- and shuffled-LTCL settings.
Prior-free Balanced Replay: Uncertainty-guided Reservoir Sampling for Long-Tailed Continual Learning
[ "Lei Liu", "Li Liu", "Yawen Cui" ]
Conference
poster
2408.14976
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=ngQ1ZBt7HT
@inproceedings{ wen2024gaussian, title={Gaussian Mutual Information Maximization for Efficient Graph Self-Supervised Learning: Bridging Contrastive-based to Decorrelation-based}, author={Jinyong Wen}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=ngQ1ZBt7HT} }
Enlightened by the InfoMax principle, Graph Contrastive Learning (GCL) has achieved remarkable performance in processing large amounts of unlabeled graph data. Due to the impracticality of precisely calculating mutual information (MI), conventional contrastive methods turn to approximate its lower bound using parametric neural estimators, which inevitably introduces additional parameters and leads to increased computational complexity. Building upon a common Gaussian assumption on the distribution of node representations, a computationally tractable surrogate for the original MI can be rigorously derived, termed as Gaussian Mutual Information (GMI). Leveraging multi-view priors of GCL, we induce an efficient contrastive objective based on GMI with performance guarantees, eliminating the reliance on parameterized estimators and negative samples. The emergence of another decorrelation-based self-supervised learning branch parallels contrastive-based approaches. By positioning the proposed GMI-based objective as a pivot, we bridge the gap between these two research areas from two aspects of approximate form and consistent solution, which contributes to the advancement of a unified theoretical framework for self-supervised learning. Extensive comparison experiments and visual analysis provide compelling evidence for the effectiveness and efficiency of our method while supporting our theoretical achievements.
Gaussian Mutual Information Maximization for Efficient Graph Self-Supervised Learning: Bridging Contrastive-based to Decorrelation-based
[ "Jinyong Wen" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=nagiMHx4A3
@inproceedings{ wang2024artspeech, title={ArtSpeech: Adaptive Text-to-Speech Synthesis with Articulatory Representations}, author={Zhongxu Wang and Yujia Wang and Mingzhu Li and Hua Huang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=nagiMHx4A3} }
We devise an articulatory representation-based text-to-speech (TTS) model, ArtSpeech, an explainable and effective network for human-like speech synthesis, by revisiting the sound production system. Current deep TTS models learn acoustic-text mapping in a fully parametric manner, ignoring the explicit physical significance of articulation movement. ArtSpeech, on the contrary, leverages articulatory representations to perform adaptive TTS, clearly describing the voice tone and speaking prosody of different speakers. Specifically, energy, F0, and vocal tract variables are utilized to represent airflow forced by articulatory organs, the degree of tension in the vocal folds of the larynx, and the coordinated movements between different organs, respectively. We further design a multi-dimensional style mapping network to extract speaking styles from diverse articulatory representations. These speaking styles are utilized to guide the outputs of the articulatory variation predictors, and ultimately predict the final mel spectrogram output. Experiment results show that, compared to other open-source zero-shot TTS systems, ArtSpeech enhances synthesis quality and greatly boosts the similarity between the generated results and the target speaker’s voice and prosody.
ArtSpeech: Adaptive Text-to-Speech Synthesis with Articulatory Representations
[ "Zhongxu Wang", "Yujia Wang", "Mingzhu Li", "Hua Huang" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=nUTpqf7rgP
@inproceedings{ he2024onebit, title={One-bit Semantic Hashing: Towards Resource-Efficient Hashing Model with Binary Neural Network}, author={Liyang He and Zhenya Huang and Chenglong Liu and Rui Li and Runze Wu and Qi Liu and Enhong Chen}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=nUTpqf7rgP} }
Deep Hashing (DH) has emerged as an indispensable technique for fast image search in recent years. However, using full-precision Convolutional Neural Networks (CNN) in DH makes it challenging to deploy on devices with limited resources. To deploy DH on resource-limited devices, the Binary Neural Network (BNN) offers a solution that significantly reduces computations and parameters compared to CNN. Unfortunately, applying BNN directly to DH will lead to huge performance degradation. To tackle this problem, we first conducted extensive experiments and discovered that the center-based method provides a fundamental guarantee for BNN-DH performance. Subsequently, we delved deeper into the impact of BNNs on center-based methods and revealed two key insights. First, we find reducing the distance between hash codes and hash centers is challenging for BNN-DH compared to CNN-based DH. This can be attributed to the limited representation capability of BNN. Second, the evolution of hash code aggregation undergoes two stages in BNN-DH, which is different from CNN-based DH. Thus, we need to take into account the changing trends in code aggregation at different stages. Based on these findings, we designed a strong and general method called One-bit Deep Hashing (ODH). First, ODH incorporates a semantic self-adaptive hash center module to address the problem of hash codes inadequately converging to their hash centers. Then, it employs a novel two-stage training method to consider the evolution of hash code aggregation. Finally, extensive experiments on two datasets demonstrate that ODH can achieve significant superiority over other BNN-DH models. The code for ODH is available at https://anonymous.4open.science/r/OSH-1730.
One-bit Semantic Hashing: Towards Resource-Efficient Hashing Model with Binary Neural Network
[ "Liyang He", "Zhenya Huang", "Chenglong Liu", "Rui Li", "Runze Wu", "Qi Liu", "Enhong Chen" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=nSUMQhITdd
@inproceedings{ kuang2024consistency, title={Consistency Guided Diffusion Model with Neural Syntax for Perceptual Image Compression}, author={Haowei Kuang and Yiyang Ma and Wenhan Yang and Zongming Guo and Jiaying Liu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=nSUMQhITdd} }
Diffusion models show impressive performances in image generation with excellent perceptual quality. However, its tendency to introduce additional distortion prevents its direct application in image compression. To address the issue, this paper introduces a Consistency Guided Diffusion Model (CGDM) tailored for perceptual image compression, which integrates an end-to-end image compression model with a diffusion-based post-processing network, aiming to learn richer detail representations with less fidelity loss. In detail, the compression and post-processing networks are cascaded and a branch of consistency guided features is added to constrain the deviation in the diffusion process for better reconstruction quality. Furthermore, a Syntax driven Feature Fusion (SFF) module is constructed to take an extra ultra-low bitstream from the encoding end as input, guiding the adaptive fusion of information from the two branches. In addition, we design a globally uniform boundary control strategy with overlapped patches and adopt a continuous online optimization mode to improve both coding efficiency and global consistency. Extensive experiments validate the superiority of our method to existing perceptual compression techniques and the effectiveness of each component in our method.
Consistency Guided Diffusion Model with Neural Syntax for Perceptual Image Compression
[ "Haowei Kuang", "Yiyang Ma", "Wenhan Yang", "Zongming Guo", "Jiaying Liu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=nRIh1NRksL
@inproceedings{ xu2024a, title={A Chinese Multimodal Social Video Dataset for Controversy Detection}, author={Tianjiao Xu and Aoxuan Chen and Yuxi Zhao and Jinfei Gao and Tian Gan}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=nRIh1NRksL} }
Social video platforms have emerged as significant channels for information dissemination, facilitating lively public discussions that often give rise to controversies. However, existing approaches to controversy detection primarily focus on textual features, which raises three key concerns: it underutilizes the potential of visual information available on social media platforms; it is ineffective when faced with incomplete or absent textual information; and the existing datasets fail to adequately address the need for comprehensive multimodal resources on social media platforms. To address these challenges, we construct a large-scale Multimodal Controversial Dataset (MMCD) in Chinese. Additionally, we propose a novel framework named Multi-view Controversy Detection (MVCD) to effectively model controversies from multiple perspectives. Through extensive experiments using state-of-the-art models on the MMCD, we demonstrate MVCD's effectiveness and potential impact.
A Chinese Multimodal Social Video Dataset for Controversy Detection
[ "Tianjiao Xu", "Aoxuan Chen", "Yuxi Zhao", "Jinfei Gao", "Tian Gan" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=nQ74f9uyO9
@inproceedings{ feng2024improving, title={Improving Composed Image Retrieval via Contrastive Learning with Scaling Positives and Negatives}, author={Zhangchi Feng and Richong Zhang and Zhijie Nie}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=nQ74f9uyO9} }
The Composed Image Retrieval (CIR) task aims to retrieve target images using a composed query consisting of a reference image and a modified text. Advanced methods often utilize contrastive learning as the optimization objective, which benefits from adequate positive and negative examples. However, the triplets for CIR incur high manual annotation costs, resulting in limited positive examples. Furthermore, existing methods commonly use in-batch negative sampling, which reduces the number of negatives available to the model. To address the lack of positives, we propose a data generation method that leverages a multi-modal large language model to construct triplets for CIR. To introduce more negatives during fine-tuning, we design a two-stage fine-tuning framework for CIR, whose second stage introduces plenty of static representations of negatives to optimize the representation space rapidly. The above two improvements can be effectively stacked and are designed to be plug-and-play, easily applied to existing CIR models without changing their original architectures. Extensive experiments and ablation analysis demonstrate that our method effectively scales positives and negatives and achieves state-of-the-art results on both the FashionIQ and CIRR datasets. In addition, our method also performs well in zero-shot composed image retrieval, providing a new CIR solution for low-resource scenarios. The code is released at https://anonymous.4open.science/r/45F4 and will be publicly available upon acceptance.
Improving Composed Image Retrieval via Contrastive Learning with Scaling Positives and Negatives
[ "Zhangchi Feng", "Richong Zhang", "Zhijie Nie" ]
Conference
poster
2404.11317
[ "https://github.com/BUAADreamer/SPN4CIR" ]
https://huggingface.co/papers/2404.11317
1
0
0
3
[ "BUAADreamer/SPN4CIR" ]
[]
[]
[ "BUAADreamer/SPN4CIR" ]
[]
[]
1
null
https://openreview.net/forum?id=nFrcliTxAC
@inproceedings{ ding2024domainagnostic, title={Domain-Agnostic Crowd Counting via Uncertainty-Guided Style Diversity Augmentation}, author={Guanchen Ding and Lingbo Liu and Zhenzhong Chen and Chang Wen Chen}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=nFrcliTxAC} }
Domain shift poses a significant barrier to the performance of crowd counting algorithms in unseen domains. While domain adaptation methods address this challenge by utilizing images from the target domain, they become impractical when target-domain image acquisition is problematic. Additionally, these methods require extra training time due to the need for fine-tuning on target domain images. To tackle this problem, we propose an Uncertainty-Guided Style Diversity Augmentation (UGSDA) method, enabling crowd counting models to be trained solely on the source domain and directly generalized to different unseen target domains. This is achieved by generating sufficiently diverse and realistic samples during the training process. Specifically, our UGSDA method incorporates three tailor-designed components: the Global Styling Elements Extraction (GSEE) module, the Local Uncertainty Perturbations (LUP) module, and the Density Distribution Consistency (DDC) loss. The GSEE extracts global style elements from the feature space of the whole source domain. The LUP aims to obtain uncertainty perturbations from the batch-level input to form style distributions beyond the source domain, which are used to generate diversified stylized samples together with the global style elements. To regulate the extent of the perturbations, the DDC loss imposes constraints between the source samples and the stylized samples, ensuring the stylized samples maintain a higher degree of realism and reliability. Comprehensive experiments validate the superiority of our approach, demonstrating its strong generalization capabilities across various datasets and models. Our code will be made publicly available.
Domain-Agnostic Crowd Counting via Uncertainty-Guided Style Diversity Augmentation
[ "Guanchen Ding", "Lingbo Liu", "Zhenzhong Chen", "Chang Wen Chen" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=nAhFSkcnz3
@inproceedings{ fan2024msfnet, title={{MSFN}et: Multi-Scale Fusion Network for Brain-Controlled Speaker Extraction}, author={Cunhang Fan and Jingjing Zhang and Hongyu Zhang and Wang Xiang and Jianhua Tao and Xinhui Li and Jiangyan Yi and Dianbo Sui and Zhao Lv}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=nAhFSkcnz3} }
Speaker extraction aims to selectively extract the target speaker from a multi-talker environment under the guidance of an auxiliary reference. Recent studies have shown that the attended speaker's information can be decoded via auditory attention decoding from the listener's brain activity. However, how to more effectively utilize the common information about the target speaker contained in both electroencephalography (EEG) and speech is still an unresolved problem. In this paper, we propose a multi-scale fusion network (MSFNet) for brain-controlled speaker extraction, which utilizes the EEG recorded from the listener to extract the target speech. In order to make full use of the speech information, the mixed speech is encoded at multiple time scales to acquire multi-scale embeddings. In addition, to effectively extract the non-Euclidean EEG data, graph convolutional networks are used as the EEG encoder. Finally, these multi-scale embeddings are separately fused with the EEG features. To facilitate research related to auditory attention decoding and further validate the effectiveness of the proposed method, we also construct the AVED dataset, a new EEG-Audio dataset. Experimental results on both the public Cocktail Party dataset and the newly proposed AVED dataset show that our MSFNet model significantly outperforms the state-of-the-art method in certain objective evaluation metrics.
MSFNet: Multi-Scale Fusion Network for Brain-Controlled Speaker Extraction
[ "Cunhang Fan", "Jingjing Zhang", "Hongyu Zhang", "Wang Xiang", "Jianhua Tao", "Xinhui Li", "Jiangyan Yi", "Dianbo Sui", "Zhao Lv" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=n6dMs3Qpax
@inproceedings{ ji2024eliminate, title={Eliminate Before Align: A Remote Sensing Image-Text Retrieval Framework with Keyword Explicit Reasoning}, author={Zhong Ji and Changxu Meng and Yan Zhang and Haoran Wang and Yanwei Pang and Jungong Han}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=n6dMs3Qpax} }
A large body of research centers on Remote Sensing Image-Text Retrieval (RSITR), aiming to retrieve the corresponding targets based on a given query. Among these efforts, the transfer of Foundation Models (FMs), such as CLIP, to the remote sensing domain shows promising results. However, existing FM-based approaches neglect the negative impact of weakly correlated sample pairs and the key distinctions among remote sensing texts, leading to biased and superficial exploration of sample pairs. To address these challenges, we propose a novel Eliminate Before Align strategy with Keyword Explicit Reasoning framework (EBAKER) for RSITR. Specifically, we devise an innovative Eliminate Before Align (EBA) strategy to filter out weakly correlated sample pairs and mitigate their deviations from the optimal embedding space during alignment. Moreover, we introduce a Keyword Explicit Reasoning (KER) module to facilitate the positive role of subtle key concept differences. Without bells and whistles, our method achieves a one-step transformation from FM to the RSITR task, obviating the necessity for extra pretraining on remote sensing data. Extensive experiments on three popular benchmark datasets validate that our proposed EBAKER method outperforms the state-of-the-art methods with less training data. Our source code will be released soon.
Eliminate Before Align: A Remote Sensing Image-Text Retrieval Framework with Keyword Explicit Reasoning
[ "Zhong Ji", "Changxu Meng", "Yan Zhang", "Haoran Wang", "Yanwei Pang", "Jungong Han" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=n3uCCaGTxl
@inproceedings{ wu2024semantic, title={Semantic Alignment for Multimodal Large Language Models}, author={Tao Wu and Mengze Li and Jingyuan Chen and Wei Ji and Wang Lin and Jinyang Gao and Kun Kuang and Zhou Zhao and Fei Wu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=n3uCCaGTxl} }
Research on Multi-modal Large Language Models (MLLMs) towards the multi-image cross-modal instruction has received increasing attention and made significant progress, particularly in scenarios involving closely resembling images (e.g., change captioning). Existing MLLMs typically follow a two-step process in their pipelines: first, extracting visual tokens independently for each input image, and then aligning these visual tokens from different images with the Large Language Model (LLM) in its textual feature space. However, the independent extraction of visual tokens for each image may result in different semantics being prioritized for different images in the first step, leading to a lack of preservation of linking information among images for subsequent LLM analysis. This issue becomes more serious in scenarios where significant variations exist among the images (e.g., visual storytelling). To address this challenge, we introduce Semantic Alignment for Multi-modal large language models (SAM). By involving the bidirectional semantic guidance between different images in the visual-token extraction process, SAM aims to enhance the preservation of linking information for coherent analysis and align the semantics of different images before feeding them into LLM. As the test bed, we propose a large-scale dataset named MmLINK consisting of 69K samples. Different from most existing datasets for MLLMs fine-tuning, our MmLINK dataset comprises multi-modal instructions with significantly diverse images. Extensive experiments on the group captioning task and the storytelling task prove the effectiveness of our SAM model, surpassing the state-of-the-art methods by a large margin (+37% for group captioning and +22% for storytelling on CIDEr score). Project page: https://anonymous.4open.science/r/SAM-F596.
Semantic Alignment for Multimodal Large Language Models
[ "Tao Wu", "Mengze Li", "Jingyuan Chen", "Wei Ji", "Wang Lin", "Jinyang Gao", "Kun Kuang", "Zhou Zhao", "Fei Wu" ]
Conference
poster
2408.12867
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=n10Ax1bixC
@inproceedings{ yang2024a, title={A Medical Data-Effective Learning Benchmark for Highly Efficient Pre-training of Foundation Models}, author={Wenxuan Yang and Weimin Tan and Yuqi Sun and Bo Yan}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=n10Ax1bixC} }
Foundation models, pre-trained on massive datasets, have achieved unprecedented generalizability. However, is it truly necessary to involve such vast amounts of data in pre-training, consuming extensive computational resources? This paper introduces data-effective learning, aiming to use data in the most impactful way to pre-train foundation models. This involves strategies that focus on data quality rather than quantity, ensuring the data used for training has high informational value. Data-effective learning plays a profound role in accelerating foundation model training, reducing computational costs, and saving data storage, which is increasingly important as the volume of medical data in recent years has grown beyond many people's expectations. However, due to the lack of standards and a comprehensive benchmark, medical data-effective learning remains poorly studied. To address this gap, our paper introduces a comprehensive benchmark specifically for evaluating data-effective learning in the medical field. This benchmark includes a dataset with millions of data samples from 31 medical centers (DataDEL), a baseline method for comparison (MedDEL), and a new evaluation metric (NormDEL) to objectively measure data-effective learning performance. Our extensive experimental results show the baseline MedDEL can achieve performance comparable to the original large dataset with only 5% of the data. Establishing such an open data-effective learning benchmark is crucial for the medical AI research community because it facilitates efficient data use, promotes collaborative breakthroughs, and fosters the development of cost-effective, scalable, and impactful healthcare solutions.
A Medical Data-Effective Learning Benchmark for Highly Efficient Pre-training of Foundation Models
[ "Wenxuan Yang", "Weimin Tan", "Yuqi Sun", "Bo Yan" ]
Conference
poster
2401.17542
[ "https://github.com/shadow2469/data-effective-learning-a-comprehensive-medical-benchmark" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=mq5Kg0XBWd
@inproceedings{ zhang2024app, title={{APP}: Adaptive Pose Pooling for 3D Human Pose Estimation from Videos}, author={Jinyan Zhang and Mengyuan Liu and Hong Liu and Guoquan Wang and Wenhao Li}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=mq5Kg0XBWd} }
Current advancements in 3D human pose estimation have attained notable success by converting 2D poses into their 3D counterparts. However, this approach is inherently influenced by the errors introduced by 2D pose detectors and overlooks the intrinsic spatial information embedded within RGB images. To address these challenges, we introduce a versatile module called Adaptive Pose Pooling (APP), compatible with many existing 2D-to-3D lifting models. The APP module includes three novel sub-modules: Pose-Aware Offsets Generation (PAOG), Pose-Aware Sampling (PAS), and Spatial Temporal Information Fusion (STIF). First, we extract the latent features of the multi-frame lifting model. Then, a 2D pose detector is utilized to extract multi-level feature maps from the image. After that, PAOG generates offsets according to the feature maps, and PAS uses the offsets to sample the feature maps. Finally, STIF fuses the PAS sampling features and the latent features. This innovative design allows the APP module to simultaneously capture spatial and temporal information. We conduct comprehensive experiments on two widely used datasets: Human3.6M and MPI-INF-3DHP. Meanwhile, we employ various lifting models to demonstrate the efficacy of the APP module. Our results show that the proposed APP module consistently enhances the performance of lifting models, achieving state-of-the-art results. Significantly, our module achieves these performance boosts without necessitating alterations to the architecture of the lifting model.
APP: Adaptive Pose Pooling for 3D Human Pose Estimation from Videos
[ "Jinyan Zhang", "Mengyuan Liu", "Hong Liu", "Guoquan Wang", "Wenhao Li" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=moPkVVKKTY
@inproceedings{ jiang2024haleval, title={Hal-Eval: A Universal and Fine-grained Hallucination Evaluation Framework for Large Vision Language Models}, author={Chaoya Jiang and Wei Ye and Mengfan Dong and Jia Hongrui and Haiyang Xu and Ming Yan and Ji Zhang and Shikun Zhang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=moPkVVKKTY} }
Large Vision-Language Models (LVLMs) exhibit remarkable capabilities but struggle with "hallucinations"—inconsistencies between images and their descriptions. Previous hallucination evaluation studies on LVLMs have identified hallucinations in terms of objects, attributes, and relations but overlooked complex hallucinations that create an entire narrative around a fictional entity. In this paper, we introduce a refined taxonomy of hallucinations, featuring a new category: Event Hallucination. We then utilize advanced LLMs to generate and filter fine-grained hallucinatory data consisting of various types of hallucinations, with a particular focus on event hallucinations, laying the groundwork for integrating discriminative and generative evaluation methods within our universal evaluation framework. The proposed benchmark distinctively assesses LVLMs' ability to tackle a broad spectrum of hallucinations, making it a reliable and comprehensive tool for gauging LVLMs' efficacy in handling hallucinations. We will release our code and data.
Hal-Eval: A Universal and Fine-grained Hallucination Evaluation Framework for Large Vision Language Models
[ "Chaoya Jiang", "Wei Ye", "Mengfan Dong", "Jia Hongrui", "Haiyang Xu", "Ming Yan", "Ji Zhang", "Shikun Zhang" ]
Conference
oral
2402.15721
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=mmlnMWQLLp
@inproceedings{ zhang2024starstream, title={StarStream: Live Video Analytics over Space Networking}, author={Miao Zhang and Jiaxing Li and Haoyuan Zhao and Linfeng Shen and Jiangchuan Liu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=mmlnMWQLLp} }
Streaming videos from resource-constrained front-end devices over networks to resource-rich cloud servers has long been a common practice for surveillance and analytics. Most existing live video analytics (LVA) systems, however, are built over terrestrial networks, limiting their applications during natural disasters and in remote areas that desperately call for real-time visual data delivery and scene analysis. With the recent advent of space networking, in particular, low Earth orbit (LEO) satellite constellations such as Starlink, high-speed truly global Internet access is becoming available and affordable. This paper examines the challenges and potentials of LVA over modern LEO satellite networking (LSN). Using Starlink as the testbed, we have carried out extensive in-the-wild measurements to gain insights into its achievable performance for LVA. The results reveal that, the uplink bottleneck in today's LSN, together with the volatile network conditions, can significantly affect the service quality of LVA and necessitate prompt adaptation. We accordingly develop StarStream, a novel LSN-adaptive streaming framework for LVA. At its core, StarStream is empowered by a transformer-based network performance predictor tailored for LSN and a content-aware configuration optimizer. We discuss a series of key design and implementation issues of StarStream and demonstrate its effectiveness and superiority through trace-driven experiments with real-world network and video processing data.
StarStream: Live Video Analytics over Space Networking
[ "Miao Zhang", "Jiaxing Li", "Haoyuan Zhao", "Linfeng Shen", "Jiangchuan Liu" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=mlhG8gkLnd
@inproceedings{ ji2024a, title={A Principled Approach to Natural Language Watermarking}, author={Zhe Ji and Qiansiqi Hu and Yicheng Zheng and Liyao Xiang and Xinbing Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=mlhG8gkLnd} }
Recently, there has been a surge in machine-generated natural language content being misused by unauthorized parties. Watermarking is a well-recognized technique to address this issue by tracing the provenance of the text. However, we found that most existing watermarking systems for text are subject to ad hoc design and thus suffer from fundamental vulnerabilities. We propose a principled design for text watermarking based on a theoretical information-hiding framework. The watermarking party and the attacker play a rate-distortion-constrained capacity game to achieve the maximum rate of reliable transmission, i.e., the watermark capacity. The capacity can be expressed by the mutual information between the encoding and the attacker's corrupted text, indicating how many watermark bits are effectively conveyed under distortion constraints. The system is realized by a learning-based framework with mutual information neural estimators. In the framework, we adopt the assumption of an omniscient attacker and let the watermarking party pit against an attacker who is fully aware of the watermarking strategy. The watermarking party thus achieves higher robustness against removal attacks. We further show that the incorporation of side information substantially enhances the efficacy and robustness of the watermarking system. Experimental results have shown the superiority of our watermarking system compared to the state-of-the-art in terms of capacity, robustness, and preserving text semantics.
A Principled Approach to Natural Language Watermarking
[ "Zhe Ji", "Qiansiqi Hu", "Yicheng Zheng", "Liyao Xiang", "Xinbing Wang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=mk8p2JKdu0
@inproceedings{ bi2024eagle, title={{EAGLE}: Egocentric {AG}gregated Language-video Engine}, author={Jing Bi and Yunlong Tang and Luchuan Song and Ali Vosoughi and Nguyen Nguyen and Chenliang Xu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=mk8p2JKdu0} }
The rapid evolution of egocentric video analysis brings new insights into understanding human activities and intentions from a first-person perspective. Despite this progress, the fragmentation in tasks like action recognition, procedure learning, moment retrieval, etc., coupled with inconsistent annotations and isolated model development, hinders a holistic interpretation of video content. In response, we introduce the EAGLE (Egocentric AGgregated Language-video Engine) model and the EAGLE-400K dataset to provide a unified framework that integrates various egocentric video understanding tasks. EAGLE-400K, the first large-scale instruction-tuning dataset tailored for egocentric video, features 400K diverse samples to enhance a broad spectrum of tasks, from activity recognition to procedure knowledge learning. Moreover, EAGLE, a strong video-based multimodal large language model (MLLM), is designed to effectively capture both spatial and temporal information. In addition, we propose a set of evaluation metrics designed to facilitate a thorough assessment of MLLMs for egocentric video understanding. Our extensive experiments demonstrate EAGLE's superior performance over existing models, highlighting its ability to balance task-specific understanding with comprehensive video interpretation. With EAGLE, we aim to pave the way for novel research opportunities and practical applications in real-world scenarios.
EAGLE: Egocentric AGgregated Language-video Engine
[ "Jing Bi", "Yunlong Tang", "Luchuan Song", "Ali Vosoughi", "Nguyen Nguyen", "Chenliang Xu" ]
Conference
poster
2409.17523
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=mfiL5WJKb4
@inproceedings{ han2024towards, title={Towards Practical Human Motion Prediction with Li{DAR} Point Clouds}, author={Xiao Han and Yiming Ren and Yichen Yao and Yujing Sun and Yuexin Ma}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=mfiL5WJKb4} }
Human motion prediction is crucial for human-centric multimedia understanding and interaction. Current methods typically rely on ground truth human poses as observed input, which is not practical for real-world scenarios where only raw visual sensor data is available. To implement these methods in practice, a preliminary pose estimation stage is essential. However, such a two-stage approach often leads to performance degradation due to the accumulation of errors. Moreover, reducing raw visual data to sparse keypoint representations significantly diminishes the density of information, resulting in the loss of fine-grained features. In this paper, we propose LiDAR-HMP, the first single-LiDAR-based 3D human motion prediction approach, which receives the raw LiDAR point cloud as input and forecasts future 3D human poses directly. Building upon our novel structure-aware body feature descriptor, LiDAR-HMP adaptively maps the observed motion manifold to future poses and effectively models the spatial-temporal correlations of human motions for further refinement of prediction results. Extensive experiments show that our method achieves state-of-the-art performance on two public benchmarks and demonstrates remarkable robustness and efficacy in real-world deployments.
Towards Practical Human Motion Prediction with LiDAR Point Clouds
[ "Xiao Han", "Yiming Ren", "Yichen Yao", "Yujing Sun", "Yuexin Ma" ]
Conference
oral
2408.08202
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=me7vwVCtfr
@inproceedings{ shi2024mgrdark, title={{MGR}-Dark: A Large Multimodal Video Dataset and {RGB}-{IR} benchmark for Gesture Recognition in Darkness}, author={Yuanyuan Shi and Yunan Li and Siyu Liang and Huizhou Chen and Qiguang Miao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=me7vwVCtfr} }
Gesture recognition plays a crucial role in natural human-computer interaction and sign language recognition. Despite considerable progress in normal daylight, research dedicated to gesture recognition in dark environments is scarce. This is partly due to the lack of sufficient datasets for such a task. We bridge this data gap by collecting a new dataset: a large-scale multimodal video dataset for gesture recognition in darkness (MGR-Dark). MGR-Dark is distinguished from existing gesture datasets by its gesture collection in darkness, multimodal videos (RGB, Depth, and Infrared), and high video quality. To the best of our knowledge, this is the first high-quality multimodal dataset dedicated to human gestures in dark videos. Building upon this, we propose a Modality Translation and Cross-modal Distillation (MTCD) RGB-IR benchmark framework. Specifically, a modality translator is first utilized to transfer RGB data to pseudo-Infrared data, and a progressive cross-modal feature distillation module is then designed to exploit the underlying relations between the RGB, pseudo-Infrared, and Infrared modalities to guide RGB feature learning. The experiments demonstrate that the dataset and benchmark proposed in this paper are expected to advance research on gesture recognition in dark videos. The dataset and code will be available upon acceptance.
MGR-Dark: A Large Multimodal Video Dataset and RGB-IR benchmark for Gesture Recognition in Darkness
[ "Yuanyuan Shi", "Yunan Li", "Siyu Liang", "Huizhou Chen", "Qiguang Miao" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=mYmxQBntqv
@inproceedings{ gui2024navigating, title={Navigating Weight Prediction with Diet Diary}, author={Yinxuan Gui and Bin Zhu and Jingjing Chen and Chong-Wah Ngo and Yu-Gang Jiang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=mYmxQBntqv} }
Current research in food analysis primarily concentrates on tasks such as food recognition, recipe retrieval and nutrition estimation from a single image. Nevertheless, there is a significant gap in exploring the impact of food intake on physiological indicators (e.g., weight) over time. This paper addresses this gap by introducing the DietDiary dataset, which encompasses daily dietary diaries and corresponding weight measurements of real users. Furthermore, we propose a novel task of weight prediction with a dietary diary that aims to leverage historical food intake and weight to predict future weights. To tackle this task, we propose a model-agnostic time series forecasting framework. Specifically, we introduce a Unified Meal Representation Learning (UMRL) module to extract representations for each meal. Additionally, we design a diet-aware loss function to associate food intake with weight variations. By conducting experiments on the DietDiary dataset with two state-of-the-art time series forecasting models, NLinear and iTransformer, we demonstrate that our proposed framework achieves superior performance compared to the original models. We will make our dataset, code, and models publicly available.
Navigating Weight Prediction with Diet Diary
[ "Yinxuan Gui", "Bin Zhu", "Jingjing Chen", "Chong-Wah Ngo", "Yu-Gang Jiang" ]
Conference
oral
2408.05445
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=mYG7uEVlQd
@inproceedings{ liu2024zepo, title={ZePo: Zero-Shot Portrait Stylization with Faster Sampling}, author={Jin Liu and Huaibo Huang and Jie Cao and Ran He}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=mYG7uEVlQd} }
Diffusion-based text-to-image generation models have significantly advanced the field of art content synthesis. However, current portrait stylization methods generally require either model fine-tuning based on examples or the employment of DDIM Inversion to revert images to noise space, both of which substantially decelerate the image generation process. To overcome these limitations, this paper presents an inversion-free portrait stylization framework based on diffusion models that accomplishes content and style feature fusion in merely four sampling steps. We observed that Latent Consistency Models employing consistency distillation can effectively extract representative Consistency Features from noisy images. To blend the Consistency Features extracted from both content and style images, we introduce a Style Enhancement Attention Control technique that meticulously merges content and style features within the attention space of the target image. Moreover, we propose a feature merging strategy to amalgamate redundant features in Consistency Features, thereby reducing the computational load of attention control. Extensive experiments have validated the effectiveness of our proposed framework in enhancing stylization efficiency and fidelity.
ZePo: Zero-Shot Portrait Stylization with Faster Sampling
[ "Jin Liu", "Huaibo Huang", "Jie Cao", "Ran He" ]
Conference
poster
2408.05492
[ "https://github.com/liujin112/zepo" ]
https://huggingface.co/papers/2408.05492
2
6
2
4
[]
[]
[ "Jinl/ZePo" ]
[]
[]
[ "Jinl/ZePo" ]
1
null
https://openreview.net/forum?id=mWVXGBqbGw
@inproceedings{ chen2024safepaint, title={SafePaint: Anti-forensic Image Inpainting with Domain Adaptation}, author={Dunyun Chen and Xin Liao and Xiaoshuai Wu and Shiwei Chen}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=mWVXGBqbGw} }
Existing image inpainting methods have achieved remarkable accomplishments in generating visually appealing results, often accompanied by a trend toward creating more intricate structural textures. However, while these models excel at creating more realistic image content, they often leave noticeable traces of tampering, posing a significant threat to security. In this work, we take the anti-forensic capabilities into consideration, first proposing an end-to-end training framework for anti-forensic image inpainting named SafePaint. Specifically, we innovatively formulate image inpainting as two major tasks: semantically plausible content completion and region-wise optimization. The former is similar to current inpainting methods that aim to restore the missing regions of corrupted images. The latter, through domain adaptation, endeavors to reconcile the discrepancies between the inpainted region and the unaltered area to achieve anti-forensic goals. Through comprehensive theoretical analysis, we validate the effectiveness of domain adaptation for anti-forensic performance. Furthermore, we meticulously craft a region-wise separated attention (RWSA) module, which not only aligns with our objective of anti-forensics but also enhances the performance of the model. Extensive qualitative and quantitative evaluations show our approach achieves comparable results to existing image inpainting methods while offering anti-forensic capabilities not available in other methods.
SafePaint: Anti-forensic Image Inpainting with Domain Adaptation
[ "Dunyun Chen", "Xin Liao", "Xiaoshuai Wu", "Shiwei Chen" ]
Conference
oral
2404.18136
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=mUPwlZGnUs
@inproceedings{ liu2024virtual, title={Virtual Agent Positioning Driven by Personal Characteristics}, author={Jingjing Liu and Youyi Zheng and Kun Zhou}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=mUPwlZGnUs} }
When people use agent characters to travel through different spaces (such as virtual scenes and real scenes, or different game spaces), it is important to reasonably position the characters in the new scene according to their personal characteristics. In this paper, we propose a novel pipeline for relocating virtual agents in new scenarios based on their personal characteristics. We extract the characteristics of the characters (including figure, posture, social distance). Then a cost function is designed to evaluate the agent's position in the scene, which consists of a spatial term and a personalized term. Finally, a Markov Chain Monte Carlo optimization method is applied to search for the optimized solution. The results generated by our approach are evaluated through extensive user study experiments, verifying the effectiveness of our approach compared with other alternative approaches.
Virtual Agent Positioning Driven by Personal Characteristics
[ "Jingjing Liu", "Youyi Zheng", "Kun Zhou" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=mRsa7615AA
@inproceedings{ yin2024expanded, title={Expanded Convolutional Neural Network Based Look-Up Tables for High Efficient Single-Image Super-Resolution}, author={Kai Yin and Jie Shen}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=mRsa7615AA} }
Advanced mobile computing has led to a surge in the need for practical super-resolution (SR) techniques. The look-up table (LUT) based SR-LUT has pioneered a new avenue of research without needing hardware acceleration. Nevertheless, all preceding methods that drew inspiration from the SR-LUT framework invariably resort to interpolation and rotation techniques for diminishing the LUT size, thereby prolonging the inference time and contradicting the original objective of efficient SR. Recently, a study named EC-LUT proposed an expanded convolution method to avoid interpolation operations. However, the performance of EC-LUT regarding SR quality and LUT volume is unsatisfactory. To address these limitations, this paper proposes a novel expanded convolutional neural network (ECNN). Specifically, we further extend feature fusion to the feature channel dimension to enhance mapping ability. In addition, our approach reduces the number of single indexed pixels to just one, eliminating the need for rotation tricks and dramatically reducing the LUT size from the MB level to the KB level, thus improving cache hit rates. By leveraging these improvements, we can stack expanded convolutional layers to form an ECNN, with each layer convertible to LUTs during inference. Experiments show that our method improves the overall performance of the upper limit of LUT based methods. For example, under comparable SR quality conditions, our model achieves state-of-the-art performance in speed and LUT volume.
Expanded Convolutional Neural Network Based Look-Up Tables for High Efficient Single-Image Super-Resolution
[ "Kai Yin", "Jie Shen" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=mQcebyha6k
@inproceedings{ han2024exploring, title={Exploring Stable Meta-optimization Patterns via Differentiable Reinforcement Learning for Few-shot Classification}, author={Zheng Han and Xiaobin Zhu and Chun Yang and Hongyang Zhou and Jingyan Qin and Xu-Cheng Yin}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=mQcebyha6k} }
Existing few-shot learning methods generally focus on designing exquisite structures of meta-learners for learning task-specific priors to improve the discriminative ability of global embeddings. However, they often ignore the importance of learning stability in meta-training, making it difficult to obtain a relatively optimal model. From this key observation, we propose an innovative generic differentiable Reinforcement Learning (RL) strategy for few-shot classification. It aims to explore stable meta-optimization patterns in meta-training by learning generalizable optimizations for producing task-adaptive embeddings. Accordingly, our differentiable RL strategy models the embedding procedure of feature transformation layers in the meta-learner to optimize the gradient flow implicitly. Also, we propose a memory module to associate historical and current task states and actions for exploring inter-task similarity. Notably, our RL-based strategy can be easily extended to various backbones. In addition, we propose a novel task state encoder to encode task representation, which fully explores inner-task similarities between the support set and query set. Extensive experiments verify that our approach can improve the performance of different backbones and achieve promising results against state-of-the-art methods in few-shot classification. Our code is available at an anonymous site: https://anonymous.4open.science/r/db8f0c012/.
Exploring Stable Meta-optimization Patterns via Differentiable Reinforcement Learning for Few-shot Classification
[ "Zheng Han", "Xiaobin Zhu", "Chun Yang", "Hongyang Zhou", "Jingyan Qin", "Xu-Cheng Yin" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=mL0KvSwXzk
@inproceedings{ wu2024pastnet, title={PastNet: Introducing Physical Inductive Biases for Spatio-temporal Video Prediction}, author={Hao Wu and Fan Xu and Chong Chen and Xian-Sheng Hua and Xiao Luo and Haixin Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=mL0KvSwXzk} }
In this paper, we investigate the challenging spatio-temporal video prediction task, which involves generating future video frames based on historical spatio-temporal observation streams. Existing approaches typically utilize external information such as semantic maps to improve video prediction accuracy, but often neglect the inherent physical knowledge embedded within videos. Worse still, their high computational costs could impede their application to high-resolution videos. To address these constraints, we introduce a novel framework called \underline{P}hysics-\underline{a}ssisted \underline{S}patio-\underline{t}emporal \underline{Net}work (PastNet) for high-quality video prediction. The core of PastNet lies in incorporating a spectral convolution operator in the Fourier domain, which efficiently introduces inductive biases from the underlying physical laws. Additionally, we employ a memory bank with the estimated intrinsic dimensionality to discretize local features during the processing of complex spatio-temporal signals, thereby reducing computational costs and facilitating efficient high-resolution video prediction. Extensive experiments on various widely-used spatio-temporal video benchmarks demonstrate the effectiveness and efficiency of the proposed PastNet compared with a range of state-of-the-art methods, particularly in high-resolution scenarios.
PastNet: Introducing Physical Inductive Biases for Spatio-temporal Video Prediction
[ "Hao Wu", "Fan Xu", "Chong Chen", "Xian-Sheng Hua", "Xiao Luo", "Haixin Wang" ]
Conference
poster
2305.11421
[ "https://github.com/easylearningscores/pastnet" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=mKUEzoKyOq
@inproceedings{ li2024controltalker, title={Control-Talker: A Rapid-Customization Talking Head Generation Method for Multi-Condition Control and High-Texture Enhancement}, author={Yiding Li and Lingyun Yu and Li Wang and Hongtao Xie}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=mKUEzoKyOq} }
In recent years, the field of talking head generation has made significant strides. However, the need for substantial computational resources for model training, coupled with a scarcity of high-quality video data, poses challenges for the rapid customization of the model to a specific individual. Additionally, existing models usually only support single-modal control, lacking the ability to generate vivid facial expressions and controllable head poses based on multiple conditions such as audio, video, etc. These limitations restrict the models' widespread application. In this paper, we introduce a two-stage method called Control-Talker to achieve rapid customization of identity in a talking head model and high-quality generation based on multimodal conditions. Specifically, we divide the training process into two stages: a prior learning stage and an identity rapid-customization stage. 1) In the prior learning stage, we leverage a diffusion-based model pre-trained on a high-quality image dataset to acquire a robust controllable facial prior. Meanwhile, we innovatively propose a high-frequency ControlNet structure to enhance the fidelity of the synthesized results. This structure adeptly extracts a high-frequency feature map from the source image, serving as a facial texture prior, thereby excellently preserving the facial texture of the source image. 2) In the identity rapid-customization stage, the identity is fixed by fine-tuning the U-Net part of the diffusion model on merely several images of a specific individual. The entire fine-tuning process for identity customization can be completed within approximately ten minutes, thereby significantly reducing training costs. Further, we propose a unified driving method for both audio and video, utilizing FLAME-3DMM as an intermediary representation. This method equips the model with the ability to precisely control expressions, poses, and lighting under multiple conditions, significantly broadening the application fields of the talking head model. Extensive experiments and visual results demonstrate that our method outperforms other state-of-the-art models. Additionally, our model demonstrates reduced training costs and lower data requirements.
Control-Talker: A Rapid-Customization Talking Head Generation Method for Multi-Condition Control and High-Texture Enhancement
[ "Yiding Li", "Lingyun Yu", "Li Wang", "Hongtao Xie" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=mH2i0tACmy
@inproceedings{ yan2024prototypical, title={Prototypical Prompting for Text-to-image Person Re-identification}, author={Shuanglin Yan and Jun Liu and Neng Dong and Liyan Zhang and Jinhui Tang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=mH2i0tACmy} }
In this paper, we study the problem of Text-to-Image Person Re-identification (TIReID), which aims to find images of the same identity described by a text sentence from a pool of candidate images. Benefiting from Vision-Language Pre-training, such as CLIP (Contrastive Language-Image Pretraining), the TIReID techniques have achieved remarkable progress recently. However, most existing methods only focus on instance-level matching and ignore identity-level matching, which involves associating multiple images and texts belonging to the same person. In this paper, we propose a novel prototypical prompting framework (Propot) designed to simultaneously model instance-level and identity-level matching for TIReID. Our Propot transforms the identity-level matching problem into a prototype learning problem, aiming to learn identity-enriched prototypes. Specifically, Propot works by ‘initialize, adapt, enrich, then aggregate’. We first use CLIP to generate high-quality initial prototypes. Then, we propose a domain-conditional prototypical prompting (DPP) module to adapt the prototypes to the TIReID task using task-related information. Further, we propose an instance-conditional prototypical prompting (IPP) module to update prototypes conditioned on intra-modal and inter-modal instances to ensure prototype diversity. Finally, we design an adaptive prototype aggregation module to aggregate these prototypes, generating final identity-enriched prototypes. With identity-enriched prototypes, we diffuse its rich identity information to instances through prototype-to-instance contrastive loss to facilitate identity-level matching. Extensive experiments conducted on three benchmarks demonstrate the superiority of Propot compared to existing TIReID methods.
Prototypical Prompting for Text-to-image Person Re-identification
[ "Shuanglin Yan", "Jun Liu", "Neng Dong", "Liyan Zhang", "Jinhui Tang" ]
Conference
poster
2409.09427
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=mFy8n4Gdc9
@inproceedings{ wu2024hypergraph, title={Hypergraph Multi-modal Large Language Model: Exploiting {EEG} and Eye-tracking Modalities to Evaluate Heterogeneous Responses for Video Understanding}, author={Minghui Wu and Chenxu Zhao and Anyang Su and Donglin Di and Tianyu Fu and Da An and Min He and Ya Gao and Meng Ma and Kun Yan and Ping Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=mFy8n4Gdc9} }
Understanding of video creativity and content often varies among individuals, with differences in focal points and cognitive levels across different ages, experiences, and genders. There is currently a lack of research in this area, and most existing benchmarks suffer from several drawbacks: 1) a limited number of modalities and answers with restrictive length; 2) the content and scenarios within the videos are excessively monotonous, transmitting allegories and emotions that are overly simplistic. To bridge the gap to real-world applications, we introduce a large-scale Video $\textbf{S}$ubjective $\textbf{M}$ulti-modal $\textbf{E}$valuation dataset, namely Video-SME. Specifically, we collected real changes in Electroencephalographic (EEG) and eye-tracking regions from different demographics while they viewed identical video content. Utilizing this multi-modal dataset, we developed tasks and protocols to analyze and evaluate the extent of cognitive understanding of video content among different users. Along with the dataset, we designed a $\textbf{H}$ypergraph $\textbf{M}$ulti-modal $\textbf{L}$arge $\textbf{L}$anguage $\textbf{M}$odel (HMLLM) to explore the associations among different demographics, video elements, EEG and eye-tracking indicators. HMLLM could bridge semantic gaps across rich modalities and integrate information beyond different modalities to perform logical reasoning. Extensive experimental evaluations on Video-SME and other additional video-based generative performance benchmarks demonstrate the effectiveness of our method. The code and dataset are available at https://github.com/mininglamp-MLLM/HMLLM
Hypergraph Multi-modal Large Language Model: Exploiting EEG and Eye-tracking Modalities to Evaluate Heterogeneous Responses for Video Understanding
[ "Minghui Wu", "Chenxu Zhao", "Anyang Su", "Donglin Di", "Tianyu Fu", "Da An", "Min He", "Ya Gao", "Meng Ma", "Kun Yan", "Ping Wang" ]
Conference
oral
2407.08150
[ "https://github.com/mininglamp-mllm/hmllm" ]
https://huggingface.co/papers/2407.08150
0
0
0
11
[]
[]
[]
[]
[]
[]
1
null
https://openreview.net/forum?id=mFhwB1hmLK
@inproceedings{ yao2024qebev, title={{QE}-{BEV}: Query Evolution for Bird's Eye View Object Detection in Varied Contexts}, author={Jiawei Yao and Yingxin Lai and Hongrui Kou and Tong Wu and Ruixi Liu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=mFhwB1hmLK} }
3D object detection plays a pivotal role in autonomous driving and robotics, demanding precise interpretation of Bird’s Eye View (BEV) images. The dynamic nature of real-world environments necessitates the use of dynamic query mechanisms in 3D object detection to adaptively capture and process the complex spatio-temporal relationships present in these scenes. However, prior implementations of dynamic queries have often faced difficulties in effectively leveraging these relationships, particularly when it comes to integrating temporal information in a computationally efficient manner. Addressing this limitation, we introduce a framework utilizing a dynamic query evolution strategy, which harnesses K-means clustering and Top-K attention mechanisms for refined spatio-temporal data processing. By dynamically segmenting the BEV space and prioritizing key features through Top-K attention, our model achieves a real-time, focused analysis of pertinent scene elements. Our extensive evaluation on the nuScenes and Waymo datasets showcases a marked improvement in detection accuracy, setting a new benchmark in the domain of query-based BEV object detection. Our dynamic query evolution strategy has the potential to push the boundaries of current BEV methods with enhanced adaptability and computational efficiency. Project page: https://github.com/Jiawei-Yao0812/QE-BEV
QE-BEV: Query Evolution for Bird's Eye View Object Detection in Varied Contexts
[ "Jiawei Yao", "Yingxin Lai", "Hongrui Kou", "Tong Wu", "Ruixi Liu" ]
Conference
poster
2310.05989
[ "https://github.com/jiawei-yao0812/qe-bev" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=mAzGIYpLka
@inproceedings{ li2024boosting, title={Boosting Non-causal Semantic Elimination: An Unconventional Harnessing of {LVM} for Open-World Deepfake Interpretation}, author={Zhaoyang Li and Zhu Teng and Baopeng Zhang and Jianping Fan}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=mAzGIYpLka} }
The rapid advancement of generation methods has sparked significant concerns about potential misuse, emphasizing the urgency to detect new types of forgeries in open-world settings. Although pioneering works have explored the classification of open-world deepfakes (OW-DF), they neglect the influence of new forgery techniques, which struggle to handle a greater variety of manipulable objects and increasingly realistic artifacts. To align research with the evolving technologies of forgery, we propose a new task named Open-World Deepfake Interpretation (OW-DFI). This task involves the localization of imperceptible artifacts across diverse manipulated objects and deciphering forgery methods, especially new forgery techniques. To this end, we leverage non-causal semantics from large visual models (LVMs) and eliminate them from the nuanced manipulated artifacts. Our proposed model includes Semantic Intervention Learning (SIL) and Correlation-based Incremental Learning (CIL). SIL enhances the inconsistency of forgery artifacts with refined semantics from LVMs, while CIL combats catastrophic forgetting and semantic overfitting through an inter-forgery inheritance transpose and a targeted semantic intervention. Exploiting LVMs, our proposed method adopts an unconventional strategy that aligns with the semantic direction of LVMs, moving beyond just uncovering limited forgery-related features for deepfake detection. To assess the effectiveness of our approach in discovering new forgeries, we construct an Open-World Deepfake Interpretation (OW-DFI) benchmark and conduct experiments in an incremental form. Comprehensive experiments demonstrate our method's superiority on the OW-DFI benchmark, showcasing outstanding performance in localizing forgeries and decoding new forgery techniques. The source code and benchmark will be made publicly accessible on [website].
Boosting Non-causal Semantic Elimination: An Unconventional Harnessing of LVM for Open-World Deepfake Interpretation
[ "Zhaoyang Li", "Zhu Teng", "Baopeng Zhang", "Jianping Fan" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=mAQ2fK2myX
@inproceedings{ guo2024unseen, title={Unseen No More: Unlocking the Potential of {CLIP} for Generative Zero-shot {HOI} Detection}, author={Yixin Guo and Yu Liu and Jianghao Li and Weimin Wang and Qi Jia}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=mAQ2fK2myX} }
A zero-shot human-object interaction (HOI) detector is capable of generalizing to HOI categories not encountered during training. Inspired by the impressive zero-shot capabilities offered by CLIP, the latest methods strive to leverage CLIP embeddings for improving zero-shot HOI detection. However, these embedding-based methods train the classifier on seen classes only, inevitably resulting in seen-unseen confusion of the model during testing. Besides, we find that using prompt-tuning and adapters further increases the gap between seen and unseen accuracy. To tackle this challenge, we present the first generation-based model using CLIP for zero-shot HOI detection, coined HOIGen. It unlocks the potential of CLIP for feature generation instead of feature extraction only. To achieve it, we develop a CLIP-injected feature generator in accordance with the generation of human, object and union features. Then, we extract realistic features of seen samples and mix them with synthetic features together, allowing the model to train seen and unseen classes jointly. To enrich the HOI scores, we construct a generative prototype bank in a pairwise HOI recognition branch, and a multi-knowledge prototype bank in an image-wise HOI recognition branch, respectively. Extensive experiments on the HICO-DET benchmark demonstrate our HOIGen achieves superior performance for both seen and unseen classes under various zero-shot settings, compared with other top-performing methods.
Unseen No More: Unlocking the Potential of CLIP for Generative Zero-shot HOI Detection
[ "Yixin Guo", "Yu Liu", "Jianghao Li", "Weimin Wang", "Qi Jia" ]
Conference
poster
2408.05974
[ "https://github.com/soberguo/hoigen" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=m83dD4v0SZ
@inproceedings{ sun2024rethinking, title={Rethinking Image Editing Detection in the Era of Generative {AI} Revolution}, author={Zhihao Sun and Haipeng Fang and Juan Cao and Xinying Zhao and Danding Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=m83dD4v0SZ} }
Considering that image editing and manipulation technologies pose significant threats to the authenticity and security of image content, research on image regional manipulation detection has always been a critical issue. The accelerated advancement of generative AI significantly enhances the viability and effectiveness of generative regional editing methods and has led to their gradual replacement of traditional image editing tools or algorithms. However, current research primarily focuses on traditional image tampering, and there remains a lack of a comprehensive dataset containing images edited with abundant and advanced generative regional editing methods. We endeavor to fill this vacancy by constructing the GRE dataset, a large-scale generative regional editing detection dataset with the following advantages: 1) Integration of a logical and simulated editing pipeline, leveraging multiple large models in various modalities. 2) Inclusion of various editing approaches with distinct characteristics. 3) Provision of comprehensive benchmark and evaluation of SOTA methods across related domains. 4) Analysis of the GRE dataset from multiple dimensions including necessity, rationality, and diversity. Extensive experiments and in-depth analysis demonstrate that this larger and more comprehensive dataset will significantly enhance the development of detection methods for generative editing.
Rethinking Image Editing Detection in the Era of Generative AI Revolution
[ "Zhihao Sun", "Haipeng Fang", "Juan Cao", "Xinying Zhao", "Danding Wang" ]
Conference
poster
2311.17953
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=m2Ilu6XyV8
@inproceedings{ yu2024gaussiantalker, title={GaussianTalker: Speaker-specific Talking Head Synthesis via 3D Gaussian Splatting}, author={Hongyun Yu and Zhan Qu and Qihang Yu and Jianchuan Chen and Zhonghua Jiang and Zhiwen Chen and Shengyu Zhang and Jimin Xu and Fei Wu and chengfei lv and Gang Yu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=m2Ilu6XyV8} }
Recent works on audio-driven talking head synthesis using Neural Radiance Fields (NeRF) have achieved impressive results. However, due to inadequate pose and expression control caused by NeRF implicit representation, these methods still have some limitations, such as unsynchronized or unnatural lip movements, and visual jitter and artifacts. In this paper, we propose GaussianTalker, a novel method for audio-driven talking head synthesis based on 3D Gaussian Splatting. With the explicit representation property of 3D Gaussians, intuitive control of the facial motion is achieved by binding Gaussians to 3D facial models. GaussianTalker consists of two modules, Speaker-specific Motion Translator and Dynamic Gaussian Renderer. Speaker-specific Motion Translator achieves accurate lip movements specific to the target speaker through universalized audio feature extraction and customized lip motion generation. Dynamic Gaussian Renderer introduces Speaker-specific BlendShapes to enhance facial detail representation via a latent pose, delivering stable and realistic rendered videos. Extensive experimental results suggest that GaussianTalker outperforms existing state-of-the-art methods in talking head synthesis, delivering precise lip synchronization and exceptional visual quality. Our method achieves rendering speeds of 130 FPS on NVIDIA RTX4090 GPU, significantly exceeding the threshold for real-time rendering performance, and can potentially be deployed on other hardware platforms.
GaussianTalker: Speaker-specific Talking Head Synthesis via 3D Gaussian Splatting
[ "Hongyun Yu", "Zhan Qu", "Qihang Yu", "Jianchuan Chen", "Zhonghua Jiang", "Zhiwen Chen", "Shengyu Zhang", "Jimin Xu", "Fei Wu", "chengfei lv", "Gang Yu" ]
Conference
poster
2404.14037
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=m1qrB9KSYD
@inproceedings{ hong2024evolutionaware, title={Evolution-aware {VA}riance ({EVA}) Coreset Selection for Medical Image Classification}, author={Yuxin Hong and Xiao Zhang and Xin Zhang and Joey Tianyi Zhou}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=m1qrB9KSYD} }
In the medical field, managing high-dimensional massive medical imaging data and performing reliable medical analysis from it is a critical challenge, especially in resource-limited environments such as remote medical facilities and mobile devices. This necessitates effective dataset compression techniques to reduce storage, transmission, and computational cost. However, existing coreset selection methods are primarily designed for natural image datasets, and exhibit doubtful effectiveness when applied to medical image datasets due to challenges such as intra-class variation and inter-class similarity. In this paper, we propose a novel coreset selection strategy termed as Evolution-aware VAriance (EVA), which captures the evolutionary process of model training through a dual-window approach and reflects the fluctuation of sample importance more precisely through variance measurement. Extensive experiments on medical image datasets demonstrate the effectiveness of our strategy over previous SOTA methods, especially at high compression rates. EVA achieves 98.27\% accuracy with only 10\% training data, compared to 97.20\% for the full training set. None of the baseline methods compared can exceed Random at 5\% selection rate, while EVA outperforms Random by 5.61\%, showcasing its potential for efficient medical image analysis.
Evolution-aware VAriance (EVA) Coreset Selection for Medical Image Classification
[ "Yuxin Hong", "Xiao Zhang", "Xin Zhang", "Joey Tianyi Zhou" ]
Conference
oral
2406.05677
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=lpmQaseCZW
@inproceedings{ liu2024compgs, title={Comp{GS}: Efficient 3D Scene Representation via Compressed Gaussian Splatting}, author={Xiangrui Liu and Xinju Wu and Pingping Zhang and Shiqi Wang and Zhu Li and Sam Kwong}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=lpmQaseCZW} }
Gaussian splatting, renowned for its exceptional rendering quality and efficiency, has emerged as a prominent technique in 3D scene representation. However, the substantial data volume of Gaussian splatting impedes its practical utility in real-world applications. Herein, we propose an efficient 3D scene representation, named Compressed Gaussian Splatting (CompGS), which harnesses compact Gaussian primitives for faithful 3D scene modeling with a remarkably reduced data size. To ensure the compactness of Gaussian primitives, we devise a hybrid primitive structure that captures predictive relationships between each other. Then, we exploit a small set of anchor primitives for prediction, allowing the majority of primitives to be encapsulated into highly compact residual forms. Moreover, we develop a rate-constrained optimization scheme to eliminate redundancies within such hybrid primitives, steering our CompGS towards an optimal trade-off between bitrate consumption and representation efficacy. Experimental results show that the proposed CompGS significantly outperforms existing methods, achieving superior compactness in 3D scene representation without compromising model accuracy and rendering quality. Our code will be released on GitHub for further research.
CompGS: Efficient 3D Scene Representation via Compressed Gaussian Splatting
[ "Xiangrui Liu", "Xinju Wu", "Pingping Zhang", "Shiqi Wang", "Zhu Li", "Sam Kwong" ]
Conference
poster
2404.09458
[ "" ]
https://huggingface.co/papers/2404.09458
0
6
0
6
[]
[]
[]
[]
[]
[]
1
null
https://openreview.net/forum?id=loG3nGk7p7
@inproceedings{ wang2024embedding, title={Embedding an Ethical Mind: Aligning Text-to-Image Synthesis via Lightweight Value Optimization}, author={Xingqi Wang and Xiaoyuan Yi and Xing Xie and Jia Jia}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=loG3nGk7p7} }
Recent advancements in diffusion models trained on large-scale data have enabled the generation of indistinguishable human-level images, yet they often produce harmful content misaligned with human values, e.g., social bias, and offensive content. Despite extensive research on Large Language Models (LLMs), the challenge of Text-to-Image (T2I) model alignment remains largely unexplored. Addressing this problem, we propose LiVO (Lightweight Value Optimization), a novel lightweight method for aligning T2I models with human values. LiVO only optimizes a plug-and-play value encoder to integrate a specified value principle with the input prompt, allowing the control of generated images over both semantics and values. Specifically, we design a diffusion model-tailored preference optimization loss, which theoretically approximates the Bradley-Terry model used in LLM alignment but provides a more flexible trade-off between image quality and value conformity. To optimize the value encoder, we also develop a framework to automatically construct a text-image preference dataset of 86k (prompt, aligned image, violating image, value principle) samples. Without updating most model parameters and through adaptive value selection from the input prompt, LiVO significantly reduces harmful outputs and achieves faster convergence, surpassing several strong baselines and taking an initial step towards ethically aligned T2I models. Warning: This paper involves descriptions and images depicting discriminatory, pornographic, bloody, and horrific scenes, which some readers may find offensive or disturbing.
Embedding an Ethical Mind: Aligning Text-to-Image Synthesis via Lightweight Value Optimization
[ "Xingqi Wang", "Xiaoyuan Yi", "Xing Xie", "Jia Jia" ]
Conference
poster
[ "https://github.com/achernarwang/LiVO" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=llsOp6DhLm
@inproceedings{ zheng2024metaenzyme, title={MetaEnzyme: Meta Pan-Enzyme Learning for Task-Adaptive Redesign}, author={Jiangbin Zheng and Han Zhang and Qianqing Xu and An-Ping Zeng and Stan Z. Li}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=llsOp6DhLm} }
Enzyme design plays a crucial role in both industrial production and biology. However, this field faces challenges due to the lack of comprehensive benchmarks and the complexity of enzyme design tasks, leading to a dearth of systematic research. Consequently, computational enzyme design is relatively overlooked within the broader protein domain and remains in its early stages. In this work, we address these challenges by introducing MetaEnzyme, a staged and unified enzyme design framework. We begin by employing a cross-modal structure-to-sequence transformation architecture, as the feature-driven starting point to obtain initial robust protein representation. Subsequently, we leverage domain adaptive techniques to generalize specific enzyme design tasks under low-resource conditions. MetaEnzyme focuses on three fundamental low-resource enzyme redesign tasks: functional design (FuncDesign), mutation design (MutDesign), and sequence generation design (SeqDesign). Through novel unified paradigm and enhanced representation capabilities, MetaEnzyme demonstrates adaptability to diverse enzyme design tasks, yielding outstanding results. Wet lab experiments further validate these findings, reinforcing the efficacy of the redesign process.
MetaEnzyme: Meta Pan-Enzyme Learning for Task-Adaptive Redesign
[ "Jiangbin Zheng", "Han Zhang", "Qianqing Xu", "An-Ping Zeng", "Stan Z. Li" ]
Conference
poster
2408.10247
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=lkOB0hBLS5
@inproceedings{ feng2024unifying, title={Unifying Spike Perception and Prediction: A Compact Spike Representation Model using Multi-scale Correlation}, author={Kexiang Feng and Chuanmin Jia and Siwei Ma and Wen Gao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=lkOB0hBLS5} }
The widespread adoption of bio-inspired cameras has catalyzed the development of spike-based intelligent applications. Although its innovative imaging principle allows for functionality in extreme scenarios, the intricate nature of spike signals poses processing challenges to achieve desired performance. Traditional methods struggle to deliver visual perception and temporal prediction simultaneously, and they lack the flexibility needed for diverse intelligent applications. To address this problem, we analyze the spatio-temporal correlations between spike information at different temporal scales. A novel spike processing method is introduced for compact spike representations that utilizes intra-scale correlation for higher predictive accuracy. Additionally, we propose a multi-scale spatio-temporal aggregation unit (MSTAU) that further leverages inter-scale correlation to achieve efficient perception and precise prediction. Experimental results show noticeable improvements in scene reconstruction and object classification, with increases of **3.49dB** in scene reconstruction quality and **2.20%** in accuracy, respectively. Besides, the proposed method accommodates different visual applications by switching analysis models, offering a novel perspective for spike processing.
Unifying Spike Perception and Prediction: A Compact Spike Representation Model using Multi-scale Correlation
[ "Kexiang Feng", "Chuanmin Jia", "Siwei Ma", "Wen Gao" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=lfKG8o5LXv
@inproceedings{ hao2024egodt, title={Ego3{DT}: Tracking Every 3D Object in Ego-centric Videos}, author={Shengyu Hao and Wenhao Chai and Zhonghan Zhao and Meiqi Sun and Wendi Hu and Jieyang Zhou and Yixian Zhao and Qi Li and Yizhou Wang and Xi Li and Gaoang Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=lfKG8o5LXv} }
The growing interest in embodied intelligence has brought ego-centric perspectives to contemporary research. One significant challenge within this realm is the accurate localization and tracking of objects in ego-centric videos, primarily due to the substantial variability in viewing angles. Addressing this issue, this paper introduces a novel zero-shot approach for the 3D reconstruction and tracking of all objects from the ego-centric video. We present Ego3DT, a novel framework that initially identifies and extracts detection and segmentation information of objects within the ego environment. Utilizing information from adjacent video frames, Ego3DT dynamically constructs a 3D scene of the ego view using a pre-trained 3D scene reconstruction model. Additionally, we have innovated a dynamic hierarchical association mechanism for creating stable 3D tracking trajectories of objects in ego-centric videos. Moreover, the efficacy of our approach is corroborated by extensive experiments on two newly compiled datasets, with 1.04× - 2.90× in HOTA, showcasing the robustness and accuracy of our method in diverse ego-centric scenarios.
Ego3DT: Tracking Every 3D Object in Ego-centric Videos
[ "Shengyu Hao", "Wenhao Chai", "Zhonghan Zhao", "Meiqi Sun", "Wendi Hu", "Jieyang Zhou", "Yixian Zhao", "Qi Li", "Yizhou Wang", "Xi Li", "Gaoang Wang" ]
Conference
poster
2410.08530
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=lbuxSx6Xzn
@inproceedings{ weili2024infusion, title={Infusion: Preventing Customized Text-to-Image Diffusion from Overfitting}, author={Zeng Weili and Yichao Yan and Qi Zhu and Zhuo Chen and Pengzhi Chu and Weiming Zhao and Xiaokang Yang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=lbuxSx6Xzn} }
Text-to-image (T2I) customization aims to create images that embody specific visual concepts delineated in textual descriptions. However, existing works still face a main challenge, **concept overfitting**. To tackle this challenge, we first analyze overfitting, categorizing it into concept-agnostic overfitting, which undermines non-customized concept knowledge, and concept-specific overfitting, which is confined to customize on limited modalities, i.e., backgrounds, layouts, styles. To evaluate the overfitting degree, we further introduce two metrics, i.e., Latent Fisher divergence and Wasserstein metric to measure the distribution changes of non-customized and customized concept respectively. Drawing from the analysis, we propose Infusion, a T2I customization method that enables the learning of target concepts to avoid being constrained by limited training modalities, while preserving non-customized knowledge. Remarkably, Infusion achieves this feat with remarkable efficiency, requiring a mere **11KB** of trained parameters. Extensive experiments also demonstrate that our approach outperforms state-of-the-art methods in both single and multi-concept customized generation.
Infusion: Preventing Customized Text-to-Image Diffusion from Overfitting
[ "Zeng Weili", "Yichao Yan", "Qi Zhu", "Zhuo Chen", "Pengzhi Chu", "Weiming Zhao", "Xiaokang Yang" ]
Conference
poster
2404.14007
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=lZWaVy4IiH
@inproceedings{ liu2024arondight, title={Arondight: Red Teaming Large Vision Language Models with Auto-generated Multi-modal Jailbreak Prompts}, author={Yi Liu and Chengjun Cai and Xiaoli ZHANG and Xingliang YUAN and Cong Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=lZWaVy4IiH} }
Large Vision Language Models (VLMs) extend and enhance the perceptual abilities of Large Language Models (LLMs). Despite offering new possibilities for LLM applications, these advancements raise significant security and ethical concerns, particularly regarding the generation of harmful content. While LLMs have undergone extensive security evaluations with the aid of red teaming frameworks, VLMs currently lack a well-developed one. To fill this gap, we introduce Arondight, a standardized red team framework tailored specifically for VLMs. Arondight is dedicated to resolving issues related to the absence of visual modality and inadequate diversity encountered when transitioning existing red teaming methodologies from LLMs to VLMs. Our framework features an automated multi-modal jailbreak attack, wherein visual jailbreak prompts are produced by a red team VLM, and textual prompts are generated by a red team LLM guided by a reinforcement learning agent. To enhance the comprehensiveness of VLM security evaluation, we integrate entropy bonuses and novelty reward metrics. These elements incentivize the RL agent to guide the red team LLM in creating a wider array of diverse and previously unseen test cases. Our evaluation of ten cutting-edge VLMs exposes significant security vulnerabilities, particularly in generating toxic images and aligning multi-modal prompts. In particular, our Arondight achieves an average attack success rate of 84.5\% on GPT-4 in all fourteen prohibited scenarios defined by OpenAI in terms of generating toxic text. For a clearer comparison, we also categorize existing VLMs based on their safety levels and provide corresponding reinforcement recommendations. Our multimodal prompt dataset and red team code will be released after ethics committee approval. CONTENT WARNING: THIS PAPER CONTAINS HARMFUL MODEL RESPONSES.
Arondight: Red Teaming Large Vision Language Models with Auto-generated Multi-modal Jailbreak Prompts
[ "Yi Liu", "Chengjun Cai", "Xiaoli ZHANG", "Xingliang YUAN", "Cong Wang" ]
Conference
poster
2407.15050
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=lWo7iPsszz
@inproceedings{ liu2024disrupting, title={Disrupting Diffusion: Token-Level Attention Erasure Attack against Diffusion-based Customization}, author={Yisu Liu and Jinyang An and Wanqian Zhang and Dayan Wu and JingziGU and Zheng Lin and Weiping Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=lWo7iPsszz} }
With the development of diffusion-based customization methods like DreamBooth, individuals now have access to train the models that can generate their personalized images. Despite the convenience, malicious users have misused these techniques to create fake images, thereby triggering a privacy security crisis. In light of this, proactive adversarial attacks are proposed to protect users against customization. The adversarial examples are trained to distort the customization model's outputs and thus block the misuse. In this paper, we propose DisDiff (Disrupting Diffusion), a novel adversarial attack method to disrupt the diffusion model outputs. We first delve into the intrinsic image-text relationships, well-known as cross-attention, and empirically find that the subject-identifier token plays an important role in guiding image generation. Thus, we propose the Cross-Attention Erasure module to explicitly "erase" the indicated attention maps and disrupt the text guidance. Besides, we analyze the influence of the sampling process of the diffusion model on Projected Gradient Descent (PGD) attack and introduce a novel Merit Sampling Scheduler to adaptively modulate the perturbation updating amplitude in a step-aware manner. Our DisDiff outperforms the state-of-the-art methods by 12.75% of FDFR scores and 7.25% of ISM scores across two facial benchmarks and two commonly used prompts on average.
Disrupting Diffusion: Token-Level Attention Erasure Attack against Diffusion-based Customization
[ "Yisu Liu", "Jinyang An", "Wanqian Zhang", "Dayan Wu", "JingziGU", "Zheng Lin", "Weiping Wang" ]
Conference
poster
2405.20584
[ "https://github.com/riolys/disdiff" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=lLtaUH3s8N
@inproceedings{ deng2024mmdrfuse, title={{MMDRF}use: Distilled Mini-Model with Dynamic Refresh for Multi-Modality Image Fusion}, author={Yanglin Deng and Tianyang Xu and Chunyang Cheng and Xiaojun Wu and Josef Kittler}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=lLtaUH3s8N} }
In recent years, Multi-Modality Image Fusion (MMIF) has been applied to many fields, which has attracted many scholars to endeavour to improve the fusion performance. However, the prevailing focus has predominantly been on the architecture design, rather than the training strategies. As a low-level vision task, image fusion is supposed to quickly deliver output images for observing and supporting downstream tasks. Thus, superfluous computational and storage overheads should be avoided. In this work, a lightweight Distilled Mini-Model with a Dynamic Refresh strategy (MMDRFuse) is proposed to achieve this objective. To pursue model parsimony, an extremely small convolutional network with a total of 113 trainable parameters (0.44 KB) is obtained by three carefully designed supervisions. First, digestible distillation is constructed by emphasising external spatial feature consistency, delivering soft supervision with balanced details and saliency for the target network. Second, we develop a comprehensive loss to balance the pixel, gradient, and perception clues from the source images. Third, an innovative dynamic refresh training strategy is used to collaborate history parameters and current supervision during training, together with an adaptive adjust function to optimise the fusion network. Extensive experiments on several public datasets demonstrate that our method exhibits promising advantages in terms of model efficiency and complexity, with superior performance in multiple image fusion tasks and downstream pedestrian detection application.
MMDRFuse: Distilled Mini-Model with Dynamic Refresh for Multi-Modality Image Fusion
[ "Yanglin Deng", "Tianyang Xu", "Chunyang Cheng", "Xiaojun Wu", "Josef Kittler" ]
Conference
oral
2408.15641
[ "https://github.com/yanglindeng/mmdrfuse" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=lD9A7SS4BP
@inproceedings{ mengzhen2024segment, title={Segment Anything with Precise Interaction}, author={Mengzhen Liu and Mengyu Wang and Henghui Ding and Yilong Xu and Yao Zhao and Yunchao Wei}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=lD9A7SS4BP} }
Although the Segment Anything Model (SAM) has achieved impressive results in many segmentation tasks and benchmarks, its performance noticeably deteriorates when applied to high-resolution images for high-precision segmentation, limiting its usage in many real-world applications. In this work, we explored transferring SAM into the domain of high-resolution images and proposed Pi-SAM. Compared to the original SAM and its variants, Pi-SAM demonstrates the following superiorities: **Firstly**, Pi-SAM possesses a strong perception capability for the extremely fine details in high-resolution images, enabling it to generate high-precision segmentation masks. As a result, Pi-SAM significantly surpasses previous methods on four high-resolution datasets. **Secondly**, Pi-SAM supports more precise user interactions. In addition to the native promptable ability of SAM, Pi-SAM allows users to interactively refine the segmentation predictions simply by clicking, while the original SAM fails to achieve this on high-resolution images. **Thirdly**, building upon SAM, Pi-SAM freezes all its original parameters and introduces very few additional parameters and computational costs to achieve the above performance. This ensures highly efficient model fine-tuning while also retaining the powerful semantic information contained in the original SAM.
Segment Anything with Precise Interaction
[ "Mengzhen Liu", "Mengyu Wang", "Henghui Ding", "Yilong Xu", "Yao Zhao", "Yunchao Wei" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=lAFO0SUjXD
@inproceedings{ liu2024fedbcgd, title={Fed{BCGD}: Communication-Efficient Accelerated Block Coordinate Gradient Descent for Federated Learning}, author={Junkang Liu and Fanhua Shang and Yuanyuan Liu and Hongying Liu and Yuangang Li and YunXiang Gong}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=lAFO0SUjXD} }
Although federated learning has been widely studied in recent years, there are still high overhead expenses in each communication round for large-scale models such as Vision Transformer. To lower the communication complexity, we propose a novel communication-efficient block coordinate gradient descent (FedBCGD) method. The proposed method splits model parameters into several blocks and enables each client to upload a specific parameter block during training, which can significantly reduce communication overhead. Moreover, we also develop an accelerated FedBCGD algorithm (called FedBCGD+) with client drift control and stochastic variance reduction techniques. To the best of our knowledge, this paper is the first parameter block communication work for training large-scale deep models. We also provide the convergence analysis for the proposed algorithms. Our theoretical results show that the communication complexities of our algorithms are a factor $1/N$ lower than those of existing methods, where $N$ is the number of parameter blocks, and they enjoy much faster convergence results than their counterparts. Empirical results indicate the superiority of the proposed algorithms compared to state-of-the-art algorithms.
FedBCGD: Communication-Efficient Accelerated Block Coordinate Gradient Descent for Federated Learning
[ "Junkang Liu", "Fanhua Shang", "Yuanyuan Liu", "Hongying Liu", "Yuangang Li", "YunXiang Gong" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=l8fFs4BPOq
@inproceedings{ cheng2024serial, title={Serial section microscopy image inpainting guided by axial optical flow}, author={Yiran Cheng and Bintao He and Renmin Han and Fa Zhang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=l8fFs4BPOq} }
Volume electron microscopy (vEM) is becoming a prominent technique in three-dimensional (3D) cellular visualization. vEM collects a series of two-dimensional (2D) images and reconstructs ultra-structures at the nanometer scale by rational axial interpolation between neighboring sections. However, section damage inevitably occurs in the sample preparation and imaging process due to manual operational errors or occasional mechanical failures. The damaged regions present blurry and contaminated structure information, or even local blank holes. Despite significant progress in single-image inpainting, it is still a great challenge to recover missing biological structures that satisfy 3D structural continuity among sections. In this paper, we propose an optical flow-based serial section inpainting architecture to effectively combine the 3D structure information from neighboring sections and 2D image features from surrounding regions. We design a two-stage reference generation strategy to predict a rational and detailed intermediate state image from coarse to fine. Then, a GAN-based inpainting network is adopted to integrate all reference information and guide the restoration of missing structures, while ensuring consistent distribution of pixel values across the 2D image. Extensive experimental results well demonstrate the superiority of our method over existing inpainting tools.
Serial section microscopy image inpainting guided by axial optical flow
[ "Yiran Cheng", "Bintao He", "Renmin Han", "Fa Zhang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=l64w1TI1T8
@inproceedings{ wang2024evolving, title={Evolving Storytelling: Benchmarks and Methods for New Character Customization with Diffusion Models}, author={Xiyu Wang and Yufei Wang and Satoshi Tsutsui and Weisi Lin and Bihan Wen and Alex Kot}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=l64w1TI1T8} }
Diffusion-based models for story visualization have shown promise in generating content-coherent images for storytelling tasks. However, how to effectively integrate new characters into existing narratives while maintaining character consistency remains an open problem, particularly with limited data. Two major limitations hinder the progress: (1) the absence of a suitable benchmark due to potential character leakage and inconsistent text labeling, and (2) the challenge of distinguishing between new and old characters, leading to ambiguous results. To address these challenges, we introduce the NewEpisode benchmark, comprising refined datasets designed to evaluate generative models' adaptability in generating new stories with fresh characters using just a single example story. The refined dataset involves refined text prompts and eliminates character leakage. Additionally, to mitigate the character confusion of generated results, we propose EpicEvo, a method that customizes a diffusion-based visual story generation model with a single story featuring the new characters, seamlessly integrating them into established character dynamics. EpicEvo introduces a novel adversarial character alignment module to align the generated images progressively in the diffusive process, with exemplar images of new characters, while applying knowledge distillation to prevent forgetting of characters and background details. Our evaluation quantitatively demonstrates that EpicEvo outperforms existing baselines on the NewEpisode benchmark, and qualitative studies confirm its superior customization of visual story generation in diffusion models. In summary, EpicEvo provides an effective way to incorporate new characters using only one example story, unlocking new possibilities for applications such as serialized cartoons.
Evolving Storytelling: Benchmarks and Methods for New Character Customization with Diffusion Models
[ "Xiyu Wang", "Yufei Wang", "Satoshi Tsutsui", "Weisi Lin", "Bihan Wen", "Alex Kot" ]
Conference
oral
2405.11852
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=l5UZVCwp4q
@inproceedings{ zhong2024dreamlcm, title={Dream{LCM}: Towards High Quality Text-to-3D Generation Via Latent Consistency Model}, author={Yiming Zhong and Xiaolin Zhang and Yao Zhao and Yunchao Wei}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=l5UZVCwp4q} }
Recently, the text-to-3D task has developed rapidly due to the appearance of the SDS method. However, the SDS method always generates 3D objects with poor quality due to the over-smoothing issue. This issue is attributed to two factors: 1) the DDPM single-step inference produces poor guidance gradients; 2) the randomness from the input noises and timesteps averages the details of the 3D contents. In this paper, to address the issue, we propose DreamLCM, which incorporates the Latent Consistency Model (LCM). DreamLCM leverages the powerful image generation capabilities inherent in LCM, enabling the generation of consistent and high-quality guidance, i.e., predicted noises or images. Powered by the improved guidance, the proposed method can provide accurate and detailed gradients to optimize the target 3D models. In addition, we propose two strategies to enhance the generation quality further. Firstly, we propose a guidance calibration strategy, utilizing the Euler solver to calibrate the guidance distribution and accelerate the convergence of 3D models. Secondly, we propose a dual-timestep strategy, which helps DreamLCM to increase the consistency of guidance and optimize 3D models from geometry to appearance. Experiments show that DreamLCM achieves state-of-the-art results in both generation quality and training efficiency.
DreamLCM: Towards High Quality Text-to-3D Generation Via Latent Consistency Model
[ "Yiming Zhong", "Xiaolin Zhang", "Yao Zhao", "Yunchao Wei" ]
Conference
poster
[ "https://github.com/1yimingzhong/dreamlcm" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=kvSL0OJPAL
@inproceedings{ lu2024viewconsistent, title={View-consistent Object Removal in Radiance Fields}, author={Yiren Lu and Jing Ma and Yu Yin}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=kvSL0OJPAL} }
Radiance Fields (RFs) have emerged as a crucial technology for 3D scene representation, enabling the synthesis of novel views with remarkable realism. However, as RFs become more widely used, the need for effective editing techniques that maintain coherence across different perspectives becomes evident. Current methods primarily depend on per-frame 2D image inpainting, which often fails to maintain consistency across views, thus compromising the realism of edited RF scenes. In this work, we introduce a novel RF editing pipeline that significantly enhances consistency by requiring the inpainting of only a single reference image. This image is then projected across multiple views using a depth-based approach, effectively reducing the inconsistencies observed with per-frame inpainting. However, projections typically assume photometric consistency across views, which is often impractical in real-world settings. To accommodate realistic variations in lighting and viewpoint, our pipeline adjusts the appearance of the projected views by generating multiple directional variants of the inpainted image, thereby adapting to different photometric conditions. Additionally, we present an effective and robust multi-view object segmentation approach as a valuable byproduct of our pipeline. Extensive experiments demonstrate that our method significantly surpasses existing frameworks in maintaining content consistency across views and enhancing visual quality.
View-consistent Object Removal in Radiance Fields
[ "Yiren Lu", "Jing Ma", "Yu Yin" ]
Conference
poster
2408.02100
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=ktMvfLYFas
@inproceedings{ fang2024dero, title={{DERO}: Diffusion-Model-Erasure Robust Watermarking}, author={Han Fang and Kejiang Chen and Yupeng Qiu and Zehua Ma and Weiming Zhang and Ee-Chien Chang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=ktMvfLYFas} }
The powerful denoising capability of the latent diffusion model creates new demands on the robustness of image watermarking algorithms, as attackers can erase the watermark by performing a forward diffusion, followed by backward denoising. While such denoising might introduce large distortion in the pixel domain, the image semantics remain similar. Unfortunately, most existing robust watermarking methods fail to tackle such an erasure attack since they are primarily designed for traditional channel distortions. To address such issues, this paper proposes DERO, a diffusion-model-erasure robust watermarking framework. Based on the frequency domain analysis of the diffusion model's denoising process, we design a destruction and compensation noise layer (DCNL) to approximate the distortion effects caused by latent diffusion model erasure (LDE). In detail, DCNL consists of a multi-scale low-pass filtering and a white noise compensation process, where the high-frequency components of the image are first obliterated, and then full-frequency components are enriched with white noise. Such a process broadly simulates the LDE distortions. In addition, on the extraction side, we cascade a pre-trained variational autoencoder before the decoder to extract the watermark in the latent domain, which closely adapts to the operation domain of the LDE process. Meanwhile, to improve the robustness of the decoder, we also design a latent feature augmentation (LFA) operation on the latent feature. Through end-to-end training with the DCNL and LFA, DERO can successfully achieve robustness against LDE. Our experimental results demonstrate the effectiveness and the generalizability of the proposed framework. The LDE robustness is significantly improved from 75% with SOTA methods to an impressive 96% with DERO.
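To make the described destruction-and-compensation idea concrete, here is a minimal, hedged sketch of a DCNL-like noise layer: it low-pass filters a batch of images by random-scale downsample/upsample and then adds white Gaussian noise. The scale set, noise level, and function name are illustrative assumptions, not the paper's exact layer.

```python
# Illustrative sketch of a destruction-and-compensation style noise layer:
# multi-scale low-pass filtering (downsample/upsample) followed by white-noise
# compensation, usable as a training-time distortion. Simplified stand-in only.
import random
import torch
import torch.nn.functional as F

def dcnl_like(images: torch.Tensor, scales=(2, 4, 8), sigma=0.1) -> torch.Tensor:
    """images: (B, C, H, W) in [0, 1]."""
    b, c, h, w = images.shape
    s = random.choice(scales)                       # pick a random low-pass strength
    low = F.interpolate(images, scale_factor=1.0 / s, mode="bilinear",
                        align_corners=False)        # destroy high frequencies
    low = F.interpolate(low, size=(h, w), mode="bilinear", align_corners=False)
    noise = sigma * torch.randn_like(low)           # compensate with white noise
    return (low + noise).clamp(0.0, 1.0)
```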
DERO: Diffusion-Model-Erasure Robust Watermarking
[ "Han Fang", "Kejiang Chen", "Yupeng Qiu", "Zehua Ma", "Weiming Zhang", "Ee-Chien Chang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=ksOtgeWKVv
@inproceedings{ zhu2024trust, title={Trust Prophet or Not? Taking a Further Verification Step toward Accurate Scene Text Recognition}, author={Anna Zhu and Ke Xiao and Bo Zhou and Runmin Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=ksOtgeWKVv} }
Inducing linguistic knowledge for scene text recognition (STR) is a new trend that could provide semantics for a performance boost. However, most auto-regressive STR models optimize one-step-ahead prediction (i.e., 1-gram prediction) for the character sequence, which only utilizes the previous semantic context. Most non-auto-regressive models only apply linguistic knowledge individually on the output sequence to refine the results in parallel, which does not fully utilize the visual clues concurrently. In this paper, we propose a novel language-based STR model, called ProphetSTR. It adopts an n-stream self-attention mechanism in the decoder to predict the next characters simultaneously based on the previous predictions at each time step. It utilizes both the previous semantic information and near-future clues, encouraging the model to predict more accurate results. If the prediction results for the same character at successive time steps are inconsistent, we should not trust any of them. Otherwise, they are reliable predictions. Therefore, we propose a multi-modality verification module that masks the unreliable semantic features and takes visual and trusted semantic features as input simultaneously for masked prediction recovery in parallel. It learns to align different modalities implicitly and considers both visual context and linguistic knowledge, which could generate more reliable results. Furthermore, we propose a multi-scale weight-sharing encoder for multi-granularity image representation. Extensive experiments demonstrate that ProphetSTR achieves state-of-the-art performances on many benchmarks. Further ablative studies prove the effectiveness of our proposed components.
Trust Prophet or Not? Taking a Further Verification Step toward Accurate Scene Text Recognition
[ "Anna Zhu", "Ke Xiao", "Bo Zhou", "Runmin Wang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=knp6lD5MAB
@inproceedings{ xi2024global, title={Global Patch-wise Attention is Masterful Facilitator for Masked Image Modeling}, author={Gongli Xi and Ye Tian and Mengyu Yang and Lanshan Zhang and Xirong Que and Wendong Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=knp6lD5MAB} }
Masked image modeling (MIM), as a self-supervised learning paradigm in computer vision, has gained widespread attention among researchers. MIM operates by training the model to predict masked patches of the image. Given the sparse nature of image semantics, it is imperative to devise a masking strategy that steers the model towards reconstructing high-semantic regions. However, conventional mask strategies often miss these high-semantic regions or lack alignment between the masks and semantics. To solve this, we propose the Global Patch-wise Attention (GPA) framework, a transferable and efficient framework for MIM pre-training. We observe that the attention between patches can serve as a metric for identifying high-semantic regions, which can guide the model to learn more effective representations. Therefore, we first define the global patch-wise attention via vision transformer blocks. Then we design soft-to-hard mask generation to guide the model to gradually focus on the high-semantic regions identified by GPA (GPA as a teacher). Finally, we design an extra task to predict GPA (GPA as a feature). Experiments conducted under various settings demonstrate that our proposed GPA framework enables MIM to learn better representations, which benefit the model across a wide range of downstream tasks. Furthermore, our GPA framework can be easily and effectively transferred to various MIM architectures.
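A minimal sketch of how per-patch attention scores could steer MIM masking, in the spirit of the soft-to-hard schedule described above; the blending of random scores with attention scores via a `hardness` knob, and all names, are assumptions for illustration rather than the paper's exact generator.

```python
# Illustrative sketch: bias MIM masking toward high-attention (high-semantic) patches,
# annealing from random to attention-guided ("soft to hard") via `hardness`.
import torch

def attention_guided_mask(attn_scores: torch.Tensor, mask_ratio=0.75, hardness=1.0):
    """attn_scores: (B, N) per-patch importance (e.g., averaged ViT attention).
    Returns a boolean mask (B, N), True = masked.
    hardness in [0, 1]: 0 = fully random, 1 = fully attention-driven."""
    b, n = attn_scores.shape
    num_mask = int(mask_ratio * n)
    scores = hardness * attn_scores + (1.0 - hardness) * torch.rand_like(attn_scores)
    idx = scores.topk(num_mask, dim=1).indices       # mask the highest-scoring patches
    mask = torch.zeros(b, n, dtype=torch.bool, device=attn_scores.device)
    mask.scatter_(1, idx, True)
    return mask
```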
Global Patch-wise Attention is Masterful Facilitator for Masked Image Modeling
[ "Gongli Xi", "Ye Tian", "Mengyu Yang", "Lanshan Zhang", "Xirong Que", "Wendong Wang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=kiH6PqRhwE
@inproceedings{ deng2024simclip, title={Sim{CLIP}: Refining Image-Text Alignment with Simple Prompts for Zero-/Few-shot Anomaly Detection}, author={ChengHao Deng and haote xu and Xiaolu Chen and Haodi Xu and Xiaotong Tu and Xinghao Ding and Yue Huang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=kiH6PqRhwE} }
Recently, large pre-trained vision-language models, such as CLIP, have demonstrated significant potential in zero-/few-shot anomaly detection tasks. However, existing methods not only rely on expert knowledge to manually craft extensive text prompts but also suffer from a misalignment of high-level language features with fine-level vision features in anomaly segmentation tasks. In this paper, we propose a method, named SimCLIP, which focuses on refining the aforementioned misalignment problem through bidirectional adaptation of both a Multi-Hierarchy Vision Adapter (MHVA) and Implicit Prompt Tuning (IPT). In this way, our approach requires only a simple binary prompt to accomplish anomaly classification and segmentation tasks efficiently in zero-shot scenarios. Furthermore, we introduce its few-shot extension, SimCLIP+, which integrates the relational information among vision embeddings and merges the cross-modal synergy information between vision and language to address AD tasks. Extensive experiments on two challenging datasets demonstrate the stronger generalization capacity of our method compared to the current state-of-the-art.
SimCLIP: Refining Image-Text Alignment with Simple Prompts for Zero-/Few-shot Anomaly Detection
[ "ChengHao Deng", "haote xu", "Xiaolu Chen", "Haodi Xu", "Xiaotong Tu", "Xinghao Ding", "Yue Huang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=kckxgbOogx
@inproceedings{ wang2024rppghibahierarchical, title={r{PPG}-HiBa:Hierarchical Balanced Framework for Remote Physiological Measurement}, author={Yin Wang and Hao LU and Ying-Cong Chen and Li Kuang and Mengchu Zhou and Shuiguang Deng}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=kckxgbOogx} }
Remote photoplethysmography (**rPPG**) is a promising technique for non-contact physiological signal measurement, which has great potential in health monitoring and emotion analysis. However, existing methods for the rPPG task ignore the long-tail phenomenon of physiological signal data, especially in joint training over multiple domains. In addition, we find that the long-tail problem of the physiological label (phys-label) exists across different datasets, and the long-tail problem of domain exists under the same phys-label. To tackle these problems, in this paper, we propose a **Hi**erarchical **Ba**lanced framework (rPPG-HiBa), which mitigates the bias caused by domain and phys-label imbalance. Specifically, we propose anti-spurious domain center learning tailored to learning a domain-balanced embedding space. Then, we adopt compact-aware continuity regularization to estimate phys-label-wise imbalances and construct continuity between embeddings. Extensive experiments demonstrate that our method outperforms the state-of-the-art in cross-dataset and intra-dataset settings.
rPPG-HiBa: Hierarchical Balanced Framework for Remote Physiological Measurement
[ "Yin Wang", "Hao LU", "Ying-Cong Chen", "Li Kuang", "Mengchu Zhou", "Shuiguang Deng" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=kcFQddB5AN
@inproceedings{ yu2024towards, title={Towards Emotion-enriched Text-to-Motion Generation via {LLM}-guided Limb-level Emotion Manipulating}, author={Tan Yu and Jingjing Wang and Jiawen Wang and Jiamin Luo and Guodong Zhou}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=kcFQddB5AN} }
In the literature, existing studies on text-to-motion generation (TMG) routinely focus on exploring the objective alignment of text and motion, and largely ignore the subjective emotion information, especially the limb-level emotion information. With this in mind, this paper proposes a new Emotion-enriched Text-to-Motion Generation (ETMG) task, aiming to generate motions with subjective emotion information. Further, this paper argues that injecting emotions into limbs (named intra-limb emotion injection) and ensuring the coordination and coherence of emotional motions after injecting emotion information (named inter-limb emotion disturbance) are rather important and challenging in this ETMG task. To this end, this paper proposes an LLM-guided Limb-level Emotion Manipulating (${\rm L^{3}EM}$) approach to ETMG. Specifically, this approach designs an LLM-guided intra-limb emotion modeling block to inject emotion into limbs, followed by a graph-structured inter-limb relation modeling block to ensure the coordination and coherence of emotional motions. Particularly, this paper constructs a coarse-grained Emotional Text-to-Motion (EmotionalT2M) dataset and a fine-grained Limb-level Emotional Text-to-Motion (Limb-ET2M) dataset to justify the effectiveness of the proposed ${\rm L^{3}EM}$ approach. Detailed evaluation demonstrates the significant advantage of our ${\rm L^{3}EM}$ approach to ETMG over the state-of-the-art baselines. This justifies the importance of the limb-level emotion information for ETMG and the effectiveness of our ${\rm L^{3}EM}$ approach in coherently manipulating such information.
Towards Emotion-enriched Text-to-Motion Generation via LLM-guided Limb-level Emotion Manipulating
[ "Tan Yu", "Jingjing Wang", "Jiawen Wang", "Jiamin Luo", "Guodong Zhou" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=kbdeQmw2ny
@inproceedings{ tian2024diffusion, title={Diffusion Networks with Task-Specific Noise Control for Radiology Report Generation}, author={Yuanhe Tian and Fei Xia and Yan Song}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=kbdeQmw2ny} }
Existing radiology report generation (RRG) studies mostly adopt autoregressive (AR) approaches to produce textual descriptions token-by-token for specific clinical radiographs, where they are susceptible to error propagation if irrelevant contents are generated half-way, potentially leading to imprecise presentation of diagnoses, especially when complicated abnormalities exist in radiographs. Although the non-AR paradigm, e.g., the diffusion model, provides an alternative solution that tackles the problem from AR by generating all contents in parallel, the mechanism of using Gaussian noise in existing diffusion models still has significant room for improvement when such models are used in particular circumstances, i.e., providing proper guidance in controlling noise in the diffusion process to ensure precise report generation. In this paper, we propose to conduct RRG with diffusion networks by controlling the noise with task-specific features, which leverages irrelevant visual and textual information as noise rather than the stochastic Gaussian noise, and allows the diffusion networks to filter particular information through iterative denoising, thus performing a precise and controlled report generation process. Experiments on IU X-Ray and MIMIC-CXR demonstrate the superiority of our approach compared to strong baselines and state-of-the-art solutions. Human evaluation and noise type analysis show that comprehensive noise control greatly helps diffusion networks to refine the generation of global and local report contents.
Diffusion Networks with Task-Specific Noise Control for Radiology Report Generation
[ "Yuanhe Tian", "Fei Xia", "Yan Song" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=kb9jZOgQ93
@inproceedings{ xing2024metarepair, title={MetaRepair: Learning to Repair Deep Neural Networks from Repairing Experiences}, author={Yun Xing and Qing Guo and Xiaofeng Cao and Ivor Tsang and Lei Ma}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=kb9jZOgQ93} }
Repairing deep neural networks (DNNs) to maintain their performance during deployment presents significant challenges due to the potential occurrence of unknown but common environmental corruptions. Most existing DNN repair methods only focus on repairing the DNN for each corruption separately, lacking the ability to generalize to the myriad corruptions of the ever-changing deployment environment. In this work, we propose to repair DNNs from a novel perspective, i.e., Learning to Repair (L2R), where the repair of the target DNN is realized as a general learning-to-learn, a.k.a. meta-learning, process. Specifically, observing that different corruptions are correlated in their data distributions, we propose to utilize previous DNN repair experiences as tasks for meta-learning how to repair the target corruption. With the meta-learning from different tasks, L2R learns meta-knowledge that summarizes how the DNN is repaired under various environmental corruptions. The meta-knowledge essentially serves as a general repairing prior which enables the DNN to quickly adapt to unknown corruptions, thus making our method generalizable to different types of corruptions. Practically, L2R benefits DNN repair with a general pipeline, yet tailoring meta-learning to the DNN repair context is not trivial. By re-designing the meta-learning components under the DNN repair context, we further instantiate the proposed L2R strategy into a concrete model named MetaRepair with a pragmatic assumption of experience availability. We conduct comprehensive experiments on the corrupted CIFAR-10 and tiny-ImageNet by applying MetaRepair to repair DenseNet, ConvNeXt and VAN. The experimental results confirm the superior repairing and generalization capability of our proposed L2R strategy under various environmental corruptions.
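The learning-to-repair idea can be illustrated with a first-order (Reptile-style) meta-update over per-corruption repair tasks. This is a simplified stand-in under the assumption that each past repair experience is available as a data loader and that the model already lives on the target device; it is not the actual MetaRepair algorithm.

```python
# Illustrative sketch of "learning to repair" as first-order meta-learning:
# each previously seen corruption provides an inner repair task, and the outer
# update accumulates what the inner repairs have in common. Placeholder code.
import torch

def meta_repair_step(model, corruption_loaders, loss_fn,
                     inner_lr=1e-3, inner_steps=5, meta_lr=0.1, device="cpu"):
    meta_params = {k: v.detach().clone() for k, v in model.named_parameters()}
    deltas = {k: torch.zeros_like(v) for k, v in meta_params.items()}
    for loader in corruption_loaders:                  # one repair task per corruption
        with torch.no_grad():                          # reset to current meta-weights
            for k, p in model.named_parameters():
                p.copy_(meta_params[k])
        opt = torch.optim.SGD(model.parameters(), lr=inner_lr)
        data_iter = iter(loader)                       # assumes >= inner_steps batches
        for _ in range(inner_steps):                   # inner loop: adapt/repair
            x, y = next(data_iter)
            opt.zero_grad()
            loss_fn(model(x.to(device)), y.to(device)).backward()
            opt.step()
        with torch.no_grad():
            for k, p in model.named_parameters():
                deltas[k] += p.detach() - meta_params[k]
    with torch.no_grad():                              # outer (meta) update
        for k, p in model.named_parameters():
            p.copy_(meta_params[k] + meta_lr * deltas[k] / len(corruption_loaders))
    return model
```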
MetaRepair: Learning to Repair Deep Neural Networks from Repairing Experiences
[ "Yun Xing", "Qing Guo", "Xiaofeng Cao", "Ivor Tsang", "Lei Ma" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=kaJsB2H0EX
@inproceedings{ long2024dgmamba, title={{DGM}amba: Domain Generalization via Generalized State Space Model}, author={Shaocong Long and Qianyu Zhou and Xiangtai Li and Xuequan Lu and Chenhao Ying and Yuan Luo and Lizhuang Ma and Shuicheng YAN}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=kaJsB2H0EX} }
Domain generalization (DG) aims at solving distribution shift problems in various scenes. Existing approaches are based on Convolution Neural Networks (CNNs) or Vision Transformers (ViTs), which suffer from limited receptive fields or quadratic complexity issues. Mamba, as an emerging state space model (SSM), possesses superior linear complexity and global receptive fields. Despite this, it can hardly be applied to DG to address distribution shifts, due to the hidden state issues and inappropriate scan mechanisms. In this paper, we propose a novel framework for DG, named DGMamba, that excels in strong generalizability toward unseen domains and meanwhile has the advantages of global receptive fields and efficient linear complexity. Our DGMamba comprises two core components: Hidden State Suppressing (HSS) and Semantic-aware Patch Refining (SPR). In particular, HSS is introduced to mitigate the influence of hidden states associated with domain-specific features during output prediction. SPR strives to encourage the model to concentrate more on objects rather than context, consisting of two designs: Prior-Free Scanning (PFS) and Domain Context Interchange (DCI). Concretely, PFS aims to shuffle the non-semantic patches within images, creating more flexible and effective sequences from images, and DCI is designed to regularize Mamba with the combination of mismatched non-semantic and semantic information by fusing patches among domains. Extensive experiments on four commonly used DG benchmarks demonstrate that the proposed DGMamba achieves remarkably superior results to state-of-the-art models. The code will be made publicly available.
DGMamba: Domain Generalization via Generalized State Space Model
[ "Shaocong Long", "Qianyu Zhou", "Xiangtai Li", "Xuequan Lu", "Chenhao Ying", "Yuan Luo", "Lizhuang Ma", "Shuicheng YAN" ]
Conference
poster
2404.07794
[ "https://github.com/longshaocong/dgmamba" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=kYhRv9cw9i
@inproceedings{ gong2024litemind, title={Lite-Mind: Towards Efficient and Robust Brain Representation Learning}, author={Zixuan Gong and Qi Zhang and Guangyin Bao and Lei Zhu and Yu Zhang and KE LIU and Liang Hu and Duoqian Miao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=kYhRv9cw9i} }
The limited data availability and the low signal-to-noise ratio of fMRI signals make fMRI-to-image retrieval a challenging task. The state-of-the-art MindEye remarkably improves fMRI-to-image retrieval performance by leveraging a large model, i.e., a 996M MLP Backbone per subject, to align fMRI embeddings to the final hidden layer of CLIP’s Vision Transformer (ViT). However, significant individual variations exist among subjects, even under identical experimental setups, mandating the training of large subject-specific models. The substantial parameters pose significant challenges in deploying fMRI decoding on practical devices. To this end, we propose Lite-Mind, a lightweight, efficient, and robust brain representation learning paradigm based on the Discrete Fourier Transform (DFT), which efficiently aligns fMRI voxels to fine-grained information of CLIP. We elaborately design a DFT backbone with Spectrum Compression and Frequency Projector modules to learn informative and robust voxel embeddings. Our experiments demonstrate that Lite-Mind achieves an impressive 94.6% fMRI-to-image retrieval accuracy on the NSD dataset for Subject 1, with 98.7% fewer parameters than MindEye. Lite-Mind also transfers to smaller fMRI datasets and establishes a new state-of-the-art for zero-shot classification on the GOD dataset.
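A minimal sketch of a DFT-based voxel encoder in the spirit described above: a real FFT over the voxel axis, truncation to the lowest-frequency bins (spectrum compression), and a small projector to a CLIP-sized embedding. All dimensions, module names, and the two-layer projector are placeholder assumptions, not the paper's architecture.

```python
# Illustrative sketch of a frequency-domain fMRI voxel encoder for retrieval:
# rFFT -> keep low-frequency bins -> linear projector -> L2-normalized embedding.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrequencyVoxelEncoder(nn.Module):
    def __init__(self, num_voxels=15724, keep_bins=1024, clip_dim=768):
        super().__init__()
        self.keep_bins = keep_bins
        self.proj = nn.Sequential(
            nn.Linear(2 * keep_bins, clip_dim), nn.GELU(),
            nn.Linear(clip_dim, clip_dim),
        )

    def forward(self, voxels: torch.Tensor) -> torch.Tensor:
        """voxels: (B, num_voxels) fMRI betas -> (B, clip_dim) embedding."""
        spec = torch.fft.rfft(voxels, dim=-1)              # (B, num_voxels // 2 + 1)
        spec = spec[:, : self.keep_bins]                   # compress the spectrum
        feats = torch.cat([spec.real, spec.imag], dim=-1)  # (B, 2 * keep_bins)
        return F.normalize(self.proj(feats), dim=-1)       # ready for CLIP-space retrieval
```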
Lite-Mind: Towards Efficient and Robust Brain Representation Learning
[ "Zixuan Gong", "Qi Zhang", "Guangyin Bao", "Lei Zhu", "Yu Zhang", "KE LIU", "Liang Hu", "Duoqian Miao" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=kVQVoAUZev
@inproceedings{ wang2024mesh, title={Mesh Denoising Using Filtering Coefficients Jointly Aware of Noise and Geometry}, author={Xingtao Wang and Xianqi Zhang and Wenxue Cui and Ruiqin Xiong and Xiaopeng Fan and Debin Zhao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=kVQVoAUZev} }
Mesh denoising is a fundamental task in geometry processing, and recent studies have demonstrated the remarkable superiority of deep learning-based methods in this field. However, existing works commonly rely on neural networks without explicit designs for noise and geometry, which are in fact fundamental factors in mesh denoising. In this paper, by jointly considering noise intensity and geometric characteristics, a novel Filtering Coefficient Learner (FCL for short) for mesh denoising is developed, which delicately generates coefficients to filter face normals. Specifically, FCL produces filtering coefficients consisting of a noise-aware component and a geometry-aware component. The first component is inversely proportional to the noise intensity of each face, resulting in smaller coefficients for faces with stronger noise. For effective assessment of the noise intensity, a noise intensity estimation module is designed, which predicts the angle between paired noisy-clean normals based on a mean filtering angle. The second component is derived based on two types of geometric features, namely the category feature and face-wise features. The category feature provides a global description of the input patch, while the face-wise features complement the perception of local textures. Extensive experiments have validated the superior performance of FCL over state-of-the-art works in both noise removal and feature preservation.
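As a hedged illustration of coefficient-based normal filtering (with hand-crafted rather than learned coefficients), the sketch below down-weights each neighboring face normal by its angular deviation from the locally mean-filtered normal, a simple proxy for the noise-aware component; the geometry-aware component and the learned FCL itself are omitted.

```python
# Illustrative sketch of coefficient-weighted face-normal filtering: noisier
# neighbors (larger angle to the mean-filtered normal) receive smaller weights.
# A learned coefficient predictor would replace the hand-crafted exp(-beta * angle).
import numpy as np

def filter_face_normals(normals: np.ndarray, neighbors: list, beta=4.0) -> np.ndarray:
    """normals: (F, 3) unit face normals; neighbors[i]: indices of faces adjacent to i."""
    out = np.empty_like(normals)
    for i, nbr in enumerate(neighbors):
        idx = np.array([i] + list(nbr))
        group = normals[idx]                          # (k, 3) normals in the local patch
        mean_n = group.mean(axis=0)
        mean_n /= (np.linalg.norm(mean_n) + 1e-8)     # mean-filtered reference normal
        cos = np.clip(group @ mean_n, -1.0, 1.0)
        angle = np.arccos(cos)                        # proxy for per-face noise intensity
        w = np.exp(-beta * angle)                     # noise-aware filtering coefficients
        filtered = (w[:, None] * group).sum(axis=0)
        out[i] = filtered / (np.linalg.norm(filtered) + 1e-8)
    return out
```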
Mesh Denoising Using Filtering Coefficients Jointly Aware of Noise and Geometry
[ "Xingtao Wang", "Xianqi Zhang", "Wenxue Cui", "Ruiqin Xiong", "Xiaopeng Fan", "Debin Zhao" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=kMQ3LAiWpx
@inproceedings{ chen2024multiscale, title={Multi-scale Change-Aware Transformer for Remote Sensing Image Change Detection}, author={HUAN CHEN and Tingfa Xu and Zhenxiang Chen and Peifu Liu and Huiyan Bai and Jianan Li}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=kMQ3LAiWpx} }
Change detection identifies differences between images captured at different times. Real-world change detection faces challenges from the diverse and intricate nature of change areas, while current datasets and algorithms are often limited to simpler, uniform changes, reducing their effectiveness in practical applications. Existing dual-branch methods process images independently, risking the loss of change information due to insufficient early interaction. In contrast, single-stream approaches, though improving early integration, lack efficacy in capturing complex changes. To address these issues, we introduce a novel single-stream architecture, the Multi-scale Change-Aware Transformer (MACT), which features the Dynamic Change-Aware Attention module and the Multi-scale Change-Enhanced Aggregator. The Dynamic Change-Aware Attention module, integrating local self-attention and cross-temporal attention, conducts dynamic iteration on image differences, thereby targeting feature extraction of change areas. The Multi-scale Change-Enhanced Aggregator enables the model to adapt to various scales and complex shapes through local change enhancement and multi-scale aggregation strategies. To overcome the limitations of existing datasets regarding the scale diversity and morphological complexity of change areas, we construct the Mining Area Change Detection dataset. The dataset offers a diverse array of change areas that span multiple scales and exhibit complex shapes, providing a robust benchmark for change detection. Extensive experiments demonstrate that our model outperforms existing methods, especially for irregular and multi-scale changes.
Multi-scale Change-Aware Transformer for Remote Sensing Image Change Detection
[ "HUAN CHEN", "Tingfa Xu", "Zhenxiang Chen", "Peifu Liu", "Huiyan Bai", "Jianan Li" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=kLNuuexw65
@inproceedings{ zhuang2024glomo, title={{GL}oMo: Global-Local Modal Fusion for Multimodal Sentiment Analysis}, author={Yan Zhuang and Yanru Zhang and Zheng Hu and Xiaoyue Zhang and Jiawen Deng and Fuji Ren}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=kLNuuexw65} }
Multimodal Sentiment Analysis (MSA) has witnessed remarkable progress and gained increasing attention in recent decades, thanks to the advancements in deep learning. However, current MSA methodologies primarily rely on global representations extracted from different modalities, such as the mean of all token representations, to construct sophisticated fusion networks. These approaches often overlook the valuable details present in local representations, which consist of fused representations of several consecutive tokens. Additionally, the integration of multiple local representations and the fusion of local and global information present significant challenges. To address these limitations, we propose the Global-Local Modal (GLoMo) Fusion framework. This framework comprises two essential components: (i) modality-specific mixture-of-experts layers that integrate diverse local representations within each modality, and (ii) a global-guided fusion module that effectively combines global and local representations. The former component leverages specialized expert networks to automatically select and integrate crucial local representations from each modality, while the latter ensures the preservation of global information during the fusion process. We extensively evaluate GLoMo on various datasets, encompassing tasks in multimodal sentiment analysis, multimodal humor detection, and multimodal emotion recognition. Empirical results demonstrate that GLoMo outperforms existing state-of-the-art models, validating the effectiveness of our proposed framework.
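A minimal sketch of the two ingredients described above, under assumed shapes and names: a modality-specific mixture-of-experts layer that routes local (chunk-level) representations to experts, and a gate that fuses the global vector with a local summary so global information is preserved. This is illustrative only, not the paper's exact modules.

```python
# Illustrative sketch: MoE over local representations + global-guided gated fusion.
import torch
import torch.nn as nn

class LocalMoE(nn.Module):
    def __init__(self, dim=256, num_experts=4):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
             for _ in range(num_experts)])
        self.router = nn.Linear(dim, num_experts)

    def forward(self, locals_: torch.Tensor) -> torch.Tensor:
        """locals_: (B, L, D) local representations (e.g., means of token chunks)."""
        gates = self.router(locals_).softmax(dim=-1)                 # (B, L, E)
        expert_out = torch.stack([e(locals_) for e in self.experts], dim=-1)  # (B, L, D, E)
        return (expert_out * gates.unsqueeze(2)).sum(dim=-1)         # (B, L, D)

class GlobalGuidedFusion(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, global_feat, local_feats):
        """global_feat: (B, D); local_feats: (B, L, D) -> fused (B, D)."""
        local_summary = local_feats.mean(dim=1)
        g = self.gate(torch.cat([global_feat, local_summary], dim=-1))
        return g * global_feat + (1.0 - g) * local_summary           # keep global info in the mix
```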
GLoMo: Global-Local Modal Fusion for Multimodal Sentiment Analysis
[ "Yan Zhuang", "Yanru Zhang", "Zheng Hu", "Xiaoyue Zhang", "Jiawen Deng", "Fuji Ren" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=kEqGgMgIlu
@inproceedings{ peng2024ldstega, title={{LDS}tega: Practical and Robust Generative Image Steganography based on Latent Diffusion Models}, author={Yinyin Peng and Yaofei Wang and Donghui Hu and Kejiang Chen and Xianjin Rong and Weiming Zhang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=kEqGgMgIlu} }
Generative image steganography has gained significant attention due to its ability to hide secret data during image generation. However, existing generative image steganography methods still face challenges in terms of controllability, usability, and robustness, making it difficult to apply them in real-world scenarios. To ensure secure and reliable communication, we propose a practical and robust generative image steganography method based on Latent Diffusion Models, called LDStega. LDStega takes controllable condition text as input and designs an encoding strategy in the reverse process of the Latent Diffusion Models to couple latent space generation with data hiding. The encoding strategy selects a sampling interval from a candidate pool of truncated Gaussian distributions guided by the secret data to generate the stego latent space. Subsequently, the stego latent space is fed into the Decoder to generate the stego image. The receiver extracts the secret data from the globally Gaussian distribution of the lossy-reconstructed latent space in the reverse process. Experimental results demonstrate that LDStega achieves high extraction accuracy while controllably generating image content and saving the stego image in the widely used PNG and JPEG formats. Additionally, LDStega outperforms state-of-the-art techniques in resisting common image attacks.
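The interval-selection idea can be sketched as follows (a simplified stand-in, not the paper's exact encoding): split the standard normal into 2^k equal-probability intervals, choose each latent entry's interval from k secret bits, sample inside it via the inverse CDF, and decode by locating the interval at the receiver. Assumptions: the bit string's length is divisible by k, and the lossy reconstruction does not push values across interval boundaries.

```python
# Illustrative sketch of message-driven Gaussian sampling for a stego latent:
# encode k bits per entry by choosing one of 2^k equal-probability intervals of N(0, 1),
# then decode by checking which interval each reconstructed value falls in.
import numpy as np
from scipy.stats import norm

def encode_latent(bits: np.ndarray, k: int = 2, rng=None) -> np.ndarray:
    """bits: flat 0/1 integer array with length divisible by k -> Gaussian stego samples."""
    rng = np.random.default_rng() if rng is None else rng
    groups = bits.reshape(-1, k)
    idx = groups @ (1 << np.arange(k - 1, -1, -1))       # bit groups -> interval indices
    lo, hi = idx / 2**k, (idx + 1) / 2**k                # interval in probability space
    u = rng.uniform(lo, hi)                              # uniform sample within the interval
    return norm.ppf(u)                                   # map back to Gaussian samples

def decode_latent(z: np.ndarray, k: int = 2) -> np.ndarray:
    """Recover the bits from (possibly lossily reconstructed) latent values."""
    idx = np.clip(np.floor(norm.cdf(z) * 2**k).astype(int), 0, 2**k - 1)
    return ((idx[:, None] >> np.arange(k - 1, -1, -1)) & 1).reshape(-1)
```

Because each entry is drawn uniformly in probability space within its interval, the marginal distribution of the stego latent remains standard normal, which is the property such schemes rely on.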
LDStega: Practical and Robust Generative Image Steganography based on Latent Diffusion Models
[ "Yinyin Peng", "Yaofei Wang", "Donghui Hu", "Kejiang Chen", "Xianjin Rong", "Weiming Zhang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=kEpfY7f4wU
@inproceedings{ zheng2024sketchd, title={Sketch3D: Style-Consistent Guidance for Sketch-to-3D Generation}, author={Wangguandong Zheng and Haifeng Xia and Rui Chen and Libo Sun and Ming Shao and Siyu Xia and Zhengming Ding}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=kEpfY7f4wU} }
Recently, image-to-3D approaches have achieved significant results with a natural image as input. However, it is not always possible to access such enriched color inputs in practical applications, where only sketches are available. Existing sketch-to-3D studies suffer from limited applicability due to the lack of color information and multi-view content. To overcome these challenges, this paper proposes a novel generation paradigm, Sketch3D, to generate realistic 3D assets with shape aligned with the input sketch and color matching the textual description. Concretely, Sketch3D first instantiates the given sketch as a reference image through a shape-preserving generation process. Second, the reference image is leveraged to deduce a coarse 3D Gaussian prior, and multi-view style-consistent guidance images are generated based on the renderings of the 3D Gaussians. Finally, three strategies are designed to optimize the 3D Gaussians, i.e., structural optimization via a distribution transfer mechanism, color optimization with a straightforward MSE loss, and sketch similarity optimization with a CLIP-based geometric similarity loss. Extensive visual comparisons and quantitative analysis illustrate the advantage of our Sketch3D in generating realistic 3D assets while preserving consistency with the input.
Sketch3D: Style-Consistent Guidance for Sketch-to-3D Generation
[ "Wangguandong Zheng", "Haifeng Xia", "Rui Chen", "Libo Sun", "Ming Shao", "Siyu Xia", "Zhengming Ding" ]
Conference
poster
2404.01843
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=kE1mWdsJRm
@inproceedings{ wu2024joresdiff, title={JoReS-Diff: Joint Retinex and Semantic Priors in Diffusion Model for Low-light Image Enhancement}, author={Yuhui Wu and Guoqing Wang and Zhiwen Wang and Yang Yang and Tianyu Li and Malu Zhang and Chongyi Li and Heng Tao Shen}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=kE1mWdsJRm} }
Low-light image enhancement (LLIE) has achieved promising performance by employing conditional diffusion models. Despite the success of some conditional methods, previous methods may neglect the importance of a sufficient formulation of the task-specific condition strategy, resulting in suboptimal visual outcomes. In this study, we propose JoReS-Diff, a novel approach that incorporates Retinex- and semantic-based priors as an additional pre-processing condition to regulate the generative capabilities of the diffusion model. We first leverage a pre-trained decomposition network to generate the Retinex prior, which is updated with better quality by an adjustment network and integrated into a refinement network to implement Retinex-based conditional generation at both the feature and image levels. Moreover, the semantic prior is extracted from the input image with an off-the-shelf semantic segmentation model and incorporated through semantic attention layers. By treating Retinex- and semantic-based priors as the condition, JoReS-Diff presents a unique perspective for establishing a diffusion model for LLIE and similar image enhancement tasks. Extensive experiments validate the rationality and superiority of our approach.
JoReS-Diff: Joint Retinex and Semantic Priors in Diffusion Model for Low-light Image Enhancement
[ "Yuhui Wu", "Guoqing Wang", "Zhiwen Wang", "Yang Yang", "Tianyu Li", "Malu Zhang", "Chongyi Li", "Heng Tao Shen" ]
Conference
poster
2312.12826
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0