Kim_Shepherding_Slots_to_Objects_Towards_Stable_and_Robust_Object-Centric_Learning_CVPR_2023
Abstract
Object-centric learning (OCL) aspires to a general and compositional understanding of scenes by representing a scene as a collection of object-centric representations. OCL has also been extended to multi-view image and video datasets to apply various data-driven inductive biases by utilizing geometric or temporal information in the multi-image data. Single-view images carry less information about how to disentangle a given scene than videos or multi-view images do. Hence, owing to the difficulty of applying inductive biases, OCL for single-view images remains challenging, resulting in inconsistent learning of object-centric representation. To this end, we introduce a novel OCL framework for single-view images, SLot Attention via SHepherding (SLASH), which consists of two simple-yet-effective modules on top of Slot Attention. The new modules, Attention Refining Kernel (ARK) and Intermediate Point Predictor and Encoder (IPPE), respectively, prevent slots from being distracted by the background noise and indicate locations for slots to focus on to facilitate learning of object-centric representation. We also propose a weak semi-supervision approach for OCL, whilst our proposed framework can be used without any assistant annotation during the inference. Experiments show that our proposed method enables consistent learning of object-centric representation and achieves strong performance across four datasets. Code is available at https://github.com/object-understanding/SLASH.
1. Introduction
Object-centric learning (OCL) decomposes an image into a set of vectors corresponding to each distinct object to acquire object-wise representations [16]. Learning object-centric representation enables machines to perceive the visual world in a manner similar to humans. We recognize the world as a composition of objects [27] and extend the object-related knowledge to various environments [48]. Therefore, OCL enables a compositional understanding of an image and generalization for downstream tasks, such as visual reasoning [36] and object localization [6].

[Figure 1. Results of training Slot Attention [35] with different seeds, which show inconsistent learning results. In the first trial, object-centric representations fail to grasp each distinct object due to the background noise. In the second, the model succeeds in distinguishing each different object from the background.]

Mainstream OCL has adopted an autoencoding-based compositional generative model [10, 15, 35]. Slot Attention [35] is the most prominent technique for OCL, which uses slots as the intermediate representation bottlenecks. In Slot Attention, randomly initialized slots compete with each other to occupy their attention regions in terms of pixels. Eventually, each slot attains object-centric representation by aggregating visual features according to the attention map between the slot and pixels.

Recently, OCL has been extended to multi-view images [4, 42] and videos [9, 30, 46]. Multi-view image [43] or video [13, 14, 51] datasets allow models to learn spatial geometry or temporal dynamics of objects through supplementary objective tasks such as novel view synthesis [42] and optical flow inference [30]. Consequently, these datasets provide additional information that enables the adoption of data-driven inductive biases, facilitating the learning of better object-centric representations.

In contrast, it is challenging to obtain data-driven inductive biases, such as geometric or temporal information, for single-view images. To address this problem, novel architectures, such as auto-regressive generative models [3, 10, 11, 15] and Transformer [53] for encoders [44] and decoders [45], have been proposed. However, owing to the absence of additional inductive biases, OCL for complex single-view images suffers from unstable training results.

This stability issue implies inconsistent learning of object-centric representation, that is, not all trials of training a model with the same architecture consistently succeed in distinguishing objects from the background (Fig. 1). The attention-leaking problem, or bleeding issue, can mislead a model to yield object-centric representations based on distorted attention maps. The bleeding issue is fatal for OCL because it is difficult to predict the behavior of a model, that is, whether a slot will seize a distinct object or an object entangled with a background. To solve this bleeding issue, we propose a novel OCL framework, SLASH (SLot Attention via SHepherding).
SLASH resolves the bleeding issue by guiding the randomly initialized slots to successfully grasp objects 1) without being distracted by the background and 2) by keeping informed of the destination. These are accomplished by adding two simple-yet-effective modules, Attention Refining Kernel (ARK) and Intermediate Point Predictor and Encoder (IPPE), to the Slot Attention framework.

ARK is a single-channel single-layer convolutional kernel, designed to prevent slots from focusing on a noisy background. We adopt the Weights-Normalized Convolutional (WNConv) kernel, a learnable low-pass filter, as the kernel for ARK. This simple kernel refines the attention map between slots and pixels by reducing noise and solidifying object-like patterns.

IPPE serves as an indicator to nudge a slot to focus on the proper location. Thus, the slots can consistently update their representations without being confused by the background. IPPE consists of two submodules with simple MLPs. The first submodule predicts the position of an object in two-dimensional coordinates, and the second encodes the predicted coordinates into a high-dimensional vector.

Since IPPE needs to be trained to provide locational cues to slots, it is necessary to introduce positional labels. However, using fully annotated ground-truths is costly, particularly for densely-annotated labels such as object masks. Hence, we adopt a weak semi-supervision approach in which only a small subset of the dataset includes weak annotations, such as the centers of bounding boxes. We show that IPPE can be successfully trained with weakly semi-supervised learning and can be deployed under circumstances where no assistant ground-truth exists.

For a comprehensive study, we validate our method on numerous datasets, including CLEVR, CLEVRTEX, PTR, and MOVi. Moreover, we conduct 10 trials of training for each method, including the baselines and ours, to thoroughly evaluate the results. We estimate the performance of the models using three metrics: mean Intersection over Union (mIoU), Adjusted Rand Index (ARI), and foreground-ARI (fg-ARI). In particular, mIoU and ARI investigate whether the bleeding issue occurs by considering the background separation. A model is defined as being stable over the metrics when deviations are lower, and as being robust when averages are higher across all datasets. Experimental results demonstrate that our method achieves stable and robust OCL that prevents the bleeding issue.

Our main contributions are as follows:
• We observe that OCL for single-view images suffers from the stability issue with inconsistent training results. To resolve this issue, we propose a novel framework, SLASH (SLot Attention via SHepherding), consisting of two simple-yet-strong modules: ARK and IPPE.
• ARK is a learnable low-pass filter designed to prevent the bleeding issue where the attention of a slot leaks into a background.
• IPPE is introduced to inform slots of the regions to focus on. By leveraging weak semi-supervision, IPPE can inject positional information into a slot.
• We empirically prove SLASH achieves stable and robust OCL across four distinct datasets. SLASH shows the best stability while outperforming the previous methods for all datasets over multiple metrics.
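To make the ARK idea concrete, the following PyTorch sketch applies a single-channel, weights-normalized convolution to the slot-pixel attention maps; the class name, kernel size, and softmax-based weight normalization here are our assumptions for illustration rather than the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionRefiningKernel(nn.Module):
    """Sketch of an ARK-style module: a single-channel, single-layer convolution
    whose weights are normalized to act as a learnable low-pass filter over each
    slot's attention map (normalization choice is an assumption)."""

    def __init__(self, kernel_size: int = 5):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(1, 1, kernel_size, kernel_size))
        self.pad = kernel_size // 2

    def forward(self, attn: torch.Tensor) -> torch.Tensor:
        # attn: (batch, num_slots, H, W) attention maps between slots and pixels.
        b, k, h, w = attn.shape
        # Normalize kernel weights to be non-negative and sum to one,
        # so the convolution can only smooth (low-pass), never amplify noise.
        kernel = torch.softmax(self.logits.view(1, 1, -1), dim=-1).view_as(self.logits)
        refined = F.conv2d(attn.reshape(b * k, 1, h, w), kernel, padding=self.pad)
        refined = refined.reshape(b, k, h, w)
        # Re-normalize across slots so each pixel's attention still sums to one.
        return refined / refined.sum(dim=1, keepdim=True).clamp_min(1e-8)
```

Because the normalized kernel weights are non-negative and sum to one, the operation can only smooth the attention map, which matches the low-pass-filter role described above.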
Kuo_HAAV_Hierarchical_Aggregation_of_Augmented_Views_for_Image_Captioning_CVPR_2023
Abstract
A great deal of progress has been made in image captioning, driven by research into how to encode the image using pre-trained models. This includes visual encodings (e.g., image grid features or detected objects) and more recently textual encodings (e.g., image tags or text descriptions of image regions). As more advanced encodings are available and incorporated, it is natural to ask: how to efficiently and effectively leverage the heterogeneous set of encodings? In this paper, we propose to regard the encodings as augmented views of the input image. The image captioning model encodes each view independently with a shared encoder efficiently, and a contrastive loss is incorporated across the encoded views in a novel way to improve their representation quality and the model's data efficiency. Our proposed hierarchical decoder then adaptively weighs the encoded views according to their effectiveness for caption generation by first aggregating within each view at the token level, and then across views at the view level. We demonstrate significant performance improvements of +5.6% CIDEr on MS-COCO and +12.9% CIDEr on Flickr30k compared to the state of the art, and conduct rigorous analyses to demonstrate the importance of each part of our design.
1. Introduction
A large amount of progress has been made in vision-and-language (VL) tasks such as image captioning [1, 8], visual question answering [16, 23], and image-text retrieval. For these tasks, recent methods [31, 36, 45, 61] observe that encoding the input image by an object detector [42] pre-trained on Visual Genome [30] into a set of detected objects is not sufficient. To provide information complementary to detected objects, recent works proposed to encode an input image by different pre-trained models and into different modalities, and achieve substantial performance improvement by combining these heterogeneous encodings. For example, some works encode from the visual perspective (e.g., a stronger object detector pre-trained on a larger vocabulary and datasets [61], or global image features [24]), while other works encode from the textual perspective (e.g., image tags [36] and text descriptions of image regions [31]).

[Figure 1. HAAV, Hierarchical Aggregation of Augmented Views, for image captioning at the step of predicting the word "sofa". First, heterogeneous views such as detected objects [4], image grid features [45], and text descriptions [31] are generated from the input image by existing methods. We propose to regard these views as augmentations of the input image, and independently encode each view by a shared transformer encoder efficiently. A contrastive loss is incorporated to improve the representation quality of heterogeneous views. Finally, our proposed hierarchical decoder models the effectiveness of each view and adaptively weighs them according to their effectiveness for predicting the current word "sofa".]

Given the great success of incorporating various heterogeneous encodings or "views", one research question emerges naturally: how to efficiently and effectively leverage these heterogeneous views for caption generation? For efficiency, three factors are particularly important: computation, parameter count, and label efficiency. State-of-the-art VL and image captioning models are typically a transformer encoder-decoder model [51], which has undesirable quadratic computational complexity with respect to the input sequence size. Therefore, as more views are incorporated, each represented by a sequence of tokens, we should carefully manage the computation and model size. Moreover, on the medium-scale MS-COCO image captioning benchmark [8] (∼0.6M training samples), we should take label efficiency into consideration when training the data-hungry [11] transformer model to avoid negative effects such as overfitting. For effectiveness, different views contain some shared and some complementary information of the input image. Therefore, it is important to model the effectiveness of views and adaptively weigh them according to their effectiveness for predicting each word.
Take image captioning in Figure 1 as an example: when predicting the current word "sofa" for the incomplete caption "black bags sitting on top of a ?", if, say, the view of detected objects fails to detect the sofa in the input image, the captioning model should down-weigh the less effective view of detected objects and rely on other, more effective views that properly encode the information about the sofa.

With these considerations in mind, we propose HAAV, Hierarchical Aggregation of Augmented Views. In HAAV, given a set of heterogeneous views of the input image from existing works such as detected objects [4], image grid features [45], and text descriptions [31], we propose to (1) regard heterogeneous views as augmentations of the input image, and (2) devise a hierarchical decoder layer to account for the effectiveness of heterogeneous views.

For (1), by regarding views as augmentations, we naturally choose to use a shared transformer encoder to encode each view independently. Compared to methods that concatenate all views into a long sequence as input [31, 36, 61], where the computational complexity scales up quadratically with respect to the number of views, our method scales up linearly. Compared to models that use entire models per view [10, 22, 26, 43] or methods that encode each view with unshared encoders [2, 3, 20, 34], our method is more parameter efficient. Furthermore, data augmentation increases data diversity and thus improves data efficiency, which is particularly important for training data-hungry transformer models. Last but not least, by regarding heterogeneous views generated by different pre-trained models as augmentations of the input image, we incorporate a contrastive loss in a novel way to help representation learning of heterogeneous views and increase data efficiency [6, 15, 17]. Different from how other VL methods [25, 34, 41, 57, 59] incorporate a contrastive loss, our formulation does not require annotated pairs (e.g., human-annotated image-caption pairs in MS-COCO or image-text pairs scraped from the internet [44]) and can work with unlabeled image-only data to achieve better performance.

Also crucially, for (2) we devise a hierarchical decoder layer, which modifies the standard transformer decoder layer by introducing two-tiered cross-attention modules. The hierarchical decoder first aggregates within each view at the token level and then aggregates across views at the view level. By introducing this hierarchical aggregating structure, we can better model the effectiveness of views and adaptively weigh them according to their effectiveness. For example, in Section 4 (Experiments), we show that if we add noise to a certain view or mask out a prominent region of the input image, the proposed hierarchical decoder indeed down-weighs the noised and masked view when generating words and captions.
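As a rough sketch of the two-tiered aggregation described above, the PyTorch snippet below first cross-attends the decoder states to each view's tokens (token level) and then combines the per-view summaries with learned soft weights (view level); the module names, dimensions, and the linear view-scoring head are assumptions, not the paper's exact decoder layer.

```python
import torch
import torch.nn as nn

class HierarchicalCrossAttention(nn.Module):
    """Sketch of a two-tiered cross-attention step: aggregate within each view
    at the token level, then across views at the view level."""

    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        self.token_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.view_score = nn.Linear(d_model, 1)  # scores each view's summary

    def forward(self, query: torch.Tensor, views: list) -> torch.Tensor:
        # query: (batch, T, d) decoder states; views: list of (batch, L_v, d) encoded views.
        # Tier 1: aggregate within each view at the token level.
        per_view = [self.token_attn(query, v, v, need_weights=False)[0] for v in views]
        stacked = torch.stack(per_view, dim=2)                     # (batch, T, V, d)
        # Tier 2: aggregate across views with adaptive weights.
        weights = torch.softmax(self.view_score(stacked), dim=2)   # (batch, T, V, 1)
        return (weights * stacked).sum(dim=2)                      # (batch, T, d)
```

A less effective (e.g., noised) view would receive a small weight at tier 2, which is the adaptive down-weighing behavior the paragraph describes.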
To sum up, in this paper, given a set of heterogeneous views of the input image from existing works, we focus on how to efficiently and effectively leverage these views and make the following contributions: (1) regard heterogeneous views as augmentations of the input image and propose a novel use of contrastive loss to improve computation, parameter, and data efficiency; (2) devise a hierarchical decoder layer to model the effectiveness of each view and weigh each view accordingly for caption generation; (3) achieve significant improvement of +5.6% CIDEr on MS-COCO over the state of the art, and achieve comparable or often better performance compared with methods using large-scale transformer pre-training even though we do not do so; and (4) provide thorough ablations and rigorous analyses to validate our proposed method for efficiency and effectiveness.
Liu_A_Soma_Segmentation_Benchmark_in_Full_Adult_Fly_Brain_CVPR_2023
Abstract
Neuron reconstruction in a full adult fly brain from high-resolution electron microscopy (EM) data is regarded as a cornerstone for neuroscientists to explore how neurons inspire intelligence. As the central part of neurons, somas in the full brain indicate the origin of neurogenesis and neural functions. However, due to the absence of EM datasets specifically annotated for somas, existing deep learning-based neuron reconstruction methods cannot directly provide accurate soma distribution and morphology. Moreover, full brain neuron reconstruction remains extremely time-consuming due to the unprecedentedly large size of EM data. In this paper, we develop an efficient soma reconstruction method for obtaining accurate soma distribution and morphology information in a full adult fly brain. To this end, we first make a high-resolution EM dataset with fine-grained 3D manual annotations on somas. Relying on this dataset, we propose an efficient, two-stage deep learning algorithm for predicting accurate locations and boundaries of 3D soma instances. Further, we deploy a parallelized, high-throughput data processing pipeline for executing the above algorithm on the full brain. Finally, we provide quantitative and qualitative benchmark comparisons on the testset to validate the superiority of the proposed method, as well as preliminary statistics of the reconstructed somas in the full adult fly brain from the biological perspective. We release our code and dataset at https://github.com/liuxy1103/EMADS.
1. Introduction
Drosophila melanogaster, also known as the fruit fly, is an organism with intelligent behaviors including perception, learning, and judgment [10, 34, 36]. It has a complete and relatively simple neural system [6, 37]. The interactions among neurons in the system guide the drosophila's intelligent behaviors [8, 14, 23, 35]. Therefore, the study of drosophila neurons, which has fascinated neuroscientists for more than a century [5, 7, 16, 40, 44], has key implications for understanding how the brains of living organisms produce intelligence [2, 41].

As the central part of the neuron, the soma maintains the neuron structure and controls the formation of neurites [22, 30]. Studies have shown that the location and morphology of somas in the full brain are related to neural development and the neural logic function [3, 17], and the number of somas is related to the complexity of the brain and the age of the living body [1, 25]. Therefore, it is of great biological significance to investigate soma reconstruction in the full brain of model organisms such as drosophila.

Traditional studies in this field are mainly based on brain images collected by optical microscopy [13, 38]. The soma structure is first stained with specific staining proteins, and the collected confocal images show fluorescence staining signals, so that the soma distribution of the full drosophila brain can be obtained. However, the resolution of confocal images is low, making it difficult to obtain the exact morphology of each soma. Based on the assumption that each cell has only one nucleus, the isotropic fractionator method [15, 18] is used to obtain the number of somas in the full brain of drosophila. This method destroys the brain structure during the production of the cell suspension, so the distribution of somas in the full brain cannot be obtained.

Recently, with the development of high-speed electron microscopy (EM) scanning technology, high-resolution EM image datasets of different species including drosophila [42], mouse [31], and human [39] have been successfully acquired, and the full adult fly brain (FAFB) dataset [47] imaged from a complete drosophila brain can be regarded as a representative. Based on these datasets, advanced deep learning algorithms have been developed to automatically reconstruct neurons [12, 23] and cell nuclei [32] in EM images for connectomics study. Meanwhile, parallel and distributed data processing pipelines [39, 46] are proposed to deploy these algorithms on large-scale EM datasets. However, due to the lack of high-resolution EM datasets specifically annotated for somas, existing works cannot directly provide accurate soma distribution and morphology information.

In this paper, we make one of the first efforts to develop an efficient soma reconstruction method in a full adult fly brain, aiming to obtain accurate soma distribution and morphology information for this model organism. The contributions of this work are fourfold:
• We make a high-resolution EM soma dataset with fine-grained 3D manual annotations for more than 8×10^9 voxels. To the best of our knowledge, this dataset is the first of its kind.
• Relying on the above dataset, we propose an efficient, two-stage deep learning algorithm for soma instance segmentation, and benchmark existing alternative methods to validate the superiority of ours.
• We deploy a parallelized, high-throughput data processing pipeline for executing our algorithm on the full brain, fulfilling the soma reconstruction task on a 90-GPU cluster within 4 days.
• We provide quantitative and qualitative results for evaluating the accuracy and efficiency of the proposed method, along with preliminary statistics of the reconstructed somas in the full adult fly brain, including count, size, distribution, and morphology.
We believe our work will contribute to the study of the drosophila neural system. The benchmark dataset has been released to facilitate future research along this line. Code and a 4K video of the full brain reconstruction result are available through the links provided on the GitHub page.
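For intuition about how a full-brain volume can be processed in a parallelized, high-throughput fashion, the snippet below shows a generic overlapping block-tiling scheme under which each block could be segmented independently (e.g., on different GPUs) and stitched back afterwards; the block size, overlap, and this scheme itself are illustrative assumptions rather than the paper's pipeline.

```python
from itertools import product

def iter_blocks(shape, block=(256, 256, 256), overlap=32):
    """Yield overlapping (z, y, x) slices that tile a large EM volume so blocks
    can be segmented independently and merged in the overlap regions.
    Block size and overlap here are illustrative choices only."""
    step = [b - 2 * overlap for b in block]          # consecutive blocks overlap by 2*overlap
    starts = [range(0, max(s - 2 * overlap, 1), st) for s, st in zip(shape, step)]
    for origin in product(*starts):
        yield tuple(slice(o, min(o + b, s)) for o, b, s in zip(origin, block, shape))

# Example: count the blocks needed to cover a (4000, 8000, 8000)-voxel region.
n_blocks = sum(1 for _ in iter_blocks((4000, 8000, 8000)))
print(n_blocks)
```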
Kim_SMPConv_Self-Moving_Point_Representations_for_Continuous_Convolution_CVPR_2023
Abstract
Continuous convolution has recently gained prominence due to its ability to handle irregularly sampled data and model long-term dependency. Also, the promising experimental results of using large convolutional kernels have catalyzed the development of continuous convolution since it can construct large kernels very efficiently. Leveraging neural networks, more specifically multilayer perceptrons (MLPs), is by far the most prevalent approach to implementing continuous convolution. However, there are a few drawbacks, such as high computational costs, complex hyperparameter tuning, and limited descriptive power of filters. This paper suggests an alternative approach to building a continuous convolution without neural networks, resulting in more computationally efficient and improved performance. We present self-moving point representations where weight parameters freely move, and interpolation schemes are used to implement continuous functions. When applied to construct convolutional kernels, the experimental results have shown improved performance with drop-in replacement in the existing frameworks. Due to its lightweight structure, we are the first to demonstrate the effectiveness of continuous convolution in a large-scale setting, e.g., ImageNet, presenting improvements over the prior arts. Our code is available at https://github.com/sangnekim/SMPConv
1. Introduction
There has been a recent surge of interest in representing the convolutional kernel as a function over a continuous input domain. It can easily handle irregularly sampled data both in time [1, 59] and space [61, 65], overcoming the drawbacks of the discrete convolution operating only on discretized sampled data with pre-defined resolutions and grids. With the progress in modeling and training continuous kernels, it has enjoyed great success in many practical scenarios, such as 3D point cloud classification and segmentation [36, 41, 52, 58, 64], image super-resolution [57], and object tracking [10], to name a few. Furthermore, the recent trends of using large convolutional kernels with strong empirical results urge us to develop a more efficient way to implement them [13, 32], and continuous convolution is a promising candidate because of its capability to readily construct arbitrarily large receptive fields [44, 45].

One of the dominant approaches to modeling the continuous kernel is to use a particular type of neural network architecture, taking as inputs low-dimensional input coordinates and generating the kernel values [44, 45], known as neural fields [38, 49] or simply MLPs. Using neural fields to represent the kernels, we can query kernel values at arbitrary resolutions in parallel and construct the large kernels with a fixed parameter budget, as opposed to the conventional discrete convolutions requiring more parameters to enlarge receptive fields. Thanks to recent advances to overcome the spectral bias in training neural fields [49], they can also represent functions with high-frequency components, which enables learned kernels to capture fine details of input data.

While promising in various tasks and applications, this approach has a few downsides. First, it adds considerable computational burden to the already computation-heavy process of training deep neural networks. Each training iteration involves multiple forward and backward passes of MLPs to generate kernels and update the parameters of the MLPs. This additional complexity prevents it from being applied to large-scale problems, such as ImageNet-scale, since it needs deeper and wider MLP architectures to construct more complex kernels with more input and output channels. Although MLPs can generate larger sizes and numbers of kernels without adding more parameters, it has been known that the size of MLPs mainly determines the complexity of the functions they represent and, eventually, the performance of the CNNs.

Furthermore, the kernels generated by an MLP depend heavily on the architectural priors. As a universal function approximator, a neural network with sufficient depth and width can express any continuous function [25]. However, we have empirically observed strong evidence that the architecture of neural networks plays a significant role in many practical settings, suggesting that various architectural changes to the kernel-generating MLPs would significantly affect the performance of CNNs.
Considering the large number of hyperparameters in training neural networks, adding more knobs to tune, e.g., activation functions, width, depth, and many architectural variations of MLPs, would not be a pragmatic solution for either machine learning researchers or practitioners.

In this paper, we aim to build continuous convolution kernels with negligible computational cost and minimal architectural priors. We propose to use moving point representations and implement infinite resolution by interpolating nearby moving points at arbitrary query locations. The moving points are the actual kernel parameters, and we connect the neighboring points to build continuous kernels. Recent techniques in the neural fields literature inspire this approach, where grids or irregularly distributed points are used to represent features or quantities in question (density or colors) for novel view synthesis [6, 50, 63, 66, 67]. The suggested approach only introduces minor computational costs (interpolation costs) and does not involve neural networks (only point representations and interpolation kernels). Moreover, the spectral bias present in training MLPs [43] does not exist in the suggested representation. Each point representation covers a local area of the input domain and is updated independently of the others, in contrast with MLPs, where updating each parameter would affect the entire input domain. Therefore, highly different values of nearby points can easily express high-frequency components of the function.

The proposed method can also be more parameter efficient than discrete convolution for constructing large kernels. Depending on the complexity of the kernels to be learned, a small number of points may be sufficient to cover a large receptive field (e.g., a unimodal function can be approximated by a single point). Many works have extensively exploited the non-full ranks of learned kernels to implement efficient convolutions or compress models [32]. Our approach can likewise benefit from the presence of learned kernels with low-frequency components.

We conduct comprehensive experiments to show the effectiveness of the proposed method. First, we demonstrate that moving point representations can approximate continuous functions very well in 1D and 2D function fitting experiments. Then, we test its ability to handle long-term dependencies on various sequential datasets. Finally, we also evaluate our model on 2D image data. In particular, we perform large-scale experiments on the ImageNet classification dataset (to our knowledge, it is the first attempt to use continuous convolution for such a large-scale problem). Our proposed method can be used as a drop-in replacement of convolution layers for all the above tasks without bells and whistles. The experimental results show that it consistently improves the performance over the prior arts.
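To illustrate the idea of self-moving point representations, the following PyTorch sketch builds a 1D continuous kernel from learnable point positions and values and queries it at arbitrary coordinates by local distance-weighted interpolation; the specific interpolation rule, radius, and class name are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class SelfMovingPointKernel1D(nn.Module):
    """Sketch of a continuous 1D kernel built from self-moving points: both the
    point positions and their values are learnable, and the kernel is queried at
    arbitrary coordinates by interpolating nearby points."""

    def __init__(self, n_points: int = 16, channels: int = 1, radius: float = 0.2):
        super().__init__()
        self.positions = nn.Parameter(torch.linspace(-1.0, 1.0, n_points))  # free to move
        self.values = nn.Parameter(torch.randn(channels, n_points) * 0.1)
        self.radius = radius

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        # coords: (K,) query positions in [-1, 1]; returns (channels, K) kernel values.
        dist = (coords[None, :] - self.positions[:, None]).abs()     # (n_points, K)
        weights = torch.clamp(1.0 - dist / self.radius, min=0.0)     # local hat weights
        weights = weights / weights.sum(dim=0, keepdim=True).clamp_min(1e-8)
        return self.values @ weights                                  # (channels, K)

# Sample a length-31 discrete kernel from the continuous representation; after
# reshaping, such samples could serve as weights of a standard convolution.
kernel = SelfMovingPointKernel1D()(torch.linspace(-1, 1, 31))
```

Sampling the continuous kernel on a dense grid of any size is what allows arbitrarily large receptive fields with a fixed parameter budget, as discussed above.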
Liu_PartSLIP_Low-Shot_Part_Segmentation_for_3D_Point_Clouds_via_Pretrained_CVPR_2023
Abstract
Generalizable 3D part segmentation is important but challenging in vision and robotics. Training deep models via conventional supervised methods requires large-scale 3D datasets with fine-grained part annotations, which are costly to collect. This paper explores an alternative way for low-shot part segmentation of 3D point clouds by leveraging a pretrained image-language model, GLIP, which achieves superior performance on open-vocabulary 2D detection. We transfer the rich knowledge from 2D to 3D through GLIP-based part detection on point cloud rendering and a novel 2D-to-3D label lifting algorithm. We also utilize multi-view 3D priors and few-shot prompt tuning to boost performance significantly. Extensive evaluation on the PartNet and PartNet-Mobility datasets shows that our method enables excellent zero-shot 3D part segmentation. Our few-shot version not only outperforms existing few-shot approaches by a large margin but also achieves highly competitive results compared to the fully supervised counterpart. Furthermore, we demonstrate that our method can be directly applied to iPhone-scanned point clouds without significant domain gaps.
1. Introduction
Human visual perception can parse objects into parts and generalize to unseen objects, which is crucial for understanding their structure, semantics, mobility, and functionality. 3D part segmentation plays a critical role in empowering machines with such ability and facilitates a wide range of applications, such as robotic manipulation, AR/VR, and shape analysis and synthesis [2, 31, 39, 69].

Recent part-annotated 3D shape datasets [40, 67, 72] have promoted advances in designing various data-driven approaches for 3D part segmentation [34, 44, 65, 73]. While standard supervised training enables these methods to achieve remarkable results, they often struggle with out-of-distribution test shapes (e.g., unseen classes). However, compared to image datasets, these 3D part-annotated datasets are still orders of magnitude smaller in scale, since building 3D models and annotating fine-grained 3D object parts are laborious and time-consuming. It is thus challenging to provide sufficient training data covering all object categories. For example, the recent PartNet dataset [40] contains only 24 object categories, far less than what an intelligent agent would encounter in the real world.

To design a generalizable 3D part segmentation module, many recent works have focused on the few-shot setting, assuming only a few 3D shapes of each category during training. They design various strategies to learn better representations, and complement vanilla supervised learning [33, 53, 54, 60, 80]. While they show improvements over the original pipeline, there is still a large gap between what these models can do and what downstream applications need. The problem of generalizable 3D part segmentation is still far from being solved. Another parallel line of work focuses on learning the concept of universal object parts and decomposing a 3D shape into a set of (hierarchical) fine-grained parts [37, 64, 74]. However, these works do not consider the semantic labeling of parts and may be limited in practical use.

In this paper, we seek to solve the low-shot (zero- and few-shot) 3D part segmentation problem by leveraging pretrained image-language models, inspired by their recent striking performances in low-shot learning. By pretraining on large-scale image-text pairs, image-language models [1, 22, 29, 45, 46, 50, 76] learn a wide range of visual concepts and knowledge, which can be referenced by natural language. Thanks to their impressive zero-shot capabilities, they have already enabled a variety of 2D/3D vision and language tasks [10, 16, 20, 47, 49, 51, 77].

As shown in Figure 1, our method takes a 3D point cloud and a text prompt as input, and generates both 3D semantic and instance segmentations in a zero-shot or few-shot fashion. Specifically, we integrate the GLIP [29] model, which is pretrained on 2D visual grounding and detection tasks with over 27M image-text pairs and has a strong capability to recognize object parts. To connect our 3D input with the 2D GLIP model, we render multi-view 2D images for the point cloud, which are then fed into the GLIP model together with a text prompt containing part names of interest.
The GLIP model then detects parts of interest for each 2D view and outputs detection results in the form of 2D bounding boxes. Since it is non-trivial to convert 2D boxes back to 3D, we propose a novel 3D voting and grouping module to fuse the multi-view 2D bounding boxes and generate 3D instance segmentation for the input point cloud. Also, the pretrained GLIP model may not fully understand our definition of parts only through text prompts. We find that an effective solution is prompt tuning with few-shot segmented 3D shapes. In prompt tuning, we learn an offset feature vector for the language embedding of each part name while fixing the parameters of the pretrained GLIP model. Moreover, we propose a multi-view visual feature aggregation module to fuse the information of multiple 2D views, so that the GLIP model can have a better global understanding of the input 3D shape instead of predicting bounding boxes from each isolated 2D view.

To better understand the generalizability of various approaches and their performance in low-shot settings, we propose a benchmark, PartNet-Ensembled (PartNetE), by incorporating two existing datasets, PartNet [40] and PartNet-Mobility [67]. Through extensive evaluation on PartNetE, we show that our method enables excellent zero-shot 3D part segmentation. With few-shot prompt tuning, our method not only outperforms existing few-shot approaches by a large margin but also achieves highly competitive performance compared to the fully supervised counterpart. We also demonstrate that our method can be directly applied to iPhone-scanned point clouds without significant domain gaps.

In summary, our contributions mainly include:
• We introduce a novel 3D part segmentation method that leverages pretrained image-language models and achieves outstanding zero-shot and few-shot performance.
• We present a 3D voting and grouping module, which effectively converts multi-view 2D bounding boxes into 3D semantic and instance segmentation.
• We utilize few-shot prompt tuning and multi-view feature aggregation to boost GLIP's detection performance.
• We propose a benchmark, PartNetE, that benefits future work on low-shot and text-driven 3D part segmentation.
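The following NumPy sketch conveys the spirit of lifting multi-view 2D detections to 3D by voting: each point is projected into every rendered view and accumulates a vote for every part box that contains it. The function signature, thresholds, and this simplified winner-takes-all assignment are our assumptions; the paper's voting and grouping module is more involved (e.g., it also produces instance segmentation).

```python
import numpy as np

def vote_part_labels(points, view_projections, detections, n_parts, score_thresh=0.5):
    """Simplified 2D-to-3D label lifting by multi-view voting.

    points:           (N, 3) point cloud
    view_projections: list of callables mapping (N, 3) points to (N, 2) pixel coords
    detections:       per view, a list of (part_id, score, (x0, y0, x1, y1)) boxes
    """
    votes = np.zeros((len(points), n_parts), dtype=np.int32)
    for project, boxes in zip(view_projections, detections):
        uv = project(points)                                    # (N, 2) pixel coords
        for part_id, score, (x0, y0, x1, y1) in boxes:
            if score < score_thresh:
                continue
            inside = (uv[:, 0] >= x0) & (uv[:, 0] <= x1) & \
                     (uv[:, 1] >= y0) & (uv[:, 1] <= y1)
            votes[inside, part_id] += 1                         # accumulate votes per part
    labels = votes.argmax(axis=1)
    labels[votes.sum(axis=1) == 0] = -1                         # points never covered by a box
    return labels
```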
Kim_Improving_Cross-Modal_Retrieval_With_Set_of_Diverse_Embeddings_CVPR_2023
Abstract
Cross-modal retrieval across image and text modalities is a challenging task due to its inherent ambiguity: an image often exhibits various situations, and a caption can be coupled with diverse images. Set-based embedding has been studied as a solution to this problem. It seeks to encode a sample into a set of different embedding vectors that capture different semantics of the sample. In this paper, we present a novel set-based embedding method, which is distinct from previous work in two aspects. First, we present a new similarity function called smooth-Chamfer similarity, which is designed to alleviate the side effects of existing similarity functions for set-based embedding. Second, we propose a novel set prediction module to produce a set of embedding vectors that effectively captures diverse semantics of input by the slot attention mechanism. Our method is evaluated on the COCO and Flickr30K datasets across different visual backbones, where it outperforms existing methods including ones that demand substantially larger computation at inference.
1. Introduction
Cross-modal retrieval is the task of searching for data relevant to a query from a database when the query and database have different modalities. While it has been studied for various pairs of modalities such as video-text [3, 7, 19] and audio-text [5, 16], the most representative setting for the task is the retrieval across image and text modalities [10, 29, 37, 45]. A naïve solution to cross-modal retrieval is a straightforward extension of the conventional unimodal retrieval framework [17, 18, 26], i.e., learning a joint embedding space of the different modalities with known ranking losses (e.g., contrastive loss [9] and triplet loss [43]). In this framework, each sample is represented as a single embedding vector and the task reduces to neighbor search on the joint embedding space.

However, this naïve approach has trouble in handling the inherent ambiguity of cross-modal retrieval across image and text modalities [10, 45, 46]. A cause of the ambiguity is the fact that even a single image often contains various situations and contexts. Consider the image in Figure 1, which illustrates a group of children in a skate park. One of the captions coupled with it could be about children carrying up a bike, while another may describe a child riding a skateboard. Indeed, different local features of the image are matched to different captions. Similarly, visual manifestations of a caption could vary significantly as text descriptions are highly abstract. This ambiguity issue suggests that a sample should be embedded while reflecting its varying semantics in cross-modal retrieval. Embedding models dedicated to uni-modal retrieval do not meet this requirement since they represent a sample as a single embedding vector.

[Figure 1. An example of the ambiguity problem introduced in the cross-modal retrieval task; an image region and a word corresponding to each other are highlighted in the same color. This example demonstrates that a single image can be coupled with multiple heterogeneous captions, e.g., "Boys wearing helmets carry a bicycle up a ramp at a skate park.", "Small children stand near bicycles at a skate park.", and "A group of young children riding bikes and skateboards."]

Various methods have been studied to mitigate the ambiguity issue of cross-modal retrieval. Most of them adopt cross-attention networks that directly predict the similarity of an input image-caption pair [12, 14, 25, 30, 37, 38, 49, 50, 53]. These models successfully address the ambiguity since they explicitly infer relations across the modalities by drawing attentions on both modalities at once. However, they inevitably impose a large computation burden on retrieval systems since they demand both image and caption to be processed together for computing their similarity; all data in a database thus have to be reprocessed whenever a query arrives. In contrast, methods using separate textual and visual encoders [17, 21, 24, 29] seek to find samples relevant to a query through nearest-neighbor search on pre-computed embedding vectors, which is more suitable for today's retrieval systems working on huge databases.
However, most of the previous arts in this direction cannot address the ambiguity issue since their encoders return a single embedding vector for a given input.

Recent studies [10, 45] have advanced embedding model architectures to tackle the ambiguity issue even with separate encoders for the two modalities. To be specific, their models compute a set of heterogeneous embedding vectors for a given sample using self-attention layers on top of visual or textual features; such a set of embedding vectors is called an embedding set in the remainder of this paper. Then the ambiguity issue is addressed through elements of the embedding set that encode diverse semantics of input.

Although the set-based embedding models enable a retrieval system to be powerful yet efficient, the similarity functions used for their training do not consider the ambiguity of the data. Hence, training the models with these similarity functions often causes the following two side effects: (1) sparse supervision, an embedding set most of whose elements remain untrained, or (2) set collapsing, an embedding set with a small variance where elements do not encode sufficient ambiguity. Further, the self-attention modules used for set prediction in the previous work do not explicitly consider disentanglement between set elements. These limitations lead to an embedding set whose elements encode redundant semantics of input, which also causes the set collapsing and degrades the capability of learned embedding models.

To address the aforementioned limitations of previous work, we propose a novel set-based embedding method for cross-modal retrieval. The proposed method is distinct from previous work in mainly two aspects. First, we design a novel similarity function for sets, called smooth-Chamfer similarity, that is employed for both training and evaluation of our model. In particular, our loss based on the smooth-Chamfer similarity addresses both limitations of the existing similarity functions, i.e., sparse supervision and set collapsing. Second, we propose a model with a novel set prediction module motivated by slot attention [33]. In the proposed module, learnable embeddings called element slots compete with each other for aggregating input data while being transformed into an embedding set by progressive update. Therefore, our model captures the diverse semantic ambiguity of input successfully, with little redundancy between elements of the embedding set.

The proposed method is evaluated and compared with previous work on two realistic cross-modal retrieval benchmarks, COCO [32] and Flickr30K [41], where it outperforms the previous state of the art in most settings. In summary, our contribution is three-fold as follows:
• We address issues of previous set-based embedding methods by proposing a novel similarity function for sets, named smooth-Chamfer similarity.
• We introduce a slot attention based set prediction module where elements of the embedding set iteratively compete with each other for aggregating input data, which can capture the semantic ambiguity of input without redundancy.
• Our model achieves state-of-the-art performance on the COCO and Flickr30K datasets, two standard benchmarks for cross-modal retrieval.
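To give a feel for a smoothed Chamfer-style set similarity, the sketch below replaces the hard max over the other set, used in standard Chamfer matching, with a temperature-scaled log-sum-exp so that every element of both sets receives gradient; the exact normalization and temperature in the paper may differ from this illustrative version.

```python
import torch
import torch.nn.functional as F

def smooth_chamfer_similarity(s1: torch.Tensor, s2: torch.Tensor, alpha: float = 16.0) -> torch.Tensor:
    """Smoothed Chamfer-style similarity between two embedding sets.
    s1: (|S1|, d), s2: (|S2|, d). Higher alpha approaches the hard-max Chamfer similarity."""
    # Pairwise cosine similarities between set elements.
    c = F.normalize(s1, dim=-1) @ F.normalize(s2, dim=-1).t()        # (|S1|, |S2|)
    # Soft best-match of each element against the other set, averaged per direction.
    s1_to_s2 = torch.logsumexp(alpha * c, dim=1).mean() / alpha
    s2_to_s1 = torch.logsumexp(alpha * c, dim=0).mean() / alpha
    return 0.5 * (s1_to_s2 + s2_to_s1)

# Example: similarity between a 4-element image set and a 4-element caption set.
score = smooth_chamfer_similarity(torch.randn(4, 256), torch.randn(4, 256))
```

Because the log-sum-exp distributes gradient over all elements instead of only the best match, it counteracts the sparse-supervision side effect described above.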
Li_Guided_Recommendation_for_Model_Fine-Tuning_CVPR_2023
Abstract
Model selection is essential for reducing the cost of searching for the best pre-trained model over a large-scale model zoo for a downstream task. After analyzing recent hand-designed model selection criteria with 400+ ImageNet pre-trained models and 40 downstream tasks, we find that they can fail due to invalid assumptions and intrinsic limitations. Prior knowledge about model capacity and the dataset also cannot be easily integrated into the existing criteria. To address these issues, we propose to cast model selection as a recommendation problem and to learn from past training history. Specifically, we characterize the meta information of datasets and models as features, and use their transfer learning performance as the guided score. With thousands of historical training jobs, a recommendation system can be learned to predict the model selection score given the features of the dataset and the model as input. Our approach enables integrating existing model selection scores as additional features and scales with more historical data. We evaluate the prediction accuracy with 22 pre-trained models over 40 downstream tasks. With extensive evaluations, we show that the learned approach can outperform prior hand-designed model selection methods significantly when relevant training history is available.
1. Introduction
Much of the success of deep learning can be ascribed to its flexibility: one can train a neural network on a task, and then use it on a different one, typically after fine-tuning. There are currently two trends for scaling this practice. One is to pre-train a large number of specialized models (a "Model Zoo" [10]) and then select one to fine-tune once the downstream task of interest becomes manifest, typically with a smaller fine-tuning dataset. Another is to pre-train a single "Foundation Model" which is then used to support any and all downstream tasks [47, 57].

Without additional specifications, the second case is a subset of the first, for one can take the Model Zoo and Model Selection (MS) mechanism and call it a single model. For this reason, Foundation Models are characterized as homogeneous and task-agnostic, where homogeneity refers to a single neural network architecture, in contrast with the heterogeneous collection of models in a zoo. Even with this restriction, the model zoo is more general, for nothing prevents a Foundation Model from being part of a zoo. In addition, selecting a smaller dedicated model pretrained for a task can be much more efficient than using a giant monolithic model. For these reasons, we focus on model selection over a large heterogeneous model zoo for fine-tuning as the key solution for scaling inference to a wide variety of downstream tasks.

Brute-force model selection [1, 12] requires fine-tuning each pre-trained model on the task of interest, and then ranking them using the test error on a held-out dataset as a model selection score. This is not feasible for large model zoos. Current model-selection methods therefore aim to predict the model selection score without actually fine-tuning. However, current model selection methods do not take into explicit account even basic characteristics of the fine-tuning dataset, such as the number of classes or the number of images, nor of the pre-trained model, such as the model family, the size of the input, the number of parameters, and the dataset on which it is pre-trained. While coarse, these features can affect the best model to fine-tune, since a mismatch between fine-tuning dataset size and pre-trained model, or input dimensions, or number of classes, can influence the success of downstream performance.

Instead of proposing yet another model selection score, we propose re-framing model selection as a recommender system, and directly predict the selection score and corresponding ranking from whatever existing model selection scores are readily available, in addition to whatever coarse features a user deems informative, which may be context dependent, as some users may wish to penalize large models, or models that require high-resolution input. Such features help guide the model selection using criteria beyond raw downstream validation error. For this reason, we refer to our recommendation approach as guided, in addition to trained. We find that incorporating model size, dataset size, cardinality of the hypothesis set, and other simple features already improves the prediction of the expected model selection score compared to current model selection methods.
Coarse features, such as the index of the model class family (convolutional, fully-connected, residual, attention-based, etc.), can help associate certain architectural inductive biases, such as translation vs. permutation invariance, with the best-fitting downstream tasks, for instance object detection vs. image inpainting or segmentation.

Our contribution can be summarized as:
• We conduct a comprehensive analysis of existing model selection approaches with a large heterogeneous model zoo and confirm their limitations. We find feature-based model selection becomes inaccurate when the target dataset is different from the source task, and the effect of model initialization diminishes as the number of images grows. The useful meta information and prior knowledge in the training history are often neglected and cannot be easily integrated into existing model selection criteria.
• We cast the model selection problem as model recommendation by learning from past training history. The meta information of both dataset and model is embedded as features, and a recommender system can be learned to predict the performance. Existing model selection scores can be used as additional features, which makes the framework compatible with existing approaches. We show significant performance improvement over traditional model selection methods when historical training data is available and relevant.
In the next section we formalize the MS problem and discuss the issues with existing approaches. In Section 3 we describe our approach to casting it as a recommendation system, and evaluate it in the following section.
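A minimal sketch of the guided-recommendation idea follows: (model, dataset) pairs are featurized with coarse meta information plus an existing hand-designed selection score, and a regressor trained on past fine-tuning history predicts the selection score for new candidates. The feature set, the gradient-boosted regressor, and the toy data are assumptions for illustration, not the paper's exact recommender.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n_history = 500  # past (model, dataset, fine-tuned accuracy) records

# Coarse meta features of a (model, dataset) pair plus an existing selection score.
X = np.column_stack([
    rng.integers(0, 5, n_history),          # model family index (conv, residual, attention, ...)
    rng.uniform(1, 300, n_history),         # model size (millions of parameters)
    rng.uniform(1e3, 1e6, n_history),       # number of downstream training images
    rng.integers(2, 1000, n_history),       # number of downstream classes
    rng.uniform(0, 1, n_history),           # a hand-designed model selection score
])
y = rng.uniform(0.3, 0.95, n_history)       # historical fine-tuning accuracy (toy values)

recommender = GradientBoostingRegressor().fit(X, y)

# At selection time: featurize every candidate model against the new dataset
# and rank candidates by the predicted fine-tuning performance.
candidates = X[:22]
ranking = np.argsort(-recommender.predict(candidates))
print(ranking[:5])
```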
Kim_Region-Aware_Pretraining_for_Open-Vocabulary_Object_Detection_With_Vision_Transformers_CVPR_2023
Abstract
We present Region-aware Open-vocabulary Vision Transformers (RO-ViT), a contrastive image-text pretraining recipe to bridge the gap between image-level pretraining and open-vocabulary object detection. At the pretraining phase, we propose to randomly crop and resize regions of positional embeddings instead of using the whole image positional embeddings. This better matches the use of positional embeddings at region-level in the detection finetuning phase. In addition, we replace the common softmax cross entropy loss in contrastive learning with focal loss to better learn the informative yet difficult examples. Finally, we leverage recent advances in novel object proposals to improve open-vocabulary detection finetuning. We evaluate our full model on the LVIS and COCO open-vocabulary detection benchmarks and zero-shot transfer. RO-ViT achieves a state-of-the-art 32.1 APr on LVIS, surpassing the best existing approach by +5.8 points in addition to competitive zero-shot transfer detection. Surprisingly, RO-ViT improves the image-level representation as well and achieves the state of the art on 9 out of 12 metrics on COCO and Flickr image-text retrieval benchmarks, outperforming competitive approaches with larger models.
1. Introduction
The ability to detect objects in the visual world is a hallmark of computer vision and machine intelligence. It is key to enable many applications, e.g., autonomous agents adapting to new environments with many novel objects, or a shopping system handling fine-grained user queries such as "fedora" and "bonnet" outside of the training vocabulary. However, modern object detectors typically rely on manual annotations of regions and class labels, which limits their vocabulary size to an order of 10^3, and it is prohibitively expensive to scale up further. This is orders of magnitude smaller than the objects we encounter in the visual world.

Recently, the open-vocabulary detection task (OVD) has been proposed to overcome such limitation by leveraging abundant image-text pairs for training and ingesting the text queries provided by users at test time [54]. By treating the categories as text embeddings rather than discrete ids, open-vocabulary detectors can flexibly predict a wide range of objects unseen during the training time. Most existing works leverage image-text pre-training, which contains rich semantic knowledge of open-vocabulary concepts. Knowledge distillation [13, 19], weak supervision [63], self-training [41, 57, 60], and frozen models [29] have been proposed, and CNN backbones are most commonly used. With the growing popularity of vision transformers in image understanding [12, 32, 56], multimodal [1, 2, 47], and self-supervised tasks [5, 6, 22], it is important to understand how to build capable open-vocabulary detectors with vision transformers [12, 35].

[Figure 1. Existing vision-language models are designed for image-level tasks, e.g., classification and retrieval. In this paper, we aim to enhance the image-text pretraining for a region-level downstream task: open-vocabulary object detection. At pretraining, we propose to randomly crop and resize regions of positional embeddings (PE) instead of using the whole image PE. This better matches the use of PE at region-level in the detection finetuning and results in better performance in detection and, surprisingly, also in the retrieval task.]

To our best knowledge, all existing works assume pretrained Vision-Language Models (VLM) are given, and develop adaptation or finetuning recipes to bridge the gap between image-level pretraining and object-level finetuning [13, 19, 41, 57, 60]. Since the VLMs are designed for image-level tasks, e.g., classification and retrieval, the notion of objects/regions is not adequately utilized in the pretraining process. We believe it would be beneficial for open-vocabulary detection if we bake in locality information in the image-text pretraining.

We present RO-ViT, a simple recipe to pretrain vision transformers in a region-aware manner for open-vocabulary object detection. Standard pretraining typically uses full-image positional embeddings, which do not generalize well to detection tasks.
Thus, we propose a novel positional embedding scheme called "Cropped Positional Embedding", which better matches the use of region crops in detection finetuning (see Fig. 1). In addition, we found it beneficial to replace the softmax cross entropy loss with focal loss in contrastive learning, which affords us more control to learn from harder and more informative examples. Finally, we leverage recent advances in novel object proposals [28] to improve open-vocabulary detection finetuning. The motivation is that existing approaches often miss novel objects in the object proposal stage because the proposals tend to overfit to the foreground categories.

We evaluate the approach on the standard LVIS and COCO open-vocabulary benchmarks. On LVIS, our best model achieves a state-of-the-art 32.1 APr at the system level, surpassing the best existing approach by +5.8 APr. Compared to the best existing ViT-based approach, ours outperforms by a healthy margin of +6.5 APr. Our LVIS-trained model outperforms existing baselines on transfer detection to Objects365 without re-training. Although not explicitly optimized for retrieval, RO-ViT surprisingly achieves the state-of-the-art performance on 9 out of 12 metrics in image-text retrieval benchmarks and outperforms strong baselines with standard positional embeddings, cross entropy loss, and larger model capacity. We conduct ablations to confirm the benefits of each proposed component. Visualization of the learnt positional embeddings shows that our approach (CPE) leads to more symmetrical and diverse patterns than the baseline.

In summary:
• We propose RO-ViT to bridge the positional embeddings in image-text pretraining to open-vocabulary detection finetuning.
• We show that image-text pretraining with focal loss is more effective than the existing softmax CE loss.
• We improve the open-vocabulary detection finetuning recipe with novel object proposals.
• RO-ViT achieves the state of the art on the LVIS open-vocabulary detection benchmark, and on 9 out of 12 metrics on COCO and Flickr image-text retrieval benchmarks.
We hope these findings will facilitate the research community to further explore open-vocabulary detection from the perspective of image-text pretraining, with potential benefits for both region-level and image-level tasks.
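The snippet below sketches the Cropped Positional Embedding idea in PyTorch: a random region of the whole-image positional embedding grid is cropped and resized back to the full grid so that image-level pretraining sees positional embeddings the way region crops do at detection time. The crop-sampling strategy and scale range are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def cropped_positional_embedding(pe: torch.Tensor, min_scale: float = 0.1) -> torch.Tensor:
    """Crop a random region of the whole-image PE grid and resize it to full size.
    pe: (1, H*W, D) positional embeddings of a square H x W patch grid."""
    _, n, d = pe.shape
    h = w = int(n ** 0.5)
    grid = pe.reshape(1, h, w, d).permute(0, 3, 1, 2)             # (1, D, H, W)

    # Sample a random crop; each side keeps at least `min_scale` of the grid.
    ch = torch.randint(max(1, int(h * min_scale)), h + 1, (1,)).item()
    cw = torch.randint(max(1, int(w * min_scale)), w + 1, (1,)).item()
    top = torch.randint(0, h - ch + 1, (1,)).item()
    left = torch.randint(0, w - cw + 1, (1,)).item()

    crop = grid[:, :, top:top + ch, left:left + cw]
    full = F.interpolate(crop, size=(h, w), mode="bilinear", align_corners=False)
    return full.permute(0, 2, 3, 1).reshape(1, n, d)              # back to (1, H*W, D)

# Example with a 14 x 14 patch grid and 768-dim embeddings.
cpe = cropped_positional_embedding(torch.randn(1, 14 * 14, 768))
```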
Li_Compressing_Volumetric_Radiance_Fields_to_1_MB_CVPR_2023
Abstract

Approximating radiance fields with discretized volumetric grids is one of the promising directions for improving NeRFs, represented by methods like DVGO, Plenoxels and TensoRF, which achieve super-fast training convergence and real-time rendering. However, these methods typically require a tremendous storage overhead, costing up to hundreds of megabytes of disk space and runtime memory for a single scene. We address this issue in this paper by introducing a simple yet effective framework, called vector quantized radiance fields (VQRF), for compressing these volume-grid-based radiance fields. We first present a robust and adaptive metric for estimating redundancy in grid models and performing voxel pruning by better exploring intermediate outputs of volumetric rendering. A trainable vector quantization is further proposed to improve the compactness of grid models. In combination with an efficient joint tuning strategy and post-processing, our method can achieve a compression ratio of 100× by reducing the overall model size to 1 MB with negligible loss on visual quality. Extensive experiments demonstrate that the proposed framework is capable of achieving unrivaled performance and good generalization across multiple methods with distinct volumetric structures, facilitating the wide use of volumetric radiance field methods in real-world applications. Code is available at https://github.com/AlgoHunt/VQRF.
1. Introduction

Novel view synthesis aims to realize photo-realistic rendering of a 3D scene at unobserved viewpoints, given a set of images recorded from multiple views with known camera poses. The topic has growing importance because of its potential use in a wide range of Virtual Reality and Augmented Reality applications. Neural radiance fields (NeRF) [29] have demonstrated compelling ability on this topic by modelling and rendering 3D scenes effectively through the use of deep neural networks, which are learned to map each 3D location, given a viewing direction, to its corresponding view-dependent color and volume density according to volumetric rendering techniques [27]. The rendering process relies on sampling a huge number of points and feeding them through a cumbersome network, incurring considerable computational overhead during training and inference. Recent progress following radiance field reconstruction shows that integrating voxel-based structures [23] into the learning of representations can significantly boost training and inference efficiency. These volumetric radiance field methods typically store features on voxels and retrieve sampling points (including color features and volume densities) by performing efficient trilinear interpolation without a neural network [44] or equipped with only a lightweight neural network [37] instead of cumbersome networks. However, the use of volumetric representations inevitably introduces considerable storage cost, e.g., costing over one hundred megabytes to represent a scene, which is prohibitive in real-world applications.

∗denotes equal contribution

[Figure 1. The pipeline can realize a 100× compression rate on volumetric models while highly preserving the rendering quality.]

In this paper, we aim to counteract the storage issue of representations induced by using voxel grids while retaining rendering quality. In order to better understand the characteristics of grid models, we estimated the distribution of voxel importance scores (shown in Fig. 4) and observed that only 10% of voxels contribute over 99% of the importance scores of a grid model, indicating that large redundancy exists in the model. Inspired by traditional techniques of deep network compression [14], we present an effective and efficient framework for compressing volumetric radiance fields, allowing about 100× storage reduction over original grid models, with competitive rendering quality. The illustration of the idea and the framework are shown in Fig. 2 and Fig. 3. The proposed framework is quite general rather than restricted to a certain architecture. It is comprised of three steps, i.e., voxel pruning, vector quantization and post-processing.

[Figure 2. (a) NeRF learns a mapping from 3D coordinate (x, y, z) and viewing direction (θ, ϕ) to color and density (r, g, b, σ). (b) Volumetric NeRF optimizes volumetric grids and estimates color features F_{x,y,z} and density for sampling points via tri-linear interpolation to get the final color through (or without) tiny MLPs. (c) VQRF compresses voxel features into a codebook and stores a per-voxel k-bit mapping index, pointing to the codebook consisting of 2^k codes.]
Voxel pruning is used to omit the least important voxels, which dominate the model size while contributing little to the final rendering. We introduce an adaptive strategy for pruning-threshold selection with the aid of a cumulative score rate metric, making the pruning strategy generalize across different scenes and base models. In order to further reduce the model size, we propose to encode important voxel features into a compact codebook by developing an importance-aware vector quantization with an efficient optimization strategy, and use a joint tuning mechanism to enable the compressed models to approach the rendering quality of the original ones. We finally perform a simple post-processing step to obtain a model with a quite small storage cost. For example, as shown in Fig. 1, a volumetric model with a storage cost of 104 MB and a rendering quality of PSNR 32.66 can be compressed into a tiny model costing 1.05 MB with a negligible visual quality loss (PSNR 32.65). We conduct extensive experiments and empirical studies to validate the method, showing the effectiveness and generalization of the proposed compression pipeline on a wide range of volumetric methods and varying scenarios. A minimal sketch of the pruning and codebook steps is given below.
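The following sketch illustrates the two core compression steps in isolation, using NumPy and scikit-learn. The cumulative-score threshold, the feature dimensionality, and the plain k-means codebook are illustrative stand-ins for the paper's importance-aware, trainable vector quantization.

```python
# Minimal sketch of the two core compression steps described above:
# (1) pruning voxels by cumulative importance and (2) quantizing the kept
# voxel features into a small codebook with per-voxel indices.
import numpy as np
from sklearn.cluster import KMeans

def prune_by_cumulative_score(importance: np.ndarray, keep_rate: float = 0.99):
    """Return a boolean mask keeping the fewest voxels whose importance
    scores sum to `keep_rate` of the total."""
    order = np.argsort(importance)[::-1]
    csum = np.cumsum(importance[order]) / importance.sum()
    n_keep = int(np.searchsorted(csum, keep_rate)) + 1
    mask = np.zeros_like(importance, dtype=bool)
    mask[order[:n_keep]] = True
    return mask

def quantize_features(features: np.ndarray, bits: int = 8):
    """Map each kept voxel feature to one of 2^bits codes.
    Storage then reduces to the codebook plus k-bit indices."""
    k = 2 ** bits
    km = KMeans(n_clusters=k, n_init=4, random_state=0).fit(features)
    return km.cluster_centers_, km.labels_.astype(np.uint16)

# Example with random data standing in for a grid model.
imp = np.random.rand(10000) ** 8          # heavy-tailed importance scores
feat = np.random.randn(10000, 12)         # per-voxel feature vectors
mask = prune_by_cumulative_score(imp)
codebook, idx = quantize_features(feat[mask])
print(mask.mean(), codebook.shape, idx.shape)
```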
Kotwal_Passive_Micron-Scale_Time-of-Flight_With_Sunlight_Interferometry_CVPR_2023
Abstract We introduce an interferometric technique for passive time-of-flight imaging and depth sensing at micrometer ax- ial resolutions. Our technique uses a full-field Michelson interferometer, modified to use sunlight as the only light source. The large spectral bandwidth of sunlight makes it possible to acquire micrometer-resolution time-resolved scene responses, through a simple axial scanning opera- tion. Additionally, the angular bandwidth of sunlight makes it possible to capture time-of-flight measurements insensi- tive to indirect illumination effects, such as interreflections and subsurface scattering. We build an experimental pro- totype that we operate outdoors, under direct sunlight, and in adverse environment conditions such as machine vibra- tions and vehicle traffic. We use this prototype to demon- strate, for the first time, passive imaging capabilities such as micrometer-scale depth sensing robust to indirect illumi- nation, direct-only imaging, and imaging through diffusers.
1. Introduction

The recovery of 3D information for an imaged scene is one of the core problems of optics, imaging, computer vision, and other sciences. In particular, the ability to sense depth at axial resolutions of a few micrometers is of great importance for critical application areas such as medicine, precision fabrication, material science, and robotics.

Existing contactless imaging techniques for micron-scale 3D sensing, such as interferometry and microscopy, require active illumination, most commonly from a coherent source. This makes these techniques impractical for use outdoors, in the presence of strong ambient illumination that overwhelms the active source, or in power-constrained applications. On the other hand, existing passive 3D sensing techniques, such as multi-view stereo, shape from shading, and depth from (de)focus, achieve resolutions of hundreds of micrometers at best, placing them out of scope for applications requiring micron-scale resolutions.

We change this state of affairs by introducing a completely passive micron-scale 3D sensing technique. Our technique is interferometric, and uses sunlight as the only light source, to capture full-frame time-of-flight measurements at axial resolutions of 5 micrometers. Our technique additionally takes advantage of the spatial incoherence of sunlight to enable robust 3D sensing in the presence of severe indirect illumination effects (interreflections, subsurface scattering), and even tasks such as imaging and 3D sensing through optically-thin scattering layers.

[Figure 1. Using sunlight interferometry to passively reconstruct part of a circuit board. (a) and (b) show a schematic and photograph of the sunlight interferometer we build for passive time-of-flight imaging. We use this system to reconstruct part of a Raspberry Pi circuit board that has multiple resistors, soldering pads, and tracks. (c) and (d) show a picture of the scene as seen through the imaging camera, along with an inset highlighting fine geometric features. (e) shows the estimated depth map, and (f) and (g) the corresponding rendered 3D surface. Our technique reconstructs fine features such as the PCB tracks and through-holes, despite operating outdoors under adverse environment conditions.]

To demonstrate these capabilities, we build an experimental prototype that we operate outdoors, under direct sunlight and adverse experimental conditions (wind, machine vibrations, vehicle traffic). This is in stark contrast with previous demonstrations of interferometric sensing, which were only possible under carefully-controlled lab conditions (dark room, vibration-isolated optical tables, no air flow). Even under these adverse conditions, our experiments show that it is possible to perform passive depth scanning, at pixel-level lateral resolution and micrometer axial resolution, for objects that are challenging to scan even with active illumination techniques (translucent, metallic, occluded by thin scatterers).
More broadly, our results open the door for the deployment of interferometric techniques in uncontrolled outdoor environments, and for the development of passive computational light transport capabilities such as direct-only imaging, imaging through scattering, and transient imaging. To facilitate future research towards these directions, we provide setup details, reconstruction code, and data in the supplement and project website.¹ A toy numerical illustration of the underlying axial-scanning principle is given below.

Potential impact. The ability to perform interferometry outdoors can be useful for applications such as field inspection (e.g., detecting tiny but dangerous defects on airplane surfaces), field robotics (e.g., high-resolution manipulation), and field medicine (e.g., in battlefields, disaster environments, or impoverished regions). In addition to outdoor operation, all of these applications benefit from passive operation: removing the active source allows for lighter, lower-cost, and lower-power depth sensing, all features critical for field operation, which often requires mobility, small form factors, and functionality under limited electricity access. This is particularly important considering that the source is typically the system component consuming the most power, and also one of the costliest and bulkiest.
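As a toy illustration of the axial-scanning principle behind the technique, the snippet below simulates a single pixel's interferogram under broadband illumination and recovers depth as the scan position with the largest fringe envelope. All constants (wavelength, coherence length, scan range, noise level) are illustrative, not measured values from the prototype.

```python
# Toy numerical illustration: with broadband (sunlight-like) illumination,
# interference fringes only appear when the reference mirror position matches
# the surface depth within the coherence length, so per-pixel depth can be
# read off as the scan position with the largest fringe envelope.
import numpy as np

lam = 0.55e-6                       # mean wavelength (m)
lc = 2.0e-6                         # coherence length (m), short for broadband light
z = np.arange(0, 50e-6, 0.05e-6)    # reference-mirror scan positions (m)
z_true = 23.4e-6                    # unknown surface depth for this pixel (m)

# Simulated interferogram: DC term + fringes windowed by a coherence envelope.
envelope = np.exp(-((z - z_true) / lc) ** 2)
intensity = 1.0 + envelope * np.cos(4 * np.pi * (z - z_true) / lam)
intensity += 0.02 * np.random.randn(z.size)          # measurement noise

# Recover the fringe envelope: remove the DC term, rectify, and smooth.
ac = np.abs(intensity - intensity.mean())
kernel = np.ones(25) / 25.0
env_est = np.convolve(ac, kernel, mode='same')

z_hat = z[np.argmax(env_est)]
print(f"true depth {z_true*1e6:.2f} um, estimated {z_hat*1e6:.2f} um")
```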
Liu_DualVector_Unsupervised_Vector_Font_Synthesis_With_Dual-Part_Representation_CVPR_2023
Abstract Automatic generation of fonts can be an important aid to typeface design. Many current approaches regard glyphs as pixelated images, which present artifacts when scaling and inevitable quality losses after vectorization. On the other hand, existing vector font synthesis methods either fail to represent the shape concisely or require vector supervision during training. To push the quality of vector font synthesis to the next level, we propose a novel dual-part representa- tion for vector glyphs, where each glyph is modeled as a collection of closed “positive” and “negative” path pairs. The glyph contour is then obtained by boolean operations on these paths. We first learn such a representation only from glyph images and devise a subsequent contour refine- ment step to align the contour with an image representation to further enhance details. Our method, named DualVector, outperforms state-of-the-art methods in vector font synthe- sis both quantitatively and qualitatively. Our synthesized vector fonts can be easily converted to common digital font formats like TrueType Font for practical use. The code is released at https://github.com/thuliu-yt16/dualvector.
1. Introduction

Fonts with various styles play an important role in content display and distribution. Excellent font design is time-consuming and labor-intensive. Recent machine learning innovations have made font generation possible, but how to automatically generate high-quality vector fonts remains a task of practical importance in the artistic, computer graphics, and vision communities.

∗Corresponding author

Benefiting from the development of image generation techniques, mainstream font synthesis methods [2, 12, 24, 41, 42] can generate pixelated glyph images. Despite the promising quality, images of glyphs incur aliasing artifacts on edges when discretely sampled, and thus are not competent for high-quality rendering or printing at arbitrary resolutions. To alleviate this problem, some methods [7, 34] adopt coordinate-based neural networks to model a glyph as a continuous neural field; such networks have also shown great potential in modeling 3D geometry and scenes [7, 30, 32]. Although glyphs represented by an implicit field can be rendered at arbitrary resolutions, it is hard to preserve details in high-frequency regions such as edges and corners, not to mention the high computational cost, as the network needs to be evaluated for every pixel. Researchers have made much effort to directly synthesize vector fonts [4, 27, 33, 39] in recent years, with the main difficulty lying in finding a representation of vector graphics that can be encoded or decoded effectively in a deep learning framework. One typical approach represents a vector shape as a sequence of drawing commands and adopts sequence modeling techniques such as recurrent networks and transformers. The drawbacks are twofold: (1) Modeling command sequences can be much harder than modeling images. There are infinitely many command sequences that correspond to the same-looking shape, which brings ambiguities in learning and makes it hard to construct an effective manifold for valid glyph shapes. (2) Ground-truth drawing commands are often required to provide sufficient supervision for high-quality modeling and synthesis.

To overcome these challenges, we first propose a dual-part vector font representation, where each glyph shape is the union of a fixed number of dual parts. Each dual part is formed by the subtraction of a "positive" and a "negative" geometric primitive. While there are many choices for the geometric primitives [6, 8, 25], we adopt closed Bézier paths for their great representational ability. They are also widely supported in digital font formats, which makes it easy to convert our representation to these formats for practical use. We reduce the problem of predicting complicated drawing command sequences to predicting multiple basic primitives. From this perspective, both manifold learning and latent space interpolation become more feasible.

Based on the dual-part representation, we introduce DualVector, a method to learn such a representation for high-quality font modeling and synthesis from only glyph images, without any vector supervision. A straightforward way to achieve this is to directly optimize the parameters of the Bézier curves with differentiable rendering techniques for vector graphics [22].
However, this approach easily gets stuck in local minima, as valuable gradients are only defined at shape intersections. Taking inspiration from implicit field training for 2D and 3D shapes [6, 25, 32], we supervise the occupancy value derived from the analytical expression of the Bézier curves and adopt an initialization strategy based on the unsigned distance field (UDF) to provide dense gradients across the entire pixel space. For local detail fidelity, we also train a glyph image generation model and devise a subsequent contour refinement step to align the contour of the vector shape with that of the image by differentiable rendering [22]. We compare our approach with state-of-the-art methods in font modeling and generation and demonstrate the superior quality of our vector font outputs. Our main contributions are:
• A new dual-part font representation based on boolean operations of Bézier paths, which enables efficient shape modeling and unsupervised manifold learning (a minimal sketch of the dual-part composition follows this list).
• A method named DualVector that models both the dual-part and pixelated representations and introduces a contour refinement step to obtain vector fonts with richer details, as well as a UDF initialization strategy for better convergence.
• DualVector achieves state-of-the-art quality in font modeling and generation, with outputs that can be easily converted to common digital font formats.
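The sketch below illustrates the dual-part boolean composition: occupancy is the union, over parts, of a positive primitive minus a negative primitive. For brevity the closed primitives are circles with a soft inside/outside indicator rather than the closed Bézier paths used by DualVector; only the compositional structure is meant to carry over.

```python
# Minimal sketch of the dual-part composition described above: a glyph's
# occupancy is the union over parts of (positive primitive minus negative
# primitive). Circles stand in for closed Bézier paths for brevity.
import numpy as np

def circle_occ(xy, center, radius, sharpness=50.0):
    """Soft inside/outside indicator in [0, 1] for a closed primitive."""
    d = radius - np.linalg.norm(xy - center, axis=-1)
    return 1.0 / (1.0 + np.exp(-sharpness * d))

def dual_part_occupancy(xy, parts):
    """parts: list of ((pos_center, pos_radius), (neg_center, neg_radius)).
    Returns the soft occupancy of the union of (pos minus neg) parts."""
    occ = np.zeros(xy.shape[:-1])
    for (pc, pr), (nc, nr) in parts:
        part = circle_occ(xy, pc, pr) * (1.0 - circle_occ(xy, nc, nr))
        occ = np.maximum(occ, part)          # boolean union via max
    return occ

# Rasterize an "O"-like glyph made of one dual part on a 64x64 grid.
ys, xs = np.mgrid[0:64, 0:64] / 63.0
grid = np.stack([xs, ys], axis=-1)
parts = [((np.array([0.5, 0.5]), 0.40), (np.array([0.5, 0.5]), 0.25))]
img = dual_part_occupancy(grid, parts)
print(img.shape, float(img.min()), float(img.max()))
```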
Liao_High-Fidelity_Clothed_Avatar_Reconstruction_From_a_Single_Image_CVPR_2023
Abstract

This paper presents a framework for efficient 3D clothed avatar reconstruction. By combining the advantages of the high accuracy of optimization-based methods and the efficiency of learning-based methods, we propose a coarse-to-fine way to realize high-fidelity clothed avatar reconstruction (CAR) from a single image. In the first stage, we use an implicit model to learn the general shape of a person in the canonical space in a learning-based way, and in the second stage, we refine the surface detail by estimating the non-rigid deformation in the posed space in an optimization-based way. A hyper-network is utilized to generate a good initialization so that the convergence of the optimization process is greatly accelerated. Extensive experiments on various datasets show that the proposed CAR successfully produces high-fidelity avatars for arbitrarily clothed humans in real scenes. The code will be released at https://github.com/TingtingLiao/CAR.
1. Introduction

Clothed avatar reconstruction is critical to a variety of applications in 3D content creation, such as video gaming, online meetings [54, 55], virtual try-on and the movie industry [10, 21, 39]. Early attempts are based on expensive scanning devices such as 3D and 4D scanners, or complicated multi-camera studios with careful capturing processes. While highly accurate reconstruction results can be obtained from such recording equipment, these setups are inflexible and even infeasible in many applications. An alternative is to collect data using depth sensors [31, 42], which are, however, still less ubiquitous than RGB cameras. A more practical and low-cost way is to create an avatar from an image captured by RGB cameras or mobile phones.

*Equal contribution. †Corresponding author.

Monocular RGB reconstruction [19, 37, 51, 59] has been extensively investigated and shows promising results. ARCH [22] is the first method that reconstructs a clothed avatar from a monocular image. Due to the disadvantage of depth ambiguity, a number of methods that create an avatar from a video have been proposed to resolve the problem. Most existing monocular video-based methods [2, 3, 7, 9, 10, 14, 15, 46] are typically restricted to parametric human body prediction, which lacks geometric details like the cloth surface. How to create a high-fidelity avatar from an in-the-wild image, with consistent surface details, is still a great challenge.

In this work, we focus on shape recovery and propose an efficient high-fidelity clothed human avatar creation method from a single image in a coarse-to-fine way. The method consists of a learning-based canonical implicit model and an optimization-based normal refinement process. The canonical implicit model uses the canonical normal, inverse-transformed from the original space, as a geometric feature to help grasp the clothing detail of the general shape in canonical space. Unlike occupancy-based methods [22, 36, 37], we adopt a Signed Distance Function (SDF) to approximate the canonical human body surface, which gains advantages in learning the human body at the surface level instead of the point level, so that the reconstruction accuracy is improved. In the normal refinement process, an SDF is learned to approximate the target surface in the posed space by enforcing its surface normals to be close to the predicted normal image. Compared with mesh-based refinement, our method can obtain more realistic results without artifacts, owing to the flexibility of the implicit representation. Moreover, to learn the SDF of the normal refinement process efficiently, we propose a meta-learning-based hyper-network for parameter initialization to accelerate the convergence of the normal refinement process.

[Figure 1. Images to avatars. Given an image of a person in an unconstrained pose (a), our method reconstructs 3D clothed avatars in both the original posed space (b) and the canonical space (c) and can repose the human body from the canonical mesh (d).]

Extensive experiments have been conducted on the MVP-Human [60] and RenderPeople [1] datasets. Both qualitative and quantitative results demonstrate that our proposed method outperforms related avatar reconstruction methods.
The main contributions are summarized as follows:
• We propose a coarse-to-fine framework for efficient clothed avatar reconstruction from a single image. Thanks to the integration of image and geometry features, as well as the meta-learning, it achieves high-fidelity clothed avatar reconstruction efficiently.
• We design the canonical implicit regression model and the normal refinement process. The former fuses all observations into the canonical space where the general shape of a person is depicted, and the latter learns pose-dependent deformation (a minimal sketch of the SDF normal term used in such refinement is given after this list).
• Results validate that our method can reconstruct high-quality 3D humans in both posed and canonical space from a single in-the-wild image.
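As a small illustration of the normal-refinement idea, the sketch below obtains surface normals of an implicit SDF as normalized autograd gradients and penalizes their deviation from predicted normals. The tiny MLP, the random sampling, and the loss form are placeholders, not the CAR architecture.

```python
# Minimal sketch of a normal term for SDF refinement: surface normals of an
# implicit SDF are the (normalized) gradients of the SDF with respect to the
# query points, obtained here with autograd.
import torch
import torch.nn as nn
import torch.nn.functional as F

sdf = nn.Sequential(nn.Linear(3, 64), nn.Softplus(beta=100),
                    nn.Linear(64, 64), nn.Softplus(beta=100),
                    nn.Linear(64, 1))

def sdf_normals(points: torch.Tensor) -> torch.Tensor:
    """points: (N, 3) query locations; returns unit normals (N, 3)."""
    points = points.requires_grad_(True)
    d = sdf(points)
    grad = torch.autograd.grad(d.sum(), points, create_graph=True)[0]
    return F.normalize(grad, dim=-1)

# Refinement objective (sketch): make SDF normals at surface samples agree
# with normals predicted from the image.
pts = torch.rand(1024, 3)                                  # surface samples (placeholder)
target = F.normalize(torch.rand(1024, 3), dim=-1)          # predicted normals (placeholder)
n = sdf_normals(pts)
loss = (1 - (n * target).sum(-1)).mean()
loss.backward()
```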
Li_Discriminative_Co-Saliency_and_Background_Mining_Transformer_for_Co-Salient_Object_Detection_CVPR_2023
Abstract Most previous co-salient object detection works mainly focus on extracting co-salient cues via mining the con- sistency relations across images while ignore explicit ex- ploration of background regions. In this paper, we pro- pose a Discriminative co-saliency and background Mining Transformer framework (DMT) based on several econom- ical multi-grained correlation modules to explicitly mine both co-saliency and background information and effec- tively model their discrimination. Specifically, we first pro- pose a region-to-region correlation module for introduc- ing inter-image relations to pixel-wise segmentation fea- tures while maintaining computational efficiency. Then, we use two types of pre-defined tokens to mine co-saliency and background information via our proposed contrast-induced pixel-to-token correlation and co-saliency token-to-token correlation modules. We also design a token-guided fea- ture refinement module to enhance the discriminability of the segmentation features under the guidance of the learned tokens. We perform iterative mutual promotion for the seg- mentation feature extraction and token construction. Exper- imental results on three benchmark datasets demonstrate the effectiveness of our proposed method. The source code is available at: https://github.com/dragonlee258079/DMT.
1. Introduction

Unlike standard Salient Object Detection (SOD) [13, 21–23, 37, 45], which detects salient objects in a single image, Co-Salient Object Detection (CoSOD) aims to detect common salient objects across a group of relevant images. It often faces the following two challenges: 1) The foreground (FG) in CoSOD refers to co-salient objects, which are inherently hard to detect since they should satisfy both intra-group commonality and intra-image saliency. 2) The background (BG) in CoSOD might contain common distractions, including extraneous salient objects that are salient but not "common", and similar concomitant objects appearing in multiple images (e.g., performers often appear in guitar images). Such difficult distractors can easily mislead CoSOD models into making false positive predictions. Therefore, effectively exploring both FG and BG and modeling their discrimination, so as to precisely detect co-salient objects while suppressing interference from the BG, is crucial for CoSOD.

†Corresponding author: [email protected]

Although many works have achieved promising performance, most of them [14, 18, 25, 30, 33, 38, 40–42] are devoted to the ingenious mining of FG while ignoring the explicit exploration of BG. They mainly construct positive relations between co-salient objects but pay less attention to modeling negative ones between co-saliency regions and the BG. [16, 34] follow some SOD methods [4, 43] to incorporate BG features for co-saliency representation learning or contrastive learning. However, these methods can be regarded as a univariate FG&BG modeling in which the essential optimization target is limited to FG and there is no explicit BG modeling, thus limiting the discriminative learning capability. To this end, this paper proposes to conduct a bivariate FG&BG modeling paradigm that explicitly models both FG and BG information and effectively facilitates their discriminative modeling.

As for co-saliency (FG) information, most previous works [12, 16, 34, 41] detect it by exploring the inter-image similarity. Calculating the Pixel-to-Pixel (P2P) correlation between 3D CNN features in a group is widely used in many works [12, 16, 34] and has demonstrated its effectiveness. However, this method introduces heavy computation burdens and hinders sophisticated relation modeling. To alleviate this problem, we introduce economical multi-grained correlations among different images and the co-saliency and BG information, thus enabling the modeling of sophisticated relations to extract accurate co-saliency as well as BG knowledge.

Specifically, we construct a Discriminative co-saliency and BG Mining Transformer (DMT) following the paradigm of a semantic segmentation transformer architecture, i.e., MaskFormer [5], which enables explicit co-saliency and BG modeling and the construction of multi-grained correlations. Using this architecture, we decompose the CoSOD modeling into two sub-paths, i.e., generating pixel-wise segmentation feature maps and extracting category information with pre-defined co-saliency and BG detection tokens.
In the first sub-path, to efficiently and thoroughly mine the common cues within the image group, we propose a Region-to-Region correlation (R2R) module to model the inter-image relation and plug it into each decoder layer. In the second sub-path, we transform the pixel-wise features into a co-saliency token and a BG token for each image, abstracting pixel-wise cues into high-level tokens. As such, we achieve sophisticated relation modeling among the tokens and features while largely reducing the computational costs. Concretely, we propose an intra-image Contrast-induced Pixel-to-Token correlation (CtP2T) module to extract the two tokens by considering the contrast relation between co-saliency and BG. Since the co-saliency tokens from CtP2T are separately learned on each image, we further design a Co-saliency Token-to-Token (CoT2T) correlation module to model their common relation.

After obtaining the tokens and pixel-wise features, the MaskFormer [5] architecture adopts a dot product between them to obtain the final segmentation results. However, such a scheme only achieves unidirectional information propagation, i.e., conveying information from the feature maps to the tokens. We argue that the two learned tokens can also be used to improve the discriminability of the pixel-wise features, and we thus propose our Token-Guided Feature Refinement (TGFR) module as a reverse information propagation path (a minimal cross-attention sketch of this idea follows the contribution list below). Concretely, we first use the tokens as guidance to distill co-saliency and BG features from the pixel-wise feature maps, and then enhance the discriminability of the segmentation features between the two detection regions. In this way, the refined features become sensitive to both co-saliency and BG, reducing the effect of ambiguous distractors.

Finally, as shown in Figure 1, our DMT iteratively deploys CtP2T and CoT2T to leverage the segmentation features for updating the tokens, and then adopts TGFR to refine the corresponding decoder feature with the updated tokens. As a result, the learning processes can be effectively promoted, thus obtaining more accurate CoSOD results. In summary, our major contributions are as follows:
• We model CoSOD from the perspective of explicitly exploring both co-saliency and BG information and effectively modeling their discrimination.
• We introduce several computationally economical multi-grained correlation modules, i.e., R2R, CtP2T, and CoT2T, for inter-image and intra-image relation modeling.
• We propose a novel TGFR module that uses the learned tokens as guidance to refine the segmentation features, enhancing their discriminability between co-saliency and BG regions.
• Experimental results demonstrate that our DMT model outperforms previous state-of-the-art results on three benchmark datasets.
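A generic cross-attention block can illustrate the reverse, token-to-feature propagation idea: pixel features attend to the co-saliency and BG tokens and are refined by the result. This is a simplified stand-in for TGFR, not the module proposed in the paper.

```python
# Minimal sketch of using learned co-saliency/BG tokens to refine pixel-wise
# features via cross-attention, illustrating the "reverse" propagation idea.
import torch
import torch.nn as nn

class TokenGuidedRefine(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feats: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
        """feats: (B, H*W, C) pixel features; tokens: (B, 2, C) co-saliency and
        background tokens. Pixels attend to the two tokens and the result is
        added back, sharpening the FG/BG separation of the features."""
        refined, _ = self.attn(query=feats, key=tokens, value=tokens)
        return self.norm(feats + refined)

B, HW, C = 2, 32 * 32, 256
module = TokenGuidedRefine(C)
out = module(torch.randn(B, HW, C), torch.randn(B, 2, C))
print(out.shape)   # torch.Size([2, 1024, 256])
```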
Lin_Interventional_Bag_Multi-Instance_Learning_on_Whole-Slide_Pathological_Images_CVPR_2023
Abstract Multi-instance learning (MIL) is an effective paradigm for whole-slide pathological images (WSIs) classification to handle the gigapixel resolution and slide-level label. Pre- vailing MIL methods primarily focus on improving the fea- ture extractor and aggregator. However, one deficiency of these methods is that the bag contextual prior may trick the model into capturing spurious correlations between bags and labels. This deficiency is a confounder that limits the performance of existing MIL methods. In this paper, we propose a novel scheme, Interventional BagMulti-Instance Learning (IBMIL), to achieve deconfounded bag-level pre- diction. Unlike traditional likelihood-based strategies, the proposed scheme is based on the backdoor adjustment to achieve the interventional training, thus is capable of sup- pressing the bias caused by the bag contextual prior. Note that the principle of IBMIL is orthogonal to existing bag MIL methods. Therefore, IBMIL is able to bring consis- tent performance boosting to existing schemes, achieving new state-of-the-art performance. Code is available at https://github.com/HHHedo/IBMIL .
1. Introduction

The quantitative analysis of whole-slide pathological images (WSIs) is essential for both diagnostic and research purposes [13]. Beyond their complex biological structures, WSIs are quite different from natural images in their gigapixel resolution and expensive annotation, and the task is thus formulated as a multi-instance learning (MIL) [9] problem: treating each WSI as a labeled bag and the corresponding patches as unlabeled instances. Such a de facto paradigm has been demonstrated in extensive tasks on WSIs, e.g., classification [7, 15, 18, 46], regression [40, 41, 48] and segmentation [37]. The prevailing scheme for WSI classification, bag-level MIL, is depicted in Fig. 1a. Given the patchified images as instances, each instance is embedded into a vector by a feature extractor in the first stage. Second, for each bag, the corresponding instance features are aggregated into a bag-level feature for classification.

*Corresponding author.

[Figure 1. (a) Traditional scheme and our interventional training. (b) Dataset bias. (c) Unreasonable attention maps with right predictions.]

More and more new frameworks are proposed to improve the two stages following this scheme [19, 28, 31, 44]. It is widely believed that learning better instance features and modeling more accurate instance relationships can bring better MIL performance. While we have witnessed these great efforts, they still leave the "bag contextual prior" issue unsolved: the information shared by bags of the same class but irrelevant to the label, which may affect the final predictions. For example, in Fig. 1b, due to the dataset bias, most of the instances in the positive bags are stained pink but purple in the negative bags. The co-occurrence of specific color patterns and labels may mislead the model to classify bags by color statistics instead of the key instances: the more pink instances a bag contains, the more likely it is a positive bag. Fig. 1c illustrates another example: even if the prediction is correct, the underlying visual attention is not reasonable, where high attention scores are put on the disease-irrelevant instances outside the blue curves in the bags.

From the causal lens, the bag contextual prior is a confounder that opens up a backdoor path between bags and labels, causing spurious correlations between them. To suppress such a bias, we need a more efficient mechanism for the actual causality between bags and labels, i.e., the bag prediction should be based on the bag's content (e.g., key instances), which cannot be fully achieved only by the above-mentioned new frameworks.

In fact, it is challenging to achieve unbiased bag predictions, as such a bias happens in the data generation: the tissue preparations, staining protocols, digital scanners, etc. In this paper, we propose a novel MIL scheme, Interventional Bag Multi-Instance Learning (IBMIL), to tackle this challenge. In particular, we propose a structural causal model (SCM) [24] to analyze the causalities among the bag contextual prior, bags and labels.
The key difference of IBMIL is that it contains another stage of interventional training (see Fig. 1a, right). Given the aggregator trained in the second stage, instead of directly using it for inference via the likelihood P(Y|X), we apply it for the approximation of confounders. With the confounders observed, we eliminate their effect via the backdoor adjustment formulation [23], where the intuitive understanding is: if a WSI model can learn from "purple" and "pink" positive/negative bags, respectively, then the bag context of color will no longer confound the recognition. Therefore, our IBMIL is fundamentally different from the existing scheme, as we use a causal intervention P(Y|do(X)) for bag prediction; a minimal sketch of such a backdoor-adjusted prediction is given below.

We conduct experiments on two public WSI datasets, i.e., Camelyon16 [1] and TCGA-NSCLC. Experimental results show that IBMIL is agnostic to both feature extractors and aggregation networks, i.e., it brings consistent performance boosting to all compared state-of-the-art MIL methods in the WSI classification tasks. Further ablation studies and analyses demonstrate the effectiveness of interventional training.
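The following sketch shows one common way to realize backdoor adjustment at inference, P(Y|do(X)) = Σ_c P(Y|X, c) P(c): pair a bag feature with every entry of a confounder dictionary and average the resulting predictions under the stratum prior. The classifier, the dictionary construction, and the uniform prior are illustrative assumptions, not the IBMIL implementation.

```python
# Minimal sketch of backdoor-adjusted bag prediction with a small dictionary
# of confounder prototypes (e.g., clustered training-bag features).
import torch
import torch.nn as nn

class DeconfoundedClassifier(nn.Module):
    def __init__(self, dim: int = 512, n_classes: int = 2, n_confounders: int = 8):
        super().__init__()
        # Confounder dictionary and prior; placeholders for clustered features.
        self.register_buffer("confounders", torch.randn(n_confounders, dim))
        self.register_buffer("prior", torch.full((n_confounders,), 1.0 / n_confounders))
        self.head = nn.Linear(2 * dim, n_classes)

    def forward(self, bag_feat: torch.Tensor) -> torch.Tensor:
        """bag_feat: (B, dim). Average the class logits obtained by pairing the
        bag with every confounder stratum, weighted by the stratum prior."""
        B = bag_feat.size(0)
        x = bag_feat.unsqueeze(1).expand(-1, self.confounders.size(0), -1)
        c = self.confounders.unsqueeze(0).expand(B, -1, -1)
        logits = self.head(torch.cat([x, c], dim=-1))            # (B, K, n_classes)
        return (logits * self.prior.view(1, -1, 1)).sum(dim=1)   # (B, n_classes)

model = DeconfoundedClassifier()
print(model(torch.randn(4, 512)).shape)    # torch.Size([4, 2])
```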
Liu_Bitstream-Corrupted_JPEG_Images_Are_Restorable_Two-Stage_Compensation_and_Alignment_Framework_CVPR_2023
Abstract In this paper, we study a real-world JPEG image restora- tion problem with bit errors on the encrypted bitstream. The bit errors bring unpredictable color casts and block shifts on decoded image contents, which cannot be resolved by existing image restoration methods mainly relying on pre-defined degradation models in the pixel domain. To address these challenges, we propose a robust JPEG de- coder, followed by a two-stage compensation and align- ment framework to restore bitstream-corrupted JPEG im- ages. Specifically, the robust JPEG decoder adopts an error-resilient mechanism to decode the corrupted JPEG bitstream. The two-stage framework is composed of the self- compensation and alignment (SCA) stage and the guided- compensation and alignment (GCA) stage. The SCA adap- tively performs block-wise image color compensation and alignment based on the estimated color and block offsets via image content similarity. The GCA leverages the ex- tracted low-resolution thumbnail from the JPEG header to guide full-resolution pixel-wise image restoration in a coarse-to-fine manner. It is achieved by a coarse-guided pix2pix network and a refine-guided bi-directional Lapla- cian pyramid fusion network. We conduct experiments on three benchmarks with varying degrees of bit error rates. Ex- perimental results and ablation studies demonstrate the supe- riority of our proposed method. The code will be released at https://github.com/wenyang001/Two-ACIR .
1. Introduction

Image restoration is a long-standing problem in computer vision that has been extensively studied. Given a degraded image, e.g., a noisy, downscaled, hazy, or masked image, existing image restoration works in image deblurring [2, 23], dehazing [26], inpainting [16, 37], and super-resolution (SR) [6, 36] are capable of restoring the corresponding high-quality counterpart. These methods are mainly based on pre-defined image degradation models in the pixel domain, but few attempts have been made at JPEG image restoration from a corrupted bitstream. The big challenge of bitstream-corrupted image restoration is that the incurred JPEG decoding failures make the decoding process stop at the bit errors, so that the following bits cannot be decoded, as shown in Fig. 1(b1).

∗Corresponding authors

[Figure 1. (a) Our work considers a real-world JPEG image restoration problem with bit errors on the encrypted bitstream, where En/De represent JPEG encoding/decoding and E_K/D_K represent encryption/decryption employed in disks with the secret key K. We propose a robust JPEG decoder, followed by a two-stage compensation and alignment framework, to address this problem. (b) Comparison of the standard decoder results with our robust decoder results and our proposed two-stage framework results. The proposed robust decoder can decode the corrupted JPEG bitstream, and the proposed two-stage framework can ultimately restore high-quality images gradually from the decoded color-casted and misaligned images.]

In the real world, bit errors occur naturally in JPEG bitstreams stored in digital devices, and as memory cells wear out [27], uncorrectable bit errors are exposed externally. NAND flash memory, as a type of non-volatile storage technology, is widely used in portable devices to store users' data. Due to technology trends, it exhibits progressively shorter lifetimes and increasingly relies on error correction codes (ECC) to ensure data storage integrity [20, 24]. It is well known [20, 33] that the raw bit error rate (RBER) of NAND flash memory grows rapidly as the program/erase cycles, temperature, and retention years increase. As a result, bit errors may exceed the ECC's error correction capability and cause unrecoverable bit errors. In addition, if the storage device is severely damaged, or the ECC controller is not functioning correctly, standard data reading [32] may not be possible. Chip-off analysis [31] is often required to expose the data in this case, but it is more likely to result in unpredictable bit errors in the recovered data.

File carving [22] is an essential memory forensic technique that allows files to be recovered from unreliable NAND flash memory.
While existing JPEG file carving methods [7, 21, 29, 30] mainly focus on JPEG file carving in the absence of filesystem metadata, few consider the situation when the JPEG file itself is corrupted. Bit errors in the JPEG bitstream can severely deteriorate the decoded image quality through two kinds of error propagation [12]. In addition, from Android 5.0 on, full-disk encryption (FDE) [10, 11] was introduced to protect users' privacy. Once an Android device is encrypted, all user-created data will be automatically encrypted before committing it to disk and automatically decrypted before accessing it from disk. For encrypted files stored on an Android device, bit errors caused by the unreliable NAND flash memory are directly reflected on the encrypted data, making bit errors in the decrypted file much more severe. This issue brings a significant challenge to existing works.

Recently, deep learning methods [16, 36, 37, 41] have shown great power in image restoration problems due to their powerful feature representation ability. However, existing image restoration methods may not be apt for the above-mentioned problem because of the unpredictable color casts and block shifts of decoded image contents caused by bit errors. As Fig. 1 (b1, b2) shows, decoders fail to generate visually consistent images, which therefore cannot be directly used for the end-to-end training of existing image restoration methods.

Given the facts above, it is natural to raise a question: given a corrupted JPEG bitstream, is it possible to restore the image contents? Taking into consideration the FDE employed in smartphones for privacy, the damaged JPEG image y in the pixel domain can be formulated as:

y = De(D_K(Bitflip(E_K(En(x)))))    (1)

where x represents the initial JPEG image, D_K and E_K represent the decryption and encryption of FDE, De and En represent JPEG decoding and encoding, E_K(En(x)) represents the corresponding JPEG bitstream encrypted with the secret key K, and Bitflip represents random bit errors on the encrypted data. To simplify the problem, we assume the secret key is already known.

In this paper, we propose a robust JPEG decoder, followed by a two-stage compensation and alignment framework, to restore bitstream-corrupted JPEG images. Specifically, the robust decoder adopts an error-resilient mechanism, which can decode the corrupted JPEG bitstream completely (see Fig. 1 (b2)), compared to the aborting of JPEG decoding in the standard decoder (see Fig. 1 (b1)). To further resolve the color cast and block shift problem in our decoded images, we propose a two-stage compensation and alignment framework, i.e., a self-compensation and alignment (SCA) stage and a guided-compensation and alignment (GCA) stage. In the first stage, SCA casts the problem as a segment detection problem and adaptively estimates suitable color and block offsets for each segment to perform block-wise image color compensation and alignment via image content similarity. In the second stage, GCA leverages the extracted low-resolution thumbnail (normally 160 × 120 [9, 25]) from the JPEG header to guide full-resolution pixel-wise image restoration. The GCA is achieved by coarse-to-fine neural networks, including a coarse-guided pix2pix network and a refine-guided bi-directional Laplacian pyramid fusion network. As Figs. 1 (b3, b4) show, the proposed two-stage framework deals with the color cast and block shift problem and ultimately restores high-quality images. A toy simulation of the degradation model in Eq. (1) is sketched below.
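The snippet below is a toy simulation of the degradation model in Eq. (1): encrypt a JPEG bitstream, flip random bits on the ciphertext, decrypt, and attempt to decode. AES-CTR, the bit error rate, and the file name "photo.jpg" are illustrative stand-ins for a device's FDE and storage behavior; a standard decoder will typically fail or truncate on the corrupted stream, which is the failure mode the robust decoder targets.

```python
# Toy simulation of Eq. (1): y = De(D_K(Bitflip(E_K(En(x))))).
import io
import os
import random
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from PIL import Image

def flip_random_bits(data: bytes, ber: float = 1e-5) -> bytes:
    """Flip a fraction `ber` of the bits in `data` at random positions."""
    buf = bytearray(data)
    n_flips = max(1, int(len(buf) * 8 * ber))
    for _ in range(n_flips):
        pos = random.randrange(len(buf) * 8)
        buf[pos // 8] ^= 1 << (pos % 8)
    return bytes(buf)

key, nonce = os.urandom(32), os.urandom(16)
with open("photo.jpg", "rb") as f:            # any JPEG file (placeholder name)
    bitstream = f.read()                       # En(x)

enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
ciphertext = enc.update(bitstream) + enc.finalize()          # E_K(En(x))
corrupted = flip_random_bits(ciphertext)                     # Bitflip(.)
dec = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
damaged = dec.update(corrupted) + dec.finalize()             # D_K(.)

try:
    Image.open(io.BytesIO(damaged)).load()                   # De(.)
except Exception as e:
    print("standard decoder failed:", e)
```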
In summary, our contributions are as follows:
• To the best of our knowledge, this is the first work to restore JPEG images with bit errors on the encrypted bitstream. Unlike existing works based on pre-defined degradation models in the pixel domain, the discussed problem in the bitstream domain causes unpredictable color casts and block shifts on decoded images, which is challenging and of great practical value.
• We propose a two-stage compensation and alignment scheme for this problem, where the SCA stage and the GCA stage are proposed and combined into an end-to-end architecture. The SCA is based on image content similarity without training data, and the GCA employs the coarse-guided pix2pix network and the refine-guided bi-directional Laplacian pyramid fusion network to gradually restore full-resolution images.
• Extensive experiments and ablation studies have been conducted to demonstrate the superiority of our proposed method. Even for 2K-resolution images, our proposed method can restore high-fidelity images with faithful details, achieving a PSNR of up to 38.92 dB, a significant improvement of 5.52 dB over the baseline EPDN method [26].
Li_Inverse_Rendering_of_Translucent_Objects_Using_Physical_and_Neural_Renderers_CVPR_2023
Abstract In this work, we propose an inverse rendering model that estimates 3D shape, spatially-varying reflectance, homoge- neous subsurface scattering parameters, and an environ- ment illumination jointly from only a pair of captured im- ages of a translucent object. In order to solve the ambigu- ity problem of inverse rendering, we use a physically-based renderer and a neural renderer for scene reconstruction and material editing. Because two renderers are differentiable, we can compute a reconstruction loss to assist parameter estimation. To enhance the supervision of the proposed neu- ral renderer, we also propose an augmented loss. In addi- tion, we use a flash and no-flash image pair as the input. To supervise the training, we constructed a large-scale syn- thetic dataset of translucent objects, which consists of 117K scenes. Qualitative and quantitative results on both syn- thetic and real-world datasets demonstrated the effective- ness of the proposed model. Code and Data are available at https://github.com/ligoudaner377/homo translucent
1. Introduction

Inverse rendering is a long-standing problem in computer vision, which decomposes captured images into multiple intrinsic factors such as geometry, illumination, and material. With the estimated factors, many challenging applications like relighting [40, 47, 51, 60, 65, 72], material editing [41, 52] and object manipulation [29, 57, 62, 70] become feasible. In this work, we focus on a very complicated object for inverse rendering: translucent objects. From biological tissue to various minerals, from industrial raw materials to a glass of milk, translucent objects are everywhere in our daily lives. A critical physical phenomenon in translucent objects is SubSurface Scattering (SSS): a photon penetrates the object's surface and, after multiple bounces, is eventually absorbed or exits at a different surface point. This non-linear, multi-bounce, multi-path process makes inverse rendering extremely ill-posed. To reduce the complexity of the task, we assume that the SSS parameters of the object are constant in a continuous 3-dimensional space (homogeneous subsurface scattering). So, given a translucent object, our task is to simultaneously estimate shape, spatially-varying reflectance, homogeneous SSS parameters, and environment illumination.

However, a well-known issue of inverse rendering is the ambiguity problem: the final appearance of an object results from a combination of illumination, geometry, and material. For instance, a brighter area in the image may be due to a specular highlight; it may be very close to the light source; the coloration may also come from the light source or the object itself. The situation becomes more complicated when SSS is also considered, because it is hard to tell whether the object's intensity comes from the surface, the subsurface, or both.

Solutions to the ambiguity problem can be roughly divided into two groups. The first group of researchers addresses the ambiguity problem by providing more information to the model. For example, some works use multi-view [5–7, 38, 59, 63, 69] or multi-light [8, 59] setups. The solution of the second group is based on various assumptions and simplifications. For example, some researchers simplify the reflectance model by assuming Lambertian reflectance [51], some simplify the illumination by assuming that the object is illuminated by a single point light source [15], and some simplify the geometry by assuming a near-planar surface [35]. For the SSS, most existing works [4, 8, 9, 12, 16, 34, 37, 45, 49, 50, 56, 58] simply ignore it by assuming that light does not penetrate the object's surface and that the final appearance only depends on pure surface reflectance. On the other hand, some researchers [11, 21, 22, 28, 30, 71] consider pure SSS without surface reflectance. Some works [14, 17, 26, 61] use the BSSRDF model to simplify the SSS of optically thick materials in the form of complex surface reflectance. The limitation of these works is obvious: most translucent objects (e.g., wax, plastic, crystal) in our world can easily break their assumptions, as they contain both surface reflectance and SSS. We tackle a more challenging problem by considering both surface reflectance and SSS of translucent objects with an arbitrary shape under environment illumination. However, using a complex scene representation means introducing more ambiguity.
[Figure 1. Inverse rendering results of real-world translucent objects. Our model takes (a) a flash image, (b) a no-flash image, and (c) a mask as input. Then, it decomposes them into (d) homogeneous subsurface scattering parameters (denoted as SSS in the figure), (e) surface normal, (f) depth, and (g) spatially-varying roughness. We use the predicted parameters to re-render the image considering (h) only the surface reflectance, and (i) both surface reflectance and subsurface scattering. Note that (d) subsurface scattering only contains 7 parameters (3 for the extinction coefficient σ_t, 3 for the volumetric albedo α, 1 for the phase function parameter g); we use these parameters to render a sphere using Mitsuba [46] for visualization. Brighter areas in the depth map represent a greater distance, and brighter areas in the roughness map represent a rougher surface.]

In this work, we propose an inverse rendering framework that handles both surface reflectance and SSS. Specifically, we use a deep neural network for parameter estimation. To enable the proposed model to train and estimate the intrinsic physical parameters more efficiently, we employ two differentiable renderers: a physically-based renderer that only considers the surface reflectance of the direct illumination, and a neural renderer that creates the multiple-bounce illumination as well as the SSS effect. The two renderers work together to re-render the input scene based on the estimated parameters and at the same time enable material editing. To enhance the supervision of the proposed neural renderer, we also propose an augmented loss by editing the SSS parameters. Moreover, inspired by recent BRDF estimation methods [2, 8] that use a flash and no-flash setup to address the problem of unpredictability in saturated highlights, we also adopt this two-shot setup, not only to deal with the saturated highlight problem but also to disentangle the surface reflectance and SSS. To train our model, we construct a synthetic dataset consisting of more than 117K translucent scenes, because there is no sufficient dataset supporting translucent objects. Each scene contains a human-created 3D model and is rendered with a spatially-varying microfacet BSDF and homogeneous SSS under an environment illumination.

Our contributions are summarized as follows:
• We first tackle the problem of estimating shape, spatially-varying surface reflectance, homogeneous SSS, and illumination simultaneously from a flash and no-flash pair of captured images at a single viewpoint.
• We build a novel model that combines a physically-based renderer and a neural renderer for explicitly separating the SSS and the other parameters.
• We introduce the augmented loss to train the neural renderer, supervised by altered images whose SSS parameters were edited.
• We construct a large-scale photorealistic synthetic dataset that consists of more than 117K scenes.
Li_Dynamic_Graph_Enhanced_Contrastive_Learning_for_Chest_X-Ray_Report_Generation_CVPR_2023
Abstract Automatic radiology reporting has great clinical poten- tial to relieve radiologists from heavy workloads and im- prove diagnosis interpretation. Recently, researchers have enhanced data-driven neural networks with medical knowl- edge graphs to eliminate the severe visual and textual bias in this task. The structures of such graphs are exploited by using the clinical dependencies formed by the disease topic tags via general knowledge and usually do not up- date during the training process. Consequently, the fixed graphs can not guarantee the most appropriate scope of knowledge and limit the effectiveness. To address the limi- tation, we propose a knowledge graph with Dynamic struc- ture and nodes to facilitate chest X-ray report generation withContrastive Learning, named DCL. In detail, the fun- damental structure of our graph is pre-constructed from general knowledge. Then we explore specific knowledge ex- tracted from the retrieved reports to add additional nodes or redefine their relations in a bottom-up manner. Each im- age feature is integrated with its very own updated graph before being fed into the decoder module for report gen- eration. Finally, this paper introduces Image-Report Con- trastive and Image-Report Matching losses to better repre- sent visual features and textual information. Evaluated on IU-Xray and MIMIC-CXR datasets, our DCL outperforms previous state-of-the-art models on these two benchmarks.
1. Introduction

Recently, automatic report generation has received growing attention from both the machine learning and automated medicine fields. It aims to generate semantically coherent and informative reports to describe the given examination images, such as chest X-rays [8, 18], lung CT scans [26] or fundus angiography [23]. Such techniques have great clinical potential in relieving junior radiologists from heavy workloads and reducing diagnosis errors by improving the interpretation [7, 30].

*Corresponding author. https://github.com/mlii0117/DCL

[Figure 1. An illustration of one sample from MIMIC-CXR [18] and the pre-constructed graph in [46], where the blue circle, orange boxes and black boxes refer to the global node, organ-level entities and key findings, respectively. The red dashed line here represents an unconnected relation.]

Having witnessed the great progress in artificial intelligence, especially deep learning methods [12, 25, 39], researchers have proposed various data-driven neural networks for radiology reporting and achieved promising performances in metrics that measure descriptive accuracy [7, 44] and clinical correctness [11, 46]. Compared with the similar task of generic image captioning [14], the key challenges in the chest X-ray report generation (CRG) task are the severe visual and textual data bias [19, 26]. On the one hand, medical images are highly similar to each other due to the imaging methods and human tissues themselves, while abnormal regions or lesions that should receive more attention usually occupy only a small part of the image and lack detailed annotations in existing CRG benchmarks. On the other hand, sentences that describe normal regions are likely to appear repeatedly in each dataset, which prevents the model from describing the specific, crucial abnormalities. Two concepts have been proved effective in eliminating this bias.

The first one is to integrate medical knowledge into CRG systems [24, 30, 44, 46]. Zhang et al. [46] constructed a universal graph comprised of a global node, 7 organs/tissues and 20 findings (normal or disease keywords). Disease keyword nodes linked to the same organ are connected to each other and to the root in the graph. This graph can enhance the relationships among findings and emphasize the disease keywords. Thus, it is also adopted in the following works [30, 44]. However, this graph is built from general knowledge and may be inappropriate in some cases. As shown in the report in Fig. 1, it is observed that effusion should be suggestive of edema; however, such a relationship is not modelled in the graph. Furthermore, some nodes like 'cicatrix' or 'hypoinflation' appear only a few times in the two CRG benchmarks [8, 18]. Therefore, it is necessary to update the scope of knowledge for each case. In addition to medical knowledge, recent works [5, 11, 31, 38, 43] utilize contrastive learning to improve the visual and textual representations by contrasting positive and negative pairs. They proposed various contrastive learning objectives to capture the abnormal regions in a chest X-ray image. Since normal images usually dominate the dataset over abnormal ones [37], it is also crucial to recognize normal and abnormal cases in the meantime.
In this paper, we propose a novel framework, named DCL, which exploits a dynamic graph integrating specific knowledge with general knowledge to enhance visual representations learned in a contrastive manner. We adopt the general knowledge with 28 entities from [46] as the fundamental structure of our graph, and the relationships are modelled in an adjacency matrix. Given a medical image, we first retrieve its semantically similar reports from the training set. Specific knowledge is extracted from those reports via RadGraph [17] and stored as triplets (subject entity, relation, object entity). We then integrate those triplets with the pre-constructed graph by dynamically adding nodes or linking two entities. A graph encoder propagates information over the updated graph to refine the node features, which are initialized by a pretrained SciBert [4]. The refined node features are then attended to by the visual representations for report generation via a Transformer [39] decoder. Based on the dynamic graph, we introduce a contrastive learning objective, the image-report contrastive loss, to better represent visual features and textual information. In addition, contrastive learning helps ensure the accuracy of the report retrieval procedure during dynamic graph construction. An image-report matching loss is also employed to further improve performance. We evaluate our method on two benchmarks, IU-Xray [8] and MIMIC-CXR [18]. Experimental results demonstrate that our approach can either outperform or match previous state-of-the-art (SOTA) methods in metrics that measure descriptive accuracy and clinical correctness. This indicates that leveraging dynamic graphs to enhance contrastive learning helps generate high-quality reports. In summary, our main contributions are as follows:
• We propose a novel framework that leverages a dynamic graph to enhance visual representations with contrastive learning paradigms for radiology reporting.
• Our proposed dynamic graph integrates both general and specific knowledge; the contrastive learning objective can improve visual and textual representations as well as the accuracy of the dynamic graph.
• We conduct extensive experiments on two popular benchmarks to show the effectiveness of our approach, which achieves SOTA performance on both language generation and clinical efficacy metrics.
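The bottom-up graph update described above can be illustrated with a small, self-contained sketch. This is not the authors' DCL code: the entity names, the plain adjacency-matrix representation, and the toy triplet are illustrative assumptions, and the real system extracts triplets with RadGraph, initializes node features with SciBert, and refines them with a graph encoder.

```python
import numpy as np

def update_graph(entities, adj, triplets):
    """Extend a pre-constructed knowledge graph with case-specific
    triplets (subject, relation, object): unseen entities become new
    nodes, and each triplet links its two entities in the adjacency."""
    entities = list(entities)
    index = {e: i for i, e in enumerate(entities)}
    adj = adj.copy()
    for subj, _, obj in triplets:
        for e in (subj, obj):
            if e not in index:                      # add a new node
                index[e] = len(entities)
                entities.append(e)
                n = len(entities)
                grown = np.zeros((n, n), dtype=adj.dtype)
                grown[:n - 1, :n - 1] = adj
                adj = grown
        i, j = index[subj], index[obj]
        adj[i, j] = adj[j, i] = 1                   # link the two entities
    return entities, adj

# toy usage: a two-node general graph plus one retrieved triplet
ents = ["effusion", "edema"]
A = np.zeros((2, 2), dtype=np.int64)
ents, A = update_graph(ents, A, [("effusion", "suggestive_of", "edema")])
print(ents)
print(A)
```

In the full pipeline, the resulting adjacency matrix would drive message passing in the graph encoder rather than being used directly.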
Kim_VNE_An_Effective_Method_for_Improving_Deep_Representation_by_Manipulating_CVPR_2023
Abstract Since the introduction of deep learning, a wide scope of representation properties, such as decorrelation, whitening, disentanglement, rank, isotropy, and mutual information, have been studied to improve the quality of representation. However, manipulating such properties can be challeng- ing in terms of implementational effectiveness and general applicability. To address these limitations, we propose to regularize von Neumann entropy (VNE) of representation. First, we demonstrate that the mathematical formulation of VNE is superior in effectively manipulating the eigenval- ues of the representation autocorrelation matrix. Then, we demonstrate that it is widely applicable in improving state- of-the-art algorithms or popular benchmark algorithms by investigating domain-generalization, meta-learning, self- supervised learning, and generative models. In addition, we formally establish theoretical connections with rank, disen- tanglement, and isotropy of representation. Finally, we pro- vide discussions on the dimension control of VNE and the relationship with Shannon entropy. Code is available at: https://github.com/jaeill/CVPR23-VNE .
1. Introduction Improving the quality of deep representation by pursu- ing a variety of properties in the representation has been adopted as a conventional practice. To learn representa- tions with useful properties, various methods have been proposed to manipulate the representations. For example, decorrelation reduces overfitting, enhances generalization in supervised learning [20,92], and helps in clustering [79]. Whitening improves convergence and generalization in su- pervised learning [23, 44, 45, 60], improves GAN stabil- ity [76], and helps in domain adaptation [73]. Disentan- glement was proposed as a desirable property of representa- *Corresponding author (a) Domain generalization (b) Meta-learning (c) Self-supervised learning (d) GAN Figure 1. General applicability of VNE: performance of state-of- the-art algorithms or popular benchmark algorithms can be further improved by regularizing von Neumann entropy (full result tables will be provided in Section 3). (a) Domain generalization: relative improvements over ERM and SWAD (current state-of-the-art). (b) Meta-learning: relative improvements over six popular benchmark algorithms. (c) Self-supervised learning: performance compari- son against the current state-of-the-art algorithms for COCO de- tection. (d) GAN: relative improvements in Fr ´echet Inception Dis- tance (FID) for seven popular benchmark algorithms. tions [1, 9, 42]. Increasing rank of representations was pro- posed to resolve the dimensional collapse phenomenon in self-supervised learning [43, 47]. Isotropy was proposed to improve the downstream task performance of BERT-based models in NLP tasks [56,78]. Preventing informational col- lapse (also known as representation collapse) was proposed as a successful learning objective in non-contrastive learn- ing [7,96]. In addition, maximizing mutual information was proposed as a successful learning objective in contrastive learning [39, 69, 80]. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 3799 Although aforementioned properties are considered as desirable for useful representations, typical implementa- tional limitations, such as dependency to specific architec- tures or difficulty in proper loss formulation, inhibited the properties from being more popularly adopted. For ex- ample, the methods for whitening [23, 44, 45, 60, 73, 76], isotropy [56, 78], and rank [43] are typically dependent on specific architectures (e.g., decorrelated batch normaliza- tion [44] and normalizing flow [56]). Regarding disentan- glement and mutual information, loss formulations are not straightforward because measuring disentanglement gener- ally relies on external models [13, 27, 32, 41, 51] or is tuned for a specific dataset [50] and formulating mutual informa- tion in high-dimensional spaces is notoriously diff
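As a concrete reference for the VNE objective described above, the following sketch computes the von Neumann entropy of a batch of representations as the Shannon entropy of the eigenvalues of the unit-trace autocorrelation matrix and adds it to a stand-in task loss as a regularizer. This is a hedged illustration rather than the released implementation: the normalization convention, the sign and weight of the regularization term, and the toy loss are assumptions.

```python
import torch
import torch.nn.functional as F

def von_neumann_entropy(h, eps=1e-12):
    """VNE of a batch of representations h with shape [N, D].
    Rows are L2-normalized so the autocorrelation matrix has unit trace
    and its eigenvalues form a probability distribution."""
    h = F.normalize(h, dim=1)
    c = h.t() @ h / h.size(0)                  # [D, D] autocorrelation
    eigvals = torch.linalg.eigvalsh(c).clamp(min=0.0)
    return -(eigvals * torch.log(eigvals + eps)).sum()

# regularized objective: sign/weight of the VNE term is a design choice
feats = torch.randn(128, 64, requires_grad=True)
task_loss = feats.pow(2).mean()                # stand-in for the real loss
alpha = 0.1
loss = task_loss - alpha * von_neumann_entropy(feats)
loss.backward()
print(loss.item())
```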
Li_Learning_Steerable_Function_for_Efficient_Image_Resampling_CVPR_2023
Abstract Image resampling is a basic technique that is widely employed in daily applications. Existing deep neural net- works (DNNs) have made impressive progress in resam- pling performance. Yet these methods are still not the per- fect substitute for interpolation, due to the issues of effi- ciency and continuous resampling. In this work, we propose a novel method of Learning Resampling Function (termed LeRF), which takes advantage of both the structural priors learned by DNNs and the locally continuous assumption of interpolation methods. Specifically, LeRF assigns spatially- varying steerable resampling functions to input image pix- els and learns to predict the hyper-parameters that deter- mine the orientations of these resampling functions with a neural network. To achieve highly efficient inference, we adopt look-up tables (LUTs) to accelerate the inference of the learned neural network. Furthermore, we design a directional ensemble strategy and edge-sensitive indexing patterns to better capture local structures. Extensive exper- iments show that our method runs as fast as interpolation, generalizes well to arbitrary transformations, and outper- forms interpolation significantly, e.g., up to 3dB PSNR gain over bicubic for ×2upsampling on Manga109.
1. Introduction Due to the rapid growth of visual data, there is a strong demand for digital image processing. Image resampling, one of the most common techniques, aims to obtain an- other image by generating new pixels following a geomet- ric transformation rule from existing pixels in a given im- age [8]. Common transformations include upsampling ( i.e., single image super-resolution), downsampling, affine trans- formation, etc. Image resampling enjoys various applica- tions, ranging from photo editing, optical distortion com- *Equal contribution.†Corresponding author. This work was done when Jiacheng Li was a research intern at Huawei Noah’s Ark Lab. DownsampleRotationSheeringUpsampleLearned Steerable Resampling Functions Warping ⋯Fixed Bicubic Resampling Function Figure 1. LeRF assigns steerable resampling functions to input pixels, and learns to predict the hyper-parameters that determine the orientations of these continuous functions for resampling un- der arbitrary transformations. pensation [10], online content streaming [35], and visual special effects production [38]. Recently, deep neural networks (DNNs) have made im- pressive progress in the field of image resampling [9,12,17, 27, 39, 40], thanks to the learning-from-data paradigm that obtains powerful structural priors from large-scale datasets. Despite the superior performance that DNN-based methods have achieved, long-lived interpolation methods like bicu- bic [16] are still preferred choices in most cases. We attribute this phenomenon to the following two rea- sons: 1) Interpolation is simple and highly efficient, result- ing in less dependency and thus the practicality to be de- ployed on a variety of devices, ranging from IoT devices to gaming workstations. 2) Interpolation supports arbitrary transformations. It assumes a continuous resampling func- tion for a local area, resulting in the versatility in applying to not only homographic transformations like upsampling and downsampling, but also general warping. Although re- cent DNN-based methods explore beyond fixed-scale up- sampling [5, 12, 27, 39, 43, 48], an efficient and continuous This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 5866 solution that matches interpolation remains less explored. In this work, we aim to fill this blank research area by taking a middle way between DNN-based methods and interpolation methods. We propose a novel method of Learning Resampling Function (termed LeRF), where pa- rameterized continuous functions for resampling different structures are learned from data. Specifically, as illustrated in Fig. 1, we assign spatially-varying steerable resampling functions to image pixels, whose orientations are parame- terized with several hyper-parameters. Then, we train a neu- ral network to predict these hyper-parameters for each pixel in an input image, thus defining the resampling function for that pixel location. Finally, we obtain the output image by interpolating an input image with these locally adapted resampling functions. LeRF takes advantage of both the structural priors learned by DNN and the locally continu- ous assumption of interpolation methods. Furthermore, we present an efficient implementation, where the inference of the learned neural network is accelerated with look-up ta- bles (LUTs) [15,22,23,29]. 
We further design a directional ensemble strategy and edge-sensitive indexing patterns to better capture local structures in images. We examine the advantages and generalization abil- ity of LeRF in various image resampling tasks, includ- ing arbitrary-scale upsampling, homographic transforma- tion, and general warping. In particular, as illustrated in Fig. 2, at a similar running time, our method outperforms popular interpolation methods significantly in upsampling, which demonstrates the superiority of LeRF in terms of per- formance and efficiency. Contributions of this paper are summarized as follows: 1) We propose LeRF, a novel method for continuous re- sampling. We assign spatially-varying steerable resampling functions to image pixels, where we train a neural network to predict the hyper-parameters that determine the orienta- tions of these resampling functions. 2) We present an efficient implementation of LeRF by adopting look-up tables to accelerate the inference of the trained neural network. Furthermore, we design a direc- tional ensemble strategy and edge-sensitive indexing pat- terns to better capture local structures. 3) Extensive experiments demonstrate that our method operates as efficient as interpolation, generalizes well to ar- bitrary transformations, and obtains significantly better per- formance over interpolation.
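The core idea of LeRF, spatially varying steerable resampling functions whose hyper-parameters are predicted per pixel, can be sketched with an anisotropic Gaussian kernel, ignoring the LUT acceleration, the directional ensemble, and the indexing patterns. The specific parameterization (sx, sy, rho), the 4x4 support, and the NumPy reference loop below are assumptions for illustration, not the paper's exact resampling function.

```python
import numpy as np

def steerable_weight(dx, dy, sx, sy, rho):
    """Anisotropic Gaussian resampling weight at offset (dx, dy);
    (sx, sy, rho) are per-pixel hyper-parameters steering the scale and
    orientation of the local resampling function."""
    one_m = max(1.0 - rho ** 2, 1e-6)
    q = (dx / sx) ** 2 - 2 * rho * (dx / sx) * (dy / sy) + (dy / sy) ** 2
    return np.exp(-0.5 * q / one_m) / (2 * np.pi * sx * sy * np.sqrt(one_m))

def resample(img, params, y, x):
    """Value at continuous location (y, x) as a normalized weighted sum
    over the 4x4 neighborhood, using the hyper-parameters predicted for
    each contributing input pixel."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    acc, norm = 0.0, 0.0
    for j in range(y0 - 1, y0 + 3):
        for i in range(x0 - 1, x0 + 3):
            if 0 <= j < img.shape[0] and 0 <= i < img.shape[1]:
                sx, sy, rho = params[j, i]
                w = steerable_weight(x - i, y - j, sx, sy, rho)
                acc, norm = acc + w * img[j, i], norm + w
    return acc / max(norm, 1e-12)

img = np.random.rand(8, 8)
params = np.tile([1.0, 1.0, 0.0], (8, 8, 1))   # isotropic fallback everywhere
print(resample(img, params, 3.25, 4.5))
```

Because the output location is continuous, the same routine covers upsampling, downsampling, and general warping by changing only where (y, x) is sampled.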
Liu_Spectral_Bayesian_Uncertainty_for_Image_Super-Resolution_CVPR_2023
Abstract Recently deep learning techniques have significantly ad- vanced image super-resolution (SR). Due to the black-box nature, quantifying reconstruction uncertainty is crucial when employing these deep SR networks. Previous ap- proaches for SR uncertainty estimation mostly focus on cap- turing pixel-wise uncertainty in the spatial domain. SR uncertainty in the frequency domain which is highly re- lated to image SR is seldom explored. In this paper, we propose to quantify spectral Bayesian uncertainty in im- age SR. To achieve this, a Dual-Domain Learning (DDL) framework is first proposed. Combined with Bayesian ap- proaches, the DDL model is able to estimate spectral un- certainty accurately, enabling a reliability assessment for high frequencies reasoning from the frequency domain per- spective. Extensive experiments under non-ideal premises are conducted and demonstrate the effectiveness of the pro- posed spectral uncertainty. Furthermore, we propose a novel Spectral Uncertainty based Decoupled Frequency (SUDF) training scheme for perceptual SR. Experimental results show the proposed SUDF can evidently boost per- ceptual quality of SR results without sacrificing much pixel accuracy.
1. Introduction Image super-resolution (SR) is a basic computer vision task that aims to recover an underlying high-resolution (HR) image from its degraded low-resolution (LR) obser- vation. Image SR is widely used in many applications where high-frequency (HF) information is required, such as medical imaging [ 38], microscopy imaging [ 36], surveil- lance [ 46], etc. In recent years, learning-based approaches with convolutional neural networks (CNN) have become the primary workhorse for image SR. Starting from the pio- neering work SRCNN [ 9], various CNN-based SR mod- els [7,21,24,31,43,49] have been proposed and significantly pushed the frontier of image SR research. *Corresponding author.Despite the impressive success in image SR benchmarks, most of these CNN-based SR models tend to overfit the training data so that their reliability and generalizability may not be guaranteed in practice. A well-trained SR model often makes inaccurate reasoning for HF details when it receives LR images away from its training distri- bution, thereby making the downstream processing unreli- able. Therefore, it is quite crucial to quantify reconstruc- tion uncertainty when employing these SR models, espe- cially in some high risk applications (e.g. medical imaging) or when under some harmful adversarial attacks. Bayesian neural networks (BNNs) which combine deep neural net- works with Bayesian learning open up the possibility to capture model uncertainty, by placing distributions over the network weights and then obtaining the predictive distribu- tion through marginalization over posterior. Since the exact Bayesian inference is usually intractable for deep networks, various stochastic techniques that are compatible with mod- ern deep learning are widely used for posterior approxima- tion, such as dropout [ 11], batch normalization [ 41], weight initialization [ 22], etc. However, existing Bayesian models for image SR are mostly developed in the spatial domain to capture pixel- wise uncertainty [ 40,41]. The uncertainty in the frequency domain which is highly related to image SR is seldom ex- plored. From the frequency domain perspective, image SR is essentially a task of recovering HF components given low-frequency (LF) ones. Thus the uncertainty of HF com- ponents directly characterizes the reliability of the SR re- sults. Besides, the common pixel-wise uncertainty is sensi- tive to local mismatch of spatial structures, where a slight pixel shift among Monte Carlo (MC) samples may result in high uncertainty. So it is also desirable to quantify the reconstruction uncertainty in a global way. Moreover, im- age HF components in the frequency domain usually play an important role in some specific areas. For instance, the calculation of imaging resolution in optical imaging heav- ily depends on the HF components of objects [ 8]. The un- certainty of HF components directly reveals the credibility of the imaging resolution. Therefore, estimating frequency spectral uncertainty for image SR is valuable. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 18166 To fill this research gap, we aims to quantify SR un- certainty not only in the spatial domain but also in the frequency domain. Concretely, we first propose a dual- domain learning (DDL) framework for image SR. 
The pro- posed DDL introduces explicit frequency learning within networks and learns to reconstruct SR images and spectra simultaneously. Then combined with Bayesian approaches (MC-dropout [ 11] in this paper), the DDL model is able to estimate both spatial and spectral uncertainty of SR re- sults. To the best of our knowledge, we are the first to quan- tify SR uncertainty in the frequency domain. Extensive ex- periments on different non-ideal premises are conducted to show the effectiveness of the spectral uncertainty. Lastly, we further propose a spectral uncertainty based decoupled frequency (SUDF) training scheme for perceptual SR. The SUDF decouple the training of different image frequencies with the guidance of estimated spectral uncertainty map, thereby boosting perceptual quality of SR results signifi- cantly without sacrificing much pixel accuracy. In summary, the contributions of this paper are: • We propose to quantify the frequency spectral uncer- tainty for deep SR networks. Experiments under sev- eral non-ideal premises demonstrate the effectiveness. To the best of our knowledge, it is the first work to estimate SR uncertainty in the frequency domain. • A DDL method is proposed for image SR. By per- forming explicit frequency domain learning in feature space, DDL can restore more HF information and thus provide more accurate uncertainty estimation when combined with Bayesian approaches. • Based on the estimated spectral uncertainty, a novel SUDF training scheme is proposed, helping enhance perceptual quality of SR results while maintaining re- construction faithfulness.
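A minimal way to see what spectral uncertainty under MC-dropout means is to draw several stochastic SR predictions and measure the per-frequency spread of their Fourier magnitudes. The sketch below does exactly that with a toy dropout network; it is not the proposed DDL model, which predicts spectra explicitly in feature space rather than transforming spatial Monte Carlo samples after the fact, and the sample count and toy architecture are assumptions.

```python
import torch

def spectral_uncertainty(model, lr_img, n_samples=16):
    """MC-dropout sketch: keep dropout stochastic at test time, draw
    several SR predictions, and take the per-frequency std of the
    centered FFT magnitude as a spectral uncertainty map."""
    model.train()                               # keeps nn.Dropout active
    spectra = []
    with torch.no_grad():
        for _ in range(n_samples):
            sr = model(lr_img)                                  # [B, C, H, W]
            amp = torch.fft.fftshift(torch.fft.fft2(sr), dim=(-2, -1)).abs()
            spectra.append(amp)
    spectra = torch.stack(spectra)                              # [S, B, C, H, W]
    return spectra.mean(0), spectra.std(0)                      # mean, uncertainty

# toy "SR model" with dropout, only to exercise the routine end to end
toy = torch.nn.Sequential(
    torch.nn.Conv2d(3, 3, 3, padding=1),
    torch.nn.Dropout2d(p=0.2),
    torch.nn.Upsample(scale_factor=2, mode="bicubic"),
)
mean_spec, unc_spec = spectral_uncertainty(toy, torch.rand(1, 3, 16, 16))
print(unc_spec.shape)
```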
Lee_DP-NeRF_Deblurred_Neural_Radiance_Field_With_Physical_Scene_Priors_CVPR_2023
Abstract Neural Radiance Field (NeRF) has exhibited outstand- ing three-dimensional (3D) reconstruction quality via the novel view synthesis from multi-view images and paired calibrated camera parameters. However, previous NeRF- based systems have been demonstrated under strictly con- trolled settings, with little attention paid to less ideal sce- narios, including with the presence of noise such as expo- sure, illumination changes, and blur. In particular, though blur frequently occurs in real situations, NeRF that can handle blurred images has received little attention. The few studies that have investigated NeRF for blurred im- ages have not considered geometric and appearance con- sistency in 3D space, which is one of the most important factors in 3D reconstruction. This leads to inconsistency and the degradation of the perceptual quality of the con- structed scene. Hence, this paper proposes a DP-NeRF , a novel clean NeRF framework for blurred images, which is constrained with two physical priors. These priors are derived from the actual blurring process during image ac- quisition by the camera. DP-NeRF proposes rigid blurring kernel to impose 3D consistency utilizing the physical pri- ors and adaptive weight proposal to refine the color com- position error in consideration of the relationship between depth and blur. We present extensive experimental results for synthetic and real scenes with two types of blur: camera motion blur and defocus blur. The results demonstrate that DP-NeRF successfully improves the perceptual quality of the constructed NeRF ensuring 3D geometric and appear- ance consistency. We further demonstrate the effectiveness of our model with comprehensive ablation analysis.1 2
1. Introduction The synthesis of the photo-realistic novel view image of complex three-dimensional (3D) scenes has advanced rapidly due to the emergence of the Neural Radiance Field(NeRF) [26]. NeRF has introduced implicit scene rep- resentation to the field, which maps an arbitrary continuous 1Code: https://github.com/dogyoonlee/DP-NeRF 2Project: https://dogyoonlee.github.io/dpnerf/ 𝑓𝑜𝑐𝑢𝑠𝑝𝑙𝑎𝑛𝑒 a) Deblur-NeRF𝑅𝑖𝑔𝑖𝑑𝑚𝑜𝑡𝑖𝑜𝑛𝑅𝑖𝑔𝑖𝑑𝑚𝑜𝑡𝑖𝑜𝑛b) DP-NeRF(Ours)[Camera Motion Blur][ Defocus Blur ] 𝒅!"𝒓!"𝒓!;$"%&'(𝒓!;)"%&'(𝒓!;$"%&'(𝒓!;)"%&'(2𝐷𝑏𝑙𝑢𝑟𝑟𝑖𝑛𝑔𝑘𝑒𝑟𝑛𝑒𝑙Figure 1. (a) Deblur-NeRF models blurring kernel based on 2D offset on the image pixels. This modeling break the consistency in trained neural radiance field due to the lack of a 3D consis- tency priors. However, (b) DP-NeRF can render clean neural ra- diance field guaranteeing the 3D consistency with rigid motion of the camera based on the physical priors of the blur occurrence. 3D coordinate to the volume density and radiance color us- ing volume-rendering technique and implicit neural repre- sentation. NeRF densely reconstructs continuous 3D space to produce photorealistic rendered images with novel view. Though NeRF has achieved remarkable success in a va- riety of fields, most of the NeRF variants have been de- signed and tested for a carefully controlled environment that requires well-captured images from multiple views with calibrated camera parameters. However, various forms of noises are usually included in the data captured for the NeRF in real scenarios, complicating geometric and appear- ance consistency in 3D representation. Several NeRF variants have attempted to reconstruct 3D scene in the presence of noise, including exposure noise [8, 11, 25], motion [17–19, 29, 30, 33, 49, 55], illumi- nation changes [6, 23, 57], and aliasing [1, 2]. However, although it frequently occurs in real-world settings, blur has not been sufficiently addressed to date, despite the fact that it generates critical artifacts in 3D scene reconstruc- tion. Deblur-NeRF [22] introduced blurring kernel estima- tion for a NeRF by imitating in-camera blurred image ac- quisition based on a blind deblurring method. Their method demonstrated excellent performance and produced clearly rendered images from multi-view images. However, the blurring kernel in [22] is implemented by optimizing ray deformation and composition weights depending on the 2D This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 12386 pixel location independently, leading to insufficient 3D in- formation. In reality, the blurring process occurs simultane- ously for all pixels in an image due to the physical process of in-camera image acquisition, but [22] overlooks the prior for blurring, leading to a lack of consistency in an image. Moreover, the designed kernel can be inherently optimized to subobtimal in regions with complex depth or similar ap- pearance due to the independent optimization of the defor- mation of each ray. As a result, the estimated kernel has difficulty aggregating 3D information in a way that guaran- tees geometric and appearance consistency. In this paper, we propose a deblurred NeRF based on two physical scene priors (hereafter, DP-NeRF ) with a novel rigid blurring kernel (RBK) and adaptive weight pro- posal (AWP). 
The RBK consists of rigid ray transformation (RRT) and coarse composition weights (CCW), which uti- lize explicit physical scene priors derived from the blurring process to construct a consistent 3D scene representation from blurred images. In addition, the AWP proposes fine- grained color composition weights considering the relation- ship between depth and blur to create more realistic and clean 3D representation. Furthermore, we propose coarse- to-fine optimization for stable training and to gradually in- crease the effect of the AWP during training by introducing exponential weight decay between the two losses from the RBK and AWP. Figure 1 summarizes the DP-NeRF’s sys- tem using the rigid motion of the camera. The RBK generates a 3D deformation field and coarse weights for color composition based on the view informa- tion for each scene regardless of the pixel for each ray. This architecture is inspired by the physical scene prior that the blurring process consistently occurs for all pixels for a spe- cific view. Specifically, the deformation field is constructed as the 3D rigid motion of the camera for each view and does not depend on the 2D spatial position of each ray. In con- trast to Deblur-NeRF [22], our model successfully models 3D space with consistent geometry and appearance due to the use of these conditional physical priors and not fully depending on 2D pixel-wise independent ray optimization. Previous studies have claimed that color composition process in a blurring kernel are affected by the depth values of the pixels when compositing blurred colors from both camera motion and defocus blur [43, 44]. Hence, RBK can lose detail in regions that have a complex depth or simi- lar textures even though it achieves remarkably realistic 3D scene. For this reason, the AWP refines the composition weights using feature modulation (FM) [59] and novel mo- tion feature aggregation module(MAM) based on the depth features of samples for transformed rays, the viewing di- rection, and the view information. Following the [22], we jointly optimize the RBK, AWP, and sharp NeRF with only the reconstruction loss from the blurred input as supervi- sion. During inference stage, we can clearly render a recon-structed 3D scene using only the trained sharp NeRF model. The rest of the paper is structured as follows. In Sec- tion 3, we describe the RBK and AWP in detail. In Sec- tion 4.1 and supplementary material, we provide experi- mental results for novel view synthesis using synthetic and real scene datasets with two types of blur that are provided from [22]. The results show that DP-NeRF achieves signif- icant quantitative and qualitative improvement, preserving 3D consistency with a cleanly rendered novel view. In ad- dition, we extensively analyze the effectiveness of the pro- posed model in Section 4.2. We also demonstrate how the RBK approximately models the blurring process in the sup- plementary material. To summarize, this paper offers the following major contributions. •Rigid blurring kernel. We propose a novel RBK to construct a clean NeRF from blurred images utilizing physical scene priors derived from the blurring process during image acquisition. •Adaptive weight proposal. We propose an AWP to re- fine the composition weights in the RBK considering the relationship between depth and blur to generate more realistic results. •Coarse-to-fine optimization. 
To fully utilize proposed methods in training, we propose coarse-to-fine opti- mization by applying exponential weight decay be- tween the reconstruction loss from the RBK and AWP. •Significant improvement in perceptual quality. DP- NeRF produces enhanced 3D scene representation with greater perceptual quality and clean photo- realistic rendered images.
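The rigid blurring idea, rendering a blurred pixel as a weighted composite of one ray transformed by a small set of per-view rigid camera motions, can be sketched as follows. The tensor shapes, the number of motions K, and the softmax over composition weights are illustrative assumptions; the actual RBK predicts the rotations, translations, and coarse weights from view embeddings, and the AWP then refines those weights.

```python
import torch

def rigid_blur_rays(rays_o, rays_d, rotations, translations, weights):
    """Apply K per-view rigid motions (R_k, t_k) to every ray and return
    softmaxed composition weights; rays_o, rays_d: [N, 3],
    rotations: [K, 3, 3], translations: [K, 3], weights: [K]."""
    origins = torch.einsum("kij,nj->kni", rotations, rays_o) + translations[:, None, :]
    dirs = torch.einsum("kij,nj->kni", rotations, rays_d)
    return origins, dirs, torch.softmax(weights, dim=0)

def composite_blurred_color(colors, w):
    """Blurred pixel colors from per-motion renders; colors: [K, N, 3]."""
    return (w[:, None, None] * colors).sum(0)

K, N = 4, 1024
R = torch.eye(3).repeat(K, 1, 1)               # identity motions for the toy run
t = 0.01 * torch.randn(K, 3)
o, d, w = rigid_blur_rays(torch.randn(N, 3), torch.randn(N, 3), R, t, torch.zeros(K))
print(composite_blurred_color(torch.rand(K, N, 3), w).shape)   # [N, 3]
```

At inference, only the underlying sharp NeRF is queried, so the transformed rays and composition weights are needed during training alone.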
Liu_Single_Image_Depth_Prediction_Made_Better_A_Multivariate_Gaussian_Take_CVPR_2023
Abstract Neural-network-based single image depth prediction (SIDP) is a challenging task where the goal is to predict the scene’s per-pixel depth at test time. Since the prob- lem, by definition, is ill-posed, the fundamental goal is to come up with an approach that can reliably model the scene depth from a set of training examples. In the pursuit of per- fect depth estimation, most existing state-of-the-art learn- ing techniques predict a single scalar depth value per-pixel. Yet, it is well-known that the trained model has accuracy limits and can predict imprecise depth. Therefore, an SIDP approach must be mindful of the expected depth variations in the model’s prediction at test time. Accordingly, we in- troduce an approach that performs continuous modeling of per-pixel depth, where we can predict and reason about the per-pixel depth and its distribution. To this end, we model per-pixel scene depth using a multivariate Gaussian distri- bution. Moreover, contrary to the existing uncertainty mod- eling methods—in the same spirit, where per-pixel depth is assumed to be independent, we introduce per-pixel covari- ance modeling that encodes its depth dependency w.r.t. all the scene points. Unfortunately, per-pixel depth covariance modeling leads to a computationally expensive continuous loss function, which we solve efficiently using the learned low-rank approximation of the overall covariance matrix. Notably, when tested on benchmark datasets such as KITTI, NYU, and SUN-RGB-D, the SIDP model obtained by opti- mizing our loss function shows state-of-the-art results. Our method’s accuracy (named MG) is among the top on the KITTI depth-prediction benchmark leaderboard1.
1. Introduction Recovering the depth of a scene using images is critical to several applications in computer vision [2,15,25,29,30]. It is well founded that precise estimation of scene depth *Corresponding Author ([email protected]) 1http : / / www . cvlibs . net / datasets / kitti / eval _ depth.php?benchmark=depth_prediction (a) Test Image (b) DPT [51] (c) AdaBins [7] (d) NeWCRFs [75] (e)Ours (f) Ground Truth Figure 1. Qualitative Comparison . By modeling scene depth as multivariate Gaussian and enforcing the parametric low-rank covariance constraints in the loss function, we observe that our model can reliably predict depth for both high-frequency and low- frequency scene details. In the above example, we can notice bet- ter qualitative results than the state-of-the-art methods. from images is likely only under multi-view settings [65]— which is indeed a correct statement and hard to contend2. But what if we could effectively learn scene depth using im- ages and their ground-truth depth values, and be able to pre- dict the scene depth using just a single image at test time? With the current advancements in deep learning techniques, this seems quite possible empirically and has also led to ex- cellent results for the single image depth prediction (SIDP) task [40, 51]. Despite critical geometric arguments against SIDP, practitioners still pursue this problem not only for a scientific thrill but mainly because there are several real- world applications in which SIDP can be extremely benefi- cial. For instance, in medical [42], augmented and virtual reality [21, 55], gaming [19], novel view synthesis [56, 57], robotics [64], and related vision applications [24, 51]. Regardless of remarkable progress in SIDP [1,36,37,39, 40, 50, 75], the recent state-of-the-art deep-learning meth- ods, for the time being, just predict a single depth value per pixel at test time [37]. Yet, as is known, trained models have accuracy limits. As a result, for broader adoption of SIDP in 2As many 3D scene configurations can have the same image projection. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 17346 applications, such as robot vision and control, it is essential to have information about the reliability of predicted depth. Consequently, we model the SIDP task using a continuous distribution function. Unfortunately, it is challenging, if not impossible, to precisely model the continuous 3D scene. In this regard, existing methods generally resort to increasing the size and quality of the dataset for better scene modeling and improve SIDP accuracy. On the contrary, little progress is made in finding novel mathematical modeling strategies and exploiting the prevalent scene priors. To this end, we propose a multivariate Gaussian distribution to model scene depth. In practice, our assumption of the Gaussian model- ing of data is in consonance with real-world depth data (see Fig. 2) and generalizes well across different scenes. Fur- thermore, many computer and robot vision problems have successfully used it and benefited from Gaussian distribu- tion modeling in the past [9, 20, 45, 53, 63, 72, 77]. Let’s clarify this out way upfront that this is not for the first time an approach with a motivation of continuous mod- eling for SIDP is proposed [4,22,26,28,31,47]. 
Yet, existing methods in this direction model depth per pixel indepen- dently. It is clearly unreasonable, in SIDP modeling, to as- sume absolute democracy among each pixel, especially for very closeby scene points. Therefore, it is natural to think of modeling this problem in a way where depth at a particu- lar pixel can help infer, refine, and constrain the distribution of depth value for other image pixels. Nevertheless, it has yet to be known a priori the neighboring relation among pixels in the scene space to define the depth covariance among them. We do that here by defining a very general covariance matrix of dimension number of pixels × number of pixels ,i.e., depth prediction at a given pixel is assumed to be dependent on all other pixels’ depth. Overall, we aim to advocate multivariate Gaussian mod- eling with a notion of depth dependency among pixels as a useful scene prior. Now, add a fully dependent covariance modeling proposal to it—as suitable relations among pixels are not known. This makes the overall loss function com- putationally expensive. To efficiently optimize the proposed formulation, we parameterize the covariance matrix, assum- ing that it lies in a rather low-dimensional manifold so that it can be learned using a simple neural network. For train- ing our deep network, we utilize the negative log likelihood as the loss function (cf. Sec. 3.1). The trained model when tested on standard benchmark datasets gives state-of-the-art results for SIDP task (see Fig. 1 for qualitative comparison). Contributions. To summarize, our key contributions are: • A novel formulation to perform multivariate Gaussian co- variance modeling for solving the SIDP task in a deep neural network framework is introduced. • The introduced multivariate Gaussian covariance model- ing for SIDP is computationally expensive. To solve it efficiently, the paper proposes to learn the low-rank co- (a) First scene (b) Second scene Figure 2. The marginal ground-truth depth distribution for a pixel pairZa,Zbfor two scenes. The depth values for the pixel pair are taken from the fixed image location in the dataset, but the se- lected images are visually similar for the suitability of the feature and its corresponding depth values. The statistics show that the Gaussian distribution assumption with covariance modeling is a sensible choice for SIDP problem and not an unorthodox belief arranged or staged for an intricate formulation. variance matrix approximation by deep neural networks. • Contrary to the popular SIDP methods, the proposed ap- proach provides better depth as well as a measure of the suitability of the predicted depth value at test time.
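The training objective described above, a negative log-likelihood under a per-image multivariate Gaussian whose covariance is a learned low-rank-plus-diagonal factorization, maps directly onto an existing PyTorch distribution class. The sketch below uses random tensors in place of network outputs; the rank r, the pixel count, and the softplus used to keep the diagonal positive are assumptions, not the paper's exact parameterization.

```python
import torch
import torch.nn.functional as F
from torch.distributions import LowRankMultivariateNormal

def lowrank_gaussian_nll(mu, cov_factor, cov_diag, depth_gt):
    """Negative log-likelihood of ground-truth depth under a multivariate
    Gaussian whose covariance is cov_factor @ cov_factor.T + diag(cov_diag)
    with rank r << P; mu, cov_diag, depth_gt: [B, P], cov_factor: [B, P, r]."""
    dist = LowRankMultivariateNormal(mu, cov_factor, cov_diag)
    return -dist.log_prob(depth_gt).mean()

B, P, r = 2, 1024, 16                          # P = flattened pixel count
mu = torch.randn(B, P)                         # network-predicted mean depth
cov_f = 0.01 * torch.randn(B, P, r)            # network-predicted low-rank factor
diag = F.softplus(torch.randn(B, P)) + 1e-3    # positive per-pixel variances
gt = mu + 0.1 * torch.randn(B, P)
print(lowrank_gaussian_nll(mu, cov_f, diag, gt).item())
```

The low-rank parameterization is what keeps the log-determinant and the quadratic form tractable when P is the number of pixels rather than a small feature dimension.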
Ke_VILA_Learning_Image_Aesthetics_From_User_Comments_With_Vision-Language_Pretraining_CVPR_2023
Abstract Assessing the aesthetics of an image is challenging, as it is influenced by multiple factors including composition, color, style, and high-level semantics. Existing image aes- thetic assessment (IAA) methods primarily rely on human- labeled rating scores, which oversimplify the visual aes- thetic information that humans perceive. Conversely, user comments offer more comprehensive information and are a more natural way to express human opinions and prefer- ences regarding image aesthetics. In light of this, we pro- pose learning image aesthetics from user comments, and ex- ploring vision-language pretraining methods to learn mul- timodal aesthetic representations. Specifically, we pretrain an image-text encoder-decoder model with image-comment pairs, using contrastive and generative objectives to learn rich and generic aesthetic semantics without human labels. To efficiently adapt the pretrained model for downstream IAA tasks, we further propose a lightweight rank-based adapter that employs text as an anchor to learn the aesthetic ranking concept. Our results show that our pretrained aes- thetic vision-language model outperforms prior works on image aesthetic captioning over the AVA-Captions dataset, and it has powerful zero-shot capability for aesthetic tasks such as zero-shot style classification and zero-shot IAA, sur- passing many supervised baselines. With only minimal fine- tuning parameters using the proposed adapter module, our model achieves state-of-the-art IAA performance over the AVA dataset.1
1. Introduction Image Aesthetic Assessment (IAA) aims to quantify the hu- man perceived aesthetics of an image. It has many impor- tant applications, including photo recommendation, selec- tion, and editing. IAA is challenging because it is inherently subjective, and depends on various factors including image 1Our model is available at https://github.com/google- research/google-research/tree/master/vila Figure 1. We present VILA, a vision-language aesthetics learning framework based on image and user comment pairs. By pretrain- ing on a contrastive and generative target, it shows superior perfor- mance on aesthetic captioning as well as zero-shot aesthetic tasks, e.g., IAA, and style classification. With a lightweight rank-based adapter, we can efficiently adapt the pretrained model to IAA. composition, color usage, photographic style, and subject matter. In recent years, various learning-based IAA meth- ods have been proposed by leveraging deep models such as convolutional neural networks (CNN) [2, 12, 15, 42] and transformers [19]. These approaches learn from human- labeled IAA datasets where images are paired with aesthetic ratings, and models are trained to regress towards the mean opinion scores (MOS). Directly learning IAA models on human-labeled aes- thetic ratings, such as MOS, can be suboptimal as it lacks context regarding why an image is aesthetically pleasing or not. To provide richer supervision, various methods have attempted to integrate external knowledge such as theme [12, 33], human eye fixation [9], and aesthetic at- tributes [4, 24], to enhance IAA performance. These ap- proaches typically rely on multitask training or cascade This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 10041 score prediction with a frozen attribute network. However, obtaining additional labeled data or off-the-shelf models for such methods can be costly. Compared to the aforementioned methods that require additional annotations, our approach utilizes the abundance of image-comment pairs available on aesthetic websites and photographic forums. These pairs can be easily ob- tained from the Internet and contain extensive aesthetic in- formation ( e.g. objects, themes, styles, and user emotions), since humans are better at expressing aesthetic preferences through natural language than through abstract scores. On image sharing platforms like Flickr and DPChallenge2, user comments offer valuable insights into how they evaluate an image’s aesthetics. For instance, as shown in Fig. 1 (top), comments such as “very cool patterns and curls” and “little bit on the blurry side” reflects users’ positive and negative aesthetic opinions respectively. We aim to learn the diverse aesthetic semantics present in these image-comment pairs to establish a solid foundation for downstream IAA tasks. Using image-comment pairs for aesthetics learning re- mains largely unexplored. While previous works have leveraged user comments to improve IAA, their approaches differ significantly from ours. For example, [14,57,58] pro- posed to aggregate visual and comment features, yet they require both the image and comment as inputs during infer- ence. This requirement makes it difficult to use such meth- ods in real-world settings where images may not always be accompanied by comments. To mitigate this, Niu et al. 
[33] proposed to use the LDA topics [1] from the comments as pseudo labels to guide image representation learning. How- ever, the simplification of comments into topics may result in a loss of valuable contextual information. Therefore, we are motivated to explore other strategies for utilizing raw comments to extract richer aesthetic textual information. In this paper, we present a novel two-stage VIsion- Language Aesthetics ( VILA ) learning framework incor- porating image-text pretraining. Our goal is to develop a model that can effectively generalize to multiple down- stream aesthetic tasks (Fig. 1). In the first Pretraining stage, we learn an image-text model ( VILA-P ) by employ- ing contrastive and text sequence generation objectives, en- baling us to fully leverage fine-grained knowledge from aesthetic image-comment pairs. Our approach is moti- vated by recent advancements in vision-language models, such as CLIP [35], ALIGN [17], and CoCa [54], which exhibit impressive performance and generalization ability across multiple tasks. These models align vision and lan- guage feature spaces to capture the rich semantic infor- mation. However, these models are typically pretrained on general image-text pairs from the web, which can re- sult in under-representation of aesthetic-related informa- tion. Our experimental results indicate that such gener- 2https://www.dpchallenge.com/ally pretrained vision-language models underperform on aesthetic tasks (Sec. 5.3). As a solution, we propose the adoption of vision-language pretraining on aesthetic image- comment pairs from photograph sharing websites. To the best of our knowledge, our work is the first to explore the use of image-comment pairs in vision-language pretraining for aesthetics learning. After pretraining VILA-P on image-comment pairs, we finetune it for downstream score-based IAA tasks using a lightweight Rank-based adapter ( VILA-R ). This adapter involves adding feature residuals to the frozen image em- beddings to move images with high aesthetic quality closer to the anchor text “good image,” and images with low aes- thetic quality away from it. This method can effectively rank images based on human rated preferences. With 0.1% tunable parameters, our model outperforms previous works on IAA correlation metrics over the A V A dataset [32]. Our proposed VILA is capable of tackling multiple aesthetic-related tasks beyond score-based IAA (Fig. 1). Not only can it generate high-quality aesthetic comments, but it also exhibits impressive zero-shot learning (ZSL) ca- pabilities for aesthetic style classification and quality anal- ysis. Using text queries such as “good image” and “bad image” to compare images, our ZSL model outperforms su- pervised learning models like NIMA [42] which requires labor-intensive ratings as ground truth. This highlights the potential of learning rich image aesthetic concepts without relying on human-labeled data, thereby significantly reduc- ing data collection costs. We summarize the contributions of our work as follows: We propose a vision-language aesthetic learning framework (VILA) for learning rich image aesthetic features using image-comment pairs. We design a novel rank-based module to adapt the model to downstream IAA tasks without perturbing the pretrained weights, effectively learning the aesthetic quality concepts with minimal additional parameters. Our pretrained aesthetic model outperforms prior works for aesthetic captioning on the A V A- Captions [10] dataset. 
Even without any supervised labels, our zero-shot model achieves 69% mAP on the A V A-Style [32] dataset and 0.657 SRCC on the A V A dataset [32], outperforming many supervised approaches. With the proposed adapter and a small number of tunable parameters, our method further achieves state-of-the-art performance on A V A.
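The rank-based adapter can be sketched as a frozen image embedding plus a small learned residual, scored by cosine similarity against a frozen text anchor such as "good image" and trained with a pairwise ranking loss. The adapter shape (a single linear residual), the hinge margin, and the random stand-in embeddings are assumptions for illustration; VILA-R's exact module and ranking objective may differ.

```python
import torch
import torch.nn.functional as F

class RankAdapter(torch.nn.Module):
    """Add a small learned residual to frozen image embeddings and score
    aesthetics by cosine similarity to a frozen anchor text embedding
    (e.g. the prompt "good image")."""
    def __init__(self, dim):
        super().__init__()
        self.residual = torch.nn.Linear(dim, dim, bias=False)

    def score(self, img_emb, anchor_emb):
        adapted = F.normalize(img_emb + self.residual(img_emb), dim=-1)
        anchor = F.normalize(anchor_emb, dim=-1)
        return (adapted * anchor).sum(-1)        # higher = more aesthetic

def pairwise_rank_loss(score_hi, score_lo, margin=0.1):
    """Hinge loss pushing human-preferred images above less-preferred ones."""
    return F.relu(margin - (score_hi - score_lo)).mean()

dim = 512
adapter = RankAdapter(dim)
anchor = torch.randn(dim)                          # stand-in for text("good image")
hi, lo = torch.randn(8, dim), torch.randn(8, dim)  # frozen image embeddings
loss = pairwise_rank_loss(adapter.score(hi, anchor), adapter.score(lo, anchor))
loss.backward()
print(loss.item())
```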
Li_Boosting_Weakly-Supervised_Temporal_Action_Localization_With_Text_Information_CVPR_2023
Abstract Due to the lack of temporal annotation, current Weakly-supervised Temporal Action Localization (WTAL) methods are generally stuck in over-complete or incomplete localization. In this paper, we aim to leverage text information to boost WTAL from two aspects, i.e., (a) a discriminative objective to enlarge the inter-class difference, thus reducing over-complete localization; and (b) a generative objective to enhance intra-class integrity, thus finding more complete temporal boundaries. For the discriminative objective, we propose a Text-Segment Mining (TSM) mechanism, which constructs a text description based on the action class label and regards the text as a query to mine all class-related segments. Without temporal annotation of actions, TSM compares the text query with entire videos across the dataset to mine the best-matching segments while ignoring irrelevant ones. Due to the sub-actions shared across different categories of videos, merely applying TSM is too strict and neglects semantically related segments, which results in incomplete localization. We further introduce a generative objective named Video-text Language Completion (VLC), which focuses on all semantically related segments from videos to complete the text sentence. We achieve state-of-the-art performance on THUMOS14 and ActivityNet1.3. Surprisingly, we also find that our proposed method can be seamlessly applied to existing methods and improves their performance by a clear margin. The code is available at https://github.com/lgzlIlIlI/Boosting-WTAL .
1. Introduction Temporal action localization attempts to temporally localize the action instances of interest in untrimmed videos. Although current fully-supervised temporal action localization methods [5, 26, 42, 51] have achieved remarkable progress, they require time-consuming and labor-intensive frame-level annotations. To alleviate the annotation cost, weakly-supervised temporal action localization (WTAL) methods [15, 22, 32, 35] have gained more attention recently, as they require only efficient video-level annotations. *Corresponding author
Figure 1. Comparison of our proposed framework with current WTAL methods. (a) Common failures in existing WTAL methods. (b) The pipeline of existing WTAL models. (c) The proposed framework with text-segment mining and video-text language completion, where the depth of color represents the degree of correlation between segments and texts.
With only video-level supervision, existing WTAL methods [15, 22, 35, 45] generally utilize video information to train a classifier, which is used to generate a sequence of class logits or predictions named the temporal class activation map (T-CAM). While significant improvement has been achieved, current methods still suffer from two problems, i.e., incomplete and over-complete localization. As shown in Fig. 1(a), sub-actions with low discriminability may be ignored, while background segments that contribute to classification can be misclassified as action, causing incomplete and over-complete localization, respectively.
Differently from current methods that only utilize video information, in this paper we aim to explore text information to improve WTAL from two aspects: (a) a discriminative objective to enlarge the inter-class difference, thus reducing over-complete localization; and (b) a generative objective to enhance intra-class integrity, thus finding more complete temporal boundaries. For the discriminative objective, we propose a Text-Segment Mining (TSM) mechanism, where the action label texts are used as queries to mine all related segments in videos. Specifically, we first use prompt templates to incorporate the class label information into the text query. Without temporal annotations, TSM compares the text query with all segments of different videos across the dataset, as shown in Fig. 1(c). During the comparison, the segments that best match the text query are mined, while other irrelevant segments are ignored, which is similar to a 'matched filter' [43, 50]. In this way, segments and text queries of the same class from all videos are pulled close while others are pushed away, hence enhancing the inter-class difference. For different categories of videos, there are some shared sub-actions, e.g., the sub-action "Approach" exists in both "High Jump" and "Long Jump" videos. Merely using TSM is too strict and neglects such semantically related segments, which results in incomplete localization, e.g., neglecting the "Approach" segments.
To overcome this problem, we fur- ther introduce a generative objective named Video-text Lan-guage Completion (VLC) which focuses on all semantic- related segments to complete the text sentence. First, weconstruct a description sentence for the action label of thevideo and mask the key action words in the sentence, as shown in Fig. 2. Then an attention mechanism is design to collect semantic related segments as completely as possi-ble to predict masked action text via the language recon-structor, which enhances the intra-class integrity. Com-bining TSM and VLC by a self-supervised constraint, ourmethod achieves the new state-of-the-art on two popular benchmarks, i.e., THUMOS14 [ 17] and ActivityNet1.3 [ 1]. Furthermore, we also find our proposed method can be ap-plied into existing methods, and improve their performanceswith a clear margin. Our contributions are summarized as three-folds: (a)To best of our knowledge, we are the first to leverage text infor- mation to boost WTAL. We also prove that our method can be easy to extend to existing state-of-the-art approaches and improve their performance. (b)To leverage the text infor- mation, we devise two objective: the discriminative objec-tive to enlarge the inter-class difference, thus reducing the over-complete; and the generative objective to enhance theintra-class integrity, thus finding more complete temporalboundaries. (c)Extensive experiments illustrate our method outperforms current methods on two public datasets, and comprehensive ablation studies reveal the effectiveness ofthe proposed objectives.
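A per-video sketch of the text-segment mining step is given below: cosine similarities between segment features and class-text queries act as a soft 'matched filter' over time, and a top-k pool turns them into video-level class logits trained with only the video-level label. The top-k pooling, temperature, and feature dimensions are assumptions; the actual TSM builds text queries from prompt templates and contrasts segments across the whole dataset, and VLC adds the masked language-completion branch on top.

```python
import torch
import torch.nn.functional as F

def text_segment_mining_logits(seg_feats, class_text_emb, temperature=0.07, topk=8):
    """Cosine similarity between every segment and every class-text query,
    then a video-level logit per class from the top-k best-matching
    segments (a soft 'matched filter' over time).
    seg_feats: [T, D] segment features; class_text_emb: [C, D]."""
    sim = F.normalize(seg_feats, dim=-1) @ F.normalize(class_text_emb, dim=-1).t()
    k = min(topk, sim.size(0))
    video_logits = sim.topk(k, dim=0).values.mean(0) / temperature      # [C]
    return sim, video_logits

seg = torch.randn(100, 256)            # 100 temporal segments of one video
txt = torch.randn(20, 256)             # 20 action-class text queries
sim, logits = text_segment_mining_logits(seg, txt)
video_label = torch.tensor(3)          # weak, video-level supervision only
loss = F.cross_entropy(logits.unsqueeze(0), video_label.unsqueeze(0))
print(loss.item(), sim.shape)          # sim can also serve as a text-driven T-CAM
```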
Lin_Meta_Architecture_for_Point_Cloud_Analysis_CVPR_2023
Abstract Recent advances in 3D point cloud analysis bring a di- verse set of network architectures to the field. However, the lack of a unified framework to interpret those networks makes any systematic comparison, contrast, or analysis challenging, and practically limits healthy development of the field. In this paper, we take the initiative to explore and propose a unified framework called PointMeta, to which the popular 3D point cloud analysis approaches could fit. This brings three benefits. First, it allows us to compare different approaches in a fair manner, and use quick ex- periments to verify any empirical observations or assump- tions summarized from the comparison. Second, the big pic- ture brought by PointMeta enables us to think across differ- ent components, and revisit common beliefs and key design decisions made by the popular approaches. Third, based on the learnings from the previous two analyses, by doing simple tweaks on the existing approaches, we are able to derive a basic building block, termed PointMetaBase. It shows very strong performance in efficiency and effective- ness through extensive experiments on challenging bench- marks, and thus verifies the necessity and benefits of high- level interpretation, contrast, and comparison like Point- Meta. In particular, PointMetaBase surpasses the previ- ous state-of-the-art method by 0.7%/1.4/%2.1% mIoU with only 2%/11%/13% of the computation cost on the S3DIS datasets. The code and models are available at https: //github.com/linhaojia13/PointMetaBase . *Corresponding author: [email protected] 0 20 40 60 80 FLOPs (G)68707274766-fold mIoU (%) RandLA-NetPoint TransformerPointMetaBase PointNeXt7.6 higher mIoU7.71x FLOPs reduction 2.1 higher mIoU 68.074.975.677.0Figure 1. Segmentation performance of PointMetaBase on S3DIS [1]. PointMetaBase surpasses the state-of-the-art method Point- NeXt [16] significantly with a large FLOPs reduction.
1. Introduction In the past two decades, the popularity of 3D data ac- quisition technology has led to great interest in point cloud analysis. Unlike images, point clouds are inherently sparse, unordered, and irregular. These characteristics make it challenging for the powerful convolutional neural networks (CNN) [4, 18] to extract useful information from point clouds. Early studies attempt to convert the point cloud into grids by either voxelization or 2D projections, such that the standard CNN can be applied, but the applications are lim- ited by the extra computation and information loss. With the introduction of permutation equivariance and invariance by PointNet [13] and PointNet++ [14], it is pos- sible to use CNNs to process point clouds in their unstruc- This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 17682 tured format. Following PointNet++ [14], many point- based networks have been introduced, most of which fo- cus on the development of complex building blocks to ex- tract local features, such as the X − Conv convolution in PointCNN [22] and the self-attention layer in Point Trans- former [32]. Although brought significant performance gains, these models are very complicated. On one hand, this makes the approaches computational demanding and thus limited in applications. On the other hand, it also gives ex- tra challenges in systematic comparison and analysis among the models, and thus impacts efficient development of the field, which ideally would benefit from theoretic and empir- ical guidance. The community is well aware of this issue, and research efforts have been put on looking for a unified perspective to compare these methods and identify the most crucial imple- mentation details. PosPool [10] and PointNeXt [16] take a step toward this goal. Adopting a deep residual architecture as the base network, PosPool [10] evaluates the representa- tive building blocks [8, 14, 22, 26] and finds that they per- form similarly well. PointNeXt [16] revisits the improved training and scaling strategies adopted by previous SotA methods [7, 15, 22, 32] and tweaks the early model Point- Net++ [14] with the learnings, which achieves state-of-the- art performance. Both of them give insights and inspira- tions to the community via dedicated experiment designs and extensive empirical analysis. However, neither of them provides a framework that is general enough to fit a broad range of point cloud analysis approaches, such that large scale comparison and contrast could be done. In this paper, we argue that if viewed from the perspec- tive of basic building blocks, the majority of existing ap- proaches could be fit into a single meta architecture (Sec. 3). We call it PointMeta. PointMeta abstracts the compu- tation pipeline of the building blocks for point cloud analy- sis into four meta functions: a neighbor update function, a neighbor aggregation function, a point update function and a position embedding function (implicit or explicit). As will be discussed in more details in the next sections, this frame- work brings three benefits, which are also our core technical contributions: • In the dimension of models , it allows us to compare and contrast different models in a fair manner. 
So it becomes practical to observe and summarize learnings and assumptions, whose correctness can be further verified through experiments with variables controlled under the same framework. For example, from a systematic analysis of all the components across popular models, we find that, for the position embedding function, explicit position embedding is empirically the best choice (Sec. 4.2).
• In the dimension of components, it allows us to have a higher-level view across components, and thus to revisit the common beliefs and design decisions of the existing approaches. For example, despite the common perception, we find that the neighbor aggregation blocks may collaborate or compete with the learned neighbor features (Sec. 4.3), and thus should be designed carefully.
• Based on the learnings from the previous two dimensions, we are then able to apply simple tweaks to the building blocks to adopt the best practices (Sec. 4.5). The resulting building block, PointMetaBase, achieves very strong performance, surpassing the previous state-of-the-art method [16] by 0.7%/1.4%/2.1% mIoU with only 2%/11%/13% of the computation cost on S3DIS [1] (Fig. 1), and can act as a baseline for further research.
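To make the four meta functions concrete, the following is a minimal PyTorch sketch of a PointMeta-style building block. The layer widths, the max-pooling aggregation, and the relative-position embedding are illustrative assumptions, not the exact PointMetaBase design.

```python
import torch
import torch.nn as nn

class PointMetaBlock(nn.Module):
    """Minimal sketch of the four meta functions (layer sizes are hypothetical)."""
    def __init__(self, dim):
        super().__init__()
        self.pos_embed = nn.Linear(3, dim)                                   # explicit position embedding
        self.neighbor_update = nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) # neighbor update
        self.point_update = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())    # point update

    def forward(self, feats, neighbor_idx, xyz):
        # feats: (N, dim) point features, neighbor_idx: (N, K) indices, xyz: (N, 3) coordinates
        nbr_feats = feats[neighbor_idx]                          # (N, K, dim) gather neighbor features
        rel_pos = xyz[neighbor_idx] - xyz[:, None, :]            # (N, K, 3) relative coordinates
        nbr_feats = self.neighbor_update(nbr_feats + self.pos_embed(rel_pos))
        agg = nbr_feats.max(dim=1).values                        # neighbor aggregation (max pooling)
        return self.point_update(agg)                            # (N, dim) updated point features
```

Under this view, existing blocks differ mainly in which of the four functions they make heavy (e.g., attention for the neighbor update) and which they keep trivial.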
Liang_StyLess_Boosting_the_Transferability_of_Adversarial_Examples_CVPR_2023
Abstract
Adversarial attacks can mislead deep neural networks (DNNs) by adding imperceptible perturbations to benign examples. The attack transferability enables adversarial examples to attack black-box DNNs with unknown architectures or parameters, which poses threats to many real-world applications. We find that existing transferable attacks do not distinguish between style and content features during optimization, limiting their attack transferability. To improve attack transferability, we propose a novel attack method called style-less perturbation (StyLess). Specifically, instead of using a vanilla network as the surrogate model, we advocate using stylized networks, which encode different style features by perturbing an adaptive instance normalization. Our method can prevent adversarial examples from using non-robust style features and help generate transferable perturbations. Comprehensive experiments show that our method can significantly improve the transferability of adversarial examples. Furthermore, our approach is generic and can outperform state-of-the-art transferable attacks when combined with other attack techniques. Our code is available at https://github.com/uhiu/StyLess.
1. Introduction
Deep neural networks (DNNs) [14, 24] are currently effective methods for solving various challenging tasks such as computer vision and natural language processing. Although DNNs achieve impressive accuracy, especially for computer vision tasks such as image classification, they are also known to be vulnerable to adversarial examples [12, 43]. Adversarial examples are malicious images obtained by adding imperceptible perturbations to benign images. Notably, the transferability of adversarial examples is an intriguing phenomenon, which refers to the property that the same adversarial example can successfully attack different black-box DNNs [5, 30, 34, 51].
It has been observed that image style can be decoupled from image content, and style transfer techniques allow us to generate stylized images based on arbitrary style images [18]. Image style refers to the unique visual characteristics of an image, including its colors, textures, and lighting. For instance, two photos of the same object taken by different photographers can have very different styles. Robust DNNs should rely more on the content features of data than on style features. This inspired us to improve attack transferability from the perspective of avoiding non-robust features. We believe that the style features of DNNs are non-robust for building transferable attacks. However, existing attacks do not distinguish between the surrogate models' style and content features, which may reduce attack transferability.
Figure 1. An overview of our StyLess attack. We create a stylized model F by injecting synthesized style features into the surrogate model (F = F2 ∘ F1) using an adaptive IN layer. StyLess reduces the use of non-robust style features in the vanilla surrogate model F, ultimately improving attack transferability.
We propose using stylized surrogate models to control style features, which can significantly improve transferability. We refer to the original surrogate model as the "vanilla model." The proposed stylized model is created by adding an adaptive instance normalization (IN) layer to the vanilla model. By adjusting the parameters of the inserted IN layer, we can easily transform the style features of the surrogate models. To compare the stylized and vanilla surrogate models, we analyzed their network losses during optimization. Surprisingly, we found that the adversarial loss of the vanilla model increased much faster than those of the stylized models, resulting in a widening loss gap. This phenomenon reveals that, as the attack iteration progresses, current attack methods focus only on maximizing the loss of the vanilla model, leading to increased use of the style features of the vanilla surrogate model. However, we believe that style features are non-robust for transferable attacks, and relying too much on them may reduce attack transferability. To enhance transferability, we aim to limit the use of non-robust style features and close the loss gap.
Based on the above findings, we propose a novel method called StyLess to improve the transferability of adversarial examples. Our method uses multiple synthesized style fea- tures to compete with the original style features during the iterative optimization of attack. The process is illustrated in Figure 1. We encode various synthesized style features into a surrogate model via an IN layer to achieve stylized sur- rogate models. Instead of using only the vanilla surrogate model, we use the gradients of both the stylized surrogate models and the vanilla one to update adversarial examples. The front part of the surrogate model works as a style en- coder, and the IN layer simulates synthesized style features. Although we can use a decoder to explicitly generate the final stylized samples, it is unnecessary for the proposed at- tack method. Experimental results demonstrate that StyLess can enhance the transferability of state-of-the-art adversar- ial attacks on both unsecured and secured black-box DNNs. Our main contributions are summarized as follows: • We introduce a novel perspective for interpreting at- tack transferability: the original style features may hin- der transferability. We verify that current iterative at- tacks increasingly use the style features of the surro- gate model during the optimization process. • We propose a novel attack called StyLess to enhance transferability by minimizing the use of original style features. To achieve this, we insert an IN layer to cre- ate stylized surrogate models and use gradients from both stylized and vanilla models. • We conducted comprehensive experiments on various black-box DNNs to demonstrate that StyLess can sig- nificantly improve attack transferability. Furthermore, we show that StyLess is a generic approach that can be combined with existing attack techniques.
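As a rough illustration of the idea, the sketch below perturbs the channel-wise statistics of an intermediate feature map with an adaptive IN layer and accumulates the adversarial losses of the vanilla surrogate and several stylized variants before taking a gradient step. The split of the surrogate into front/back halves, the Gaussian style perturbation, and the equal loss weighting are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveIN(nn.Module):
    """Injects a synthesized style by re-scaling/shifting instance-normalized features.
    gamma/beta are random perturbations around identity (an assumption for illustration)."""
    def __init__(self, channels, noise=0.2):
        super().__init__()
        self.gamma = 1.0 + noise * torch.randn(1, channels, 1, 1)
        self.beta = noise * torch.randn(1, channels, 1, 1)

    def forward(self, x):
        mu = x.mean(dim=(2, 3), keepdim=True)
        sigma = x.std(dim=(2, 3), keepdim=True) + 1e-6
        return self.gamma.to(x) * (x - mu) / sigma + self.beta.to(x)

def styless_grad(x_adv, y, front, back, num_styles=3):
    """Average gradient of the vanilla surrogate (back(front(x))) and stylized variants."""
    x_adv = x_adv.clone().requires_grad_(True)
    feat = front(x_adv)
    loss = F.cross_entropy(back(feat), y)                     # vanilla surrogate loss
    for _ in range(num_styles):
        adain = AdaptiveIN(feat.shape[1])
        loss = loss + F.cross_entropy(back(adain(feat)), y)   # stylized surrogate losses
    loss.backward()
    return x_adv.grad.sign()                                  # e.g., for an FGSM/PGD-style update
```

The returned sign gradient can then drive a standard iterative attack step under an L-infinity budget.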
Kim_MAGVLT_Masked_Generative_Vision-and-Language_Transformer_CVPR_2023
Abstract While generative modeling on multimodal image-text data has been actively developed with large-scale paired datasets, there have been limited attempts to generate both image and text data by a single model rather than a gener- ation of one fixed modality conditioned on the other modal- ity. In this paper, we explore a unified generative vision- and-language (VL) model that can produce both images and text sequences. Especially, we propose a generative VL transformer based on the non-autoregressive mask predic- tion, named MAGVLT , and compare it with an autoregres- sive generative VL transformer (ARGVLT). In comparison to ARGVLT, the proposed MAGVLT enables bidirectional context encoding, fast decoding by parallel token predic- tions in an iterative refinement, and extended editing capa- bilities such as image and text infilling. For rigorous train- ing of our MAGVLT with image-text pairs from scratch, we combine the image-to-text, text-to-image, and joint image- and-text mask prediction tasks. Moreover, we devise two additional tasks based on the step-unrolled mask prediction and the selective prediction on the mixture of two image-text pairs. Experimental results on various downstream gener- ation tasks of VL benchmarks show that our MAGVLT out- performs ARGVLT by a large margin even with significant inference speedup. Particularly, MAGVLT achieves com- petitive results on both zero-shot image-to-text and text-to- image generation tasks from MS-COCO by one moderate- sized model (fewer than 500M parameters) even without the use of monomodal data and networks.
1. Introduction
Generalizable multimodal modeling has recently made a lot of progress, especially in the field of vision-and-language (VL) modeling [3, 17, 30, 43, 45, 46, 48, 62, 63, 66, 67]. In particular, a massive amount of image-text data [9, 11, 51, 52, 54, 55] allows robust pretraining of large-scale multimodal VL models that can be easily transferred to various downstream tasks, including image captioning [2, 38], text-guided image generation [16, 17, 19, 40, 45, 46, 48, 67], visual question answering [23], and image-text retrieval [30, 38, 42, 43]. In this respect, many multimodal VL pretraining algorithms have been proposed in the literature, and these can be broadly categorized into either discriminative or generative learning algorithms. Discriminative pretraining such as contrastive learning [30, 43] aims to obtain semantic representations effective for discriminative tasks, while generative pretraining learns them to reconstruct an input [4, 8, 13, 25, 32, 35, 61, 63, 64]. The recent growth of model capacity and data size has led to more interest in generative pretraining since it can provide more diverse and improved generalization ability for both VL understanding and VL generation tasks.
While generative VL pretraining has been widely exploited, most existing algorithms focus on representation learning for VL understanding tasks [7, 8, 12, 13, 33, 35, 63] or conditional generation tasks where generation is performed on one fixed modality conditioned on the other modality [3, 13, 14, 16, 17, 19, 25, 31, 40, 45, 46, 48, 61, 64, 66, 67]. A few algorithms have tried to produce data in both modalities from a single VL model [4, 32]. If one universal model can generate both modalities, it would be beneficial for concentrating training efforts on a single model as well as for resource saving under a resource-constrained deployment. Moreover, we can expect task extension as well as synergetic performance improvement between the modalities from this multimodal generation. Therefore, in this work, we develop a unified generative VL model.
There are two prevalent paradigms of generative modeling for image and text processing: autoregressive (AR) generative modeling [16, 32, 46, 61, 67] and non-AR generative modeling [4, 5, 45, 48]. Many previous algorithms adopt AR modeling due to its excellent generation results and high training scalability, especially with transformer networks. However, AR modeling is restricted to unidirectional conditioning, in that an image needs to be flattened into a 1D sequence by an unnatural ordering. In addition, AR sampling is performed by one-by-one predictions of elements, which incurs very slow generation for a long sequence. Recently, in order to overcome these limitations of AR modeling, non-AR generative modeling based on the mask prediction has been proposed for language [22], image [10, 36], and video processing [24, 59]. Masked modeling is usually employed for representation learning to solve understanding tasks in language, vision, and VL domains. However, with an iterative refinement-based generation and a variable mask ratio during training, it has been shown to be a promising form of generative modeling.
In this regard, for our generative VL modeling, we propose Masked Generative VL T ransformer (MAGVLT). In con- trast to AR-based generative VL transformer (ARGVLT), the proposed MAGVLT is able to exploit bidirectional con- ditioning and fast generation through a small number of re- finement steps and parallel token predictions. In specific, MAGVLT can generate any or both of an im- age and a text sequence conditioned also on any or both of them. Namely, it can perform any kind of task in a form of image-and-text-to-image-and-text (IT2IT), includ- ing image-to-text (I2T) and text-to-image (T2I) tasks. Fol- lowing the previous masked generative modeling [10, 22], we conduct sampling by iterative denoising based on the masked token prediction and train MAGVLT by the masked token prediction objective with a randomly sampled mask ratio to take into account various denoising steps. Here, to perform robust training of MAGVLT especially with only image-text pairs from scratch, MAGVLT is learned basi- cally by the composition of image-to-text, text-to-image, and joint image-and-text mask prediction objectives. We observe that our cross-modal masking (joint image-and-text mask prediction) during training helps in improving both performances of I2T and T2I tasks over single-modal mask- ing (image-to-text + text-to-image mask predictions). Note that only masked generative modeling used in MAGVLT enables this cross-modal mask prediction during training. In addition, we propose to use two additional tasks based on the step-unrolled mask prediction and the selective pre- diction on the mixture of two image-text pairs. The former one is motivated by SUNDAE [50] and is modified to per- form the mask prediction on the unrolled prediction, which simulates the masked input samples encountered at the in- termediate refinement steps. On the other hand, the latter one learns to reconstruct the masked tokens in accordance with a selected context between two VL contexts that are mixed as a noisy input. This selective prediction improves cross-modal attention for an accurate generation. Through experiments on various downstream VL gener- ation tasks, we empirically demonstrate that our MAGVLT significantly outperforms ARGVLT even with greatly re- duced inference time. Especially, to the best of our knowl- edge, MAGVLT is the first model that obtains strong perfor- mances on both zero-shot I2T and zero-shot T2I generation tasks of MS-COCO benchmark [38] by a single moderate- sized model (fewer than 500M parameters) without relying on monomodal data and networks. Previously, as unified generative VL models, L-Verse [32] and UPGen [4] have not showed zero-shot I2T results while OFA [62] has usedmonomodal data and also has not showed zero-shot I2T and T2I results. Extensive ablations also validate the contribu- tion of each component for MAGVLT. To summarize, our main contributions are: (1) a masked generative VL transformer as a unified generative VL model that can produce both images and texts; (2) a robust train- ing on image-text pairs by multiple training tasks that in- clude the cross-modal mask prediction in tandem with the step-unrolled mask prediction and the selective prediction on the mixed context; and (3) an empirical validation of MAGVLT that outperforms the autoregressive model and moreover shows competitive performances on both of zero- shot I2T and T2I generation tasks for the first time without employing extra monomodal data and networks.
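The following sketch shows the iterative mask-and-predict decoding loop that such non-AR models use at inference time. The cosine re-masking schedule and confidence-based token selection follow the general MaskGIT-style recipe, and the `model(tokens, cond)` interface is a placeholder assumption.

```python
import math
import torch

@torch.no_grad()
def mask_predict_decode(model, cond, seq_len, mask_id, steps=8):
    """Iterative refinement: start fully masked, keep confident predictions,
    re-mask the least confident tokens with a shrinking (cosine) ratio.
    Assumes model(tokens, cond) returns (1, seq_len, vocab) logits."""
    tokens = torch.full((1, seq_len), mask_id, dtype=torch.long)
    for t in range(steps):
        logits = model(tokens, cond)
        conf, pred = logits.softmax(-1).max(-1)          # (1, L) confidence and argmax
        tokens = torch.where(tokens == mask_id, pred, tokens)
        n_mask = int(seq_len * math.cos(math.pi / 2 * (t + 1) / steps))
        if n_mask == 0:
            break
        remask = conf[0].topk(n_mask, largest=False).indices
        tokens[0, remask] = mask_id                      # re-mask low-confidence positions
    return tokens
```

Training with a randomly sampled mask ratio exposes the model to every intermediate masking level this loop visits, which is why the same objective supports I2T, T2I, and joint IT2IT generation.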
Kania_BlendFields_Few-Shot_Example-Driven_Facial_Modeling_CVPR_2023
Abstract
Generating faithful visualizations of human faces requires capturing both coarse and fine-level details of the face geometry and appearance. Existing methods are either data-driven, requiring an extensive corpus of data not publicly accessible to the research community, or fail to capture fine details because they rely on geometric face models that cannot represent fine-grained details in texture with a mesh discretization and linear deformation designed to model only a coarse face geometry. We introduce a method that bridges this gap by drawing inspiration from traditional computer graphics techniques. Unseen expressions are modeled by blending appearance from a sparse set of extreme poses. This blending is performed by measuring local volumetric changes in those expressions and locally reproducing their appearance whenever a similar expression is performed at test time. We show that our method generalizes to unseen expressions, adding fine-grained effects on top of smooth volumetric deformations of a face, and demonstrate how it generalizes beyond faces.
1. Introduction
Recent advances in neural rendering of 3D scenes [53] offer 3D reconstructions of unprecedented quality [36] with an ever-increasing degree of control [23, 29]. Human faces are of particular interest to the research community [1, 13-15] due to their application in creating realistic digital doubles [32, 53, 75, 79].
Table 1. Comparison – We compare several methods to our approach. Other methods fall short in data efficiency and applicability. For example, AVA [7] requires 3.1 million training images while VolTeMorph [15] cannot model expression-dependent wrinkles realistically. Per-criterion results for NeRF [36] / NeRFies [42] / HyperNeRF [43] / NeRFace [13] / NHA [17] / AVA [7] / VolTeMorph [15] / Ours:
Applicability beyond faces: ✓ ✓ ✓ ✗ ✗ ✗ ✓ ✓
Interpretable control: ✗ ✗ ✗ ✓ ✓ ✗ ✓ ✓
Data efficiency: ✗ ✓ ✓ ✗ ✓ ✗ ✓ ✓
Expression-dependent changes: ✗ ✗ ✓ ✓ ✓ ✓ ✗ ✓
Generalizability to unknown expressions: ✗ ✗ ✗ ✓ ✓ ✗ ✓ ✓
To render facial expressions not observed during training, current solutions [1, 13-15] rely on parametric face models [6]. These allow radiance fields [36] to be controlled by facial parameters estimated by off-the-shelf face trackers [27]. However, parametric models primarily capture smooth deformations and lead to digital doubles that lack realism because fine-grained and expression-dependent phenomena like wrinkles are not faithfully reproduced.
Authentic Volumetric Avatars (AVA) [7] overcomes this issue by learning from a large multi-view dataset of synchronized and calibrated images captured under controlled lighting. Their dataset covers a series of dynamic facial expressions and multiple subjects. However, this dataset remains unavailable to the public and is expensive to reproduce. Additionally, training models from such a large amount of data requires significant compute resources. To democratize digital face avatars, more efficient solutions in terms of hardware, data, and compute are necessary.
We address the efficiency concerns by building on the recent works in Neural Radiance Fields [15, 70, 74]. In particular, we extend VolTeMorph [15] to render facial details learned from images of a sparse set of expressions. To achieve this, we draw inspiration from blend-shape correctives [26], which are often used in computer graphics as a data-driven way to correct potential mismatches between a simplified model and the complex phenomena it aims to represent. In our setting, this mismatch is caused by the gap between the low-frequency deformations that the tetrahedral mesh from VolTeMorph [15], designed for real-time performance, can capture, and the high-frequency nature of expression wrinkles.
We train multiple radiance fields, one for each of the K sparse expressions present in the input data, and blend them to correct the low-frequency estimate provided by VolTeMorph [15]; see Fig. 1. We call our method BlendFields since it resembles the way blend shapes are employed in 3DMMs [6]. To keep K small (i.e., to maintain a few-shot regime), we perform local blending to exploit the known correlation between wrinkles and changes in local differential properties [21, 45].
Using the dynamic geometry of [ 15], local changes in differential properties can be easily ex- tracted by analyzing the tetrahedral representation under- lying the corrective blendfields of our model.Contributions . We outline the main qualitative differences between our approach and related works in Tab. 1, and our empirical evaluations confirm these advantages. In sum- mary, we: • extend V olTeMorph [ 15] to enable modeling of high- frequency information, such as expression wrinkles in a few-shot setting; • introduce correctives [ 6] to neural field representations and activate them according to local deformations [ 45]; • make this topic more accessible with an alternative to techniques that are data and compute-intensive [ 7]; • show that our model generalizes beyond facial modeling, for example, in the modeling of wrinkles on a deformable object made of rubber.
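A minimal sketch of the local blending step is given below: per-point weights are derived from how closely the current local deformation descriptor matches each of the K captured extreme expressions, and the K expression-specific fields are blended with those weights. The strain descriptor, the softmax weighting, and the field interface are illustrative assumptions, not the exact BlendFields formulation.

```python
import torch

def blend_fields(query_pts, radiance_fields, strain_now, strain_k, temperature=0.1):
    """query_pts: (P, 3); radiance_fields: list of K callables returning (P, 4) rgb+density;
    strain_now: (P, D) current local deformation descriptor; strain_k: (K, D) descriptors
    of the K captured extreme expressions (all shapes are illustrative)."""
    sim = -torch.cdist(strain_now, strain_k) / temperature   # (P, K) similarity to each expression
    w = sim.softmax(dim=-1)                                  # per-point blending weights
    outs = torch.stack([f(query_pts) for f in radiance_fields], dim=1)  # (P, K, 4)
    return (w.unsqueeze(-1) * outs).sum(dim=1)               # (P, 4) blended corrective output
```

Because the weights depend only on local geometry, a wrinkle observed in one captured expression can reappear wherever a similar local deformation occurs at test time.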
Liu_Explicit_Visual_Prompting_for_Low-Level_Structure_Segmentations_CVPR_2023
Abstract
We consider the generic problem of detecting low-level structures in images, which includes segmenting the manipulated parts, identifying out-of-focus pixels, separating shadow regions, and detecting concealed objects. Whereas each such topic has been typically addressed with a domain-specific solution, we show that a unified approach performs well across all of them. We take inspiration from the widely-used pre-training and then prompt tuning protocols in NLP and propose a new visual prompting model, named Explicit Visual Prompting (EVP). Different from previous visual prompting, which is typically a dataset-level implicit embedding, our key insight is to enforce the tunable parameters to focus on the explicit visual content from each individual image, i.e., the features from frozen patch embeddings and the input's high-frequency components. The proposed EVP significantly outperforms other parameter-efficient tuning protocols under the same amount of tunable parameters (5.7% extra trainable parameters for each task). EVP also achieves state-of-the-art performance on diverse low-level structure segmentation tasks compared to task-specific solutions. Our code is available at: https://github.com/NiFangBaAGe/Explicit-Visual-Prompt.
1. Introduction
Advances in image editing and manipulation algorithms have made it easy to create photo-realistic but fake pictures [31, 39, 63]. Detecting such manipulated regions becomes an important problem due to its potential negative impact related to surveillance and crime [31]. Low-level structures are known to be beneficial to tampered region detection, i.e., resizing and copy-pasting will destroy the consistency of JPEG compression levels between the tampered region and the host image [28, 50, 62], and the noise level of the tampered region also differs from that of the background [76, 87]. Interestingly, to segment blurred pixels [67], shadowed regions [59], and concealed objects [15], low-level clues also play important roles. These detection tasks are shown to be beneficial to numerous computer vision tasks, including auto-refocus [1], image retargeting [37], object tracking [55], etc.
Figure 1. We propose a unified method for four low-level structure segmentation tasks: camouflaged object, forgery, shadow and defocus blur detection (Top). Our approach relies on a pre-trained frozen transformer backbone that leverages explicitly extracted features, e.g., the frozen embedded features and high-frequency components, to prompt knowledge.
Although all these tasks belong to low-level structure segmentation, they are typically addressed by domain-specific solutions with carefully designed network architectures [8, 87, 90]. Moreover, the lack of large-scale datasets is often considered a major factor that limits performance [31]. In this work, we propose a solution that addresses the four tasks in a unified fashion.
We take inspiration from recent advances in prompting [2, 4, 33], a concept that initially emerged in natural language processing (NLP) [3]. The basic idea is to efficiently adapt a frozen large foundation model to many downstream tasks with a minimum of extra trainable parameters. As the foundation model has already been trained on a large-scale dataset, prompting often leads to better model generalization on the downstream tasks [3], especially in the case of limited annotated data. Prompting also significantly reduces model storage, since only a shared base model and the task-aware promptings need to be saved.
Our main insight is to tune the task-specific knowledge only from the features of each individual image itself, because the pre-trained base model already contains sufficient knowledge for semantic understanding. This is also inspired by the effectiveness of hand-crafted image features, such as SIFT [29], JPEG noise [50], and resampling artifacts [62], in these tasks [28, 29, 46, 50, 62, 87]. Based on this observation, we propose explicit visual prompting (EVP), where the tuning performance can be hugely improved via the re-modulation of image features. Specifically, we consider two kinds of features for our task.
The first is the features from the frozen patch embedding, which is critical since we need to shift the distribution of the original model. Another is high-frequency components of the input image since the pre-trained visual recognition model is learned to be invariant to these features via data augmen- tation. As shown in Figure 1, we take a model pre-trained on a large-scale dataset and freeze its parameters. Then, to adapt to each task, we tune the embedded features and learn an extra embedding for high-frequency components of each individual image. In terms of experiments, we validate our approach on nine datasets of four tasks: forgery detection, shadow detec- tion, defocus blur detection as well as camouflaged object detection. Our simple and unified network achieves very competitive performance with the whole model fine-tuning and outperforms task-specific solutions without modifica- tion. In summary, our main contributions are as follows: •We design a unified approach that produces state-of- the-art performances for a number of tasks, including forgery detection, defocus blur detection, shadow de- tection, and camouflaged object detection. •We propose explicit visual prompting (EVP), which takes the features from the frozen patch embedding and the input’s high-frequency components as prompting. It is demonstrated to be effective across different tasks and outperforms other parameter-efficient tuning methods. •Our method greatly simplifies the low-level structure segmentation models as well as achieves comparable performance with well-designed SOTA methods.
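As a hedged illustration of the second kind of feature, the snippet below extracts the high-frequency component of an image batch by zeroing out a centered low-frequency region in the Fourier domain. The mask ratio and the FFT-based implementation are assumptions made for exposition, not necessarily the exact operator used in EVP.

```python
import torch

def high_frequency(x, ratio=0.25):
    """Keep only high-frequency content of an image batch (B, C, H, W) by zeroing a
    centered low-frequency square in the Fourier domain (ratio is illustrative)."""
    B, C, H, W = x.shape
    freq = torch.fft.fftshift(torch.fft.fft2(x, norm="ortho"))
    h, w = int(H * ratio / 2), int(W * ratio / 2)
    cy, cx = H // 2, W // 2
    freq[:, :, cy - h:cy + h, cx - w:cx + w] = 0          # remove low frequencies
    return torch.fft.ifft2(torch.fft.ifftshift(freq), norm="ortho").real
```

The resulting high-frequency map can then be embedded by a small trainable module and added to the frozen patch embeddings as the per-image prompt.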
Li_Efficient_Multimodal_Fusion_via_Interactive_Prompting_CVPR_2023
Abstract Large-scale pre-training has brought unimodal fields such as computer vision and natural language process- ing to a new era. Following this trend, the size of multi- modal learning models constantly increases, leading to an urgent need to reduce the massive computational cost of finetuning these models for downstream tasks. In this pa- per, we propose an efficient and flexible multimodal fusion method, namely PMF , tailored for fusing unimodally pre- trained transformers. Specifically, we first present a mod- ular multimodal fusion framework that exhibits high flex- ibility and facilitates mutual interactions among different modalities. In addition, we disentangle vanilla prompts into three types in order to learn different optimizing objec- tives for multimodal learning. It is also worth noting that we propose to add prompt vectors only on the deep layers of the unimodal transformers, thus significantly reducing the training memory usage. Experiment results show that our proposed method achieves comparable performance to several other multimodal finetuning methods with less than 3% trainable parameters and up to 66% saving of training memory usage.
1. Introduction
Recent years have witnessed the great success of large-scale pretrained language models [8, 31, 32] and visual models [6, 10, 23, 39], leading to a surge of pretrained multimodal models [13, 14, 43, 47, 48] trying to align different modalities. Many prior methods utilize finetuning to update the entire set of model parameters for every target cross-modal task. Although finetuning can achieve good performance, it incurs a large computational cost, since the gradients and optimizer states for all parameters of the multimodal model have to be stored. This encourages researchers to propose more parameter-efficient methods than finetuning for multimodal learning.
Figure 1. Comparison over three multimodal classification tasks. We compare our proposed PMF and PMF-Large with multiple finetuning (yellow) and prompt-based (purple) methods. The y-axis is the average score of three tasks, and the x-axis is the maximum GPU memory usage during training.
More recently, prompt tuning [17, 19, 21, 22, 29] has been proposed to address this problem by freezing all parameters of a pretrained model while tuning only continuous prompts. Specifically, it adds trainable continuous prompts to the original token sequences of input data. During training, only the continuous prompts are updated. For multimodal prompt-based learning, a recent method [20] proposes to disentangle the functionality of the pretrained model, which exhibits high flexibility. Although this method significantly reduces the tuned parameters (e.g., less than 0.1% of the pretrained model), there still exists a large performance gap between it and the finetuning-based methods. In addition, this method adopts a sequential modular structure in which the pretrained image transformer is followed by a language transformer, which causes two main problems in cross-modal learning: a one-way learning path and a significant increase in the number of model layers. Specifically, a one-way learning path in the multimodal model usually forces one modality to align with others, but not vice versa. In this way, cross-modal learning based on multiple different modalities is not fully explored due to the missing mutual alignments. Moreover, since the prompts are added to the token sequences of input data and are updated during training, they require extensive gradient calculations in the backward propagation, which costs a large amount of memory. As a result, this kind of method does not reduce the memory usage during training by much (up to 20%), even though it reduces the number of parameters to update. In other words, this parameter-efficient method still requires massive computational resources, which prevents it from being applied to many real-world applications.
To address these issues, we propose a Prompt-based Multimodal Fusion method with high memory efficiency, namely PMF. Firstly, we present a new form of modular multimodal fusion framework which demonstrates high flexibility and facilitates a two-way interaction among different modalities. Specifically, we adopt a two-stream structure where the pretrained language model and image model construct the multimodal model in a parallel way.
There- fore, tokens of different modalities can learn mutual inter- actions through a cross-attention-like operation. Such a par- allel modular structure brings two benefits. First, unimodal pretraining can be directly utilized for multimodal learn- ing through a parallel combination, eliminating the need for paired multimodal datasets that can be expensive to con- struct. Also, the type of image or language model can be changed easily ( e.g., replacing BERT with T5 for text gen- eration tasks). Furthermore, incorporating extra modalities is made possible based on the parallel modular structure. Moreover, we propose to leverage three types of inter- active prompts ( i.e., query prompts, query context prompts, and fusion context prompts) in order to dynamically learn different objectives for multimodal learning. Intuitively, the query context prompt and query prompt can be seen as a pair of ‘questions’ and ‘answers’ with an aim of extracting necessary information for exchange between two modali- ties. After being translated by a non-linear mapping ‘trans- lator’, the ‘answer’ is then delivered to the other modality for better cross-modal understanding. Finally, the fusion context prompts then provide the context to the delivered answer to facilitate the fusion. Last but most importantly, PMF is a memory-efficient method that significantly reduces the memory requirements for the large pretrained model. Considering that calculat- ing gradients for prompts for back-propagation is memory- consuming, we propose to add prompts only on the deep layers of the utilized unimodal transformers. Therefore, in- stead of passing through the entire multimodal model, the backward propagation only needs to pass through the deepfew transformer layers to reach all trainable parameters, greatly reducing the training memory usage. We conduct extensive experiments to demonstrate the superior of it in our experiments. As a result, PMF enables large pretrained models to be trained on the GPU with a low memory re- quirement. We conduct extensive experiments on three vision- language datasets: UPMC-Food101 [38], MM-IMDB [2], and SNLI-VE [41]. Through comparisons with multiple finetuning and prompt tuning methods (see in Fig. 1), we find that: (1) PMF is the most memory-efficient method for cross-modal learning so far, which reduces the train- ing memory usage by up to 66% compared with finetuning baselines, and by 55% compared with prompt-based meth- ods. (2) PMF can perform comparably compared to prior fine-tuning methods with much fewer trainable parameters (less than 2.5%) and memory usage. Concretely, our contributions are as follows: (1) we present a new form of modular multimodal fusion frame- work which enables two-way interactions between differ- ent modalities and high flexibility of the entire model; (2) we disentangle vanilla prompts into three types of prompts, in order to dynamically learn different objectives for multi- modal learning; (3) our proposed method is quite memory- efficient yet is able to achieve comparable performance with existing finetuning methods for multimodal fusion.
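To illustrate how prompts might be attached only to the deep layers of two frozen unimodal transformers, here is a schematic PyTorch sketch. The prompt shapes, the MLP "translator", and the `frozen_block` interface are hypothetical and only meant to convey the query / query-context / fusion-context roles described above.

```python
import torch
import torch.nn as nn

class PMFLayer(nn.Module):
    """One 'deep' fusion layer: learnable prompts are prepended to a frozen unimodal block's
    input; the answered query prompts are translated and handed to the other modality."""
    def __init__(self, dim, n_prompt=4):
        super().__init__()
        self.query = nn.Parameter(torch.randn(1, n_prompt, dim) * 0.02)
        self.query_ctx = nn.Parameter(torch.randn(1, n_prompt, dim) * 0.02)
        self.fusion_ctx = nn.Parameter(torch.randn(1, n_prompt, dim) * 0.02)
        self.translate = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, frozen_block, tokens, other_answer=None):
        B = tokens.size(0)
        extra = [self.query.expand(B, -1, -1), self.query_ctx.expand(B, -1, -1)]
        if other_answer is not None:                       # message delivered from the other modality
            extra += [self.fusion_ctx.expand(B, -1, -1), other_answer]
        x = frozen_block(torch.cat(extra + [tokens], dim=1))
        n = self.query.size(1)
        answer = self.translate(x[:, :n])                  # 'answered' query prompts, translated
        return x[:, -tokens.size(1):], answer              # updated tokens + message to deliver
```

Because only these deep-layer prompts and translators are trainable, backpropagation stops after the last few transformer layers, which is what yields the memory saving.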
Krantz_Iterative_Vision-and-Language_Navigation_CVPR_2023
Abstract We present Iterative Vision-and-Language Naviga- tion (IVLN), a paradigm for evaluating language-guided agents navigating in a persistent environment over time. Ex- isting Vision-and-Language Navigation (VLN) benchmarks erase the agent’s memory at the beginning of every episode, testing the ability to perform cold-start navigation with no prior information. However, deployed robots occupy the same environment for long periods of time. The IVLN paradigm addresses this disparity by training and evaluating VLN agents that maintain memory across tours of scenes that consist of up to 100 ordered instruction-following Room- to-Room (R2R) episodes, each defined by an individual lan- guage instruction and a target path. We present discrete and continuous Iterative Room-to-Room (IR2R) benchmarks comprising about 400 tours each in 80 indoor scenes. We find that extending the implicit memory of high-performing transformer VLN agents is not sufficient for IVLN, but agents that build maps can benefit from environment persistence, motivating a renewed focus on map-building agents in VLN.
1. Introduction
Robots and virtual agents that persistently operate in human spaces like homes should improve over time. For example, a smart vacuum told to clean the living room, which is down the hall past the guest bedroom should learn about both the living room and guest bedroom. Likewise, agents should be able to associate references in past instructions, such as guest bedroom, with spatial and visual information from the environment to understand future instructions.
Most work on language-guided, embodied agents performing navigation [3, 25] or household tasks [38] is episodic in nature: agent memory is erased before issuing each new instruction. In contrast, physical robots build maps [12, 43, 49] iteratively from visual observations [32, 39] as an explicit form of long-term memory. Agents trained to perform language-guided navigation in simulation that are deployed on physical robots [2] fail to take advantage of the mapping-based strategies that facilitate robot navigation.
We propose Iterative Vision-and-Language Navigation (IVLN), in which an agent follows an ordered sequence of language instructions that conduct a tour of an indoor space. Each tour is composed of individual episodes of language instructions with target paths. Agents can utilize memory to better understand future tour instructions. After just 10 episodes an agent has seen on average over 50% of the target path associated with the next language instruction in a tour. While performing an IVLN tour, agents iteratively explore the environment, meaning regions irrelevant to task instructions need not ever be visited. By conditioning exploration on language, IVLN enables rich semantic representations, e.g., unusual, novel, and scene-specific referents grounded during one episode can be reasoned about later.
We explore both a discrete VLN setting based on Room-to-Room [3] episodes and navigation graphs (IR2R) and a continuous simulation VLN-CE [25] setting (IR2R-CE). The markedly different action and visual observation spaces of these settings may require different memory mechanisms. In the discrete setting, agents move on graph edges and observe clear, well-framed images. For IR2R, we extend a state-of-the-art transformer agent [11] that learns an implicit memory based on path history when interpreting instructions. In the continuous setting, agents take motion actions while observing noisy images of a 3D environment reconstructed from discrete panorama images. For IR2R-CE, we propose an agent that builds and interprets an explicit semantic map.
In short, we define Iterative Vision-and-Language Navigation (IVLN), a paradigm for persistent VLN, and release IR2R and IR2R-CE to study discrete and continuous navigation agents in the IVLN setting. We create initial agents for both benchmarks, including explicit mapping and implicit memory models for continuous navigation. Please see jacobkrantz.github.io/ivln for code and more details.
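The protocol difference from episodic VLN can be summarized by the evaluation loop below, where memory is reset once per tour rather than once per episode. The agent interface and the scoring function are placeholders, not the benchmark's actual API.

```python
def evaluate_ivln(agent, tours, score_fn):
    """tours: list of scenes, each an ordered list of episodes with
    .instruction, .start_pose and .target_path attributes (hypothetical)."""
    scores = []
    for tour in tours:
        agent.reset_memory()                       # once per tour, not per episode
        for episode in tour:                       # ordered instruction-following episodes
            path = agent.follow(episode.instruction, start=episode.start_pose)
            scores.append(score_fn(path, episode.target_path))
            agent.update_memory(path)              # e.g., extend a map or path history
    return sum(scores) / len(scores)
```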
Li_Decoupled_Multimodal_Distilling_for_Emotion_Recognition_CVPR_2023
Abstract
Human multimodal emotion recognition (MER) aims to perceive human emotions via language, visual, and acoustic modalities. Despite the impressive performance of previous MER approaches, the inherent multimodal heterogeneities still haunt them, and the contribution of different modalities varies significantly. In this work, we mitigate this issue by proposing a decoupled multimodal distillation (DMD) approach that facilitates flexible and adaptive crossmodal knowledge distillation, aiming to enhance the discriminative features of each modality. Specifically, the representation of each modality is decoupled into two parts, i.e., modality-irrelevant and modality-exclusive spaces, in a self-regression manner. DMD utilizes a graph distillation unit (GD-Unit) for each decoupled part so that each GD can be performed in a more specialized and effective manner. A GD-Unit consists of a dynamic graph where each vertex represents a modality and each edge indicates a dynamic knowledge distillation. Such a GD paradigm provides a flexible knowledge transfer manner where the distillation weights can be automatically learned, thus enabling diverse crossmodal knowledge transfer patterns. Experimental results show that DMD consistently obtains superior performance over state-of-the-art MER methods. Visualization results show that the graph edges in DMD exhibit meaningful distributional patterns w.r.t. the modality-irrelevant/-exclusive feature spaces. Codes are released at https://github.com/mdswyz/DMD.
1. Introduction
Human multimodal emotion recognition (MER) aims to perceive the sentiment attitude of humans from video clips [13, 17]. The video streams involve time-series data from various modalities, e.g., language, acoustic, and vision. This rich multimodality facilitates understanding human behaviors and intents from a collaborative perspective. Recently, MER has become one of the most active research topics of affective computing, with abundant appealing applications such as intelligent tutoring systems [24], product feedback estimation [18], and robotics [15].
Figure 1. (a) illustrates the significant emotion recognition discrepancies using unimodality, adapted from MulT [28]. (b) shows the conventional cross-modal distillation. (c) shows our proposed decoupled multimodal distillation (DMD) method. DMD consists of two graph distillation (GD) units: homogeneous GD and heterogeneous GD. The decoupled GD paradigm decreases the burden of absorbing knowledge from the heterogeneous data and allows each GD to be performed in a more specialized and effective manner.
For MER, different modalities in the same video segment are often complementary to each other, providing extra cues for semantic and emotional disambiguation. The core part of MER is multimodal representation learning and fusion, in which a model aims to encode and integrate representations from multiple modalities to understand the emotion behind the raw data. Despite the achievements of the mainstream MER methods [7, 28, 33], the intrinsic heterogeneities among different modalities still perplex us and increase the difficulty of robust multimodal representation learning. Different modalities, e.g., image, language, and acoustic, convey semantic information in different ways. Typically, the language modality consists of limited transcribed text and has more abstract semantics than nonverbal behaviors. As illustrated in Fig. 1(a), language plays the most important role in MER, and the intrinsic heterogeneities result in significant performance discrepancies among different modalities [25, 28, 31].
One way to mitigate the conspicuous modality heterogeneities is to distill reliable and generalizable knowledge from the strong modality to the weak modality [6], as illustrated in Fig. 1(b). However, such manual assignment of the distillation direction or weights is cumbersome because there are various potential combinations. Instead, the model should learn to automatically adapt the distillation to different examples, e.g., many emotions are easier to recognize via language while some are easier via vision. Furthermore, the significant feature distribution mismatch across the modalities makes direct crossmodal distillation sub-optimal [21, 37].
To this end, we propose a decoupled multimodal distillation (DMD) method to learn dynamic distillations across modalities, as illustrated in Fig. 1(c). Typically, the features of each modality are decoupled into modality-irrelevant and modality-exclusive spaces via a shared encoder and private encoders, respectively. To achieve the feature decoupling, we devise a self-regression mechanism that predicts the decoupled modality features and then regresses them self-supervisedly. To consolidate the feature decoupling, we incorporate a margin loss that regularizes the proximity in relationships of the representations across modalities and emotions. Consequently, the decoupled GD paradigm decreases the burden of absorbing knowledge from the heterogeneous data and allows each GD to be performed in a more specialized and effective manner.
Based on the decoupled multimodal feature spaces, DMD utilizes a graph distillation unit (GD-Unit) in each space so that the crossmodal knowledge distillation can be performed in a more specialized and effective manner. A GD-Unit consists of a graph in which (1) vertices represent the representations or logits from the modalities and (2) edges indicate the knowledge distillation directions and weights. As the distribution gap among the modality-irrelevant (homogeneous) features is sufficiently reduced, GD can be directly applied to capture the inter-modality semantic correlations. For the modality-exclusive (heterogeneous) counterparts, we exploit the multimodal transformer [28] to build the semantic alignment and bridge the distribution gap. The cross-modal attention mechanism in the multimodal transformer reinforces the multimodal representations and reduces the discrepancy between the high-level semantic concepts that exist in different modalities. For simplicity, we name the distillations on the decoupled multimodal features homogeneous graph knowledge distillation (HomoGD) and heterogeneous graph knowledge distillation (HeteroGD), respectively. The reformulation allows us to explicitly explore the interaction between different modalities in each decoupled space.
The contributions of this work can be summarized as:
• We propose a decoupled multimodal distillation framework, Decoupled Multimodal Distillation (DMD), to learn dynamic distillations across modalities for robust MER. In DMD, we explicitly decouple the multimodal representations into modality-irrelevant/-exclusive spaces to facilitate KD on the two decoupled spaces. DMD provides a flexible knowledge transfer manner where the distillation directions and weights can be automatically learned, thus enabling flexible knowledge transfer patterns.
• We conduct comprehensive experiments on public MER datasets and obtain superior or comparable results to the state of the art. Visualization results verify the feasibility of DMD, and the graph edges exhibit meaningful distributional patterns w.r.t. HomoGD and HeteroGD.
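To make the GD-Unit concrete, the sketch below treats each modality's logits as a graph vertex and predicts a distillation weight for every directed edge from the concatenated modality features. The edge MLP, temperature, and KL-based distillation loss are illustrative assumptions rather than the exact DMD objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphDistillationUnit(nn.Module):
    """Vertices are per-modality logits; each directed edge carries a learned distillation weight."""
    def __init__(self, feat_dim, n_mod=3, tau=2.0):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(2 * feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))
        self.n_mod, self.tau = n_mod, tau

    def forward(self, feats, logits):
        # feats: list of n_mod tensors (B, D); logits: list of n_mod tensors (B, C)
        loss = 0.0
        for i in range(self.n_mod):               # student modality i
            for j in range(self.n_mod):           # teacher modality j
                if i == j:
                    continue
                w = torch.sigmoid(self.edge_mlp(torch.cat([feats[i], feats[j]], dim=-1)))  # (B, 1)
                kl = F.kl_div(F.log_softmax(logits[i] / self.tau, dim=-1),
                              F.softmax(logits[j].detach() / self.tau, dim=-1),
                              reduction="none").sum(-1, keepdim=True)
                loss = loss + (w * kl).mean()      # per-sample, edge-weighted distillation
        return loss
```

One such unit would operate on the modality-irrelevant features (HomoGD) and another on the transformer-aligned modality-exclusive features (HeteroGD).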
Kawar_Imagic_Text-Based_Real_Image_Editing_With_Diffusion_Models_CVPR_2023
Abstract
Text-conditioned image editing has recently attracted considerable interest. However, most methods are currently limited to one of the following: specific editing types (e.g., object overlay, style transfer), synthetically generated images, or requiring multiple input images of a common object. In this paper we demonstrate, for the very first time, the ability to apply complex (e.g., non-rigid) text-based semantic edits to a single real image. For example, we can change the posture and composition of one or multiple objects inside an image, while preserving its original characteristics. Our method can make a standing dog sit down, cause a bird to spread its wings, etc. – each within its single high-resolution user-provided natural image. Contrary to previous work, our proposed method requires only a single input image and a target text (the desired edit). It operates on real images, and does not require any additional inputs (such as image masks or additional views of the object). Our method, called Imagic, leverages a pre-trained text-to-image diffusion model for this task. It produces a text embedding that aligns with both the input image and the target text, while fine-tuning the diffusion model to capture the image-specific appearance. We demonstrate the quality and versatility of Imagic on numerous inputs from various domains, showcasing a plethora of high quality complex semantic image edits, all within a single unified framework. To better assess performance, we introduce TEdBench, a highly challenging image editing benchmark. We conduct a user study, whose findings show that human raters prefer Imagic to previous leading editing methods on TEdBench. Project page: https://imagic-editing.github.io/.
1. Introduction
Applying non-trivial semantic edits to real photos has long been an interesting task in image processing [41]. It has attracted considerable interest in recent years, enabled by the considerable advancements of deep learning-based systems. Image editing becomes especially impressive when the desired edit is described by a simple natural language text prompt, since this aligns well with human communication. Many methods were developed for text-based image editing, showing promising results and continually improving [8, 10, 33]. However, the current leading methods suffer from, to varying degrees, several drawbacks: (i) they are limited to a specific set of edits such as painting over the image, adding an object, or transferring style [6, 33]; (ii) they can operate only on images from a specific domain or synthetically generated images [20, 43]; or (iii) they require auxiliary inputs in addition to the input image, such as image masks indicating the desired edit location, multiple images of the same subject, or a text describing the original image [6, 17, 39, 47, 51].
Figure 2. Different target texts applied to the same image. Imagic edits the same image differently depending on the input text.
In this paper, we propose a semantic image editing method that mitigates all the above problems. Given only an input image to be edited and a single text prompt describing the target edit, our method can perform sophisticated non-rigid edits on real high-resolution images. The resulting image outputs align well with the target text, while preserving the overall background, structure, and composition of the original image. For example, we can make two parrots kiss or make a person give the thumbs up, as demonstrated in Figure 1.
Figure 3. Schematic description of Imagic. Given a real image and a target text prompt: (A) We encode the target text and get the initial text embedding e_tgt, then optimize it to reconstruct the input image, obtaining e_opt; (B) We then fine-tune the generative model to improve fidelity to the input image while fixing e_opt; (C) Finally, we interpolate e_opt with e_tgt to generate the final editing result.
Our method, which we call Imagic , provides the first demonstration of text-based semantic editing that ap-plies such sophisticated manipulations to a single real high-resolution image, including editing multiple objects. In ad-dition, Imagic can also perform a wide variety of edits, in- cluding style changes, color changes, and object additions. To achieve this feat, we take advantage of the recent suc- cess of text-to-image diffusion models [ 47,50,53]. Diffu- sion models are powerful state-of-the-art generative models,capable of high quality image synthesis [ 16,22]. When con- ditioned on natural language text prompts, they are able togenerate images that align well with the requested text. Weadapt them in our work to edit real images instead of syn-thesizing new ones. We do so in a simple 3-step process, asdepicted in Figure 3 : We first optimize a text embedding so that it results in images similar to the input image. Then, wefine-tune the pre-trained generative diffusion model (condi-tioned on the optimized embedding) to better reconstructthe input image. Finally, we linearly interpolate betweenthe target text embedding and the optimized one, resultingin a representation that combines both the input image andthe target text. This representation is then passed to the gen-erative diffusion process with the fine-tuned model, whichoutputs our final edited image. We conduct several experiments and apply our method on numerous images from various domains. Our methodoutputs high quality images that both resemble the input image to a high degree, and align well with the targettext. These results showcase the generality, versatility, andquality of Imagic . We additionally conduct an ablation study, highlighting the effect of each element of our method.When compared to recent approaches suggested in the lit-erature, Imagic exhibits significantly better editing qual- ity and faithfulness to the original image, especially whentasked with sophisticated non-rigid edits. This is further supported by a human perceptual evaluation study, whereraters strongly prefer Imagic over other methods on a novelbenchmark called TEdBench – Textual Editing Benchmark. We summarize our main contributions as follows:1. We present Imagic , the first text-based semantic image editing technique that allows for complex non-rigid edits on a single real input image, while preserving its overall structure and composition. 2. We demonstrate a semantically meaningful linear inter- polation between two text embedding sequences, uncov-ering strong compositional capabilities of text-to-imagediffusion models. 3. We introduce TEdBench – a novel and challenging com- plex image editing benchmark, which enables compar-isons of different text-based image editing methods.
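A compressed sketch of the three steps is shown below against a hypothetical diffusion-model interface (`denoise_loss` and `generate` are stand-ins, and the learning rates, step counts, and interpolation coefficient are illustrative rather than the paper's exact settings).

```python
import torch

def imagic(model, text_encoder, image, target_text, alpha=0.7, emb_steps=100, ft_steps=200):
    """model.denoise_loss(image, emb) -> diffusion reconstruction loss (hypothetical);
    model.generate(emb) -> image sampled conditioned on a text embedding (hypothetical)."""
    # (A) optimize the text embedding to reconstruct the input image
    e_tgt = text_encoder(target_text).detach()
    e_opt = e_tgt.clone().requires_grad_(True)
    opt_e = torch.optim.Adam([e_opt], lr=1e-3)
    for _ in range(emb_steps):
        loss = model.denoise_loss(image, e_opt)
        opt_e.zero_grad(); loss.backward(); opt_e.step()
    # (B) fine-tune the diffusion model around the frozen optimized embedding
    opt_m = torch.optim.Adam(model.parameters(), lr=1e-5)
    for _ in range(ft_steps):
        loss = model.denoise_loss(image, e_opt.detach())
        opt_m.zero_grad(); loss.backward(); opt_m.step()
    # (C) interpolate the optimized and target embeddings and generate the edit
    e_edit = alpha * e_tgt + (1 - alpha) * e_opt.detach()
    return model.generate(e_edit)
```

Sweeping the interpolation coefficient trades off fidelity to the input image against alignment with the target text.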
Kwon_Probabilistic_Prompt_Learning_for_Dense_Prediction_CVPR_2023
Abstract Recent progress in deterministic prompt learning has be- come a promising alternative to various downstream vision tasks, enabling models to learn powerful visual representa- tions with the help of pre-trained vision-language models. However, this approach results in limited performance for dense prediction tasks that require handling more complex and diverse objects, since a single and deterministic description cannot sufficiently represent the entire image. In this paper, we present a novel probabilistic prompt learning to fully exploit the vision-language knowledge in dense prediction tasks. First, we introduce learnable class-agnostic attribute prompts to describe universal attributes across the object class. The attributes are combined with class information and visual-context knowl- edge to define the class-specific textual distribution. Text representations are sampled and used to guide the dense prediction task using the probabilistic pixel-text matching loss, enhancing the stability and generalization capability of the proposed method. Extensive experiments on different dense prediction tasks and ablation studies demonstrate the effectiveness of our proposed method.
1. Introduction Dense predictions, e.g., semantic segmentation [3, 31], instance segmentation [30], and object detection [11, 42], are fundamental computer vision problems, which aim to produce pixel-level predictions to thoroughly understand the scene. Due to the expensive cost of collecting dense annotations, most approaches [10, 38] employ a “pre-training + fine-tuning” paradigm. Based on existing pre-trained networks [8, 15] trained on large-scale datasets such as ImageNet [6], semi- [36, 60] or self-supervised learning [29, 53] has been extensively researched to fine-tune additional modules for dense prediction.
Figure 1. Probabilistic Prompt Learning. The proposed PPL exploits N multiple prompts sampled from probabilistic text embeddings, which can leverage granular textual representations, enabling a better understanding of the details of the image.
However, due to the biased visual representations, they suffer from a lack of scalability when there exists a large semantic gap between pre-trained and target tasks w.r.t. dataset and objective, such as transferring ImageNet classification networks to COCO object detection [13, 61]. Recently, applying vision-language pre-trained models (VLM) to the downstream tasks has demonstrated remarkable success, including zero-shot classification [58, 59], referring expression segmentation [45], and object detection [41]. VLM, such as CLIP [40] and ALIGN [21], is trained on large-scale web noisy image-text pair datasets via contrastive learning to align representations between text and image in a joint embedding space. In this way, VLM learns robust visual representations by exploiting the semantic relationship between text and image representations, which is beneficial to transfer knowledge to various vision tasks. To efficiently leverage the language knowledge, there have been many pioneering attempts [16, 22, 46] to deploy VLM to the downstream tasks via prompting. For example, based on a prompt template "a photo of a {class}", it measures the confidence score by calculating image-text similarity and classifies the image into {class}. In practice, however, this hand-crafted prompt may not be the optimal description for a particular task, and furthermore, manually designing a task-specific prompt is laborious. To tackle this issue, several methods [41, 58, 59] have introduced learning-based prompting techniques inspired by early works in NLP [18, 28]. The goal is to automatically construct the prompts according to the task by optimizing continuous prompt variables based on VLM. This simple and intuitive setup, referred to as deterministic prompt learning, is the most popular approach in the current literature [9, 46].
It has shown performance improvement on classification tasks [40, 52], where a single deterministic embedding is sufficient for representing a class. However, this approach is not fully compositional in dense prediction tasks due to a semantic ambiguity problem. Firstly, while the dense prediction tasks require granular information to generate precise pixel-wise results, not only complex and multiple objects within an input image but also their various attributes ( e.g., color, location, etc.) cannot be comprehen- sively represented in a single textual representation [12,49]. Thus, a single prompt fails to comprehend the object of all classes in detail. Second, the visual representation has high randomness [5, 7] due to various contexts with external ob- jects and object representations, and it results in high uncer- tainty in representing in language. For example, as shown in Fig. 1, the image can be described as "A photo of the dog on the sandy beach" , but it can also be expressed differently such as "A photo of the dog near the ocean" . Therefore, it is not appropriate to exploit a deterministically visual representation when trans- ferring VLM in dense prediction tasks. In this paper, we propose a Probabilistic Prompt Learn- ing ( PPL ) that explores learning appropriate textual de- scriptions using visual cues in a probabilistic embedding space. We present a set of prompts that express class- agnostic attributes such as position, size, and color to rep- resent objects without semantic ambiguity. To effectively learn the probabilistic distribution to describe diverse and informative attributes for the entire class, we model each attribute distribution as a distinct normal distribution. To this end, we set its variance as contextual relations between text and visual features to explicitly utilize visual-text in- formation. With these attribute distributions, class-specific attribute distribution is approximated by a Mixture of Gaus- sian (MoG). Furthermore, we introduce a novel probabilis- tic pixel-text matching loss to attenuate the instability ofprediction probability caused by high uncertainty. In summary, our contributions are as follows: (1) We propose a novel PPL to effectively represent class-agnostic attributes of objects in probabilistic text embedding space. To the best of our knowledge, this is the first attempt to leverage context-aware probabilistic prompt learning. (2) We introduce a novel probabilistic pixel-text matching loss to alleviate the adverse impact of uncertainty. (3) We demonstrate the effectiveness of the proposed probabilistic approach through extensive experiments on dense predic- tion tasks, including semantic segmentation, instance seg- mentation, and object detection.
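As a rough illustration of the probabilistic prompting idea described above, the sketch below draws N prompt embeddings from a mixture of attribute Gaussians, fuses them with class embeddings, and averages pixel-text similarities over the samples. The additive fusion, uniform mixture weights, and temperature are assumptions made for this sketch; the paper's context-conditioned variances and exact matching loss are not reproduced here.

```python
import torch
import torch.nn.functional as F

def sample_probabilistic_prompts(attr_mu, attr_logvar, class_emb, n_samples=8):
    """Draw N prompt embeddings per class from a mixture of attribute Gaussians.

    attr_mu:     (A, D) learnable means of A class-agnostic attribute prompts
    attr_logvar: (A, D) log-variances (in the paper these depend on visual context;
                 here they are simply given, which is an assumption of the sketch)
    class_emb:   (C, D) class-name text embeddings
    returns:     (n_samples, C, D) sampled class-specific text embeddings
    """
    A, D = attr_mu.shape
    comp = torch.randint(0, A, (n_samples,))                   # pick one attribute component per sample
    eps = torch.randn(n_samples, D)
    attr_samples = attr_mu[comp] + eps * (0.5 * attr_logvar[comp]).exp()
    # combine sampled attribute with class information (simple additive fusion here)
    return class_emb.unsqueeze(0) + attr_samples.unsqueeze(1)  # (N, C, D)

def probabilistic_pixel_text_logits(pixel_feat, prompt_samples, tau=0.07):
    """Monte-Carlo average of pixel-text similarity over the N sampled prompts.

    pixel_feat:     (HW, D) dense visual features
    prompt_samples: (N, C, D) sampled prompt embeddings
    returns:        (HW, C) logits usable in a pixel-wise cross-entropy loss
    """
    pixel_feat = F.normalize(pixel_feat, dim=-1)
    prompts = F.normalize(prompt_samples, dim=-1)
    sims = torch.einsum('pd,ncd->npc', pixel_feat, prompts) / tau
    return sims.mean(dim=0)                                    # average over the N samples
```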
Liu_GEN_Pushing_the_Limits_of_Softmax-Based_Out-of-Distribution_Detection_CVPR_2023
Abstract Out-of-distribution (OOD) detection has been exten- sively studied in order to successfully deploy neural net- works, in particular, for safety-critical applications. More- over, performing OOD detection on large-scale datasets is closer to reality, but is also more challenging. Sev- eral approaches need to either access the training data for score design or expose models to outliers during train- ing. Some post-hoc methods are able to avoid the afore- mentioned constraints, but are less competitive. In this work, we propose Generalized ENtropy score (GEN), a simple but effective entropy-based score function, which can be applied to any pre-trained softmax-based classi- fier. Its performance is demonstrated on the large-scale ImageNet-1k OOD detection benchmark. It consistently improves the average AUROC across six commonly-used CNN-based and visual transformer classifiers over a num- ber of state-of-the-art post-hoc methods. The average AU- ROC improvement is at least 3.5%. Furthermore, we used GEN on top of feature-based enhancing methods as well as methods using training statistics to further improve the OOD detection performance. The code is available at: https://github.com/XixiLiu95/GEN .
1. Introduction In order to make the usage of deep learning methods in real-world applications safer, it is crucial to distinguish whether an input at test time is a valid in-distribution (ID) sample or a previously unseen out-of-distribution (OOD) sample. Thus, a trained deep neural network (DNN) should ideally know what it does not know [30]. This ability is particularly important for high-stake applications in autonomous driving [8] and medical image analysis [35]. However, it is common for neural networks to make over-confident predictions even for OOD samples. A recent survey on OOD detection [45] identifies several scenarios requiring the detection of OOD samples, with covariate shift (change in the input distribution) and semantic shift (change in the label distribution) being two important settings.
Figure 1. Performance of Post-hoc OOD Detection Methods Applied to 6 Classifiers Trained on ImageNet-1K. Reported are AUROC values (%) averaged over the models. Methods marked with light squares use information from logits / probabilities. Methods marked with dark crosses also use information from features. ReAct∗ corresponds to performing extra feature clipping before computing the score.
In this work, we focus on the semantic shift scenario, meaning that we aim to detect inputs with semantic labels not present in the training set. When solving the OOD detection problem, the idea is to design a scalar score function of a data sample as an argument that assigns higher values to true ID samples. The semantic shift scenario also allows us to mainly focus on the predictive distribution as provided by a DNN classifier to design such score function. A number of existing works for OOD detection rely on the predictive distribution [14, 28], but often a better OOD detection performance can be achieved when also incorporating feature statistics for ID data [13, 24, 37, 38, 43]. These high-performing methods have practical constraints that can be challenging to eliminate: some methods require access to at least a portion of training data [13, 24, 37, 43] while others need access to internal feature activations [38]. However, commercially deployed DNNs are often black-box classifiers, and the training data is likely to be confidential.
Figure 2. Score Distributions. The top row is GEN, and the bottom one is Energy [28]. The distributions are shown for the ID ImageNet-1K dataset (dark blue) and four OOD datasets (light blue): OpenImage-O, Textures, iNaturalist, and ImageNet-O. The classification model used here is Swin [29].
Hence, the goal of this work is to explore and push the limits of OOD detection when the output of a softmax layer is the only available source of information. Our method therefore falls under the post-hoc category of OOD detection frameworks, where only a trained DNN is used without the need for training data. Fig. 1 highlights its performance compared to other methods in this category. Contribution. We propose GEN, a simple but effective entropy-based method for OOD detection. (i) GEN uses predictive distribution only. It does not require re-training and/or outlier exposure, it does not use any training data statistics. (ii) Yet it performs very well (see Figs. 1 and 2), meaning that it can potentially be used in more constrained model deployment scenarios. Compared to other post-hoc methods, score distributions produced by GEN lead to a better ID/OOD separation. We show that our method consistently achieves better results in terms of AUROC (and usually in terms of FPR95) compared to other post-hoc methods. In particular, GEN on average outperforms other post-hoc methods on the largest and carefully constructed OOD dataset OpenImage-O as well as on the very challenging ImageNet-O dataset based on natural adversarial examples.
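Since GEN only needs the softmax output, a post-hoc score of this kind can be written in a few lines. The sketch below implements one member of the generalized-entropy family computed over the top-M probabilities, next to the MSP and energy baselines; the specific exponent, truncation, and sign convention are illustrative assumptions of this sketch rather than the paper's exact definition.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    """Maximum softmax probability baseline: higher = more in-distribution."""
    return softmax(logits).max(axis=-1)

def energy_score(logits):
    """Energy baseline: log-sum-exp of the logits (higher = more ID)."""
    return np.log(np.exp(logits).sum(axis=-1))

def generalized_entropy_score(logits, gamma=0.1, top_m=100):
    """Post-hoc score from the predictive distribution only.
    The form -sum_j p_j**gamma * (1 - p_j)**gamma over the top-M probabilities
    is one member of the generalized-entropy family, used here as an
    illustrative stand-in; gamma and top_m are assumed values.  A sharper
    distribution has lower generalized entropy, so the sum is negated to keep
    the 'higher score = more in-distribution' convention."""
    p = softmax(logits)
    p_top = np.sort(p, axis=-1)[..., -top_m:]
    g = np.power(p_top, gamma) * np.power(1.0 - p_top, gamma)
    return -g.sum(axis=-1)

# toy check: a confident (sharp) prediction should score higher than a flat one
sharp = np.array([[10.0] + [0.0] * 999])
flat = np.zeros((1, 1000))
assert generalized_entropy_score(sharp) > generalized_entropy_score(flat)
```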
Kim_Re-Thinking_Federated_Active_Learning_Based_on_Inter-Class_Diversity_CVPR_2023
Abstract Although federated learning has made awe-inspiring ad- vances, most studies have assumed that the client’s data are fully labeled. However, in a real-world scenario, ev- ery client may have a significant amount of unlabeled in- stances. Among the various approaches to utilizing un- labeled data, a federated active learning framework has emerged as a promising solution. In the decentralized set- ting, there are two types of available query selector models, namely ‘global’ and ‘local-only’ models, but little litera- ture discusses their performance dominance and its causes. In this work, we first demonstrate that the superiority of two selector models depends on the global and local inter- class diversity. Furthermore, we observe that the global and local-only models are the keys to resolving the imbalance of each side. Based on our findings, we propose LoGo, a FAL sampling strategy robust to varying local heterogeneity lev- els and global imbalance ratio, that integrates both models by two steps of active selection scheme. LoGo consistently outperforms six active learning strategies in the total num- ber of 38 experimental settings. The code is available at: https://github.com/raymin0223/LoGo .
1. Introduction Federated learning (FL) is a distributed framework that allows multiple parties to learn a unified deep learning model cooperatively with preserving the privacy of the local client [23,28,31]. Typically, FL has been actively studied in a standard supervised learning setting, where all the train- ing instances are labeled, but it is more realistic for each client to contain both labeled and unlabeled data due to the high labeling cost [29, 48]. Here, active learning (AL) can be a promising solution to improve the performance of a cooperated model with the pool of unlabeled data. In practice, federated active learning (FAL) framework has re- cently attempted to bridge two different philosophies in FL and AL [3,4]. As illustrated in Figure 1-(a) FAL framework alternates an FL procedure (red line) of training a predictive *equal contribution †corresponding authorsmodel collaboratively through local updates and aggrega- tion phases, and an AL procedure (green line) of querying and annotating informative instances separately per client. Although the overall framework just appears to be a straightforward fusion of two research fields, FL factors in- troduce two major challenges to the AL procedure. First , the class imbalance of the local dataset originates from heterogeneous distribution across local clients [15, 23, 28]. Hence, in the FAL framework, the active selection algo- rithm has to ensure inter-class diversity from both local and global perspectives. Second , there are two available types of query-selecting models, a global model, which is globally optimized through the FL pipeline, and a local- only model [9,33], which can be separately trained only for each client. In the query selection phase, the global model can leverage the aggregated knowledge of all clients, while the local-only model is able to detect the most valuable in- stances for the local updates. Prior FAL literature [3, 4], which simply adapt conven- tional AL strategies, had little discussion on these chal- lenges. As our first contribution, we found the significant performance gap between two types of query selector (see Figure 1-(b)), and it is the first study to solve a conundrum of dominance trend by introducing two indicators of inter- class diversity1–local heterogeneity level (α) and global im- balance ratio (ρ). The first indicator αis the concentration parameter of Dirichlet distribution, commonly seen in the FL literature [1, 15, 23], resulting in more locally imbal- anced class distribution at lower values. Besides, ρindica- tor is the ratio of class imbalance for the aggregated global data of all clients [7, 21]. We discovered three meaningful insights on selector dominance: (Obs. 1) Interestingly, the superiority of the two selectors varies depending on the two indicators of inter-class diversity, αandρ.(Obs. 2) When local heterogeneity is severe ( αis low), a local-only model is preferred for weighing minority instances for each client, and(Obs. 3) when globally minor classes exist ( ρis high), the knowledge of the entire data distribution, inherent in a global model, is more essential. ▷See Section 4 1We use the term inter-class diversity interchangeably with class bal- ance throughout the paper. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 
Figure 1. Motivation: (a) Federated Active Learning framework. The red and green lines correspond to the conventional FL and AL framework. We focus on a sampling strategy for FAL (green box) with the considerations of hierarchy structure and two available query-selecting models. (b) Superiority change of query selector across datasets (CIFAR-10, MNIST, SVHN, FMNIST, and several MedMNIST variants) plotted against the imbalance ratio (ρ). For a fixed querying strategy (Entropy sampling), the performance gap occurs only by changing the query selector. The y-axis is the gap in the winning rate over total active rounds (refer to Section 4); values closer to 1 indicate that global models outperform local-only models, and -1 the opposite.
In a real FAL scenario, the superiority of the two query models for a given dataset cannot be known in advance due to privacy preservation. Therefore, as our second contribution, we design a simple yet effective FAL querying approach, LoGo, that simultaneously leverages local-only and global models, to be robust to varying heterogeneity levels and global imbalance ratios. LoGo is a clustering-based sampling strategy, which is composed of macro and micro steps exploiting local-only and global models, respectively. The rationale behind our method is that the optimal querying policy needs to evaluate the informativeness of instances with both models, which implicitly learn local and global data distribution, respectively. In the macro step, to improve the local inter-class diversity first, we perform k-means clustering [30] in a hallucinated gradient space generated from local-only models. Then, in the micro step, the final query set is determined via one step of the EM algorithm [10], forming cluster boundaries using instances from the macro step (E-step) and performing cluster-wise sampling with the global model (M-step). The proposed cluster-wise sampling conservatively preserves the diversity information of the macro step, i.e., the local inter-class diversity obtained by local-only models, while also considering the globally minority classes via the global model. ▷ See Section 5
As our third contribution, we conduct a total of 38 experiments on five datasets using seven AL strategies including our LoGo algorithm. To verify the superiority of our method in real-world scenarios, we build comprehensive combinations of six categories, including query selector types (local-only vs. global models), local heterogeneity levels (α ∈ [0.1, ∞)), global imbalance ratios (ρ ∈ [1, 58]), model architectures, budget sizes, and model initialization schemes. As a result, the experimental results empirically confirm our three observations (Obs. 1–3). Besides, our method outperforms all other AL baselines and naïve implementations of an ensemble of two query selectors in extensive experimental settings. ▷ See Section 6
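The macro/micro structure described above can be sketched as follows for a single client. Scikit-learn's KMeans stands in for the clustering, and a per-cluster maximum-entropy pick under the global model stands in for the paper's one-step EM update; both simplifications, and the use of generic per-sample embeddings in place of hallucinated gradients, are assumptions of this sketch.

```python
import numpy as np
from sklearn.cluster import KMeans

def entropy(probs, eps=1e-12):
    return -(probs * np.log(probs + eps)).sum(axis=-1)

def logo_query(grad_local, probs_global, budget):
    """Two-step LoGo-style query selection for one client (simplified sketch).

    grad_local:   (N, D) per-sample embeddings obtained with the local-only model
    probs_global: (N, C) softmax predictions of the global model
    budget:       number of unlabeled samples to annotate this round
    returns:      indices of the selected unlabeled samples
    """
    # Macro step: cluster in the local-only embedding space so that the query
    # set covers locally diverse regions (local inter-class diversity).
    km = KMeans(n_clusters=budget, n_init=10).fit(grad_local)
    labels = km.labels_

    # Micro step: inside each cluster, pick the instance the *global* model is
    # most uncertain about, injecting knowledge of the aggregated distribution.
    selected = []
    for c in range(budget):
        members = np.where(labels == c)[0]
        if len(members) == 0:
            continue
        scores = entropy(probs_global[members])
        selected.append(members[scores.argmax()])
    return np.array(selected)
```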
Kim_Bridging_the_Gap_Between_Model_Explanations_in_Partially_Annotated_Multi-Label_CVPR_2023
Abstract Due to the expensive costs of collecting labels in multi- label classification datasets, partially annotated multi-label classification has become an emerging field in computer vi- sion. One baseline approach to this task is to assume unob- served labels as negative labels, but this assumption induces label noise as a form of false negative. To understand the negative impact caused by false negative labels, we study how these labels affect the model’s explanation. We observe that the explanation of two models, trained with full and partial labels each, highlights similar regions but with dif- ferent scaling, where the latter tends to have lower attribu- tion scores. Based on these findings, we propose to boost the attribution scores of the model trained with partial labels to make its explanation resemble that of the model trained with full labels. Even with the conceptually simple approach, the multi-label classification performance improves by a large margin in three different datasets on a single pos- itive label setting and one on a large-scale partial label setting. Code is available at https://github.com/ youngwk/BridgeGapExplanationPAMC .
1. Introduction Multi-label image classification is the task of predict- ing all labels corresponding to a given image. Since web- crawled images often contain multiple objects/concepts [3, 32, 35, 44], the importance of this task is rising. How- ever, it faces a significant issue of huge annotation costs. We need C binary labels for each training image to provide ex- haustive annotation for a model that classifies images into C categories. It acts as a severe obstacle to scaling multi-label classification datasets. For this reason, partially annotated multi-label classifi- cation [2, 11, 13, 17, 21, 24] has recently become an actively *Corresponding author. Figure 1. CAM Observation. We compare the class activation map (CAM) output from two multi-label classification models: one trained with full labels ( CAM full) and the other trained with partial labels and AN assumption ( CAM partial). We observe that the overall structure of CAM partial is not much affected by the noisy false negative labels during training. This observation motivates us to make CAM partial similar to CAM fullby boosting its relatively large attribution scores. Best viewed in color. studied topic. In this setting, instead of exhaustive annota- tion, only a few categories are labeled for each training im- age. We can effectively reduce the burden of annotation by adopting partial annotation strategies. One baseline approach for solving a partially annotated multi-label classification task is assuming unobserved la- bels as negative labels (Assume Negative, AN) [4,6,36,40]. It is a reasonable assumption since most labels are nega- tive labels in the multi-label scenario [33]. However, this assumption causes label noise in a form of false negatives since the actually positive but unannotated labels are incor- This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 3408 rectly assumed to be negative. Since this label noise per- turbs the learning process of the model [1, 7, 18, 45], recent studies on a partially annotated multi-label classification fo- cus on suppressing the influence of label noise by ignoring or correcting the loss of samples that are likely to be false negatives [2, 21]. Aside from recent research directions, we delve into “how” false negative labels influence a multi-label classi- fication model. We conduct control experiments with two models. One is the model trained with partial labels and AN assumption where false negative labels exist. The other is the model trained with full annotations and thus trained without false negatives. We compare the class activation map (CAM) [49] output between the two models to see the difference in how each model understands the input image and makes a prediction result. Figure 1 shows that a model trained with false negatives still highlights similar regions to one trained with full an- notation. However, the attribution scores in the highlighted areas are much small. This observation leads us to think that if we scale up the damaged score of the highlighted region in the model trained with false negatives, the explanation of this model will become similar to that of the model trained with full annotation. 
To this end, we introduce a simple piece-wise linear function, named BoostLU, that bridges the gap between the explanation of two models trained with false negatives and with full annotation each. Concretely, we use the mod- ified CNN model to get CAM during the forward pass di- rectly [47], and the logit in the modified CNN model is the mean of attribution scores of CAM. The BoostLU function is applied element-wisely to the CAM output of the mod- ified CNN to boost the scores of the highlighted regions, thereby compensating for the decrease of attribution scores in CAM caused by false negatives. It increases the logit value for positive labels and thus makes a better prediction. Furthermore, when we combine BoostLU with the recently proposed methods [21] that explicitly detect and modify false negatives during training, it helps to detect false neg- atives better, thus leading to better performance. As a re- sult, we achieve state-of-the-art performance on PASCAL VOC [14], MS COCO [28], NUSWIDE [10], and Openim- ages V3 [23] datasets in a partial label setting. We summarize the contributions of this paper as follows. 1. We analyze how the false negative labels affect the ex- planation of the model in a partially annotated multi- label classification scenario. 2. We propose a simple but effective function, named BoostLU, that compensates for the damage of false negatives in a multi-label classification model with lit- tle extra computational cost. 3. When applied during inference, BoostLU boosts thebaseline method (AN)’s test performance without ad- ditional training. 4. Combined with recent methods of detecting and mod- ifying false negatives during training, BoostLU boosts the state-of-the-art performance on single positive and large-scale partial label settings.
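A piece-wise linear boost of the kind described above can be expressed in one line. The particular form below, which scales positive CAM values by a factor alpha and leaves non-positive values untouched, and the choice of alpha are assumptions made for illustration; the paper's exact function may differ.

```python
import torch

def boost_lu(cam, alpha=5.0):
    """Piece-wise linear boosting of CAM attribution scores (a sketch).
    Positive activations are amplified by `alpha`; non-positive activations
    pass through unchanged.  alpha=5.0 is an assumed value."""
    return torch.where(cam > 0, alpha * cam, cam)

def logits_from_cam(cam):
    """In the modified CNN described above, the logit of each class is the
    spatial mean of its (boosted) class activation map."""
    return boost_lu(cam).mean(dim=(-2, -1))   # (B, C, H, W) -> (B, C)
```

Because the logit is the spatial mean of the CAM in the modified network, boosting the highlighted regions directly raises the logits of the corresponding positive classes, which is the intended compensation for the score suppression caused by false negatives.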
Lei_PyramidFlow_High-Resolution_Defect_Contrastive_Localization_Using_Pyramid_Normalizing_Flow_CVPR_2023
Abstract During industrial processing, unforeseen defects may arise in products due to uncontrollable factors. Although unsupervised methods have been successful in defect local- ization, the usual use of pre-trained models results in low- resolution outputs, which damages visual performance. To address this issue, we propose PyramidFlow, the first fully normalizing flow method without pre-trained models that enables high-resolution defect localization. Specifically, we propose a latent template-based defect contrastive lo- calization paradigm to reduce intra-class variance, as the pre-trained models do. In addition, PyramidFlow utilizes pyramid-like normalizing flows for multi-scale fusing and volume normalization to help generalization. Our com- prehensive studies on MVTecAD demonstrate the proposed method outperforms the comparable algorithms that do not use external priors, even achieving state-of-the-art perfor- mance in more challenging BTAD scenarios.
1. Introduction Due to the uncontrollable factors in the complex in- dustrial manufacturing process, unforeseen defects will be brought to products inevitably. As the human visual system has the inherent ability to perceive anomalies [25], quality control relies on manual inspection for a long time. How- ever, large-scale images and tiny defects are challenging for manual inspection, so increasing research is focused on au- tomated machine vision inspection. Among all the meth- ods, supervised deep learning has achieved great success. It relies on annotated datasets to learn discriminative fea- tures, effectively overcoming the hand-crafted shortcom- ings. However, because of insufficient negative samples, the high demand for labels, and the absence of prior knowl- edge, those approaches based on supervised learning may I (a)reconstruction-based methodNF (b)anomaly-based methodBackbone (c) PyramidFlow I NF dzdx zI I IFigure 1. Illustration of various anomaly localization methods. (a)Reconstruction-based method. (b)Anomaly-based method, where NFdenotes normalizing flow. (c)Our PyramidFlow, which combines latent templates and normalizing flow, enables high- resolution localization. suffer in identifying unseen defects in practices, Recently, unsupervised methods have been applied to defect detection, as shown in Fig. 1(a,b). Reconstruction- based methods [4, 15, 23, 29] are the most famous, which take reconstructed images as templates and then apply ex- plicit contrast in image space to achieve high-resolution lo- calization. However, reconstructing using decoders is an ill-posed inverse problem, it is hard to reconstruct com- plex details. To overcome the above limitations, anomaly- based methods [6, 7] utilizing texture-aware pre-trained models achieves high image-level performance, which also damages pixel-level visual performance. One of the most promising methods is convolutional normalizing flows [10, 22, 27], which models the probability distribution further from pre-trained features, earning higher performance. In this paper, a Pyramid Normalizing Flow (Pyramid- Flow) is proposed. It develops the idea of templates from image space into latent space by normalizing flow, then performing contrast ∆zdfor high-resolution anomaly lo- calization, as shown in Fig. 1(c). Specifically, we propose the multi-scale Pyramid Coupling Block, which includes This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 14143 invertible pyramid and volume normalization, as the crit- ical module to construct volume-preserving PyramidFlow. To the best of our knowledge, PyramidFlow is the first UNet-like fully normalizing flow specifically designed for anomaly localization, analogous to UNet [19] for biomed- ical image segmentation. Our main contributions can be summarized as follows. • We propose a latent template-based defect contrastive localization paradigm. Similar to the reconstruction- based methods, we perform contrast localization in la- tent space, which avoids the ill-posedness and reduces intra-classes variance efficiently. • We propose PyramidFlow, which includes invertible pyramids and pyramid coupling blocks for multi-scale fusing and mapping, enabling high-resolution defect localization. Additionally, we propose volume normal- ization for improving generalization. 
• We conduct comprehensive experiments to demon- strate that our advanced method outperforms compa- rable algorithms that do not use external priors, and even achieves state-of-the-art performance in complex scenarios.
Liu_Unsupervised_Continual_Semantic_Adaptation_Through_Neural_Rendering_CVPR_2023
Abstract An increasing amount of applications rely on data- driven models that are deployed for perception tasks across a sequence of scenes. Due to the mismatch between training and deployment data, adapting the model on the new scenes is often crucial to obtain good performance. In this work, we study continual multi-scene adaptation for the task of semantic segmentation, assuming that no ground-truth la- bels are available during deployment and that performance on the previous scenes should be maintained. We propose training a Semantic-NeRF network for each scene by fusing the predictions of a segmentation model and then using the view-consistent rendered semantic labels as pseudo-labels to adapt the model. Through joint training with the seg- mentation model, the Semantic-NeRF model effectively en- ables 2D-3D knowledge transfer. Furthermore, due to its compact size, it can be stored in a long-term memory and subsequently used to render data from arbitrary viewpoints to reduce forgetting. We evaluate our approach on Scan- Net, where we outperform both a voxel-based baseline and a state-of-the-art unsupervised domain adaptation method. *Authors share first authorship.†Authors share senior authorship.
1. Introduction Data-driven models trained for perception tasks play an increasing role in applications that rely on scene under- standing, including, e.g., mixed reality and robotics. When deploying these models on real-world systems, however, mismatches between the data used for training and those en- countered during deployment can lead to poor performance, prompting the need for an adaptation of the models to the new environment. Oftentimes, the supervision data required for this adaptation can only be obtained through a labori- ous labeling process. Furthermore, even when such data are available, a na ¨ıve adaptation to the new environment results in decreased performance on the original training data, a phenomenon known as catastrophic forgetting [19, 23]. In this work, we focus on the task of adapting a seman- tic segmentation network across multiple indoor scenes, un- der the assumption that no labeled data from the new envi- ronment are available. Similar settings are explored in the literature in the areas of unsupervised domain adaptation (UDA) [23, 38] and continual learning (CL) [19]. How- ever, works in the UDA literature usually focus on a sin- gle source-to-target transfer where the underlying assump- This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 3031 tion is that the data from both the source and the target domain are available all at once in the respective training stage, and often study the setting in which the knowledge transfer happens between a synthetic and a real environ- ment [6, 31, 32, 38]. On the other hand, the CL commu- nity, which generally explores the adaptation of networks across different tasks , has established the class-incremental setting as the standard for semantic segmentation, in which new classes are introduced across different scenes from the same domain and ground-truth supervision is provided [23]. In contrast, we propose to study network adaptation in a set- ting that more closely resembles the deployment of seman- tic networks on real-world systems. In particular, instead of assuming that data from a specific domain are available all at once, we focus on the scenario in which the network is sequentially deployed in multiple scenes from a real-world indoor environment (we use the ScanNet dataset [7]), and therefore has to perform multiple stages of adaptation from one scene to another. Our setting further includes the possi- bility that previously seen scenes may be revisited. Hence, we are interested in achieving high prediction accuracy on each new scene, while at the same time preserving perfor- mance on the previous ones. Note that unlike the better explored class-incremental CL, in this setting we assume a closed set of semantic categories, but tackle the covariate shift across scenes without the need for ground-truth labels. We refer to this setting as continual semantic adaptation . In this work, we propose to address this adaptation problem by leveraging advances in neural rendering [27]. Specifically, in a similar spirit to [11], when deploying a pre-trained network in a new scene, we aggregate the se- mantic predictions from the multiple viewpoints traversed by the agent into a 3D representation, from which we then render pseudo-labels that we use to adapt the network on the current scene. 
However, instead of relying on a voxel- based representation, we propose to aggregate the predic- tions through a semantics-aware NeRF [27, 48]. This for- mulation has several advantages. First, we show that us- ing NeRFs to aggregate the semantic predictions results in higher-quality pseudo-labels compared to the voxel-based method of [11]. Moreover, we demonstrate that using these pseudo-labels to adapt the segmentation network results in superior performance compared both to [11] and to the state-of-the-art UDA method CoTTA [43]. An even more interesting insight, however, is that due the differentiabil- ity of NeRF, we can jointly train the frame-level semantic network and the scene-level NeRF to enforce similarity be- tween the predictions of the former and the renderings of the latter. Remarkably, this joint procedure induces better performance of both labels, showing the benefit of mutual 2D-3D knowledge transfer. A further benefit of our method is that after adapting to a new scene, the NeRF encoding the appearance, geometryand semantic content for that scene can be compactly saved in long-term storage, which effectively forms a “memory bank” of the previous experiences and can be useful in re- ducing catastrophic forgetting. Specifically, by mixing pairs of semantic and color NeRF renderings from a small num- ber of views in the previous scenes and from views in the current scene, we show that our method is able to outper- form both the baseline of [11] and CoTTA [43] on the adap- tation to the new scene and in terms of knowledge reten- tion on the previous scenes. Crucially, the collective size of the NeRF models is lower than that of the explicit replay buffer required by [11] and of the teacher network used in CoTTA [43] up to several dozens of scenes. Additionally, each of the NeRF models stores a potentially infinite num- ber of views that can be used for adaptation, not limited to the training set as in [11], and removes the need to explicitly keep color images and pseudo-labels in memory. In summary, the main contributions of our work are the following: (i) We propose using NeRFs to adapt a semantic segmentation network to new scenes. We find that enforc- ing 2D-3D knowledge transfer by jointly adapting NeRF and the segmentation network on a given scene results in a consistent performance improvement; (ii) We address the problem of continually adapting the segmentation network across a sequence of scenes by compactly storing the NeRF models in a long-term memory and mixing rendered images and pseudo-labels from previous scenes with those from the current one. Our approach allows generating a potentially infinite number of views to use for adaptation at constant memory size for each scene; (iii) Through extensive exper- iments, we show that our method achieves better adapta- tion and performance on the previous scenes compared both to a recent voxel-based method that explored a similar set- ting [11] and to a state-of-the-art UDA method [43].
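One way to read the adaptation-and-replay loop described above is sketched below: pseudo-labels for the current scene come from its Semantic-NeRF renderings, and with some probability a stored NeRF from a previous scene is asked to render an arbitrary viewpoint instead. The render/sample_pose interface, the mixing ratio, and the plain cross-entropy objective are assumptions of this sketch; the joint NeRF-segmentation training described above is omitted.

```python
import random
import torch
import torch.nn.functional as F

def adaptation_step(seg_model, optimizer, current_batch, memory_nerfs, mix_ratio=0.5):
    """One adaptation step mixing current-scene pseudo-labels with replayed
    renderings from previously stored per-scene NeRFs (simplified sketch).

    current_batch: (rgb, pseudo_label) rendered by the current scene's
                   Semantic-NeRF for one training viewpoint
    memory_nerfs:  list of frozen per-scene NeRF models; each is assumed to
                   expose .sample_pose() and .render(pose) -> (rgb, label)
    """
    rgb, label = current_batch
    if memory_nerfs and random.random() < mix_ratio:
        # Replay: render an arbitrary viewpoint from a randomly chosen past scene.
        nerf = random.choice(memory_nerfs)
        rgb, label = nerf.render(nerf.sample_pose())
    logits = seg_model(rgb.unsqueeze(0))                 # (1, C, H, W)
    loss = F.cross_entropy(logits, label.unsqueeze(0))   # pseudo-label supervision
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```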
Liu_Slimmable_Dataset_Condensation_CVPR_2023
Abstract Dataset distillation, also known as dataset condensation, aims to compress a large dataset into a compact synthetic one. Existing methods perform dataset condensation by as- suming a fixed storage or transmission budget. When the budget changes, however, they have to repeat the synthe- sizing process with access to original datasets, which is highly cumbersome if not infeasible at all. In this paper, we explore the problem of slimmable dataset condensation, to extract a smaller synthetic dataset given only previous condensation results. We first study the limitations of exist- ing dataset condensation algorithms on such a successive compression setting and identify two key factors: (1) the in- consistency of neural networks over different compression times and (2) the underdetermined solution space for syn- thetic data. Accordingly, we propose a novel training ob- jective for slimmable dataset condensation to explicitly ac- count for both factors. Moreover, synthetic datasets in our method adopt a significance-aware parameterization. The- oretical derivation indicates that an upper-bounded error can be achieved by discarding the minor components with- out training. Alternatively, if training is allowed, this strat- egy can serve as a strong initialization that enables a fast convergence. Extensive comparisons and ablations demon- strate the superiority of the proposed solution over existing methods on multiple benchmarks.
1. Introduction The success of deep learning is largely attributed to the enormous amount of training data [5, 8, 12, 15, 21, 36, 37, 41, 50]. However, the massive data not only inevitably introduces heavy burdens on storage and transmission but also incommodes many applications that require training over datasets multiple times, such as hyper-parameter optimization [3, 9, 16, 29, 30] and neural architecture search [10, 24, 43, 49]. Moreover, it raises concerns on privacy and copyright if raw datasets are published and accessed directly [7, 40, 51].
Figure 1. Scenarios where slimmable dataset condensation is useful: (a) adapting to devices with different storage budgets, (b) continual learning using a synthetic buffer with a fixed size, and (c) federated learning with synthetic data where a dynamic number of participants share the network bandwidth.
These issues can be largely alleviated by using smaller datasets containing only a few synthetic samples but with performance similar to the original ones. How to compress a given real dataset into such a synthetic dataset is the main focus of dataset distillation, also known as dataset condensation (DC), whose concept is introduced by Wang et al. [45] and further developed by a series of following works recently [2, 17, 28, 33, 34, 44, 54, 55, 57, 58]. Specifically, existing DC approaches work under a pre-defined storage budget, e.g., the number of images per class. Although it has been demonstrated that most performance of original datasets can be recovered by the synthetic ones with only a few synthetic samples, the fixed storage budget does not take the variations of the storage budget into consideration. Some examples are shown in Fig. 1. On the one hand, different devices may have different storage and transmission resources. On the other hand, in applications like continual learning [4, 31, 32, 38, 39, 46, 52] and federated learning [11, 13, 19, 25, 26, 42, 48, 59], the storage and transmission budgets may change on different occasions, since a replay buffer with a static memory size needs to account for more and more historical data, and the bandwidth allocated to each participant is smaller with the increasing number of participants, respectively. In these scenarios where it is necessary to adapt to different capacities on storage and transmission, or the budget is changed, existing algorithms have to repeat the synthesizing process for the newly-defined budget with access to original datasets, which is largely cumbersome if not infeasible at all due to the lack of original data. In this paper, we phrase the task of re-condensing a synthetic dataset, derived from dataset distillation per se, as slimmable dataset condensation (slimmable DC). In fact, it even remains unclear whether a valid synthetic dataset can be re-condensed from only previously condensed samples.
Unfortunately, we find that the answer is negative for exist- ing state-of-the-art methods [28, 33, 34, 58]. The basic idea of these methods is to optimize the validation error on real datasets for models trained by synthetic ones. Although the solution is effective for the original DC setting, it is not the case for slimmable DC. Specifically, we reveal that since the synthetic data for re-condensation are much less than the original ones, existing methods suffer from two main is- sues: (1) the performance is sensitive to the inconsistency of neural networks adopted on different occasions of compres- sion, and (2) solution spaces for re-condensed datasets be- come underdetermined, which triggers deviations in train- ing results and further leads to inferior performances. To address these drawbacks, we propose to explicitly regulate the consistency between the training effects using synthetic datasets before and after a condensation step for slimmable DC. Specifically, the proposed objective is com- posed of two terms: first-order and infinity-order parame- ter matching, which are designed to explicitly account for the two aforementioned issues. The former encourages a unified embedding space over different training iterations, while the latter enforces the consistency of final training pa- rameters in such a space. Optimized with the proposed ob- jective function, we achieve favorable results for slimmable DC: the performance of a further condensed dataset from a previously condensed one effectively approaches that ob- tained with access to the real dataset. Moreover, for an efficient slimming procedure, we ex- plore a significance-aware synthetic dataset parameteriza- tion, which explicitly embeds a linear space with orthogonal bases and askew-distributed singular values during training. Theoretical derivation indicates an upper-bounded error by discarding the minor components, i.e., bases with the small- est singular values. This strategy may serve as either a learning-free slimmable DC solution or a strong initializa-tion in learning-based settings to accelerate convergence. We conduct extensive experiments on multiple bench- marks and applications, including continual learning and federated learning, and demonstrate the effectiveness of the proposed solution. Results suggest that our method out- performs all state-of-the-art baselines by a large margin on slimmable DC. Our contributions are summarized below: • We introduce the task of slimmable dataset condensa- tion beyond the typical DC setting, which alleviates the dilemma of existing DC methods when the budget changes for storage or transmission. • We delve into the limitations of existing algorithms for typical DC and propose a novel first-order and infinity- order matching-based training objective pertinently for slimmable DC. • We propose a significance-aware synthetic dataset parameterization with a theoretical guarantee for learning-free slimmable DC or initialization to accel- erate convergence in learning-based settings. • Experiments on multiple benchmarks and applications demonstrate the effectiveness of the proposed method and its superiority over state-of-the-art baselines.
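The bounded-error property of the significance-aware parameterization can be illustrated with a plain SVD factorization: if a synthetic set is stored as orthogonal bases weighted by singular values, discarding the least significant components yields a squared reconstruction error equal to the sum of the squared discarded singular values (Eckart–Young). The sketch below demonstrates this principle on random data; it is an illustration of the idea, not the paper's actual parameterization or training procedure.

```python
import numpy as np

def factorize_synthetic(images):
    """Parameterize a flattened synthetic set by orthogonal bases weighted by
    singular values (a significance-aware parameterization sketch)."""
    X = images.reshape(len(images), -1)                 # (n, d)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return U, S, Vt

def slim(U, S, Vt, new_rank):
    """Learning-free slimming: discard the least significant components."""
    return U[:, :new_rank], S[:new_rank], Vt[:new_rank]

# Toy check of the bounded-error property: the squared reconstruction error of
# the slimmed set equals the sum of the squared discarded singular values.
imgs = np.random.rand(20, 3, 8, 8)
U, S, Vt = factorize_synthetic(imgs)
U5, S5, Vt5 = slim(U, S, Vt, new_rank=5)
err2 = np.linalg.norm(imgs.reshape(20, -1) - U5 @ np.diag(S5) @ Vt5) ** 2
assert abs(err2 - (S[5:] ** 2).sum()) < 1e-6
```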
Lin_PCR_Proxy-Based_Contrastive_Replay_for_Online_Class-Incremental_Continual_Learning_CVPR_2023
Abstract Online class-incremental continual learning is a specific task of continual learning. It aims to continuously learn new classes from data stream and the samples of data stream are seen only once, which suffers from the catastrophic for- getting issue, i.e., forgetting historical knowledge of old classes. Existing replay-based methods effectively allevi- ate this issue by saving and replaying part of old data in a proxy-based or contrastive-based replay manner. Although these two replay manners are effective, the former would incline to new classes due to class imbalance issues, and the latter is unstable and hard to converge because of the limited number of samples. In this paper, we conduct a comprehensive analysis of these two replay manners and find that they can be complementary. Inspired by this find- ing, we propose a novel replay-based method called proxy- based contrastive replay (PCR). The key operation is to re- place the contrastive samples of anchors with correspond- ing proxies in the contrastive-based way. It alleviates the phenomenon of catastrophic forgetting by effectively ad- dressing the imbalance issue, as well as keeps a faster con- vergence of the model. We conduct extensive experiments on three real-world benchmark datasets, and empirical re- sults consistently demonstrate the superiority of PCR over various state-of-the-art methods1.
1. Introduction Online class-incremental continual learning (online CICL) is a special scenario of continual learning [12]. Its goal is to learn a deep model that can achieve knowledge accumulation of new classes and not forget information learned from old classes. In the meantime, the samples of a continuously non-stationary data stream are accessed only once during the learning process. At present, catastrophic forgetting (CF) is the main problem of online CICL.
1 https://github.com/FelixHuiweiLin/PCR
Figure 1. Illustration of our work. (a) The example of proxy-based replay manner. For each anchor sample, it calculates similarities of all anchor-to-proxy pairs. (b) The example of contrastive-based replay manner. For each anchor sample, it calculates similarities of all anchor-to-sample pairs in the same batch. (c) The example of our method. It calculates similarities of anchor-to-proxy pairs, which is similar to the proxy-based method. However, the anchor-to-proxy pairs are selected by the anchor-to-sample pairs in the same batch, which performs in the contrastive-based manner.
It is associated with the phenomenon that the model has a significant performance drop for old classes when learning new classes. The main reason is historical knowledge of old data would be overwritten by novel information of new data. Among all types of methods proposed in continual learning, the replay-based methods have shown superior performance for online CICL [25]. In this family of methods, part of previous samples are saved in an episodic memory buffer and then used to learn together with current samples. In general, there are two ways to replay. The first is the proxy-based replay manner, which is to replay by using the proxy-based loss and softmax classifier. As shown in Figure 1(a), it calculates similarities between each anchor with all proxies belonging to C classes. A proxy can be regarded as the representative of a sub-dataset [38], and the anchor is one of the samples in the training batch. The second is the contrastive-based replay manner that replays by using the contrastive-based loss and nearest class mean (NCM) classifier [27]. Shown as Figure 1(b), it computes similarities between each anchor with all N samples in the same training batch. Although these two manners are effective, they have their corresponding limitations. The former is subjected to the “bias” issue caused by class imbalance, tending to classify most samples of old classes into new categories. The latter is unstable and hard to converge in the training process due to the small number of samples. In this work, we comprehensively analyze their characteristics and find that the coupling of them can achieve complementary advantages. On the one hand, the proxy-based manner enables fast and reliable convergence with the help of proxies.
On the other hand, although the contrastive- based manner is not very robust, it has advantages in the se- lection of anchor-to-sample pairs. Only the classes associ- ated with samples in anchor-to-sample pairs can be selected to learn. Previous studies [1, 6] have proved that suitably selecting of anchor-to-proxy pairs is effective to address the “bias” issue. Therefore, it is necessary to develop a cou- pling manner to jointly keep these advantages at the same time. In other words, it not only takes proxies to improve the robustness of the model as proxy-based manner, but also overcomes the “bias” problem by selecting anchor-to-proxy pairs as the pairs selection of contrastive-based manner. With these inspirations, we propose a novel replay-based method called proxy-based contrastive replay (PCR) to al- leviate the phenomenon of CF for online CICL. The core motivation is the coupling of proxy-based and contrastive- based loss, and the key operation is to replace anchor-to- sample pairs with anchor-to-proxy pairs in the contrastive- based loss. As shown in Figure 1(c), our method calcu- lates similarities between each anchor and other proxies, which is similar to the proxy-based loss. However, it does not straightly make full use of proxies from all classes. It only takes the proxies whose associated classes of sam- ples appear in the same batch, which is analogous to the contrastive-based loss. For one thing, it keeps fast conver- gence and stable performance with the help of proxies. For another thing, it addresses the “bias” issue by only choosing part of anchor-to-proxy pairs to calculate categorical proba- bility. And the selected anchor-to-proxy pairs are generally better than the ones selected by existing solutions [1, 6]. Our main contributions can be summarized as follows: 1) We theoretically analyze the characteristics of proxy- based and contrastive-based replay manner, discover- ing the coupling manner of them is beneficial. To the best of our knowledge, this work is the first one to com- bine these two manners for the online CICL problem.2) We develop a novel online CICL framework called PCR to mitigate the forgetting problem. By replac- ing the samples for anchor with proxies in contrastive- based loss, we achieve the complementary advantages of two existing approaches. 3) We conduct extensive experiments on three real-world datasets, and the empirical results consistently demon- strate the superiority of our PCR over various state-of- the-art methods. We also investigate and analyze the benefits of each component by ablation studies.
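The key operation described above, computing a proxy-based softmax but only over the classes that actually appear in the current batch, can be sketched as follows. Treating the classifier weights as the proxies and the temperature value are assumptions of this sketch rather than details confirmed by the excerpt.

```python
import torch
import torch.nn.functional as F

def pcr_loss(features, labels, proxies, tau=0.09):
    """Proxy-based contrastive loss restricted to in-batch classes (a sketch).

    features: (B, D) embeddings of current and replayed samples
    labels:   (B,)   their class ids
    proxies:  (C, D) per-class proxies (e.g., classifier weights; an assumption)
    Only proxies of classes present in the batch take part in the softmax,
    mirroring the pair selection of the contrastive-based manner.
    """
    feats = F.normalize(features, dim=-1)
    prox = F.normalize(proxies, dim=-1)
    present = labels.unique()                            # classes observed in this batch
    logits = feats @ prox[present].t() / tau             # anchor-to-proxy similarities
    # map each label to its index within the reduced, in-batch class set
    remap = {c.item(): i for i, c in enumerate(present)}
    targets = torch.tensor([remap[l.item()] for l in labels], device=labels.device)
    return F.cross_entropy(logits, targets)
```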
Kant_Invertible_Neural_Skinning_CVPR_2023
Abstract Building animatable and editable models of clothed hu- mans from raw 3D scans and poses is a challenging prob- lem. Existing reposing methods suffer from the limited ex- pressiveness of Linear Blend Skinning (LBS), require costly mesh extraction to generate each new pose, and typically do not preserve surface correspondences across different poses. In this work, we introduce Invertible Neural Skinning (INS) to address these shortcomings. To maintain corre- spondences, we propose a Pose-conditioned Invertible Net- work (PIN) architecture, which extends the LBS process by learning additional pose-varying deformations. Next, we combine PIN with a differentiable LBS module to build an expressive and end-to-end Invertible Neural Skinning (INS) pipeline. We demonstrate the strong performance of our method by outperforming the state-of-the-art reposing tech- niques on clothed humans and preserving surface corre- spondences, while being an order of magnitude faster. We also perform an ablation study, which shows the usefulness of our pose-conditioning formulation, and our qualitative results display that INS can rectify artefacts introduced by LBS well.
1. Introduction Being able to create animatable representations of clothed humans beyond skinned meshes is essential for building realistic augmented or virtual reality experiences and improving simulators. Towards this goal, we consider the task of building animatable human representations from raw 3D scans and corresponding poses. Prior work in this area has seen a shift from building parametric models of humans [6, 21, 29], to more recent works learning implicit 3D neural representations [1,12,13,47,48,52,53] from data in canonical space. These canonical representations are an- imated to a new pose by a learning skinning weight field around them [11, 14, 35, 49, 54] and applying Linear Blend Skinning (LBS) to warp the surface, where the pose is de- fined by a bone skeleton underlying the 3D surface. Figure 1. Fast and Invertible Posing. We propose an end-to-end learnable reposing pipeline that allows animating implicit surfaces with intricate pose-varying effects, without requiring mesh extrac- tion [34] for each pose, while also maintaining correspondences across poses. These prior works generally suffer from the limited ex- pressivity of LBS when handling complex pose-varying deformations, such as those of loose clothes and body tissue (i.e. muscle bulges, skin wrinkles). In paramet- ric models like SMPL [29], such deformations are repre- sented by adding simple linear pose correctives (aka blend shapes), but these are restrictive and only work for un- clothed humans. Implicit methods, to relieve this issue, learn their canonical representations conditioned on the de- formed pose [11, 14]. However, this conditioning comes with two major drawbacks during reposing. Given the se- quence of poses, a new mesh has to be extracted from scratch for each pose, which becomes a bottleneck when animating subjects at a high frame-rate or resolution. Also, as a consequence of this step, correspondences (topology preservation) between the surfaces of the same subject across different poses are lost. Invertible Neural Networks (INN) [15, 16, 24] are bijec- tive functions that can preserve exact correspondences be- tween their input and output spaces, while learning com- plex non-linear transforms between them. This ability of INNs makes them a suitable candidate for reposing, and in this work, we leverage INNs to build an Invertible Neu- ral Skinning (INS) pipeline. For this, we first build a This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 8715 Pose-conditioned Invertible Network, abbreviated as PIN1, to learn pose-conditioned deformations. Next, to create an end-to-end Invertible Neural Skinning (INS) pipeline, we place two PINs around a differentiable LBS module, and use a pose-free canonical representation. These PINs help capture the non-linear surface deformations of clothes across poses and alleviate the volume loss suffered from the LBS operation. Since our canonical representation remains pose-free, we perform the expensive mesh extraction ex- actly once, and repose the mesh by simply warping it with the learned LBS and an inverse pass through PINs. We demonstrate the strong performance of INS by out- performing the previous state-of-the-art reposing method SNARF [11]. 
On clothed human data, INS provides an absolute gain of roughly 1% over SNARF with pose-conditioning and roughly 6% over SNARF without pose-conditioning. We also conduct experiments on much simpler, minimally clothed human data and obtain competitive results. In addition, INS is an order of magnitude faster at reposing long sequences. Finally, we ablate INS and demonstrate the effectiveness of our pose-conditioning formulation; our results clearly show that the proposed INS corrects LBS artefacts well.
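To make the idea of pose-conditioned invertible reposing concrete, the sketch below implements a single affine coupling layer conditioned on a pose code in PyTorch. It illustrates the general mechanism only, not the paper's PIN architecture: the 69-dimensional pose vector, the coordinate split, and the small MLP are placeholder assumptions. Because the map is analytically invertible, warping points and then inverting the warp recovers them exactly, which is how point-wise correspondences are preserved.

```python
import torch
import torch.nn as nn

class PoseConditionedCoupling(nn.Module):
    """Illustrative affine coupling block conditioned on a pose vector.

    Splits point coordinates into two parts, predicts a scale and shift
    for one part from the other part plus the pose code, and applies
    them. The mapping is analytically invertible, so correspondences
    between the two spaces are preserved exactly.
    """

    def __init__(self, point_dim=3, pose_dim=69, hidden=128):
        super().__init__()
        self.d = point_dim // 2                  # dimensions left unchanged
        in_dim = self.d + pose_dim
        out_dim = 2 * (point_dim - self.d)       # scale and shift
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def _scale_shift(self, x_id, pose):
        h = self.net(torch.cat([x_id, pose], dim=-1))
        log_s, t = h.chunk(2, dim=-1)
        return torch.tanh(log_s), t              # bounded scale for stability

    def forward(self, x, pose):
        x_id, x_tr = x[..., :self.d], x[..., self.d:]
        log_s, t = self._scale_shift(x_id, pose)
        return torch.cat([x_id, x_tr * torch.exp(log_s) + t], dim=-1)

    def inverse(self, y, pose):
        y_id, y_tr = y[..., :self.d], y[..., self.d:]
        log_s, t = self._scale_shift(y_id, pose)
        return torch.cat([y_id, (y_tr - t) * torch.exp(-log_s)], dim=-1)


if __name__ == "__main__":
    pin = PoseConditionedCoupling()
    pts = torch.randn(1024, 3)                   # points on a canonical surface
    pose = torch.randn(1, 69).expand(1024, -1)   # one pose code per batch
    warped = pin(pts, pose)
    recovered = pin.inverse(warped, pose)
    print(torch.allclose(pts, recovered, atol=1e-5))  # True: exact correspondences
```

In a full pipeline, several such blocks would be stacked and placed around a differentiable LBS module, as the paper describes for its two PINs.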
Kim_Spatio-Focal_Bidirectional_Disparity_Estimation_From_a_Dual-Pixel_Image_CVPR_2023
Abstract Dual-pixel photography is monocular RGB-D photography with an ultra-high resolution, enabling many applications in computational photography. However, there are still sev- eral challenges to fully utilizing dual-pixel photography. Unlike the conventional stereo pair, the dual pixel exhibits a bidirectional disparity that includes positive and nega- tive values, depending on the focus plane depth in an im- age. Furthermore, capturing a wide range of dual-pixel disparity requires a shallow depth of field, resulting in a severely blurred image, degrading depth estimation perfor- mance. Recently, several data-driven approaches have been proposed to mitigate these two challenges. However, due to the lack of the ground-truth dataset of the dual-pixel disparity, existing data-driven methods estimate either in- verse depth or blurriness map. In this work, we propose a self-supervised learning method that learns bidirectional disparity by utilizing the nature of anisotropic blur kernels in dual-pixel photography. We observe that the dual-pixel left/right images have reflective-symmetric anisotropic ker- nels, so their sum is equivalent to that of a conventional image. We take a self-supervised training approach with the novel kernel-split symmetry loss accounting for the phe- nomenon. Our method does not rely on a training dataset of dual-pixel disparity that does not exist yet. Our method can estimate a complete disparity map with respect to the focus-plane depth from a dual-pixel image, outperforming the baseline dual-pixel methods.
1. Introduction Dual-pixel is an image sensor technology that a single pixel has two photodiodes, while a pixel on the traditional image sensor has only a single photodiode. The dual-pixel is orig- inally invented to leverage the phase difference of two pho- todiodes for efficient autofocusing [1, 14, 27]. Nowadays, dual-pixel image sensors can be easily found in multiple camera platforms such as Canon EOS 5D Mark IV DSLR and Google Pixel phone cameras. The dual-pixel cameras make possible not only traditional autofocusing but also Left Left Right Right Stereo D ual-pixel Figure 1. Captured images from the traditional binocular stereo camera and dual-pixel camera, with a focus-plane depth at the middle object. Upper part of the image is a left image, and lower part is a right image from each camera. Arrows note parallax, which is unidirectional in stereo imaging and bidirectional in dual- pixel imaging. a wide range of interesting applications of RGB-D com- putational photography, such as depth estimation [9], re- focusing [29], deblurring [2, 34] or reflection removal [24]. The captured dual-pixel image can be separated into two images by gathering pixels from the left and ride sides of photodiodes, respectively. Although the baseline between dual-pixel photodiodes is short, these two images capture slightly different light fields, producing disparity by paral- lax. One characteristic of the dual-pixel setup compared with a traditional binocular stereo is that it yields a bidirectional disparity shown in Figure 1, depending on the focus-plane depth that is the distance between a camera lens and the per- fect point of focus in an image (Figure 2). The dual-pixel disparity estimation should account for the direction of hor- izontal shift changes by the focus-plane depth. In addition, the magnitude of a dual-pixel disparity is proportional to the size of a circle of confusion. To guarantee enough disparity in dual-pixel imaging, optics with a large aperture are re- quired to obtain a shallow depth of field for a large circle of confusion, which is the area around the plane of focus that appears to be in focus. As a result, the disparity is estimated from a pair of blurry dual-pixel images caused by the shal- low depth of field, which raises another challenge. The con- ventional camera has an isotropic blur kernel when an im- age is out of focus (Figure 2). However, a dual-pixel camera produces anisotropic blur kernels, i.e., the kernel shapes of the left/right photodiodes are anisotropic and symmetrically This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 5023 flipped [3,23]. It is impossible to learn these kernels explic- itly in a supervised manner since no training dataset with bidirectional disparity is available. To address those challenges, several data-driven super- vised learning-based methods have been proposed [9, 21]. However, the main challenge in this supervised approach is that it is difficult to create a ground-truth disparity dataset specifically designed for a dual-pixel system unless it is produced by simulating complete light transport through camera optics with the variation of focus-plane depths in each image. Up to date, there is no bidirectional disparity dataset available for supervised learning. 
Due to this, these methods instead estimate either inverse depth [9, 21] or blurriness maps [34] only, and therefore cannot directly estimate the disparity of a dual-pixel image without focus-plane depth information. Thus, only traditional optimization-based approaches [23, 29] have handled bidirectional disparity estimation in dual-pixel photography; no learning-based approach has tackled this problem. In this work, we propose a learning-based bidirectional disparity estimation method specially tailored for dual-pixel imaging. Our contribution consists of two main parts: (1) Assuming an isotropic blur kernel, we first pretrain a conventional stereo network on a stereo dataset with left and right inverted. (2) Based on our observation that the dual-pixel left/right images have reflective-symmetric anisotropic kernels whose sum is equivalent to the kernel of a conventional image, we employ a novel self-supervised training scheme with a kernel-split symmetry loss that accounts for this phenomenon. In addition, comprehensive evaluations demonstrate that our model represents the 3D geometric relationships among objects more accurately.
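As a rough illustration of the kernel-split symmetry idea, the snippet below renders the two dual-pixel views of a sharp image with a hand-made half-aperture Gaussian kernel and its horizontal mirror, and penalizes both the per-view reconstruction error and the mismatch between their sum and the conventional (full-aperture) image. This is a simplified sketch: the actual method uses learned, spatially varying, disparity-dependent kernels, whereas here a single global sigma stands in for the blur.

```python
import torch
import torch.nn.functional as F

def gaussian_half_kernel(sigma, size=9):
    """Toy left-photodiode PSF: a Gaussian aperture with only its left half
    kept (the center column split evenly), so the left kernel plus its
    horizontal mirror equals the full-aperture kernel."""
    ax = torch.arange(size, dtype=torch.float32) - size // 2
    yy, xx = torch.meshgrid(ax, ax, indexing="ij")
    g = torch.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    mask = (xx < 0).float() + 0.5 * (xx == 0).float()
    return g * mask / g.sum()

def kernel_split_symmetry_loss(sharp, left_obs, right_obs, sigma):
    """Illustrative kernel-split symmetry objective for (B, 1, H, W) tensors:
    the right-view kernel is the horizontal mirror of the left-view kernel,
    and the two synthesized views must sum to the conventional image."""
    k_left = gaussian_half_kernel(sigma)[None, None]
    k_right = torch.flip(k_left, dims=[-1])            # reflective symmetry
    pad = k_left.shape[-1] // 2
    left_hat = F.conv2d(sharp, k_left, padding=pad)
    right_hat = F.conv2d(sharp, k_right, padding=pad)
    recon = F.l1_loss(left_hat, left_obs) + F.l1_loss(right_hat, right_obs)
    # The sum of the dual-pixel views is treated as the conventional image.
    symmetry = F.l1_loss(left_hat + right_hat, left_obs + right_obs)
    return recon + symmetry
```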
Li_MDQE_Mining_Discriminative_Query_Embeddings_To_Segment_Occluded_Instances_on_CVPR_2023
Abstract While impressive progress has been achieved, video instance segmentation (VIS) methods with per-clip input often fail on challenging videos with occluded objects and crowded scenes. This is mainly because instance queries in these methods cannot encode the discriminative embeddings of instances well, making it difficult for the query-based segmenter to distinguish such ‘hard’ instances. To address these issues, we propose to mine discriminative query embeddings (MDQE) to segment occluded instances in challenging videos. First, we initialize the positional embeddings and content features of object queries by considering their spatial contextual information and inter-frame object motion. Second, we propose an inter-instance mask repulsion loss that distances each instance from its nearby non-target instances. The proposed MDQE is the first VIS method with per-clip input that achieves state-of-the-art results on challenging videos and competitive performance on simple videos. Specifically, MDQE with ResNet50 achieves 33.0% and 44.5% mask AP on OVIS and YouTube-VIS 2021, respectively. Code is available at https://github.com/MinghanLi/MDQE_CVPR2023.
1. Introduction Video instance segmentation (VIS) [51] aims to obtain pixel-level segmentation masks for instances of different classes over the entire video. The current VIS methods can be roughly divided into two paradigms: per-frame in- put based methods [3, 18, 20, 26, 47, 51, 52] and per-clip input based methods [1, 19, 27, 45, 46, 48, 53]. The for- mer paradigm first partitions the whole video into individual frames to segment objects frame by frame, and then asso- ciate the predicted instance masks across frames, while the latter takes per-clip spatio-temporal features as input to pre- dict multi-frame instance masks with the help of embedding learning [1], graph neural networks [41] and transformer *Corresponding author.networks [17, 19, 45, 46]. The recently proposed per-clip VIS methods [19, 45, 46, 48] have set new records on the YouTube-VIS datasets [51], achieving significant performance improvement over the per-frame VIS methods [3, 5, 12, 14, 26, 52, 54]. Seq- Former [46] and VITA [17] locate an instance in each frame and aggregate temporal information to learn powerful rep- resentations of video-level instances via a naive weighted manner and a video-level decoder, respectively. However, on the challenging OVIS dataset [37], which includes oc- cluded or similar-looking instances in crowded scenes, the per-clip VIS methods lag behind the per-frame ones. Ac- tually, the recently developed per-frame method IDOL [47] records state-of-the-art performance on OVIS by introduc- ing contrastive learning [10, 35, 44] to learn inter-frame in- stance embeddings. We argue that the per-clip VIS meth- ods should be able to exploit richer spatial-temporal fea- tures and achieve better performance than their per-frame counterparts. However, there are two main issues that limit the existing per-clip methods to achieve this goal. First, existing query-based VIS methods adopt zero or random input as the positional embeddings and content fea- tures of object queries in decoder layers, which cannot en- code spatio-temporal prior of objects, resulting in poor re- sults on challenging videos. Second, during training, the existing mask prediction loss mainly forces each query to match the pixels of its target instance [21,42] and mismatch the pixels of other instances and the background. No fur- ther inter-instance clue has been exploited to teach the seg- menter to distinguish mutually occluded instances. To address the above issues, we propose to mine dis- criminative query embeddings (MDQE) to better segment hard instances on challenging videos for per-clip VIS meth- ods. First, we propose to improve the initialization of ob- ject queries to specify discriminative spatial-temporal pri- ors. We divide the activation map of each frame into several patches via a grid and select the peak point in each patch as the initial positions of frame-level queries, and then asso- ciate them across frames by embedding similarity to ensure that frame-level queries in the same grid of the video clip This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 10524 Figure 1. 
(a) The proposed MDQE architecture consists of a backbone and encoder that extract multi-scale features Ffrom a video clip, a query initialization module that produces temporally-aligned frame-level queries qt, a decoder that decodes discriminative clip-level queries q, and a Mask Net that generates mask features D. The mask features Dand clip-level queries qare combined via a linear combination to obtain the clip-level instance mask ˆM, which is supervised by our proposed inter-instance mask repulsion loss in Sec. 3.3. (b) The frame-level query initialization consists of two steps: grid-guided query selection and inter-frame query association, resulting in temporally-aligned frame-level queries. Please refer to Sec. 3.2 for more details. can correspond to the same object. Second, to teach the query-based segmenter to distinguish occluded instances, we replace the original mask prediction loss with an inter- instance mask repulsion loss, which forces each query to ac- tivate the pixels of its target instance and suppress the pixels of its surrounding non-target instances. The proposed VIS method with per-clip input, namely MDQE, is the first to achieve contrastive learning of in- stance embeddings via query initialization and the inter- instance mask repulsion supervision, which can effectively segment hard instances on challenging videos. Our exper- iments on both OVIS and YouTube-VIS datasets validate that MDQE with per-clip input achieve competitive perfor- mance with its per-frame competitors.
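A minimal version of an inter-instance mask repulsion term can be written as a re-weighted mask loss that puts extra penalty on pixels belonging to other instances. The weighting scheme and the `neighbor_weight` value below are illustrative assumptions, not the exact loss of MDQE, which may combine further terms.

```python
import torch.nn.functional as F

def inter_instance_mask_repulsion_loss(pred_logits, gt_masks, neighbor_weight=2.0):
    """Illustrative inter-instance repulsion loss for query-based segmentation.

    pred_logits: (N, H, W) mask logits, one map per matched query.
    gt_masks:    (N, H, W) binary ground-truth masks in the same order.
    Each query is pulled toward its own instance and pushed harder away
    from pixels covered by the other (non-target) instances.
    """
    gt = gt_masks.float()
    total = gt.sum(dim=0, keepdim=True)                  # (1, H, W)
    non_target = ((total - gt) > 0).float()              # other instances' pixels
    bce = F.binary_cross_entropy_with_logits(pred_logits, gt, reduction="none")
    # Up-weight the non-target instance pixels, which are the hardest
    # ones to suppress under occlusion and crowding.
    weights = 1.0 + neighbor_weight * non_target
    return (weights * bce).sum() / weights.sum()
```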
Kim_Demystifying_Causal_Features_on_Adversarial_Examples_and_Causal_Inoculation_for_CVPR_2023
Abstract The origin of adversarial examples is still inexplicable in research fields, and it arouses arguments from various view- points, albeit comprehensive investigations. In this paper, we propose a way of delving into the unexpected vulnera- bility in adversarially trained networks from a causal per- spective, namely adversarial instrumental variable (IV) re- gression. By deploying it, we estimate the causal relation of adversarial prediction under an unbiased environment dis- sociated from unknown confounders. Our approach aims to demystify inherent causal features on adversarial exam- ples by leveraging a zero-sum optimization game between a casual feature estimator (i.e., hypothesis model) and worst- case counterfactuals (i.e., test function) disturbing to find causal features. Through extensive analyses, we demon- strate that the estimated causal features are highly related to the correct prediction for adversarial robustness, and the counterfactuals exhibit extreme features significantly devi- ating from the correct prediction. In addition, we present how to effectively inoculate CAusal FEatures (CAFE) into defense networks for improving adversarial robustness.
1. Introduction Adversarial examples, which are indistinguishable to hu- man observers but maliciously fooling Deep Neural Net- works (DNNs), have drawn great attention in research fields due to their security threats used to compromise machine learning systems. In real-world environments, such poten- tial risks evoke weak reliability of the decision-making pro- cess for DNNs and pose a question of adopting DNNs in safety-critical areas [4, 58, 66]. To understand the origin of adversarial examples, semi- nal works have widely investigated the adversarial vulner- ability through numerous viewpoints such as excessive lin- earity in a hyperplane [26], aberration of statistical fluctu- ations [59, 63], and phenomenon induced from frequency *Equal contribution. †Corresponding author. 𝑍𝑍 𝑌𝑌𝑈𝑈 𝑇𝑇ℎConfounder Outcome TreatmentInstrument 𝑋𝑋advFeature Variation 𝒇𝒇 Causal Estimatorℎ𝑍𝑍 𝒇𝒇𝑋𝑋advUnobserved Confounders Causal FeaturesFigure 1. Data generating process (DGP) with IV . By deploying Z, it can estimate causal relation between treatment Tand outcome Yunder exogenous condition for unknown confounders U. information [73]. Recently, several works [34, 35, 40] have revealed the existence and pervasiveness of robust and non-robust features in adversarially trained networks and pointed out that the non-robust features on adversarial ex- amples can provoke unexpected misclassifications. Nonetheless, there still exists a lack of common consen- sus [22] on underlying causes of adversarial examples, al- beit comprehensive endeavors [32,64]. It is because that the earlier works have focused on analyzing associations be- tween adversarial examples and target labels in the learning scheme of adversarial training [42, 54, 67, 72, 77], which is canonical supervised learning. Such analyses easily induce spurious correlation ( i.e.,statistical bias) in the learned as- sociations, thereby cannot interpret the genuine origin of adversarial vulnerability under the existence of possibly bi- ased viewpoints ( e.g.,excessive linearity, statistical fluc- tuations, frequency information, and non-robust features). In order to explicate where the adversarial vulnerability comes from in a causal perspective and deduce true adver- sarial causality, we need to employ an intervention-oriented approach ( i.e.,causal inference) that brings in estimating causal relations beyond analyzing merely associations for the given data population of adversarial examples. One of the efficient tools for causal inference is in- strumental variable (IV) regression when randomized con- trolled trials (A/B experiments) or full controls of unknown confounders are not feasible options. It is a popular ap- proach used to identify causality in econometrics [13, 16, 47], and it provides an unbiased environment from un- known confounders that raise the endogeneity of causal in- ference [55]. In IV regression, the instrument is utilized This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 12302 to eliminate a backdoor path derived from unknown con- founders by separating exogenous portions of treatments. For better understanding, we can instantiate a case of find- ing causal relations [9] between education Tand earnings Yas illustrated in Fig. 1. 
Solely measuring correlation be- tween the two variables does not imply causation, since there may exist unknown confounders U(e.g.,individual ability, family background, etc.). Ideally, conditioning on Uis the best way to identify causal relation, but it is impos- sible to control the unobserved variables. David Card [9] has considered IV as the college proximity Z, which is di- rectly linked with education Tbut intuitively not related with earnings Y. By assigning exogenous portion to Z, it can provide an unbiased environment dissociated from U for identifying true causal relation between TandY. Specifically, once regarding data generating process (DGP) [53] for causal inference as in Fig. 1, the existence of unknown confounders Ucould create spurious correlation generating a backdoor path that hinders causal estimator h (i.e.,hypothesis model) from estimating causality between treatment Tand outcome Y(T←U→Y). By adopt- ing an instrument Z, we can acquire the estimand of true causality from hin an unbiased state ( Z→T→Y). Bring- ing such DGP into adversarial settings, the aforementioned controversial perspectives ( e.g.,excessive linearity, statisti- cal fluctuations, frequency information, and non-robust fea- tures) can be regarded as possible candidates of unknown confounders Uto reveal adversarial origins. In most ob- servational studies, everything is endogenous in practice so that we cannot explicitly specify all confounders and con- duct full controls of them in adversarial settings. Accord- ingly, we introduce IV regression as a powerful causal ap- proach to uncover adversarial origins, due to its capability of causal inference although unknown confounders remain. Here, unknown confounders Uin adversarial settings easily induce ambiguous interpretation for the adversarial origin producing spurious correlation between adversarial examples and their target labels. In order to uncover the adversarial causality, we first need to intervene on the in- termediate feature representation derived from a network f and focus on what truly affects adversarial robustness irre- spective of unknown confounders U, instead of model pre- diction. To do that, we define the instrument Zas feature variation in the feature space of DNNs between adversar- ial examples and natural examples, where the variation Z is originated from the adversarial perturbation in the im- age domain such that Zderives adversarial features Tfor the given natural features. Note that regarding Zas in- strument is reasonable choice, since the feature variation alone does not serve as relevant information for adversar- ial prediction without natural features. Next, once we find causality-related feature representations on adversarial ex- amples, then we name them as causal features Ythat canencourage robustness of predicting target labels despite the existence of adversarial perturbation as in Fig. 1. In this paper, we propose adversarial instrumental vari- able (IV) regression to identify causal features on adversar- ial examples concerning the causal relation of adversarial prediction. Our approach builds an unbiased environment for unknown confounders Uin adversarial settings and es- timates inherent causal features on adversarial examples by employing generalized method of moments (GMM) [28] which is a flexible estimation for non-parametric IV regres- sion. 
Similar to the nature of adversarial learning [5, 25], we deploy a zero-sum optimization game [20, 41] between a hypothesis model and test function, where the former tries to unveil causal relation between treatment and outcome, while the latter disturbs the hypothesis model from esti- mating the relation. In adversarial settings, we regard the hypothesis model as a causal feature estimator which ex- tracts causal features in the adversarial features to be highly related to the correct prediction for the adversarial robust- ness, while the test function makes worst-case counterfac- tuals ( i.e.,extreme features) compelling the estimand of causal features to significantly deviate from correct predic- tion. Consequently, it can further strengthen the hypothesis model to demystify causal features on adversarial examples. Through extensive analyses, we corroborate that the es- timated causal features on adversarial examples are highly related to correct prediction for adversarial robustness, and the test function represents the worst-case counterfactuals on adversarial examples. By utilizing feature visualiza- tion [43, 50], we interpret the causal features on adversar- ial examples in a human-recognizable way. Furthermore, we introduce an inversion of the estimated causal features to handle them on the possible feature bound and present a way of efficiently injecting these CAusal FEatures (CAFE) into defense networks for improving adversarial robustness.
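To give a flavor of the zero-sum game between the hypothesis model and the test function, the toy script below runs an adversarial, GMM-style IV regression on synthetic 1-D data: a test function of the instrument tries to expose violations of the moment condition E[(Y - h(T)) g(Z)] = 0, while the hypothesis model suppresses them. Everything here (the data-generating process, the squared-moment losses, the bound penalty on g) is a simplified stand-in; the paper's estimator operates on network feature variations rather than scalar variables.

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=64):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

def adversarial_iv_step(h, g, opt_h, opt_g, Z, T, Y):
    """One zero-sum update for a moment-based (GMM-style) IV regression.

    h: hypothesis model predicting the outcome from the treatment T.
    g: test function of the instrument Z trying to expose violations of
       the moment condition E[(Y - h(T)) g(Z)] = 0.
    """
    # Test-function step: increase the squared moment it can expose,
    # with a quadratic penalty that keeps g bounded.
    residual = (Y - h(T)).detach()
    moment_g = (residual * g(Z)).mean()
    loss_g = -moment_g.pow(2) + 0.1 * g(Z).pow(2).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    # Hypothesis step: shrink the moment exposed by the current g.
    moment_h = ((Y - h(T)) * g(Z).detach()).mean()
    loss_h = moment_h.pow(2)
    opt_h.zero_grad(); loss_h.backward(); opt_h.step()
    return loss_h.item()

if __name__ == "__main__":
    torch.manual_seed(0)
    n = 4096
    Z = torch.randn(n, 1)                     # instrument
    U = torch.randn(n, 1)                     # unobserved confounder
    T = Z + U + 0.1 * torch.randn(n, 1)       # treatment depends on Z and U
    Y = 2.0 * T + 2.0 * U                     # true causal effect of T is 2
    h, g = mlp(1, 1), mlp(1, 1)
    opt_h = torch.optim.Adam(h.parameters(), lr=1e-3)
    opt_g = torch.optim.Adam(g.parameters(), lr=1e-3)
    for _ in range(2000):
        adversarial_iv_step(h, g, opt_h, opt_g, Z, T, Y)
    t_grid = torch.linspace(-2, 2, 5).unsqueeze(1)
    # Ideally close to 2 * t_grid; a plain regression of Y on T would be
    # biased upward by the confounder U (slope around 3 on this data).
    print(h(t_grid).squeeze().detach())
```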
Kleinman_Critical_Learning_Periods_for_Multisensory_Integration_in_Deep_Networks_CVPR_2023
Abstract We show that the ability of a neural network to integrate information from diverse sources hinges critically on be- ing exposed to properly correlated signals during the early phases of training. Interfering with the learning process during this initial stage can permanently impair the devel- opment of a skill, both in artificial and biological systems where the phenomenon is known as a critical learning pe- riod. We show that critical periods arise from the complex and unstable early transient dynamics, which are decisive of final performance of the trained system and their learned representations. This evidence challenges the view, engen- dered by analysis of wide and shallow networks, that early learning dynamics of neural networks are simple, akin to those of a linear model. Indeed, we show that even deep linear networks exhibit critical learning periods for multi- source integration, while shallow networks do not. To bet- ter understand how the internal representations change ac- cording to disturbances or sensory deficits, we introduce a new measure of source sensitivity, which allows us to track the inhibition and integration of sources during training. Our analysis of inhibition suggests cross-source reconstruc- tion as a natural auxiliary training objective, and indeed we show that architectures trained with cross-sensor recon- struction objectives are remarkably more resilient to crit- ical periods. Our findings suggest that the recent success in self-supervised multi-modal training compared to previ- ous supervised efforts may be in part due to more robust learning dynamics and not solely due to better architectures and/or more data.
1. Introduction Learning generally benefits from exposure to diverse sources of information, including different sensory modal- ities, views, or features. Multiple sources can be more in- formative than the sum of their parts. For instance, both views of a random-dot stereogram are needed to extract the *Work conducted during an internship at AWS AI Labs.synergistic information , which is absent in each individual view [ 17]. More generally, multiple sources can help iden- tify latent common factors of variation relevant to the task, and separate them from source-specific nuisance variability, as done in contrastive learning. Much information fusion work in Deep Learning focuses on the design of the architecture, as different sources may require different architectural biases to be efficiently en- coded. We instead focus on the learning dynamics , since effective fusion of different sources relies on complex phe- nomena beginning during the early epochs of training. In fact, even slight interference with the learning process dur- ing this critical period can permanently damage a network’s ability to harvest synergistic information. Even in animals, which excel at multi-sensor fusion, a temporary deficit in one source during early development can permanently im- pair the learning process: congenital strabismus in humans can cause permanent loss of stereopsis if not corrected suf- ficiently early; similarly, visual/auditory misalignment can impair the ability of barn owls to localize prey [ 18]. In artifi- cial networks, the challenge of integrating different sources has been noted in visual question answering (VQA), where the model often resorts to encoding less rich but more read- ily accessible textual information [ 2,6], ignoring the visual modality, or in audio-visual processing, where acoustic in- formation is often washed out by visual information [ 32]. Such failures are commonly attributed to the mismatch in learning speed between sources, or their “information asymmetry” for the task. It has also been suggested, based on limiting analysis for wide networks, that the initial dy- namics of DNNs are very simple [ 16], seemingly in contrast with evidence from biology. In this paper, we instead argue thatthe early learning dynamics of information fusion in deep networks are both highly complex and brittle, to the point of exhibiting critical learning periods similar to bio- logical systems. In Sect. 2, we show that shallow networks do not exhibit critical periods when learning to fuse diverse sources of in- formation, but deep networks do. Even though, unlike an- imals, artificial networks do not age, their learning success is still decided during the early phases of training. The ex- This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 24296 Figure 1. Decomposition of infor
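The deficit protocol behind critical-period experiments can be mimicked in a few lines: train a small two-source network on a synergistic target while one source is replaced by noise for the first part of training, then restore it and probe how much each source contributes. The network, the gradient-norm sensitivity probe, and the synthetic task below are placeholder assumptions meant to illustrate the experimental protocol, not to reproduce the paper's measurements.

```python
import torch
import torch.nn as nn

class TwoSourceNet(nn.Module):
    """Small network that fuses two input 'sources' (e.g. two views)."""
    def __init__(self, dim=32, depth=4):
        super().__init__()
        layers = [nn.Linear(2 * dim, dim), nn.ReLU()]
        for _ in range(depth - 1):
            layers += [nn.Linear(dim, dim), nn.ReLU()]
        layers += [nn.Linear(dim, 1)]
        self.net = nn.Sequential(*layers)

    def forward(self, a, b):
        return self.net(torch.cat([a, b], dim=-1))

def source_sensitivity(model, a, b):
    """Mean gradient norm of the output w.r.t. each source: a rough probe
    of how much each source contributes to the prediction."""
    a = a.clone().requires_grad_(True)
    b = b.clone().requires_grad_(True)
    model(a, b).sum().backward()
    return a.grad.norm(dim=-1).mean().item(), b.grad.norm(dim=-1).mean().item()

def train_with_deficit(deficit_epochs, total_epochs=60, dim=32, n=2048):
    """Replace source B with noise for the first `deficit_epochs` epochs,
    then restore it, and probe how much the trained network uses B."""
    torch.manual_seed(0)
    a, b = torch.randn(n, dim), torch.randn(n, dim)
    y = a[:, :1] * b[:, :1]          # synergistic target: needs both sources
    model = TwoSourceNet(dim)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for epoch in range(total_epochs):
        b_in = torch.randn_like(b) if epoch < deficit_epochs else b
        loss = nn.functional.mse_loss(model(a, b_in), y)
        opt.zero_grad(); loss.backward(); opt.step()
    return source_sensitivity(model, a, b)

if __name__ == "__main__":
    print("no deficit   :", train_with_deficit(0))
    print("early deficit:", train_with_deficit(40))
```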
Kim_Single_Domain_Generalization_for_LiDAR_Semantic_Segmentation_CVPR_2023
Abstract With the success of the 3D deep learning models, var- ious perception technologies for autonomous driving have been developed in the LiDAR domain. While these mod- els perform well in the trained source domain, they strug- gle in unseen domains with a domain gap. In this pa- per, we propose a single domain generalization method for LiDAR semantic segmentation (DGLSS) that aims to en- sure good performance not only in the source domain but also in the unseen domain by learning only on the source domain. We mainly focus on generalizing from a dense source domain and target the domain shift from different LiDAR sensor configurations and scene distributions. To this end, we augment the domain to simulate the unseen do- mains by randomly subsampling the LiDAR scans. With the augmented domain, we introduce two constraints for gen- eralizable representation learning: sparsity invariant fea- ture consistency (SIFC) and semantic correlation consis- tency (SCC). The SIFC aligns sparse internal features of the source domain with the augmented domain based on the feature affinity. For SCC, we constrain the correlation be- tween class prototypes to be similar for every LiDAR scan. We also establish a standardized training and evaluation setting for DGLSS. With the proposed evaluation setting, our method showed improved performance in the unseen domains compared to other baselines. Even without ac- cess to the target domain, our method performed better than the domain adaptation method. The code is available at https://github.com/gzgzys9887/DGLSS .
1. Introduction Understanding the surrounding scene is essential for au- tonomous driving systems. Using LiDAR sensors for per- ception has recently gained popularity from their ability to provide accurate distance information. Among such tasks, LiDAR semantic segmentation (LSS) predicts point-wise semantic labels from a single sweep of LiDAR data. To- *The first two authors contributed equally. In alphabetical order. (a) Ground Truth (b) Domain Adaptation (c) Ours Unseen domainSource domainFigure 1. Segmentation results on the source (SemanticKITTI [4]) and unseen (nuScenes-lidarseg [7]) domains. The results are from (a) ground truth, (b) domain adaptation method [60] adapted to Waymo [64] dataset, and (c) ours. Our method predicts success- fully in both source and unseen domains, while the domain adap- tation method struggles in both domains. gether with the recent development of 3D point cloud deep- learning models and the release of several real-world 3D an- notated datasets [4,7,53,64], numerous research on LiDAR semantic segmentation have emerged lately. However, since these models do not consider the data distribution differ- ences between the train and test domains during the learn- ing process, severe performance deterioration occurs when applied to real-world applications where domain gaps exist. Two main domain gaps arise for real-scene LiDAR datasets: 1) differences in sparsity due to different sensor configurations and 2) differences in scene distribution. De- pending on the type of LiDAR sensor, the total number of beams, vertical field of view (FOV), and vertical and hori- zontal resolutions differ, which leads to different sampling patterns. SemanticKITTI [4] dataset used a 64-beam Li- DAR sensor, but a 32-beam sensor was used in nuScenes- lidarseg [7], and for SemanticPOSS [53], a 40-beam sen- sor was used. Waymo [64] dataset also used a 64-beam LiDAR, but there is a sparsity difference because the ver- tical resolution differs from SemanticKITTI. Table 1 sum- marizes configuration details of LiDAR sensors used in Se- manticKITTI, nuScenes-lidarseg, Waymo, and Semantic- POSS. In addition, each dataset was acquired from different This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 17587 locations, so the scene structure as well as category distri- bution can vary enormously. For example, SemanticKITTI mainly contains suburban areas and therefore has a high number of cars, roads, and vegetation. Whereas nuScenes- lidarseg and Waymo datasets were acquired from dense ur- ban environments and contain crowded pedestrian scenes, downtown, and also residential areas. Fig. 1 shows the dif- ferences in sparsity and scene between different datasets. Recently, unsupervised domain adaptation (UDA) meth- ods for LiDAR point clouds [1, 60, 72, 76, 81] have been proposed to mitigate performance degradation in a specific target domain. While UDA methods perform well in the target domain, they cannot guarantee high performance in unseen domains, which is critical for safe driving systems. As shown in Fig. 1(b), the UDA method [60] trained on the source domain (SemanticKITTI) and adapted to the tar- get domain (Waymo) has low performance in the unseen domain (nuScenes-lidarseg). Also, the cumbersome pro- cess of acquiring new data and retraining is required to ap- ply UDA to a new target domain. 
These problems can be tackled by building domain generalizable LSS models that guarantee performance on unseen domains and robustness against domain gaps. Among previous works, [76] did test its UDA methodology in a domain generalization (DG) set- ting. However, the method was not specifically designed for DG and was deemed impractical for real-world applications due to its limited evaluation with 2 classes. All of these cir- cumstances further highlight the importance of developing methodologies that primarily aim at domain generalization for LiDAR semantic segmentation. In this paper, we propose a domain generalization ap- proach for LiDAR semantic segmentation (DGLSS), which aims to achieve high performance in unseen domains while training the segmentation model only on source domains. In particular, we propose a representation learning approach for DGLSS by focusing on differences in sparsity and scene distribution. Since obtaining multiple fully-labeled LiDAR datasets as source domains for training is cost-expensive, we choose to perform generalization using a single source domain. Also, as using sparser LiDAR data in an ap- plication is more efficient and a more likely scenario for autonomous driving, we focus on learning from a denser source domain. To this end, we first simulate the unseen domain during the learning process while considering the characteristics of the actual LiDAR sensor. We augment the domain by randomly subsampling LiDAR beams from the LiDAR scan of the source domain at every iteration of the learning process. With the augmented sparse domain, we introduce two constraints for generalizable representa- tion learning: sparsity invariant feature consistency (SIFC) andsemantic correlation consistency (SCC) . The purpose of SIFC is to align the internal sparse features of the source do- main with the augmented domain based on the feature affin-Table 1. Configuration of sensors used to acquire LiDAR datasets. DatasetLiDAR beamsvertical FOV(°)vertical res(°)horizontal res(°)range (m) SemanticKITTI [4] 64 [-23.6, 3.2] 0.4 0.08 120 nuScenes-lidarseg [7] 32 [-30, 10] 1.33 0.1–0.4 70 Waymo [64] 64 [-17.6, 2.4] 0.31 0.16 75 SemanticPOSS [53] 40 [-16, 7] 0.33/1 0.2 200 ity. For SCC, we build scene-wise class prototypes and con- strain the correlation between class prototypes to be similar for every LiDAR scene regardless of domain. Learning with the proposed constraints, the model can generalize well on unseen domains that have different sparsity and scene dis- tribution compared to the source domain. In addition, we build standardized training and evalua- tion settings for DGLSS, as such setup for domain general- ization has been absent. For this, we employ 4 real-world LiDAR datasets [4, 7, 53, 64] and carefully select 10 com- mon classes existing in datasets. Then, we remap the labels of each dataset to the common labels. Furthermore, we im- plement baseline methods using general-purpose DG meth- ods applicable to LiDAR point cloud [38, 51] and evaluate them in the proposed evaluation setting. With the proposed evaluation setting, our method showed good performance in the unseen domains compared to the baselines. Even with- out access to the target domain, our method showed com- parable performance to the domain adaptation method as shown in Fig. 1(c). In summary, our contributions are as follows: • To the best of our knowledge, we propose the first ap- proach that primarily aims at domain generalization for LiDAR semantic segmentation (DGLSS). 
• We build standardized training and evaluation settings for DGLSS and implement several DG baseline methods applicable to the point cloud domain. • We propose sparsity invariant feature consistency and semantic correlation consistency for generalizable representation learning for DGLSS, which can effectively deal with the domain gaps of LiDAR point clouds. • Extensive experiments show that our approach outperforms both UDA and DG baselines.
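The beam-subsampling augmentation that simulates sparser sensors can be sketched directly on a raw point cloud. The snippet below approximates per-point beam indices by binning elevation angles and then keeps a random subset of beams; real datasets that store ring indices would use those instead, and the exact sampling strategy in DGLSS may differ.

```python
import numpy as np

def subsample_beams(points, num_beams=64, keep_beams=32, seed=None):
    """Simulate a sparser LiDAR by keeping only a random subset of beams.

    points: (N, 4) array of x, y, z, intensity from a dense (e.g. 64-beam) scan.
    Beam indices are approximated by rank-binning the elevation angle, a
    common stand-in when ring indices are not stored with the data.
    """
    rng = np.random.default_rng(seed)
    xyz = points[:, :3]
    elevation = np.arctan2(xyz[:, 2], np.linalg.norm(xyz[:, :2], axis=1))
    # Bin elevations into `num_beams` roughly equal-count bins -> pseudo beam id.
    order = np.argsort(elevation)
    beam_id = np.empty(len(points), dtype=np.int64)
    beam_id[order] = np.arange(len(points)) * num_beams // len(points)
    kept = rng.choice(num_beams, size=keep_beams, replace=False)
    mask = np.isin(beam_id, kept)
    return points[mask], mask
```

Applying this augmentation at every training iteration yields the augmented sparse domain against which the SIFC and SCC constraints are enforced.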
Leyva-Vallina_Data-Efficient_Large_Scale_Place_Recognition_With_Graded_Similarity_Supervision_CVPR_2023
Abstract Visual place recognition (VPR) is a fundamental computer vision task for visual localization. Existing methods are trained using image pairs labeled as either depicting the same place or not. Such a binary indication does not consider the continuous relations of similarity between images of the same place taken from different positions, determined by the continuous nature of camera pose. The binary similarity injects a noisy supervision signal into the training of VPR methods, which stall in local minima and require expensive hard-mining algorithms to guarantee convergence. Motivated by the fact that two images of the same place only partially share visual cues due to camera pose differences, we deploy an automatic re-annotation strategy to re-label VPR datasets. We compute graded similarity labels for image pairs based on available localization metadata. Furthermore, we propose a new Generalized Contrastive Loss (GCL) that uses graded similarity labels for training contrastive networks. We demonstrate that the new labels and GCL allow us to dispense with hard-pair mining and to train image descriptors that perform better in VPR by nearest-neighbor search, obtaining results superior or comparable to methods that require expensive hard-pair mining and re-ranking techniques.
1. Introduction Visual place recognition (VPR) is an important task of computer vision, and a fundamental building block of nav- igation systems for autonomous vehicles [24, 48]. It is approached either with structure-based methods, namely Structure-from-Motion [36] and SLAM [26], or with im- age retrieval [2, 15, 20, 29, 30, 46]. The former focus on precise relative camera pose estimation [34, 35]. The lat- ter aim at learning image descriptors for effective retrieval of similar images to a given query in a nearest search ap- proach [28]. The goal of descriptor learning is to ensure im- ages of the same place to be projected onto close-by points in a latent space, and images of different places to be pro- jected onto distant points [9,10,21]. Contrastive [19,30] and triplet [2,22,23,27] loss were used for this goal and resulted (a) reference image (b) GPS distance 6m positive(c) GPS distance 25.6m negative Figure 1. (a) A place in the city of Amman. (b) An image taken 6m away is labeled as positive (same place), while (c) an image taken 25.6m away is labeled as negative (not the same place) despite sharing a lot of visual cues. in state-of-the-art performance on several VPR benchmarks. VPR methods are normally trained using image pairs labelled to indicate they either depict the same place or not, in a binary fashion. In practice, images of a certain place can be taken from different positions, i.e. with a different camera pose, and thus share only a part of their visual cues (or surface in 3D). In existing datasets, two images are usually labeled to be of the same place (positive) if they are taken within a predefined range (usually 25m) computed using e.g. GPS metadata. This creates ambiguous cases. For instance, Figure 1 shows a reference image (a) of a place and two other pictures taken 6m(b) and 25.6m(c) away from its position. The images are respectively labeled as positive and negative match, although they share many visual cues (e.g. the building on the right). Binary labels are thus noisy and interfere with the training of VPR networks, that usually stall in local minima. To address this, resource- and time- costly hard pair mining strategies are used to compose the training batches. For example, training NetVLAD [2] on the Mapillary Street Level Sequences (MSLS) dataset [45] can take more than 20 days on an Nvidia v100 gpu due to the complexity of pair mining. We instead build on the observation that two images depict the same place only to a certain degree of shared cues, namely a degree of similarity, and propose to embed this information in new continuous labels for existing datasets that can be used to reduce the effect of noise in the training of effective VPR methods. In this paper we exploit camera pose metadata or 3D in- formation associated to image pairs as a proxy to estimate an This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 23487 approximate degree of similarity (hereinafter, graded similar- ity) between images of the same place, and use it to relabel popular VPR datasets. Graded similarity labels can be used to pick easy- and hard-pairs and compose training batches without complex pair-mining, thus speeding-up the training of VPR networks and enabling an efficient use of data. 
Fur- thermore, we embed the graded similarity into a Generalized Contrastive Loss (GCL) function that we use to train a VPR pipeline. The intuition behind this choice is that the update of network weights should not be equal for all training pairs, but rather be influenced by their similarity. The representa- tions of image pairs with larger graded similarity should be pushed together in the latent space more strongly than those of images with a lower graded similarity. The distance in the latent space is thus expected to be a better measure of ranking images according to their similarity, avoiding the use of expensive re-ranking to improve retrieval results. We validate the proposed approaches on several VPR benchmark datasets. To the best of our knowledge, this work is the first to use graded similarity for large-scale place recognition, and paying attention to data-efficient training. We summarize the contributions of this work as: •new labels for VPR datasets indicating the graded sim- ilarity of image pairs. We computed the labels with automatic methods that use camera pose metadata in- cluded with the images or 3D surface information; •a generalized contrastive loss (GCL) that exploits graded similarity of image pairs to learn effective de- scriptors for VPR; •an efficient VPR pipeline trained without hard-pair min- ing, and that does not require re-ranking. Training our pipeline with a VGG-16 backbone converges ∼100x faster than NetVLAD with the same backbone, achiev- ing higher VPR results on several benchmarks. The efficiency of our scheme enables training larger back- bones in a short time.
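One natural way to write a contrastive loss with graded similarity labels is to weight the attractive and repulsive terms by the label instead of switching them on and off with a binary flag, as in the sketch below. The margin value and the L2-normalized Euclidean distance are assumptions for illustration; the exact GCL formulation should be taken from the paper.

```python
import torch.nn.functional as F

def generalized_contrastive_loss(desc_a, desc_b, similarity, margin=0.5):
    """Contrastive loss generalized to graded similarity labels in [0, 1].

    desc_a, desc_b: (B, D) image descriptors for the two sides of each pair.
    similarity:     (B,) graded similarity of each pair (0 = different place,
                    1 = identical viewpoint), e.g. derived from camera poses.
    The attractive and repulsive terms are weighted by the label instead of
    being selected by a binary positive/negative flag.
    """
    d = F.pairwise_distance(F.normalize(desc_a), F.normalize(desc_b))
    attract = similarity * d.pow(2)
    repel = (1.0 - similarity) * F.relu(margin - d).pow(2)
    return 0.5 * (attract + repel).mean()
```

With such labels, batches can be composed by sampling pairs across the whole similarity range, which is what removes the need for explicit hard-pair mining.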
Lian_Bootstrapping_Objectness_From_Videos_by_Relaxed_Common_Fate_and_Visual_CVPR_2023
Abstract We study learning object segmentation from unlabeled videos. Humans can easily segment moving objects without knowing what they are. The Gestalt law of common fate, i.e., what move at the same speed belong together, has inspired unsupervised object discovery based on motion segmenta- tion. However, common fate is not a reliable indicator of objectness: Parts of an articulated / deformable object may not move at the same speed, whereas shadows / reflections of an object always move with it but are not part of it. Our insight is to bootstrap objectness by first learning image features from relaxed common fate and then refining them based on visual appearance grouping within the im- age itself and across images statistically. Specifically, we learn an image segmenter first in the loop of approximating optical flow with constant segment flow plus small within- segment residual flow, and then by refining it for more co- herent appearance and statistical figure-ground relevance. On unsupervised video object segmentation, using only ResNet and convolutional heads, our model surpasses the state-of-the-art by absolute gains of 7/9/5%on DAVIS16 / STv2 / FBMS59 respectively, demonstrating the effective- ness of our ideas. Our code is publicly available.
1. Introduction Object segmentation from videos is useful to many vi- sion and robotics tasks [1,19,30,32]. However, most meth- ods rely on pixel-wise human annotations [4,5,13,20,23,25, 26, 29, 33, 35, 46, 47], limiting their practical applications. We focus on learning object segmentation from entirely unlabeled videos (Fig. 1). The Gestalt law of common fate , i.e.,what move at the same speed belong together , has in- spired a large body of unsupervised object discovery based on motion segmentation [6, 18, 22, 28, 41, 43, 45]. There are three main types of unsupervised video object segmentation (UVOS) methods. 1) Motion segmentation methods [18,28,41,43] use motion signals from a pretrained Optical FlowOurs OCLRAMD+ Image ArticulationReflectionFigure 1. We study how to discover objectness from unlabeled videos based on common motion and appearance. AMD [22] and OCLR [41] rely on common fate ,i.e., what move at the same speed belong together, which is not always a reliable indicator of objectness. Top: Articulation of a human body means that object parts may not move at the same speed; common fate thus leads topartial objectness .Bottom : Reflection of a swan in water al- ways moves with it but is not part of it; common fate thus leads toexcessive objectness . Our method discovers full objectness by relaxed common fate and visual grouping. AMD+ refers to AMD with RAFT flows as motion supervision for fair comparison. optical flow estimator to segment an image into foreground objects and background (Fig. 1). OCLR [41] achieves state- of-the-art performance by first synthesizing a dataset with arbitrary objects moving and then training a motion seg- mentation model with known object masks. 2) Motion- guided image segmentation methods such as GWM [6] use motion segmentation loss to guide appearance-based segmentation. Motion between video frames is only re- quired during training, not during testing. 3) Joint appear- ance segmentation and motion estimation methods such as AMD [22] learn motion and segmentation simultane- ously in a self-supervised fashion by reconstructing the next frame based on how segments of the current frame move. However, while common fate is effective at binding parts of heterogeneous appearances into one whole moving ob- ject, it is not a reliable indicator of objectness (Fig. 1). 1.Articulation : Parts of an articulated or deformable ob- ject may not move at the same speed; common fate thus leads to partial objectness containing the major moving part only. In Fig.1 top, AMD+ discovers only the mid- This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 14582 Unsupervised object segmentation MG AMD GWM Ours Sources of supervision M M∗M M+A Segment stationary objects? ✗ ✓ ✓ ✓ Handle articulated objects? - ✗ ✗ ✓ Label-free hyperparameter tuning? ✗ ✗ ✗ ✓ CIS AMDTok.CutSIMOMGGWM OCLR MODOurs Ours 5055606570758085DAVIS16 () Figure 2. Advantages over leading unsupervised object segmen- tation methods MG [43]/AMD [22]/GWM [6]: 1)With motion supervision instead of motion input, we can segment stationary objects. 2)With both motion (M) and appearance (A) as supervi- sion, we can discover full objectness from noisy motion cues. M∗ refers to implicit motion via image warping. 3)By modeling rela- tive motion within an object, we can handle articulated objects. 
4) By comparing motion-based segmentation with appearance-based segmentation, we can tune hyperparameters without labels. Our performance gain is substantial, more with post-processing ( †). dle torso of the street dancer since it moves the most, whereas OCLR misses the exposed belly which is very different from the red hoodie and the gray jogger. 2.Reflection : Shadows or reflections of an object always move with the object but are not part of the object; com- mon fate thus leads to excessive objectness that covers more than the object. In Fig.1 bottom, AMD+ or OCLR cannot separate the swan from its reflection in water. We have two insights to bootstrap full objectness from common fate in unlabeled videos. 1)To detect an articu- lated object, we allow various parts of the same object to as- sume different speeds that deviate slightly from the object’s overall speed. 2)To detect an object from its reflections, we rely on visual appearance grouping within the image itself and statistical figure-ground relevance. For example, swans tend to have distinctive appearances from the water around them, and reflections may be absent in some swan images. Specifically, we learn unsupervised object segmentation in two stages: Stage 1 learns to discover objects from mo- tion supervision with relaxed common fate, whereas Stage 2 refines the segmentation model based on image appearance. At Stage 1 , we discover objectness by computing the optical flow and learning an image segmenter in the loop of approximating the optical flow with constant segment flow plus small within-segment residual flow , relaxing common fatefrom the strict same-speed assumption. At Stage 2 , we refine our model by image appearance based on low-level visual coherence within the image itself and usual figure- ground distinction learned statistically across images. Existing UVOS methods have hyperparameters that sig- nificantly impact the quality of segmentation. For example,the number of segmentation channels is a critical parameter for AMD [22], and it is usually chosen according to an an- notated validation set in the downstream task, defeating the claim of unsupervised objectness discovery. We propose unsupervised hyperparameter tuning that does not require any annotations. We examine how well our motion-based segmentation aligns with appearance-based affinity on DINO [2] features self-supervisedly learned on ImageNet [34], which is known to capture semantic object- ness. Our idea is also model-agnostic and applicable to other UVOS methods. Built on the novel concept of R elaxed C ommon F ate (RCF), our method has several advantages over leading UVOS methods (Fig. 2): It is the only one that uses both motion and appearance to supervise learning; it can seg- ment stationary and articulated objects in single images, and it can tune hyperparameters without external annotations. On UVOS benchmarks, using only standard ResNet [12] backbone and convolutional heads, our RCF surpasses the state-of-the-art by absolute gains of 7.0%/9.1%/4.5% (6.3%/12.0%/5.8%) without (with) post-processing on DA VIS16 [32] / STv2 [19] / FBMS59 [30] respectively, val- idating the effectiveness of our ideas.
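The relaxed-common-fate objective of Stage 1 can be summarized as reconstructing the observed optical flow from a per-segment constant flow plus a small residual. The sketch below assumes soft segment masks, a precomputed flow field, and a separately predicted residual flow; the weighting and parameterization are illustrative rather than the paper's exact loss.

```python
import torch

def relaxed_common_fate_loss(masks, flow, residual_flow, residual_weight=0.1):
    """Approximate the observed optical flow by a constant flow per segment
    plus a small within-segment residual (relaxed common fate).

    masks:         (B, K, H, W) soft segment assignments (softmax over K).
    flow:          (B, 2, H, W) observed optical flow (e.g. from RAFT).
    residual_flow: (B, 2, H, W) predicted within-segment residual flow.
    """
    m = masks.unsqueeze(2)                             # (B, K, 1, H, W)
    f = flow.unsqueeze(1)                              # (B, 1, 2, H, W)
    area = m.sum(dim=(-1, -2)).clamp_min(1e-6)         # (B, K, 1)
    seg_mean = (m * f).sum(dim=(-1, -2)) / area        # (B, K, 2) constant flow per segment
    recon = (m * seg_mean[..., None, None]).sum(1)     # (B, 2, H, W)
    recon = recon + residual_flow
    recon_loss = (flow - recon).pow(2).mean()
    # Keep the residual small so each segment still mostly shares one motion.
    return recon_loss + residual_weight * residual_flow.pow(2).mean()
```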
Li_DATE_Domain_Adaptive_Product_Seeker_for_E-Commerce_CVPR_2023
Abstract Product Retrieval (PR) and Grounding (PG), aiming to seek image and object-level products respectively according to a textual query, have attracted great interest recently for better shopping experience. Owing to the lack of relevant datasets, we collect two large-scale benchmark datasets from Taobao Mall and Live domains with about 474k and 101k image-query pairs for PR, and manually annotate the object bounding boxes in each image for PG. As anno- tating boxes is expensive and time-consuming, we attempt to transfer knowledge from annotated domain to unanno- tated for PG to achieve un-supervised Domain Adaptation (PG-DA). We propose a Domain Adaptive Produc tSeeker (DATE ) framework, regarding PR and PG as Product Seek- ing problem at different levels, to assist the query date the product. Concretely, we first design a semantics-aggregated feature extractor for each modality to obtain concentrated and comprehensive features for following efficient retrieval and fine-grained grounding tasks. Then, we present two cooperative seekers to simultaneously search the image for PR and localize the product for PG. Besides, we de- vise a domain aligner for PG-DA to alleviate uni-modal marginal and multi-modal conditional distribution shift be- tween source and target domains, and design a pseudo box generator to dynamically select reliable instances and gen- erate bounding boxes for further knowledge transfer. Exten- sive experiments show that our DATE achieves satisfactory performance in fully-supervised PR, PG and un-supervised PG-DA. Our desensitized datasets will be publicly available here1.
1. Introduction Nowadays, with the rapid development of e-commerce and livestreaming, consumers can enjoy shopping on e-mall or various livestreaming platforms. Although the fact that *Corresponding author. 1https://github.com/Taobao-live/Product-Seeking QueryTitle MalldomainSource:withboxannotationLivedomainTarget:w/o.boxannotation TextualmodalProductGalleryProductGallery VisualmodalDomaingap FeaturespaceQueryDescription GroundingFeaturespace RetrievalGrounding Retrieval Figure 1. Illustration of Product Retrieval (PR) and Grounding (PG) problems on two datasets collected from Taobao Mall and Live. (1) Given a text query (i.e. Chinese title or description of a product), PR is to seek the corresponding image-level product from gallery while PG is to seek the object-level product from an image. (2) We further explore PG-DA , which aims to transfer knowledge from the annotated source domain to the unannotated target domain under the influence of multi-modal domain gap to achieve un-supervised PG. diverse products can be presented and purchased on screen brings us convenience, we are immersed in this miscel- laneous product world. Therefore, cross-modal Retrieval [1, 3, 15, 21, 41, 43, 55] for Product (PR), aiming to seek the corresponding image based on a text query, is significant for boosting holistic product search engine and promoting consumers’ shopping experience. Besides, provided that the object-level product can be localized on the target product image or live room im- age according to a query, it will help consumers focus on the desired product and also benefit the downstream vision-to-vision retrieval. And we name this interesting task as Product Grounding (PG) like Visual Grounding [29, 36, 40, 45, 56]. Generally, PR and PG are seen as two separate tasks, but we consider mining the commonalities of PR and PG and regard them as Product Seeking at image- This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 19315 level and object-level respectively. And we design a uni- fied architecture to simultaneously solve PR and PG, which is more time-saving and memory-economical than separate methods. To research the PR and PG with great practical applica- tion value, we collect two large-scale benchmark Product Seeking datasets TMPS and TLPS from Taobao Mall and Taobao Live domains with about 474k image-title pairs and 101k frame-description pairs respectively, and the locations of object-level products in images are manually annotated. As annotating bounding box of product is time-consuming and expensive, we explore how to transfer knowledge from an annotated domain to the unannotated one, and achieve un-supervised PG in domain adaptation setting (PG-DA). Thus, we propose the Domain Adaptive Produc tSeeker (DATE ) to solve the following aspects of the challenging PR, PG and PG-DA problems. Firstly, due to the complexity of the mall and live scenar- ios, discriminative representations of the image and query are prerequisite to accurately localize the object. Consid- ering conventional CNNs are hard to achieve long-distance relation reasoning and full-scale understanding, we utilize and improve the Swin-TF [37] to extract hierarchical and comprehensive features. As large-scale image seeking is demanding for PR, it is vital to ensure seeking inference is of trivial cost. 
Thus, we inject [REP] token into Swin- TF to absorb the weighted global semantics, and condense them into a single vector, which will be discriminative and concentrated for following efficient image seeking. And we perform the same semantics-aggregated technique for query feature extraction. Secondly, the capacity of both macroscopic image seek- ing and microcosmic fine-grained object seeking is neces- sary for PR and PG. Therefore, we present two cooperative seekers, where image seeker calculates the cosine similar- ity between visual and textual concentrated features for PR, and object seeker based on cross-modal interaction trans- former directly predicts the coordinates of the product by comprehensive features for PG. We validate the reasonable- ness of such cooperative strategy through experiments. Thirdly, due to the domain gap between two datasets as Figure 1 shown, applying the model straightway to test on target domain will cause performance degeneration severely for PG-DA. To the best of our knowledge, this is the first work to consider un-supervised Visual Grounding in do- main adaptation setting, and most uni-modal DA [8, 34, 38] and multi-modal DA [5,7] methods are not directly applica- ble in our complicated object seeking. Therefore, we devise a domain aligner based on Maximum Mean Discrepancy to align the domain by minimizing uni-modal marginal distri- bution and multi-modal conditional distribution divergence between source and target domains, and design a dynamic pseudo bounding box generator to select similar instancesin target domain and generate reliable boxes for knowledge transfer. To summarize, the contributions of this paper are as fol- lows: • We collect and manually annotate two large-scale benchmark datasets for PR and PG with great practi- cal application value. • We propose a unified framework with semantics- aggregated feature extractor and cooperative seekers to simultaneously solve fully-supervised PR and PG. • We explore un-supervised PG in domain adaptation setting and design the multi-modal domain aligner and dynamic box generator to transfer knowledge. • We conduct extensive experiments which shows that our methods achieve satisfactory performance in fully- supervised PR, PG and un-supervised PG-DA.
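For the domain aligner, a standard building block is the Maximum Mean Discrepancy between source and target feature batches. The snippet below shows a generic multi-kernel RBF MMD term that could align uni-modal marginal distributions; DATE additionally aligns multi-modal conditional distributions, which is not shown here, and the kernel bandwidths are arbitrary choices.

```python
import torch

def rbf_mmd2(x, y, sigmas=(1.0, 2.0, 4.0)):
    """Biased estimate of the squared Maximum Mean Discrepancy between two
    feature batches, using a mixture of RBF kernels. Minimizing it pulls
    the source and target feature distributions together.

    x: (N, D) source-domain features, y: (M, D) target-domain features.
    """
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)                  # pairwise squared distances
        return sum(torch.exp(-d2 / (2 * s**2)) for s in sigmas)
    k_xx, k_yy, k_xy = kernel(x, x), kernel(y, y), kernel(x, y)
    return k_xx.mean() + k_yy.mean() - 2 * k_xy.mean()
```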
Liu_TWINS_A_Fine-Tuning_Framework_for_Improved_Transferability_of_Adversarial_Robustness_CVPR_2023
Abstract Recent years have seen the ever-increasing importance of pre-trained models and their downstream training in deep learning research and applications. At the same time, the defense for adversarial examples has been mainly inves- tigated in the context of training from random initialization on simple classification tasks. To better exploit the potential of pre-trained models in adversarial robustness, this paper focuses on the fine-tuning of an adversarially pre-trained model in various classification tasks. Existing research has shown that since the robust pre-trained model has already learned a robust feature extractor, the crucial question is how to maintain the robustness in the pre-trained model when learning the downstream task. We study the model- based and data-based approaches for this goal and find that the two common approaches cannot achieve the objective of improving both generalization and adversarial robustness. Thus, we propose a novel statistics-based approach, Two- WIngNormliSation (TWINS) fine-tuning framework, which consists of two neural networks where one of them keeps the population means and variances of pre-training data in the batch normalization layers. Besides the robust informa- tion transfer, TWINS increases the effective learning rate without hurting the training stability since the relationship between a weight norm and its gradient norm in standard batch normalization layer is broken, resulting in a faster es- cape from the sub-optimal initialization and alleviating the robust overfitting. Finally, TWINS is shown to be effective on a wide range of image classification datasets in terms of both generalization and robustness.
1. Introduction The adversarial vulnerability of deep neural networks (DNNs) [59] is one of the major obstacles for their wide applications in safety-critical scenarios such as self-driving cars [18] and medical diagnosis [19]. *Corresponding author Figure 1. The performance of fine-tuning robust and non-robust large-scale pre-trained (PT) ResNet50 [26, 55] on (a) CIFAR10 [34] and (b) Caltech-256 [22]. We compare standard adversarial training (AT), Learning without Forgetting (LwF) (model approach) [39], joint fine-tuning with UOT data selection (data approach) [46] and our TWINS fine-tuning. The robust accuracy is evaluated using l∞-norm bounded AutoAttack [11] with ϵ = 8/255. On CIFAR10, the data-based and model-based approaches fail to improve clean and robust accuracy. On Caltech, both approaches improve the clean accuracy but hurt the robust accuracy. Our TWINS fine-tuning improves the clean and robust performance on both datasets. The pink triangle denotes the performance of standard AT with the non-robust pre-trained ResNet50, which drops considerably compared with fine-tuning starting from the robust pre-trained model. Thus, addressing this issue has been one focus of deep learning research in the past eight years. Existing works have proposed to improve adversarial robustness from different perspectives, including data augmentation [21, 48, 53, 56], regularization [37, 43, 44, 51] and neural architecture [23, 28]. However, most existing works investigate the problem under the assumption that the training data is sufficient and that training from scratch gives a satisfactory performance, which is not realistic in the real world. There are a large number of computer vision tasks where training from scratch is inferior to training from pre-trained weights, such as fine-grained image classification (e.g., Caltech-UCSD Birds-200-2011 or CUB200 [60]), object detection [42] and semantic segmentation [49]. Figure 2. The TWINS structure and training pipeline. (a) The Frozen Net and Adaptive Net have the same structure and share the weight parameters, except for batch normalization (BN) layers. The Frozen Net uses pre-trained means and standard deviations (STD) in the normalization layer, while the Adaptive Net uses the mean and STD computed from the current batch as in standard BN. (b) In each step of mini-batch stochastic gradient descent (SGD), we split the batch of adversarial examples, generated by attacking the Adaptive Net, into two sub-batches and feed them to the Adaptive Net and Frozen Net respectively. The losses of the two networks are combined and back-propagated to their shared parameters to train the network. In the inference stage, only the Adaptive Net is used. On the other hand, pre-trained models have been considered as the foundation models in deep learning [5] as a result of their strong performance and wide employment in computer vision [17, 24, 25, 45], as well as natural language processing [6, 13, 52].
Thus, how to better use the pre-trained model in downstream has emerged as a major research topic in many vision and language tasks, such as image classifica- tion under distribution shifts [47, 63], object detection [36] and semantic segmentation [29,35]. There are a few papers that investigate the pre-trained model’s robustness in target tasks [7, 8, 15, 31, 32, 57, 62]. [7, 57] mainly considers the transfer between small-scale datasets (e.g., CIFAR100 to CIFAR10), while [8, 32] use adversarial robust pre-training and fine-tuning on the same dataset, without considering a large-scale and general pre-trained model. Finally, [15, 62] investigate different kinds of robustness to corruption or out-of-distribution samples, and are not devoted to adver- sarial robustness. In this paper, we consider how to transfer the adver- sarial robustness of a large-scale robust pre-trained model (e.g., a ResNet50 pre-trained on ImageNet [12] with adver- sarial training) on various downstream classification tasks when fine-tuning with adversarial training. This problem setting is becoming more important as the standard pre- trained models do not learn robust representations from the pre-training data and are substantially weaker than the ro- bust pre-trained counterparts in some challenging down- stream tasks, e.g., fine-grained classification as shown in our experiment. Meanwhile, more large-scale robust pre- trained models are released (e.g., ResNet [55] and ViT [4]), which makes the robust pre-trained models more accessible. However, naively applying adversarial training to fine-tune from the robustly pre-trained model will lead to subopti-mal robustness, since the robust representations learned by the robust pre-trained model are not fully utilized. For ex- ample, [57] suggests that the robustness from a pre-trained model needs to be explicitly maintained for its better trans- fer to the downstream. Following the idea that the key to improving the trans- ferability of robustness is to maintain the robustness of the pre-training stage during fine-tuning [57], we first evalu- ate the data-based and model-based approach on two rep- resentative datasets, CIFAR10 and Caltech-256. The data- based approach uses pre-trained data in the fine-tuning and keeps their performance under adversarial attack, while the model-based approach regularizes the distance of features of the fine-tuned and pre-trained model. Our experiment shows that both methods fail to improve the robustness and generalization (Fig. 1), since the two methods are too ag- gressive in retaining the robustness and hurt the learning in downstream. Thus, we propose a subtle approach that keeps the batch-norm (BN) statistics of pre-training for preserving the robustness, which we call Two-WIngNormali Sation (TWINS) fine-tuning. TWINS has two neural networks with fixed and adaptive BN layers respectively, where the fixed BN layers use the population means and STDs of pre- training for normalization, while the adaptive BN layers use the standard BN normalization. Our experiment first demonstrates the importance of pre-trained BN statistics in the robust fine-tuning and then finds the benefit of TWINS in adversarial training dynamics. As the relationship be- tween weight norm and its gradient norm no longer holds in TWINS, it is able to increase the gradient magnitude with- out increasing the gradient variance. At the initial training stage, TWINS has a faster escaping speed from the sub- optimal initialization than vanilla adversarial training [41]. 
At the final training stage, the gradient of TWINS is more stable than that of adversarial training, which alleviates the robust overfitting effect [56]. In summary, the contributions of our paper are as follows: 1. We focus on the fine-tuning of large-scale robust pre-trained models as a result of their potential importance in various downstream tasks. We evaluate current approaches for retaining the pre-training robustness during fine-tuning and show that they cannot substantially improve the robustness. 2. We propose TWINS, a statistics-based approach for better transferability of robustness and generalization from the pre-training domain to the target domain. TWINS has two benefits: a) it keeps the robust statistics for downstream tasks, which helps transfer robustness to downstream tasks, and b) it enlarges the gradient magnitude without increasing the gradient variance, which helps the model escape from the initialization faster and mitigates robust overfitting. The mechanisms behind these two benefits are validated by our empirical study. 3. The effectiveness of TWINS is corroborated on five downstream datasets by comparing with two popular adversarial training baselines, adversarial training (AT) [48] and TRADES [64]. On average, TWINS improves the clean and robust accuracy by 2.18% and 1.21% compared with AT, and by 1.46% and 0.69% compared with TRADES. The experiments show the strong potential of robust pre-trained models for boosting downstream robustness and generalization when more effective fine-tuning methods are used.
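As a rough illustration of the TWINS idea, the sketch below shows one convolutional block with two normalization paths that share the convolution weights: a frozen path that normalizes with the pre-trained running statistics and an adaptive path that uses standard batch statistics. The class and argument names, and whether the frozen path's affine parameters are trained, are assumptions of this sketch rather than the paper's exact implementation.

```python
import copy
import torch.nn as nn

class TwinsBlock(nn.Module):
    """Sketch of one TWINS block: shared conv weights, two BN paths."""

    def __init__(self, pretrained_conv: nn.Conv2d, pretrained_bn: nn.BatchNorm2d):
        super().__init__()
        self.conv = pretrained_conv                        # shared by both paths
        self.bn_adaptive = copy.deepcopy(pretrained_bn)    # standard BN, batch statistics
        self.bn_frozen = copy.deepcopy(pretrained_bn)      # keeps pre-trained mean/STD
        self.bn_frozen.eval()                              # always use running statistics
        for p in self.bn_frozen.parameters():
            p.requires_grad_(False)                        # assumed: frozen affine terms
        self.act = nn.ReLU(inplace=True)

    def train(self, mode: bool = True):
        super().train(mode)
        self.bn_frozen.eval()                              # never switch to batch statistics
        return self

    def forward(self, x, path="adaptive"):
        x = self.conv(x)
        x = self.bn_adaptive(x) if path == "adaptive" else self.bn_frozen(x)
        return self.act(x)
```

During training, one sub-batch of adversarial examples would be routed with path="adaptive" and the other with path="frozen", and the two losses summed before back-propagation through the shared weights.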
Karnewar_HOLODIFFUSION_Training_a_3D_Diffusion_Model_Using_2D_Images_CVPR_2023
Abstract Diffusion models have emerged as the best approach for generative modeling of 2D images. Part of their success is due to the possibility of training them on millions if not bil- lions of images with a stable learning objective. However, extending these models to 3D remains difficult for two rea- sons. First, finding a large quantity of 3D training data is much more complex than for 2D images. Second, while it is conceptually trivial to extend the models to operate on 3D rather than 2D grids, the associated cubic growth in mem- ory and compute complexity makes this infeasible. We ad- dress the first challenge by introducing a new diffusion setup that can be trained, end-to-end, with only posed 2D images for supervision; and the second challenge by proposing an image formation model that decouples model memory from spatial memory. We evaluate our method on real-world data, using the CO3D dataset which has not been used to train 3D generative models before. We show that our dif- fusion models are scalable, train robustly, and are compet- itive in terms of sample quality and fidelity to existing ap- proaches for 3D generative modeling.
1. Introduction Diffusion models have rapidly emerged as formidable generative models for images, replacing others (e.g., VAEs, GANs) for a range of applications, including image colorization [47], image editing [35], and image synthesis [9, 21]. These models explicitly optimize the likelihood of the training samples, can be trained on millions if not billions of images, and have been shown to capture the underlying model distribution better [9] than previous alternatives. A natural next step is to bring diffusion models to 3D data. Compared to 2D images, 3D models facilitate direct manipulation of the generated content, result in perfect view consistency across different cameras, and allow object placement using direct handles. However, learning 3D diffusion models is hindered by the lack of a sufficient volume of 3D data for training. A further question is the choice of representation for the 3D data itself (e.g., voxels, point clouds, meshes, occupancy grids, etc.). ⋆Indicates equal contribution. †Part of this work was done during an internship at Meta AI. Researchers have proposed 3D-aware diffusion models for point clouds [33], volumetric shape data using wavelet features [22] and novel view synthesis [61]. They have also proposed to distill a pretrained 2D diffusion model to generate neural radiance fields of 3D objects [30, 45]. However, a diffusion-based 3D generator model trained using only 2D images for supervision is not available yet. In this paper, we contribute HOLODIFFUSION, the first unconditional 3D diffusion model that can be trained with only real posed 2D images. By posed, we mean different views of the same object with known cameras, for example, obtained by means of structure from motion [49]. We make two main technical contributions: (i) We propose a new 3D model that uses a hybrid explicit-implicit feature grid. The grid can be rendered to produce images from any desired viewpoint and, since the features are defined in 3D space, the rendered images are consistent across different viewpoints. Compared to utilizing an explicit density grid, the feature representation allows for a lower resolution grid. The latter leads to an easier estimation of the probability density due to a smaller number of variables. Furthermore, the resolution of the grid can be decoupled from the resolution of the rendered images. (ii) We design a new diffusion method that can learn a distribution over such 3D feature grids while only using 2D images for supervision. Specifically, we first generate intermediate 3D-aware features conditioned only on the input posed images. Then, following the standard diffusion model learning, we add noise to this intermediate representation and train a denoising 3D UNet to remove the noise. We apply the denoising loss as a photometric error between the rendered images and the ground-truth training images. The key advantage of this approach is that it enables training of the 3D diffusion model from 2D images, which are abundant, sidestepping the difficult problem of procuring a huge dataset of 3D models for training. We train and evaluate our method on the Co3Dv2 [46] dataset where HOLODIFFUSION outperforms existing alternatives both qualitatively and quantitatively.
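The following is a heavily simplified sketch of one training iteration in the spirit of the 2D-supervised setup described above. The `feature_encoder`, `denoiser_3d` and `renderer` callables are placeholders for a view-conditioned grid encoder, a 3D UNet and a differentiable render-from-grid operator, and the crude noise schedule is an assumption, not the paper's schedule.

```python
import torch
import torch.nn.functional as F

def training_step(posed_images, cameras, feature_encoder, denoiser_3d, renderer, T=1000):
    """One sketched training step: lift posed views to a 3D feature grid,
    diffuse it, denoise it, and supervise only through rendered 2D views."""
    # 1) Intermediate 3D-aware feature grid conditioned on the posed input views.
    feat_grid = feature_encoder(posed_images, cameras)          # (B, C, D, H, W)

    # 2) Add Gaussian noise at a random timestep (stand-in linear schedule).
    t = torch.randint(0, T, (feat_grid.shape[0],), device=feat_grid.device)
    alpha_bar = 1.0 - (t.float() + 1) / T
    a = alpha_bar.view(-1, 1, 1, 1, 1)
    noise = torch.randn_like(feat_grid)
    noisy_grid = a.sqrt() * feat_grid + (1 - a).sqrt() * noise

    # 3) Denoise with a 3D UNet and render back to the input viewpoints.
    denoised_grid = denoiser_3d(noisy_grid, t)
    rendered = renderer(denoised_grid, cameras)                  # (B, V, 3, H, W)

    # 4) Photometric error against the ground-truth training images
    #    replaces a direct 3D denoising target.
    return F.mse_loss(rendered, posed_images)
```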
Kulal_Putting_People_in_Their_Place_Affordance-Aware_Human_Insertion_Into_Scenes_CVPR_2023
Abstract We study the problem of inferring scene affordances by presenting a method for realistically inserting people into scenes. Given a scene image with a marked region and an image of a person, we insert the person into the scene while respecting the scene affordances. Our model can infer the set of realistic poses given the scene context, re-pose the reference person, and harmonize the composition. We set up the task in a self-supervised fashion by learning to re- pose humans in video clips. We train a large-scale diffusion model on a dataset of 2.4M video clips that produces diverse plausible poses while respecting the scene context. Given the learned human-scene composition, our model can also hal- lucinate realistic people and scenes when prompted without conditioning and also enables interactive editing. A quan- titative evaluation shows that our method synthesizes more realistic human appearance and more natural human-scene interactions than prior work.
1. Introduction A hundred years ago, Jakob von Uexküll pointed out the crucial, even defining, role that the perceived environment (umwelt) plays in an organism's life [64]. At a high level, he argued that an organism is only aware of the parts of the environment that it can affect or be affected by. In a sense, our perception of the world is defined by what kinds of interactions we can perform. Related ideas of functional visual understanding (what actions does a given scene afford an agent?) were discussed in the 1930s by the Gestalt psychologists [35] and later described by J.J. Gibson [21] as affordances. Although this direction inspired many efforts in vision and psychology research, a comprehensive computational model of affordance perception remains elusive. The value of such a computational model is undeniable for future work in vision and robotics research. Project page: https://sumith1896.github.io/affordance-insertion. The past decade has seen a renewed interest in such computational models for data-driven affordance perception [15, 20, 24, 25, 67]. Early works in this space deployed a mediated approach by inferring or using intermediate semantic or 3D information to aid in affordance perception [24], while more recent methods focus on direct perception of affordances [15, 20, 67], more in line with Gibson's framing [21]. However, these methods are severely constrained by the specific requirements of the datasets, which reduce their generalizability. To facilitate a more general setting, we draw inspiration from the recent advances in large-scale generative models, such as text-to-image systems [49, 50, 54]. The samples from these models demonstrate impressive object-scene compositionality. However, these compositions are implicit, and the affordances are limited to what is typically captured in still images and described by captions. We make the task of affordance prediction explicit by putting people "into the picture" [24] and training on videos of human activities. We pose our problem as a conditional inpainting task (Fig. 1). Given a masked scene image (first row) and a reference person (first column), we learn to inpaint the person into the masked region with correct affordances. At training time, we borrow two random frames from a video clip, mask one frame, and try to inpaint using the person from the second frame as the condition. This forces the model to learn both the possible scene affordances given the context and the necessary re-posing and harmonization needed for a coherent image. At inference time, the model can be prompted with different combinations of scene and person images. We train a large-scale model on a dataset of 2.4M video clips of humans moving in a wide variety of scenes. In addition to the conditional task, our model can be prompted in different ways at inference time. As shown in the last row Fig. 1, when prompted without a person, our model can hallucinate a realistic person. Similarly, when prompted without a scene, it can also hallucinate a realistic scene. One can also perform partial human completion tasks such as changing the pose or swapping clothes. We show that training on videos is crucial for predicting affordances and present ablations and baseline comparisons in Sec. 4. Figure 1. Given a masked scene image (first row) and a reference person (first column), our model can successfully insert the person into the scene image. The model infers the possible pose (affordance) given the scene context, reposes the person appropriately, and harmonizes the insertion. We can also partially complete a person (last column) and hallucinate a person (last row) when no reference is given. To summarize, our contributions are: •We present a fully self-supervised task formulation for learning affordances by learning to inpaint humans in masked scenes. •We present a large-scale generative model for human insertion trained on 2.4M video clips and demonstrate improved performance both qualitatively and quantitatively compared to the baselines. •In addition to conditional generation, our model can be prompted in multiple ways to support person hallucination, scene hallucination, and interactive editing.
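To illustrate the self-supervised setup, the sketch below constructs one training triple (masked scene, reference person, target) from a video clip. It assumes per-frame person masks and a user-supplied `crop_person` callable are available; these names, the gray fill value, and the tensor layout are assumptions of the sketch, not details taken from the paper.

```python
import random
import torch

def make_training_example(clip_frames, person_masks, crop_person):
    """Build one (masked scene, reference person, target) triple from a clip.

    clip_frames  : list of (3, H, W) float tensors
    person_masks : list of (H, W) {0,1} tensors marking the person in each frame
    crop_person  : callable that crops (and resizes) the person region of a frame
    """
    i, j = random.sample(range(len(clip_frames)), 2)

    target = clip_frames[i]                          # frame the model must reproduce
    scene = target.clone()
    scene[:, person_masks[i].bool()] = 0.5           # mask out the person region (gray fill)

    # Person from a *different* frame, so the model must re-pose and harmonize.
    reference = crop_person(clip_frames[j], person_masks[j])

    # The generative model is trained to inpaint `scene` so that it matches
    # `target`, conditioned on `reference`; at test time any scene/person pair
    # (or neither) can be supplied instead.
    return scene, reference, target
```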
Lin_Video_Test-Time_Adaptation_for_Action_Recognition_CVPR_2023
Abstract Although action recognition systems can achieve top per- formance when evaluated on in-distribution test points, they are vulnerable to unanticipated distribution shifts in test data. However, test-time adaptation of video action recog- nition models against common distribution shifts has so far not been demonstrated. We propose to address this prob- lem with an approach tailored to spatio-temporal models that is capable of adaptation on a single video sample at a step. It consists in a feature distribution alignment tech- nique that aligns online estimates of test set statistics to- wards the training statistics. We further enforce prediction consistency over temporally augmented views of the same test video sample. Evaluations on three benchmark ac- tion recognition datasets show that our proposed technique is architecture-agnostic and able to significantly boost the performance on both, the state of the art convolutional ar- chitecture TANet and the Video Swin Transformer. Our pro- posed method demonstrates a substantial performance gain over existing test-time adaptation approaches in both eval- uations of a single distribution shift and the challenging case of random distribution shifts. Code will be available athttps://github.com/wlin-at/ViTTA .
1. Introduction State-of-the-art neural architectures [8, 40, 46, 48–50] are very effective in action recognition, but recent work shows they are not robust to shifts in the distribution of the test data [33, 51]. Unfortunately, in practical scenarios, such distribution shifts are very difficult to avoid or account for. For example, cameras used for recognizing motorized or pedestrian traffic events may register rare weather conditions, like a hailstorm, and sports action recognition systems can be affected by perturbations generated by spectators at sports arenas, such as the smoke of flares. Shifts in the data distribution can also result from inconspicuous changes in the video processing setup, for instance, a change of the algorithm used to compress the video feed. ∗Equally contributing authors. †Correspondence: [email protected] Figure 1. Our architecture-agnostic ViTTA enables action recognition models to overcome severe video corruptions. In a fully online manner, i.e. processing each test video only once, we perform test-time adaptation. In particular, we align the statistics of the (corrupted) test data towards the (clean) training data such that they are better aligned (top: t-SNE [41] of mean features from the final feature extractor layer for clean training data and for 12 differently corrupted test datasets). This results in a significant performance improvement for action recognition. In image classification, distribution shift can be mitigated by Test-Time Adaptation (TTA) [16, 19, 21, 23, 28, 34, 38, 42], which uses the unlabeled test data to adapt the model to the change in data distribution. However, methods developed for image classification are not well suited for action recognition. Most action recognition applications require running memory- and computation-hungry temporal models online, with minimal delay, and under tight hardware constraints. Moreover, videos are more vulnerable to distribution shifts than images [2, 33, 51]. Some examples of such distribution shifts are given in Fig. 1. Due to limited exposure times, video frames are likely to feature higher variations of noise level, provoked by illumination changes. They are more affected by motion blur, which varies with the speed of motion observed in the scene. They also feature stronger compression artifacts, which change with the compression ratio, often dynamically adjusted to the available bandwidth. Our experiments show that existing TTA algorithms, developed for image classification, do not cope with these challenges well, yielding marginal improvement over networks used on corrupted data without any adaptation. Our goal is to propose an effective method for online test-time adaptation of action recognition models. Operating online and with small latency can require drastically constraining the batch size, especially when the hardware resources are limited and the employed model is large. We therefore focus on the scenario in which test samples are processed individually, one at a time. To ease the integration of our method in existing systems, we require it to be capable of adapting pretrained networks, both convolutional and transformer-based, without the need to retrain them. In this work, we propose the first video test-time adaptation approach, ViTTA. To address the above requirements, we turn to feature alignment [19, 28, 34, 37, 53], a common TTA method that aligns distributions of features computed for test and training data by minimizing the discrepancy between their statistics. Feature alignment does not require any modifications to the training procedure and is architecture-agnostic. However, existing feature alignment methods are not well suited for online adaptation, because they require relatively large test batches to accurately estimate the statistics. We address this by employing an exponential moving average to estimate test feature statistics online. This enables us to perform the alignment by processing one video sample at a time, at a low computational and memory cost. Additionally, we show that even though the temporal dimension of video data poses challenges, it also has a silver lining. We leverage this by creating augmented views of the input videos via temporally resampling frames in the video. This has two benefits: first, multiple augmented views lead to more accurate statistics of the overall video content; second, it allows us to enforce prediction consistency across the views, making the adaptation more effective. Our extensive evaluations on the three most popular action recognition benchmarks demonstrate that ViTTA boosts the performance of both TANet [27], the state-of-the-art convolutional architecture, and the Video Swin Transformer [26], and outperforms the existing TTA methods proposed for image data by a significant margin. ViTTA performs favorably in both the evaluation of a single distribution shift and the challenging case of random distribution shifts. ViTTA also has high practical value. It is fully online and applicable in use cases requiring minimal delays. It does not require collection or storage of a test video dataset, which is significant in terms of data privacy protection, especially when processing confidential user videos. ViTTA can be seamlessly incorporated into systems already in operation, as it does not require re-training existing networks. Therefore it can harness state-of-the-art video architectures. Our contributions can be summarized as follows: • We benchmark existing TTA methods in online adaptation of action recognition models to distribution shifts in test data, on the three most popular action recognition datasets, UCF101 [35], Something-Something v2 [13] and Kinetics 400 [18]. • We adapt the feature alignment approach to online action recognition, generating a substantial performance gain over existing techniques. • We propose a novel, video-specific adaptation technique (ViTTA) that enforces consistency of predictions for temporally re-sampled frame sequences and show that it contributes to adaptation efficacy.
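The sketch below illustrates the online alignment idea for a single layer: exponential-moving-average estimates of test-time feature statistics are pulled towards pre-computed training statistics. The L1 distance, the momentum value, and the assumed (B, C, T, H, W) layout are illustrative choices, not the paper's exact settings.

```python
import torch

class OnlineStatAligner:
    """EMA estimates of test-time feature statistics for one layer, with an
    alignment loss towards the stored training statistics."""

    def __init__(self, train_mean, train_var, momentum=0.05):
        self.train_mean, self.train_var = train_mean, train_var   # (C,) each
        self.ema_mean, self.ema_var = None, None
        self.m = momentum

    def __call__(self, feat):
        # feat: (B, C, T, H, W); reduce over every dimension except channels.
        dims = [d for d in range(feat.dim()) if d != 1]
        mean = feat.mean(dim=dims)
        var = feat.var(dim=dims, unbiased=False)
        if self.ema_mean is None:                 # first test sample initializes the EMA
            self.ema_mean, self.ema_var = mean.detach(), var.detach()
        # Detach the history so gradients flow only through the current sample.
        self.ema_mean = (1 - self.m) * self.ema_mean.detach() + self.m * mean
        self.ema_var = (1 - self.m) * self.ema_var.detach() + self.m * var
        return (self.ema_mean - self.train_mean).abs().mean() + \
               (self.ema_var - self.train_var).abs().mean()
```

In an online loop, one such aligner per chosen layer would be evaluated on the temporally augmented views of each incoming test video, and the summed alignment loss (plus a prediction-consistency term) back-propagated before moving to the next sample.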
Lee_Exploring_Discontinuity_for_Video_Frame_Interpolation_CVPR_2023
Abstract Video frame interpolation (VFI) is the task that synthe- sizes the intermediate frame given two consecutive frames. Most of the previous studies have focused on appropriate frame warping operations and refinement modules for the warped frames. These studies have been conducted on natural videos containing only continuous motions. How- ever, many practical videos contain various unnatural ob- jects with discontinuous motions such as logos, user in- terfaces and subtitles. We propose three techniques that can make the existing deep learning-based VFI architec- tures robust to these elements. First is a novel data aug- mentation strategy called figure-text mixing (FTM) which can make the models learn discontinuous motions during training stage without any extra dataset. Second, we pro- pose a simple but effective module that predicts a map called discontinuity map ( D-map), which densely distin- guishes between areas of continuous and discontinuous mo- tions. Lastly, we propose loss functions to give supervi- sions of the discontinuous motion areas which can be ap- plied along with FTM and D-map. We additionally collect a special test benchmark called Graphical Discontinuous Motion (GDM) dataset consisting of some mobile games and chatting videos. Applied to the various state-of-the-art VFI networks, our method significantly improves the inter- polation qualities on the videos from not only GDM dataset, but also the existing benchmarks containing only continu- ous motions such as Vimeo90K, UCF101, and DAVIS.
1. Introduction The video frame interpolation (VFI) task is to generate the intermediate frame given some consecutive frames from a video. When the time interval between input frames is fixed, we can obtain a smoother video, and when the frame rate is fixed, we can obtain a slow-motion video. This can also be applied to other vision tasks such as video compression [2, 34], view synthesis [7, 13, 39], and other real-world applications [23, 29, 37]. *Both authors contributed equally to this work. Figure 1. The examples of discontinuous motion. Most of the previous works focus on the motion of the objects in videos. They utilize estimated flow maps [12, 16], kernels [6, 15, 17, 21, 22], or externally estimated optical flow maps [19, 20, 24, 25, 29] to place each object in the middle of its positions in the adjacent frames. However, as personal broadcast and cloud gaming content increases, many practical videos contain special objects which do not move continuously, such as user interfaces, watermarks, logos, chatting windows and subtitles (see the examples in Figure 1). Besides, these elements are received at the display devices as part of each frame, not as additional information. Therefore, many video enhancement frameworks, including VFI, should be improved to be robust to these discontinuous motions. In this paper, our purpose is to expand the spectrum of motions to address both continuous and discontinuous ones efficiently, not focusing only on special videos with discontinuous motions. To achieve this goal, we propose three techniques to process videos containing both types of motion. First, we propose a novel data augmentation method called Figure-Text Mixing (FTM), which consists of Figure Mixing (FM) and Text Mixing (TM). FM is an augmentation of fixed random figures and TM is an augmentation of discontinuously moving random texts. The networks can learn to be robust to both continuous and discontinuous motions using FTM without any additional training datasets. Second, we propose a lightweight module that estimates a map called the discontinuity map (D-map), which determines whether the motion of each output pixel is continuous or discontinuous. When estimating the pixels in the discontinuous area, a pixel at the same location in one of the input frames is copied instead of predicting the interpolated value. To prove the versatility and adaptivity of our D-map estimation module, we apply it to various state-of-the-art VFI models instead of proposing our own VFI architecture. Lastly, if we utilize both FTM and D-map, it is possible to supervise the model by providing the ground truth of the D-map. Therefore, we propose an additional loss function to help our model estimate the D-map easily. We construct a special test set called the Graphic Discontinuous Motion (GDM) dataset to evaluate how our method and the competing works deal with discontinuous motions. Our approach shows significantly improved results compared to the other methods on the GDM dataset. Moreover, our method outperforms those methods on the regular benchmarks that only contain continuous motions, such as the Vimeo90K test [36], DAVIS [26] and UCF101 [30] datasets.
Our main contributions can be summarized as fol- lows: •New Data Augmentation Strategy. We propose a new data augmentation strategy called FTM that, when ap- plied to existing video datasets, makes models learn both continuous and discontinuous motions without any additional train datasets. •New Module & Loss Function. We propose a new module which can separate continuous and discontin- uous motions. This module can be applied to many recent deep learning-based VFI architectures. We also propose a loss function to supervise the module. •Performance. Applied to various state-of-the-art VFI models, our method achieves performance improve- ment on not only the dataset containing discontinuous motions, but also many of the other benchmarks with only continuous motions.
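To make the D-map mechanism concrete, the sketch below shows how a predicted discontinuity map could gate, per pixel, between the frame synthesized by the base VFI network and a pixel copied verbatim from one of the input frames. The soft blending form and the choice of copy source are assumptions of this sketch; the method's exact parameterization may differ.

```python
import torch

def blend_with_dmap(interpolated, copy_source, d_map):
    """Combine the regular VFI output with copied pixels using a discontinuity map.

    interpolated : (B, 3, H, W) frame synthesized by the base VFI network
    copy_source  : (B, 3, H, W) one of the input frames, whose pixels are reused
                   verbatim where motion is discontinuous (logos, UI, subtitles)
    d_map        : (B, 1, H, W) values in [0, 1]; 1 means "discontinuous" at that pixel
    """
    d = d_map.clamp(0.0, 1.0)
    # Where d is high, copy the input pixel; elsewhere, keep the interpolated pixel.
    return d * copy_source + (1.0 - d) * interpolated
```

With FTM augmentation, the locations of the pasted figures and texts provide a ground-truth D-map, so a standard binary cross-entropy term on d_map can supervise this module alongside the usual reconstruction loss.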
Kweon_Weakly_Supervised_Semantic_Segmentation_via_Adversarial_Learning_of_Classifier_and_CVPR_2023
Abstract In Weakly Supervised Semantic Segmentation (WSSS), Class Activation Maps (CAMs) usually 1) do not cover the whole object and 2) be activated on irrelevant regions. To address the issues, we propose a novel WSSS frame- work via adversarial learning of a classifier and an im- age reconstructor. When an image is perfectly decomposed into class-wise segments, information (i.e., color or tex- ture) of a single segment could not be inferred from the other segments. Therefore, inferability between the seg- ments can represent the preciseness of segmentation. We quantify the inferability as a reconstruction quality of one segment from the other segments. If one segment could be reconstructed from the others, then the segment would be imprecise. To bring this idea into WSSS, we simulta- neously train two models: a classifier generating CAMs that decompose an image into segments and a reconstruc- tor that measures the inferability between the segments. As in GANs, while being alternatively trained in an ad- versarial manner, two networks provide positive feedback to each other. We verify the superiority of the proposed framework with extensive ablation studies. Our method achieves new state-of-the-art performances on both PAS- CAL VOC 2012 and MS COCO 2014. The code is available athttps://github.com/sangrockEG/ACR.
1. Introduction Over the past decade, learning-based semantic segmen- tation has made significant advancements. However, the high labeling cost remains a major challenge when applying existing methods to real cases. In response to this challenge, without relying on pixel-wise supervision, Weakly Super- vised Semantic Segmentation (WSSS) has been proposed to learn semantic segmentation with weak labels only. The field of WSSS has studied several types of weak la- bels such as scribbles [29, 36], bounding boxes [16, 23,31], and image-level classification labels [1–3, 7,12,19,21,28, *The first two authors contributed equally. In alphabetical order. Figure 1. Demonstration of our method. We are motivated by the relation between semantic segmentation and inferability. We realize it as adversarial learning of a classifier and a reconstructor. 37, 42, 46, 47,49,52], which are relatively inexpensive to acquire. Among them, using image-level labels is the most widely studied setting due to its high accessibility and effi- ciency. Different from the other supervisions that roughly notify the locations of the things in the image, image-level labels only contain categorical information. Therefore, the main challenge in WSSS using image-level labels is local- izing the regions of each class. To dispel this challenge, most of the existing literature employs Class Activation Maps (CAMs) [51] that highlight the regions highly contributing to the prediction of a classi- fier. Intuitively, the classifier learns shared patterns among the images including the same class, and thereby the CAM of each class is activated on the image regions correspond- ing to the class. However, the CAMs usually show a ten- dency to focus on the most discriminative regions of each class (e.g. face for the catclass), which leads to incomplete segmentation. Also, due to the absence of pixel-wise su- pervision, the CAMs are rather imprecise at the boundary, which is a critical issue from the perspective of segmenta- tion. Technically, the goal of WSSS can be summarized as obtaining better CAMs, which can serve as precise pseudo- labels for learning semantic segmentation. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 11329 In this paper, we propose a novel method inspired by simple but meaningful intuition as in Fig. 1. If semantic segmentation is perfectly accomplished, each of the objects in the image is perfectly segmented into mutually indepen- dent segments in terms of color and texture. In this case, each of the segments does not include any clue about the rest of the image. Therefore, if segmentation of the image is perfectly performed, no single segment can “infer” colors or textures about the other segments. As a contra-positive statement, if any information such as color or textures about a segment could be inferred from the other segments, the semantic segmentation could be regarded as imperfect. Ac- cordingly, it might be possible to measure the quality of se- mantic segmentation based on the inferability between the segments. However, how could we quantify the degree of “inferability”? To this end, we propose to employ the image reconstruction task, which reconstructs one image segment from the other segments. Then, the quality of reconstruc- tion could be regarded as a measure of inferability. 
Here, note that the image reconstruction task does not introduce any additional supervision. We formulate the aforementioned intuition as an adver- sarial learning of a classifier and a reconstructor. In spe- cific, according to the CAMs obtained by the classifier, we decompose an image into two segments: a segment of the target class and a segment of the non-target class (the other classes). The reconstructor is trained to reconstruct one seg- ment by using the other segment as the only input. On the other hand, we promote the CAMs to decompose an im- age into segments that reduce the inferability of the recon- structor. In other words, the classifier is trained to not only classify the image but also generate CAMs correctly seg- menting the image, while competing with the reconstructor. Ultimately, we improve the quality of the CAMs by jointly training the two competitors (i.e., the classifier and the re- constructor) in an adversarial manner. The adversarial learning strategy of our framework is similar to Generative Adversarial Networks (GANs) [14]. Like the discriminator in GANs is specialized to discrimi- nate the real/fake samples, the reconstructor in our frame- work is trained to fully exploit the remnant contained in the given segment for reconstructing the other segment. Simi- larly, the classifier in our framework learns to generate pre- cise CAMs, using the reconstructor as a measure of the in- ferability between the segments, like the generator getting feedback from the discriminator in GANs. Consequently, our adversarial learning framework can achieve WSSS us- ing only the supervision that comes from the image-level classification labels and the input images themselves. The proposed method has methodological similarity to the existing Adversarial Erasing (AE) methods of WSSS in that it erases (or spatially decomposes) the image according to CAMs. However, the insights behind our method and theAE methods are far different. AE methods mask the highly activated regions of the CAMs from the image and impose classification loss on the remained image. Therefore, due to the lack of regularization for the erasing process, the CAMs usually suffer from undesirable expansion. On the other hand, the proposed method is inspired by the relation be- tween segmentation and reconstruction. And we formulate it as adversarial learning between two networks performing each task. This realization not only provides reliable guid- ance for CAMs from the perspective of segmentation, but also enables each network to improve while training pro- ceeds, based on the positive feedback from its counterpart. To verify the superiority of our method, we conduct extensive ablation studies and comparisons with the other state-of-the-art (SoTA) WSSS methods. Further, on both PASCAL VOC 2012 [11] and MS COCO [30] datasets, the proposed framework achieves a new SoTA. The contribution of this paper is threefold: •We formulate the problem of WSSS as minimizing infer- ability between the segments decomposed by the CAMs. •We propose a novel WSSS framework based on adversar- ial learning of the classifier and reconstructor. •We achieve state-of-the-art performance on both the PASCAL VOC 2012 val/test set and MS COCO valset.
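As a rough illustration of the adversarial classifier–reconstructor scheme, the sketch below alternates one reconstructor update (learn to infer the CAM-selected segment from its complement) and one classifier update (classify correctly while making the segments hard to infer). The single-direction reconstruction, the L1 reconstruction loss, and the assumption that the classifier returns (logits, CAMs) are simplifications of this sketch, not the paper's exact losses.

```python
import torch
import torch.nn.functional as F

def acr_step(images, labels, classifier, reconstructor, opt_cls, opt_rec, lam=1.0):
    """One alternating update of the CAM classifier and the reconstructor."""
    # --- Reconstructor step: infer the object segment from its complement. ---
    with torch.no_grad():
        _, cams = classifier(images)            # cams in [0, 1], shape (B, 1, H, W)
    fg, bg = cams * images, (1 - cams) * images
    loss_rec = F.l1_loss(reconstructor(bg), fg)
    opt_rec.zero_grad()
    loss_rec.backward()
    opt_rec.step()

    # --- Classifier step: classify correctly AND make segments non-inferable. ---
    logits, cams = classifier(images)
    fg, bg = cams * images, (1 - cams) * images
    loss_cls = F.multilabel_soft_margin_loss(logits, labels)
    loss_adv = -F.l1_loss(reconstructor(bg), fg)   # maximize the reconstructor's error
    loss = loss_cls + lam * loss_adv
    opt_cls.zero_grad()
    loss.backward()                               # reconstructor grads are discarded at its next step
    opt_cls.step()
    return loss_cls.item(), loss_rec.item()
```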
Liu_DA_Wand_Distortion-Aware_Selection_Using_Neural_Mesh_Parameterization_CVPR_2023
Abstract We present a neural technique for learning to select a local sub-region around a point which can be used for mesh parameterization. The motivation for our framework is driven by interactive workflows used for decaling, tex- turing, or painting on surfaces. Our key idea is to incor- porate segmentation probabilities as weights of a classi- cal parameterization method, implemented as a novel dif- ferentiable parameterization layer within a neural network framework. We train a segmentation network to select 3D regions that are parameterized into 2D and penalized by the resulting distortion, giving rise to segmentations which are distortion-aware. Following training, a user can use our system to interactively select a point on the mesh and ob- tain a large, meaningful region around the selection which induces a low-distortion parameterization. Our code1and project2are publicly available.
1. Introduction Many interactive workflows for decaling, texturing, or painting on a 3D mesh require extracting a large surface patch around a point that can be mapped to the 2D plane with low distortion. Unlike global parameterization ap- proaches that map the entire mesh to 2D while introducing as few cuts as possible [1, 7–9, 17, 35, 38, 41, 43, 44, 53, 54, 56]), this work focuses on segmenting a local sub-region around a point of interest on a mesh for parameterization [25,36,45,46,60,69,70,73]. Local parameterizations are ad- vantageous in certain modeling settings because they are in- 1https://github.com/threedle/DA-Wand 2https://threedle.github.io/DA-Wand/herently user-interactive, can achieve lower distortion than their global counterparts, and are computationally more ef- ficient. To date, however, techniques for extracting a sur- face patch that is amenable to local parameterization have largely relied on heuristics balancing various compactness, patch size, and developability priors [18]. This work instead takes a data-driven approach to learn distortion-aware local segmentations that are optimal for local parameterization. Our proposed framework uses a novel differentiable parameterization layer to predict a patch around a point and its corresponding UV map. This enables self-supervised training, in which our network is encouraged to predict area-maximizing and distortion- minimizing patches through a series of carefully con- structed priors, allowing us to sidestep the scarcity of parameterization-labeled datasets. We name our system the Distortion-Aware Wand (DA Wand) , which given an input mesh and initial triangle se- lection, outputs soft segmentation probabilities. We incor- porate these probabilities into our parameterization layer by devising a weighted version of a classical parameterization method - LSCM [35] - which we call wLSCM . This adap- tation gives rise to a probability-guided parameterization , over which a distortion energy can be computed to enable self-supervised training. We prove a theorem stating that the wLSCM UV map converges to the LSCM UV map as the soft probabilities converge to a binary segmentation mask, which establishes the direct relation between probabilities and binary segmentation in the parameterization context. Reducing the distortion of the UV map and maximiz- ing the segmentation area are competing objectives, as UV distortion scales monotonically with patch size. Naively summing these objectives leads to poor optimization with This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 16739 undesirable local minima. We harmonize these objectives by devising a novel thresholded-distortion loss , which pe- nalizes triangles with distortion above some user-prescribed threshold. We additionally encourage compactness through asmoothness loss inspired by the graphcuts algorithm [26]. We design a novel near-developable segmentation dataset to initialize the weights of our segmentation net- work, with an automatic generation algorithm which can be run out of the box. We then train this network end-to-end on a dataset of unlabelled natural shapes using our parame- terization layer with distortion and compactness priors. 
We leverage a MeshCNN [15] backbone to learn directly on the input triangulation which enables sensitivity to sharp features and a large receptive field which enables patch growth. Moreover, by utilizing intrinsic mesh features as in- put, our system remains invariant to rigid-transformations. DA Wand allows a user to interactively select a triangle on the mesh and obtain a large, meaningful region around the selection which can be UV parameterized with low dis- tortion. We show that the neural network is able to leverage global information to extend the segmentation with mini- mal distortion gain, in contrast to existing heuristic meth- ods which stop at the boundaries of high curvature re- gions. Our method can produce user-conditioned segmenta- tions at interactive rates, beating out alternative techniques. We demonstrate a compelling interactive application of DA Wand in Fig. 1, in which different regions on the sorting hat mesh are iteratively selected and decaled. We show addi- tional example textures in the supplemental.
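One plausible way to combine the competing objectives described above is sketched below: selected triangles are penalized only where their parameterization distortion exceeds a threshold, while the covered area is rewarded. The specific per-triangle weighting, the distortion measure, and the constants are assumptions of this sketch; the paper's thresholded-distortion and area terms may be formulated differently.

```python
import torch

def selection_losses(probs, tri_distortion, tri_area, tau=0.05, alpha=1.0):
    """Balance patch size against parameterization distortion.

    probs          : (T,) soft selection probabilities from the segmentation network
    tri_distortion : (T,) distortion of each triangle under the probability-weighted
                     parameterization (e.g. a symmetric distortion energy minus its minimum)
    tri_area       : (T,) triangle areas
    tau, alpha     : distortion threshold and area weight (illustrative values)
    """
    total_area = tri_area.sum()
    # Penalize selected triangles only where distortion exceeds the threshold.
    distortion_loss = (probs * torch.relu(tri_distortion - tau) * tri_area).sum() / total_area
    # Reward covering more surface area with the selected patch.
    area_reward = (probs * tri_area).sum() / total_area
    return distortion_loss - alpha * area_reward
```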
Liu_PolyFormer_Referring_Image_Segmentation_As_Sequential_Polygon_Generation_CVPR_2023
Abstract In this work, instead of directly predicting the pixel-level segmentation masks, the problem of referring image seg- mentation is formulated as sequential polygon generation, and the predicted polygons can be later converted into seg- mentation masks. This is enabled by a new sequence-to- sequence framework, Polygon Transformer (PolyFormer), which takes a sequence of image patches and text query to- kens as input, and outputs a sequence of polygon vertices autoregressively. For more accurate geometric localization, we propose a regression-based decoder, which predicts the precise floating-point coordinates directly, without any co- ordinate quantization error. In the experiments, PolyFormer outperforms the prior art by a clear margin, e.g., 5.40% and 4.52% absolute improvements on the challenging Re- fCOCO+ and RefCOCOg datasets. It also shows strong generalization ability when evaluated on the referring video segmentation task without fine-tuning, e.g., achieving com- petitive 61.5% J&Fon the Ref-DAVIS17 dataset.
1. Introduction Referring image segmentation (RIS) [7, 19, 21, 29–32, 39, 45, 50, 61, 73, 78, 86, 87] combines vision-language understanding [42, 55, 57, 75, 92] and instance segmentation [2, 8, 16, 25, 52], and aims to localize the segmentation mask of an object given a natural language query. It generalizes traditional object segmentation from a fixed number of predefined categories to any concept described by free-form language, which requires a deeper understanding of the image and language semantics. The conventional pipeline [7, 19, 21, 29–32, 45, 50, 61, 73, 87] first extracts features from the image and text inputs, and then fuses the multi-modal features together to predict the mask. *Work done during internship at AWS AI. †Equal contribution. Figure 1. The illustration of the PolyFormer pipeline for referring image segmentation (polygon vertex sequence) and referring expression comprehension (bounding box corner points). The polygons are converted to segmentation masks in the end. A segmentation mask encodes the spatial layout of an object, and most instance segmentation models [8, 16, 25, 52] rely on a dense binary classification network to determine whether each pixel belongs to the object. This pixel-to-pixel prediction is preferred by a convolutional operation, but it neglects the structure among the output predictions. For example, each pixel is predicted independently of other pixels. In contrast, a segmentation mask can also be represented by a sparse set of structured polygon vertices delineating the contour of the object [1, 6, 41, 47, 49, 81]. This structured sparse representation is cheaper than a dense mask representation and is the preferred annotation format for most instance segmentation datasets [15, 49, 70]. Thus, it is also tempting to predict structured polygons directly. However, how to effectively predict this type of structured outputs is challenging, especially for convolutional neural networks (CNNs), and previous efforts have not shown much success yet [41, 47, 81]. We address this challenge by resorting to a sequence-to-sequence (seq2seq) framework [3, 14, 65, 66, 74], and propose Polygon Transformer (PolyFormer) for referring image segmentation. As illustrated in Fig. 1, it takes a sequence of image patches and text query tokens as input, and autoregressively outputs a sequence of polygon vertices. Since each vertex prediction is conditioned on all preceding predicted vertices, the output predictions are no longer independent of each other. The seq2seq framework is flexible on its input and output format, as long as both of them can be formulated as sequences of variable length. Thus, it is natural to concatenate the visual and language features together as a long sequence, avoiding complicated multi-modal feature fusion as in prior work [7, 30, 73, 86, 87]. In the meantime, the output can also be a long sequence of multiple polygons separated by separator tokens, covering the scenario where the segmentation masks are not connected, e.g., by occlusion, as shown in Fig. 1. Furthermore, since a bounding box can be represented as a sequence of two corner points (i.e., top left and bottom right), they can also be output by PolyFormer along with the polygon vertices. Thus, referring image segmentation (polygon) and referring expression comprehension (bounding box) can be unified in our simple PolyFormer framework. Localization is important for polygon and bounding box generation, as a single coordinate prediction mistake may result in substantial errors in the mask or bounding box prediction. However, in recent seq2seq models for the visual domain [9, 10, 56, 77], in order to accommodate all tasks in a unified seq2seq framework, the coordinates are quantized into discrete bins and the prediction is formulated as a classification task. This is not ideal since geometric coordinates lie in a continuous space instead of a discrete one, and classification is thus usually suboptimal for localization task [23, 43, 93]. Instead, we formulate localization as a regression task, due to its success in object detection [5, 24, 25, 68], where floating-point coordinates are directly predicted without any quantization error. Motivated by [25], the feature embedding for any floating-point coordinate in PolyFormer is obtained by bilinear interpolation [33] of its neighboring indexed embeddings. This is in contrast with the common practice [9, 56, 77] in which the coordinate feature is indexed from a dictionary with a fixed number of discrete coordinate bins. These changes enable our PolyFormer to make accurate polygon and bounding box predictions. We evaluate PolyFormer on three major referring image segmentation benchmarks. It achieves 76.94%, 72.15%, and 71.15% mIoU on the validation sets of RefCOCO [89], RefCOCO+ [89] and RefCOCOg [60], outperforming the state of the art by absolute margins of 2.48%, 5.40%, and 4.52%, respectively. PolyFormer also shows strong generalization ability when directly applied to the referring video segmentation task without finetuning. It achieves 61.5% J&F on the Ref-DAVIS17 dataset [38], comparable with [80] which is specifically designed for that task. Our main contributions are summarized as follows: • We introduce a novel framework for RIS and REC, called PolyFormer, which formulates them as a sequence-to-sequence prediction problem. Due to its flexibility, it can naturally fuse multi-modal features together as input and generate a sequence of polygon vertices and bounding box corner points. • We propose a regression-based decoder for accurate coordinate prediction in this seq2seq framework, which outputs continuous 2D coordinates directly without quantization error. To the best of our knowledge, this is the first work formulating geometric localization as a regression task in seq2seq framework instead of classification as in [9, 10, 56, 77]. • For the first time, we show that the polygon-based method surpasses mask-based ones across all three main referring image segmentation benchmarks, and it can also generalize well to unseen scenarios, including video and synthetic data.
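The sketch below illustrates the quantization-free coordinate embedding described above: a floating-point (x, y) in [0, 1]^2 is embedded by bilinearly interpolating the neighboring entries of a learned 2D grid of embeddings, rather than rounding to a discrete bin. The grid size, normalization convention, and initialization are assumptions of this sketch.

```python
import torch
import torch.nn as nn

class ContinuousCoordEmbedding(nn.Module):
    """Embed a floating-point (x, y) by bilinear interpolation over a learned grid."""

    def __init__(self, grid_size=64, dim=256):
        super().__init__()
        self.grid = nn.Parameter(torch.randn(grid_size, grid_size, dim) * 0.02)
        self.n = grid_size

    def forward(self, xy):                        # xy: (..., 2), values in [0, 1]
        pos = xy.clamp(0, 1) * (self.n - 1)
        x0, y0 = pos[..., 0].floor().long(), pos[..., 1].floor().long()
        x1, y1 = (x0 + 1).clamp(max=self.n - 1), (y0 + 1).clamp(max=self.n - 1)
        wx = (pos[..., 0] - x0.float()).unsqueeze(-1)
        wy = (pos[..., 1] - y0.float()).unsqueeze(-1)
        # Gather the four neighboring embeddings and blend them bilinearly.
        e00, e01 = self.grid[x0, y0], self.grid[x0, y1]
        e10, e11 = self.grid[x1, y0], self.grid[x1, y1]
        return (1 - wx) * (1 - wy) * e00 + (1 - wx) * wy * e01 \
             + wx * (1 - wy) * e10 + wx * wy * e11
```

At decoding time, the previously predicted floating-point vertex would be embedded this way and fed back into the decoder, and the regression head would predict the next continuous (x, y) directly.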
Lin_Vision_Transformers_Are_Parameter-Efficient_Audio-Visual_Learners_CVPR_2023
Abstract Vision transformers (ViTs) have achieved impressive re- sults on various computer vision tasks in the last several years. In this work, we study the capability of frozen ViTs, pretrained only on visual data, to generalize to audio-visual data without finetuning any of its original parameters. To do so, we propose a latent audio-visual hybrid ( LAV ISH) adapter that adapts pretrained ViTs to audio-visual tasks by injecting a small number of trainable parameters into every layer of a frozen ViT. To efficiently fuse visual and audio cues, our LAV ISHadapter uses a small set of latent to- kens, which form an attention bottleneck, thus, eliminating the quadratic cost of standard cross-attention. Compared to the existing modality-specific audio-visual methods, our approach achieves competitive or even better performance on various audio-visual tasks while using fewer tunable pa- rameters and without relying on costly audio pretraining or external audio encoders. Our code is available at https: //genjib.github.io/project_page/LAVISH/
1. Introduction Humans can seamlessly process audio-visual cues and use them in unison to learn associations between auditory and visual signals (e.g., the sound of barking and the visual concept of dog). In contrast, most modern computational audio-visual models [34, 38, 79, 80, 82, 84, 92] study each of these modalities in isolation, which leads to individually-tailored modality-specific models. While such modality-specific approaches often achieve state-of-the-art results on various audio-visual benchmarks, they also have several major shortcomings. First, optimizing and training models for a specific modality (e.g., audio or video) requires significant research effort and computing power. For example, training large-scale models for audio and video requires more than 2,000 and 5,000 V100 hours respectively [10, 86], which is not feasible for many smaller research labs. Additionally, since modern visual and audio models are becoming larger, it can be quite costly to use separate backbone networks for processing each modality. For instance, the audio-visual MBT-Large model [60], built using separate audio and visual encoders, requires more than 48GB of GPU memory, which is only available on the costly, high-end GPU servers such as A100. Lastly, the modality-specific approaches are only trained on individual modalities and then typically combined via late fusion. As a result, such models cannot benefit from cross-modal cues in the early layers, which often leads to suboptimal performance on audio-visual tasks requiring joint audio-visual reasoning. The recent emergence of transformer models [2, 21, 24, 40, 60] has propelled research in modality-agnostic architectures for multi-modal understanding. In particular, the generality of the transformer architecture [16] makes it easy to apply these models to different modalities without any modality-specific adaptations. This property is well illustrated by the fact that transformers [16] currently define state-of-the-art across many domains, including natural language processing (NLP) [8, 15, 41, 42, 51, 59, 64, 91], computer vision (CV) [6, 10, 20], audio analysis [22, 23, 86], speech processing [7, 73, 77]. Such an architecture convergence across different domains/modalities inspired several recent works to investigate the cross-modal generalization of pretrained transformers [40, 50, 62, 75]. However, most of them are either focused on language models [47, 50, 75], or study close-domain transfer (e.g., image → video) [20, 21, 62]. In this work, we focus on the cross-modal generalization of pretrained vision transformers (ViT) [16] to the audio-visual data.
Figure 1. We investigate whether frozen vision transformers (ViTs) pretrained only on visual data can generalize to audio data for complex audio-visual understanding tasks. For this purpose, we introduce a latent audio-visual hybrid adapter (LAVISH), which is inserted into every layer of a frozen ViT model. By tuning only a small number of additional parameters we can enable a pretrained ViT to efficiently (i) adapt to the audio data, and (ii) fuse relevant cues across audio and visual modalities.
Our main inspiration for this study stems from the fact that audio can be represented as a 2D spectrogram, which summarizes a 1D raw audio signal into a 2D structure akin to audio images. Prior work has shown that vision architectures (e.g., CNNs [12, 26] or ViTs [23, 77]) can be used to process such audio images. However, most prior methods use these architectures for large-scale audio representation learning. Instead of pretraining ViTs on large-scale audio data, we hypothesize that the ViTs pretrained on images can simultaneously encode representations that are useful for both images and audio, making them useful for audio-visual tasks without large-scale audio pretraining. To investigate this hypothesis, we propose a latent audio-visual hybrid (LAVISH) adapter that directly adapts frozen ViTs, pretrained only on images, to audio-visual tasks by adding a small number of trainable parameters for audio specialization and audio-visual fusion. Such a scheme allows us to apply frozen ViTs to audio-visual data without updating the original ViT parameters but only the parameters of our proposed LAVISH modules, which we insert into every layer of a frozen ViT. For an efficient cross-modal fusion within the LAVISH module, we use a small set of latent tokens to first compress the information from all modality-specific tokens (e.g., either audio or video) and then apply cross-attention between the latent tokens and all the tokens of another modality (e.g., either video or audio). Such a scheme allows us to eliminate the quadratic cost of standard cross-attention. Furthermore, to allow information transfer between audio-to-video and, conversely, video-to-audio, we adopt a bi-directional LAVISH scheme, which enables learning a better audio-visual representation. In our experimental section, we demonstrate that by keeping all the original ViT parameters frozen and updating only a small set of newly added parameters, the frozen ViTs, pretrained only on image data, learn to solve complex audio-visual understanding tasks requiring a joint understanding of audio and visual contents. In particular, compared to the state-of-the-art modality-specific audio-visual approaches, our method achieves competitive or even better results on the tasks of audio-visual event localization, audio-visual segmentation, and audio-visual question answering while using a smaller number of tunable parameters, and without relying on a separate pre-trained audio encoder (e.g., VGGish [26], AST [23], etc.), or costly large-scale audio pretraining. We also show that our proposed latent audio-visual hybrid adapter (LAVISH) is more effective and efficient than the standard adapter schemes [27].
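To make the latent-token bottleneck concrete, a minimal PyTorch sketch is given below. It is not the authors' LAVISH module; the latent-token count, hidden size, and the use of nn.MultiheadAttention are illustrative assumptions.

import torch
import torch.nn as nn

class LatentBottleneckFusion(nn.Module):
    """Sketch of a latent-token attention bottleneck (hyperparameters illustrative).
    Cost is O((N + M) * L) instead of O(N * M) for direct cross-attention,
    where L is the small number of latent tokens."""
    def __init__(self, dim: int = 768, num_latents: int = 8, heads: int = 8):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(num_latents, dim) * 0.02)
        self.compress = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.expand = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, target_tokens, source_tokens):
        # target_tokens: (B, N, D) tokens of the modality being updated (e.g., visual)
        # source_tokens: (B, M, D) tokens of the other modality (e.g., audio)
        B = target_tokens.size(0)
        lat = self.latents.unsqueeze(0).expand(B, -1, -1)
        # 1) latents summarize the source modality
        lat, _ = self.compress(lat, source_tokens, source_tokens)
        # 2) target tokens read the summary; residual keeps the frozen ViT path intact
        fused, _ = self.expand(target_tokens, lat, lat)
        return target_tokens + fused

In a bi-directional scheme, one such block would update visual tokens from audio and a second block would update audio tokens from video, with only these adapter parameters (and the latents) being trained while the ViT itself stays frozen.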
Kang_Variational_Distribution_Learning_for_Unsupervised_Text-to-Image_Generation_CVPR_2023
Abstract We propose a text-to-image generation algorithm based on deep neural networks when text captions for images are unavailable during training. In this work, instead of simply generating pseudo-ground-truth sentences of training im- ages using existing image captioning methods, we employ a pretrained CLIP model, which is capable of properly align- ing embeddings of images and corresponding texts in a joint space and, consequently, works well on zero-shot recogni- tion tasks. We optimize a text-to-image generation model by maximizing the data log-likelihood conditioned on pairs of image-text CLIP embeddings. To better align data in the two domains, we employ a principled way based on a varia- tional inference, which efficiently estimates an approximate posterior of the hidden text embedding given an image and its CLIP feature. Experimental results validate that the pro- posed framework outperforms existing approaches by large margins under unsupervised and semi-supervised text-to- image generation settings.
1. Introduction Recent advances in text-to-image (T2I) generation techniques [6, 8, 17, 18, 21, 26–29, 36] have shown promising results by employing generative adversarial networks [9], autoregressive models [33], or diffusion models [11, 32] to synthesize images based on their text captions. However, these approaches require a paired dataset that consists of images and their corresponding text captions, and, consequently, incur significant annotation costs, especially for labeling image captions. To alleviate this limitation, unsupervised learning methods for T2I generation have recently drawn attention from the computer vision community, where the models learn to generate images without paired text captions. Existing T2I models [1, 2, 34, 38] based on unsupervised learning exploit Contrastive Language-Image Pretraining (CLIP) [25] to sidestep the absence of text captions during training. Specifically, after a text embedding is estimated using a given image embedding, the T2I model is trained to synthesize an image conditioned on the estimated text embedding. However, although image and text embeddings extracted by CLIP are not accurately aligned, existing approaches assume that the distinction is ignorable [34] or simple to recover by just adding Gaussian noise [38] without considering the underlying structure of text embeddings. Thus, those algorithms may suffer from large discrepancies between true and estimated text embeddings at both training and testing. To tackle the challenge, we propose a variational distribution learning technique for unsupervised T2I generation, where the lower bound of the data log-likelihood is maximized in a principled way. Specifically, we first regard a text embedding as a hidden random variable while an image and its CLIP embedding are observable random variables. Then, we decompose the variational lower bound into three parts: 1) the similarity between the text embedding prior and posterior, 2) the log-likelihood of the image embedding given the text embedding, and 3) the log-likelihood of the image given the image and text embeddings in the trained T2I model. Since the lower-bound formulation enforces the matching between the prior and posterior distributions of the text embedding, our method achieves a more accurate estimation of the embedding and reduces the discrepancy between the true and estimated embeddings. For the optimization of the variational objective, we employ a two-stage training strategy for T2I models. In the first stage, we learn an encoder-decoder architecture that takes the image embedding as an input and the estimated text embedding as a latent bottleneck. Then, our network estimates two conditional distributions of CLIP embeddings, one for the variational distribution of the text embedding given the image embedding and the other for the model distribution of the image embedding given the text embedding. The parameters of the two distributions are obtained from the first two terms in the variational lower-bound objective. Note that we relax the Kullback-Leibler (KL) divergence term in the training objective of the first stage to an adversarial training loss, specifically, the Jensen-Shannon divergence. (∗This work was partly done during an internship at Kakao Brain.)
Since the KL divergence is only tractable for a confined family of distributions, this relaxation allows more flexible choices for the conditional and the prior distributions. In the second stage, a T2I model learns the conditional distribution of the images given the estimated text embeddings and the image features. Altogether, the proposed method achieves outstanding performance on widely adopted datasets [19, 31]. The main contributions of our work are summarized below: • We propose a principled approach for unsupervised and semi-supervised text-to-image generation tasks based on a variational inference technique. • We theoretically show that our method considers the underlying structure of text embeddings, which can eventually lead to better generalization performance. • We empirically confirm that the proposed algorithm outperforms the existing methods by large margins. The rest of our paper is organized as follows. Section 2 overviews the related work about unsupervised training methods in text-to-image generation. Section 3 describes the main idea of our approach while Sections 4 and 5 discuss the procedures of the first and second training stages, respectively. The experimental results are presented in Section 6, and we finally conclude this paper in Section 7.
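A highly simplified sketch of the three-term objective is shown below, with diagonal Gaussians standing in for the prior, posterior, and embedding likelihood. The paper relaxes the KL term to an adversarial (Jensen-Shannon) loss and uses its own network parameterizations, so everything here (the Gaussian forms, the MSE surrogate for the embedding likelihood, the function names) is an assumption for illustration only.

import torch
import torch.nn.functional as F

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    # KL( N(mu_q, var_q) || N(mu_p, var_p) ), summed over dimensions
    return 0.5 * (logvar_p - logvar_q
                  + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
                  - 1).sum(dim=-1)

def first_stage_objective(img_emb, mu_q, logvar_q, mu_p, logvar_p, decoder):
    """Sketch of the first two terms of the lower bound:
    1) match the posterior q(t | image) to the prior p(t),
    2) reconstruct the CLIP image embedding from a sampled text embedding.
    The third term (image likelihood) is handled by the second-stage T2I model."""
    # Reparameterized sample of the hidden text embedding
    t = mu_q + torch.randn_like(mu_q) * (0.5 * logvar_q).exp()
    kl_term = gaussian_kl(mu_q, logvar_q, mu_p, logvar_p).mean()
    recon = decoder(t)                       # predicted image embedding given t
    recon_term = F.mse_loss(recon, img_emb)  # stands in for -log p(img_emb | t)
    return kl_term + recon_term              # minimized in stage one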
Liu_GRES_Generalized_Referring_Expression_Segmentation_CVPR_2023
Abstract Referring Expression Segmentation (RES) aims to generate a segmentation mask for the object described by a given language expression. Existing classic RES datasets and methods commonly support single-target expressions only, i.e., one expression refers to one target object. Multi-target and no-target expressions are not considered. This limits the usage of RES in practice. In this paper, we introduce a new benchmark called Generalized Referring Expression Segmentation (GRES), which extends the classic RES to allow expressions to refer to an arbitrary number of target objects. Towards this, we construct the first large-scale GRES dataset called gRefCOCO that contains multi-target, no-target, and single-target expressions. GRES and gRefCOCO are designed to be well-compatible with RES, facilitating extensive experiments to study the performance gap of the existing RES methods on the GRES task. In the experimental study, we find that one of the big challenges of GRES is complex relationship modeling. Based on this, we propose a region-based GRES baseline ReLA that adaptively divides the image into regions with sub-instance clues, and explicitly models the region-region and region-language dependencies. The proposed approach ReLA achieves new state-of-the-art performance on both the newly proposed GRES and classic RES tasks. The proposed gRefCOCO dataset and method are available at https://henghuiding.github.io/GRES.
(†Equal contribution. Corresponding author ([email protected]).)
1. Introduction Referring Expression segmentation (RES) is one of the most important tasks of multi-modal information processing. Given an image and a natural language expression that describes an object in the image, RES aims to find this target object and generate a segmentation mask for it. It has great potential in many applications, such as video production, human-machine interaction, and robotics. Currently, most of the existing methods follow the RES rules defined in the popular datasets ReferIt [20] and RefCOCO [34, 47] and have achieved great progress in recent years. Limitations of classic RES. However, most classic RES methods have some strong pre-defined constraints to the task. First, the classic RES does not consider no-target expressions that do not match any object in the image. This means that the behavior of the existing RES methods is undefined if the target does not exist in the input image. When it comes to practical applications under such constraint, the input expression has to match an object in the image, otherwise problems inevitably occur. Second, most existing datasets, e.g., the most popular RefCOCO [34, 47], do not contain multi-target expressions that point to multiple instances. This means that multiple inputs are needed to search objects one by one. E.g., in Fig. 1, four distinct expressions with four times of model calls are required to segment “All people”. Our experiments show that classic RES methods trained on existing datasets cannot be well-generalized to these scenarios. New benchmark and dataset. In this paper, we propose a new benchmark, called Generalized Referring Expression Segmentation (GRES), which allows expressions indicating any number of target objects. GRES takes an image and a referring expression as input, the same as classic RES. Different from classic RES, as shown in Fig. 1, GRES further supports multi-target expression that specifies multiple target objects in a single expression, e.g., “Everyone except the kid in white”, and no-target expression that does not touch on any object in the image, e.g., “the kid in blue”. This provides much more flexibility for input expression, making referring expression segmentation more useful and robust in practice. However, existing referring expression datasets [20, 34, 47] do not contain multi-target expression nor no-target samples, but only have single-target expression samples, as shown in Tab. 1. To facilitate research efforts on realistic referring segmentation, we build a new dataset for GRES, called gRefCOCO. It complements RefCOCO with two kinds of samples: multi-target samples, in which the expression points to two or more target instances in the image, and no-target samples, in which the expression does not match any object in the image.
Table 1. Comparison among different referring expression datasets, including ReferIt [20], RefCOCO(g) [34, 47], PhraseCut [40], and our proposed gRefCOCO. Multi-target: expression that specifies multiple objects in the image. No-target: expression that does not touch on any object in the image.
                   ReferIt    RefCOCO(g)   PhraseCut    gRefCOCO
  Image Source     CLEF [8]   COCO [25]    VG [22]      COCO [25]
  Multi-target     ✗          ✗            (fallback)   ✓
  No-target        ✗          ✗            ✗            ✓
  Expression type  free       free         templated    free
A baseline method.
Moreover, we design a baseline method based on the objectives of the GRES task. It is known that modeling relationships, e.g., region-region interactions, plays a crucial role in RES [46]. However, classic RES methods only have one target to detect so that many methods can achieve good performance without explicit region-region interaction modeling. But in GRES, as multi-target expressions involve multiple objects in one expression, it is more challenging and essential to model the long-range region-region dependencies. From this point, we propose a region-based method for GRES that explicitly model the interaction among regions with sub- instance clues. We design a network that splits the image into regions and makes them explicitly interact with each other. Moreover, unlike previous works where regions come from a simple hard-split of the input image, our network soft-collates features for each region, achieving more flexibility. We do extensive experiments on ourproposed methods against other RES methods, showing that the explicit modeling of interaction and flexible region features greatly contributes to the performance of GRES. In summary, our contributions are listed as follows: 1. We propose a benchmark of Generalized Referring Expression Segmentation (GRES), making RES more flexible and practical in real-world scenarios. 2. We propose a large-scale GRES dataset gRefCOCO. To the best of our knowledge, this is the first referring expression dataset that supports expressions indicating an arbitrary number of target objects. 3. We propose a solid baseline method ReLA for GRES to model complex ReLA tionships among objects, which achieves the new state-of-the-art performance on both classic RES and newly proposed GRES tasks. 4. We do extensive experiments and comparisons of the proposed baseline method and other existing RES methods on the GRES, and analyze the possible causes of the performance gap and new challenges in GRES.
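As a rough illustration of the kind of explicit region-region and region-language interaction discussed above, a minimal PyTorch sketch is given below. It is not the ReLA architecture; the layer choices, feature sizes, and the image-level no-target head are assumptions made only for illustration.

import torch
import torch.nn as nn

class RegionInteraction(nn.Module):
    """Sketch: region features attend to each other and to the expression words,
    then produce per-region mask kernels and an image-level no-target logit."""
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.region_region = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.region_language = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.no_target_head = nn.Linear(dim, 1)   # "expression matches nothing" logit
        self.kernel_head = nn.Linear(dim, dim)    # region kernels, later correlated with pixel features

    def forward(self, region_feats, lang_feats):
        # region_feats: (B, R, D) soft-collated region features
        # lang_feats:   (B, T, D) word features of the expression
        r, _ = self.region_region(region_feats, region_feats, region_feats)
        r, _ = self.region_language(r, lang_feats, lang_feats)
        no_target_logit = self.no_target_head(r.mean(dim=1))  # (B, 1)
        kernels = self.kernel_head(r)                          # (B, R, D)
        return kernels, no_target_logit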
Li_A_Simple_Baseline_for_Video_Restoration_With_Grouped_Spatial-Temporal_Shift_CVPR_2023
Abstract Video restoration, which aims to restore clear frames from degraded videos, has numerous important applications. The key to video restoration lies in utilizing inter-frame information. However, existing deep learning methods often rely on complicated network architectures, such as optical flow estimation, deformable convolution, and cross-frame self-attention layers, resulting in high computational costs. In this study, we propose a simple yet effective framework for video restoration. Our approach is based on grouped spatial-temporal shift, which is a lightweight and straightforward technique that can implicitly capture inter-frame correspondences for multi-frame aggregation. By introducing grouped spatial shift, we attain expansive effective receptive fields. Combined with basic 2D convolution, this simple framework can effectively aggregate inter-frame information. Extensive experiments demonstrate that our framework outperforms the previous state-of-the-art method, while using less than a quarter of its computational cost, on both video deblurring and video denoising tasks. These results indicate the potential for our approach to significantly reduce computational overhead while maintaining high-quality results. Code is available at https://github.com/dasongli1/Shift-Net.
1. Introduction The popularity of capturing videos using handheld devices continues to surge. However, these videos often suffer from various types of degradation, including image noise due to low-cost sensors and severe blurs resulting from camera shake or object movement. Consequently, video restoration has garnered significant attention in recent years. The keys of video restoration methods lie in designing components to realize alignment across frames. While several methods [7, 38, 39, 53, 60] employ convolutional networks for multi-frame fusion without explicit alignment, their performance tends to be suboptimal. Most methods rely on explicit alignment to establish temporal correspondences, using techniques such as optical flow [46, 61] or deformable convolution [11, 69]. However, these approaches often necessitate either complex or computationally expensive network architectures to achieve large receptive fields, and they may fail in scenarios involving large displacements [27], frame noise [8, 63], and blurry regions [7, 48]. Recently, transformers [12, 15, 34] have become promising alternatives for attaining long-range receptive fields. A video restoration transformer (VRT) [32] is developed to model long-range dependency, but its large number of self-attention layers makes it computationally demanding. Inspired by the success of the Swin transformer [34], large kernel convolutions [14, 35] emerge as a direct solution to obtain large effective receptive fields. However, extremely large kernels (e.g., kernel size > 13×13) do not necessarily guarantee improved performance (shown in 5). In this study, we propose a simple, yet effective spatial-temporal shift block to achieve a large effective receptive field for temporal correspondence. We introduce a Group Shift-Net, which incorporates the proposed spatial-temporal shift blocks for alignment along with basic 2D U-Nets for frame-wise feature encoding and restoration. The grouped spatial-temporal shift process involves the separate shifting of input clip features in both temporal and spatial dimensions, followed by fusion using 2D convolution blocks. Despite its minimal computational demands, the shift block offers large receptive fields for efficient multi-frame fusion. By stacking multiple spatial-temporal shift blocks, the aggregation of long-term information is achieved. This streamlined framework models long-term dependencies without depending on resource-demanding optical flow estimation [19, 47, 61], deformable convolution [11, 54, 57], or self-attention [32].
Figure 1. Video deblurring on GoPro dataset [40]. Our models have fewer parameters (disk sizes) and occupy the top-left corner, indicating superior performances (PSNR on y-axis) with less computational cost (FLOPS on x-axis).
Figure 2. Different modules for multi-frame aggregation: a) convolution [53], b) optical flow [32, 42], c) deformable convolution [11, 54, 57], d) self-attention [32, 34] and e) our grouped spatial shift. Point-wise convolution, shortcut and normalization are omitted for simplicity.
Notably, while the temporal shift module (TSM) [33] was originally proposed for video understanding, it is not effective for video restoration. Our method distinguishes itself from TSM in three fundamental ways: a) Alternative bi-directional temporal shift. TSM [33] employs bi-directional channel shift during training, causing misalignment of channels across three frames, which in turn increases the difficulty of multi-frame aggregation. Conversely, our method utilizes alternative temporal shifts, effectively circumventing this issue. b) Spatial shift. In addition, our approach also incorporates a spatial shift for multi-frame features. We divide the features into several groups, each with distinct shift lengths and directions in the 2D dimension. This grouped spatial shift offers multiple candidate displacements for matching misaligned features. c) Feature fusion. To seamlessly merge various shifted groups, the kernel size of the convolution is set equal to the base shift length. By combining elements b) and c), the spatial-temporal shift achieves large receptive fields (e.g., 23×23). The contributions of this study are two-fold: 1) We propose a simple, yet effective framework for video restoration, which introduces a grouped spatial-temporal shift for efficient and effective temporal feature aggregation; 2) Our framework surpasses state-of-the-art methods with much fewer FLOPs on both video deblurring and video denoising tasks, demonstrating its generalization capability.
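A minimal sketch of the grouped spatial shift is given below to make the idea concrete. It is not the authors' implementation: the shift pattern, the use of circular torch.roll instead of zero padding, and the exact fusion kernel are illustrative assumptions (the text only specifies that the fusion kernel size equals the base shift length).

import torch
import torch.nn as nn

class GroupedSpatialShift(nn.Module):
    """Sketch of a grouped spatial shift: channel groups are displaced by different
    2D offsets and a convolution fuses the shifted groups, giving a large effective
    receptive field from plain 2D convs. Assumes channels >= number of offsets."""
    def __init__(self, channels: int, base_shift: int = 5):
        super().__init__()
        s = base_shift
        # 9 candidate displacements on a coarse grid (including no shift)
        self.offsets = [(dy * s, dx * s) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
        # Fusion conv with kernel size equal to the base shift length
        self.fuse = nn.Conv2d(channels, channels, kernel_size=s, padding=s // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) features of one (temporally shifted) frame
        groups = x.chunk(len(self.offsets), dim=1)
        shifted = [torch.roll(g, shifts=off, dims=(2, 3))
                   for g, off in zip(groups, self.offsets)]
        return self.fuse(torch.cat(shifted, dim=1))

In a full pipeline, such a spatial shift would be interleaved with alternating temporal shifts of the clip features and stacked several times, so that 2D convolutions alone can aggregate information over large spatial displacements and many frames.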
Lee_Im2Hands_Learning_Attentive_Implicit_Representation_of_Interacting_Two-Hand_Shapes_CVPR_2023
Abstract We present Implicit Two Hands (Im2Hands), the first neural implicit representation of two interacting hands. Un- like existing methods on two-hand reconstruction that rely on a parametric hand model and/or low-resolution meshes, Im2Hands can produce fine-grained geometry of two hands with high hand-to-hand and hand-to-image coherency. To handle the shape complexity and interaction context be- tween two hands, Im2Hands models the occupancy volume of two hands – conditioned on an RGB image and coarse 3D keypoints – by two novel attention-based modules respon- sible for (1) initial occupancy estimation and (2) context- aware occupancy refinement, respectively. Im2Hands first learns per-hand neural articulated occupancy in the canon- ical space designed for each hand using query-image at- tention. It then refines the initial two-hand occupancy in the posed space to enhance the coherency between the two hand shapes using query-anchor attention. In addi- tion, we introduce an optional keypoint refinement mod- ule to enable robust two-hand shape estimation from pre- dicted hand keypoints in a single-image reconstruction sce- nario. We experimentally demonstrate the effectiveness of Im2Hands on two-hand reconstruction in comparison to re- lated methods, where ours achieves state-of-the-art results. Our code is publicly available at https://github. com/jyunlee/Im2Hands .
1. Introduction Humans use hand-to-hand interaction in everyday activities, which makes modeling 3D shapes of two interacting hands important for various applications (e.g., human-computer interaction, robotics, and augmented or virtual reality). However, the domain of two-hand shape reconstruction remains relatively under-explored, while many existing studies have put efforts into single-hand reconstruction from RGB [1, 2, 4, 13, 21, 24, 44], depth [38], or sparse keypoints [18, 44]. These single hand-based methods are not effective when directly applied for interacting two-hand reconstruction, since it introduces additional challenges including inter-hand collisions and mutual occlusions. Recently, a few learning-based methods on two-hand shape reconstruction [23, 32, 42] have been proposed since the release of the large-scale interacting hand dataset (i.e., InterHand2.6M [27]). Two-Hand-Shape-Pose [42] and IHMR [32] reconstruct two hands by estimating MANO [31] parameters, which are later mapped to triangular hand meshes using a pre-defined statistical model (i.e., MANO). IntagHand [23] directly regresses a fixed number of mesh vertex coordinates using a graph convolutional network (GCN). These methods mainly model the shape of two interacting hands based on a low-resolution mesh representation with a fixed topology of MANO (please refer to Figure 1). In this paper, we present Implicit Two Hands (Im2Hands), the first neural implicit representation of two interacting hands. Unlike the existing mesh-based two-hand reconstruction methods, Im2Hands can capture the fine-grained geometry of two interacting hands by learning a continuous 3D occupancy field. Im2Hands (1) produces two-hand meshes with an arbitrary resolution, (2) does not require dense vertex correspondences or statistical model parameter annotations for training, and (3) learns output shapes with precise hand-to-hand and hand-to-image alignment. As two interacting hands are highly articulated objects, we take inspiration from recent neural articulated implicit functions [7, 8, 18, 25, 29, 34] that learn an implicit geometry leveraging the object canonical space computed from an input pose observation. Our two-hand articulated implicit function is also driven by input pose and shape observations, which are represented as sparse 3D keypoints and an RGB image, respectively. To effectively handle the shape complexity and interaction context between two hands, Im2Hands consists of two novel attention-based modules responsible for initial hand occupancy estimation and context-aware two-hand occupancy refinement, respectively. The initial occupancy estimation network first predicts the articulated occupancy volume of each hand in the canonical space. Given a 3D query point, it (1) performs query canonicalization using the keypoint encoder of HALO [18] to effectively capture pose-dependent hand deformations and (2) extracts a hand shape feature using our novel query-image attention module to model shape-dependent hand deformations.
As it is non- trivial to model two-hand interaction while learning in the canonical space defined for each hand, our context-aware occupancy refinement network modifies the initial two-hand occupancy in the original posed space to enhance hand-to- hand coherency. Given the initial two-hand shape repre- sented as anchored point clouds, it uses query-anchor at- tention to learn a refined two-hand occupancy in a context- aware manner. Furthermore, we consider a practical sce- nario of two-hand reconstruction using Im2Hands from sin- gle images, where no ground truth keypoints are observed as inputs to our method. To this end, we introduce an op- tional input keypoint refinement network to enable more robust two-hand shape reconstruction by alleviating errors in the input 3D keypoints predicted from an off-the-shelf image-based two-hand keypoint estimation method (e.g., [11, 20, 23, 27, 42]). Overall, our main contributions are summarized as fol- lows: • We introduce Im2Hands, the first neural implicit rep- resentation of two interacting hands. Im2Hands recon- structs resolution-independent geometry of two-hands with high hand-to-hand and hand-to-image coherency.• To effectively learn an occupancy field of the complex two-hand geometries, we propose two novel attention- based modules that perform (1) initial occupancy es- timation in the canonical space and (2) context-aware occupancy refinement in the original posed space, re- spectively. We additionally introduce an optional key- point refinement module to enable more robust two- hand shape estimation using a single image input. • We demonstrate the effectiveness of Im2Hands in comparison to the existing (1) two-hand mesh-based and (2) single-hand implicit function-based recon- struction methods, where Im2Hands achieves state-of- the-art results in interacting two-hand reconstruction.
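The query-anchor refinement idea can be sketched as follows. This is only an illustration, not the Im2Hands module: the feature sizes, the way the initial occupancy is injected, and the module names are assumptions.

import torch
import torch.nn as nn

class QueryAnchorRefiner(nn.Module):
    """Sketch of context-aware occupancy refinement via query-anchor attention:
    a query point in the posed space attends to features anchored on the initial
    two-hand point clouds and outputs a refined occupancy value."""
    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        self.query_proj = nn.Linear(3 + 1, dim)   # query xyz + initial occupancy
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.occ_head = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, query_xyz, init_occ, anchor_feats):
        # query_xyz:    (B, Q, 3) query points in the posed space
        # init_occ:     (B, Q, 1) initial per-hand occupancy at those points
        # anchor_feats: (B, A, dim) features anchored on the initial two-hand shapes
        q = self.query_proj(torch.cat([query_xyz, init_occ], dim=-1))
        ctx, _ = self.attn(q, anchor_feats, anchor_feats)
        return torch.sigmoid(self.occ_head(q + ctx))  # refined occupancy in [0, 1]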
Liang_Open-Vocabulary_Semantic_Segmentation_With_Mask-Adapted_CLIP_CVPR_2023
Abstract Open-vocabulary semantic segmentation aims to seg- ment an image into semantic regions according to text de- scriptions, which may not have been seen during train- ing. Recent two-stage methods first generate class-agnostic mask proposals and then leverage pre-trained vision- language models, e.g., CLIP , to classify masked regions. We identify the performance bottleneck of this paradigm to be the pre-trained CLIP model, since it does not perform well on masked images. To address this, we propose to finetune CLIP on a collection of masked image regions and their corresponding text descriptions. We collect training data by mining an existing image-caption dataset ( e.g., COCO Cap- tions), using CLIP to match masked image regions to nouns in the image captions. Compared with the more precise and manually annotated segmentation labels with fixed classes (e.g., COCO-Stuff), we find our noisy but diverse dataset can better retain CLIP’s generalization ability. Along with finetuning the entire model, we utilize the “blank” areas in masked images using a method we dub mask prompt tuning. Experiments demonstrate mask prompt tuning brings signif- icant improvement without modifying any weights of CLIP , and it can further improve a fully finetuned model. In par- ticular, when trained on COCO and evaluated on ADE20K- 150, our best model achieves 29.6% mIoU, which is +8.5% higher than the previous state-of-the-art. For the first time, open-vocabulary generalist models match the performance of supervised specialist models in 2017 without dataset spe- cific adaptations.
1. Introduction Semantic segmentation aims to group pixels into meaningful regions with corresponding semantic categories. Although remarkable progress has been made [6, 7, 9, 29, 41], modern semantic segmentation models are mainly trained with pre-defined categories, failing to generalize to unseen classes. On the contrary, humans understand scenes in an open-vocabulary manner, typically with thousands of categories [2]. To approach human-level perception, this paper studies open-vocabulary semantic segmentation where the model segments an image by arbitrary categories described by texts. Vision-language models, e.g., CLIP [35], learn rich multi-modal features from billion-scale image-text pairs. Witnessing its superior open-vocabulary classification ability, prior works propose to use pre-trained vision-language models for open-vocabulary segmentation [11, 16, 23, 40]. Among them, two-stage approaches have shown great potential: they first generate class-agnostic mask proposals and then leverage pre-trained CLIP to perform open-vocabulary classification (see Figure 1(b)). Their success relies on two assumptions: (1) the model can generate class-agnostic mask proposals, and (2) pre-trained CLIP can transfer its classification performance to masked image proposals. To examine these two assumptions, we conduct the following analysis. First, we assume an “oracle” mask generator and an ordinary CLIP classifier. We use ground-truth masks as region proposals, and feed masked images to a pre-trained CLIP for classification. This model only reaches an mIoU of 20.1% on the ADE20K-150 dataset. Next, we assume an “oracle” classifier but an ordinary mask proposal generator – a MaskFormer [9] pre-trained on the COCO dataset. We first extract masked region proposals, then compare each region with ground-truth object masks, find the object with the highest overlap, and assign the object label to this extracted region. This model, despite imperfect region proposals, reaches a significantly higher mIoU of 66.5%. This analysis clearly shows that pre-trained CLIP cannot perform satisfactory classification over masked images, and it is the performance bottleneck of two-stage open-vocabulary segmentation models. We hypothesize that this is caused by the significant domain gap between masked images and CLIP’s training images. CLIP is pre-trained on natural images with minimal data augmentation [35]. On the other hand, mask proposals are cropped and re-sized from original images, and are further corrupted by noisy segmentation masks, see examples in Figure 1(b).
Figure 1. (a) CLIP is pre-trained with natural images with little data augmentation. (b) Two-stage open-vocabulary semantic segmentation approaches first generate class-agnostic mask proposals and then leverage pre-trained CLIP to do open-vocabulary classification. The input of the CLIP model is cropped masked images, which have a huge domain gap from the natural images. (c) Our analysis reveals that pre-trained CLIP does not work well on masked images.
(*Work done during an internship at Meta Reality Labs. †Work done while at Meta Reality Labs.)
To address this, we propose to adapt CLIP by finetuning it on masked images and corresponding text labels. One direct solution is to use segmentation labels, e.g., from the COCO-stuff dataset. However, this leads to bad generalization to unseen classes (Section 4.3.1). Such manually annotated masks are accurate but classes are limited to a closed set (e.g., 171 classes for COCO-stuff). We hypothesize that the lack of text diversity causes the finetuned CLIP to lose the generalization ability to open-vocabulary concepts. Instead, we collect training data by mining an existing image-caption dataset (e.g., COCO Captions). Given an image-caption pair, we first extract nouns in the caption, and generate class-agnostic masked region proposals using a pre-trained segmentation model. Then, with a pre-trained CLIP model, we assign the best-matching proposal to each extracted noun. By learning from these weakly-supervised alignments between masked images and novel categories, the adapted CLIP better retains its generalization ability for open-vocabulary classification. The next question is how to effectively finetune CLIP? The most notable difference between a masked image and a natural image is that background pixels in a masked image are masked out, leading to many blank areas, which will be converted to “zero tokens” when feeding to CLIP transformers. Such tokens not only contain no useful information, but also bring domain distribution shift to the model (since such tokens don’t exist in natural images) and cause performance degradation. To mitigate this, we propose mask prompt tuning, à la visual prompt tuning [20]. When tokenizing a masked image, we replace the “zero tokens” with learnable prompt tokens. During finetuning, we either train prompts only and freeze CLIP’s weights, or train both of them. We find that mask prompt tuning alone significantly improves CLIP’s performance on masked images. This is a crucial property for multi-task scenarios where we cannot change CLIP’s weights since they are shared with other tasks. When combined with full model finetuning, mask prompt tuning can further improve the performance by a non-trivial margin (see Section 4.3.2). In our experiments, we measure the open-vocabulary segmentation performance on holdout segmentation datasets in a “zero-shot” manner – we do not adapt the model for each evaluation dataset. We train our model using the COCO-stuff [5] dataset with captions from [8], and test on challenging ADE20K (A-150, A-847 for 150/846 categories) [43], Pascal Context (PC-59, PC-459 for 59/459 categories) [33] and PASCAL VOC (PAS-20) [15]. Our best model achieves 29.6% mIoU on A-150, which is +8.5% higher than the state-of-the-art OpenSeg [16] under the same setting. On the more challenging A-847 and PC-459, our model sets up a new state-of-the-art of 9.0% and 12.4% mIoU, surpassing the previous best solution by +2.7% and +3.4%. Moreover, for the first time, we show open-vocabulary generalist models can match the performance of supervised specialist models [6, 29, 45] in 2017 without dataset specific adaptations. In summary, our contributions include: (1) Our analysis reveals the pre-trained CLIP does not perform well on mask proposals, making it the performance bottleneck of two-stage approaches.
(2) We collect diverse mask- category pairs from captions to adapt CLIP for masked im- ages and retain its generalization ability. (3) We propose mask prompt tuning specifically for masked image adap- tation. This method does not change CLIP’s weight, en- abling multi-task weight sharing. (4) For the first time, we show open-vocabulary generalist models can match the per- formance of supervised specialist models in 2017 without dataset specific adaptations.
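To illustrate the mask prompt tuning idea described in this introduction, a minimal sketch is given below: patches of a masked image that contain no foreground pixels are replaced by a learnable prompt token at the patch-embedding stage. The patch size, embedding dimension, and blanking rule are assumptions, not the paper's exact design, and the patch embedder here stands in for CLIP's own tokenizer.

import torch
import torch.nn as nn

class MaskPromptTokenizer(nn.Module):
    """Sketch of mask prompt tuning at the patch-embedding stage: fully masked-out
    patches would otherwise become uninformative "zero tokens", so they are
    replaced by a learnable prompt token."""
    def __init__(self, embed_dim: int = 768, patch: int = 16, in_ch: int = 3):
        super().__init__()
        self.patch_embed = nn.Conv2d(in_ch, embed_dim, kernel_size=patch, stride=patch)
        self.mask_prompt = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.patch = patch

    def forward(self, masked_img, mask):
        # masked_img: (B, 3, H, W) image with background pixels zeroed out
        # mask:       (B, 1, H, W) binary foreground mask (float)
        tokens = self.patch_embed(masked_img).flatten(2).transpose(1, 2)  # (B, N, D)
        # Fraction of foreground pixels in each patch; 0 means a blank patch.
        cover = nn.functional.avg_pool2d(mask, self.patch).flatten(2).transpose(1, 2)  # (B, N, 1)
        blank = (cover == 0).float()
        return tokens * (1 - blank) + self.mask_prompt * blank

During finetuning, one could train only mask_prompt while keeping the backbone frozen, or train both, mirroring the two settings discussed above.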
Kumar_Normalizing_Flow_Based_Feature_Synthesis_for_Outlier-Aware_Object_Detection_CVPR_2023
Abstract Real-world deployment of reliable object detectors is crucial for applications such as autonomous driving. How- ever, general-purpose object detectors like Faster R-CNN are prone to providing overconfident predictions for outlier objects. Recent outlier-aware object detection approaches estimate the density of instance-wide features with class- conditional Gaussians and train on synthesized outlier fea- tures from their low-likelihood regions. However, this strat- egy does not guarantee that the synthesized outlier features will have a low likelihood according to the other class- conditional Gaussians. We propose a novel outlier-aware object detection framework that distinguishes outliers from inlier objects by learning the joint data distribution of all inlier classes with an invertible normalizing flow. The ap- propriate sampling of the flow model ensures that the syn- thesized outliers have a lower likelihood than inliers of all object classes, thereby modeling a better decision bound- ary between inlier and outlier objects. Our approach sig- nificantly outperforms the state-of-the-art for outlier-aware object detection on both image and video datasets.
1. Introduction General purpose object detectors such as Faster R-CNN [42] and Mask R-CNN [17] deliver high performance for inlier images. However, in many real-world scenarios, such as autonomous driving [3], plenty of unknown outliers (OD) naturally occur in an image or a video scene. Due to the co-existence of OD with the labeled inlier (ID) objects in the scene, object detectors confuse outliers with inliers. Therefore, reliable object-detection deployments require detecting such anomalies without degrading the performance of inlier object detection. Many outlier detection approaches focus on the multi-class image classification task by either performing outlier detection during inference [19, 24, 30, 31, 35, 44] or training on real outlier data [1, 20, 39, 53]. However, such OD inputs are unaware of the decision boundary between inliers and outliers, resulting in an inaccurate model regularization. A popular work [29] proposed a novel training scheme to generate synthetic outlier samples in the high-dimensional pixel space using Generative Adversarial Networks for outlier-aware training of an image classification model. However, GANs are challenging to optimize and are likely to deliver insufficient coverage [36]. Moreover, the previous work [29] generates an entire image as an outlier sample, whereas in an object detection problem, both inliers and outliers can coexist in the same image space. Recently, [10] proposed to model the inlier features as class-conditional Gaussians and then synthesize OD features from the low-likelihood region of the modeled distribution. Even though density is more easily estimated in the feature space than in pixel space, the assumption of class-conditional Gaussians may not provide an accurate decision boundary between inliers and outliers. Furthermore, synthesizing a low-likelihood OD sample from the Gaussian for class A does not guarantee that the sample is also of low likelihood for the Gaussian of class B, as indicated in the left plot of Figure 1. Recent work for object detection in video [9] extracts the background patches for uncertainty regularization by thresholding the dissimilarity with the reference inlier features. However, the extracted background patches may lie far from the inlier-outlier decision boundary, leading to sub-optimal regularization of the model.
Figure 1. We compare distributions of log-likelihoods for inliers (ID), background patches, and synthetic outlier (OD) samples by applying a pre-trained model on the Pascal-VOC [11] validation set. The left plot shows the likelihoods of class-conditional Gaussians from the official approach of VOS [10]. The right plot shows the likelihoods recovered with our normalizing flow. We observe that our flow separates the three distributions much better than VOS.
We propose a novel approach for open-set object detection which we call Flow Feature Synthesis (FFS). Our approach trains an invertible normalizing flow to map the data distribution of ID features of all object classes to a latent representation that conforms to a multivariate Gaussian distribution with zero mean and diagonal unit covariance. As a result, it ensures principled estimation of the complex distribution of inlier features from all classes. We synthesize outlier features from low-likelihood regions of the learned distribution. This enables outlier features to be near the ID-OD decision boundary, leading to more robust uncertainty regularization of the object detector. In contrast, using generative adversarial models for the same task would be cumbersome due to their inability to infer the density of the generated samples. Furthermore, variational autoencoders can only infer the lower bound of the sample density [27]. Specifically, FFS optimizes the normalizing flow model by maximum likelihood training on ID features. This training scheme enables the model to estimate the actual data distribution of the available ID features. Next, FFS utilizes the invertibility of the flow model to randomly sample from its latent space and generate synthetic features in the reverse direction of the model. The normalizing flow allows efficient and exact inference of the distribution density in the generated samples. Consequently, our approach can deliver suitable synthetic outlier data in fewer iterations than VOS [10]. It also requires fewer synthetically generated samples than VOS [10] to obtain OD features from the low-likelihood region of the distribution modeled by the flow. Furthermore, we developed our end-to-end trainable FFS framework to be effective on both image and video datasets. In contrast, previous works [9, 10] proposed standalone strategies for each task. Our main contributions are: • We present a new outlier-aware object detection framework that utilizes Normalizing Flows to model the joint data distribution of inlier features. Invertibility of the flow allows efficient generation of synthetic outliers for effective uncertainty regularization. • By mapping the data distribution of inlier features from all object classes to a multivariate Normal distribution in the flow's latent space, FFS ensures that an outlier sampled using the flow model is OD with reference to all ID classes. • FFS achieves better OD detection performance while training faster than VOS [10] and STUD [9], due to having to generate fewer synthetic samples. • We show that our method achieves state-of-the-art performance in OD object detection while preserving the baseline ID detection performance for the image dataset PASCAL-VOC [11] and video datasets such as Youtube-VIS [51] and BDD100K [52].
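A schematic of the outlier synthesis step might look as follows. The flow object is assumed to expose log_prob and sample (a common interface in flow libraries, not necessarily the one used here); the sample count and quantile threshold are arbitrary illustrative choices rather than the paper's settings.

import torch

@torch.no_grad()
def synthesize_outlier_features(flow, inlier_feats, num_samples=2000, quantile=0.05):
    """Sketch of outlier feature synthesis with a trained normalizing flow.
    Because the flow models the joint inlier distribution of all classes,
    a kept sample is low-likelihood with respect to every class at once."""
    # Threshold = a low quantile of the inlier log-likelihoods
    inlier_ll = flow.log_prob(inlier_feats)
    threshold = torch.quantile(inlier_ll, quantile)
    # Draw candidates by inverting latent samples, then keep the low-likelihood ones
    candidates = flow.sample(num_samples)
    cand_ll = flow.log_prob(candidates)
    outliers = candidates[cand_ll < threshold]
    return outliers  # used to regularize the detector's uncertainty estimates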
Liu_Exploring_the_Relationship_Between_Architectural_Design_and_Adversarially_Robust_Generalization_CVPR_2023
Abstract Adversarial training has been demonstrated to be one of the most effective remedies for defending adversarial exam- ples, yet it often suffers from the huge robustness general- ization gap on unseen testing adversaries, deemed as the adversarially robust generalization problem. Despite the preliminary understandings devoted to adversarially robust generalization, little is known from the architectural per- spective. To bridge the gap, this paper for the first time systematically investigated the relationship between adver- sarially robust generalization and architectural design. In particular, we comprehensively evaluated 20 most repre- sentative adversarially trained architectures on ImageNette and CIFAR-10 datasets towards multiple ℓp-norm adversar- ial attacks. Based on the extensive experiments, we found that, under aligned settings, Vision Transformers (e.g., PVT, CoAtNet) often yield better adversarially robust generaliza- tion while CNNs tend to overfit on specific attacks and fail to generalize on multiple adversaries. To better understand the nature behind it, we conduct theoretical analysis via the lens of Rademacher complexity. We revealed the fact that the higher weight sparsity contributes significantly towards the better adversarially robust generalization of Transform- ers, which can be often achieved by the specially-designed attention blocks. We hope our paper could help to better understand the mechanism for designing robust DNNs. Our model weights can be found at http://robust.art .
1. Introduction DNNs have achieved remarkable performance across a wide variety of applications [15, 28, 41, 79], yet they are vulnerable to adversarial examples (malicious inputs that could fool DNNs by adding human-imperceptible perturbations [12, 33, 37, 54]). Numerous defensive strategies have been offered to combat this issue. The most effective and promising of these approaches is adversarial training [42, 58], which improves adversarial robustness by adding adversarial examples into training data.
(The first three authors contribute equally. Corresponding author: Xianglong Liu, [email protected].)
Figure 1. This paper aims to better understand adversarially robust generalizations from the architectural design perspective. We comprehensively evaluated 20 adversarially trained architectures on several datasets toward multiple ℓp adversaries and found that Transformers often yield better adversarially robust generalization. We also undertake theoretical analysis through the Rademacher complexity lens to further comprehend this.
However, adversarial training tends to exhibit a huge generalization gap between training and testing data [53, 59, 76]. Unlike models trained on clean data that usually generalize well on clean testing data, differences between training and testing robustness for adversarially-trained models tend to be large. Moreover, the robustness of adversarial training fails to generalize to unseen adversaries during testing [29, 31]. The aforementioned adversarially robust generalization problem puts a barrier in the way of the potential applications of adversarial training in practice, where attacks are often drawn from multiple unknown attacks, therefore attracting intensive research interests. Prior works have studied the characteristic of adversarially robust generalization and proposed many approaches for mitigation (e.g., unlabeled data [4, 47], robust local features [50], generative adversarial training [44]), while little is known about adversarially robust generalization in terms of architectural design. Model architecture reflects the inherent nature of model robustness [55]. To better comprehend and create robust DNNs, it is important to carefully explore the links between architectural design and robust generalization. In this paper, we present the first comprehensive investigation on adversarially robust generalization w.r.t. architectural design. In particular, we first provide an extensive evaluation on 20 adversarially trained architectures, including the prevailing Vision Transformers (e.g., ViT, PVT, Swin Transformer) as well as some representative CNNs (e.g., ResNet, VGG) for ImageNette and CIFAR-10 datasets towards multiple adversaries (ℓ1, ℓ2, ℓ∞ adversarial attacks). Based on our experiments, we found that, under aligned settings, Vision Transformers (e.g., PVT, CoAtNet) often yield better adversarially robust generalization compared to CNNs; yet CNNs often tend to overfit on specific attacks and fail to generalize on multiple adversaries.
To better understand the mechanism, we then conduct theoreti- cal analysis via the lens of Rademacher complexity. We the- oretically revealed that higher weight sparsity contributes significantly to the better adversarially robust generaliza- tion of Transformers, which can often be achieved by the specially-designed architectural ingredient attention blocks. At last, we also provided more detailed analyses to better understand generalization ( e.g., generalization on common corruptions, larger dataset ImageNet) and discussed the po- tential directions for designing robust architectures. Our main contributions can be summarized as: • We, for the first time, systematically studied 20 adversarially-trained architectures against multiple at- tacks and revealed the close relationship between ar- chitectural design and robust generalization. • We theoretically revealed that higher weight sparsity contributes to the better adversarially robust general- ization of Transformers, which can often be achieved by attention blocks. • We provide more detailed analyses of the generaliz- ability from several viewpoints and discuss potential pathways that may improve architecture robustness. We open-sourced the weights of our adversarially- trained model zoo. Together with our analysis, we hope this paper could help researchers build more robust DNN architectures in the future.
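As a simple way to probe the weight sparsity discussed above, one could measure the fraction of near-zero weights per parameter tensor, as in the sketch below; the threshold and the per-tensor granularity are assumptions, not the paper's measurement protocol.

import torch
import torch.nn as nn

@torch.no_grad()
def weight_sparsity(model: nn.Module, eps: float = 1e-3):
    """Sketch: fraction of near-zero entries per weight tensor, a rough proxy
    for weight sparsity. Results could then be grouped by module type
    (e.g., attention projections vs. convolutions) for comparison."""
    stats = {}
    for name, p in model.named_parameters():
        if p.dim() > 1:  # skip biases and norm scales
            stats[name] = (p.abs() < eps).float().mean().item()
    return stats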
Wang_Zero-Shot_Pose_Transfer_for_Unrigged_Stylized_3D_Characters_CVPR_2023
Abstract Transferring the pose of a reference avatar to stylized 3D characters of various shapes is a fundamental task in computer graphics. Existing methods either require the stylized characters to be rigged, or they use the stylized character in the desired pose as ground truth at training. We present a zero-shot approach that requires only the widely available deformed non-stylized avatars in training, and deforms stylized characters of significantly different shapes at inference. Classical methods achieve strong generalization by deforming the mesh at the triangle level, but this requires labelled correspondences. We leverage the power of local deformation, but without requiring explicit correspondence labels. We introduce a semi-supervised shape-understanding module to bypass the need for explicit correspondences at test time, and an implicit pose deformation module that deforms individual surface points to match the target pose. Furthermore, to encourage realistic and accurate deformation of stylized characters, we introduce an efficient volume-based test-time training procedure. Because it does not need rigging, nor the deformed stylized character at training time, our model generalizes to categories with scarce annotation, such as stylized quadrupeds. Extensive experiments demonstrate the effectiveness of the proposed method compared to the state-of-the-art approaches trained with comparable or more supervision. Our project page is available at https://jiashunwang.github.io/ZPT/
(*Work done during Jiashun Wang's internship at NVIDIA.)
1. Introduction
Stylized 3D characters, such as those in Fig. 1, are commonly used in animation, movies, and video games. Deforming these characters to mimic natural human or animal poses has been a long-standing task in computer graphics. Different from the 3D models of natural humans and animals, stylized 3D characters are created by professional artists through imagination and exaggeration. As a result, each stylized character has a distinct skeleton, shape, and mesh topology, and usually includes various accessories, such as a cloak or wings (see Fig. 1). These variations hinder the process of matching the pose of a stylized 3D character to that of a reference avatar, generally making manual rigging a requirement. Unfortunately, rigging is a tedious process that requires manual effort to create the skeleton and skinning weights for each character. Even when provided with manually annotated rigs, transferring poses from a source avatar onto stylized characters is not trivial when the source and target skeletons differ. Automating this procedure is still an open research problem and is the focus of many recent works [2, 4, 24, 52].
Meanwhile, non-stylized 3D humans and animals have been well-studied by numerous prior works [35, 40, 54, 62, 68]. A few methods generously provide readily available annotated datasets [11, 12, 41, 68], or carefully designed parametric models [40, 51, 68]. By taking advantage of these datasets [12, 41], several learning-based methods [7, 14, 35, 62, 67] disentangle and transfer poses between human meshes using neural networks. However, these methods (referred to as "part-level" in the following) carry out pose transfer by either globally deforming the whole body mesh [14, 22, 47, 67] or by transforming body parts [35, 48], both of which lead to overfitting on the training human meshes and fail to generalize to stylized characters with significantly different body part shapes. Interestingly, classical mesh deformation methods [55, 56] (referred to as "local" in the following) can transfer poses between a pair of meshes with significant shape differences by computing and transferring per-triangle transformations through correspondence. Though these methods require manual correspondence annotation between the source and target meshes, they provide a key insight: by transforming individual triangles instead of body parts, the mesh deformation methods are more agnostic to a part's shape and can generalize to meshes with different shapes.
We marry the benefits of learning-based methods [7, 14, 35, 62, 67] with the classic local deformation approach [55] and present a model for unrigged, stylized character deformation guided by a non-stylized biped or quadruped avatar. Notably, our model only requires easily accessible posed human or animal meshes for training and can be directly applied to deform 3D stylized characters with a significantly different shape at inference. To this end, we implicitly operationalize the key insight from the local deformation method [55] by modeling the shape and pose of a 3D character with a correspondence-aware shape understanding module and an implicit pose deformation module.
The shape understanding module learns to predict the part segmentation label (i.e., the coarse-level correspondence) for each surface point, besides representing the shape of a 3D character as a latent shape code. The pose deformation module is conditioned on the shape code and deforms individual surface points guided by a target pose code sampled from a prior pose latent space [50]. Furthermore, to encourage realistic deformation and generalize to rare poses, we propose a novel volume-based test-time training procedure that can be efficiently applied to unseen stylized characters. During inference, by mapping biped or quadruped poses from videos, in addition to meshes, to the prior pose latent space using existing works [32, 51, 53], we can transfer poses from different modalities onto unrigged 3D stylized characters. Our main contributions are:
• We propose a solution to a practical and challenging task – learning a model for stylized 3D character deformation with only posed human or animal meshes.
• We develop a correspondence-aware shape understanding module, an implicit pose deformation module, and a volume-based test-time training procedure to generalize the proposed model to unseen stylized characters and arbitrary poses in a zero-shot manner.
• We carry out extensive experiments on both humans and quadrupeds to show that our method produces more visually pleasing and accurate deformations compared to baselines trained with comparable or more supervision.
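To make the division of labor between the shape code and the pose code more concrete, the following is a minimal PyTorch sketch of a point-wise deformation field conditioned on both codes. The layer sizes, code dimensions, and residual-offset formulation are assumptions made only for illustration; they do not reproduce the architecture used in this work.

```python
import torch
import torch.nn as nn


class ImplicitPoseDeformer(nn.Module):
    """Toy point-wise deformation field conditioned on a shape code and a
    target pose code (illustrative sketch, not the paper's module)."""

    def __init__(self, shape_dim: int = 128, pose_dim: int = 64, hidden: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + shape_dim + pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),          # per-point displacement
        )

    def forward(self, points, shape_code, pose_code):
        # points: (B, N, 3); shape_code: (B, shape_dim); pose_code: (B, pose_dim)
        B, N, _ = points.shape
        cond = torch.cat([shape_code, pose_code], dim=-1)        # (B, C)
        cond = cond.unsqueeze(1).expand(B, N, cond.shape[-1])    # (B, N, C)
        offset = self.mlp(torch.cat([points, cond], dim=-1))     # (B, N, 3)
        return points + offset                                   # deformed surface points


deformer = ImplicitPoseDeformer()
out = deformer(torch.rand(2, 1024, 3), torch.rand(2, 128), torch.rand(2, 64))
print(out.shape)  # torch.Size([2, 1024, 3])
```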
Xu_Dynamic_Coarse-To-Fine_Learning_for_Oriented_Tiny_Object_Detection_CVPR_2023
Abstract
Detecting arbitrarily oriented tiny objects poses intense challenges to existing detectors, especially for label assignment. Despite the exploration of adaptive label assignment in recent oriented object detectors, the extreme geometry shape and limited feature of oriented tiny objects still induce severe mismatch and imbalance issues. Specifically, the position prior, positive sample feature, and instance are mismatched, and the learning of extreme-shaped objects is biased and unbalanced due to little proper feature supervision. To tackle these issues, we propose a dynamic prior along with the coarse-to-fine assigner, dubbed DCFL. For one thing, we model the prior, label assignment, and object representation all in a dynamic manner to alleviate the mismatch issue. For another, we leverage the coarse prior matching and finer posterior constraint to dynamically assign labels, providing appropriate and relatively balanced supervision for diverse instances. Extensive experiments on six datasets show substantial improvements to the baseline. Notably, we obtain the state-of-the-art performance for one-stage detectors on the DOTA-v1.5, DOTA-v2.0, and DIOR-R datasets under single-scale training and testing. Codes are available at https://github.com/Chasel-Tsui/mmrotate-dcfl.
1. Introduction
The oriented bounding box is a finer representation for object detection since the object's background region is greatly eradicated by introducing the rotation angle [55]. This advantage is pronounced in aerial images, where objects are in arbitrary orientations, resulting in the prosperity of corresponding object detection datasets [7, 11, 35, 55] and customized oriented object detectors [10, 17, 18, 60, 62]. Nevertheless, one unignorable fact is that there exist numerous tiny objects in aerial images. When oriented objects are tiny-sized, the challenges posed to existing object detectors are quite remarkable. In particular, the extreme geometry characteristics of oriented tiny objects hamper accurate label assignment.
Figure 1. Comparisons of different learning paradigms for oriented object detection. M* means the matching function. Each box in the 2nd row denotes a prior location. The 3rd row shows predictions of RetinaNet and DCFL, where green, blue, and red boxes denote true positive, false positive, and false negative predictions. (a) RetinaNet, FCOS, and Rotated RPN statically assign labels between fixed priors and fixed gts. (b) Our proposed DCFL dynamically updates priors and gts, and dynamically assigns labels.
Label assignment is a fundamental and crucial process in object detection [68], in which priors (boxes for anchor-based [30] and points for anchor-free detectors [50]) need to be assigned with appropriate labels to supervise the network training. In fact, there have been some works that lay a foundation for the effective label assignment of oriented objects, as shown in Fig. 1. Early works additionally preset anchors of different angles (e.g., Rotated RPN [36]) or refine high-quality anchors (e.g., S2A-Net [17]) based on the generic object detector; then a static rule (e.g., the MaxIoU strategy [44]) is used to separate positive and negative (pos/neg) training samples. The derived prior boxes can thus cover more ground truth (gt) boxes and a considerable accuracy improvement can be expected. However, the static assignment cannot adaptively divide pos/neg samples according to the gt's shape and filter out low-quality samples, usually leading to sub-optimal performance.
Figure 2. The mismatch and imbalance issues. Each point in the left image denotes a prior location. The number in the pie-shaped bar chart denotes the mean number of positive samples assigned to each instance in a specific angle range.
Recently, the exploration of adaptive label assignment [68] brings new insight to the community. For oriented object detection, DAL [38] defines a prediction-aware matching degree and utilizes it to reweight anchors, achieving dynamic sample learning. Besides, several studies [21, 23, 26] incorporate the shape information into detectors and propose shape-aware sampling and measurement.
Despite the progress, the arbitrary orientation and extreme size of oriented tiny objects still pose a dilemma to the detector. As shown in Fig. 2, the mismatch and imbalance issues are particularly pronounced. For one thing, there is a mutual mismatch issue between the position prior, feature, and instance. Although some adaptive label assignment schemes may explore a better pos/neg division of the prior boxes or points, the sampled feature location behind the prior is still fixed and the derived prior is still static and uniformly located, so most priors deviate from the tiny object's main body. The prior and feature themselves cannot well match the extreme shapes of oriented tiny objects, no matter how we divide pos/neg samples. For another, the existing detectors tend to introduce bias and imbalance for oriented and tiny objects. More precisely, for anchor-based detectors, gts with shapes different from anchor boxes will yield low IoU [38, 59], leading to the lack of positive samples. In Fig. 2, we calculate the mean number of positive samples assigned to different gts with RetinaNet and observe that there is an extreme lack of positive samples for gts with angles and scales far from the predefined anchors. For anchor-free detectors, the static prior and its fixed stride limit the maximum number of high-quality positive samples. Tiny objects only cover a limited number of feature points, and most of these points are away from the object's main body.
This motivates us to design a more dynamic and balanced learning pipeline for oriented tiny object detection. As shown in Fig. 1, we alleviate the mismatch issue by reformulating the prior, label assignment, and gt representation all in a dynamic manner, which can be updated by the Deep Neural Network (DNN). Simultaneously, we dynamically and progressively assign labels in a coarse-to-fine manner to seek balanced supervision for various instances. Specifically, we introduce a dynamic Prior Capturing Block (PCB) to learn the prior, which adaptively adjusts the prior location while retaining the physical meaning of the prior [54]. The PCB is inspired by the paradigm of learnable proposals in DETR [4] and Sparse R-CNN [48], which naturally avoids the mismatch issue between the predefined prior and feature. Compared to this paradigm, we introduce its flexibility for prior updates while keeping the fast-convergence ability of dense detectors [32, 54]. Based on the dynamic prior, we then select Cross-FPN-layer Coarse Positive Sample (CPS) candidates for further label assignment, and the CPS is realized by the Generalized Jensen-Shannon Divergence [39] (GJSD) between the gt and the dynamic prior. The GJSD is able to enlarge the CPS to the object's nearby spatial locations and adjacent FPN layers, ensuring more candidates for extreme-shaped objects. After obtaining the CPS, we re-rank these candidates with predictions (posterior) and represent the gt with a finer Dynamic Gaussian Mixture Model (DGMM), filtering out low-quality samples. All designs are incorporated into an end-to-end one-stage detector without additional branches.
In short, our contributions are listed as follows: (1) We identify that there exist severe mismatch and imbalance issues in the current learning pipeline for oriented tiny object detection. (2) We design a Dynamic Coarse-to-Fine Learning (DCFL) scheme for oriented tiny object detection, which is the first to model the prior, label assignment, and gt representation all in a dynamic manner.
In the DCFL, we propose to use the GJSD to construct Coarse Positive Samples (CPS) and represent objects with a finer Dynamic Gaussian Mixture Model (DGMM), obtaining coarse-to-fine label assignment. (3) Extensive experiments on six datasets show promising results.
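Because the coarse positive sample construction above compares a ground-truth box with a prior under a Gaussian view, a common prerequisite is converting an oriented box into a 2D Gaussian. The sketch below shows only that conversion step; the half-extent covariance scaling is a convention borrowed from Gaussian-based rotated-box modeling and is assumed here, and the GJSD computation itself is omitted.

```python
import torch


def obb_to_gaussian(boxes: torch.Tensor):
    """Convert oriented boxes (cx, cy, w, h, angle_rad) into 2-D Gaussians.

    Returns means (N, 2) and covariances (N, 2, 2). The half-extent scaling
    of the covariance is an illustrative convention, not necessarily DCFL's.
    """
    cx, cy, w, h, a = boxes.unbind(dim=-1)
    cos, sin = torch.cos(a), torch.sin(a)
    R = torch.stack([cos, -sin, sin, cos], dim=-1).view(-1, 2, 2)      # rotation
    S = torch.diag_embed(torch.stack([(w / 2) ** 2, (h / 2) ** 2], dim=-1))
    cov = R @ S @ R.transpose(-1, -2)                                   # R S R^T
    mean = torch.stack([cx, cy], dim=-1)
    return mean, cov


mean, cov = obb_to_gaussian(torch.tensor([[50.0, 40.0, 30.0, 6.0, 0.5]]))
print(mean.shape, cov.shape)  # torch.Size([1, 2]) torch.Size([1, 2, 2])
```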
Wang_VL-SAT_Visual-Linguistic_Semantics_Assisted_Training_for_3D_Semantic_Scene_Graph_CVPR_2023
Abstract
The task of 3D semantic scene graph (3DSSG) prediction in the point cloud is challenging since (1) the 3D point cloud only captures geometric structures with limited semantics compared to 2D images, and (2) the long-tailed relation distribution inherently hinders the learning of unbiased prediction. Since 2D images provide rich semantics and scene graphs are naturally coupled with languages, in this study, we propose a Visual-Linguistic Semantics Assisted Training (VL-SAT) scheme that can significantly empower 3DSSG prediction models with discrimination about long-tailed and ambiguous semantic relations. The key idea is to train a powerful multi-modal oracle model to assist the 3D model. This oracle learns reliable structural representations based on semantics from vision, language, and 3D geometry, and its benefits can be heterogeneously passed to the 3D model during the training stage. By effectively utilizing visual-linguistic semantics in training, our VL-SAT can significantly boost common 3DSSG prediction models, such as SGFN and SGGpoint, only with 3D inputs in the inference stage, especially when dealing with tail relation triplets. Comprehensive evaluations and ablation studies on the 3DSSG dataset have validated the effectiveness of the proposed scheme. Code is available at https://github.com/wz7in/CVPR2023-VLSAT.
1. Introduction
Structurally understanding 3D geometric scenes is particularly important for tasks that require interaction with real-world environments, such as AR/VR [7, 21, 26–28, 49, 51] and navigation [4, 5, 10]. As one vital topic in this field, predicting the 3D semantic scene graph (3DSSG) in the point cloud [36] has received emerging attention in recent years. Specifically, given the point cloud of a 3D scene that is associated with class-agnostic 3D instance masks, the 3DSSG prediction task would like to construct a directed graph whose nodes are semantic labels of the 3D instances and whose edges recognize the directional semantic or geometrical relations between connected 3D instances.
Figure 1. Comparison between the previous method and our VL-SAT. (a) SGPN [36], as the 3D model, fails to capture predicates such as build in. (b) VL-SAT creates an oracle model by heterogeneously fusing 2D semantics and language knowledge along with the geometrical features, and the 3D model receives benefits from the oracle model during training. During inference, the enhanced 3D model can correctly detect the tail predicates.
However, in addition to common difficulties faced by scene graph prediction, there are several challenges specific to the 3DSSG prediction task. (1) 3D data such as point clouds only capture the geometric structures of each instance and may superficially define the relations by relative orientations or distances. (2) Recent 3DSSG prediction datasets [36, 47] are quite small and suffer from long-tailed predicate distributions, where semantic predicates are often rarer than geometrical predicates. For example, as shown in Fig. 1, the pioneering work SGPN [36] usually prefers a simple and common geometric predicate standing on between sink and bath cabinet, while the ground-truth relation triplet ⟨sink, build in, bath cabinet⟩ cares more about the semantics, and the frequency of build in in the training dataset is quite low, as shown in Fig. 3(a). Even though some attempts [38, 47, 48] have been proposed thereafter, the inherent limitations of the point cloud data to some extent hinder the effectiveness of these methods.
Since 2D images provide rich and meaningful semantics, and the scene graph prediction task is in nature aligned with natural languages, we explore using visual-linguistic semantics to assist the training, as another pathway to essentially enhance the capability of common 3DSSG prediction models in the face of the aforementioned challenges.
How to assist 3D structural understanding with visual-linguistic semantics remains an open problem. Previous studies mainly focus on employing 2D semantics to enhance instance-level tasks, such as object detection [3, 20, 22, 29], visual grounding, and dense captioning [4, 5, 44, 50].
Most of them require visual data both in training and inference, but a few of them, such as SAT [42] and X-Trans2Cap [44], treat 2D semantics as auxiliary training signals and thus offer more practical inference only with 3D data. But these methods are specific to instance-level tasks and require delicately designed networks for effective assistance, so they are less suitable for our structural prediction problem. Thanks to the recent success of large-scale cross-modal pretraining like CLIP [24], 2D semantics in images can be well aligned with linguistic semantics in natural languages, and visual-linguistic semantics have been applied to alleviate the long-tailed issue in tasks related to 2D scene graphs [1, 31, 32, 45] and human-object interaction [15]. But how to adapt similar assistance of visual-linguistic semantics to the 3D scenario remains unclear.
In this study, we propose the Visual-Linguistic Semantics Assisted Training (VL-SAT) scheme to empower the point cloud-based 3DSSG prediction model (termed the 3D model) with sufficient discrimination about long-tailed and ambiguous semantic relation triplets. In this scheme, we simultaneously train a powerful multi-modal prediction model as the oracle (termed the oracle model) that is heterogeneously aligned with the 3D model, which captures reliable structural semantics from extra data from vision, extra training signals from language, as well as the geometrical features from the 3D model. These introduced visual-linguistic semantics have been aligned by CLIP. Consequently, the benefits of the oracle model, especially the multi-modal structural semantics, can be efficiently embedded into the 3D model through the back-propagated gradient flows. In the inference stage, the 3D model can achieve superior 3DSSG prediction performance with only 3D inputs. For example, in Fig. 1(b), the predicate build in can be reliably detected. To the best of our knowledge, VL-SAT is the first visual-linguistic knowledge transfer work that is applied to 3DSSG prediction in the point cloud. Moreover, VL-SAT can successfully enhance SGFN [38] and SGGpoint [47], validating that this scheme is generalizable to common 3DSSG prediction models.
We benchmark VL-SAT on the 3DSSG dataset [36]. Quantitative and qualitative evaluations, as well as comprehensive ablation studies, validate that the proposed training scheme leads to significant performance gains, especially for tail relations, as discussed in Sec. 4.
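The gradient-based benefit transfer from the oracle to the 3D model described above can be illustrated, in a very reduced form, as a soft-label alignment term. The snippet below is only a generic distillation-style sketch with an assumed temperature and an assumed number of predicate classes; the actual VL-SAT objectives and the heterogeneous alignment between the two models are more involved.

```python
import torch
import torch.nn.functional as F


def oracle_assist_loss(logits_3d: torch.Tensor,
                       logits_oracle: torch.Tensor,
                       temperature: float = 2.0) -> torch.Tensor:
    """Align the 3D model's predicate distribution with the oracle's.

    Generic knowledge-transfer sketch; temperature and class count are
    illustrative assumptions, not VL-SAT's actual training recipe.
    """
    teacher = F.softmax(logits_oracle.detach() / temperature, dim=-1)
    student = F.log_softmax(logits_3d / temperature, dim=-1)
    return F.kl_div(student, teacher, reduction="batchmean") * temperature ** 2


# toy usage with 8 edges and 26 hypothetical predicate classes
print(oracle_assist_loss(torch.randn(8, 26), torch.randn(8, 26)).item())
```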
Wang_Privacy-Preserving_Adversarial_Facial_Features_CVPR_2023
Abstract
Face recognition service providers protect face privacy by extracting compact and discriminative facial features (representations) from images, and storing the facial features for real-time recognition. However, such features can still be exploited to recover the appearance of the original face by building a reconstruction network. Although several privacy-preserving methods have been proposed, the enhancement of face privacy protection is at the expense of accuracy degradation. In this paper, we propose an adversarial features-based face privacy protection (AdvFace) approach to generate privacy-preserving adversarial features, which can disrupt the mapping from adversarial features to facial images to defend against reconstruction attacks. To this end, we design a shadow model which simulates the attackers' behavior to capture the mapping function from facial features to images and generate adversarial latent noise to disrupt the mapping. The adversarial features rather than the original features are stored in the server's database to prevent leaked features from exposing facial information. Moreover, AdvFace requires no changes to the face recognition network and can be implemented as a privacy-enhancing plugin in deployed face recognition systems. Extensive experimental results demonstrate that AdvFace outperforms the state-of-the-art face privacy-preserving methods in defending against reconstruction attacks while maintaining face recognition accuracy.
1. Introduction
Face recognition is a way of identifying an individual's identity using their face, which has been widely used in many security-sensitive applications. Undoubtedly, biometric facial images are private and discriminative information to each person that should be protected. Recently, much attention has been paid to privacy protection, such as the General Data Protection Regulation, making the preservation of face privacy increasingly important. In order to avoid direct leakage of facial images, mainstream face recognition systems usually adopt a client-server mode that extracts features from facial images with a feature extractor on the client side and stores the facial features rather than facial images on the server side for future online identification. As facial features suppress the visual information of faces, face privacy protection can be realized to some extent.
However, recent studies showed that it is possible to reconstruct original images from facial features, which is called a reconstruction attack, including optimization-based [9, 29] and learning-based reconstruction attacks [7, 13, 23, 37]. The former gradually adjusts the pixels of the input image to make the output of the feature extractor as close as possible to a particular feature until the facial image (the input image) corresponding to this feature is reconstructed [9, 29]. The latter trains a feature-image decoder with a deconvolutional neural network (D-CNN) to reconstruct images directly from facial features [7, 13, 23, 37]. These studies imply that existing face recognition systems suffer from severe privacy threats once the features in their database are leaked. Therefore, it is essential to provide approaches to prevent facial features from being reconstructed.
Several approaches have been proposed to protect face privacy. [1, 10, 18, 22] transform the features into the encrypted space and perform face recognition based on cryptographic primitives and security protocols, which however bear prohibitive computation and communication costs for face recognition systems. [3, 24] utilize differential privacy to protect face privacy by perturbing features with noises, which however suffers from a significant accuracy drop in face recognition. [19, 34] proposed adversarial
In addition, both the adversarial learning-based and the fre- quency domain-based methods require retraining the face recognition network, which is not applicable to deployed face recognition systems. In this paper, we aim to propose a novel approach to gen- erate privacy-preserving facial features which are able to thwart reconstruction attacks as well as maintain satisfac- tory recognition accuracy. Undoubtedly, it is non-trivial to realize this objective. The first challenge is how to defend against reconstruction attacks under the black-box setting . An attacker may utilize different reconstruction networks, which are unknown to the face recognition systems in ad- vance, to reconstruct images from facial features. How to enable the generated facial features to defend against such unknown and different reconstruction attacks is very chal- lenging. The second challenge is how to disrupt the visual information embedded in facial features while keeping the recognition accuracy . Since visual information is some- what critical to face recognition, disrupting visual informa- tion may incur a reduction in recognition accuracy. The last challenge is how to generate privacy-preserving fea- tures without changing the face recognition network . Once a face recognition network is deployed, it would be expen- sive to retrain the network and redeploy it to millions of clients. Therefore, a plug-in module is more welcomed for the deployed face recognition systems. To address the above challenges, we propose an ad- versarial features-based face privacy protection (AdvFace) method, which generates the privacy-preserving adversar- ial features against reconstruction attacks. The intuition of AdvFace is to disrupt the mapping from features to facial images by obfuscating features with adversarial latent noise to maximize the difference between the original images and the reconstructed images from the features. To this end, we train a shadow model to simulate the behavior of the re- construction attacks to obtain the reconstruction loss which denotes the quality of the reconstructed images. Thereafter, we maximize the reconstruction loss to generate the adver- sarial features by iteratively adding the adversarial latentnoise to features along the direction of the gradient (loss w.r.t. the targeted feature). Moreover, to ensure face recog- nition accuracy, the magnitude of adversarial latent noise would be constrained during the optimization. Our main contributions are summarized as follows: • We propose a novel facial privacy-preserving method (namely AdvFace), which can generate privacy- preserving adversarial features against unknown re- construction attacks while maintaining face recogni- tion accuracy. Moreover, AdvFace requires no changes to the deployed face recognition model and thus can be integrated as a plug-in privacy-enhancing module into face recognition systems. • We unveil the rationale of the reconstruction attack and build a shadow model to simulate the behavior of the reconstruction attacks and generate adversarial fea- tures, which can disrupt the mapping from features to facial images by maximizing the reconstruction loss of the shadow model. • Extensive experimental results demonstrate that our proposed AdvFace outperforms the state-of-the-art fa- cial privacy-preserving methods in terms of superior privacy protection performance while only incurring negligible face recognition accuracy loss. Moreover, the transferability of AdvFace is validated. 
That is, it can effectively resist different reconstruction networks.
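The iterative procedure sketched in the introduction (ascending the shadow model's reconstruction loss while bounding the latent noise) closely resembles a PGD-style loop. The code below is a hedged sketch of that idea; the step size, iteration count, ℓ∞-style bound, and MSE reconstruction loss are illustrative choices, not the exact settings of AdvFace.

```python
import torch
import torch.nn.functional as F


def make_adversarial_feature(feature, image, shadow_decoder,
                             steps: int = 10, step_size: float = 0.01, eps: float = 0.1):
    """Generate a privacy-preserving feature by maximizing the shadow model's
    reconstruction loss under a bounded latent perturbation (illustrative
    hyper-parameters; not AdvFace's exact configuration)."""
    delta = torch.zeros_like(feature, requires_grad=True)
    for _ in range(steps):
        recon = shadow_decoder(feature + delta)
        loss = F.mse_loss(recon, image)          # reconstruction loss to maximize
        loss.backward()
        with torch.no_grad():
            delta += step_size * delta.grad.sign()   # ascend the loss
            delta.clamp_(-eps, eps)                  # bound the latent noise magnitude
        delta.grad.zero_()
    return (feature + delta).detach()
```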
Wu_Asymmetric_Feature_Fusion_for_Image_Retrieval_CVPR_2023
Abstract
In asymmetric retrieval systems, models with different capacities are deployed on platforms with different computational and storage resources. Despite the great progress, existing approaches still suffer from a dilemma between retrieval efficiency and asymmetric accuracy due to the limited capacity of the lightweight query model. In this work, we propose an Asymmetric Feature Fusion (AFF) paradigm, which advances existing asymmetric retrieval systems by considering the complementarity among different features just at the gallery side. Specifically, it first embeds each gallery image into various features, e.g., local features and global features. Then, a dynamic mixer is introduced to aggregate these features into compact embedding for efficient search. On the query side, only a single lightweight model is deployed for feature extraction. The query model and dynamic mixer are jointly trained by sharing a momentum-updated classifier. Notably, the proposed paradigm boosts the accuracy of asymmetric retrieval without introducing any extra overhead to the query side. Exhaustive experiments on various landmark retrieval datasets demonstrate the superiority of our paradigm.
1. Introduction
Image retrieval [17, 30, 34, 40, 49, 53] has been studied for a long time in the literature. Typically, high-performing image retrieval systems deploy a large powerful model to embed both query and gallery images, which is widely known as symmetric image retrieval. However, in some real-world applications, e.g., mobile search, the query side is constrained by resource limitations and thus cannot meet the overhead of deploying a large model. To this end, the paradigm of asymmetric image retrieval was first proposed in HVS [9] and AML [3], and has attracted increasing attention from the community [26, 38, 43, 56, 62]. In such a paradigm, deep representation models with different capacities are first trained to be compatible and then deployed on platforms with different resources to strike a balance between retrieval accuracy and efficiency, as shown in Fig. 1a.
Figure 1. Illustration of (a) the previous single-feature asymmetric retrieval pipeline [3, 9, 38, 56] and (b) our asymmetric feature fusion paradigm. Due to the limited capacity of the lightweight model, the existing pipeline achieves efficiency for the query side at the cost of retrieval accuracy degradation. In contrast, our approach enhances the existing asymmetric retrieval pipeline from the perspective of gallery feature fusion. For efficient retrieval, a dynamic mixer is introduced to aggregate multiple gallery features into a compact embedding. Query model and mixer are jointly trained with compatible constraints. Our method realizes high efficiency without sacrificing retrieval accuracy.
For an asymmetric retrieval system, the most crucial thing is to ensure that the embedding spaces of different models are well aligned. To this end, BCT [38] first proposes a backward-compatible learning framework, in which the classifier of the gallery model is inherited to guide the learning of the query model. Recently, various efforts have been devoted to improving cross-model feature compatibility in terms of training objectives [3, 26, 51, 56, 57], model structures [8, 9], etc. Despite the great progress, a dilemma
is still unresolved, i.e., the accuracy of asymmetric retrieval is still unsatisfactory compared to that of symmetric retrieval, especially in limited-resource and large-scale scenarios, as shown in Fig. 2. We argue that such a dilemma is due to the low capacity of the lightweight query model, which cannot perfectly achieve feature compatibility with the static powerful gallery model.
Figure 2. Average mAP vs. FLOPs/Model Size of the query model for the ROxf + 1M [33] dataset. The notation format "query model→gallery model" in the legend means embedding queries with the query model and retrieving in a gallery set embedded by the gallery model. A line connecting the dots with one color represents a family of lightweight models with different model sizes. Previous: the latest asymmetric retrieval method CSD [56] is adopted to train the query model with CVNet [21] deployed as the gallery model. Ours: our paradigm utilizes CVNet, Token [54], DELG [4], and DOLG [59] to generate aggregated gallery features and trains the mixer and query model jointly.
To alleviate the above issue, we introduce a new paradigm named Asymmetric Feature Fusion (AFF). It boosts the accuracy of existing asymmetric retrieval systems by considering the complementarity among different features, as shown in Fig. 1b. On the gallery side, it deploys several large powerful models on the cloud to extract diverse features, e.g., local features, which are suitable for capturing local matches, or global features that are effective for holistic semantic matching. For efficient retrieval, a dynamic mixer is further proposed to aggregate diverse gallery features into a compact embedding, which allows efficient vector search [11, 18] to be exploited. As for the query side, queries are embedded with a single lightweight model. This eliminates the time-consuming multiple feature extraction and aggregation processes, realizing a solution suitable for resource-constrained platforms. During training, all the gallery models are fixed, while the mixer and query model are trained jointly by a momentum-updated classifier for achieving feature compatibility.
Compared to previous retrieval approaches, the proposed paradigm has two unique advantages. First, it fuses various features on the gallery side, which notably advances the retrieval accuracy of existing asymmetric retrieval systems. Although the extraction and aggregation processes increase the computational complexity, they are typically performed on resource-rich cloud platforms. In addition, gallery images are embedded offline in advance, so their computational overhead has no influence on the query side. Second, compared with multi-feature fusion methods, our paradigm only deploys a single lightweight model on the query side, which is free of the complex and time-consuming multi-feature extraction and aggregation. Thus, it introduces no extra computational and storage overhead for the query side. Overall, with the proposed asymmetric feature fusion paradigm, our approach achieves high retrieval efficiency and accuracy simultaneously, as shown in Fig. 2. To evaluate our approach, comprehensive experiments are conducted on popular landmark retrieval datasets. The proposed paradigm realizes promising performance improvements for existing asymmetric retrieval systems and leads to state-of-the-art results across the public benchmarks.
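One way to read the "shared momentum-updated classifier" above is as an exponential-moving-average copy of a live classifier that gives the lightweight query model a slowly drifting, stable target. The sketch below illustrates only that idea; the dimensions, the momentum value, and the exact way the two branches consume the classifier are assumptions, not the AFF training recipe.

```python
import torch
import torch.nn as nn


class MomentumClassifier(nn.Module):
    """A live classifier plus an EMA copy, sketched as one possible reading of
    a shared momentum-updated classifier for compatible embedding learning."""

    def __init__(self, dim: int = 512, num_classes: int = 1000, momentum: float = 0.999):
        super().__init__()
        self.live = nn.Linear(dim, num_classes, bias=False)
        self.ema = nn.Linear(dim, num_classes, bias=False)
        self.ema.weight.data.copy_(self.live.weight.data)
        self.ema.weight.requires_grad_(False)
        self.m = momentum

    @torch.no_grad()
    def update(self):
        # exponential moving average of the live classifier weights
        self.ema.weight.mul_(self.m).add_(self.live.weight, alpha=1 - self.m)

    def forward(self, gallery_feat, query_feat):
        # Assumption for illustration: the gallery/mixer branch trains the live
        # classifier, while the query branch is supervised by the EMA copy.
        return self.live(gallery_feat), self.ema(query_feat)


clf = MomentumClassifier()
g_logits, q_logits = clf(torch.randn(4, 512), torch.randn(4, 512))
clf.update()
print(g_logits.shape, q_logits.shape)
```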
Wang_Adaptive_Patch_Deformation_for_Textureless-Resilient_Multi-View_Stereo_CVPR_2023
Abstract
In recent years, deep learning-based approaches have shown great strength in multi-view stereo because of their outstanding ability to extract robust visual features. However, most learning-based methods need to build the cost volume and increase the receptive field enormously to get a satisfactory result when dealing with large-scale textureless regions, consequently leading to prohibitive memory consumption. To ensure the method is both memory-friendly and textureless-resilient, we innovatively transplant the spirit of deformable convolution from deep learning into the traditional PatchMatch-based method. Specifically, for each pixel with matching ambiguity (termed unreliable pixel), we adaptively deform the patch centered on it to extend the receptive field until covering enough correlative reliable pixels (without matching ambiguity) that serve as anchors. When performing PatchMatch, constrained by the anchor pixels, the matching cost of an unreliable pixel is guaranteed to reach the global minimum at the correct depth and therefore increases the robustness of multi-view stereo significantly. To detect more anchor pixels to ensure better adaptive patch deformation, we propose to evaluate the matching ambiguity of a certain pixel by checking the convergence of the estimated depth as optimization proceeds. As a result, our method achieves state-of-the-art performance on ETH3D and Tanks and Temples while preserving low memory consumption. Code is available at https://github.com/whoiszzj/APD-MVS.
1. Introduction
Multi-view stereo (MVS) is one of the core tasks in computer vision, which aims to recover the 3D geometry of a scene using images captured from different viewpoints. It has been playing an essential role in many downstream tasks, such as autonomous driving and virtual reality. Plentiful ideas stem from this vein [5, 13, 20, 31, 33] and continuously boost the reconstruction performance to a new level. These prior arts can be roughly divided into traditional and deep learning-based methods.
Figure 1. Comparison with the latest learning-based methods [6, 19, 27–29, 32] and traditional methods [12, 36, 37, 39] on Tanks and Temples [10] and ETH3D [23]. When comparing memory cost, we set the number of source images to 10 for all methods and the image size 6,221×4,146 (ETH3D) as 100% resolution (8.04% corresponds to Tanks and Temples). Note that learning-based methods train their models on DTU [1] or BlendedMVS [44] and only regard the train set of ETH3D as one of their test sets.
Many existing newly-proposed traditional MVS methods [22, 37, 39, 47] are extended versions of PatchMatch [2] (PM), which calculate the matching cost between a fixed-size reference patch and patches in source images according to a plane hypothesis. These PM-based methods avoid the construction of the cost volume as they employ a propagation and local refinement strategy to find proper matches and hence require little memory. Nevertheless, according to [14], when a patch lies in a textureless region, the matching cost will lose credibility since there is no useful feature information in the receptive field. To mitigate this problem, attempts [14, 30, 37, 40] either downsample images or use multiple window sizes to increase the receptive field. However, they can only handle small areas of textureless regions well. To better cope with large-scale textureless regions, methods such as ACMP [39], PCF-MVS [12], and ACMMP [36] introduce a coarse fitting plane hypothesis to the textureless region. Nevertheless, such an approach suffers from gradual deviation from the provided plane hypothesis, leading to inaccurate depth estimation.
On the contrary, deep learning-based methods [15–17, 34, 38] usually suffer less from the above issue. Benefiting from the prevalent application of the convolution operation, the receptive field of these methods is much larger than that of traditional ones. AA-RMVSNet [32] and TransMVSNet [6] further expand the receptive field by introducing deformable convolution [4]. As the receptive field increases, unreliable pixels can obtain adequate geometrical information from surrounding reliable pixels, which results in better depth estimation. Nevertheless, as shown in Fig. 1, a larger receptive field results in more memory consumption, making it hard for these methods to handle datasets with large-scale textureless regions or high-resolution images using mainstream GPU devices. Although several recent works have endeavored to reduce the memory consumption [19, 27, 28], the results are still not satisfactory compared with traditional methods [36, 39].
To develop a memory-friendly solution that can also handle large-scale textureless regions, in this paper, we transplant the spirit of deformable convolution to a traditional PM-based MVS pipeline. Specifically, for each unreliable pixel, we adaptively deform its corresponding patch to extend the receptive field until covering enough reliable pixels, as shown in Fig. 2. We then use RANSAC to filter out unrelated reliable pixels (belonging to different geometry hypotheses or gathered due to occlusion). The residual reliable pixels serve as anchor pixels for the deformable patch. Then we conduct PM based on the widely-used normalized cross-correlation (NCC) metric within this deformable patch. As demonstrated in Fig. 2, the profile of matching cost using our deformable PM reaches a salient single valley at the ground-truth depth and hence guides the unreliable pixels to find a correct match.
Figure 2. The right image is a demo of adaptive patch deformation. The green point represents the center pixel, the blue points around it represent the conventional patch, and the red points form the receptive field of our deformable patch. The left profile shows matching costs around the ground truth (green dashed line). Compared with conventional PM, our deformable PM has significant convergence performance around the ground truth for the unreliable pixel.
One remaining and non-trivial issue that affects the success of our adaptive patch deformation is how to evaluate pixel reliability. Many existing approaches [14, 21, 40] rely solely on the pixel's intensity, which is unreliable when facing repetitive texture or drastic changes in illumination that can also cause matching ambiguity. Others [36, 39] simply set a threshold on the pixel's matching cost to evaluate reliability. However, as mentioned before, the matching cost is unreliable in textureless regions, making it hard to set a proper threshold. Instead, we propose to evaluate the reliability of pixels by checking the convergence of the estimated depth as optimization proceeds. Specifically, in each iteration, we use conventional PM to calculate the matching cost of each pixel within a neighboring window of the current depth and form a matching cost profile. Then we evaluate pixel reliability by analyzing the geometric features of the profile, including local and global minima. Our evaluation approach can help to find more anchor pixels while maintaining their credibility, bringing better adaptive patch deformation. In summary, our contributions are as follows:
• For PM-based MVS, we propose to adaptively deform the patch of an unreliable pixel when computing the matching cost, which increases the receptive field when facing textureless regions to ensure robust matching.
• We propose to detect reliable pixels by checking the convergence of matching cost profiles, maintaining the accuracy of detection while being able to find more anchor pixels, which ensures better adaptive patch deformation.
• We realize a PM-based MVS method, APD-MVS, which adopts our adaptive patch deformation and an NCC-based matching metric. Our method achieves state-of-the-art results on the ETH3D and Tanks and Temples datasets with lower memory consumption.
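For reference, the bare NCC matching metric mentioned above can be written in a few lines. The anchor-based sampling, weighting, and multi-view aggregation of APD-MVS are omitted, so treat this only as the scoring core under those simplifying assumptions.

```python
import numpy as np


def ncc(ref_samples: np.ndarray, src_samples: np.ndarray, eps: float = 1e-6) -> float:
    """Normalized cross-correlation between intensity samples gathered from a
    (possibly deformed) reference patch and the corresponding samples in a
    source image, warped by the current plane hypothesis (warping omitted)."""
    r = ref_samples - ref_samples.mean()
    s = src_samples - src_samples.mean()
    return float((r * s).sum() / (np.sqrt((r * r).sum() * (s * s).sum()) + eps))


# toy usage: 64 sampled intensities per patch
print(ncc(np.random.rand(64), np.random.rand(64)))
```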
Wang_Look_Before_You_Match_Instance_Understanding_Matters_in_Video_Object_CVPR_2023
Abstract
Exploring dense matching between the current frame and past frames for long-range context modeling, memory-based methods have demonstrated impressive results in video object segmentation (VOS) recently. Nevertheless, due to the lack of instance understanding ability, the above approaches are oftentimes brittle to large appearance variations or viewpoint changes resulting from the movement of objects and cameras. In this paper, we argue that instance understanding matters in VOS, and integrating it with memory-based matching can enjoy the synergy, which is intuitively sensible from the definition of the VOS task, i.e., identifying and segmenting object instances within the video. Towards this goal, we present a two-branch network for VOS, where the query-based instance segmentation (IS) branch delves into the instance details of the current frame and the VOS branch performs spatial-temporal matching with the memory bank. We employ the well-learned object queries from the IS branch to inject instance-specific information into the query key, with which the instance-augmented matching is further performed. In addition, we introduce a multi-path fusion block to effectively combine the memory readout with multi-scale features from the instance segmentation decoder, which incorporates high-resolution instance-aware features to produce final segmentation results. Our method achieves state-of-the-art performance on DAVIS 2016/2017 val (92.6% and 87.1%), DAVIS 2017 test-dev (82.8%), and YouTube-VOS 2018/2019 val (86.3% and 86.3%), outperforming alternative methods by clear margins.
1. Introduction
Video object segmentation aims to identify and segment the specific objects in a video sequence, which has very broad applications, e.g., interactive video editing and autonomous driving. This work focuses on the semi-supervised setting where the annotation of the first frame is given. Starting from the Space-Time Memory network (STM) [46], memory-based methods [13–15, 25, 36, 41, 47, 53, 58] have almost dominated this field due to their superior performance and simplicity. STM [46] and its variants [25, 35, 65] typically build a feature memory to store the past frames as well as corresponding masks, and perform dense matching between the query frame and the memory to separate targeted objects from the background.
Figure 1. J&F-time curve of XMem [13], a state-of-the-art memory-based VOS model, and our proposed method. XMem will suffer from a distinct accuracy degradation when the appearance of the target object (e.g., the pose of the dancing person) changes dramatically compared to the reference frame. Comparatively, our approach is more robust to this challenging case.
Despite the prominent success achieved, there exists a non-negligible limitation for the above approaches, i.e., the object deformation and large appearance variations resulting from the motion of camera and objects will inevitably give rise to the risk of false matches [13, 15, 46], thus making them struggle to generate accurate masks. We visualize the J&F-time curve of XMem [13], a state-of-the-art memory-based VOS model, on a representative video from DAVIS 2017 in Figure 1. It can be seen that, when the target object undergoes a distinct pose change compared to the first reference frame, XMem [13] misidentifies another person wearing the same color as the foreground and suffers from drastic performance degradation.
Figure 2. A conceptual introduction on how humans address the VOS task. For the current frame of a video stream, humans first distinguish between different instances and then match them with the target object(s) in memory.
In contrast, humans are capable of avoiding such mistakes and achieving consistently accurate matching. This gap motivates us to reflect on how we humans resolve the VOS task. Intuitively, given the current frame in a video sequence, humans typically first distinguish between different instances within it by identifying which instance each pixel belongs to. After that, the instance matching with the target object(s) in memory is conducted to obtain the final results (see Figure 2 for a conceptual illustration). In fact, this intuition is also consistent with the definition of VOS itself, i.e., identifying (matching) and segmenting objects (instance understanding). Moreover, in the absence of instance understanding, it is theoretically difficult to generate accurate predictions for regions that are invisible in the reference frame by pure matching. Inspired by this, we argue that instance understanding is critical to video object segmentation, which could be incorporated with memory-based matching to enjoy the synergy.
More specifically, we aim to derive instance-discriminative features that are able to distinguish different instances. Equipped with these features, we then perform semantic matching with the memory bank to effectively associate the target object(s) with specific instance(s) in the current frame. In this spirit, we present a two-branch network, ISVOS, for semi-supervised VOS, which contains an instance segmentation (IS) branch to delve into the instance details of the current frame and a video object segmentation branch that resorts to external memory for spatial-temporal matching. The IS branch is built upon a query-based instance segmentation model [10] and supervised with instance masks to learn instance-specific representations. Note that ISVOS is a generic framework and the IS branch can be easily replaced with more advanced instance understanding models. The video object segmentation (VOS) branch, on the other hand, maintains a memory bank to store the features of past frames and their predictions. We compare the query key of the current frame and the memory key from the memory bank for affinity calculation following [13, 15, 46, 53]. Motivated by recent approaches that use learnable queries serving as region proposal networks to identify instances in images [10, 11, 20, 71], we employ object queries from the IS branch to inject instance-specific information into our query key, with which the instance-augmented matching is performed. After that, the readout features are produced by aggregating the memory value with the affinity matrix. Moreover, in order to make use of the fine-grained instance details preserved in high-resolution instance-aware features, we further combine the multi-scale features from the instance segmentation decoder with the memory readout through a carefully designed multi-path fusion block to finally generate the segmentation masks.
We conduct experiments on the standard DAVIS [49, 50] and YouTube-VOS [67] benchmarks. The results demonstrate that our ISVOS can achieve state-of-the-art performance on both single-object (i.e., 92.6% in terms of J&F on the DAVIS 2016 validation split) and multi-object benchmarks (i.e., 87.1% and 82.8% on the DAVIS 2017 validation and test-dev splits, 86.3% and 86.3% on the YouTube-VOS 2018 & 2019 validation splits) without post-processing.
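The affinity-and-readout step described above is, at its core, the dot-product memory read-out popularized by STM-style models. The snippet below is a minimal sketch of that step with simplified shapes and similarity; it does not include the instance-augmented query key or the multi-path fusion that ISVOS adds on top.

```python
import torch


def memory_readout(query_key, memory_key, memory_value):
    """Minimal space-time memory read-out (simplified sketch).

    query_key:    (B, C, HW_q)   features of the current frame
    memory_key:   (B, C, T*HW_m) keys of stored past frames
    memory_value: (B, D, T*HW_m) values of stored past frames and masks
    """
    affinity = torch.einsum("bcq,bcm->bqm", query_key, memory_key)      # (B, HW_q, THW_m)
    affinity = torch.softmax(affinity / query_key.shape[1] ** 0.5, dim=-1)
    readout = torch.einsum("bqm,bdm->bdq", affinity, memory_value)      # (B, D, HW_q)
    return readout


out = memory_readout(torch.rand(1, 64, 900), torch.rand(1, 64, 3600), torch.rand(1, 256, 3600))
print(out.shape)  # torch.Size([1, 256, 900])
```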
Wang_VideoMAE_V2_Scaling_Video_Masked_Autoencoders_With_Dual_Masking_CVPR_2023
Abstract
Scale is the primary factor for building a powerful foundation model that could well generalize to a variety of downstream tasks. However, it is still challenging to train video foundation models with billions of parameters. This paper shows that video masked autoencoder (VideoMAE) is a scalable and general self-supervised pre-trainer for building video foundation models. We scale the VideoMAE in both model and data with a core design. Specifically, we present a dual masking strategy for efficient pre-training, with an encoder operating on a subset of video tokens and a decoder processing another subset of video tokens. Although VideoMAE is very efficient due to the high masking ratio in the encoder, masking the decoder can still further reduce the overall computational cost. This enables the efficient pre-training of billion-level models in video. We also use a progressive training paradigm that involves an initial pre-training on a diverse multi-sourced unlabeled dataset, followed by a post-pre-training on a mixed labeled dataset. Finally, we successfully train a video ViT model with a billion parameters, which achieves a new state-of-the-art performance on the datasets of Kinetics (90.0% on K400 and 89.9% on K600) and Something-Something (68.7% on V1 and 77.0% on V2). In addition, we extensively verify the pre-trained video ViT models on a variety of downstream tasks, demonstrating their effectiveness as a general video representation learner.
1. Introduction
Effectively pre-training large foundation models [5] on huge amounts of data is becoming a successful paradigm in learning generic representations for multiple data modalities (e.g., language [6, 16], audio [13, 50], image [3, 22, 79], video [18, 63, 76], vision-language [27, 55]). These foundation models could be easily adapted to a wide range of downstream tasks through zero-shot recognition, linear probe, prompt tuning, or fine tuning. Compared with specialized models for a single task, they exhibit excellent generalization capabilities and have become the main driving force for advancing many areas in AI.
Figure 1. VideoMAE with dual masking. To improve the overall efficiency of computation and memory in video masked autoencoding, we propose to mask the decoder as well and devise the dual masking strategy. Like the encoder, we also apply a masking map to the decoder and simply reconstruct a subset of pixel cubes selected by the running cell masking. The final reconstruction loss only applies to the invisible tokens dropped by the encoder.
For vision research, many efforts have been devoted to developing effective pre-trained models. Among them, Transformer [65] with masked autoencoding [16] is becoming a conceptually simple yet effective self-supervised visual learner (e.g., BEiT [3], SimMIM [79], MAE [22] for images, and MaskFeat [76], VideoMAE [63], MAE-ST [18] for videos). Meanwhile, based on the results in language models [6], scaling model capacity and data size are important ingredients for its remarkable performance improvement. However, for pre-trained vision models, very few works [44] have tried to scale up this masked autoencoder pre-training to billion-level models in the image domain, partially due to the high data dimension and the high computational overhead. This issue is even more serious for scaling up video masked autoencoder pre-training owing to its extra time dimension and strong temporal variations.
Following the promising findings in languages and images, we aim to study the scaling property of video
To further improve its pre-training ef- ficiency, we find video data redundancy can be used to not only mask a high portion of cubes in the encoder, but also drop some cubes in the decoder. This solution yields higher pre-training efficiency and creates a similarly challenging and meaningful self-supervised task. In practice, it will in- crease the pre-training batchsize and reduce the pre-training time by a third with almost no performance drop. Second, MAE is still demanding for large data [80] and billion-level video transformer tends to overfit on relatively small data. Unlike images, the existing public video dataset is much smaller. For example, there are only 0.24M videos in the Kinetics400 dataset [28], while the ImageNet-22k dataset [15] has 14.2M images, let alone those publicly in- accessible image datasets such as JFT-3B [84]. Therefore, we need to come up with new ways to build a larger video pre-training dataset to well support the billion-level video transformer pre-training. We show that simply mixing the video datasets from multiple resources could produce an ef- fective and diverse pre-training dataset for VideoMAE and improve its downstream performance of pre-trained models. Finally, it is still unknown how to adapt the billion- level pre-trained model by VideoMAE. Masked autoencod- ing is expected to learn invariant features that provide a fa- vored initialization for vision transformer fine-tuning [30]. However, directly fine-tuning billion-level pre-trained mod- els on a relatively small video dataset (e.g., 0.24M videos) might be suboptimal, as the limited labeled samples might lead to overfitting issue in fine-tuning. In fact, in image domain, the intermediate fine-tuning technique [3, 44] has been employed to boost the performance of masked pre- trained models. We show that collecting multiple labeled video datasets and building a supervised hybrid dataset canact as a bridge between the large-scale unsupervised dataset and the small-scale downstream target dataset. Progressive fine-tuning of the pre-trained models through this labeled hybrid dataset could contribute to higher performance in the downstream tasks. Based on the above analysis, we present a simple and efficient way to scale VideoMAE to billion-level ViT mod- els on a dataset containing million-level pre-training videos. Our technical improvement is to introduce the dual mask- ing strategy for masked autoencoder pipeline as shown in Figure 1. In addition to the masking operation in en- coder, we propose to mask decoder as well based on the data redundancy prior in video. With this dual-masked VideoMAE, we follow the intermediate fine-tuning in im- ages [3, 44], and use a progressive training pipeline to per- form the video masked pre-training on the million-level un- labeled video dataset and then post-pre-training on the la- beled hybrid dataset. These core designs contribute to an ef- ficient billion-level video autoencoding framework, termed asVideoMAE V2 . Within this framework, we successfully train the first video transformer model with one billion pa- rameters , which attains a new state-of-the-art performance on a variety of downstream tasks, including action recog- nition [20, 28, 32, 57], spatial action detection [21, 34], and temporal action detection [26, 43].
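As a concrete illustration of the dual masking described above, the following minimal Python sketch tracks which tokens the encoder processes and which dropped tokens the decoder is asked to reconstruct. The 90%/50% ratios and the random (rather than structured "running cell") selection are illustrative assumptions, not the paper's exact settings.

```python
# Minimal sketch of dual-masking token bookkeeping (illustrative ratios).
import numpy as np

def dual_masking(num_tokens, enc_mask_ratio=0.90, dec_keep_ratio=0.50, seed=0):
    rng = np.random.default_rng(seed)
    order = rng.permutation(num_tokens)

    num_visible = int(num_tokens * (1.0 - enc_mask_ratio))
    enc_visible = order[:num_visible]      # tokens the encoder actually processes
    enc_masked = order[num_visible:]       # tokens dropped before the encoder

    # The decoder does not reconstruct every dropped token: only a subset is
    # kept as reconstruction targets, so the decoder also runs on fewer tokens.
    num_dec_targets = int(len(enc_masked) * dec_keep_ratio)
    dec_targets = enc_masked[:num_dec_targets]
    return enc_visible, dec_targets

enc_visible, dec_targets = dual_masking(num_tokens=8 * 14 * 14)
print(f"encoder tokens: {len(enc_visible)}, decoder reconstruction targets: {len(dec_targets)}")
```

Because both the encoder and the decoder operate on reduced token sets, the attention cost of both stages shrinks, which is what allows larger pre-training batches in practice.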
MCF: Mutual Correction Framework for Semi-Supervised Medical Image Segmentation (Wang et al., CVPR 2023)
Abstract Semi-supervised learning is a promising method for medical image segmentation under limited annotation. However, the model cognitive bias impairs the segmentation performance, especially for edge regions. Furthermore, current mainstream semi-supervised medical image seg- mentation (SSMIS) methods lack designs to handle model bias. The neural network has a strong learning ability, but the cognitive bias will gradually deepen during the training, and it is difficult to correct itself. We propose a novel mutual correction framework (MCF) to explore network bias correction and improve the performance of SSMIS. Inspired by the plain contrast idea, MCF intro- duces two different subnets to explore and utilize the dis- crepancies between subnets to correct cognitive bias of the model. More concretely, a contrastive difference review (CDR) module is proposed to find out inconsistent pre- diction regions and perform a review training. Addition- ally, a dynamic competitive pseudo-label generation (DC- PLG) module is proposed to evaluate the performance of subnets in real-time, dynamically selecting more reliable pseudo-labels. Experimental results on two medical im- age databases with different modalities (CT and MRI) show that our method achieves superior performance compared to several state-of-the-art methods. The code will be avail- able at https://github.com/WYC-321/MCF .
1. Introduction
Making pixel-level annotation is difficult and time-consuming, especially for medical images. Semi-supervised learning is a promising approach for processing images with limited supervised data [2, 3, 14, 17, 23, 30, 31]. In recent years, semi-supervised methods based on consistency regularization [21, 27] have attracted the attention of researchers and are one of the mainstream techniques, especially in SSMIS.
Figure 1. A brief description of the process of MT-like methods (a) and our proposed MCF (b). (c) Biased predictions of MT and MCF during training; MCF reduces more biased predictions. (d) The red masks represent the model's wrong predictions, and the white numbers represent the number of wrong pixels.
These methods usually include two subnets, and the general process is to add random perturbations to the same samples and force the subnets to produce consistent prediction results for these perturbed inputs. Mean Teacher (MT) [22] is a typical method among them and has inspired a series of SSMIS work [13, 33]. Although these methods have achieved promising results, they ignore the impact of cognitive biases in the model. Cognitive bias is the phenomenon in which the model persists in mispredictions, caused by overfitting to wrong supervision signals [12]. There is evidence that cognitive biases reduce the performance of consistency-regularization-based methods [1]. We take MT-like methods as an example to analyze this issue. As shown in Fig. 1(a), the methods based on the MT framework have three features: (1) The model consists of teacher and student networks with a shared structure. (2) The student network parameters θ are updated through stochastic gradient descent, while the teacher network parameters θ′ are updated from the student network using an Exponential Moving Average (EMA) as in Eq. (1):

θ′_t = α θ′_(t−1) + (1 − α) θ_t,   (1)

where α is the EMA decay that controls the updating rate. (3) Consistency regularization is implemented to encourage the subnets to produce consistent predictions. Characteristics (1) and (2) naturally give the model a tendency to output consistent predictions, and explicit consistency constraints provide a supervision signal for unlabeled data. Therefore, the above three features make model training simpler and accelerate model convergence. However, three limitations are also hidden in it: (1) Structural sharing among subnets reduces model variability. (2) Due to the EMA parameter update, the teacher network is a weighted mixture of the historical states of the student network; therefore, the performance of the teacher network is constrained by the student network. (3) Consistency regularization can also be regarded as a labeling strategy in which the subnets generate pseudo-labels for each other, and the quality of the pseudo-labels greatly affects the performance of the model. Combining limitations (1) and (2), since the pseudo-labels come from a mixture of historical states with the same architecture as the student network, consistency-based pseudo-label generation methods are more prone to trap the network in cognitive biases and make it difficult to correct mispredictions. In addition, these limitations make the model waste the potential of the multi-subnet architecture.
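For concreteness, a minimal sketch of the EMA teacher update in Eq. (1) is given below; it assumes PyTorch modules whose parameters iterate in the same order, and the decay value is illustrative.

```python
# Minimal sketch of the Mean-Teacher EMA update in Eq. (1).
import copy
import torch

@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module, alpha: float = 0.99):
    # theta'_t = alpha * theta'_{t-1} + (1 - alpha) * theta_t
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(alpha).add_(s_param, alpha=1.0 - alpha)

student = torch.nn.Linear(4, 2)
teacher = copy.deepcopy(student)   # the teacher starts as a copy of the student
ema_update(teacher, student, alpha=0.99)
```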
To further demonstrate the cognitive bias of the model, we test MT and our proposed MCF every 15 iterations and record the number of erroneous pixels, as shown in Fig. 1 (c). It can be seen that as the training progresses, although the number of bias/wrong pixels continues to decrease, the model overfits some bias predictions that are difficult to cor- rect on its own. The visualization result is shown in Fig. 1 (d). The red mask shows the model’s bias predictions, and the white number shows the number of bias pixels. These bias pixels are mainly located in the target edge region, thus reducing the biased prediction is beneficial to improve the accuracy of edge segmentation. Finally compared with MT, MCF reduces more biased predictions under the same train- ing steps. In summary, our goal is to find a mechanism for the net- work to be aware of cognitive biases and correct them. To this end, we propose a mutual correction framework in the semi-supervised medical image segmentation for the explo- ration of model bias correction. We think that while the network is highly capable of learning, it is difficult to cor- rect biases on its own. Inspired by the idea of contrast,MCF consists of two distinct structural subnets with in- dependent parameter updates, which learn to correct each other through a strong inter-subnet interaction. Specifically, MCF proposes contrastive difference review (CDR) and dy- namic competitive pseudo-label generation (DCPLG) for labeled and unlabeled data training, respectively. The CDR takes prediction discrepancies of subnets as potential bias areas and guiding the subnets to correct them. Furthermore, we observe that one of the differences between the medical image segmentation databases and natural image databases is that all medical images are related to the target object. Therefore, it is reasonable to evaluate the performance of the subnets on a small amount of labeled data. Based on this, unlike MT-like methods [14, 23], MCF does not bind teacher or student roles to fixed subnets, but instead pro- poses DCPLG to dynamically evaluate and select pseudo- label generation networks for more reliable label propaga- tion. The main contributions of this work are as follows: We explore the problem of model bias correction and propose a new framework MCF for semi-supervised medical image segmentation. A CDR module is proposed to guide the network to pay attention and correct its own potential bias. Combined with the characteristics of medical image segmentation databases, DCPLG is proposed to obtain more reliable pseudo-labels. We evaluate the proposed MCF framework on semi- supervised medical image segmentation with both CT and MRI modalities. Experiments verify the effectiveness of this framework, showing that MCF outperforms the SOTA method, especially in edge segmentation accuracy.
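The dynamic competitive pseudo-label generation idea lends itself to a short sketch: both subnets are scored on a labeled batch and the currently better one produces pseudo-labels for the unlabeled batch. The `dice_score` helper, the sigmoid activation, and the 0.5 threshold are illustrative assumptions, not necessarily the paper's exact implementation.

```python
# Minimal sketch of DCPLG-style pseudo-label generation (illustrative details).
import torch

@torch.no_grad()
def dcplg(subnet_a, subnet_b, labeled_x, labeled_y, unlabeled_x, dice_score):
    # Evaluate both subnets on labeled data and let the better one generate
    # pseudo-labels for the unlabeled batch in this training iteration.
    score_a = dice_score(subnet_a(labeled_x), labeled_y)
    score_b = dice_score(subnet_b(labeled_x), labeled_y)
    generator = subnet_a if score_a >= score_b else subnet_b
    pseudo_labels = (torch.sigmoid(generator(unlabeled_x)) > 0.5).float()
    return pseudo_labels, generator
```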
CAP-VSTNet: Content Affinity Preserved Versatile Style Transfer (Wen et al., CVPR 2023)
Abstract Content affinity loss including feature and pixel affinity is a main problem which leads to artifacts in photorealis- tic and video style transfer. This paper proposes a new framework named CAP-VSTNet, which consists of a new re- versible residual network and an unbiased linear transform module, for versatile style transfer. This reversible resid- ual network can not only preserve content affinity but not introduce redundant information as traditional reversible networks, and hence facilitate better stylization. Empow- ered by Matting Laplacian training loss which can address the pixel affinity loss problem led by the linear transform, the proposed framework is applicable and effective on ver- satile style transfer. Extensive experiments show that CAP- VSTNet can produce better qualitative and quantitative re- sults in comparison with the state-of-the-art methods.
1. Introduction Photorealistic style transfer aims to reproduce content image with the style from a reference image in a photore- *Corresponding Authoralistic way. To be photorealism, the stylized image should preserve clear content detail and consistent stylization of the same semantic regions. Content affinity preservation, including feature and pixel affinity preservation [23,25,28], is the key to achieve both clear content detail and consistent stylization in the transfer. The framework of a deep learning based photorealis- tic style transfer generally uses such an architecture: an encoder module extracting content and style features, fol- lowed by a transformation module to adjust features statis- tics, and finally a decoder module to invert stylized feature back to stylized image. Existing photorealistic methods typ- ically employ pre-trained VGG [30] as encoder. Since the encoder is specifically designed to capture object-level in- formation for the classification task, it inevitably results in content affinity loss. To reduce the artifacts, existing meth- ods either use skip connection modules [2, 14, 40] or build a shallower network [8, 23, 39]. However, these strategies, limited by the image recovery bias, cannot achieve a perfect content affinity preservation on unseen images. In this work, rather than use the traditional encoder- transformation-decoder architecture, we resort to a re- versible framework [1] based solution called CAP-VSTNet, which consists of a specifically designed reversible residual This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 18300 network followed by an unbiased linear transform module based on Cholesky decomposition [19] that performs style transfer in the feature space. The reversible network takes the advantages of the bijective transformation and can avoid content affinity information loss during forward and back- ward inference. However, directly using the reversible net- work cannot work well on our problem. This is because re- dundant information will accumulate greatly when the net- work channel increases. It will further lead to content affin- ity loss and noticeable artifacts as the transform module is sensitive to the redundant information. Inspired by knowl- edge distillation methods [8, 35], we improve the reversible network and employ a channel refinement module to avoid the redundant information accumulation. We achieve this by spreading the channel information into a patch of the spatial dimension. In addition, we also introduce cycle con- sistency loss in CAP-VSTNet to make the reversible net- work robust to small perturbations caused by numerical er- ror. Although the unbiased linear transform based on Cholesky decomposition [19] can preserve feature affinity, it cannot guarantee pixel affinity. Inspired by [25, 28], we introduce Matting Laplacian [22] loss to train the network and preserve pixel affinity. Matting Laplacian [22] may result in blurry images when it is used with another net- work like one with an encoder-decoder architecture. But it does not have this issue in CAP-VSTNet, since the bijective transformation of reversible network theoretically requires all information to be preserved. CAP-VSTNet can be flexibly applied to versatile style transfer, including photorealistic and artistic image/video style transfer. 
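A minimal sketch of an unbiased linear style transform built from Cholesky factors is shown below. It is one standard way to whiten content features and re-color them with style statistics; the paper's exact transform module may differ in details, so treat this as an assumption-laden illustration.

```python
# Minimal sketch of a Cholesky-based whitening/coloring transform.
import torch

def cholesky_style_transform(content, style, eps=1e-5):
    """content, style: (C, N) feature matrices (channels x flattened pixels)."""
    def center(f):
        mean = f.mean(dim=1, keepdim=True)
        fc = f - mean
        cov = fc @ fc.t() / (f.shape[1] - 1) + eps * torch.eye(f.shape[0])
        return fc, mean, cov

    fc_c, _, cov_c = center(content)
    _, mean_s, cov_s = center(style)

    l_c = torch.linalg.cholesky(cov_c)   # cov_c = L_c L_c^T
    l_s = torch.linalg.cholesky(cov_s)   # cov_s = L_s L_s^T
    whitened = torch.linalg.solve_triangular(l_c, fc_c, upper=False)  # identity covariance
    return l_s @ whitened + mean_s       # re-colored with the style statistics
```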
We conduct extensive experiments to evaluate its performance. The results show it can produce better qualitative and quantitative results in comparison with the state-of-the-art image style transfer methods. We show that with minor loss function modifications, CAP-VSTNet can perform stable video style transfer and outperforms existing methods.
Towards Effective Visual Representations for Partial-Label Learning (Xia et al., CVPR 2023)
Abstract Under partial-label learning (PLL) where, for each training instance, only a set of ambiguous candidate labels containing the unknown true label is accessible, contrastive learning has recently boosted the performance of PLL on vi- sion tasks, attributed to representations learned by contrast- ing the same/different classes of entities. Without access to true labels, positive points are predicted using pseudo- labels that are inherently noisy, and negative points often require large batches or momentum encoders, resulting in unreliable similarity information and a high computational overhead. In this paper, we rethink a state-of-the-art con- trastive PLL method PiCO [24], inspiring the design of a simple framework termed PaPi ( Partial-label learning with a guided Prototyp ical classifier), which demonstrates sig- nificant scope for improvement in representation learning, thus contributing to label disambiguation. PaPi guides the optimization of a prototypical classifier by a linear classi- fier with which they share the same feature encoder, thus ex- plicitly encouraging the representation to reflect visual sim- ilarity between categories. It is also technically appealing, as PaPi requires only a few components in PiCO with the opposite direction of guidance, and directly eliminates the contrastive learning module that would introduce noise and consume computational resources. We empirically demon- strate that PaPi significantly outperforms other PLL meth- ods on various image classification tasks.
1. Introduction The excellent performance of modern deep neural net- works (DNNs) is attributed to the large-scale fully super- vised training data, but the requirement for high-quality data poses a challenge for the practical application. As a result, non-expert but cheap labelers are often resorted as an ap- pealing substitute, which inevitably leads to low-quality la- beled data due to their expertise limitation. A typical sit- * Equal contributions. †Corresponding author. Eagle Vulture HawkSupervised Learning Partial -Label LearningTrue label Candidate label set… Figure 1. An image of “Eagle” in PLL is equipped with a candidate label set. PLL learns from such ambiguous supervision, in contrast to its supervised counterpart where only the true label is chosen. uation is that the labelers have difficulty in making an ac- curate judgement about an instance from multiple ambigu- ous labels, and therefore choose multiple likely ones. For example, in Fig. 1, it can be difficult for labelers without specialist knowledge to identify an Eagle from a Hawk, so both “Eagle” and “Hawk” are labeled as possible candi- dates for the true label. Learning from such training in- stances with a set of possible candidate labels, where only one fixed but unknown is true, is known as partial-label learning (PLL) [3, 4, 15, 22–24, 32, 38, 40]. This problem naturally arises in various important applications in the real world such as web mining [13] and image annotation [1]. Research into PLL dates back some twenty years and a number of practical approaches have been proposed, which can be divided into identification-based strategies [9, 20, 32, 38, 41] and average-based strategies [3, 8], depending on how they treat candidate labels. Recently, DNNs bring the research of PLL into a new era [4, 15, 26–28, 36, 37], among which PiCO [24] has achieved state-of-the-art per- formance on multiple benchmarks. It introduces a con- trastive learning module into PLL that uses predictions of onelinear classifier to select pseudo positive samples for each anchor point and maintains a queue of negative sam- ples. Meanwhile, a momentum encoder is used to improve consistency. In addition, PiCO adds a prototypical classifier module (called prototype-based in the original) to guide the update of the linear classifier, which is based on the idea that there exists an embedding space where points from the same class cluster around its prototype [19]. PiCO claims credit for its success to the mutual benefit of contrastive learning and prototype-based label disambiguation. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 15589 050100 150 200 250 300 350 400 450 500 Epochs60.065.070.075.080.085.090.095.0100.0Accuracy(%) PiCO PiCO-v2 PiCO-v3(a) CIFAR-10 ( q= 0.7) 050100 150 200 250 300 350 400 450 500 Epochs20.030.040.050.060.070.080.0Accuracy(%) PiCO PiCO-v2 PiCO-v3 (b) CIFAR-100 ( q= 0.2) 050100 150 200 250 300 350 400 450 500 Epochs0510152025303540Number of correct predicions#linear - #prototypical #prototypical - #linear (c) CIFAR-10 ( q= 0.7) 050100 150 200 250 300 350 400 450 500 Epochs051015202530Number of correct predicions#linear - #prototypical #prototypical - #linear (d) CIFAR-100 ( q= 0.2) Figure 2. Visualizations of impacts of the unreliability of pseudo positives and the improper direction of disambiguation guidance in PiCO. 
In (a)-(b), PiCO-v2 means positives are selected based on fully supervised information, i.e., true labels are known by the contrastive learning module. Further, PiCO-v3 removes the guidance of prototypical classifier to linear classifier, such that the linear classifier performs self- teaching. The red lines in (c)-(d) indicate the number of samples that were correctly classified by linear classifier and incorrectly classified by prototypical classifier per mini-batch, and the green lines are the opposite. The first 100 epochs shown in (d) are in a warm-up period. qmeans the flipping probability of each incorrect label, which will be introduced in Sec. 3.1. In this paper, we rethink the two modules in PiCO and empirically point out that they do not work as well in practice as one might think due to the unreliability of the pseudo positives and the improper direction of disambigua- tion guidance. Fig. 2a and Fig. 2b show accuracy of three versions of PiCO. Fig. 2c and Fig. 2d show the performance differences between the linear and prototypical classifier during training. Fig. 2 delivers two important messages: (1) noisy pseudo-labels do lead to significant performance degradation, and (2) the phenomenon “poor teacher teaches good student” possibly happens. Specifically, the good stu- dent, the linear classifier, always made more correct pre- dictions than its teacher, the prototypical classifier, at the beginning. In some cases, due to the forced direction of guidance, the teacher performed better than the student for a while, but soon the teacher had nothing new to teach the student, shown in Fig. 2c. And sometimes the student’s ad- vantage was even maintained until convergence as shown in Fig. 2d. These also explain the significant improvement compared to PiCO-v2 after PiCO-v3 made the clever stu- dent perform self-teaching. Inspired by the above observations, we propose a sim- ple PLL framework termed PaPi ,i.e.,Partial-label learn- ing with a guided Prototyp ical classifier . PaPi directly eliminates the contrastive learning module which introduces noisy positives, and adopts the opposite direction of disam- biguation guidance compared to PiCO. Specifically, PaPi produces a similarity distribution over classes for each sam- ple based on a softmax function over distances to class- specific prototypes in a projected low-dimensional space. Afterwards, PaPi aligns the distribution with the disam- biguated probability post-processed from one linear clas- sifier prediction. Meanwhile, the linear classifier performs self-teaching wherein each stage of learning is guided by the current and previous stages. We conduct extensive ex- periments on multiple image classification tasks. PaPi sur-passes the state-of-the-art PLL methods by a large margin, with a 4.57%improvement on CIFAR-100 in very difficult scenarios. Moreover, PaPi learns effective representations efficiently without using neither large batches nor a momen- tum encoder, where training instances from the same class are grouped into tighter clusters. Our main contributions are summarized as follows: • We propose a simple PLL framework termed PaPi which explicitly encourages the representation to re- flect visual similarity between categories, such that PaPi is remarkable for improving the class-level dis- crimination of learned representation. • Extensive experiments on various image classification datasets with different generation processes of candi- date labels demonstrate PaPi significantly outperforms state-of-the-art PLL methods.
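The prototypical side of PaPi can be illustrated with a few lines: a softmax over similarities between projected embeddings and class prototypes yields the per-sample distribution over classes. Cosine similarity with a temperature is used here in place of the paper's exact distance function, so the details are illustrative assumptions.

```python
# Minimal sketch of a prototypical classifier head (illustrative similarity).
import torch
import torch.nn.functional as F

def prototype_distribution(embeddings, prototypes, temperature=0.1):
    """embeddings: (B, D) projected features; prototypes: (C, D), one per class."""
    z = F.normalize(embeddings, dim=-1)
    p = F.normalize(prototypes, dim=-1)
    logits = z @ p.t() / temperature      # similarity to each class prototype
    return logits.softmax(dim=-1)         # similarity distribution over classes
```

This distribution is what gets aligned with the disambiguated prediction of the linear classifier during training.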
Switchable Representation Learning Framework with Self-Compatibility (Wu et al., CVPR 2023)
Abstract Real-world visual search systems involve deployments on multiple platforms with different computing and stor- age resources. Deploying a unified model that suits the minimal-constrain platforms leads to limited accuracy. It is expected to deploy models with different capacities adapt- ing to the resource constraints, which requires features ex- tracted by these models to be aligned in the metric space. The method to achieve feature alignments is called “com- patible learning”. Existing research mainly focuses on the one-to-one compatible paradigm, which is limited in learn- ing compatibility among multiple models. We propose a Switchable representation learning Framework with Self- Compatibility (SFSC). SFSC generates a series of compati- ble sub-models with different capacities through one train- ing process. The optimization of sub-models faces gradients conflict, and we mitigate this problem from the perspective of the magnitude and direction. We adjust the priorities of sub-models dynamically through uncertainty estimation to co-optimize sub-models properly. Besides, the gradients with conflicting directions are projected to avoid mutual in- terference. SFSC achieves state-of-the-art performance on the evaluated datasets.
1. Introduction Visual search systems are widely deployed, which recall the nearest neighbors in gallery features according to their distances to the query feature. Real-world visual search systems consist of multiple models deployed on different platforms [17, 38, 45] ( e.g. clouds, mobiles, smart cam- eras), where different platforms interact with each other by visual features. Typically, as for person re-identification systems, images are captured and processed into features on edge sides. And such features are sent to the cloud side to compare with the database features for identifying *Corresponding author.a specific person by feature similarities. As diverse plat- forms meet different computing and storage resource limita- tions, deploying the unified model which suits the minimal- constrain platforms leads to a waste of resources on other platforms and limited accuracy. To make better use of re- sources and achieve higher accuracy, it is expected to deploy models with different capacities which adapt to the resource limitations. Such a solution requires compatibility among the models to satisfy their interaction requirements, which means that the similar data processed by different models are close to each other in the feature space while dissimilar data are far apart. To achieve compatibility, compatible learning methods have been proposed. Existing research focuses on the one- to-one compatible paradigm [6,49,62], which constrains the learned features of a latter (learnable) model to be compat- ible with its previous (fixed) version. They are limited in achieving many-to-many compatibilities, which means any two in a series of models are compatible with each other. In this work, we propose a Switchable representation learning Framework with Self-Compatibility (SFSC) to achieve many-to-many compatibilities. As shown in Figure 1, SFSC can deploy different sub-models to suit the diverse computing and storage resource limitation of different plat- forms. The compatibility between any two sub-models can be achieved, termed as “self-compatibility”. However, there are gradients conflicts between sub-models as they are co- optimized during the training process. We summarize such conflicts into the gradient magnitude and gradient direc- tion. Specifically, a gradient of a large magnitude will dom- inate the optimization process, which may lead to the over- fitting of its corresponding sub-model and impair the im- provements of other sub-models. To solve this problem, we estimate the uncertainty of different sub-models to weight the gradient, denoting the optimization priorities. Besides, as the sub-models may produce gradients in various direc- tions, there is mutual interference between them. Thus the improvement in different sub-models may be overestimated This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 15943 Figure 1. An example of SFSC in multi-platform deployments. The switchable neural network generates a series of sub-networks {ϕ1, ϕ2, ..., ϕ m}with different capacities for different computing resources. ∀i, j∈ {1,2, ..., m},|ϕi(a)−ϕj(p)|<|ϕi(a)−ϕj(n)|, where (a, p)is a pair of samples with the same class ID, while (a, n)is a pair of samples with different class IDs. or underestimated. To tackle such conflict, the gradients are projected onto planes that are orthogonal to each other. 
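The direction-conflict part of this idea can be sketched with a generic projection rule: whenever two sub-model gradients have a negative dot product, the conflicting component is removed before aggregation. This is a PCGrad-style sketch of the general idea, not the paper's exact aggregation method.

```python
# Minimal sketch of conflict-aware gradient aggregation via projection.
import torch

def aggregate_gradients(grads, eps=1e-12):
    """grads: list of flattened per-sub-model gradient vectors."""
    projected = []
    for i, g in enumerate(grads):
        g = g.clone()
        for j, other in enumerate(grads):
            if i == j:
                continue
            dot = torch.dot(g, other)
            if dot < 0:  # conflicting direction: drop the component along `other`
                g = g - dot / (other.norm() ** 2 + eps) * other
        projected.append(g)
    return torch.stack(projected).mean(dim=0)
```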
The contributions of this paper are summarized as follows:
• We propose a switchable representation learning framework with self-compatibility (SFSC). SFSC generates a series of feature-compatible sub-models with different capacities for visual search systems, which can be deployed on different platforms flexibly.
• We resolve the conflicts between sub-models from the aspect of gradient magnitude and gradient direction. A compatible loss based on uncertainty estimation is proposed to guide optimization priorities and alleviate the imbalance of gradient magnitude between sub-models. An aggregation method based on the gradient projection is proposed to avoid mutual interference and find a generic optimal direction for all sub-models.
• SFSC achieves state-of-the-art performance on the evaluated benchmark datasets. Compared to deploying a unified model, adopting SFSC to obtain different sub-models can achieve 6% to 8% performance improvements on three different datasets.
DiffusioNeRF: Regularizing Neural Radiance Fields with Denoising Diffusion Models (Wynn et al., CVPR 2023)
Abstract Under good conditions, Neural Radiance Fields (NeRFs) have shown impressive results on novel view synthesis tasks. NeRFs learn a scene’s color and density fields by minimiz- ing the photometric discrepancy between training views and differentiable renderings of the scene. Once trained from a sufficient set of views, NeRFs can generate novel views from arbitrary camera positions. However, the scene geom- etry and color fields are severely under-constrained, which can lead to artifacts, especially when trained with few input views. To alleviate this problem we learn a prior over scene geometry and color, using a denoising diffusion model (DDM). Our DDM is trained on RGBD patches of the syn- thetic Hypersim dataset and can be used to predict the gra- dient of the logarithm of a joint probability distribution of color and depth patches. We show that, these gradients of logarithms of RGBD patch priors serve to regularize geom- etry and color of a scene. During NeRF training, random RGBD patches are rendered and the estimated gradient of the log-likelihood is backpropagated to the color and den- sity fields. Evaluations on LLFF , the most relevant dataset, show that our learned prior achieves improved quality in the reconstructed geometry and improved generalization to novel views. Evaluations on DTU show improved recon- struction quality among NeRF methods.
1. Introduction Neural radiance fields, neural implicit surfaces, and coordinate-based scene representations are proving valu- able for novel view synthesis and 3D reconstruction tasks. NeRFs [17] learn a specific scene’s appearance as a multi- layer perceptron that predicts density and color, when given any 3D point and a viewing direction. This volume representation allows differentiable render- ing from arbitrary views, where predicted color contribu- tions along a ray are alpha-composited according to the den- (a) DiffusioNeRF (Ours) (b) RegNeRF [21] Figure 1. Image and depth map rendered from a test view. All NeRF models were trained with 3views of the LLFF [16] dataset’s “Room” scene. Our priors encourage NeRF to explain the TV and table geometry with flat surfaces in the density field, and to explain the view-dependent color changes with the color field. sity predictions. The model is trained with the aim of faithfully recon- structing images captured with known camera poses. Even when trained with just a photometric reconstruction loss, NeRFs show impressive generalization capabilities, inspir- ing novel applications in virtual and augmented reality, and visual special effects. However, with small numbers or even with large num- bers of input views, the scene color and geometry fields are severely under-constrained. Indeed, an infinite number of NeRFs can explain all training views. In practice, NeRFs can generate low-quality and physically implausible ge- ometries and surface appearances. For example, “floaters” are one common artifact, where the fitted density field con- tains clouds of semi-transparent material floating in mid-air that would look reasonable in 2D once rendered from train- ing views, but look implausible from novel views. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 4180 Various hand-crafted regularizers and learned priors have been proposed to tackle these issues: hand-engineered priors to constrain the scene geometry [2,21], learned priors that force plausible renderings from arbitrary views [21], and methods that use single image depth and normal esti- mation [38, 46] to provide high-level constraints on the es- timated scene geometry. However, there are no approaches that learn a joint probability distribution of the scene geom- etry and color. Our contribution is leveraging denoising diffusion mod- els (DDMs) as a learned prior over color and geometry. Specifically, we use an existing synthetic dataset to gener- ate a dataset of RGBD patches to train our DDM. DDMs do not predict a probability for RGBD patch distribution. Rather, they provide the gradient of the log-probability of the RGBD patch distribution, i.e. the negative direction to the noise predicted by DDM is equivalent to moving towards the modes of the RGBD patch distribution. As NeRFs are trained with stochastic gradient descent, gradi- ents of log-probabilities are sufficient, as they can be back- propagated to NeRF networks during training to act as a regularizer; probabilities are not required for this purpose. We demonstrate that the DDM gradient encourages NeRFs to fit density and color fields that are more physically plau- sible on the LLFF and DTU datasets.
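The regularization step can be sketched as follows: the DDM's noise prediction on a rendered RGBD patch approximates the negative score (gradient of the log patch density), and a scalar surrogate is used so that backpropagation pushes the patch toward higher probability under the learned prior. Here `render_rgbd_patch` and `ddm` are hypothetical stand-ins for the differentiable NeRF patch renderer and the trained diffusion model, and the weight and noise level are illustrative.

```python
# Minimal sketch of a DDM-based RGBD patch regularizer (hypothetical interfaces).
import torch

def ddm_patch_regularizer(render_rgbd_patch, ddm, t=0.1, weight=1e-4):
    patch = render_rgbd_patch()            # (1, 4, H, W), differentiable w.r.t. NeRF
    with torch.no_grad():
        eps_hat = ddm(patch, t)            # predicted noise, related to -grad log p(patch)
    # Minimizing this surrogate moves the patch along -eps_hat, i.e. toward a
    # more probable RGBD patch; only the NeRF receives gradients because
    # eps_hat is treated as a constant.
    return weight * (patch * eps_hat).sum()
```

In training, a term like this would be added to the usual photometric rendering loss.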
Semi-Supervised Video Inpainting with Cycle Consistency Constraints (Wu et al., CVPR 2023)
Abstract Deep learning-based video inpainting has yielded promising results and gained increasing attention from re- searchers. Generally, these methods assume that the cor- rupted region masks of each frame are known and easily ob- tained. However, the annotation of these masks are labor- intensive and expensive, which limits the practical applica- tion of current methods. Therefore, we expect to relax this assumption by defining a new semi-supervised inpainting setting, making the networks have the ability of completing the corrupted regions of the whole video using the anno- tated mask of only one frame. Specifically, in this work, we propose an end-to-end trainable framework consisting of completion network and mask prediction network, which are designed to generate corrupted contents of the current frame using the known mask and decide the regions to be filled of the next frame, respectively. Besides, we introduce a cycle consistency loss to regularize the training parameters of these two networks. In this way, the completion network and the mask prediction network can constrain each other, and hence the overall performance of the trained model can be maximized. Furthermore, due to the natural existence of prior knowledge (e.g., corrupted contents and clear bor- ders), current video inpainting datasets are not suitable in the context of semi-supervised video inpainting. Thus, we create a new dataset by simulating the corrupted video of real-world scenarios. Extensive experimental results are reported to demonstrate the superiority of our model in the video inpainting task. Remarkably, although our model is trained in a semi-supervised manner, it can achieve compa- rable performance as fully-supervised methods.
1. Introduction Video inpainting aims to fill corrupted regions of the video with plausible contents, which is a promising yet *Corresponding author Input Output Input Output (c) an example of semi -supervised video inpainting (a) fully -supervised inpainting (b) semi -supervised inpainting . . . . . .. . .. . .. . .. . . . . .. . .. . .. . .Figure 1. Existing methods perform video inpainting in a fully- supervised setting. The typical issue of such methods is the need to elaborately annotate the corrupted regions miof each frame xi in the video (Fig.(a)), which is labor-intensive and expensive in real-world applications. In this paper, we formulate a new task: semi-supervised video inpainting, which only annotates the cor- rupted regions of one frame to complete the whole video (Fig.(b)). Fig.(c) shows an example of semi-supervised video inpainting: the top row shows sample frames with the mask, where pink denotes the manually annotated mask of corrupted regions, and blue de- notes the mask of corrupted regions predicted by the model. The completed results byiare shown in the bottom row. challenging task in computer vision. It can benefit a wide range of practical applications, such as scratch restora- tion [1], undesired object removal [30], and autonomous driving [18], etc. In essence, unlike image inpainting which usually learns on the spatial dimension, video in- painting task pays more attention to exploiting the tem- poral information. Naively using image inpainting meth- ods [13, 34, 42, 45] on individual video frame to fill cor- rupted regions will lose inter-frame motion continuity, re- sulting in flicker artifacts in the inpainted video. Similar to the texture synthesis, traditional video inpaint- This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 22586 ing methods [5, 8, 23] attempt to generate missing contents to fill missing regions by searching and copying similar patches from known regions. Despite the authentic com- pletion results have been achieved, these approaches still meet grand challenges, such as the lack of high-level under- standing of the video [32] and high computational complex- ity [1, 16]. In fact, recently, deep learning-based video in- painting methods [1,4,11,16,17,26,32,41,50] have been no- ticed by many researchers and made promising progress in terms of the quality and speed. These methods usually ex- tract relevant features from the neighboring frames by con- volutional neural networks and perform different types of context aggregation to generate missing contents. Neverthe- less, these methods are only suitable in the scenarios where the corrupted region mask of each frame in the video is available (Fig.1(a)). Hence, when we resort to these meth- ods in real-world applications, annotating the corrupted re- gions of each frame in the video is required, which is labor- intensive and expensive, especially for long videos. For- mally, a naive combination of video object segmentation and video inpainting methods can be used to reduce anno- tation costs in a two-stage manner. In this way, we can first generate the masks for all video frames using a video ob- ject segmentation method, and then complete the missing regions with the fully-supervised video inpainting methods. However, such a two-stage approach has some intuitive dis- advantages. 
One the one hand, since each module is learned individually and sometimes not well combined to maximize performance, the overall performance is sub-optimal. On the other hand, existing video object segmentation methods are unsatisfactory for segmentation results of the scratch re- gions similar to Fig.1(c), resulting in the inpainted video with critical errors. Therefore, to realize the goal of re- ducing annotation cost for video inpainting from scratch, in this paper, we introduce a new semi-supervised video in- painting setting that we can complete the corrupted regions of the whole video using the annotated mask of only one frame (Fig.1(b)) and train the network from end-to-end. In this way, compared with the conventional fully-supervised setting, the annotation cost can be greatly reduced, making video inpainting more convenient in practical application. However, fulfilling the task of semi-supervised video in- painting is non-trivial and has some issues to be addressed. On the one hand, except for one annotated known frame, there are no masks for other frames to indicate the corrupted regions in the proposed semi-supervised setting. To solve this problem, we decompose the semi-supervised video in- painting task into dual tasks: frame completion and mask prediction. Specifically, we first perform frame completion on the frame with corresponding given mask to obtain the completed frame using the designed frame completion net- work. Then, we feed the completed frame and the subse- quent frame into the proposed mask prediction network togenerate the corrupted region mask of the subsequent frame. Last, by iterating frame by frame, we can complete cor- rupted regions of each frame in the video. On the other hand, to precisely capture the accurate correspondence be- tween the completion network and mask prediction net- work, a cycle consistency loss is introduced to regularize the trained parameters. In addition, existing video inpainting datasets usually take black or noise pixels as the corrupted contents of the video frame. In fact, such a setting will in- troduce some specific prior knowledge (e.g., corrupted con- tents and clear borders) into the dataset, making it easy for the mask prediction network to distinguish corrupted re- gions from natural images. In this way, existing datasets cannot realistically simulate complex real-world scenarios. Hence, in our work, to effectively avoid the introduction of the above prior knowledge into the dataset, we use natural images as corrupted contents of the video frame and ap- ply iterative Gaussian smoothing [35] to extend the edges of corrupted regions. Experimental results demonstrate that our proposed method can achieve comparable inpainting re- sults as fully-supervised methods. An example result of our method is shown in Fig.1(c). Our contributions are summarized as follows: • We formulate a novel semi-supervised video inpaint- ing task that aims to complete the corrupted regions of the whole video with the given mask of one frame. To the best of our knowledge, this is the first end-to-end semi-supervised work in the video inpainting field. • A flexible and efficient framework consisting of com- pletion network and mask prediction network is de- signed to solve the semi-supervised video inpainting task, where cycle consistency loss is introduced to reg- ularize the trained parameters. • A novel synthetic dataset1is tailored for the semi- supervised video inpainting task. which consists of 4,453 video clips. 
This dataset will be published to facilitate subsequent research and benefit other researchers.
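The frame-by-frame inference procedure described above reduces to a short loop; `completion_net` and `mask_net` are hypothetical stand-ins for the trained completion and mask-prediction networks.

```python
# Minimal sketch of the semi-supervised inpainting inference loop.
def inpaint_video(frames, first_mask, completion_net, mask_net):
    outputs, mask = [], first_mask               # only frame 0 has an annotated mask
    for t, frame in enumerate(frames):
        completed = completion_net(frame, mask)  # fill the corrupted regions of frame t
        outputs.append(completed)
        if t + 1 < len(frames):
            # Predict the corrupted regions of the next frame from the current
            # completed frame and the next (still corrupted) frame.
            mask = mask_net(completed, frames[t + 1])
    return outputs
```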
FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning (Xiong et al., CVPR 2023)
Abstract Federated learning (FL) has recently attracted increas- ing attention from academia and industry, with the ultimate goal of achieving collaborative training under privacy and communication constraints. Existing iterative model av- eraging based FL algorithms require a large number of communication rounds to obtain a well-performed model due to extremely unbalanced and non-i.i.d data partitioning among different clients. Thus, we propose FedDM to build the global training objective from multiple local surrogate functions, which enables the server to gain a more global view of the loss landscape. In detail, we construct syn- thetic sets of data on each client to locally match the loss landscape from original data through distribution match- ing. FedDM reduces communication rounds and improves model quality by transmitting more informative and smaller synthesized data compared with unwieldy model weights. We conduct extensive experiments on three image classifica- tion datasets, and show that our method outperforms other FL counterparts in terms of efficiency and model perfor- mance given a limited number of communication rounds. Moreover, we demonstrate that FedDM can be adapted to preserve differential privacy with Gaussian mechanism and train a better model under the same privacy budget.
1. Introduction Traditional machine learning methods are designed with the assumption that all training data can be accessed from a central location. However, due to the growing data size together with the model complexity [10, 23, 26], distributed optimization [7, 8, 43] is necessary over different machines. This leads to the problem of Federated Learning [32] (FL) – multiple clients (e.g. mobile devices or local organizations) collaboratively train a global model under the orchestration of a central server (e.g. service provider) while the training *Equal contribution.data are kept decentralized and private. Such a practical set- ting poses two primary challenges [20,24,29,31,32]: train- ing data of the FL system are highly unbalanced and non-i.i.d. across downstream clients andmore efficient communication with fewer costs is expected because of unreliable devices with limited transmission bandwidth. Most of the existing FL methods [22,28,31,32,48] adopt an iterative training procedure from FedAvg [32], in which each round takes the following steps: 1) The global model is synchronized with a selected subset of clients; 2) Each client trains the model locally and sends its weight or gra- dient back to the server; 3) The server updates the global model by aggregating messages from selected clients. This framework works effectively for generic distributed opti- mization while the difficult and challenging setting of FL, unbalanced data partition in particular, would result in sta- tistical heterogeneity in the whole system [30] and make the gradient from each client inconsistent. It poses a great challenge to the training of the shared model, which re- quires a substantial number of communication rounds to converge [28]. Although some improvements have been made over FedAvg [32] including modifying loss func- tions [31], correcting client-shift with control variates [22] and the like, the reduced number of communication round is still considerable and even the amount of information re- quired by the server rises [56]. In our paper, we propose a different iterative surrogate minimization based method, FedDM , referred to Federated Learning with iterative Distribution Matching. Instead of the commonly-used scheme where each client maintains a locally trained model respectively and sends its gradi- ent/weight to the server for aggregation, we take a distinct perspective at the client’s side and attempt to build a lo- cal surrogate function to approximate the local training ob- jective. By sending those local surrogate functions to the server, the server can then build a global surrogate function around the current solution and conduct the update by min- imizing this surrogate. The question is then how to build local surrogate functions that are informative and with a rel- This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 16323 ative succinct representation. Inspired by recent progresses in data condensation [54, 55] we build local surrogate func- tions by learning a synthetic dataset to replace the origi- nal one to approximate the objective. It can be achieved by matching the original data distribution in the embed- ding space [16]. 
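A bare-bones sketch of such a distribution-matching update is shown below: the synthetic images are optimized so that their embeddings match the mean embedding of the client's real data. Using a single encoder, a single moment, and ignoring per-class matching are simplifying assumptions for illustration only.

```python
# Minimal sketch of one distribution-matching step on a client (simplified).
import torch

def distribution_matching_step(syn_images, real_images, encoder, lr=0.1):
    syn_images.requires_grad_(True)
    with torch.no_grad():
        target = encoder(real_images).mean(dim=0)        # mean embedding of real data
    loss = (target - encoder(syn_images).mean(dim=0)).pow(2).sum()
    loss.backward()
    with torch.no_grad():
        syn_images -= lr * syn_images.grad               # update the synthetic images
        syn_images.grad = None
    return float(loss)
```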
After the optimization of synthesized data, the client can transmit them to the server, which can then leverage the synthetic dataset to recover the global objec- tive function for training. Our method enables the server to have implicit access to the global objective defined by the whole balanced dataset from all clients, and thus out- performs previous algorithms involved in training a local model with unbalanced data in terms of communication ef- ficiency and effectiveness. We also demonstrate that our method can be adapted to preserve differential privacy un- der, an important factor to the deployment of FL systems. Our contributions are primarily summarized as follows: • We propose FedDM, which is based on iterative distri- bution matching to learn a surrogate function. It sends synthesized data to the server rather than commonly- used local model updates and improves communica- tion efficiency and effectiveness significantly. • We analyze how to protect privacy of client’s data for our method and show that it is able to guarantee (ϵ, δ)- differential privacy with the Gaussian mechanism and train a better model under the same privacy budget. • We conduct comprehensive experiments on three tasks and demonstrate that FedDM is better than its FL counterparts in communication efficiency and the final model performance.
Towards Professional Level Crowd Annotation of Expert Domain Data (Wang et al., CVPR 2023)
Abstract Image recognition on expert domains is usually fine- grained and requires expert labeling, which is costly. This limits dataset sizes and the accuracy of learning systems. To address this challenge, we consider annotating expert data with crowdsourcing. This is denoted as PrOfeSsional lEvel cRowd (POSER) annotation. A new approach, based on semi-supervised learning (SSL) and denoted as SSL with human filtering (SSL-HF) is proposed. It is a human-in- the-loop SSL method, where crowd-source workers act as filters of pseudo-labels, replacing the unreliable confidence thresholding used by state-of-the-art SSL methods. To en- able annotation by non-experts, classes are specified im- plicitly, via positive and negative sets of examples and aug- mented with deliberative explanations, which highlight re- gions of class ambiguity. In this way, SSL-HF leverages the strong low-shot learning and confidence estimation ability of humans to create an intuitive but effective labeling ex- perience. Experiments show that SSL-HF significantly out- performs various alternative approaches in several bench- marks.
1. Introduction While deep learning enabled tremendous advances in image recognition, high recognition performance is still dif- ficult to achieve in expert domains, such as specialized areas of biology or medicine, due to two challenges. First, these problems involve fine-grained classes, such as the dogs of Figure 1, which differ by subtle visual attributes. Second, large annotated datasets are difficult to produce, since image labeling requires expert knowledge, which can be too ex- pensive or infeasible at scale. This makes it difficult to train models as strong as those available for non-expert domains, where crowd-source annotation enables training with mil- lions, or even billions, of labeled examples. To address this challenge, we consider the problem of how to leverage crowd-source platforms to provide professional level anno- tation for expert domain data, which is denoted as PrOfeS- sional lEvel cRowd (POSER) annotation. “Beagle” Beagle Basset Hound Fox Hound Teaching set “Beagle” Query Beagle Support set Is this a beagle? “Yes” Query Query DNN “Beagle” SSLMachine TeachingSSL-HFConfidence > threshold Not Beagle DNNFigure 1. Different approaches to the labeling of a query image. Left: Machine Teaching [44] generates a teaching set, which is used to teach different classes to crowd source workers, who label the query. Center: SSL [21, 34] methods produce a pseudo-label that is accepted or rejected by thresholding a confidence score. Right: SSL-HF uses crowd source workers to filter pseudo-labels, by comparing the query to a positive (im- ages of the pseudo-label class) and a negative (images from other classes) support set. The annotator implements a binary filter with ‘I don’t know’ (IDK) option. Since the difficulty is lack of annotator expertise, one route to POSER annotation is to rely on machine teaching (MT) algorithms [23, 42, 44]. As illustrated in Figure 1, a small teaching set annotated by an expert is used to teach crowd-source workers to discriminate the various classes. The scalability of crowd sourcing is then leveraged to as- semble a large labeled dataset [44]. While machine teaching is surprisingly effective for problems of small class cardi- nality, it is difficult to teach crowd-workers a large number of classes. This is partly because they are averse to compli- cated training procedures and the teaching relies on short- term memory, which has limited capacity [13, 29]. The POSER combination of expert domain data and crowd-sourcing also creates challenges to most human-in- the-loop schemes in the crowd-source annotation literature. These are usually based on active learning (AL) [31, 32], assuming an oracle that produces a ground-truth label per example. To minimize the number of labelling iterations and cost, AL selects the hardest examples in the dataset to be labelled. However, this is misguided for POSER anno- tation, where noisy annotators are inevitable and the oracle assumption is violated. Since hard examples are precisely those where workers make most mistakes, their selection maximizes labeling noise . Hence, while AL is successful This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 3166 in domains where crowd-source workers are experts, e.g. everyday objects, it is not effective for expert domains. 
In this work, we consider an alternative formulation, in- spired by semi-supervised learning (SSL) methods [4,5,28, 34, 35] where a classifier trained on labelled data produces pseudo-labels for unlabeled examples. These labels are then accepted or rejected by thresholding a classification score, as illustrated in the middle of Figure 1. We refer to this pro- cess as pseudo-label filtering . Accepted labels are added to the training set, the classifier retrained, and the process repeated. SSL has been shown successful for datasets of everyday objects [4, 5, 34, 55], such as CIFAR [20], STL- 10 [9], SVHN [27], or ImageNet [11] but frequently col- lapses in expert domains, even under-performing supervised baselines trained on the small labeled dataset [36, 44, 50]. This is due to the increased difficulty of finer-grained clas- sification, and the well-known inability of deep learning to produce well calibrated confidence scores [14, 46]. While SSL, by itself, does not solve POSER annotation, its strategy of choosing the easier examples (higher classifi- cation confidence) is more suitable for the noisy POSER an- notators than the hardest example strategy of AL. Further- more, the major SSL weakness - poor pseudo-label filtering - can be significantly improved upon by using humans to fil- ter pseudo-labels. This suggests solving the POSER anno- tation problem with the SSL with human filtering (SSL-HF) approach at the right of Figure 1. Unlike machine teaching, where workers are image classifiers, POSER annotation is framed as an SSL problem where they become filters that verify the pseudo-labels produced by the classifier for un- labeled images. This has the critical benefit of framing the annotator operation as an instantaneous low-shot learning problem, which does not require prior training. In the proposed SSL-HF solution, given a query image and its pseudo-label (‘Beagle’), the annotator is presented with a small support set containing both positive (‘Beagle’ class) and negative (other classes) images. The annotator then simply declares if they agree with the pseudo-label, based on the similarity of the query image to the support set examples. Due to the well-known ability of humans for confidence calibration [10], this label filtering procedure is much more accurate than that of SSL, enabling POSER an- notation with high accuracy. Furthermore, because the fil- tering is by visual similarity, the labeling is implicit , i.e. the annotator does not even need to know the ‘Beagle’ class. Hence, there is no need to teach annotators a priori, elimi- nating the short-term memory constraints of MT. Together, these properties enable the ultimate goal of POSER annota- tion: accurate crowd-sourced annotation of expert datasets with large numbers of classes. The main insight behind SSL-HF is that the human low- shot learning ability [1, 41, 49] can be leveraged to enable annotators to filter labels in domains where they are not ex-pert. However, when the differences between support set examples are very fine-grained, it can be difficult to iden- tify the object details to look for. To address this problem, we propose to augment SSL-HF with deliberative explana- tions [43, 45], which visualize image regions of ambiguity between class pairs, tailored to the SSL-HF setting. Overall, this work makes five contributions. First, we introduce the SSL-HF framework for POSER annotation. 
Second, we propose an implementation, where the classifier suggests a label for the image and a support set of a few positive and close-negative examples. Third, to enhance the accuracy of the human filtering of pseudo-labels, the support set is complemented with visualization-based explanations. Fourth, we present experiments showing that SSL-HF significantly outperforms SSL, AL, and MT approaches to POSER annotation and that explanations enhance these gains. Finally, to minimize the development cost of POSER annotation methods, we introduce an evaluation protocol based on simulated human labeling. These contributions establish a new research direction at the intersection of human-in-the-loop and fine-grained classification, needed to advance the effectiveness of deep learning in expert domains.
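For readers who want a concrete picture of the SSL-HF loop sketched above, a minimal Python sketch of one filtering round is given below. It is an illustration only: predict, support_set, and annotate are hypothetical placeholders standing in for the classifier, the support-set builder, and the crowd worker, and the confidence pre-filter threshold tau is an assumption rather than part of the method as described.

from typing import Callable, List, Tuple

Example = Tuple[object, int]                           # (image, label)
Annotator = Callable[[object, int, list, list], str]   # returns "yes" | "no" | "idk"


def ssl_hf_round(
    predict: Callable[[object], Tuple[int, float]],    # image -> (pseudo-label, confidence)
    support_set: Callable[[int], Tuple[list, list]],   # class -> (positives, negatives)
    annotate: Annotator,
    unlabeled: List[object],
    tau: float = 0.7,
) -> Tuple[List[Example], List[object]]:
    """One round of SSL with human filtering (SSL-HF), as sketched in the text.

    The classifier proposes a pseudo-label for each unlabeled image; a crowd
    worker compares the image with positive/negative support examples and
    accepts, rejects, or abstains ("idk"). Accepted pairs are returned so the
    classifier can be retrained; the rest stay unlabeled for later rounds.
    """
    accepted: List[Example] = []
    remaining: List[object] = []
    for image in unlabeled:
        label, confidence = predict(image)
        if confidence < tau:                 # too uncertain to even ask a worker
            remaining.append(image)
            continue
        positives, negatives = support_set(label)
        verdict = annotate(image, label, positives, negatives)
        if verdict == "yes":                 # worker confirms the pseudo-label
            accepted.append((image, label))
        else:                                # "no" or "idk": keep it unlabeled
            remaining.append(image)
    return accepted, remaining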
Vasconcelos_CUF_Continuous_Upsampling_Filters_CVPR_2023
Abstract

Neural fields have rapidly been adopted for representing 3D signals, but their application to more classical 2D image processing has been relatively limited. In this paper, we consider one of the most important operations in image processing: upsampling. In deep learning, learnable upsampling layers have extensively been used for single image super-resolution. We propose to parameterize upsampling kernels as neural fields. This parameterization leads to a compact architecture that obtains a 40-fold reduction in the number of parameters when compared with competing arbitrary-scale super-resolution architectures. When upsampling images of size 256×256 we show that our architecture is 2x-10x more efficient than competing arbitrary-scale super-resolution architectures, and more efficient than sub-pixel convolutions when instantiated to a single-scale model. In the general setting, these gains grow polynomially with the square of the target scale. We validate our method on standard benchmarks showing such efficiency gains can be achieved without sacrifices in super-resolution performance. https://cuf-paper.github.io
1. Introduction

Neural fields represent signals with coordinate-based neural networks. They have found application in a multitude of areas including 3D reconstruction [24], novel-view synthesis [32], convolutions [14], and many others [37]. Recent research has investigated the use of neural fields in the context of single image super-resolution [6, 19].¹ These models are based on multi-layer perceptrons conditioned on latent representations produced by encoders². While such architectures allow for continuous-scale super-resolution, they require the execution of a conditional neural field for every pixel at the target resolution, making them unsuitable in applications with limited computational resources. Further, such a large use of resources is not justified by an increase in performance compared to classical convolutional architectures such as sub-pixel convolutions [31]. In summary, neural fields have not yet found widespread adoption as classical solutions are ① trivial to implement and ② more efficient. As they generally perform comparably

¹ Hereafter we assume single image when talking about super-resolution.
² These encoders were originally proposed for classical super-resolution applications, and include both convolutional [8, 21, 40] as well as attentional [7, 20, 39] architectures.

reduce the number of layers that operate at the target spatial resolution and at the encoder's channel resolution. At the same time, our upsampler head is considerably lighter than previous arbitrary-scale methods, not only by the use of a depth-wise convolution but also by keeping the same number of channels produced by the feature encoder in the main network (Ce = 64, which results in dense layers 16× cheaper than its LIIF and LTE counterparts using 256 neurons). When performing non-integer upsampling, the costs of processing the hyper-network layers are proportional to the target image resolution, but still smaller than a single dense layer of LIIF and LTE heads, due to the adoption of a reduced number of channels (as Ch = 32, each of CUF's hypernetwork dense layers is 64× cheaper than LIIF and LTE layers using 256 neurons).

Instantiating CUFs. At inference time, when targeting an integer upscaling factor s, the hyper-network representing K can be queried to retrieve the weights corresponding to s^2 relative subpixel positions as an initialization step during pre-processing. The retrieved weights are re-used across all pixels taking advantage of the existent periodicity. Thus, in the CUF-instantiated architecture the continuous kernel is replaced at test time with a discrete depthwise convolution, followed by a pixel shuffling operation, in contrast with the unfolding
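As an illustration of the instantiation step described above, the sketch below queries a small hyper-network once per s^2 sub-pixel offsets and then applies the resulting kernels as a discrete depthwise convolution followed by pixel shuffling. Only the channel widths Ce = 64 and Ch = 32 come from the text; the two-layer MLP, the 3×3 kernel size, the absence of positional encoding, and the final 1×1 projection are assumptions made for brevity, so this is a sketch of the idea rather than the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F


class InstantiatedCUF(nn.Module):
    """Illustrative instantiation of a continuous upsampling filter for an
    integer scale s: a hyper-network maps each of the s*s sub-pixel offsets to a
    depthwise kernel; the kernels are applied as an ordinary depthwise
    convolution and the result is pixel-shuffled. Everything beyond Ce/Ch is a
    guess for illustration."""

    def __init__(self, scale: int, ce: int = 64, ch: int = 32, k: int = 3):
        super().__init__()
        self.scale, self.ce, self.k = scale, ce, k
        self.hyper = nn.Sequential(                 # kernel field K(offset) -> depthwise weights
            nn.Linear(2, ch), nn.ReLU(),
            nn.Linear(ch, ce * k * k),
        )
        self.to_rgb = nn.Conv2d(ce, 3, kernel_size=1)

    def subpixel_offsets(self) -> torch.Tensor:
        s = self.scale
        d = (torch.arange(s, dtype=torch.float32) + 0.5) / s - 0.5   # relative offsets
        dy, dx = torch.meshgrid(d, d, indexing="ij")
        return torch.stack([dy, dx], dim=-1).reshape(s * s, 2)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:           # feat: (N, Ce, H, W)
        s, ce, k = self.scale, self.ce, self.k
        offsets = self.subpixel_offsets().to(feat.device)
        weights = self.hyper(offsets)                                # (s*s, Ce*k*k)
        weights = weights.reshape(s * s * ce, 1, k, k)               # one kernel per (offset, channel)
        x = feat.repeat(1, s * s, 1, 1)                              # replicate features per offset
        x = F.conv2d(x, weights, padding=k // 2, groups=s * s * ce)  # discrete depthwise convolution
        n, _, h, w = feat.shape
        x = x.reshape(n, s * s, ce, h, w).permute(0, 2, 1, 3, 4).reshape(n, ce * s * s, h, w)
        x = F.pixel_shuffle(x, s)                                    # (N, Ce, H*s, W*s)
        return self.to_rgb(x)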
Woo_ConvNeXt_V2_Co-Designing_and_Scaling_ConvNets_With_Masked_Autoencoders_CVPR_2023
Abstract

Driven by improved architectures and better representation learning frameworks, the field of visual recognition has enjoyed rapid modernization and performance boost in the early 2020s. For example, modern ConvNets, represented by ConvNeXt [33], have demonstrated strong performance in various scenarios. While these models were originally designed for supervised learning with ImageNet labels, they can also potentially benefit from self-supervised learning techniques such as masked autoencoders (MAE) [14]. However, we found that simply combining these two approaches leads to subpar performance. In this paper, we propose a fully convolutional masked autoencoder framework and a new Global Response Normalization (GRN) layer that can be added to the ConvNeXt architecture to enhance inter-channel feature competition. This co-design of self-supervised learning techniques and architectural improvement results in a new model family called ConvNeXt V2, which significantly improves the performance of pure ConvNets on various recognition benchmarks, including ImageNet classification, COCO detection, and ADE20K segmentation. We also provide pre-trained ConvNeXt V2 models of various sizes, ranging from an efficient 3.7M-parameter Atto model with 76.7% top-1 accuracy on ImageNet, to a 650M Huge model that achieves a state-of-the-art 88.9% accuracy using only public training data. Code: https://github.com/facebookresearch/ConvNeXt-V2
1. Introduction

Building on research breakthroughs in earlier decades [16, 25, 28, 37, 44], the field of visual recognition has ushered in a new era of large-scale visual representation learning. Pre-trained, large-scale vision models have become essential tools for feature learning and enabling a wide range of vision applications. The performance of a visual representation learning system is largely influenced by three main factors: the neural network architecture chosen, the method used for training the network, and the data used for training. In the field of visual recognition, progress in each of these areas contributes to overall improvements in performance.

*Work done during an internship at FAIR. †Corresponding author.

Figure 1. ConvNeXt V2 model scaling. The ConvNeXt V2 model, which has been pre-trained using our fully convolutional masked autoencoder framework, performs significantly better than the previous version across a wide range of model sizes.

Innovation in neural network architecture design has consistently played a major role in the field of representation learning. Convolutional neural network architectures (ConvNets) [16, 25, 28] have had a significant impact on computer vision research by allowing for the use of generic feature learning methods for a variety of visual recognition tasks [10, 15], rather than relying on manual feature engineering. In recent years, the transformer architecture [44], originally developed for natural language processing, has also gained popularity due to its strong scaling behavior with respect to model and dataset size [7]. More recently, the ConvNeXt [33] architecture has modernized traditional ConvNets and demonstrated that pure convolutional models could also be scalable architectures. However, the most common method for exploring the design space for neural network architectures is still through benchmarking supervised learning performance on ImageNet.

In a separate line of research, the focus of visual representation learning has been shifting from supervised learning with labels to self-supervised pre-training with pretext objectives. Among many different self-supervised algorithms, masked autoencoders (MAE) [14] have recently brought success in masked language modeling to the vision domain and quickly become a popular approach for visual representation learning. However, a common practice in self-supervised learning is to use a predetermined architecture designed for supervised learning, and assume the design is fixed. For instance, MAE was developed using the vision transformer [7] architecture.

It is possible to combine the design elements of architectures and self-supervised learning frameworks, but doing so may present challenges when using ConvNeXt with masked autoencoders. One issue is that MAE has a specific encoder-decoder design that is optimized for the sequence processing capabilities of transformers, which allows the compute-heavy encoder to focus on visible patches and thus reduce the pre-training cost.
This design may not be compatible with standard ConvNets, which use dense sliding windows. Additionally, if the relationship between the architecture and the training objective is not taken into consideration, it may be unclear whether optimal performance can be achieved. In fact, previous research has shown that training ConvNets with mask-based self-supervised learning can be difficult [24], and empirical evidence suggests that transformers and ConvNets may have different feature learning behaviors that can affect representation quality.

To this end, we propose to co-design the network architecture and the masked autoencoder under the same framework, with the aim of making mask-based self-supervised learning effective for ConvNeXt models and achieving results similar to those obtained using transformers.

In designing the masked autoencoder, we treat the masked input as a set of sparse patches and use sparse convolutions [12] to process only the visible parts. The idea is inspired by the use of sparse convolutions in processing large-scale 3D point clouds [5, 52]. In practice, we can implement ConvNeXt with sparse convolutions, and at fine-tuning, the weights are converted back to standard, dense layers without requiring special handling. To further improve the pre-training efficiency, we replace the transformer decoder with a single ConvNeXt block, making the entire design fully convolutional. We have observed mixed results with these changes: the learned features are useful and improve upon the baseline results, but the fine-tuning performance is still not as good as the transformer-based model.

We then conduct a feature space analysis of different training configurations for ConvNeXt. We identify a potential issue of feature collapse at the MLP layer when training ConvNeXt directly on masked input. To address this issue, we propose adding a Global Response Normalization layer to enhance inter-channel feature competition. This change is most effective when the model is pre-trained with masked autoencoders, suggesting that reusing a fixed architecture design from supervised learning may be suboptimal.

In summary, we introduce ConvNeXt V2, which demonstrates improved performance when used in conjunction with masked autoencoders. We have found that this model significantly improves the performance of pure ConvNets across various downstream tasks, including ImageNet classification [37], COCO object detection [30] and ADE20K segmentation [55]. The ConvNeXt V2 models can be used in a variety of compute regimes and include models of varying complexity: from an efficient 3.7M-parameter Atto model that achieves 76.7% top-1 accuracy on ImageNet, to a 650M Huge model that reaches a state-of-the-art 88.9% accuracy when using IN-22K labels.
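The Global Response Normalization layer mentioned above can be written in a few lines. The sketch below follows the three steps implied by the description (aggregate a global per-channel statistic, normalize it across channels to induce feature competition, then recalibrate the input with learnable affine parameters and a residual path), but the channels-last layout, the L2 statistic, and the epsilon value are assumptions rather than a transcription of the released code.

import torch
import torch.nn as nn


class GRN(nn.Module):
    """Global Response Normalization (sketch). Input is channels-last: (N, H, W, C)."""

    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1, 1, 1, dim))
        self.beta = nn.Parameter(torch.zeros(1, 1, 1, dim))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # 1) global feature aggregation: an L2 statistic per channel
        gx = torch.norm(x, p=2, dim=(1, 2), keepdim=True)            # (N, 1, 1, C)
        # 2) divisive normalization across channels (feature competition)
        nx = gx / (gx.mean(dim=-1, keepdim=True) + self.eps)
        # 3) calibration with learnable affine terms and a residual connection
        return self.gamma * (x * nx) + self.beta + x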
Xu_Binarizing_Sparse_Convolutional_Networks_for_Efficient_Point_Cloud_Analysis_CVPR_2023
Abstract

In this paper, we propose binary sparse convolutional networks called BSC-Net for efficient point cloud analysis. We empirically observe that the sparse convolution operation causes larger quantization errors than standard convolution. However, conventional network quantization methods directly binarize the weights and activations in sparse convolution, resulting in a performance drop due to the significant quantization loss. On the contrary, we search the optimal subset of convolution operations that activates the sparse convolution at various locations for quantization error alleviation, and the performance gap between real-valued and binary sparse convolutional networks is closed without complexity overhead. Specifically, we first present the shifted sparse convolution that fuses the information in the receptive field for the active sites that match the pre-defined positions. Then we employ differentiable search strategies to discover the optimal positions for active site matching in the shifted sparse convolution, and the quantization errors are significantly alleviated for efficient point cloud analysis. For fair evaluation of the proposed method, we empirically select the recent advances that are beneficial for sparse convolution network binarization to construct a strong baseline. The experimental results on ScanNet and NYU Depth v2 show that our BSC-Net achieves significant improvement upon our strong baseline and outperforms the state-of-the-art network binarization methods by a remarkable margin without additional computation overhead for binarizing sparse convolutional networks.
1. Introduction

3D deep learning on point clouds [6, 12, 25, 27] has been widely adopted in a wide variety of downstream applications including autonomous driving, AR/VR and robotics due to its strong discriminative power and generalization ability. In these applications, real-time interaction and fast response are required to guarantee safety and practicality.

*Corresponding author.

Figure 1. Demonstration of sparse convolution and the proposed shifted sparse convolution. (a) Sparse convolution only operates when the center of the kernel slides over the active sites. (b) Our shifted sparse convolution performs different operations for each group of output channels, which brings more information from the neighbor active sites.

Submanifold sparse convolution (we call it "sparse convolution" for short in the rest of this paper) [12] is one of the most popular and basic operators for point cloud analysis, which first voxelizes the point clouds and then applies 3D convolution on the voxels while keeping the same sparsity pattern throughout the layers of the network. Sparse convolution is widely adopted in most state-of-the-art architectures for point cloud analysis, and so it is desirable to further improve its efficiency for more practical application. We opt for architecture-agnostic methods such as employing network binarization to achieve this goal. Binarized neural networks [19, 36] restrict the bitwidth of weights and activations to only one bit and substitute multiplication-addition with xnor-bitcount operations, which decreases the storage and computational cost by 32× and 64×, respectively. We empirically find that the sparse convolution operation brings larger quantization errors compared to standard convolution, which leads to significant performance degradation when directly applying existing network binarization methods due to the large quantization errors.

In this paper, we present BSC-Net to learn binary sparse convolutional networks for efficient point cloud analysis in resource-constrained scenarios. Instead of directly binarizing the weights and activations in sparse convolutional networks, we search the optimal subset of convolution operations that activates the sparse convolution at various locations for binarization. The acquired convolution patterns significantly reduce the quantization errors in deployment, and achieve remarkable performance enhancement without extra computational cost. More specifically, we propose the shifted sparse convolutional networks whose convolution operations are activated for active sites consistent with the pre-defined locations, and the optimal positions for active site matching across various channels are obtained via differentiable search strategies. Therefore, the quantization errors in the fixed convolution operations are significantly alleviated by leveraging the shifted sparse convolution with the searched active site matching locations. Moreover, we empirically select the recent advances that are beneficial for sparse convolution network binarization to construct a strong baseline.
Extensive experimental results on ScanNet and NYU Depth v2 for semantic segmentation of point clouds show that our BSC-Net reduces the operations per second (OPs) by 92.4% with only 3% mIoU degradation.
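BSC-Net builds on standard binary-network machinery [19, 36]; for context, the sketch below shows the usual way weights and activations are binarized with a channel-wise scaling factor and a straight-through estimator. This is generic background only; it does not implement the shifted sparse convolution or the searched active-site matching that the paper actually proposes.

import torch


def _sign(x: torch.Tensor) -> torch.Tensor:
    # sign() mapping zero to +1, so outputs are strictly in {-1, +1}
    return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))


def binarize_weights(w: torch.Tensor) -> torch.Tensor:
    """Binarize conv weights (C_out, C_in, kH, kW) to {-alpha, +alpha} per output
    channel; the straight-through estimator (STE) passes gradients to the latent
    real-valued weights."""
    alpha = w.abs().mean(dim=(1, 2, 3), keepdim=True)
    w_bin = alpha * _sign(w)
    return w + (w_bin - w).detach()          # forward: binary; backward: identity


def binarize_activations(x: torch.Tensor) -> torch.Tensor:
    """Binarize activations to {-1, +1}; gradients are clipped to [-1, 1] (STE)."""
    x_clip = torch.clamp(x, -1.0, 1.0)
    return x_clip + (_sign(x) - x_clip).detach()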
Wang_On_the_Pitfall_of_Mixup_for_Uncertainty_Calibration_CVPR_2023
Abstract

By simply taking convex combinations between pairs of samples and their labels, mixup training has been shown to easily improve predictive accuracy. It has recently been found that models trained with mixup also perform well on uncertainty calibration. However, in this study, we found that mixup training usually makes models less calibratable than vanilla empirical risk minimization, which means that it would harm uncertainty estimation when post-hoc calibration is considered. By decomposing the mixup process into data transformation and random perturbation, we suggest that the confidence-penalty nature of the data transformation is the reason for the calibration degradation. To mitigate this problem, we first investigate the mixup inference strategy and find that although it improves calibration on mixup, this ensemble-like strategy does not necessarily outperform simple ensembling. Then, we propose a general strategy named mixup inference in training, which adopts a simple decoupling principle for recovering the outputs of raw samples at the end of the forward network pass. By embedding the mixup inference, models can be learned from the original one-hot labels and hence avoid the negative impact of the confidence penalty. Our experiments show this strategy properly solves mixup's calibration issue without sacrificing predictive performance, and even improves accuracy over vanilla mixup.
1. Introduction

Although modern neural networks have made notable performance on predictive accuracy in various computer vision tasks [7], they have been found to perform poorly in terms of uncertainty calibration, which is an important consideration in many real-world applications [5]. Intuitively, we expect a predictive model to be accurate when it is confident about its outputs while revealing high uncertainty when it is likely to be inaccurate. Otherwise, the miscalibrated predictions of models could cause undesired consequences in many safety-critical applications such as medical diagnosis and autonomous driving. Early research on uncertainty estimation mainly focused on probabilistic models. However, in the deep learning paradigm, training of deep Bayesian models is expensive and their performance usually depends on approximate inference methods due to the computational constraints in real-world deployment. Therefore, uncertainty calibration of deterministic neural networks has become an important topic in recent years.

*Corresponding author. This work was done when Deng-Bao was an intern in Tencent AI Lab.

Guo et al. [5] systematically studied the uncertainty calibration problem of modern neural networks with comprehensive experiments. They pointed out that popular modern neural networks usually suffer from a more severe miscalibration issue than shallow models. They empirically showed that large model capacity without proper regularization is closely related to the miscalibration issue. They also evaluated the performance of various calibration strategies and found that simple post-hoc approaches like temperature scaling (TS) [22] and histogram binning (HB) [34] can reduce the calibration error to a quite low level. Following their work, a number of calibration-friendly regularization methods and post-calibration approaches were proposed to address the miscalibration issue of deep neural networks [12, 17, 18, 21].

Recently, researchers investigated the impact of mixup training on calibration. Thulasidasan et al. [27] empirically found that mixup improves calibration across various model architectures and datasets. Zhang et al. [38] provided a theoretical explanation for the effect of mixup training on calibration in the high-dimensional regime. Carratino et al. [3] pointed out that mixup implicitly performs label smoothing and hence can avoid the overconfidence issue. However, there are also empirical observations showing that mixup does not necessarily improve calibration. The experiments in [16] provide evidence showing mixup degrades calibration in some cases. The empirical studies in [31] and [23] found that combining mixup with ensembling degrades calibration performance compared to individually using one of them. In particular, they suggest that both mixup and ensembling encourage models to be less confident, and hence an underconfidence issue occurs when they are used together. We notice that most existing work investigates mixup for calibration without the consideration of post-calibration. As the research in [1] suggested, the comparison of calibration performance between different methods without post-calibration might not provide a fair ranking.
Another recent work [30] also pointed out that models with better calibration performance during main training do not necessarily yield better calibration results after post-calibration. Therefore, in this work, we revisit mixup's calibration problem by considering the training stage and post-hoc processing as a unified system. Under this setting, three questions are naturally raised: (i) Does mixup really help calibration? (ii) If it does not, what leads to the failure? (iii) How can we mitigate the pitfall of mixup on calibration? To answer these questions, we make the following contributions:

• We conduct comprehensive experiments showing that mixup often leads to less calibratable models than vanilla empirical risk minimization (ERM), and hence degrades uncertainty estimation in general when post-calibration is considered after training.

• To explain this phenomenon, we decompose mixup into two components: data transformation and random perturbation. We show that the former part shrinks the training labels to their means and implicitly performs confidence penalty, which serves as the reason for the calibration degradation.

• We investigate the mixup inference strategy for calibration and find that although it improves calibration on mixup, this ensemble-like approach is no better than vanilla deep ensembling in terms of both calibration and accuracy under the same inference budget.

• We show that mixup's calibration issue can be easily solved by translating the mixup inference into training. By this process, the output of each raw sample can be approximately recovered and learned from the original one-hot labels, hence avoiding the negative effect induced by the confidence penalty. Our experiments show that this strategy outperforms mixup in terms of both accuracy and calibration.
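For reference, the vanilla mixup transformation that the above analysis decomposes is just a convex combination of paired samples and their one-hot labels; a minimal batch-level sketch is shown below, with the mixing coefficient drawn from a Beta(alpha, alpha) distribution. The proposed "mixup inference in training" decoupling is not reproduced here.

import numpy as np
import torch
import torch.nn.functional as F


def mixup_batch(x: torch.Tensor, y: torch.Tensor, num_classes: int, alpha: float = 1.0):
    """Vanilla mixup: convex combinations of samples and one-hot labels.

    x: (N, ...) input batch, y: (N,) integer labels. Returns mixed inputs and
    mixed (soft) label targets. lam ~ Beta(alpha, alpha) is shared by the batch.
    """
    lam = float(np.random.beta(alpha, alpha))
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1.0 - lam) * x[perm]                     # data transformation
    y_onehot = F.one_hot(y, num_classes).float()
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]       # label interpolation
    return x_mix, y_mix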
Vasu_MobileOne_An_Improved_One_Millisecond_Mobile_Backbone_CVPR_2023
Abstract

Efficient neural network backbones for mobile devices are often optimized for metrics such as FLOPs or parameter count. However, these metrics may not correlate well with latency of the network when deployed on a mobile device. Therefore, we perform extensive analysis of different metrics by deploying several mobile-friendly networks on a mobile device. We identify and analyze architectural and optimization bottlenecks in recent efficient neural networks and provide ways to mitigate these bottlenecks. To this end, we design an efficient backbone MobileOne, with variants achieving an inference time under 1 ms on an iPhone12 with 75.9% top-1 accuracy on ImageNet. We show that MobileOne achieves state-of-the-art performance within the efficient architectures while being many times faster on mobile. Our best model obtains similar performance on ImageNet as MobileFormer while being 38× faster. Our model obtains 2.3% better top-1 accuracy on ImageNet than EfficientNet at similar latency. Furthermore, we show that our model generalizes to multiple tasks – image classification, object detection, and semantic segmentation with significant improvements in latency and accuracy as compared to existing efficient architectures when deployed on a mobile device. Code and models are available at https://github.com/apple/ml-mobileone
1. Introduction

Design and deployment of efficient deep learning architectures for mobile devices has seen a lot of progress [5, 30, 31, 42, 44, 46], with consistently decreasing floating-point operations (FLOPs) and parameter count while improving accuracy. However, these metrics may not correlate well with the efficiency [9] of the models in terms of latency. Efficiency metrics like FLOPs do not account for memory access cost and degree of parallelism, which can have a non-trivial effect on latency during inference [42]. Parameter count is also not well correlated with latency. For example, sharing parameters leads to higher FLOPs but smaller model size. Furthermore, parameter-less operations like skip-connections [24] or branching [33, 49] can incur significant memory access costs. This disconnect can get exacerbated when custom accelerators are available in the regime of efficient architectures.

Our goal is to improve the latency cost of efficient architectures while improving their accuracy by identifying key architectural and optimization bottlenecks that affect on-device latency. To identify architectural bottlenecks, we deploy neural networks on an iPhone12 by using CoreML [56] and benchmark their latency costs. To alleviate optimization bottlenecks, we decouple train-time and inference-time architectures, i.e., using a linearly over-parameterized model at train-time and re-parameterizing the linear structures at inference [11–13]. We further alleviate the optimization bottleneck by dynamically relaxing regularization throughout training to prevent the already small models from being over-regularized.

Based on our findings on the key bottlenecks, we design a novel architecture, MobileOne, variants of which run under 1 ms on an iPhone12, achieving state-of-the-art accuracy within the efficient architecture family while being significantly faster on the device. Like prior works on structural re-parameterization [11–13], MobileOne introduces linear branches at train-time which get re-parameterized at inference. However, a key difference between our model and prior structural re-parameterization works is the introduction of trivial over-parameterization branches, which provides further improvements in the low parameter regime, and the model scaling strategy. At inference, our model has a simple feed-forward structure without any branches or skip-connections. Since this structure incurs lower memory access cost, we can incorporate wider layers in our network, which boosts representation capacity as demonstrated empirically in Table 9. For example, MobileOne-S1 has 4.8M parameters and incurs a latency of 0.89ms, while MobileNet-V2 [46] has 3.4M (29.2% less than MobileOne-S1) parameters and incurs a latency of 0.98ms. At this operating point, MobileOne attains 3.9% better top-1 accuracy than MobileNet-V2.

MobileOne achieves significant improvements in latency compared to efficient models in the literature while maintaining the accuracy on several tasks – image classification, object detection, and semantic segmentation.
We show mAP on object detection vs Top-1 accuracy on image classification in (c) with size of the marker indicating latency of the backbone on iPhone 12. Our models have significantly smaller latency compared to related works. Please refer to supp. mat. for higher resolution figures. ing the accuracy on several tasks – image classification, ob- ject detection, and semantic segmentation. As shown in Figure 1b, MobileOne performs better than MobileViT- S [44] while being 5 ×faster on image classification. As compared to EfficientNet-B0 [53], we achieve 2.3% bet- ter top-1 accuracy on ImageNet [10] with similar latency costs (see Figure 1a). Furthermore, as seen in Figure 1c, MobileOne models not only perform well on ImageNet, they also generalize to other tasks like object detection. Models like MobileNetV3-L [30] and MixNet-S [54] im- prove over MobileNetV2 on ImageNet, but those improve- ments do not translate to object detection task. As shown in Figure 1c, MobileOne shows better generalization across tasks. For object detection on MS-COCO [37], best vari- ant of MobileOne outperforms best variant MobileViT by 6.1% and MNASNet by 27.8%. For semantic segmen- tation, on PascalVOC [16] dataset, best variant of Mo- bileOne outperforms best variant MobileViT by 1.3% and on ADE20K [64] dataset, best variant of MobileOne out- performs MobileNetV2 by 12.0%. In summary, our contri- butions are as follows: • We introduce MobileOne , a novel architecture that runs within 1 ms on a mobile device and achieves state-of-the-art accuracy on image classification within ef- ficient model architectures. The performance of our model also generalizes to a desktop CPU and GPU. • We analyze performance bottlenecks in activations and branching that incur high latency costs on mobile in recent efficient networks. • We analyze the effects of train-time re-parameterizable branches and dynamic relaxation of regularization in training. In combination, they help alleviating opti- mization bottlenecks encountered when training small models. • We show that our model generalizes well to other tasks – object detection and semantic segmentation while outperforming recent state-of-the-art efficient models. We will release our trained networks and code for research purposes. We will also release the code for iOS application to enable benchmarking of networks on iPhone.
Wei_Towards_Realistic_Long-Tailed_Semi-Supervised_Learning_Consistency_Is_All_You_Need_CVPR_2023
Abstract

While long-tailed semi-supervised learning (LTSSL) has received tremendous attention in many real-world classification problems, existing LTSSL algorithms typically assume that the class distributions of labeled and unlabeled data are almost identical. Those LTSSL algorithms built upon the assumption can severely suffer when the class distributions of labeled and unlabeled data are mismatched, since they utilize biased pseudo-labels from the model. To alleviate this issue, we propose a new simple method that can effectively utilize unlabeled data of unknown class distributions by introducing the adaptive consistency regularizer (ACR). ACR realizes the dynamic refinery of pseudo-labels for various distributions in a unified formula by estimating the true class distribution of unlabeled data. Despite its simplicity, we show that ACR achieves state-of-the-art performance on a variety of standard LTSSL benchmarks, e.g., an averaged 10% absolute increase of test accuracy against existing algorithms when the class distributions of labeled and unlabeled data are mismatched. Even when the class distributions are identical, ACR consistently outperforms many sophisticated LTSSL algorithms. We carry out extensive ablation studies to tease apart the factors that are most important to ACR's success. Source code is available at https://github.com/Gank0078/ACR.
1. Introduction

Semi-supervised learning (SSL) is an effective way of using unlabeled data to improve the generalization of deep neural networks (DNNs) [1, 10, 16] when only a limited amount of labeled data is accessible [3, 23, 29, 31]. The core idea of most SSL algorithms is to generate pseudo-labels for unlabeled data and select confident ones to train models. Recent progress on SSL has revealed promising performance in various tasks, such as image recognition [29] and text categorization [35, 39]. However, most existing SSL algorithms assume the datasets are class-balanced, i.e., each class is associated with an equivalent number of samples in both labeled and unlabeled datasets. In contrast, class distributions in many real-world tasks are long-tailed [6, 19, 33, 38, 43]. It is well known that classifiers trained on long-tailed datasets tend to be biased towards majority classes, leading to low test accuracy on minority classes [20, 37, 44].

Tong Wei is the corresponding author. This research was supported by the National Science Foundation of China (62206049).

To improve the performance, many long-tailed semi-supervised learning (LTSSL) algorithms have been proposed to generate unbiased pseudo-labels. They pursue class-balanced classifiers using techniques including re-sampling [18], re-weighting [17], label smoothing [36], and pseudo-label alignment [14, 34]. These algorithms have shown strong generalization for the minority class by assuming the class distributions of labeled and unlabeled data are almost identical. However, this assumption is frequently violated in real-world applications, for instance, if the labeled and unlabeled data are collected from different tasks. The unlabeled data may have a large class distribution gap from the labeled data, and using the erroneous assumption can severely deteriorate the performance [17, 25].

Contribution. This paper studies the under-explored yet practical LTSSL problem, i.e., learning from unlabeled data of unknown class distributions. Notably, we start with three representative types of class distributions of unlabeled data, i.e., consistent, uniform, and reversed, as illustrated in Figures 1a to 1c. We then propose a new simple algorithm to effectively use unlabeled data through the adaptive consistency regularizer (ACR), which is built upon one of the most popular SSL algorithms, FixMatch [29]. Concretely, ACR is developed based on two findings: i) to learn a class-balanced classifier, it is helpful to generate pseudo-labels biased appropriately toward the minority class, whereas ii) to learn a better feature extractor, the accuracy of pseudo-labels is critical. These two findings seem to contradict each other. We thus present a two-branch network, including a balanced branch and a standard branch, to resolve this conflict.
We can see that ACR significantly improves the quality of pseudo-labels, showing its great capability of taking advantage of unlabeled data to alleviate the class imbalance problem. flict. Specifically, ACR learns a class-balanced classifier via imposing consistency between its predictions and the adjusted outputs of the standard classifier. The adjusted outputs are designed to be appropriately biased toward the minority class. However, for the second finding, it is ob- served that the accuracy of pseudo-labels produced by the standard classifier varies as the class distribution of unla- beled data changes. We resolve this difficulty by refining the original pseudo-labels to match the true class distribu- tion of unlabeled data and enhance their accuracy. Impor- tantly, ACR realizes the adaptive refinery of pseudo-labels for various distributions in a unified formula by estimating the true class distribution. We demonstrate the effectiveness of the proposed ap- proach under various realistic LTSSL scenarios by varying the class distributions of unlabeled data. Despite its sim- plicity, the proposed algorithm improves recent LTSSL al- gorithms in all test cases, e.g., our method improves DARP [14], CReST [34], DASO [25] with up to 10.8% ,11.2% , and7.2% absolute increase on the test accuracy, respec- tively. Nevertheless, more importantly, in addition to three types of representative class distributions, i.e., consistent , uniform , and reversed , we also test our method under many other class distributions. As expected, our method signif- icantly improves the performance when the class distribu- tions are mismatched between labeled and unlabeled data.
Wen_Hierarchical_Temporal_Transformer_for_3D_Hand_Pose_Estimation_and_Action_CVPR_2023
Abstract

Understanding dynamic hand motions and actions from egocentric RGB videos is a fundamental yet challenging task due to self-occlusion and ambiguity. To address occlusion and ambiguity, we develop a transformer-based framework to exploit temporal information for robust estimation. Noticing the different temporal granularity of and the semantic correlation between hand pose estimation and action recognition, we build a network hierarchy with two cascaded transformer encoders, where the first one exploits the short-term temporal cue for hand pose estimation, and the latter aggregates per-frame pose and object information over a longer time span to recognize the action. Our approach achieves competitive results on two first-person hand action benchmarks, namely FPHA and H2O. Extensive ablation studies verify our design choices.
1. Introduction

Perceiving dynamic interacting human hands is fundamental in fields such as human-robot collaboration, imitation learning, and VR/AR applications. Viewing through the egocentric RGB video is especially challenging, as there are frequent self-occlusions between hands and objects, as well as severe ambiguity of action types judged from individual frames (e.g., see Fig. 1, where the actions of pour milk and place milk can only be discerned from complete sequences).

Work is partially done during the internship of Y. Wen with Microsoft Research Asia. Code and data are available at https://github.com/fylwen/HTT.

Figure 1. Image sequences for pour milk and place milk under egocentric view from H2O [27], with frequently occluded hand joints and ambiguous action types judged by individual frames. Using temporal information can benefit both tasks of 3D hand pose estimation and action recognition.

Recent years have witnessed tremendous improvement in 3D hand pose estimation and action recognition. While many works focus on only one of these tasks [10, 12, 13, 18, 23, 39, 51, 56], unified frameworks [27, 47, 52] have also been proposed to address both tasks simultaneously, based on the critical observation that the temporal context of hand poses helps resolve action ambiguity, implemented via models like LSTMs, graph convolutional networks, or temporal convolutional networks. However, we note that temporal information can also benefit hand pose estimation: while interacting hands are usually under partial occlusion and truncation, especially in the egocentric view, they can be inferred more reliably from neighboring frames with different views by temporal motion continuity. Indeed, this idea has not been fully utilized yet among the existing works [27, 47, 52]: e.g., [27, 47] perform hand pose estimation at each frame, leaving the temporal dimension unexplored, and [52] jointly refines action and hand pose through hand-crafted multiple-order motion features and a complex iterative scheme.

We build a simple end-to-end trainable framework to exploit the temporal dimension and achieve effective hand pose estimation and action recognition with a single feed-forward pass. To exploit the relationship among frames, we adopt the transformer architecture [48], which has demonstrated superior performance in sequence modeling. However, action and pose have different temporal granularity: while the action is related to longer time spans lasting for several seconds, the hand pose depicts instantaneous motions. Correspondingly, we use two transformer encoders with different window sizes to respectively leverage the short-term and long-term temporal cues for the per-frame
We evaluate our approach on FPHA [14] and H2O [27], and achieve state-of-the-art performances for 3D hand pose estimation and action recognition from egocentric RGB videos. Our contribution is summarized as follows: • We propose a simple but efficient end-to-end trainable framework to leverage the temporal information for 3D hand pose estimation and action recognition from ego- centric RGB videos. • We build a hierarchical temporal transformer with two cascaded blocks, to leverage different time spans for pose and action estimation, and model their semantic correlation by deriving the high-level action from the low-level hand motion and manipulated object label. • We show state-of-the-art performance on two public datasets FPHA [14] and H2O [27].
Wang_Model_Barrier_A_Compact_Un-Transferable_Isolation_Domain_for_Model_Intellectual_CVPR_2023
Abstract

As scientific and technological advancements result from human intellectual labor and computational costs, protecting model intellectual property (IP) has become increasingly important to encourage model creators and owners. Model IP protection involves preventing the use of well-trained models on unauthorized domains. To address this issue, we propose a novel approach called Compact Un-Transferable Isolation Domain (CUTI-domain), which acts as a barrier to block illegal transfers from authorized to unauthorized domains. Specifically, CUTI-domain blocks cross-domain transfers by highlighting the private style features of the authorized domain, leading to recognition failure on unauthorized domains with irrelevant private style features. Moreover, we provide two solutions for using CUTI-domain depending on whether the unauthorized domain is known or not: target-specified CUTI-domain and target-free CUTI-domain. Our comprehensive experimental results on four digit datasets, CIFAR10 & STL10, and the VisDA-2017 dataset demonstrate that CUTI-domain can be easily implemented as a plug-and-play module with different backbones, providing an efficient solution for model IP protection.
1. Introduction

The recent success of deep learning models heavily relies on massive amounts of high-quality data, specialized training resources, and elaborate manual fine-tuning [4, 10, 26, 35]. Obtaining a well-trained deep model is both time-consuming and labor-intensive [22]. Therefore, it should be protected as a kind of scientific and technological achievement intellectual property (IP) [5, 50], thereby stimulating innovation enthusiasm in the community and further promoting the development of deep learning.

*L. Wang and M. Wang contributed equally to this work. †Corresponding author: D. Zhang ([email protected]) and H. Fu ([email protected]).

Figure 1. Model IP protection with our proposed CUTI-domain. Left: In standard supervised learning (SL), the model owner trains a high-performance model on the authorized domain (pink square) and then authorizes a specific user. Authorized users have the right to use the model on the authorized domain to get the correct prediction. However, a stealer can easily access the model on the unauthorized domain (blue square), which violates the legitimate rights and interests of the model owner. Right: Our method constructs a CUTI-domain between the authorized and unauthorized domains, which could block the illegal transferring and lead to a wrong prediction for unauthorized domains.

As shown in Fig. 1, in supervised learning (SL), the model owner uses the overall features of the authorized domain (pink square) for training, obtains a high-performance model, and grants the right to use it to a specific user. Authorized users can use the model on the authorized domain to obtain correct predictions. However, since the model is trained with overall features, its potential feature generalization region is large and may cover some unauthorized domains. Therefore, there is a natural pathway between the authorized domain and the unauthorized domain, and the released high-performance model obtained by SL can be illegally transferred to the unauthorized domain (blue squares) through methods such as domain adaptation [30, 51] and domain generalization [44, 52], to obtain correct prediction results. This presents a challenge in protecting well-trained models. One of the most concerning threats raised is "Will releasing the model make it easy for the main competitor to copy this new feature and hurt owner differentiation in the market?" Thus, model IP protection has been proposed to defend against model stealing or unauthorized use.

A comprehensive intellectual property (IP) protection strategy for deep learning models should consider both ownership verification and applicability authorization [45, 47]. Ownership verification involves verifying who has permission to use the deep model by embedding watermarks [37, 48], model fingerprints [31], and predefined triggers [14]. The model owner can grant usage permission to a specific user, and any other users will be infringing on the owner's IP rights.
However, an authorized user can easily transfer the model to an unauthorized user, so the model owner must add special marks during training to identify and verify ownership. Moreover, these methods are vulnerable to fine-tuning, classifier retraining, elastic weight consolidation algorithms, and watermark overwriting, which can weaken the model's protection. On the other hand, applicability authorization involves verifying the model's usage scenarios. Users with permission can apply the deep model for the tasks specified by the model owner, and it is an infringement to use it for unauthorized tasks [45]. However, users can easily transfer high-performance models to other similar tasks to save costs, which is a common and hidden infringement. Therefore, if the performance of the model can be limited to the tasks specified by the owner and reduced on other similar tasks, unauthorized users will lose confidence in stealing and re-authoring the model. To achieve this, a non-transferable learning (NTL) method was proposed [45], which uses an estimator with a characteristic kernel from Reproducing Kernel Hilbert Spaces to approximate and increase the maximum mean difference between two distributions on finite samples. However, the authors only considered using limited samples to increase the mean distribution difference of features between domains and ignored outliers. The convergence region of NTL is not tight enough. Moreover, the calculation of the maximum mean difference is class-independent, which reduces the model's feature recognition ability in the authorized domain to a certain extent.

To address the challenges outlined above, we first propose a novel approach called the Compact Un-Transferable Isolation (CUTI) domain to prevent illegal transferring of deep models from authorized to unauthorized domains. Our approach considers the overall feature of each domain as consisting of two components: shared features and private features. Shared features refer to semantic features, while private features include stylistic cues such as perspective, texture, saturation, brightness, background environment, and so on. We emphasize the private features of the authorized domain and construct a CUTI-domain as a model barrier with similar private style features. This approach prevents illegal transfers to unauthorized domains with new private style features, thereby leading to wrong predictions. Furthermore, we also provide two CUTI-domain solutions for different scenarios. When the unauthorized domain is known, we propose the target-specified CUTI-domain, where the model is trained with a combination of the authorized, CUTI, and unauthorized domains. When the unauthorized domain is unknown, we use the target-free CUTI-domain, which employs a generator to synthesize unauthorized samples that replace the unauthorized domain in model training. Finally, our comprehensive experimental results on four digit datasets, CIFAR10 & STL10, and VisDA-2017 demonstrate that our proposed CUTI-domain effectively reduces the recognition ability on unauthorized domains while maintaining strong recognition on authorized domains. Moreover, as a plug-and-play module, our CUTI-domain can be easily implemented within different backbones and provides efficient solutions.
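As a rough illustration of the trade-off such a barrier encodes, the sketch below combines a standard task loss on the authorized domain with a term that pushes predictions on barrier samples (a synthesized CUTI-style domain and/or unauthorized data) toward chance level. This only conveys the shape of the objective; the actual CUTI-domain construction, style synthesis, and losses in the paper are different and more involved.

import torch
import torch.nn.functional as F


def ip_protection_objective(model, x_auth, y_auth, x_barrier, lam: float = 0.1):
    """Schematic model-IP protection objective: low task loss on the authorized
    domain, uninformative predictions on barrier-domain samples."""
    logits_auth = model(x_auth)
    task_loss = F.cross_entropy(logits_auth, y_auth)

    logits_bar = model(x_barrier)
    log_probs = F.log_softmax(logits_bar, dim=-1)
    uniform = torch.full_like(log_probs, 1.0 / log_probs.size(-1))
    # KL(uniform || p_model): minimized when barrier-domain predictions are at chance level
    barrier_loss = F.kl_div(log_probs, uniform, reduction="batchmean")

    return task_loss + lam * barrier_loss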
Xu_Dream3D_Zero-Shot_Text-to-3D_Synthesis_Using_3D_Shape_Prior_and_Text-to-Image_CVPR_2023
Abstract

Recent CLIP-guided 3D optimization methods, such as DreamFields [19] and PureCLIPNeRF [24], have achieved impressive results in zero-shot text-to-3D synthesis. However, due to scratch training and random initialization without prior knowledge, these methods often fail to generate accurate and faithful 3D structures that conform to the input text. In this paper, we make the first attempt to introduce explicit 3D shape priors into the CLIP-guided 3D optimization process. Specifically, we first generate a high-quality 3D shape from the input text in the text-to-shape stage as a 3D shape prior. We then use it as the initialization of a neural radiance field and optimize it with the full prompt. To address the challenging text-to-shape generation task, we present a simple yet effective approach that directly bridges the text and image modalities with a powerful text-to-image diffusion model. To narrow the style domain gap between the images synthesized by the text-to-image diffusion model and the shape renderings used to train the image-to-shape generator, we further propose to jointly optimize a learnable text prompt and fine-tune the text-to-image diffusion model for rendering-style image generation. Our method, Dream3D, is capable of generating imaginative 3D content with superior visual quality and shape accuracy compared to state-of-the-art methods. Our project page is at https://bluestyle97.github.io/dream3d/.

*Work done during an internship at ARC Lab, Tencent PCG. †Corresponding Author.
1. Introduction

Text-to-3D synthesis endeavors to create 3D content that is coherent with an input text, which has the potential to benefit a wide range of applications such as animations, games, and virtual reality. Recently developed zero-shot text-to-image models [34, 44, 45, 48, 50] have made remarkable progress and can generate diverse, high-fidelity, and imaginative images from various text prompts. However, extending this success to the text-to-3D synthesis task is challenging because it is not practically feasible to collect a comprehensive paired text-3D dataset.

Zero-shot text-to-3D synthesis [19, 22, 24, 40, 51], which eliminates the need for paired data, is an attractive approach that typically relies on powerful vision-language models such as CLIP [58]. There are two main categories of this approach. 1) CLIP-based generative models, such as CLIP-Forge [51]. They utilize images as an intermediate bridge and train a mapper from the CLIP image embeddings of ShapeNet renderings to the shape embeddings of a 3D shape generator, then switch to the CLIP text embedding as the input at test time. 2) CLIP-guided 3D optimization methods, such as DreamFields [19] and PureCLIPNeRF [24]. They continuously optimize the CLIP similarity loss between a text prompt and rendered images of a 3D scene representation, such as neural radiance fields [1, 26, 32, 41]. While the first category heavily relies on 3D shape generators trained on limited 3D shapes and seldom has the capacity to adjust its shape structures, the second category has more creative freedom with the "dreaming ability" to generate diverse shape structures and textures. We develop our method building upon CLIP-guided 3D optimization methods.

Although these methods can produce remarkable outcomes, they typically fail to create precise and accurate 3D structures that conform to the input text (Fig. 1, 2nd row). Due to the scratch training and random initialization without any prior knowledge, these methods tend to generate highly-unconstrained "adversarial contents" that have high CLIP scores but low visual quality. To address this issue and synthesize more faithful 3D contents, we suggest generating a high-quality 3D shape from the input text first and then using it as an explicit "3D shape prior" in the CLIP-guided 3D optimization process. In the text-to-shape stage, we begin by synthesizing a 3D shape without textures of the main common object in the text prompt. We then use it as the initialization of a voxel-based neural radiance field and optimize it with the full prompt.

The text-to-shape generation itself is a challenging task. Previous methods [19, 51] are often trained on images and tested with texts, and use CLIP to bridge the two modalities. However, this approach leads to a mismatching problem due to the gap between the CLIP text and image embedding spaces. Additionally, existing methods cannot produce high-quality 3D shapes. In this work, we propose to directly bridge the text and image modalities with a powerful text-to-image diffusion model, i.e., Stable Diffusion [48]. We use the text-to-image diffusion model to synthesize an image from the input text and then feed the image into an image-to-shape generator to produce high-quality 3D shapes.
Since we use the same procedure in both training and testing, the mismatching problem is largely reduced. However, there is still a style domain gap between the images synthesized by Stable Diffusion and the shape renderings used to train the image-to-shape generator. Inspired by recent work on controllable text-to-image synthesis [11, 49], we propose to jointly optimize a learnable text prompt and fine-tune the Stable Diffusion to address this domain gap. The fine-tuned Stable Diffusion can reliably synthesize images in the style of the shape renderings used to train the image-to-shape module without suffering from the domain gap. To summarize, 1) We make the first attempt to introduce the explicit 3D shape prior into CLIP-guided 3D optimization methods. The proposed method can generate more accurate and high-quality 3D shapes conforming to the corresponding text, while still enjoying the "dreaming" ability of generating diverse shape structures and textures (Fig. 1, 1st row). Therefore, we name our method "Dream3D" as it has both strengths. 2) Regarding text-to-shape generation, we present a straightforward yet effective approach that directly connects the text and image modalities using a powerful text-to-image diffusion model. To narrow the style domain gap between the synthesized images and shape renderings, we further propose to jointly optimize a learnable text prompt and fine-tune the text-to-image diffusion model for rendering-style image generation. 3) Our Dream3D can generate imaginative 3D content with better visual quality and shape accuracy than state-of-the-art methods. Additionally, our text-to-shape pipeline can produce 3D shapes of higher quality than previous work.
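As a rough illustration of the two-stage pipeline described above, the sketch below first maps the prompt to an image with the fine-tuned diffusion model, converts that image to an untextured shape, and then uses the shape to initialize a voxel-based radiance field that is optimized against the full prompt with a CLIP similarity loss. All callables passed in (synthesize_image, image_to_shape, init_voxel_nerf, render_random_view, clip_similarity) are hypothetical placeholders, not the authors' released interface.

import torch

def dream3d_sketch(prompt, synthesize_image, image_to_shape, init_voxel_nerf,
                   render_random_view, clip_similarity, n_steps=500):
    # Stage 1: text-to-shape, producing an explicit 3D shape prior.
    image = synthesize_image(prompt)          # fine-tuned diffusion model, rendering style
    shape = image_to_shape(image)             # image-to-shape generator, untextured shape
    # Stage 2: initialize a voxel-based radiance field from the shape and optimize it
    # against the full prompt with a CLIP similarity loss.
    nerf = init_voxel_nerf(shape)             # returns an nn.Module-like field
    optimizer = torch.optim.Adam(nerf.parameters(), lr=1e-2)
    for _ in range(n_steps):
        rendering = render_random_view(nerf)  # differentiable rendering from a random camera
        loss = -clip_similarity(rendering, prompt)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return nerf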
Propagate and Calibrate: Real-Time Passive Non-Line-of-Sight Tracking (Wang et al., CVPR 2023)
Abstract Non-line-of-sight (NLOS) tracking has drawn increasing attention in recent years, due to its ability to detect ob- ject motion out of sight. Most previous works on NLOS tracking rely on active illumination, e.g., laser, and suf- fer from high cost and elaborate experimental conditions. Besides, these techniques are still far from practical appli- cation due to oversimplified settings. In contrast, we pro- pose a purely passive method to track a person walking in an invisible room by only observing a relay wall, which is more in line with real application scenarios, e.g., security. To excavate imperceptible changes in videos of the relay wall, we introduce difference frames as an essential car- rier of temporal-local motion messages. In addition, we propose PAC-Net, which consists of alternating propaga- tion and calibration, making it capable of leveraging both dynamic and static messages on a frame-level granular- ity. To evaluate the proposed method, we build and publish the first dynamic passive NLOS tracking dataset, NLOS- Track, which fills the vacuum of realistic NLOS datasets. NLOS-Track contains thousands of NLOS video clips and corresponding trajectories. Both real-shot and synthetic data are included. Our codes and dataset are available at https://againstentropy.github.io/NLOS-Track/.
1. Introduction In contrast to conventional imaging within the direct line-of-sight (LOS), non-line-of-sight (NLOS) imaging aims to tackle an inverse problem, i.e., using indirect signals (e.g., reflection from a visible relay wall) to recover information about invisible areas. To specify, NLOS tracking manages to reconstruct a continuous trajectory in real time when an object or a person is moving in an invisible region, which is sketched in Fig. 1. The ability to track moving objects outside the LOS would enable promising applications, such as autonomous driving, robotic vision, security, medical imaging, post-disaster searching, and rescue operations, etc. [2, 14, 17, 27], thus receiving increasing attention in recent years. Figure 1. A schematic of the passive NLOS tracking. The character is walking in the hidden scene and we can perform real-time tracking by observing and analyzing the relay wall from outside the room with an RGB camera, without any additional equipment. Existing NLOS tracking techniques mostly rely on active illumination from the detection side [4, 5, 7, 8, 16, 23, 25, 28, 35, 36]. Although introducing denser and finer information, active illumination typically requires expensive equipment (e.g., ultra-fast pulsed lasers) and elaborate experimental conditions [29]. These defects cause a gap between active techniques and practical applications. Besides, the oversimplified settings in previous works even expand the gap. Unlike active methods, passive NLOS techniques [1, 3, 5, 18, 19, 29–31, 33, 39, 41] only depend on the feeble diffuse reflection of the hidden region, getting rid of the requirement for expensive equipment. So this paper focuses on the low-cost passive NLOS tracking task in realistic scenarios. We find that most existing NLOS tracking works merely locate the object in each frame independently [4, 5, 7, 8, 19, 23, 28, 36], without considering the positional relationship between adjoining moments. This practice directly causes jitters in the trajectory, thus resulting in inaccurate tracking (see Sec. 5.3 for more details). In this paper, we consider the significance of making use of motion information and taking advantage of a motion continuity prior, which helps achieve more coherent and accurate tracking results. Furthermore, passive NLOS techniques face the dilemma that the signal-to-noise ratio (SNR) is extremely low [6, 8]. To address this problem, some previous works conduct background estimation with the video's temporal mean and apply background subtraction to every frame [3, 19, 33]. In this way, the difference between frames can be amplified, thus increasing the SNR. However, temporal-mean subtraction inevitably mixes up information from earlier periods. Consequently, it reintroduces extra noise into originally low-SNR signals, which is still a hazard to excavating faint differences between frames. To address the aforementioned problems, we first introduce the difference frame to describe motion information. Compared to background estimation and subtraction, a difference frame can be readily obtained by subtracting the previous frame from the current frame.
In this way, a dif- ference frame can represent the immediate motion infor- mation, and will not introduce noise from other periods. Our experiments show that difference frames do convey es- sential dynamic messages (see Sec. 5.3for more details). Additionally, we propose a novel network named PAC-Net (Propagation And Calibration Network), which integrates motion continuity prior into the algorithm. Consisting of two dual modules, Propagation-Cell and Calibration-Cell, PAC-Net maintains a good continuity of trajectory via prop- agating with difference frames and then alternately calibrat- ing with raw frames. Our experimental results demonstrate that PAC-Net can achieve centimeter-level precision when tracking a walking person in real time. We also build NLOS-Track, the first public-accessible video dataset for passive NLOS tracking. It contains real- istic scenes to support the proposed task and method, and we expect NLOS-Track to facilitate more NLOS works. In contrast to oversimplified settings in existing NLOS track- ing works, NLOS-Track dataset manages to simulate realis- tic scenarios with humans walking in unknown scenes. The dataset consists of 500 real-shot videos and more than 1,000 synthetic videos, each recording the relay wall when a char- acter walks along the randomly generated trajectory. Paired trajectory ground truth of each video clip is also provided. Our contributions are mainly in three folds: • We propose and formulate the purely passive NLOS tracking task, which avoids the use of expensive equip- ment. Development on this task will allow promis- ing and valuable applications in many fields, such as robotic vision, medical imaging, etc. • We propose a passive NLOS tracking network, PAC- Net, which is capable of utilizing both dynamic and static messages on a frame level. As for dynamic messages, we specially introduce difference frames as clear carriers of motion information, which gets rid of introducing extra noise from other periods. • We establish the first passive NLOS trajectory track- ing dataset, NLOS-Track, which contains thousands of video clips with a variety of scene settings.
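To make the frame-level alternation concrete, the sketch below computes difference frames by subtracting consecutive raw frames and then alternates a propagation step (driven by the difference frame) with a calibration step (driven by the raw frame). PropagationCell and CalibrationCell stand in for the paper's two modules and are assumed to be recurrent callables returning a position estimate and an updated hidden state; this only illustrates the data flow, not the released PAC-Net implementation.

import torch

def track_trajectory(frames, propagation_cell, calibration_cell):
    # frames: (T, C, H, W) video of the relay wall
    diffs = frames[1:] - frames[:-1]                        # difference frames: immediate motion only
    state, positions = None, []
    for t in range(diffs.shape[0]):
        pos, state = propagation_cell(diffs[t], state)      # propagate with the dynamic message
        pos, state = calibration_cell(frames[t + 1], state) # calibrate with the static message
        positions.append(pos)
    return torch.stack(positions)                           # (T - 1, 2) estimated trajectory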
PyPose: A Library for Robot Learning With Physics-Based Optimization (Wang et al., CVPR 2023)
Abstract Deep learning has had remarkable success in robotic perception, but its data-centric nature suffers when it comes to generalizing to ever-changing environments. By contrast, physics-based optimization generalizes better, but it does not perform as well in complicated tasks due to the lack of high-level semantic information and reliance on manual parametric tuning. To take advantage of these two complementary worlds, we present PyPose: a robotics-oriented, PyTorch-based library that combines deep perceptual models with physics-based optimization. PyPose's architecture is tidy and well-organized; it has an imperative-style interface and is efficient and user-friendly, making it easy to integrate into real-world robotic applications. Besides, it supports parallel computing of any-order gradients of Lie groups and Lie algebras and 2nd-order optimizers, such as trust region methods. Experiments show that PyPose achieves more than 10× speedup in computation compared to the state-of-the-art libraries. To boost future research, we provide concrete examples for several fields of robot learning, including SLAM, planning, control, and inertial navigation.
1. Introduction Deep learning has made great inroads in visual perception tasks such as classification [66], segmentation [45], and detection [38]. However, it is still often unsatisfactory in some robotic applications due to the lack of training data [65], ever-changing environments [76], and limited computational resources [67]. On the other hand, physics-based optimization has shown great generalization ability and high accuracy in many vision and robotic tasks, such as control [24], planning [70], and simultaneous localization and mapping (SLAM) [76]. Nevertheless, it relies on problem-specific parameter tuning and suffers from the lack of semantic information. Since both methods have shown their own merits, more and more efforts have been made to take advantage of the two complementary worlds [75]. Currently, learning-based methods and physics-based optimization are typically used separately in different modules of a robotic system [23]. For example, in semantic SLAM, learning-based methods have shown promising results in scenarios where high-level semantic information is needed or as a replacement for hand-crafted features and descriptors, e.g., feature matching in the front-end [53], while physics-based optimization plays a vital role in cases where a well-defined physical model can be established, e.g., pose graph optimization in the back-end [12]. Researchers usually first execute the front end and then pass the results to the back end for optimization. Despite the tremendous progress in the past decades, such a two-stage, decoupled paradigm may only achieve sub-optimal solutions, which in turn limits system performance and generalization ability. Hence, developing integrated methods with end-to-end differentiation through optimization is an emerging trend [59–61]. A variety of applications in perception, motion planning, and automatic control have been explored for end-to-end learning [30, 37, 59]. However, most of these applications rely on problem-specific implementations that are often coded from scratch, which makes it difficult for researchers to build upon prior work and explore new ideas. This hinders the development cycle due to the lack of a unified and systematic development framework. For example, people usually leverage PyTorch-based [48] models for developing learning-based perceptual models, but then have to use C(++)-based optimization libraries, such as GTSAM [20], OMPL [54], and CT [27], for physics-based optimization. The mixed usage of Python and C++ libraries increases the system complexity and slows down the development cycle, as cross-language debugging is time-consuming and transferring data among different processes, e.g., ROS nodes [51], is inefficient. Therefore, there is an urgent need for a systematic development tool in a single language, accelerating end-to-end learning for physics-based optimization. Some researchers have spent effort towards this objective. For example, LieTorch exploits the group structure of 3D transformations and performs back-propagation in the tangent spaces of manifolds [60].
However, only 1st-order differentiable operations are currently implemented, which limits its practical use, since higher order derivatives provide additional local information about the data distribution and enable new applications [44]. CvxpyLayer [3] takes con- vex optimization as a differentiable neural network layer, while it doesn’t support operation for Lie groups and 2nd- order optimizers. Similarly, Theseus [49] takes non-linear optimization as network layers; however, it adopts rotation matrices for transformation representation, which is memory inefficient for practical robotic applications. To address the above limitations, we present PyPose, an open-source library based on PyTorch to connect learning- based perceptual models with classical algorithms that can be formulated as physics-based optimization, e.g., geometry problem, factor-graph optimization, and optimal control. In summary, our main contributions are: •We present a new python-based open-source library, Py- Pose, to further enable end-to-end learning with physics- based optimization and accelerate the next generation of developments in robotics. PyPose is designed to be easily interpretable, user-friendly, and efficient with a tidy and well-organized architecture. It provides an im- perative programming style for the convenience of real- world robotic applications. PyPose supports any order gradient computation of Lie groups and Lie algebras, and 2nd-order optimizers such as Levenberg-Marquardt with trust region steps. As demonstrated in Figure 1, our experiments show that PyPose achieves more than 10×faster compared to state-of-the-art libraries. •We provide sample uses of PyPose. Users can easily build upon existing functionalities for various robotic applications. To the best of our knowledge, PyPose is one of the first Python libraries to comprehensivelycover several sub-fields of robotics, such as perception, SLAM, and control, where optimization is involved.
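To avoid misquoting the library's API, the snippet below does not call PyPose itself; it only illustrates, in plain PyTorch, the kind of computation the library is built around: a nonlinear least-squares residual minimized with damped, Levenberg-Marquardt-style second-order steps. The damping factor is kept fixed for brevity, whereas a full LM implementation adapts it based on whether each step reduces the cost.

import torch

def levenberg_marquardt(residual_fn, x, num_iters=20, damping=1e-3):
    # Fixed-damping variant for brevity (closer to damped Gauss-Newton).
    x = x.clone()
    for _ in range(num_iters):
        r = residual_fn(x)                                        # residual vector
        J = torch.autograd.functional.jacobian(residual_fn, x)    # Jacobian of the residuals
        H = J.T @ J + damping * torch.eye(x.numel())              # damped normal equations
        step = torch.linalg.solve(H, -(J.T @ r).unsqueeze(1)).squeeze(1)
        x = x + step
    return x

# Toy example: locate a 2D point from range measurements to three known beacons.
beacons = torch.tensor([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
ranges = torch.tensor([2.24, 2.24, 2.83])
residuals = lambda p: (beacons - p).norm(dim=1) - ranges
print(levenberg_marquardt(residuals, torch.tensor([1.0, 1.0])))   # converges near (2, 1)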
LG-BPN: Local and Global Blind-Patch Network for Self-Supervised Real-World Denoising (Wang et al., CVPR 2023)
Abstract Despite the significant results on synthetic noise un- der simplified assumptions, most self-supervised denoising methods fail under real noise due to the strong spatial noise correlation, including the advanced self-supervised blind- spot networks (BSNs). For recent methods targeting real- world denoising, they either suffer from ignoring this spatial correlation, or are limited by the destruction of fine textures for under-considering the correlation. In this paper, we present a novel method called LG-BPN for self-supervised real-world denoising, which takes the spatial correlation statistic into our network design for local detail restora- tion, and also brings the long-range dependencies model- ing ability to previously CNN-based BSN methods. First, based on the correlation statistic, we propose a densely- sampled patch-masked convolution module. By taking more neighbor pixels with low noise correlation into account, we enable a denser local receptive field, preserving more use- ful information for enhanced fine structure recovery. Sec- ond, we propose a dilated Transformer block to allow dis- tant context exploitation in BSN. This global perception addresses the intrinsic deficiency of BSN, whose receptive field is constrained by the blind spot requirement, which can not be fully resolved by the previous CNN-based BSNs. These two designs enable LG-BPN to fully exploit both the detailed structure and the global interaction in a blind manner. Extensive results on real-world datasets demon- strate the superior performance of our method. https: //github.com/Wang-XIaoDingdd/LGBPN
1. Introduction Image denoising is a fundamental research topic for low-level vision [7, 36]. Noise can greatly degrade the quality of the captured images, thus bringing adverse impacts on the subsequent downstream tasks [22, 32]. Recently, with the rapid development of neural networks, learning-based methods have shown significant advances compared with traditional model-based algorithms [5, 8, 10, 11]. Figure 1. Visual comparison of various methods on the SIDD validation [1] dataset. Compared with DnCNN [36], C2N [15] and R2R [26], LG-BPN can be trained in a self-supervised manner without extra data. CVF-SID [25] still contains noise in the output, and AP-BSN [20] suffers from the loss of details. Unfortunately, learning-based methods often rely on massive labeled image pairs for training [2, 34, 35]. This cannot be simply addressed by synthesizing additive white Gaussian noise (AWGN) pairs, since the gap between AWGN and the real noise distribution severely degrades their performance in the real world [2, 12]. To this end, several attempts have been made to collect real-world datasets [1, 4]. Nonetheless, their application is still hindered by the rigorously controlled and labor-intensive collection procedure. For instance, capturing ground truth images requires long exposure or multiple shots, which is unavailable in complex situations, e.g., dynamic scenes with motion. To alleviate the constraint of the large-scale paired dataset, methods without the need for ground truth have attracted increasing attention. The pioneering work Noise2Noise (N2N) [21] uses paired noisy observations for training, which can be applied when clean images are not available. Still, obtaining such noisy pairs of the same scene is less feasible. To make self-supervised methods more practical, researchers seek to learn from one observation instead of pairs. Among these methods, blind-spot networks (BSNs) [3, 17, 19, 30] show significant advances in restoring clean pixels by utilizing neighboring pixels, under a special blind-spot receptive field requirement. Despite their promising results on simple noise such as AWGN, these methods usually work under simplified assumptions, e.g., that the noise is pixel-wise independent. This obviously does not hold for real noise, where the distribution can be extremely complex and present a strong spatial correlation. Accordingly, a few methods have been proposed for self-supervised real noise removal. Recorrupted-to-Recorrupted (R2R) [26] tries to construct noisy-noisy pairs, but it cannot be directly applied without extra information, which is not practical in real situations. CVF-SID [25] disentangles the noise components from noisy images, but it assumes the real noise is spatially invariant and ignores the spatial correlation, which contradicts the real noise distribution. Recently, AP-BSN [20] combines pixel-shuffle downsampling (PD) with the blind spot network (BSN). Though PD can be utilized to meet the noise assumption of BSN, simply combining PD with a CNN-based BSN is sub-optimal for dealing with spatially-correlated real noise.
It causes damage to local details, thus bringing artifacts to the sub- sampled images, e.g., aliasing artifact, especially for large PD stride factors [20, 38]. Also, though more advanced de- signs of BSNs have been proposed [18,19,31], CNN-based BSNs fail to capture long-range interactions due to their convolution operator, which is further bounded by the lim- ited receptive field under the blind spot requirement. In this paper, we present a novel method, called LG- BPN, to address these issues on self-supervised real im- age denoising, including the reliance on extra information, the loss of local structures by noise correlation, and also the lacking of modeling distant pixel interaction. LG-BPN can be directly trained without external information. Fur- thermore, we ease the destruction of fine textures by care- fully considering the spatial correlation in real noise, at the same time injecting long-range interaction by tailoring Transformers to the blind spot network. First, for local in- formation, we introduce a densely-sampled patch-masked convolution (DSPMC) module. Based on the prior statis- tic of real noise spatial correlation, we take more neighbor pixels into account with a denser receptive field, allowing the network to recover more detailed structures. Second, for global information, we introduce a dilated Transformer block (DTB). Under the special blind spot requirement, this greatly enlarges the receptive field compared with previous CNN-based BSNs, permitting more neighbors to be utilized when predicting the central blind spot pixel. These two de- signs enable us to fully exploit local and global informa- tion, respectively. Extensive studies demonstrate that LG- BPN outperforms other state-of-the-art un-/self-supervised methods on real image denoising, as shown in Figure 1. We summarize our contributions as follows: • We present a novel self-supervised method called LG- BPN for real-world image denoising, which can effec- tively encode both the local detailed structure and the capture of global representation.• Based on the analysis of real noise spatial correlation, we propose DSPMC module, which takes advantage of the higher sampling density on the neighbor pixels, en- abling a denser receptive field for improved local tex- ture recovery. • To establish long-distance dependencies in previous CNN-based BSN methods, we introduce DTB, which aggregates global context while complying with the constraint of blind spot receptive field.
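The blind-patch idea can be illustrated with a convolution whose kernel is zeroed over a central window, so that a pixel is reconstructed only from neighbors whose noise is weakly correlated with it. This is a minimal sketch of that masking; the mask shape and the dense sampling pattern of the actual DSPMC module are not reproduced here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedConv2d(nn.Conv2d):
    # Convolution with a zeroed central patch in the kernel (blind-patch receptive field).
    def __init__(self, in_ch, out_ch, kernel_size=7, hole=3):
        super().__init__(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
        mask = torch.ones(1, 1, kernel_size, kernel_size)
        c, h = kernel_size // 2, hole // 2
        mask[:, :, c - h:c + h + 1, c - h:c + h + 1] = 0.0   # blind central window
        self.register_buffer("mask", mask)

    def forward(self, x):
        return F.conv2d(x, self.weight * self.mask, self.bias, padding=self.padding)

noisy = torch.randn(1, 3, 64, 64)
print(MaskedConv2d(3, 16)(noisy).shape)   # torch.Size([1, 16, 64, 64])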
Masked Scene Contrast: A Scalable Framework for Unsupervised 3D Representation Learning (Wu et al., CVPR 2023)
Abstract As a pioneering work, PointContrast conducts unsuper- vised 3D representation learning via leveraging contrastive learning over raw RGB-D frames and proves its effective- ness on various downstream tasks. However, the trend of large-scale unsupervised learning in 3D has yet to emerge due to two stumbling blocks: the inefficiency of match- ing RGB-D frames as contrastive views and the annoying mode collapse phenomenon mentioned in previous works. Turning the two stumbling blocks into empirical stepping stones, we first propose an efficient and effective contrastive learning framework, which generates contrastive views di- rectly on scene-level point clouds by a well-curated data augmentation pipeline and a practical view mixing strat- egy. Second, we introduce reconstructive learning on the contrastive learning framework with an exquisite design of contrastive cross masks, which targets the reconstruction of point color and surfel normal. Our Masked Scene Con- trast (MSC) framework is capable of extracting comprehen- sive 3D representations more efficiently and effectively. It accelerates the pre-training procedure by at least 3 ×and still achieves an uncompromised performance compared with previous work. Besides, MSC also enables large- scale 3D pre-training across multiple datasets, which fur- ther boosts the performance and achieves state-of-the-art fine-tuning results on several downstream tasks, e.g., 75.5% mIoU on ScanNet semantic segmentation validation set.
1. Introduction Unsupervised visual representation learning aims at learning visual representations from vast amounts of unlabeled data. The learned representations have proved to be beneficial for various downstream tasks like segmentation and detection. It has attracted lots of attention and achieved remarkable progress in 2D image understanding, exceeding the upper bound of human supervision [20, 25]. Figure 1. Comparison of unsupervised 3D representation learning. The previous method [56] (top) relies on raw RGB-D frames with restricted views for contrastive learning, resulting in low efficiency and inferior versatility. Our approach (bottom) directly operates on scene-level views with contrastive learning and masked point modeling, leading to high efficiency and superior generality, further enabling large-scale pre-training across multiple datasets. Despite the impressive success of unsupervised visual representation learning in 2D, it is underexplored in 3D. Modern 3D scene understanding algorithms [10, 52] are focused on supervised learning, where models are trained directly from scratch on targeted datasets and tasks. Well-pre-trained visual representations can undoubtedly boost the performance of these algorithms and are currently in urgent demand. The recent work PointContrast [56] conducts a preliminary exploration in 3D unsupervised learning. However, it is limited to raw RGB-D frames with an inefficient learning paradigm, which is neither scalable nor applicable to large-scale unsupervised learning. To address this essential and inevitable challenge, we focus on building a scalable framework for large-scale 3D unsupervised learning. One technical stumbling block towards large-scale pre-training is the inefficient learning strategy introduced by matching RGB-D frames as contrastive views.
To further tackle the mode collapse challenge in unsupervised learning and scale up the optimization iterations, inspired by recent masked au- toencoders [23, 57], we construct a masked point modeling paradigm where both point color reconstruction objective and surfel normal reconstruction objective are proposed to recover the masked color and geometric information of the point cloud respectively. We incorporate the mask point modeling strategy into our contrastive learning framework via an exquisite design of contrastive cross masks, leading towards a scalable unsupervised 3D representation learning framework, namely Masked Scene Contrast (MSC). Our framework is efficient, effective, and scalable. We conduct extensive experimental evaluations to validate its capability. On the popular point cloud dataset ScanNet, our algorithm accelerates the pre-training procedure by more than 3×, and achieves better performance on downstream tasks, when compared to the previous representative Point- Contrast. Besides, our method also enables large-scale 3D pre-training across multiple datasets, leading to state-of- the-art fine-tuning results on several downstream tasks, e.g. 75.5% mIoU on ScanNet semantic segmentation validation set. In conclusion, our work opens up new possibilities for large-scale unsupervised 3D representation learning.
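A minimal version of the scene-level contrastive objective described above looks as follows: two augmented views of the same point cloud are encoded, and matched points across the views are pulled together with an InfoNCE loss. The augmentation list, the view-mixing strategy, and the contrastive cross masks used for masked point modeling are simplified away in this sketch.

import math
import torch
import torch.nn.functional as F

def augment(points):
    # Random rotation about the z-axis plus small jitter, standing in for the
    # full scene-level augmentation pipeline.
    theta = torch.rand(1).item() * 2 * math.pi
    c, s = math.cos(theta), math.sin(theta)
    rot = torch.tensor([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return points @ rot.T + 0.005 * torch.randn_like(points)

def point_info_nce(feat_a, feat_b, temperature=0.07):
    # feat_a, feat_b: (N, C) features of the same N points seen in the two views.
    feat_a, feat_b = F.normalize(feat_a, dim=1), F.normalize(feat_b, dim=1)
    logits = feat_a @ feat_b.T / temperature      # (N, N) similarity matrix
    target = torch.arange(feat_a.shape[0])        # matched points are the positives
    return F.cross_entropy(logits, target)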
DaFKD: Domain-Aware Federated Knowledge Distillation (Wang et al., CVPR 2023)
Abstract Federated Distillation (FD) has recently attracted in- creasing attention for its efficiency in aggregating multi- ple diverse local models trained from statistically hetero- geneous data of distributed clients. Existing FD methods generally treat these models equally by merely computing the average of their output soft predictions for some given input distillation sample, which does not take the diversity across all local models into account, thus leading to de- graded performance of the aggregated model, especially when some local models learn little knowledge about the sample. In this paper, we propose a new perspective that treats the local data in each client as a specific domain and design a novel domain knowledge aware federated distilla- tion method, dubbed DaFKD, that can discern the impor- tance of each model to the distillation sample, and thus is able to optimize the ensemble of soft predictions from di- verse models. Specifically, we employ a domain discrimi- nator for each client, which is trained to identify the corre- lation factor between the sample and the corresponding do- main. Then, to facilitate the training of the domain discrim- inator while saving communication costs, we propose shar- ing its partial parameters with the classification model. Ex- tensive experiments on various datasets and settings show that the proposed method can improve the model accuracy by up to 6.02% compared to state-of-the-art baselines.
1. Introduction Federated learning (FL) has emerged as a prominent distributed machine learning framework to train a global model via collaboration among users without sharing their original datasets [17, 22, 27]. Due to the benefits of preserving privacy and economical communication, FL has been widely adopted in various applications such as medical image processing [8, 19, 32] and recommendation [2, 26]. The classic FL paradigm, FedAvg [22], iteratively optimizes the global model by aggregating the parameters of local models trained from data residing on a number of remote devices or servers. However, these methods usually suffer from serious model performance degradation when the data is not independently and identically distributed (Non-IID) across clients, which is a common issue in FL scenarios. This is mainly because the model parameters on different clients are optimized towards diverse directions [14], leading to an overlarge variance of the aggregated model. To tackle this challenge, federated distillation (FD) [13] proposes to distill the knowledge of multiple local models into the global model by aggregating only the output soft predictions, which has recently attracted increasing attention. For instance, Tao et al. [18] leveraged a public dataset as the distillation data samples to obtain the soft predictions from multiple local models and then updated the global model with the average of these soft predictions. Based on [18], Zhu et al. [40] and Zhang et al. [37] improved the distillation by replacing the public dataset with data generated by a generative model. Although these methods achieve significant improvement over existing parameter-averaging methods, they still do not take the model diversity into account, which may limit the model performance. More specifically, only computing the average of soft predictions will inevitably bring errors when some local models make wrong predictions for the distillation sample. To break the limitations of existing federated distillation methods, we in this paper propose a novel federated distillation method dubbed DaFKD that can discern the importance of each model to the given distillation sample, and thus is able to reduce the impact of wrong soft predictions. More specifically, we consider that the local data in each client constitutes a specific domain and employ a domain discriminator for each client to identify the correlation factor between the sample and the domain. For a given distillation sample, we endow the local model with high importance when its correlation factor is significant, and vice versa. The principle behind this is the fact that a model tends to make the correct prediction when the sample is contained in the domain used for training the model. Furthermore, to facilitate the training of the domain discriminator, we propose sharing its partial parameters with the target classification model. Through extensive experiments over various datasets (MNIST, EMNIST, FASHION MNIST, SVHN) and different settings (various models and data distributions), we show that the proposed method significantly improves the model accuracy as compared to state-of-the-art algorithms.
The contributions of this paper are: • We propose a new domain aware federated distillation method named DaFKD which endows the model with different importance according to the correlation be- tween the distillation sample and the training domain. • To adaptively discern the importance of multiple local models, we propose employing the domain discrimi- nator for each client which identifies the correlation factors. To facilitate the training of the discriminator, we further propose sharing partial parameters between the discriminator and the target classification model. • We establish the theories for the generalization bound of the proposed method. Different from existing meth- ods, we theoretically show that DaFKD efficiently solves the Non-IID problem where the generalization bound of DaFKD does not increases with the growth of data heterogeneity. • We conduct extensive experiments over various datasets and settings. The results demonstrate the ef- fectiveness of the proposed method which improves the model accuracy by up to 6.02% compared to state- of-the-art methods.
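The domain-aware ensemble can be sketched as follows: each client's soft prediction is weighted by its discriminator's correlation factor for the distillation sample, and the global model is distilled towards the weighted ensemble with a KL loss. The weight normalization and the temperature below follow common distillation practice and are not necessarily the paper's exact choices.

import torch
import torch.nn.functional as F

def dafkd_distill_step(x, global_model, client_models, discriminators, T=2.0):
    with torch.no_grad():
        # Correlation factor of each client's domain for this distillation batch.
        w = torch.stack([d(x).squeeze(-1) for d in discriminators], dim=0)      # (K, B)
        w = w / w.sum(dim=0, keepdim=True).clamp_min(1e-8)                      # normalize over clients
        soft = torch.stack([F.softmax(m(x) / T, dim=-1) for m in client_models], dim=0)  # (K, B, C)
        teacher = (w.unsqueeze(-1) * soft).sum(dim=0)                           # weighted ensemble, (B, C)
    student = F.log_softmax(global_model(x) / T, dim=-1)
    return F.kl_div(student, teacher, reduction="batchmean") * T * T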
CASP-Net: Rethinking Video Saliency Prediction From an Audio-Visual Consistency Perceptual Perspective (Xiong et al., CVPR 2023)
Abstract Incorporating the audio stream enables Video Saliency Prediction (VSP) to imitate the selective attention mech- anism of human brain. By focusing on the benefits of joint auditory and visual information, most VSP methods are capable of exploiting semantic correlation between vi- sion and audio modalities but ignoring the negative effects due to the temporal inconsistency of audio-visual intrinsics. Inspired by the biological inconsistency-correction within multi-sensory information, in this study, a consistency- aware audio-visual saliency prediction network (CASP- Net) is proposed, which takes a comprehensive considera- tion of the audio-visual semantic interaction and consistent perception. In addition a two-stream encoder for elegant association between video frames and corresponding sound source, a novel consistency-aware predictive coding is also designed to improve the consistency within audio and vi- sual representations iteratively. To further aggregate the multi-scale audio-visual information, a saliency decoder is introduced for the final saliency map generation. Substan- tial experiments demonstrate that the proposed CASP-Net outperforms the other state-of-the-art methods on six chal- lenging audio-visual eye-tracking datasets. For a demo of our system please see our project webpage.
1. Introduction The task of saliency prediction is to automatically estimate the most prominent area in a scenario by simulating human selective attention. It has been extended to an alternative way to extract the most valuable information from a massive amount of data, which serves wide applications such as robotic camera control [7], video captioning [35], motion tracking [30], image quality evaluation [50] and video compression [51], etc. In recent years, a lot of saliency prediction works have been developed with increasing attention [4, 15, 41–43, 49]. Figure 1. The example figure shows the saliency results of our model compared to STAViS [39] in audio and video temporal sequences. In the last time segment, the audio information that occurs in the event is inconsistent with the visual information. Our method can cope with such a challenge by automatically learning to align the audio-visual features. The results of STAViS, however, show that it is incapable of addressing the problem of audio-visual inconsistency. GT denotes ground truth. According to different data types, these studies can be categorized into Image Saliency Prediction (ISP) and Video Saliency Prediction (VSP). ISP investigates how to combine low-level heuristic characteristics (e.g., colour, texture and luminance) with high-level semantic image attributes to predict prominent areas in the scene [15, 41, 42]. Differently, VSP exploits how to apply the spatio-temporal structure information in videos, and benefits the perception and identification of dynamic scenes [4, 49]. From the view of data modalities, vision and audio present the video content from different sensing channels, which complement each other to enhance perception. Based on multi-modal data, more recent studies have shown that audio information can significantly improve the understanding of video semantics [33, 37, 39]. Min et al. [33] conduct a cross-modal kernel canonical correlation analysis (CCA) by exploring audio-visual correspondence clues, and effectively enhance the video-level saliency prediction accuracy. Tsiami et al. [39] propose a deep model combining spatio-temporal visual and auditory information to address video saliency estimation efficiently. Nevertheless, these works heavily depend on the temporal consistency of visual and audio information, and thus may suffer an unexpected degradation in practical scenarios where such consistency cannot be satisfied, as shown in Figure 1.
A promising solution to this challenge is motivated by the study of neuroscience [18, 36], which explains how our brain minimizes the matching errors within multisen- sory data using both iterative inference and learning, and also inspired the Consistency-aware Audio-visual Saliency Prediction network CASP-Net of this study. By substantially exploring the latent semantic correla- tions of cross-modal signals, in CASP-Net , the potential temporal inconsistency between different modalities can be corrected as well. In addition, a two-stream network is also introduced to elegantly associate video frames with the corresponding sound source, which is able to achieve semantic similarities between audio and visual features by cross-modal interaction. To further reason the coherent vi- sual and audio content in an iterative feedback manner, a consistency-aware predictive coding (CPC) module is de- signed. Subsequently, a saliency decoder (SalDecoder) is proposed to aggregate the multi-scale audio-visual informa- tion from all previous decoder’s blocks and to generate the final saliency map. The main contributions in this work can be summarized as follows: (1) A novel audio-visual saliency prediction model is proposed by comprehensively considering the functionali- ties of audio-visual semantic interaction and consistent per- ception. (2) A consistency-aware predictive coding module is designed to improve the consistency within audio and vi- sual representations iteratively. (3) Solid experiments have been conducted on six audio-visual eye-tracking datasets, which demonstrate a superior performance of the proposed method in comparison to the other state-of-the-art works.
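As a loose illustration of inconsistency correction by iterative inference, the loop below repeatedly nudges the audio and visual features to reduce a cross-modal prediction error. This is a generic predictive-coding-style update with hypothetical predictor networks v2a and a2v, not the update equations of the paper's CPC module.

import torch

def iterative_consistency_refinement(v_feat, a_feat, v2a, a2v, steps=3, lr=0.1):
    # v2a / a2v: small predictor networks mapping one modality's features to the other.
    v = v_feat.clone().requires_grad_(True)
    a = a_feat.clone().requires_grad_(True)
    for _ in range(steps):
        err = ((v2a(v) - a) ** 2).mean() + ((a2v(a) - v) ** 2).mean()  # cross-modal prediction error
        g_v, g_a = torch.autograd.grad(err, (v, a))
        v = (v - lr * g_v).detach().requires_grad_(True)   # nudge features towards
        a = (a - lr * g_a).detach().requires_grad_(True)   # audio-visual consistency
    return v.detach(), a.detach()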
Mask-Free OVIS: Open-Vocabulary Instance Segmentation Without Manual Mask Annotations (VS et al., CVPR 2023)
Abstract Existing instance segmentation models learn task- specific information using manual mask annotations from base (training) categories. These mask annotations re- quire tremendous human effort, limiting the scalability to annotate novel (new) categories. To alleviate this prob- lem, Open-Vocabulary (OV) methods leverage large-scale image-caption pairs and vision-language models to learn novel categories. In summary, an OV method learns task- specific information using strong supervision from base an- notations and novel category information using weak su- pervision from image-captions pairs. This difference be- tween strong and weak supervision leads to overfitting on base categories, resulting in poor generalization towards novel categories. In this work, we overcome this issue by learning both base and novel categories from pseudo- mask annotations generated by the vision-language model in a weakly supervised manner using our proposed Mask- free OVIS pipeline. Our method automatically generates pseudo-mask annotations by leveraging the localization ability of a pre-trained vision-language model for objects present in image-caption pairs. The generated pseudo- mask annotations are then used to supervise an instance segmentation model, freeing the entire pipeline from any labour-expensive instance-level annotations and overfitting. Our extensive experiments show that our method trained with just pseudo-masks significantly improves the mAP scores on the MS-COCO dataset and OpenImages dataset compared to the recent state-of-the-art methods trained with manual masks. Codes and models are provided in https://vibashan.github.io/ovis-web/ .
1. Introduction Instance segmentation is a challenging task as it requires models to detect objects in an image while also precisely segmenting each object at the pixel level. Figure 1. A) Previous Methods: Learn task-specific information (detection/segmentation) in a fully-supervised manner and novel category information with weak supervision. During training, this difference in strong and weak supervision signals leads to overfitting and requires expensive base annotations. B) Our method: Given image-caption pairs, we generate pseudo-annotations for both base and novel categories under weak supervision, solving the problems of labour-expensive annotation and overfitting. Although the rise of deep neural networks has significantly boosted the state-of-the-art instance segmentation performance [7, 15, 42], these methods are still trained for a pre-defined set of object categories and are data-hungry [33]. Particularly, one needs to manually annotate thousands of instance-level masks for each object category, which takes around 78 seconds per instance mask [3]. If we look at this quantitatively on Open Images [21], a large-scale dataset with 2.1M instance-level mask annotations requires around 5 years of human labour. Even after extensive annotation, these training datasets are still limited to a small number of categories, and segmenting objects from a novel category requires further annotation. Therefore, it is difficult to scale up existing methods to segment a large number of categories due to intensive labour. Recently, Open-Vocabulary (OV) methods have gained much attention due to their success in detecting [11, 14, 46, 52] and segmenting [17] novel categories beyond base (training) categories. Figure 2. RPN is supervised using bounding box annotations from COCO base and WSPN is supervised using image-labels from COCO base. A) WSPN produces better quality proposals for novel object categories compared to fully-supervised RPN. B) WSPN consistently produces better recall for all COCO novel categories than RPN. An OV method learns task-specific information (detection/segmentation) from base categories that have manual instance-level bounding boxes or masks and learns novel category information from a pre-trained Vision-Language Model (VLM) [19, 34] (see Fig. 1). All these methods produce promising results on novel categories by leveraging different forms of weak supervision such as caption pretraining [46, 50], knowledge distillation [14, 52] and pseudo-labelling [17, 28].
However, all above- mentioned OV methods still rely on the manually-annotated base categories to improve their performances on novel cat- egories. Without fine-tuning on base categories, existing OV methods lack task/domain specific knowledge and the performances on novel categories will be affected [ 14,46]. Although manual instance-level annotations of base cat- egories are critical to open-vocabulary segmentation meth- ods, we find empirically that such fully-supervised informa- tion causes OV methods to overfit to base categories, lead- ing to a higher failure rate when evaluated on novel cate- gories. Specifically, OV methods utilize a region proposal network (RPN) [ 37] supervised with bounding box annota- tions obtained from the base categories to generate a set of bounding box proposals for all objects categories in a given image [ 46]. The feature representation from these object proposals is later matched with text embedding to learn the visual-semantic space for base and novel categories [ 46]. Therefore, the quality of proposals generated for novel ob- ject categories plays a key role in determining the perfor- mance in later stages. However, from our experiments, we find that many objects of novel categories wouldn’t be in- cluded in such proposals due to the RPN’s overfitting to base categories. Fig. 2(A) - Top gives some examples where the RPN trained with COCO base categories fails to generate high-quality region proposals for novel categories such as elephant, cat and skateboard. Therefore, a fully-supervised proposal network is a bottleneck in OV pipeline due to its poor generalization towards novel categories. Given the aforementioned observations of poor general- ization, we raise the question of whether we can improve the generalization by using weak supervision instead of relying on strong supervision from manual base annotations. If so, we can reduce overfitting to the base categories and the re- quirement for costly instance-level human annotations can be entirely removed from the pipeline. Our preliminary ex- periments give us some hope. Our experiments show that if we train a weakly-supervised proposal network (WSPN) with image-level annotations instead of box-level annota- tions, the region proposals it generates can better generalize to novel objects. As shown in Fig. 2A), the novel objects that the RPN proposals miss are covered by WSPN. Fig. 2 B) shows WSPN proposals have consistently better average recall than RPN for all COCO novel categories, indicating that WSPN proposals are more likely to cover the ground- truth bounding boxes of novel objects. Inspired by these observations, we propose open- vocabulary segmentation without manual mask annotations. We do not use any human-provided box-level or pixel-level annotations during the training of our method. We first train a simple WSPN model with image-level annotations on base categories as a proposal generator to generate pro- posals for all objects given an image. Then, we adopt pre-trained vision-language models to select proposals as pseudo bounding boxes for novel objects. Given a novel object’s text name, we can utilize the name as a text prompt to localize this object in an image with a pre-trained vision- language model. To obtain a more accurate pseudo-mask that covers the entire object, we conduct iterative mask- ing with GradCAM [ 39] given the vision-language model. 
Finally, we train a weakly-supervised segmentation (WSS) [20] network with the previously generated bounding boxes and GradCAM activation maps to obtain pixel-level annotations. Our contributions are summarized as follows: (1) We propose a Mask-free OVIS pipeline where we produce manual-effort-free pseudo-mask annotations for base and novel instance segmentation using open-vocabulary and weakly supervised techniques. (2) We propose a novel pseudo-mask generation pipeline leveraging a pre-trained vision-language model to generate instance-level annotations. (3) Benefiting from pseudo-labels, our method sets new state-of-the-art results for both detection and instance segmentation tasks compared to recent methods trained with manual masks on the MS-COCO and OpenImages datasets.
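A simplified version of the pseudo-box selection step is sketched below: class-agnostic proposals from a weakly-supervised proposal network are cropped, embedded with CLIP-style image and text encoders, and the proposal that best matches the category prompt is kept. The iterative GradCAM masking used to refine the localization into a pseudo-mask is omitted, and crop, image_encoder, and text_encoder are assumed helpers rather than a specific library API.

import torch
import torch.nn.functional as F

def select_pseudo_box(image, proposals, category, image_encoder, text_encoder, crop):
    # proposals: list of (x1, y1, x2, y2) boxes from a weakly-supervised proposal network
    text = F.normalize(text_encoder([f"a photo of a {category}"]), dim=-1)   # (1, D)
    crops = torch.stack([crop(image, box) for box in proposals])             # (P, 3, H, W)
    feats = F.normalize(image_encoder(crops), dim=-1)                        # (P, D)
    scores = (feats @ text.T).squeeze(-1)                                    # (P,) image-text similarity
    best = int(scores.argmax())
    return proposals[best], scores[best]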
Teaching Matters: Investigating the Role of Supervision in Vision Transformers (Walmer et al., CVPR 2023)
Abstract Vision Transformers (ViTs) have gained significant pop- ularity in recent years and have proliferated into many applications. However, their behavior under different learning paradigms is not well explored. We compare ViTs trained through different methods of supervision, and show that they learn a diverse range of behaviors in terms of their attention, representations, and downstream performance. We also discover ViT behaviors that are consistent across supervision, including the emergence of Offset Local Attention Heads. These are self-attention heads that attend to a token adjacent to the current token with a fixed directional offset, a phenomenon that to the best of our knowledge has not been highlighted in any prior work. Our analysis shows that ViTs are highly flexible and learn to process local and global information in different orders depending on their training method. We find that contrastive self-supervised methods learn features that are competitive with explicitly supervised features, and they can even be superior for part-level tasks. We also find that the representations of reconstruction-based models show non-trivial similarity to contrastive self-supervised models.
1. Introduction The field of Computer Vision has advanced massively in the past decade, largely built on the backbone of Convolutional Neural Networks (CNNs). More recently, Vision Transformers (ViTs) [18] have shown the potential to overtake CNNs as the go-to visual processing model. Prior works have asked the question do ViTs see like CNNs do? [52], but in this work, we ask: how do ViTs learn under different supervision? Past examinations of ViTs have largely focused on models trained through full supervision. Instead, we aim to characterize the differences and similarities of ViTs trained through varying training methods, including self-supervised methods. Unlike CNNs, the ViT architecture imposes few structural biases to guide the learning of representations. This gives them the flexibility to learn diverse information processing strategies, and through our analyses, we uncover a wide array of ViT behaviors. (Project page: www.cs.umd.edu/~sakshams/vit_analysis; code: www.github.com/mwalmer-umd/vit_analysis.) Figure 1. ViTs exhibit highly varied behaviors depending on their method of training. In this work, we compare ViTs through three domains of analysis representing the How, What, and Why of ViTs. How do ViTs process information through attention? (Top) Attention maps averaged over 5000 images show clear differences in the mid-to-late layers. What do ViTs learn to represent? (Left) Contrastive self-supervised ViTs have a greater feature similarity to explicitly supervised ViTs, but also have some similarity with ViTs trained through masked reconstruction. Why do we care about using ViTs? (Right) We evaluate ViTs on a variety of global and local tasks and show that the best model and layer vary greatly. There are countless ways to analyze ViTs, so to guide this analysis we choose three major domains which correspond to the How, What, and Why of ViTs. For the How, we focus on how ViTs process information through Attention. Multi-Headed Attention (MHA) layers are arguably the key element of ViTs, and they most distinguish them from CNNs. For the What, we examine the Features of ViTs, as these are typically what practitioners take away from them. Finally, for the Why, we focus on Downstream Tasks, which are why we care about using ViTs. Our work unveils that a powerful aspect of the ViT architecture is its local-global dual nature, which plays a role in all three aspects of our analyses. While standard CNNs are restricted to building representations hierarchically from local to global, in a ViT each token can attend to information from any other image region at any time. And unlike popular CNN modifications like Spatial Pyramids [20, 28, 33, 35] and top-down strategies [6, 47, 56], ViTs have the freedom to decide when and where global information should be integrated. In this study, we show that the order and the relative ratio of local and global attention in ViTs varies dramatically based on the method of supervision.
We also find clearly different trends in the allocation of attention in the mid-to-late layers of these networks, as highlighted in Figure 2. This local-global dual nature is also embedded into the structure and features of the ViT, which encodes both local spatial tokens and a non-local classifier (CLS) token throughout its entire depth. We analyze the features of ViTs for both the CLS and spatial tokens, and assess how they align with semantics at the image, object, part, and pixel level. We perform this analysis at every layer of the ViT to show the emergence of different levels of semantic information. Finally, we assess ViTs on a number of local and global downstream tasks.
Overall, our contributions are: [1] A detailed comparison of ViTs trained with six different methods, including both fully supervised and self-supervised training. [2] A cross-cutting analysis spanning three major domains: Attention, Features, and Downstream Tasks. [3] Multiple insights into the inner workings of ViTs to guide future development of ViT variants, training strategies, and applications.
In addition, we summarize some of our key observations about ViT behavior: [1] The attention maps of explicitly supervised ViTs devolve into Sparse Repeating Patterns in the mid-to-late layers, but the quality of features continues to improve in these layers (Section 4.1). [2] All ViTs studied learn to use Offset Local Attention Heads, suggesting they are fundamentally necessary in ViTs (Section 4.2). To the best of our knowledge, no prior work has brought attention to this phenomenon. [3] ViTs learn to process local and global information in different orders depending on their method of supervision (Section 4.3). [4] All ViTs studied differentiate salient foreground objects by the early-to-mid layers (Section 4.4). [5] Reconstruction-based self-supervised methods can learn semantically meaningful CLS representations, even when the CLS token is only a placeholder (Sections 5.1, 5.2). [6] Supervised methods’ features are the most semantically rich, but contrastive self-supervised methods are comparable or even superior in some cases (Sections 5.2, 5.3). [7] For localized tasks, the best performance often comes from a mid-to-late layer (Section 6.2). [8] There is no single “best” training method or layer for all downstream tasks (Section 6.3).
Figure 2. Clear differences in attention emerge in the mid-to-late layers under different supervision methods. These plots show the attention maps of CLS tokens averaged over 5000 ImageNet images. Rows indicate layers and columns indicate heads. For brevity, we show only three heads per layer. The bracketed numbers in the lower half denote the layer and head. (Panel annotations: FS & CLIP, layers 7-12: Sparse Repeating Patterns; DINO & MoCo, layers 7-12: Object-Centered Blobs; MAE & BEiT, layers 7-12: Diverse Attention Maps.)
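As a rough illustration of the attention analysis described above, the sketch below averages CLS-token attention maps over a batch of images (as in Figure 2) and heuristically flags Offset Local Attention Heads. The detection criterion (peak-offset voting with a 50% threshold) is our own illustrative assumption rather than the paper's definition, and the code assumes a plain ViT whose tokens are one CLS token followed by a square spatial grid.

```python
import torch

def average_cls_attention(attn_maps):
    """Average the CLS-token attention over a batch of images.

    attn_maps: (batch, heads, tokens, tokens) attention from one layer,
               where token 0 is CLS and the rest form a g x g spatial grid.
    Returns: (heads, g, g) average attention of CLS over the spatial tokens.
    """
    b, h, t, _ = attn_maps.shape
    g = int((t - 1) ** 0.5)
    cls_to_spatial = attn_maps[:, :, 0, 1:]            # (b, h, g*g)
    return cls_to_spatial.mean(dim=0).reshape(h, g, g)

def is_offset_local_head(attn_maps, min_fraction=0.5):
    """Heuristically flag each head as an Offset Local Attention Head.

    For every spatial query token, find the spatial key it attends to most,
    expressed as a (row, col) offset from the query. A head is flagged if a
    single fixed non-zero offset accounts for most queries.
    """
    b, h, t, _ = attn_maps.shape
    g = int((t - 1) ** 0.5)
    spatial = attn_maps[:, :, 1:, 1:]                   # (b, h, g*g, g*g)
    peak = spatial.argmax(dim=-1)                       # (b, h, g*g)
    q_idx = torch.arange(g * g)
    q_rc = torch.stack((q_idx // g, q_idx % g), dim=-1)       # (g*g, 2)
    k_rc = torch.stack((peak // g, peak % g), dim=-1)         # (b, h, g*g, 2)
    offsets = k_rc - q_rc                                     # broadcast over (b, h)
    flags = []
    for head in range(h):
        off = offsets[:, head].reshape(-1, 2)
        uniq, counts = torch.unique(off, dim=0, return_counts=True)
        top = counts.argmax()
        frac = counts[top].float() / off.shape[0]
        # Flag only if the dominant offset is non-zero (i.e., not plain local).
        flags.append(bool(frac >= min_fraction and uniq[top].abs().sum() > 0))
    return flags
```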
Wang_HypLiLoc_Towards_Effective_LiDAR_Pose_Regression_With_Hyperbolic_Fusion_CVPR_2023
Abstract
LiDAR relocalization plays a crucial role in many fields, including robotics, autonomous driving, and computer vision. LiDAR-based retrieval from a database typically incurs high computation and storage costs and can lead to globally inaccurate pose estimations if the database is too sparse. On the other hand, pose regression methods take images or point clouds as inputs and directly regress global poses in an end-to-end manner. They do not perform database matching and are more computationally efficient than retrieval techniques. We propose HypLiLoc, a new model for LiDAR pose regression. We use two branched backbones to extract 3D features and 2D projection features, respectively. We consider multi-modal feature fusion in both Euclidean and hyperbolic spaces to obtain more effective feature representations. Experimental results indicate that HypLiLoc achieves state-of-the-art performance on both outdoor and indoor datasets. We also conduct extensive ablation studies on the framework design, which demonstrate the effectiveness of multi-modal feature extraction and multi-space embedding. Our code is released at: https://github.com/sijieaaa/HypLiLoc
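To make the pose-regression setting concrete, the sketch below shows a generic 6-DoF regression objective with learnable translation/rotation weighting in the style of Kendall et al. This is a common baseline formulation and is not claimed to be the exact loss used by HypLiLoc; the initial weight values are arbitrary placeholders.

```python
import torch
import torch.nn as nn

class PoseRegressionLoss(nn.Module):
    """Generic 6-DoF pose regression loss with learnable task weights
    (homoscedastic-uncertainty style); illustrative only."""

    def __init__(self, init_st=0.0, init_sq=-3.0):
        super().__init__()
        # Learnable log-variance-like weights for translation and rotation.
        self.st = nn.Parameter(torch.tensor(init_st))
        self.sq = nn.Parameter(torch.tensor(init_sq))

    def forward(self, pred_t, pred_q, gt_t, gt_q):
        # Translation error (L1) and rotation error on unit quaternions (L1).
        loss_t = torch.norm(pred_t - gt_t, p=1, dim=-1).mean()
        pred_q = pred_q / pred_q.norm(dim=-1, keepdim=True)
        loss_q = torch.norm(pred_q - gt_q, p=1, dim=-1).mean()
        return (loss_t * torch.exp(-self.st) + self.st
                + loss_q * torch.exp(-self.sq) + self.sq)
```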
1. Introduction
Visual relocalization aims at estimating the 6-degree-of-freedom (DoF) pose of an agent using perception sensors, such as LiDARs and cameras. It plays a crucial role in many fields that include robot navigation [12], autonomous driving [23], and scene recognition [22]. Image-based relocalization methods have achieved good performance in various applications [15, 33, 36]. However, images taken from cameras can only capture RGB color information and are easily influenced by environmental conditions, including low illumination and light reflections. By contrast, LiDARs, which cast active beams to estimate the depth of surrounding objects, are more robust against those changes.
*These authors contribute equally.
In recent years, the LiDAR has become an important sensor in smart robots, autonomous vehicles, and mobile devices. LiDAR-based relocalization, which is a basic and important module impacting other perception tasks, has attracted more attention [8, 19, 20, 27, 37, 46]. One of the classical approaches, LiDAR odometry, estimates the relative poses among successive LiDAR frames to obtain locally accurate pose estimation. However, errors accumulate over the trajectory, resulting in unsatisfactory global pose estimation. To compensate for the error, LiDAR odometry is usually treated as a component in a complete simultaneous localization and mapping (SLAM) system, where the global pose estimated by a global positioning method or detected loop closure is used to correct the accumulated error in the LiDAR odometry [32, 43].
LiDAR-based retrieval is also used for relocalization [34]. It first constructs a database of LiDAR features learned from all candidate LiDAR frames. During inference, given a query LiDAR scan, the similarities between the query feature and all features stored in the database are computed so that the top-matched poses can be obtained. Although this approach provides accurate global pose estimation, it inherently suffers from high computation cost and storage burden [38]. Therefore, it is more appropriate for offline scenarios than for real-time mobile applications.
Pose regression is favored as a relocalization method due to its lower computation and storage cost during inference. The pose regression network is still trained on a database containing LiDAR frames in an end-to-end manner to obtain a regression model. During inference, taking the LiDAR scan as input, the pose regression network directly regresses the global pose without any pre-constructed candidate database or map. It can mitigate the high computation and storage burden that occurs in the LiDAR-based retrieval methods. As a result, pose regression can be operated in real time to satisfy various relocalization requirements in robotics, unmanned aerial vehicles (UAVs), mobile relocalization apps, autonomous vehicles, and SLAM systems.
In this paper, we propose a relocalization method called HypLiLoc, which is a pose regression network with LiDAR data as input. HypLiLoc uses a parallel feature extraction design, in which 3D features and 2D spherical projection features are obtained in two backbone branches simultaneously.
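The 2D projection branch relies on mapping each LiDAR point onto a spherical (range) image. Below is a minimal sketch of such a projection; the image resolution and the vertical field-of-view values are illustrative placeholders and are not taken from HypLiLoc.

```python
import numpy as np

def spherical_projection(points, h=32, w=512, fov_up_deg=15.0, fov_down_deg=-25.0):
    """Project a LiDAR point cloud (N, 3) onto a 2D spherical (range) image.

    Returns an (h, w) range image, with -1 marking empty pixels.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1) + 1e-8

    yaw = np.arctan2(y, x)        # azimuth in [-pi, pi]
    pitch = np.arcsin(z / r)      # elevation

    fov_up = np.deg2rad(fov_up_deg)
    fov_down = np.deg2rad(fov_down_deg)
    fov = fov_up - fov_down

    # Normalize angles to pixel coordinates.
    u = 0.5 * (1.0 - yaw / np.pi) * w             # column
    v = (1.0 - (pitch - fov_down) / fov) * h      # row
    u = np.clip(np.floor(u), 0, w - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, h - 1).astype(np.int32)

    image = np.full((h, w), -1.0, dtype=np.float32)
    # Fill farthest points first so closer points overwrite them.
    order = np.argsort(-r)
    image[v[order], u[order]] = r[order]
    return image
```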
The paper [24] leverages hyperbolic embeddings for 3D point clouds, which can be viewed as hierarchical compositions of small parts. We thus follow this motivation to design our pipeline with hyperbolic learning. Specifically, we conduct feature fusion in both Euclidean and hyperbolic spaces to enhance the information representation and to achieve more effective multi-modal feature interaction. We test HypLiLoc on both outdoor and indoor datasets. Experiments indicate that HypLiLoc surpasses current approaches and achieves state-of-the-art (SOTA) performance. Our main contributions are summarized as follows:
1. We propose a novel LiDAR-based pose regression network, HypLiLoc. It has one backbone that learns 3D features directly from the 3D point cloud and another backbone that learns features from a 2D projection of the point cloud onto a spherical surface. To achieve effective multi-modal feature interaction, the features are embedded in both Euclidean and hyperbolic spaces using multi-space learning. An attention mechanism is then used to fuse the features from different spaces together.
2. We test our network on both outdoor and indoor datasets, where it outperforms current LiDAR pose regression counterparts and achieves SOTA performance. We also conduct extensive ablation studies on the effectiveness of each design component.
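The sketch below illustrates the basic hyperbolic-embedding ingredient: mapping Euclidean features onto the Poincaré ball with the exponential map at the origin (and back with the logarithmic map). The curvature value and the toy two-branch usage are illustrative assumptions; HypLiLoc's actual attention-based multi-space fusion is described in the paper itself.

```python
import torch

def expmap0(v, c=1.0, eps=1e-6):
    """Exponential map at the origin of the Poincare ball (curvature -c):
    expmap0(v) = tanh(sqrt(c) * ||v||) * v / (sqrt(c) * ||v||)."""
    sqrt_c = c ** 0.5
    norm = v.norm(dim=-1, keepdim=True).clamp_min(eps)
    return torch.tanh(sqrt_c * norm) * v / (sqrt_c * norm)

def logmap0(x, c=1.0, eps=1e-6):
    """Logarithmic map at the origin: maps ball points back to Euclidean space."""
    sqrt_c = c ** 0.5
    norm = x.norm(dim=-1, keepdim=True).clamp_min(eps)
    scaled = (sqrt_c * norm).clamp(max=1 - 1e-5)
    return torch.atanh(scaled) * x / (sqrt_c * norm)

# Toy usage with hypothetical branch features: embed both the 3D-branch and
# the projection-branch features in hyperbolic space before fusing them.
feat_3d = torch.randn(4, 256)      # features from the 3D point branch
feat_proj = torch.randn(4, 256)    # features from the 2D projection branch
hyp_3d = expmap0(feat_3d)
hyp_proj = expmap0(feat_proj)
print(hyp_3d.norm(dim=-1).max())   # all embedded points lie inside the unit ball
```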